Data Engineer
Data engineer job in Fairfield, CT
Data Engineer - Vice President
Greenwich, CT
About the Firm
We are a global investment firm focused on applying financial theory to practical investment decisions. Our goal is to deliver long-term results by analyzing market data and identifying what truly matters. Technology is central to our approach, enabling insights across both traditional and alternative strategies.
The Team
A new Data Engineering team is being established to work with large-scale datasets across the organization. This team partners directly with researchers and business teams to build and maintain infrastructure for ingesting, validating, and provisioning large volumes of structured and unstructured data.
Your Role
As a Data Engineer, you will help design and build an enterprise data platform used by research teams to manage and analyze large datasets. You will also create tools to validate data, support back-testing, and extract actionable insights. You will work closely with researchers, portfolio managers, and other stakeholders to implement business requirements for new and ongoing projects. The role involves working with big data technologies and cloud platforms to create scalable, extensible solutions for data-intensive applications.
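To give a flavor of the validation tooling this role describes, here is a minimal sketch assuming a pandas workflow; the column names, rules, and thresholds are hypothetical illustrations, not the firm's actual platform.

```python
import pandas as pd

def validate_prices(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows failing basic sanity checks on a (hypothetical) price table."""
    checks = {
        "missing_close": df["close"].isna(),
        "non_positive_close": df["close"] <= 0,
        "duplicate_key": df.duplicated(subset=["symbol", "date"], keep=False),
    }
    flags = pd.DataFrame(checks)
    bad = df[flags.any(axis=1)].copy()
    # Record which checks each bad row failed, for triage.
    bad["failed_checks"] = flags[flags.any(axis=1)].apply(
        lambda row: ",".join(row.index[row]), axis=1
    )
    return bad

if __name__ == "__main__":
    sample = pd.DataFrame({
        "symbol": ["AAA", "AAA", "BBB", "BBB"],
        "date": ["2024-01-02", "2024-01-02", "2024-01-02", "2024-01-03"],
        "close": [101.5, 101.5, -3.0, None],
    })
    print(validate_prices(sample))
```

In practice such checks would run inside the ingestion pipeline and feed a data-quality report rather than print to the console.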
What You'll Bring
6+ years of relevant experience in data engineering or software development
Bachelor's, Master's, or PhD in Computer Science, Engineering, or related field
Strong coding, debugging, and analytical skills
Experience working directly with business stakeholders to design and implement solutions
Knowledge of distributed data systems and large-scale datasets
Familiarity with big data frameworks such as Spark or Hadoop
Interest in quantitative research (no prior finance or trading experience required)
Exposure to cloud platforms is a plus
Experience with Python, NumPy, pandas, or similar data analysis tools is a plus
Familiarity with AI/ML frameworks is a plus
Who You Are
Thoughtful, collaborative, and comfortable in a fast-paced environment
Hard-working, intellectually curious, and eager to learn
Committed to transparency, integrity, and innovation
Motivated by leveraging technology to solve complex problems and create impact
Compensation & Benefits
Salary range: $190,000 - $260,000 (subject to experience, skills, and location)
Eligible for annual discretionary bonus
Comprehensive benefits including paid time off, medical/dental/vision insurance, 401(k), and other applicable benefits
We are an Equal Opportunity Employer. EEO/VET/DISABILITY
The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
Data Engineer
Data engineer job in Fort Lee, NJ
The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.
Responsibilities
Analyze, structure, and interpret raw data.
Build and maintain datasets for business use.
Design and optimize database tables, schemas, and data structures.
Enhance data accuracy, consistency, and overall efficiency.
Develop views, functions, and stored procedures.
Write efficient SQL queries to support application integration.
Create database triggers to support automation processes (a sketch follows this list).
Oversee data quality, integrity, and database security.
Translate complex data into clear, actionable insights.
Collaborate with cross-functional teams on multiple projects.
Present data through graphs, infographics, dashboards, and other visualization methods.
Define and track KPIs to measure the impact of business decisions.
Prepare reports and presentations for management based on analytical findings.
Conduct daily system maintenance and troubleshoot issues across all platforms.
Perform additional ad hoc analysis and tasks as needed.
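To make the view and trigger work above concrete, here is a minimal sketch using Python's built-in sqlite3 module. The role itself targets MS SQL Server; SQLite is used only so the example is self-contained, and the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
CREATE TABLE order_audit (order_id INTEGER, logged_at TEXT);

-- A view doing the join/aggregation style work (SUM, COUNT) per customer.
CREATE VIEW customer_totals AS
SELECT customer, COUNT(*) AS n_orders, SUM(amount) AS total
FROM orders GROUP BY customer;

-- A trigger that logs every insert: a simple automation process.
CREATE TRIGGER log_order AFTER INSERT ON orders
BEGIN
    INSERT INTO order_audit VALUES (NEW.id, datetime('now'));
END;
""")

cur.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)])
conn.commit()

print(cur.execute("SELECT * FROM customer_totals").fetchall())
print(cur.execute("SELECT COUNT(*) FROM order_audit").fetchone())
```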
Qualifications
Bachelor's degree in Information Technology or a relevant field.
4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
Excellent written, verbal, and interpersonal communication skills.
Ability to manage multiple tasks in a fast-paced and evolving environment.
Strong work ethic, professionalism, and integrity.
Advanced proficiency in Microsoft Office applications.
C++ Market Data Engineer
Data engineer job in Stamford, CT
We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.
What You'll Do:
Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
Ensure reliable data delivery with failover, gap recovery, and replay mechanisms (sequence-gap logic sketched below)
Collaborate with researchers and engineers to align data formats for trading and simulation
Instrument and test systems for continuous performance improvements
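Production feed handlers in this role are C++, but the gap-recovery bullet above is easiest to see in code; here is the core sequence-tracking logic sketched in Python, with the message fields and replay callback invented for illustration.

```python
from typing import Callable, Iterable

def process_feed(messages: Iterable[dict],
                 on_gap: Callable[[int, int], None]) -> int:
    """Track sequence numbers, requesting replay when a gap appears.

    Each message is assumed to carry a 'seq' field; real feeds (ITCH,
    CME MDP, OPRA) define their own sequencing and recovery channels.
    """
    expected = None
    processed = 0
    for msg in messages:
        seq = msg["seq"]
        if expected is not None and seq > expected:
            on_gap(expected, seq - 1)   # ask the replay channel for the range
        if expected is None or seq >= expected:
            processed += 1              # handle the message
            expected = seq + 1
        # seq < expected: duplicate from line arbitration, drop silently
    return processed

if __name__ == "__main__":
    feed = [{"seq": s} for s in (1, 2, 5, 6, 6, 7)]
    process_feed(feed, lambda lo, hi: print(f"gap detected: replay {lo}-{hi}"))
```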
What We're Looking For:
3+ years of C++ development experience (low-latency, high-throughput systems)
Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
Strong knowledge of concurrency, memory models, and compiler optimizations
Python scripting skills for testing and automation
Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
Senior Azure Data Engineer
Data engineer job in Stamford, CT
Great opportunity with a private equity firm located in Stamford, CT.
The Azure Data Engineer in this role will partner closely with investment and operations teams to build scalable data pipelines and modern analytics solutions across the firm and its portfolio.
The Senior Azure Data Engineer's main responsibilities will be:
Designing and implementing machine learning solutions as part of high-volume data ingestion and transformation pipelines
Designing solutions for large data warehouses and databases (Azure, Databricks, and/or Snowflake)
Gathering requirements from business stakeholders
Applying experience in data architecture, data governance, data modeling, data transformation (from converting data, to data cleansing, to building data structures), data lineage, data integration, and master data management
Technical Skills
Architecting and delivering solutions using the Azure Data Analytics platform including Azure Databricks/Azure SQL Data Warehouse
Utilizing Databricks (for processing and transforming massive quantities of data and exploring the data through machine learning models)
Design and build solutions powered by DBT models and integrate with Databricks.
Utilize Snowflake for data application development, and secure sharing and consumption of real-time and/or shared data.
Expertise in data manipulation and analysis using Python (a brief sketch follows this list).
SQL for data migration and analysis.
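As a small illustration of the Databricks-style processing and Python data manipulation mentioned above, here is a minimal PySpark sketch; the dataset, column names, and aggregation are hypothetical, and it assumes pyspark is installed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("exposure_rollup").getOrCreate()

# Hypothetical raw holdings feed; on Databricks this would normally be
# read from cloud storage or a Delta table rather than built inline.
raw = spark.createDataFrame(
    [("FUND_A", "AAPL", 100, 189.5), ("FUND_A", "MSFT", 50, 410.2),
     ("FUND_B", "AAPL", -20, 189.5)],
    ["fund", "ticker", "quantity", "price"],
)

# Cleanse and aggregate: typical transformation-pipeline steps.
exposures = (
    raw.withColumn("market_value", F.col("quantity") * F.col("price"))
       .groupBy("fund")
       .agg(F.sum("market_value").alias("gross_exposure"),
            F.count("*").alias("n_positions"))
)
exposures.show()
```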
Pluses:
Past work experience in financial markets is a plus (Asset Management, Multi-strategy, Private Equity, Structured Products, Fixed Income, Trading, Portfolio Management, etc.).
E-Mail: DIANA@oakridgestaffing.com
Please feel free to connect with me on LinkedIn:
www.linkedin.com/in/dianagjuraj
SAP Data Migration Developer
Data engineer job in Englewood, NJ
SAP S4 Data Migration Developer
Duration: 6 Months
Rate: Competitive Market Rate
This key role is responsible for development and configuration of the SAP Data Services Platform within the Client's corporate technology organization to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone.
KEY RESPONSIBILITIES -
Responsible for SAP Data Services development, design, job creation, and execution, including efficient design, performance tuning, and ensuring timely data processing, validation, and verification.
Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects), performing data conversion using staging tables, and working with SAP teams on data loads into SAP S4 and MDG environments.
Responsible for building validation rules, scorecards, and data for consumption in Information Steward pursuant to conversion rules in the Functional Specifications; for adhering to project timelines and deliverables and accounting for object delivery across the teams involved; and for taking part in meetings, executing plans, and designing and developing custom solutions within the Client's O&T Engineering scope.
Work in all facets of SAP Data Migration projects with focus on SAP S4 Data Migration using SAP Data Services Platform
Hands-on development experience with ETL from legacy SAP ECC environment, conversions and jobs.
Demonstrate capabilities with performance tuning, handling large data sets.
Understand SAP tables, fields & load processes into SAP S4, MDG systems
Build validation rules, customize, and deploy Information Steward scorecards, data reconciliation and validation
Be a problem solver, building robust conversion and validation processes per requirements (a simplified reconciliation sketch follows).
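Validation and reconciliation of this kind is configured inside SAP Data Services and Information Steward rather than hand-coded, but the underlying idea can be sketched in a few lines of Python; the field names below are hypothetical.

```python
def reconcile(source_rows, target_rows, key="material_id"):
    """Compare a legacy (ECC) extract with staged (S4) rows.

    Returns keys missing from the target and keys whose payloads differ,
    which are the basic checks behind a reconciliation scorecard.
    """
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    missing = sorted(set(src) - set(tgt))
    mismatched = sorted(k for k in set(src) & set(tgt) if src[k] != tgt[k])
    return missing, mismatched

if __name__ == "__main__":
    legacy = [{"material_id": 1, "plant": "DE01"},
              {"material_id": 2, "plant": "US10"}]
    staged = [{"material_id": 1, "plant": "DE02"}]
    print(reconcile(legacy, staged))  # ([2], [1])
```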
SKILLS AND EXPERIENCE
6-8 years of development experience with the SAP Data Services application
At least 2 SAP S4 conversion projects involving DMC, staging tables, and updating SAP Master Data Governance
Good communication skills and the ability to deliver key objects on time and support testing and mock cycles
4-5 years of development experience in SAP Data Services 4.3 Designer and Information Steward
Taking ownership and ensuring high quality results
Active in seeking feedback and making necessary changes
Specific previous experience -
Proven experience in implementing SAP Data Services in a multinational environment.
Experience in design of data loads of large volumes to SAP S4 from SAP ECC
Must have used HANA Staging tables
Experience in developing Information Steward for Data Reconciliation & Validation (not profiling)
REQUIREMENTS
Adhere to the work availability schedule as noted above and be on time for meetings
Written and verbal communication in English
Principal Software Engineer (Embedded Systems)
Data engineer job in Norwalk, CT
Position Type: Full-Time / Direct Hire (W2)
Salary: $200K+ base + 13% bonus
Experience Required: 10-20 years
Domain: Industrial Automation & Robotics
Work Authorization: US Citizen or Green Card
Interview Process: 2× Teams Interviews → Onsite Interview (expenses paid)
“How Many Years With” (Candidate Screening Section)
C:
C++:
RTOS:
Embedded Software Development:
Device Driver Software Development:
Job Description
We are seeking a Principal Software Engineer - Embedded Systems to join a high-performance engineering team building next-generation industrial automation and robotics platforms. This role blends hardware, firmware, real-time systems, machine learning components, and high-performance automation into one of the most technically challenging environments.
The ideal candidate is passionate about writing software that interacts directly with real machines, drives motion control, solves physical-world problems, and contributes to global-scale automation systems.
This role is hands-on, impact-driven, and perfect for someone who wants to see their code operating in motion - not just in a console.
Key Responsibilities
Design, implement, and optimize embedded software in C/C++ for real-time control systems.
Develop and maintain real-time operating system (RTOS)-based applications.
Implement low-latency firmware, control loops, and motion-control algorithms (see the control-loop sketch after this list).
Work with hardware teams to integrate sensors, actuators, and automation components.
Architect scalable, high-performance embedded platforms for industrial robotics.
Develop device drivers, board support packages (BSPs), and hardware abstraction layers.
Own full lifecycle development: requirements → design → implementation → testing → deployment.
Develop machine-learning-based modules for system categorization and algorithm organization (experience helpful, not required).
Build real-time monitoring tools, diagnostics interfaces, and system health analytics.
Troubleshoot complex hardware/software interactions in a real-time environment.
Work closely with electrical, mechanical, and controls engineers.
Participate in code reviews, architectural discussions, and continuous improvement.
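Firmware like the control-loop work above is written in C/C++ on the target; purely to illustrate the structure of such a loop, here is a discrete PID step sketched in Python, with the gains and toy plant model invented for the example.

```python
class PID:
    """Discrete PID controller: one update per control period dt."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    # Drive a toy integrator plant toward a position setpoint of 1.0.
    pid, position = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01), 0.0
    for _ in range(500):
        position += pid.update(1.0, position) * 0.01
    print(round(position, 3))  # settles near 1.0
```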
Required Qualifications
Bachelor's degree in Computer Engineering, Electrical Engineering, Computer Science, or related field (Master's a plus).
10-20 years professional experience in:
C and C++ programming
Embedded Software Development
RTOS-based design (e.g., FreeRTOS, QNX, VxWorks, ThreadX, etc.)
Control systems and real-time embedded environments
Strong experience with:
Device driver development
Board bring-up and hardware interfacing
Debugging tools (oscilloscopes, logic analyzers, JTAG, etc.)
Excellent understanding of:
Memory management
Multithreading
Interrupt-driven systems
Communication protocols (UART, SPI, I2C, CAN, Ethernet)
Preferred Qualifications
Experience with robotics, motion control, industrial automation, or safety-critical systems.
Exposure to machine learning integration in embedded platforms.
Experience in high-precision or high-speed automation workflows.
Target Industries / Domains
Ideal candidates may come from:
Medical Devices
Semiconductor Equipment
Aerospace & Defense
Industrial Control Systems
Robotics & Automation
Machinery & Mechatronics
Appliances & Devices
Embedded Consumer or Industrial Electronics
Software Engineer
Data engineer job in Hauppauge, NY
JSG is hiring a Software Engineer in Hauppauge, NY. Must be a US Citizen and work onsite. Salary range: $127K-$137K + bonus
Our charter is to develop fuel measurement, management and indicating systems for commercial and defense airframers. The Software Engineering team works closely with the Systems and Electronic Hardware Engineering teams to develop, qualify and certify these technologies as products for our customers in aerospace and industrial markets.
Develop embedded software using C and/or model-based tools such as SCADE
Develop high level and low level software requirements
Create requirements-based test cases and verification procedures
Perform software integration testing on target hardware using both real and simulated inputs/outputs
Analyze software requirements, design and code to assure compliance to standards and guidelines
Perform traceability analysis from customer specification requirements to software code
Participate in software certification audits, e.g. stages of involvement (SOI)
BS in Software Engineering, Computer Engineering, Computer Science or related field
5+ years of experience performing software development, verification and/or integration
Strong technical aptitude with analytical and problem-solving capabilities
Excellent interpersonal and communication skills, both verbal and written
Ability to work in a team environment, cultivate strong relationships and demonstrate initiative
Experience with C programming language
Experience with model-based software development using SCADE
Experience developing embedded software control systems
Experience planning and executing projects using Agile software development methodology
Experience managing requirements using DOORS or DOORS Next Gen (DNG)
Experience with digital signal processing or digital filter design
Experience with ARM microprocessors
Experience with serial communication protocols (e.g. CANbus, ARINC, RS-232)
Familiarity with aerospace (e.g., DO-178, DO-330, DO-331) and/or industrial (e.g., IEC 61508) software certification requirements
Familiarity with functional safety standards such as ISO 13849, IEC 61508, IEC 62061, ISO 26262 or ARP4761
We are looking for a candidate with working experience designing, developing and verifying embedded software in aerospace and/or industrial applications. The candidate should be familiar with industry-standard software development and design assurance practices (such as DO-178, ISO 26262, EN 50128, IEC 61508 or IEC 62304) and their application across the entire software development lifecycle.
Johnson Service Group (JSG) is an Equal Opportunity Employer. JSG provides equal employment opportunities to all applicants and employees without regard to race, color, religion, sex, age, sexual orientation, gender identity, national origin, disability, marital status, protected veteran status, or any other characteristic protected by law.
Senior Backend Software Engineer (.NET/C#)
Data engineer job in Garden City, NY
Our client is an AI-powered ecommerce/marketing software company. Their suite of products integrates Web Content Management, eCommerce, eMarketing, Social Media management, and Web Analytics. They are seeking a Backend Software Engineer to play a key role in building, integrating, and optimizing their flagship site search product, designing and maintaining API-first backend services with C#, .NET, NoSQL, and AWS technologies.
***ONSITE in GARDEN CITY, NY***
***DIRECT HIRE POSITION****
***NO C2C or SPONSORSHIP***
This position is an excellent opportunity for someone who enjoys solving complex problems, working independently and in a collaborative environment, and has a strong interest in search technologies.
Responsibilities
Design, develop, and maintain API-first backend services using C#
Work closely with product and engineering teams to deliver secure, scalable, and efficient solutions.
Design new features to work with AWS services in a cost-effective, automated and scalable manner.
Support ecommerce platform integrations.
Collaborate with the infrastructure team on deployments, monitoring, and performance tuning.
Contribute to technical design discussions and help drive architectural decisions.
Write clean, maintainable, and testable code.
Required Qualifications
3-10 years of professional software engineering experience
MUST HAVE eCommerce or Search Technology experience. Examples include search technologies (Lucene, Elasticsearch/OpenSearch, Algolia, Coveo, SearchSpring, Klevu, Bloomreach) and ecommerce platforms (BigCommerce, Optimizely Configured Commerce, Shopify, Magento/Adobe Commerce, WooCommerce, Shopware, Unilog). A toy search-index sketch follows this list.
Must have strong proficiency in backend development; .NET/C# strongly preferred.
Hands-on experience with cloud platforms (AWS preferred, GCP and Azure also accepted).
Strong understanding of APIs, microservices, and integration patterns.
Solid knowledge of relational databases and/or NoSQL databases.
Ability to work on-site in Garden City, NY.
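For readers coming from the commerce side rather than search, the core data structure behind engines like Lucene and Elasticsearch is an inverted index; here is a toy sketch in Python, with the catalog and tokenizer invented for illustration.

```python
from collections import defaultdict

def build_index(docs: dict[int, str]) -> dict[str, set[int]]:
    """Map each token to the set of document ids containing it."""
    index: dict[str, set[int]] = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index: dict[str, set[int]], query: str) -> set[int]:
    """AND-semantics search: documents containing every query token."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    result = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        result &= index.get(token, set())
    return result

catalog = {1: "red running shoes", 2: "blue running jacket", 3: "red jacket"}
idx = build_index(catalog)
print(search(idx, "red jacket"))  # {3}
```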
Full Stack Hedge Fund Software Engineer (Java/Angular/React)
Data engineer job in Stamford, CT
Focus Capital Markets is supporting its Connecticut-based hedge fund client by providing a unique opportunity for a talented senior software engineer to work across verticals within the organization. In this role, you will assist the business by developing front-end and back-end applications, building and scaling APIs, and working with the business to define technical solutions for business requirements. You will work on both sides of the stack, with a Core Java back-end (plus C#/.NET) and the latest versions of Angular on the front-end.
Although the organization is outside of the NYC area, it is just as lucrative and would afford someone career stability, longevity, and growth opportunity within the organization. The parent company is best of breed in the hedge fund industry and there are opportunities to grow from within.
You will work onsite Monday-Thursday.
Requirements:
5+ years of software engineering experience leveraging Angular on the front-end and Core Java or C# on the back-end.
Experience with React is relevant.
Experience with SQL is preferred.
Experience with REST APIs
Bachelor's degree or higher in computer science, mathematics, or a related field.
Must have excellent communication skills
Senior Software Engineer (Full Stack), Data Analytics - Pharma
Data engineer job in Ridgefield, CT
$140-210K + Bonus
(*At this time our client cannot sponsor or transfer work visas, including H1B, OPT, F1)
A global pharmaceutical and healthcare conglomerate seeks a dynamic and collaborative Lead Full Stack Software Engineer with 7+ years of hands-on front-end and back-end experience designing, developing, testing, and delivering fully functioning cloud-based data analytics applications and backend services. You will help lead application development from ideation and architecture to deployment and optimization, and integrate data science solutions such as analytics and machine learning. Must have full stack data analytics experience, AWS Cloud, and hands-on experience in data pipeline creation and ETL/ELT (AWS Glue, Databricks, DBT). This is a visible role that will deliver on key data transformation initiatives and shape the future of data-driven decision making across the business.
Requirements
Hands-on experience in Data pipeline creation, ETL/ELT (AWS Glue, Databricks, DBT).
Build and maintain robust backend systems using AWS Lambda, API Gateway, and other serverless technologies (a minimal handler sketch follows this list).
Experience with frontend visualization tools like Tableau or PowerBI.
Hands-on expertise in Agile Development, Test Automation, IT Security best practices, Continuous Development and deployment tools (Git, Jenkins, Docker, Kubernetes), and functional programming.
Familiarity with IT security, container platforms, and software environments across QA, DEV, STAGING, and PROD phases.
Demonstrated thought leadership in driving innovation and best practice adoption.
Leadership & Collaboration: Strong communication, mentorship, and cross-functional teamwork.
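To make the serverless requirement concrete, here is a minimal AWS Lambda handler sketch in Python. The handler(event, context) signature is Lambda's standard entry point; the event shape and payload fields are hypothetical.

```python
import json

def handler(event, context):
    """Minimal Lambda entry point behind an API Gateway proxy integration."""
    try:
        body = json.loads(event.get("body") or "{}")
        study_id = body["study_id"]  # hypothetical required field
    except (json.JSONDecodeError, KeyError) as exc:
        return {"statusCode": 400, "body": json.dumps({"error": str(exc)})}

    # Real code would query a data store or kick off a Glue/Databricks job.
    return {
        "statusCode": 200,
        "body": json.dumps({"study_id": study_id, "status": "queued"}),
    }

if __name__ == "__main__":
    fake_event = {"body": json.dumps({"study_id": "S-001"})}
    print(handler(fake_event, None))
```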
Responsibilities include:
Application Development: Design, develop, and maintain both the front-end and back-end components of full-fledged applications using state-of-the-art programming languages and frameworks.
Architecture Integration: Incorporate API-enabled backend technologies into application architecture, following established architecture frameworks and standards to deliver cohesive and maintainable solutions.
Agile Collaboration: Work collaboratively with the team and product owner, following agile methodologies, to deliver secure, scalable, and fully functional applications in iterative cycles.
Testing Strategy and Framework Design: Develop and implement comprehensive testing strategies and frameworks to ensure the delivery of high-quality, reliable software.
For immediate consideration please submit resume in Word or PDF format
** AN EQUAL OPPORTUNITY EMPLOYER **
Java Software Engineer
Data engineer job in Englewood Cliffs, NJ
Hiring a full-time Java Developer in Englewood Cliffs, NJ. Onsite, no remote.
Requires a Java developer with 10+ years of experience, including microservices.
Data Scientist - Early Career (USA)
Data engineer job in Stamford, CT
Trexquant actively trades multiple asset classes, including global equities, futures, corporate bonds, options, and foreign exchange. Data is at the core of everything we do and we are looking for individuals who are passionate about working with data and are curious about how it is transformed into robust and profitable predictive models. As a data scientist, you will specialize in one of these asset classes, becoming the go-to expert for all data-related matters within that domain. This is a unique opportunity to lead the data efforts for a specific asset class and make a direct impact on the firm's trading strategies.
Responsibilities
* Collaborate closely with strategy and machine learning teams to identify predictive signals and help develop models using relevant data variables.
* Evaluate and explore new datasets recommended by researchers and partners.
* Develop deep familiarity with datasets in your asset class by engaging with data vendors and attending data conferences.
* Stay up to date with advancements in data science and machine learning techniques relevant to quantitative investing.
Requirements
* Bachelor's, Master's, or Ph.D. in Mathematics, Statistics, Computer Science, or a related STEM field.
* Experience in data science, with a focus on quantitative analysis and model development.
* Strong quantitative and analytical skills, with a deep understanding of statistical modeling and data-driven problem solving.
* Proficient in Python, with experience using relevant libraries for data analysis, machine learning, and numerical computing (a small model-fitting sketch follows).
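As a flavor of that Python tooling, here is a minimal model-fitting sketch using scikit-learn on synthetic data; nothing here reflects the firm's actual signals or models.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic example: five noisy features and a linear target. Real signal
# research would use market data and far more careful validation.
X = rng.normal(size=(1000, 5))
y = X @ np.array([0.5, -0.2, 0.0, 0.1, 0.3]) + rng.normal(scale=0.5, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"out-of-sample R^2: {model.score(X_test, y_test):.3f}")
```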
Benefits
* Competitive salary plus bonus based on individual and company performance.
* Collaborative, casual, and friendly work environment.
* PPO health, dental, and vision insurance premiums fully covered for you and your dependents.
* Pre-tax commuter benefits.
* Weekly company meals.
Trexquant is an Equal Opportunity Employer
Principal Data Scientist
Data engineer job in Bridgeport, CT
Description & Requirements
We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work).
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments and their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional Programming experience (e.g. Python, R, etc.).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or BS in a quantitative discipline (e.g., Math, Physics, Engineering, Economics, Computer Science, etc.).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter (see the sketch after this list).
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
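As a toy illustration of separating signal from noise (see the statistics bullet above), here is a z-score detector in NumPy; the data, threshold, and injected spike are invented for the example, and an operational detector would model the background far more carefully.

```python
import numpy as np

def zscore_detections(series: np.ndarray, threshold: float = 4.0) -> np.ndarray:
    """Return indices of points that stand out from the background noise."""
    mu, sigma = series.mean(), series.std()
    return np.where(np.abs(series - mu) > threshold * sigma)[0]

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, size=500)
noise[123] += 8.0  # inject one strong "signal"
print(zscore_detections(noise))  # -> [123]
```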
Background Investigations:
- IRS MBI - Eligibility
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary: $156,740.00
Maximum Salary: $234,960.00
Director, ERM - Actuary or Data Scientist
Data engineer job in Greenwich, CT
Company Details
"Our Company provides a state of predictability which allows brokers and agents to act with confidence."
Founded in 1967, W. R. Berkley Corporation has grown from a small investment management firm into one of the largest commercial lines property and casualty insurers in the United States.
Along the way, we've been listed on the New York Stock Exchange, become a Fortune 500 Company, joined the S&P 500, and seen our gross written premiums exceed $10 billion.
Today the Berkley brand comprises more than 60 businesses worldwide and is divided into two segments: Insurance, and Reinsurance & Monoline Excess. Led by our Executive Chairman, founder and largest shareholder, William R. Berkley and our President and Chief Executive Officer, W. Robert Berkley, Jr., W.R. Berkley Corporation is well-positioned to respond to opportunities for future growth.
The Company is an equal employment opportunity employer.
Responsibilities
*Please provide a one-page resume when applying.
Enterprise Risk Management (ERM) Team
Our key risk management aim is to maximize Berkley's return on capital over the long term for an acceptable level of risk. This requires regular interaction with senior management both in corporate and our business units. The ERM team comprises ERM actuaries and catastrophe modelers responsible for identification, quantification and reporting on insurance, investment, credit and operational risks. The ERM team is a corporate function at Berkley's headquarters in Greenwich, CT.
The Role
The successful candidate will collaborate with other ERM team members on a variety of projects with a focus on exposure management and catastrophe modeling for casualty (re)insurance. The candidate is expected to demonstrate expertise in data and analytics and be capable of presenting data-driven insights to senior executives.
Key responsibilities include:
Casualty Accumulation and Catastrophe Modeling
• Lead the continuous enhancement of the casualty data ETL process
• Analyze and visualize casualty accumulations by insureds, lines and industries to generate actionable insight for business leaders (a brief sketch follows these responsibilities)
• Collaborate with data engineers to resolve complex data challenges and implement scalable solutions
• Support the development of casualty catastrophe scenarios by researching historical events and emerging risks
• Model complex casualty reinsurance protections
Risk Process Automation and Group Reporting
• Lead AI-driven initiatives aimed at automating key risk processes and projects
• Contribute to Group-level ERM reports, including deliverables to senior executives, rating agencies and regulators
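A sketch of the accumulation-analysis work referenced above, assuming a pandas workflow; the portfolio records below are invented for illustration.

```python
import pandas as pd

# Hypothetical casualty exposure records: one row per policy.
policies = pd.DataFrame({
    "insured":    ["Acme Corp", "Acme Corp", "Globex", "Initech"],
    "line":       ["GL", "Umbrella", "GL", "D&O"],
    "industry":   ["Manufacturing", "Manufacturing", "Energy", "Tech"],
    "limit_musd": [10.0, 25.0, 15.0, 5.0],
})

# Accumulations by insured and by industry/line: the kind of rollup used
# to spot concentrations across business units.
by_insured = (policies.groupby("insured")["limit_musd"]
                      .sum().sort_values(ascending=False))
by_segment = policies.pivot_table(index="industry", columns="line",
                                  values="limit_musd", aggfunc="sum",
                                  fill_value=0)
print(by_insured)
print(by_segment)
```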
Qualifications
• Minimum of 5 years of experience in P&C (re)insurance, with a focus on casualty
• Proficiency in R/Python and Excel
• Strong verbal and written communication skills
• Proven ability to manage multiple projects and meet deadlines in a dynamic environment
Education Requirement
• Minimum of Bachelor's degree required (preferably in STEM)
• ACAS/FCAS is a plus
Sponsorship Details: Sponsorship offered for this role.
Tech Lead, Data & Inference Engineer
Data engineer job in Greenwich, CT
Our Client
A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised twelve million dollars in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above market base + bonus + equity
Roles & Responsibilities
Lead the design, development, and scaling of an end-to-end data platform from ingestion to insights, ensuring that data is fast, reliable, and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party APIs into trusted, low-latency systems.
Take full ownership of reliability, cost, and service level objectives, including achieving 99.9% uptime, maintaining minutes-level latency, and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions.
Operate inference pipelines that enhance and enrich data, including enrichment, scoring, and quality assurance using large language models and retrieval-augmented generation. Manage version control, caching, and evaluation loops (a caching sketch follows this list).
Work across teams to deliver data as a product through clear data contracts, ownership models, lifecycle processes, and usage-based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade-offs, and reversibility while making practical decisions on whether to build internally or buy externally.
Scale integration with APIs and internal services while ensuring data consistency, high data quality, and support for both real-time and batch-oriented use cases.
Mentor engineers, review code, and raise the overall technical standard across teams. Promote data-driven best practices throughout the organization.
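One theme above, enrichment with caching and version control, can be sketched briefly. The model call is stubbed out, and every name here is hypothetical rather than the client's actual stack.

```python
import hashlib
import json

PROMPT_VERSION = "v3"          # bump to invalidate cached enrichments
_cache: dict[str, dict] = {}   # stand-in for Redis or a warehouse table

def fake_llm_score(record: dict) -> dict:
    """Stub for a real LLM call that scores or enriches a record."""
    return {"fit_score": 0.9 if "cto" in record.get("title", "").lower() else 0.2}

def enrich(record: dict) -> dict:
    """Version-aware, cache-keyed enrichment of one account record."""
    key = hashlib.sha256(
        (PROMPT_VERSION + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    if key not in _cache:               # only pay for the model call once
        _cache[key] = fake_llm_score(record)
    return {**record, **_cache[key], "enrichment_version": PROMPT_VERSION}

print(enrich({"company": "Example Inc", "title": "CTO"}))
print(len(_cache))  # 1: repeated calls hit the cache
```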
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: able to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
Lead Data Engineer
Data engineer job in Stamford, CT
About Us Over the past 25 years, Waste Harmonics Keter has been at the forefront of the waste and recycling industry, delivering innovative, data-driven solutions. We help companies right-size their waste operations and get out of the waste business with industry-leading expertise, state-of-the-art waste technologies, and industry-leading customer service. Visit Waste Harmonics Keter for more information.
We are excited to grow our Data & Insights Team and meet talented professionals who want to make an impact. At this time, we are unable to offer visa sponsorship for the Lead Data Engineer role. Candidates must have authorization to work in the United States now and in the future without the need for sponsorship.
If this aligns with your current eligibility, we warmly encourage you to apply, we would love to learn more about you and the value you can bring to Waste Harmonics Keter.
Role Purpose
The Data Engineer designs, builds, and optimizes scalable, reusable, and performance-oriented data infrastructure that supports enterprise-wide reporting and analytics needs. The role aligns with modern data platform architecture and engineering best practices to ensure long-term maintainability, flexibility, and data trust.
Key Responsibilities
Design, build, and maintain data pipelines to support reporting, analytics, and business intelligence.
Develop and optimize scalable ETL/ELT processes using modern frameworks (e.g., dbt, Azure Data Factory, Fabric).
Implement data models that support reporting and analytics needs (e.g., star schema, slowly changing dimensions); a Type 2 sketch follows this list.
Ensure data quality, lineage, and observability for reliable business use.
Collaborate with cross-functional teams to deliver integrated data solutions across the enterprise.
Troubleshoot and optimize pipeline performance and SQL queries.
Deliver documentation for technical workflows, transformations, and logic.
Support governance by applying security, access control, and compliance standards in cloud environments.
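As an illustration of the slowly-changing-dimension modelling named above, here is a minimal Type 2 upsert sketched in Python. In this stack the logic would usually live in dbt/SQL; the record layout is hypothetical.

```python
from datetime import date

def scd2_upsert(dimension: list[dict], incoming: dict, today: date) -> None:
    """Apply one record to a Type 2 dimension kept as a list of dicts.

    Current rows have valid_to=None; a changed attribute closes the old
    row and appends a new current row, preserving history.
    """
    key, attrs = incoming["customer_id"], incoming["attrs"]
    current = next((r for r in dimension
                    if r["customer_id"] == key and r["valid_to"] is None), None)
    if current and current["attrs"] == attrs:
        return                       # no change, nothing to do
    if current:
        current["valid_to"] = today  # close out the superseded version
    dimension.append({"customer_id": key, "attrs": attrs,
                      "valid_from": today, "valid_to": None})

dim: list[dict] = []
scd2_upsert(dim, {"customer_id": 7, "attrs": {"tier": "silver"}}, date(2024, 1, 1))
scd2_upsert(dim, {"customer_id": 7, "attrs": {"tier": "gold"}}, date(2024, 6, 1))
print(dim)  # two versions: a closed silver row and an open gold row
```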
Core Competencies & Behaviors
Technical Excellence: Strong understanding of data architecture, modelling, and transformation flows.
Problem Solving: Able to troubleshoot complex performance issues and propose efficient solutions.
Collaboration: Works effectively with analysts, engineers, and business teams to deliver end-to-end solutions.
Continuous Improvement: Applies CI/CD, version control, and best practices to improve workflow efficiency.
Detail Orientation: Ensures data accuracy, completeness, and consistency across systems.
Adaptability: Thrives in a modern, cloud-based data environment and adapts to evolving technologies.
Experience & Knowledge
Experience building and maintaining cloud-based data pipelines (e.g., Azure, Snowflake, Fabric).
Hands-on use of orchestration tools and ETL/ELT frameworks (dbt, ADF).
Strong knowledge of data modelling principles and data architecture concepts.
Experience with CI/CD pipelines, version control (e.g., Git), and modern data stack practices.
Familiarity with monitoring and observability tools for pipelines.
Understanding of security and access controls in cloud data platforms.
Qualifications
Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience).
Proficiency in SQL and modern data stack tools (dbt, Snowflake, Azure Data Factory, Fabric).
Strong technical documentation and communication skills.
Waste Harmonics Keter Comprehensive Benefits Package
Competitive Compensation
Annual Bonus Plan at Every Level
Continuous Learning and Development Opportunities
401(k) Retirement Savings with Company Match; Immediate Vesting
Medical & Dental Insurance
Vision Insurance (Company Paid)
Life Insurance (Company Paid)
Short-term & Long-term Disability (Company paid)
Employee Assistance Program
Flexible Spending Accounts/Health Savings Accounts
Paid Time Off (PTO), Including birthday off, community volunteer hours and a Friday off in the summer
7 Paid Holidays
At Waste Harmonics Keter , we celebrate diversity and are committed to creating an inclusive environment for all employees. We welcome candidates from all backgrounds to apply.
Principal Data Engineer for AI Platform
Data engineer job in Harrison, NY
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Principal Data Engineer for AI Platform
About Mastercard
Mastercard is a global technology company in the payments industry, connecting billions of consumers, financial institutions, merchants, governments, and businesses worldwide. We are driving the future of commerce by enabling secure, simple, and smart transactions. Artificial Intelligence is at the core of our strategy to make Mastercard stronger and commerce safer, smarter and more personal. At Mastercard, we're building next-generation AI-powered platforms to drive innovation and impact.
Role:
- Drive modernization from legacy and on-prem systems to modern, cloud-native, and hybrid data platforms.
- Architect and lead the development of a Multi-Agent ETL Platform for batch and event streaming, integrating AI agents to autonomously manage ETL tasks such as data discovery, schema mapping, and error resolution.
- Define and implement data ingestion, transformation, and delivery pipelines using scalable frameworks (e.g., Apache Airflow, NiFi, dbt, Spark, Kafka, or Dagster); a minimal DAG sketch follows this list.
- Leverage LLMs, and agent frameworks (e.g., LangChain, CrewAI, AutoGen) to automate pipeline management and monitoring.
- Ensure robust data governance, cataloging, versioning, and lineage tracking across the ETL platform.
- Define project roadmaps, KPIs, and performance metrics for platform efficiency and data reliability.
- Establish and enforce best practices in data quality, CI/CD for data pipelines, and observability.
- Collaborate closely with cross-functional teams (Data Science, Analytics, and Application Development) to understand requirements and deliver efficient data ingestion and processing workflows.
- Establish and enforce best practices, automation standards, and monitoring frameworks to ensure the platform's reliability, scalability, and security.
- Build relationships and communicate effectively with internal and external stakeholders, including senior executives, to influence data-driven strategies and decisions.
- Continuously engage and improve teams' performance by conducting recurring meetings, knowing your people, managing career development, and understanding who is at risk.
- Oversee deployment, monitoring, and scaling of ETL and agent workloads across multi-cloud environments.
- Continuously improve platform performance, cost efficiency, and automation maturity.
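As a concrete anchor for the pipeline-frameworks bullet above, here is a minimal Apache Airflow DAG sketch using the TaskFlow API (@dag/@task from Airflow 2.x; the schedule argument assumes 2.4+). Task names and logic are hypothetical.

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def ingest_transform_deliver():
    """Hypothetical three-stage pipeline: ingest -> transform -> deliver."""

    @task
    def ingest() -> list[dict]:
        # Real code would pull from Kafka, files, or an upstream API.
        return [{"txn_id": 1, "amount": 120.0}, {"txn_id": 2, "amount": -5.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Drop invalid rows; schema mapping and error resolution go here.
        return [r for r in rows if r["amount"] > 0]

    @task
    def deliver(rows: list[dict]) -> None:
        print(f"loading {len(rows)} clean rows")  # stand-in for a warehouse load

    deliver(transform(ingest()))

ingest_transform_deliver()
```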
All About You:
- Hands-on experience in data engineering, data platform strategy, or a related technical domain.
- Proven experience leading global data engineering or platform engineering teams.
- Proven experience in building and modernizing distributed data platforms using technologies such as Apache Spark, Kafka, Flink, NiFi, and Cloudera/Hadoop.
- Strong experience with one or more of data pipeline tools (Nifi, Airflow, dbt, Spark, Kafka, Dagster, etc.) and distributed data processing at scale.
- Experience building and managing AI-augmented or agent-driven systems will be a plus.
- Proficiency in Python, SQL, and data ecosystems (Oracle, AWS Glue, Azure Data Factory, BigQuery, Snowflake, etc.).
- Deep understanding of data modeling, metadata management, and data governance principles.
- Proven success in leading technical teams and managing complex, cross-functional projects.
- Passion for staying current in a fast-paced field with proven ability to lead innovation in a scaled organization.
- Excellent communication skills, with the ability to tailor technical concepts to executive, operational, and technical audiences.
- Expertise and ability to lead technical decision-making considering scalability, cost efficiency, stakeholder priorities, and time to market.
- Proven track record leading high-performing teams, with experience leading and coaching director-level reports and experienced individual contributors.
- Advanced degree in Data Science, Computer Science, Information Technology, Business Administration, or a related field. Equivalent experience will also be considered.
Why Join Us?
At Mastercard, you'll help shape the future of AI in global commerce-solving complex challenges at scale, driving financial inclusion, and reinforcing the trust and security that define our brand. You'll work with world-class talent, cutting-edge technologies, and will make a lasting impact.
Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
In line with Mastercard's total compensation philosophy and assuming that the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full time (and certain part time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time, 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401k with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more. Mastercard benefits for interns generally include: 56 hours of Paid Sick and Safe Time; jury duty leave; and on-site fitness facilities in some locations.
Pay Ranges
Arlington, Virginia: $170,000 - $273,000 USD
Purchase, New York: $170,000 - $273,000 USD
San Francisco, California: $178,000 - $284,000 USD
Application Support Engineer
Data engineer job in Fairfield, CT
About Us
We are a global investment firm focused on combining financial theory with practical application. Our goal is to deliver long-term results by cutting through market noise, identifying the most impactful factors, and developing ideas that stand up to rigorous testing. Over the years, we have built a reputation as innovators in portfolio management and alternative investment strategies.
Our team values intellectual curiosity, honesty, and a commitment to understanding what drives financial markets. Collaboration, transparency, and openness to new ideas are central to our culture, fostering innovation and continuous improvement.
Your Role
We are seeking an Application Support Engineer to operate at the intersection of technical systems and business processes that power our investment operations. This individual contributor role involves supporting a complex technical environment, resolving production issues, and contributing to projects that enhance systems and processes. You will gain hands-on experience with cloud-deployed portfolio management and research systems and work closely with both business and technical teams.
This role is ideal for someone passionate about technology and systems reliability who is looking to grow into a reliability- or engineering-focused position.
Responsibilities
Develop and maintain expertise in the organization's applications to support internal users.
Manage user expectations and ensure satisfaction with our systems and tools.
Advocate for users with project management and development teams.
Work closely with QA to report and track issues identified by users.
Ensure proper escalation for unresolved issues to maintain user satisfaction.
Participate in production support rotations, including off-hours coverage.
Identify gaps in support processes and create documentation or workflows in collaboration with development and business teams.
Diagnose and resolve system issues, including debugging code, analyzing logs, and investigating performance or resource problems (a minimal log-triage sketch follows this list).
Collaborate across teams to resolve complex technical problems quickly and efficiently.
Maintain documentation of system behavior, root causes, and process improvements.
Contribute to strategic initiatives that enhance system reliability and operational efficiency.
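To give a flavor of the day-to-day log analysis mentioned above, here is a minimal Python sketch that scans an application log for ERROR lines and summarizes the most frequent messages. The log file name (app.log) and line format are hypothetical assumptions, not details from this posting.

```python
import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("app.log")  # hypothetical log file
ERROR_RE = re.compile(r"\bERROR\b\s+(.*)")  # assumes "... ERROR message" lines

# Count occurrences of each distinct error message.
counts: Counter[str] = Counter()
for line in LOG_PATH.read_text().splitlines():
    if match := ERROR_RE.search(line):
        counts[match.group(1).strip()] += 1

# Print the five most frequent errors for triage.
for message, n in counts.most_common(5):
    print(f"{n:>5}  {message}")
```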
Qualifications
Bachelor's degree in Engineering, Computer Science, or equivalent experience.
2+ years of experience supporting complex software systems, collaborating with business users and technical teams.
Hands-on technical skills including SQL and programming/debugging (Python preferred).
Strong written and verbal communication skills.
Ability to work independently and within small teams.
Eagerness to learn new technologies and automate manual tasks to improve system reliability.
Calm under pressure, with demonstrated responsibility, maturity, and trustworthiness.
Compensation & Benefits
Salary range: $115,000-$135,000 (may vary based on experience, location, or organizational needs).
Eligible for annual discretionary bonus.
Comprehensive benefits package including paid time off, medical/dental/vision coverage, 401(k), and other benefits as applicable.
The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
Principal Data Engineer for AI Platform
Data engineer job in Harrison, NY
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
About Mastercard
Mastercard is a global technology company in the payments industry, connecting billions of consumers, financial institutions, merchants, governments, and businesses worldwide. We are driving the future of commerce by enabling secure, simple, and smart transactions. Artificial Intelligence is at the core of our strategy to make Mastercard stronger and commerce safer, smarter and more personal. At Mastercard, we're building next-generation AI-powered platforms to drive innovation and impact.
Role:
* Drive modernization from legacy and on-prem systems to modern, cloud-native, and hybrid data platforms.
* Architect and lead the development of a Multi-Agent ETL Platform for batch and event streaming, integrating AI agents to autonomously manage ETL tasks such as data discovery, schema mapping, and error resolution.
* Define and implement data ingestion, transformation, and delivery pipelines using scalable frameworks (e.g., Apache Airflow, NiFi, dbt, Spark, Kafka, or Dagster); a minimal Airflow sketch follows this list.
* Leverage LLMs and agent frameworks (e.g., LangChain, CrewAI, AutoGen) to automate pipeline management and monitoring.
* Ensure robust data governance, cataloging, versioning, and lineage tracking across the ETL platform.
* Define project roadmaps, KPIs, and performance metrics for platform efficiency and data reliability.
* Establish and enforce best practices in data quality, CI/CD for data pipelines, and observability.
* Collaborate closely with cross-functional teams (Data Science, Analytics, and Application Development) to understand requirements and deliver efficient data ingestion and processing workflows.
* Establish and enforce best practices, automation standards, and monitoring frameworks to ensure the platform's reliability, scalability, and security.
* Build relationships and communicate effectively with internal and external stakeholders, including senior executives, to influence data-driven strategies and decisions.
* Continuously engage teams and improve their performance through recurring meetings, knowing your people, managing career development, and staying aware of retention risk.
* Oversee deployment, monitoring, and scaling of ETL and agent workloads across multi-cloud environments.
* Continuously improve platform performance, cost efficiency, and automation maturity.
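To make the pipeline work above concrete, here is a minimal sketch of a daily batch ETL job using Apache Airflow's TaskFlow API (Airflow 2.4+), one of the frameworks named in the list. The DAG name, sample data, and field names are hypothetical placeholders, not a description of the actual platform.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_etl():
    @task
    def extract() -> list[dict]:
        # In a real pipeline this would read from Kafka, an API, or object storage.
        return [{"id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Schema mapping / cleansing: the kind of step an AI agent could manage.
        return [{**r, "amount_cents": int(r["amount"] * 100)} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # In practice, hand off to the warehouse (Snowflake, BigQuery, etc.).
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


daily_etl()
```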
All About You:
* Hands-on experience in data engineering, data platform strategy, or a related technical domain.
* Proven experience leading global data engineering or platform engineering teams.
* Proven experience in building and modernizing distributed data platforms using technologies such as Apache Spark, Kafka, Flink, NiFi, and Cloudera/Hadoop.
* Strong experience with one or more data pipeline tools (NiFi, Airflow, dbt, Spark, Kafka, Dagster, etc.) and with distributed data processing at scale.
* Experience building and managing AI-augmented or agent-driven systems is a plus.
* Proficiency in Python, SQL, and data ecosystems (Oracle, AWS Glue, Azure Data Factory, BigQuery, Snowflake, etc.).
* Deep understanding of data modeling, metadata management, and data governance principles.
* Proven success in leading technical teams and managing complex, cross-functional projects.
* Passion for staying current in a fast-paced field with proven ability to lead innovation in a scaled organization.
* Excellent communication skills, with the ability to tailor technical concepts to executive, operational, and technical audiences.
* Expertise in leading technical decision-making, weighing scalability, cost efficiency, stakeholder priorities, and time to market.
* Proven track record of leading high-performing teams, including coaching director-level reports and experienced individual contributors.
* Advanced degree in Data Science, Computer Science, Information Technology, Business Administration, or a related field. Equivalent experience will also be considered.
Why Join Us?
At Mastercard, you'll help shape the future of AI in global commerce: solving complex challenges at scale, driving financial inclusion, and reinforcing the trust and security that define our brand. You'll work with world-class talent and cutting-edge technologies, and you'll make a lasting impact.
Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
* Abide by Mastercard's security policies and practices;
* Ensure the confidentiality and integrity of the information being accessed;
* Report any suspected information security violation or breach; and
* Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
In line with Mastercard's total compensation philosophy and assuming that the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full time (and certain part time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time, 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401k with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more. Mastercard benefits for interns generally include: 56 hours of Paid Sick and Safe Time; jury duty leave; and on-site fitness facilities in some locations.
Pay Ranges
Arlington, Virginia: $170,000 - $273,000 USD
Purchase, New York: $170,000 - $273,000 USD
San Francisco, California: $178,000 - $284,000 USD