
Data engineer jobs in Bridgeport, CT

- 342 jobs
  • Data Engineer

    The Phoenix Group (4.8 company rating)

    Data engineer job in Fairfield, CT

    Data Engineer - Vice President Greenwich, CT About the Firm We are a global investment firm focused on applying financial theory to practical investment decisions. Our goal is to deliver long-term results by analyzing market data and identifying what truly matters. Technology is central to our approach, enabling insights across both traditional and alternative strategies. The Team A new Data Engineering team is being established to work with large-scale datasets across the organization. This team partners directly with researchers and business teams to build and maintain infrastructure for ingesting, validating, and provisioning large volumes of structured and unstructured data. Your Role As a Data Engineer, you will help design and build an enterprise data platform used by research teams to manage and analyze large datasets. You will also create tools to validate data, support back-testing, and extract actionable insights. You will work closely with researchers, portfolio managers, and other stakeholders to implement business requirements for new and ongoing projects. The role involves working with big data technologies and cloud platforms to create scalable, extensible solutions for data-intensive applications. What You'll Bring 6+ years of relevant experience in data engineering or software development Bachelor's, Master's, or PhD in Computer Science, Engineering, or related field Strong coding, debugging, and analytical skills Experience working directly with business stakeholders to design and implement solutions Knowledge of distributed data systems and large-scale datasets Familiarity with big data frameworks such as Spark or Hadoop Interest in quantitative research (no prior finance or trading experience required) Exposure to cloud platforms is a plus Experience with Python, NumPy, pandas, or similar data analysis tools is a plus Familiarity with AI/ML frameworks is a plus Who You Are Thoughtful, collaborative, and comfortable in a fast-paced environment Hard-working, intellectually curious, and eager to learn Committed to transparency, integrity, and innovation Motivated by leveraging technology to solve complex problems and create impact Compensation & Benefits Salary range: $190,000 - $260,000 (subject to experience, skills, and location) Eligible for annual discretionary bonus Comprehensive benefits including paid time off, medical/dental/vision insurance, 401(k), and other applicable benefits We are an Equal Opportunity Employer. EEO/VET/DISABILITY The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
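    A minimal sketch of the kind of data-validation tooling this posting describes, using pandas; the column names, checks, and thresholds below are illustrative assumptions, not part of the role:

    ```python
    import pandas as pd

    def validate_daily_prices(df: pd.DataFrame) -> list[str]:
        """Return a list of data-quality issues found in a daily price file.

        Columns assumed for illustration: ticker, trade_date, close_price.
        """
        issues = []
        required = {"ticker", "trade_date", "close_price"}
        missing = required - set(df.columns)
        if missing:
            issues.append(f"missing columns: {sorted(missing)}")
            return issues
        if df["close_price"].isna().any():
            issues.append("null close_price values found")
        if (df["close_price"] <= 0).any():
            issues.append("non-positive close_price values found")
        dupes = df.duplicated(subset=["ticker", "trade_date"]).sum()
        if dupes:
            issues.append(f"{dupes} duplicate (ticker, trade_date) rows")
        return issues

    # Example usage with a deliberately duplicated row
    frame = pd.DataFrame(
        {"ticker": ["ABC", "ABC"], "trade_date": ["2024-01-02", "2024-01-02"], "close_price": [10.5, 10.5]}
    )
    print(validate_daily_prices(frame))  # ['1 duplicate (ticker, trade_date) rows']
    ```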
    $190k-260k yearly 2d ago
  • C++ Market Data Engineer

    TBG | The Bachrach Group

    Data engineer job in Stamford, CT

    We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making. What You'll Do: Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design Ensure reliable data delivery with failover, gap recovery, and replay mechanisms Collaborate with researchers and engineers to align data formats for trading and simulation Instrument and test systems for continuous performance improvements What We're Looking For: 3+ years of C++ development experience (low-latency, high-throughput systems) Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH) Strong knowledge of concurrency, memory models, and compiler optimizations Python scripting skills for testing and automation Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
    $84k-114k yearly est. 17h ago
  • Senior Azure Data Engineer

    Oakridge Staffing

    Data engineer job in Stamford, CT

    Great opportunity with a private equity firm located in Stamford, CT. The Azure Data Engineer in this role will partner closely with investment and operations teams to build scalable data pipelines and modern analytics solutions across the firm and its portfolio. The Senior Azure Data Engineer's main responsibilities will be: Designing and implementing machine learning solutions as part of high-volume data ingestion and transformation pipelines Experience in designing solutions for large data warehouses and databases (Azure, Databricks and/or Snowflake) Gathering requirements from business stakeholders Experience in data architecture, data governance, data modeling, data transformation (from converting data, to data cleansing, to building data structures), data lineage, data integration, and master data management. Technical Skills Architecting and delivering solutions using the Azure Data Analytics platform including Azure Databricks/Azure SQL Data Warehouse Utilizing Databricks (for processing and transforming massive quantities of data and exploring the data through machine learning models) Design and build solutions powered by DBT models and integrate with Databricks. Utilize Snowflake for data application development, and secure sharing and consumption of real-time and/or shared data. Expertise in data manipulation and analysis using Python. SQL for data migration and analysis. Pluses: Past work experience in financial markets is a plus (Asset Management, Multi-strategy, Private Equity, Structured Products, Fixed Income, Trading, Portfolio Management, etc.). E-Mail: DIANA@oakridgestaffing.com Please feel free to connect with me on LinkedIn: www.linkedin.com/in/dianagjuraj
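    A rough sketch of the high-volume ingestion-and-transformation step described above, written in PySpark as it might run on Databricks; the paths, schema, and table names are assumptions for illustration only:

    ```python
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("portfolio_ingest_example").getOrCreate()

    # Read a raw extract, standardize types, and drop obviously bad rows.
    raw = spark.read.option("header", True).csv("/mnt/raw/positions/2024-06-30.csv")

    clean = (
        raw.withColumn("as_of_date", F.to_date("as_of_date", "yyyy-MM-dd"))
           .withColumn("market_value", F.col("market_value").cast("double"))
           .filter(F.col("market_value").isNotNull())
    )

    # On Databricks this would typically land in a Delta table that downstream dbt models build on.
    clean.write.format("delta").mode("overwrite").saveAsTable("analytics.positions_daily")
    ```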
    $84k-114k yearly est. 3d ago
  • Senior Data Architect

    JCW Group (3.7 company rating)

    Data engineer job in Cheshire, CT

    JCW has partnered with a leading provider of insurance, risk management, and data-driven solutions dedicated to the housing industry, to identify a Senior Data Architect to lead the strategy, design, and evolution of their enterprise data platform. This is a hybrid role based in the Cheshire, CT area. This individual will own the enterprise data architecture roadmap, guide data modeling standards, strengthen governance and security practices, and enable high-quality analytics across the business. You'll partner closely with Application, Infrastructure/Cloud, Cybersecurity, and business teams while shaping the data platform's long-term scalability and performance. Key Responsibilities Define and evolve the enterprise data architecture roadmap and reference architectures (warehouse, lakehouse). Own canonical models, dimensional designs, and integration standards across pipelines. Establish data governance, quality, lineage, and security practices. Collaborate with Infrastructure/Cloud to guide reliability, observability, and cost optimization. Enable high-quality analytics and a governed Power BI semantic layer for self-service use cases. Qualifications 5+ years in data architecture or data engineering roles with hands-on SQL and modeling expertise. Strong experience with Azure data services (Synapse, Data Factory, Data Lake) and modern data patterns (ELT/ETL, lakehouse). Proven ability to design scalable analytics/data models and optimize performance. Experience with data governance, security (RBAC, encryption), and change control. Prior background supporting insurance data domains is highly preferred but not required. If this sounds like you, feel free to apply!
    $106k-136k yearly est. 2d ago
  • Principal Software Engineer (Embedded Systems)

    Fustis LLC

    Data engineer job in Norwalk, CT

    Position Type: Full-Time / Direct Hire (W2) Salary: $200K+ base + 13% bonus Experience Required: 10-20 years Domain: Industrial Automation & Robotics Work Authorization: US Citizen or Green Card Interview Process: 2× Teams Interviews → Onsite Interview (expenses paid) “How Many Years With” (Candidate Screening Section) C: C++: RTOS: Embedded Software Development: Device Driver Software Development: Job Description We are seeking a Principal Software Engineer - Embedded Systems to join a high-performance engineering team building next-generation industrial automation and robotics platforms. This role blends hardware, firmware, real-time systems, machine learning components, and high-performance automation into one of the most technically challenging environments. The ideal candidate is passionate about writing software that interacts directly with real machines, drives motion control, solves physical-world problems, and contributes to global-scale automation systems. This role is hands-on, impact-driven, and perfect for someone who wants to see their code operating in motion - not just in a console. Key Responsibilities Design, implement, and optimize embedded software in C/C++ for real-time control systems. Develop and maintain real-time operating system (RTOS)-based applications. Implement low-latency firmware, control loops, and motion-control algorithms. Work with hardware teams to integrate sensors, actuators, and automation components. Architect scalable, high-performance embedded platforms for industrial robotics. Develop device drivers, board support packages (BSPs), and hardware abstraction layers. Own full lifecycle development: requirements → design → implementation → testing → deployment. Develop machine-learning-based modules for system categorization and algorithm organization (experience helpful, not required). Build real-time monitoring tools, diagnostics interfaces, and system health analytics. Troubleshoot complex hardware/software interactions in a real-time environment. Work closely with electrical, mechanical, and controls engineers. Participate in code reviews, architectural discussions, and continuous improvement. Required Qualifications Bachelor's degree in Computer Engineering, Electrical Engineering, Computer Science, or related field (Master's a plus). 10-20 years professional experience in: C and C++ programming Embedded Software Development RTOS-based design (e.g., FreeRTOS, QNX, VxWorks, ThreadX, etc.) Control systems and real-time embedded environments Strong experience with: Device driver development Board bring-up and hardware interfacing Debugging tools (oscilloscopes, logic analyzers, JTAG, etc.) Excellent understanding of: Memory management Multithreading Interrupt-driven systems Communication protocols (UART, SPI, I2C, CAN, Ethernet) Preferred Qualifications Experience with robotics, motion control, industrial automation, or safety-critical systems. Exposure to machine learning integration in embedded platforms. Experience in high-precision or high-speed automation workflows. Target Industries / Domains Ideal candidates may come from: Medical Devices Semiconductor Equipment Aerospace & Defense Industrial Control Systems Robotics & Automation Machinery & Mechatronics Appliances & Devices Embedded Consumer or Industrial Electronics
    $200k yearly 3d ago
  • Principal Software Engineer (C++)

    Rise Technical

    Data engineer job in Danbury, CT

    Principal Embedded Software Engineer Danbury, Connecticut (On-site) $150,000 - $200,000 + Bonus + Relocation Package + Healthcare Insurance + 401k + PTO Are you a Senior or Principal Software Engineer, with extensive experience in C++ and mechatronics, looking for a hands-on development role offering progression to management and the chance to be recognized as a technical expert? This is an exciting opportunity to sharpen your technical skillset while taking ownership of projects from concept to completion. You will work on the design and delivery of advanced control and automation software for sophisticated, high-precision industrial systems, pushing the boundaries of performance and innovation. In this role, you will lead the design, development, and deployment of real-time control software for high-precision industrial systems. You will collaborate with cross-functional teams to define requirements, architect solutions, and integrate hardware and software. You'll oversee system testing, resolve technical challenges, and drive critical projects to completion. This role combines deep C++ and mechatronics expertise with leadership to shape current products and future innovation. This is a great opportunity for a Principal Engineer to join a market-leading company who will support technical advancement and career progression whilst being recognized as a technical expert. The Role: *Lead design and deployment of real-time control software *Oversee system testing and troubleshoot technical issues *Own complex high-impact projects to meet critical deadlines The Person: *Vast experience with C++ and embedded systems *Experience with mechatronics *Experience with Real-time control systems *US Citizen or Green Card Holder
    $104k-139k yearly est. 17h ago
  • Senior Software Engineer (Full Stack), Data Analytics - Pharma

    Nylon Search-Recruitment and Executive Search

    Data engineer job in Ridgefield, CT

    $140-210K + Bonus (*At this time our client cannot sponsor or transfer work visas, including H1B, OPT, F1) Global Pharmaceutical and Healthcare conglomerate seeks dynamic and collaborative Lead Full Stack Software Engineer with 7+ years hands on front and back-end software developer experience designing, developing, testing, and delivering fully functioning Cloud-based, Data Analytics applications and backend services, to help lead application development from ideation and architecture to deployment and optimization, and integrate data science solutions like analytics and machine learning. Must have full stack Data Analytics experience, AWS Cloud, and hands-on experience in Data pipeline creation, ETL/ELT (AWS Glue, Databricks, DBT). This is a visible role that will deliver on key data transformation initiatives, and shape the future of data-driven decision making across the business. Requirements Hands-on experience in Data pipeline creation, ETL/ELT (AWS Glue, Databricks, DBT). Build and maintain robust backend systems using AWS Lambda, API Gateway, and other serverless technologies. Experience with frontend visualization tools like Tableau or PowerBI. Hands-on expertise in Agile Development, Test Automation, IT Security best practices, Continuous Development and deployment tools (Git, Jenkins, Docker, Kubernetes), and functional programming. Familiarity with IT security, container platforms, and software environments across QA, DEV, STAGING, and PROD phases. Demonstrated thought leadership in driving innovation and best practice adoption. Leadership & Collaboration: Strong communication, mentorship, and cross-functional teamwork. Responsibilities include: Application Development: Design, develop, and maintain both the front-end and back-end components of full-fledged applications using state-of-the-art programming languages and frameworks. Architecture Integration: Incorporate API-enabled backend technologies into application architecture, following established architecture frameworks and standards to deliver cohesive and maintainable solutions. Agile Collaboration: Work collaboratively with the team and product owner, following agile methodologies, to deliver secure, scalable, and fully functional applications in iterative cycles. Testing Strategy and Framework Design: Develop and implement comprehensive testing strategies and frameworks to ensure the delivery of high-quality, reliable software. For immediate consideration please submit resume in Word or PDF format ** AN EQUAL OPPORTUNITY EMPLOYER **
    $87k-113k yearly est. 3d ago
  • Full Stack Hedge Fund Software Engineer (Java/Angular/React)

    Focus Capital Markets

    Data engineer job in Stamford, CT

    Focus Capital Markets is supporting its Connecticut-based hedge fund by providing a unique opportunity for a talented senior software engineer to work across verticals within the organization. In this role, you will assist the business by developing front-end and back-end applications, building and scaling APIs and working with the business to define technical solutions for business requirements. You will work with both sides of the stack, with a Core Java back-end (and C#/.Net) and latest versions of Angular on the front-end. Although the organization is outside of the NYC area, it is just as lucrative and would afford someone career stability, longevity and growth opportunity within the organization. The parent company is best of breed in the hedge fund industry and there are opportunities to grow from within. You will work onsite Monday-Thursday. Requirements: 5+ years of software engineering experience leveraging Angular on the front-end and Core Java or C# on the back-end. Experience with React is relevant. Experience with SQL is preferred. Experience with REST APIs Bachelor's degree or higher in computer science, mathematics or related field. Must have excellent communication skills
    $70k-93k yearly est. 1d ago
  • Software Engineer

    JSG (Johnson Service Group, Inc.)

    Data engineer job in Hauppauge, NY

    JSG is hiring a Software Engineer in Hauppauge, NY. Must be a US Citizen and work onsite. Salary range: $127K-$137K - Bonus Our charter is to develop fuel measurement, management and inerting systems for commercial and defense airframers. The Software Engineering team works closely with the Systems and Electronic Hardware Engineering teams to develop, qualify and certify these technologies as products for our customers in aerospace and industrial markets. Develop embedded software using C and/or model-based tools such as SCADE Develop high-level and low-level software requirements Create requirements-based test cases and verification procedures Perform software integration testing on target hardware using both real and simulated inputs/outputs Analyze software requirements, design and code to assure compliance to standards and guidelines Perform traceability analysis from customer specification requirements to software code Participate in software certification audits, e.g. stages of involvement (SOI) BS in Software Engineering, Computer Engineering, Computer Science or related field 5+ years of experience performing software development, verification and/or integration Strong technical aptitude with analytical and problem-solving capabilities Excellent interpersonal and communication skills, both verbal and written Ability to work in a team environment, cultivate strong relationships and demonstrate initiative Experience with C programming language Experience with model-based software development using SCADE Experience developing embedded software control systems Experience planning and executing projects using Agile software development methodology Experience managing requirements using DOORS or DOORS Next Gen (DNG) Experience with digital signal processing or digital filter design Experience with ARM microprocessors Experience with serial communication protocols (e.g. CANbus, ARINC, RS-232) Familiarity with aerospace (e.g., DO-178, DO-330, DO-331) and/or industrial (e.g., IEC 61508) software certification requirements Familiarity with functional safety standards such as ISO 13849, IEC 61508, IEC 62061, ISO 26262 or ARP4761. We are seeking a Software Engineer to join our team and are looking for a candidate who has working experience designing, developing and verifying embedded software in aerospace and/or industrial applications. The candidate should be familiar with industry-standard software development and design assurance practices (such as DO-178, ISO 26262, EN 50128, IEC 61508 or IEC 62304) and their application across the entire software development lifecycle. Johnson Service Group (JSG) is an Equal Opportunity Employer. JSG provides equal employment opportunities to all applicants and employees without regard to race, color, religion, sex, age, sexual orientation, gender identity, national origin, disability, marital status, protected veteran status, or any other characteristic protected by law.
    $80k-107k yearly est. 2d ago
  • Principal Software Engineer

    Smith Arnold Partners (4.0 company rating)

    Data engineer job in Danbury, CT

    If you're an experienced software engineer who wants to build things that actually move - fast, accurately, and at scale - this is a role worth considering. This is a role for engineers who want to build the core - not just bolt things on. If you want to own complex systems, influence product direction, and work in an environment where your expertise is valued and your career can thrive - let's talk. Apply now or message us directly for a confidential conversation. We're a well-established global tech organization that builds the software and systems behind some of the most advanced, high-speed electromechanical equipment in the industry. Our technology helps run the operations behind critical communications and logistics around the world. Principal Software Engineer Location: Norwalk, CT Salary: $170,000 - $190,000 + Bonus Right now, we're looking for a Senior Principal Software Engineer to take a leading role in architecting and delivering the software that powers our next generation of machine control systems. This is a hands-on, senior-level position where you'll have real ownership, technical influence, and direct impact on the business. Why This Role Stands Out: High Visibility: You won't be buried in code no one sees - this role is front and center across engineering, product, and executive teams. Complex, Real-World Problems: This isn't app development. You'll be building control software for high-speed, precision-driven systems that integrate mechanical, electrical, and software components. Stability + Innovation: Join a company that's been around for decades - but continues to evolve. The tech is serious, the teams are strong, and the roadmaps are ambitious. Long-Term Career Growth: This isn't a stepping-stone job. It's a long-term opportunity to lead, grow, and shape the future of how our machines perform. What You'll Do: Design and develop real-time control software in C++ for large-scale, high-performance electromechanical systems Lead cross-functional efforts across software, hardware, systems, and manufacturing Guide architecture and technical strategy for multiple products and platforms Debug and optimize at the system level - from code to motion control to hardware integration Play a key role in roadmap planning and technical decision-making What You Bring: 10+ years of experience in object-oriented software design and full-lifecycle development Deep hands-on experience in C++ and real-time operating systems (such as RTX) Strong background in mechatronics, machine control, or similar system-level environments Ability to lead multidisciplinary teams and drive projects through ambiguity to delivery Excellent communication skills - both with engineers and stakeholders BS or MS in Computer Science or a related field Bonus Points: Experience with motion or servo motor control Exposure to .NET, Java, or ASP.NET Background in SQL Server, Oracle, or web-based service architecture Knowledge of industrial automation or paper-handling/mailing systems
    $115k-143k yearly est. 3d ago
  • Principal Data Scientist

    Maximus (4.3 company rating)

    Data engineer job in Bridgeport, CT

    Description & Requirements We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team. You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes. This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work). This position requires occasional travel to the DC area for client meetings. Essential Duties and Responsibilities: - Make deep dives into the data, pulling out objective insights for business leaders. - Initiate, craft, and lead advanced analyses of operational data. - Provide a strong voice for the importance of data-driven decision making. - Provide expertise to others in data wrangling and analysis. - Convert complex data into visually appealing presentations. - Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners. - Understand the importance of automation and look to implement and initiate automated solutions where appropriate. - Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects. - Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects. - Guide operational partners on product performance and solution improvement/maturity options. - Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization. - Learn new skills in advanced analytics/AI/ML tools, techniques, and languages. - Mentor more junior data analysts/data scientists as needed. - Apply a strategic approach to lead projects from start to finish. Job-Specific Minimum Requirements: - Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation. - Contribute to and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital. - Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning. - Contribute to the development of mathematically rigorous process improvement procedures. - Maintain current knowledge and evaluation of the AI technology landscape and emerging developments and their applicability for use in production/operational environments. Minimum Requirements - Bachelor's degree in related field required. - 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements: - 10+ years of relevant Software Development + AI / ML / DS experience. - Professional programming experience (e.g. Python, R, etc.). - Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML. - Experience with API programming. - Experience with Linux. - Experience with Statistics. - Experience with Classical Machine Learning. - Experience working as a contributor on a team. Preferred Skills and Qualifications: - Master's or BS in a quantitative discipline (e.g. Math, Physics, Engineering, Economics, Computer Science, etc.). - Experience developing machine learning or signal processing algorithms: - Ability to leverage mathematical principles to model new and novel behaviors. - Ability to leverage statistics to identify true signals from noise or clutter. - Experience working as an individual contributor in AI. - Use of state-of-the-art technology to solve operational problems in AI and Machine Learning. - Strong knowledge of data structures, common computing infrastructures/paradigms (standalone and cloud), and software engineering principles. - Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off-the-shelf solutions. - Ability to build reference implementations of operational AI & Advanced Analytics processing solutions. Background Investigations: - IRS MBI - Eligibility #techjobs #VeteransPage EEO Statement Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics. Pay Transparency Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances. Accommodations Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process - including accessing job postings, completing assessments, or participating in interviews - please contact People Operations at **************************. Minimum Salary $156,740.00 Maximum Salary $234,960.00
    $77k-112k yearly est. Easy Apply 3d ago
  • Data Scientist - Early Career (USA)

    Trexquant (4.0 company rating)

    Data engineer job in Stamford, CT

    Trexquant actively trades multiple asset classes, including global equities, futures, corporate bonds, options, and foreign exchange. Data is at the core of everything we do and we are looking for individuals who are passionate about working with data and are curious about how it is transformed into robust and profitable predictive models. As a data scientist, you will specialize in one of these asset classes, becoming the go-to expert for all data-related matters within that domain. This is a unique opportunity to lead the data efforts for a specific asset class and make a direct impact on the firm's trading strategies. Responsibilities * Collaborate closely with strategy and machine learning teams to identify predictive signals and help develop models using relevant data variables. * Evaluate and explore new datasets recommended by researchers and partners. * Develop deep familiarity with datasets in your asset class by engaging with data vendors and attending data conferences. * Stay up to date with advancements in data science and machine learning techniques relevant to quantitative investing. Requirements * Bachelor's, Master's, or Ph.D. in Mathematics, Statistics, Computer Science, or a related STEM field. * Experience in data science, with a focus on quantitative analysis and model development. * Strong quantitative and analytical skills, with a deep understanding of statistical modeling and data-driven problem solving. * Proficient in Python, with experience using relevant libraries for data analysis, machine learning, and numerical computing. Benefits * Competitive salary plus bonus based on individual and company performance. * Collaborative, casual, and friendly work environment. * PPO Health, dental and vision insurance premiums fully covered for you and your dependents. * Pre-tax commuter benefits. * Weekly company meals. Trexquant is an Equal Opportunity Employer
    $79k-115k yearly est. 8d ago
  • Junior Data Scientist

    Bexorg

    Data engineer job in New Haven, CT

    About Us Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment. The Job Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments - including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies. Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively. Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery. Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries. Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts. Qualifications and Skills: Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks. Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered. Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus. Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results. 
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers. Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks - whether it's mastering a new analysis method or digging into scientific literature - to push our research forward. Local to New Haven, CT preferred. No relocation offered for this position. Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
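    A minimal, assumption-based example of the scikit-learn workflow this posting mentions; synthetic data stands in for the high-dimensional molecular datasets, and nothing here reflects Bexorg's actual pipeline:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))   # 200 samples, 500 features: more features than samples
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Regularized linear model with standardized inputs, a common baseline for high-dimensional data.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000, C=0.1))
    model.fit(X_train, y_train)

    probs = model.predict_proba(X_test)[:, 1]
    print(f"held-out AUC: {roc_auc_score(y_test, probs):.3f}")
    ```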
    $75k-105k yearly est. 60d+ ago
  • Director, ERM - Actuary or Data Scientist

    Berkley (4.3 company rating)

    Data engineer job in Greenwich, CT

    Company Details "Our Company provides a state of predictability which allows brokers and agents to act with confidence." Founded in 1967, W. R. Berkley Corporation has grown from a small investment management firm into one of the largest commercial lines property and casualty insurers in the United States. Along the way, we've been listed on the New York Stock Exchange, become a Fortune 500 Company, joined the S&P 500, and seen our gross written premiums exceed $10 billion. Today the Berkley brand comprises more than 60 businesses worldwide and is divided into two segments: Insurance, and Reinsurance & Monoline Excess. Led by our Executive Chairman, founder and largest shareholder, William R. Berkley and our President and Chief Executive Officer, W. Robert Berkley, Jr., W.R. Berkley Corporation is well-positioned to respond to opportunities for future growth. The Company is an equal employment opportunity employer. Responsibilities *Please provide a one-page resume when applying. Enterprise Risk Management (ERM) Team Our key risk management aim is to maximize Berkley's return on capital over the long term for an acceptable level of risk. This requires regular interaction with senior management both in corporate and our business units. The ERM team comprises ERM actuaries and catastrophe modelers responsible for identification, quantification and reporting on insurance, investment, credit and operational risks. The ERM team is a corporate function at Berkley's headquarters in Greenwich, CT. The Role The successful candidate will collaborate with other ERM team members on a variety of projects with a focus on exposure management and catastrophe modeling for casualty (re)insurance. The candidate is expected to demonstrate expertise in data and analytics and be capable of presenting data-driven insights to senior executives. Key responsibilities include: Casualty Accumulation and Catastrophe Modeling • Lead the continuous enhancement of the casualty data ETL process • Analyze and visualize casualty accumulations by insureds, lines and industries to generate actionable insight for business leaders • Collaborate with data engineers to resolve complex data challenges and implement scalable solutions • Support the development of casualty catastrophe scenarios by researching historical events and emerging risks • Model complex casualty reinsurance protections Risk Process Automation and Group Reporting • Lead AI-driven initiatives aimed at automating key risk processes and projects • Contribute to Group-level ERM reports, including deliverables to senior executives, rating agencies and regulators Qualifications • Minimum of 5 years of experience in P&C (re)insurance, with a focus on casualty • Proficiency in R/Python and Excel • Strong verbal and written communication skills • Proven ability to manage multiple projects and meet deadlines in a dynamic environment Education Requirement • Minimum of Bachelor's degree required (preferably in STEM) • ACAS/FCAS is a plus Sponsorship Details Sponsorship Offered for this Role
    $74k-101k yearly est. Auto-Apply 60d+ ago
  • Lead Data Engineer

    Waste Harmonics LLC

    Data engineer job in Stamford, CT

    About Us Over the past 25 years, Waste Harmonics Keter has been at the forefront of the waste and recycling industry, delivering innovative, data-driven solutions. We help companies right-size their waste operations and get out of the waste business with industry-leading expertise, state-of-the-art waste technologies, and industry-leading customer service. Visit Waste Harmonics Keter for more information. Role Purpose The Data Engineer designs, builds, and optimizes scalable, reusable, and performance-oriented data infrastructure that supports enterprise-wide reporting and analytics needs. The role aligns with modern data platform architecture and engineering best practices to ensure long-term maintainability, flexibility, and data trust. Key Responsibilities Design, build, and maintain data pipelines to support reporting, analytics, and business intelligence. Develop and optimize scalable ETL/ELT processes using modern frameworks (e.g., dbt, Azure Data Factory, Fabric). Implement data models that support reporting and analytics needs (e.g., star schema, slowly changing dimensions). Ensure data quality, lineage, and observability for reliable business use. Collaborate with cross-functional teams to deliver integrated data solutions across the enterprise. Troubleshoot and optimize pipeline performance and SQL queries. Deliver documentation for technical workflows, transformations, and logic. Support governance by applying security, access control, and compliance standards in cloud environments. Core Competencies & Behaviors Technical Excellence: Strong understanding of data architecture, modelling, and transformation flows. Problem Solving: Able to troubleshoot complex performance issues and propose efficient solutions. Collaboration: Works effectively with analysts, engineers, and business teams to deliver end-to-end solutions. Continuous Improvement: Applies CI/CD, version control, and best practices to improve workflow efficiency. Detail Orientation: Ensures data accuracy, completeness, and consistency across systems. Adaptability: Thrives in a modern, cloud-based data environment and adapts to evolving technologies. Experience & Knowledge Experience building and maintaining cloud-based data pipelines (e.g., Azure, Snowflake, Fabric). Hands-on use of orchestration tools and ETL/ELT frameworks (dbt, ADF). Strong knowledge of data modelling principles and data architecture concepts. Experience with CI/CD pipelines, version control (e.g., Git), and modern data stack practices. Familiarity with monitoring and observability tools for pipelines. Understanding of security and access controls in cloud data platforms. Qualifications Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience). Proficiency in SQL and modern data stack tools (dbt, Snowflake, Azure Data Factory, Fabric). Strong technical documentation and communication skills. 
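    For context on the slowly changing dimension pattern named above, a minimal Type 2 update can be sketched in pandas; the dimension layout below is an illustrative assumption rather than this team's actual schema:

    ```python
    import pandas as pd

    def scd2_upsert(dim: pd.DataFrame, incoming: pd.DataFrame, as_of: str) -> pd.DataFrame:
        """Type 2 slowly-changing-dimension update (illustrative layout).

        dim columns:      customer_id, name, valid_from, valid_to, is_current
        incoming columns: customer_id, name
        """
        dim = dim.copy()
        current = dim[dim["is_current"]]
        merged = incoming.merge(current, on="customer_id", how="left", suffixes=("", "_old"))

        # Rows whose attributes changed, plus customers never seen before (name_old is NaN).
        changed = merged[merged["name"] != merged["name_old"]]

        # Close out the superseded current rows.
        to_close = dim["is_current"] & dim["customer_id"].isin(changed["customer_id"])
        dim.loc[to_close, ["valid_to", "is_current"]] = [as_of, False]

        # Append new current versions for changed and brand-new customers.
        new_rows = changed[["customer_id", "name"]].assign(
            valid_from=as_of, valid_to=pd.NA, is_current=True
        )
        return pd.concat([dim, new_rows], ignore_index=True)
    ```

    In production this logic would more likely live in a dbt snapshot or a warehouse MERGE statement than in pandas, but the mechanics are the same.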
Waste Harmonics Keter Comprehensive Benefits Package Competitive Compensation Annual Bonus Plan at Every Level Continuous Learning and Development Opportunities 401(k) Retirement Savings with Company Match; Immediate Vesting Medical & Dental Insurance Vision Insurance (Company Paid) Life Insurance (Company Paid) Short-term & Long-term Disability (Company paid) Employee Assistance Program Flexible Spending Accounts/Health Savings Accounts Paid Time Off (PTO), Including birthday off, community volunteer hours and a Friday off in the summer 7 Paid Holidays At Waste Harmonics Keter, we celebrate diversity and are committed to creating an inclusive environment for all employees. We welcome candidates from all backgrounds to apply.
    $84k-114k yearly est. Auto-Apply 60d+ ago
  • Tech Lead, Data & Inference Engineer

    Catalyst Labs

    Data engineer job in Greenwich, CT

    Our Client A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised twelve million dollars in funding and are transforming how business to business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first party and third party data into precision targetable audiences across platforms such as Meta, Google and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business to consumer activity, they are redefining how business brands scale demand generation and account based efforts. About Us Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations. We collaborate directly with Founders, CTOs, and Heads of AI in these areas who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems. Location: San Francisco Work type: Full Time, Compensation: above market base + bonus + equity Roles & Responsibilities Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use. Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third party application programming interfaces into trusted and low latency systems. Take full ownership of reliability, cost and service level objectives. This includes achieving ninety nine point nine percent uptime, maintaining minutes level latency and optimizing cost per terabyte. Conduct root cause analysis and provide long lasting solutions. Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring and quality assurance using large language models and retrieval augmented generation. Manage version control, caching and evaluation loops. Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes and usage based decision making. Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade offs and reversibility while making practical decisions on whether to build internally or buy externally. Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality and support for both real time and batch oriented use cases. Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization. Qualifications Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics. Excellent written and verbal communication; proactive and collaborative mindset. Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: the ability to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes. Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly. Core Experience 6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design. Expert SQL (query optimization on large datasets) and Python skills. Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect). Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability. Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure). Bonus: Strong Node.js skills for faster onboarding and system integration. Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
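    The data contracts mentioned in this listing are often expressed directly in code; a minimal, hypothetical contract for one enriched record, sketched with pydantic (the client's actual tooling and field names are not specified in the posting), might look like:

    ```python
    from datetime import date
    from pydantic import BaseModel, Field, ValidationError

    class EnrichedAccount(BaseModel):
        """Contract for one record emitted by an enrichment pipeline (illustrative fields)."""
        account_id: str
        domain: str
        match_score: float = Field(ge=0.0, le=1.0)  # model-produced confidence, bounded
        enriched_on: date

    def validate_batch(records: list[dict]) -> tuple[list[EnrichedAccount], list[str]]:
        """Split a batch into records that satisfy the contract and human-readable rejects."""
        accepted, rejected = [], []
        for i, rec in enumerate(records):
            try:
                accepted.append(EnrichedAccount(**rec))
            except ValidationError as exc:
                rejected.append(f"record {i}: {exc.errors()[0]['msg']}")
        return accepted, rejected

    ok, bad = validate_batch([
        {"account_id": "a1", "domain": "example.com", "match_score": 0.92, "enriched_on": "2024-06-30"},
        {"account_id": "a2", "domain": "example.com", "match_score": 1.7, "enriched_on": "2024-06-30"},
    ])
    print(len(ok), len(bad))  # 1 valid record, 1 contract violation
    ```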
    $84k-114k yearly est. 25d ago
  • Lead Data Engineer

    Keter Environmental Services LLC (4.0 company rating)

    Data engineer job in Stamford, CT

    About Us Over the past 25 years, Waste Harmonics Keter has been at the forefront of the waste and recycling industry, delivering innovative, data-driven solutions. We help companies right-size their waste operations and get out of the waste business with industry-leading expertise, state-of-the-art waste technologies, and industry-leading customer service. Visit Waste Harmonics Keter for more information. We are excited to grow our Data & Insights Team and meet talented professionals who want to make an impact. At this time, we are unable to offer visa sponsorship for the Lead Data Engineer role. Candidates must have authorization to work in the United States now and in the future without the need for sponsorship. If this aligns with your current eligibility, we warmly encourage you to apply, we would love to learn more about you and the value you can bring to Waste Harmonics Keter. Role Purpose The Data Engineer designs, builds, and optimizes scalable, reusable, and performance-oriented data infrastructure that supports enterprise-wide reporting and analytics needs. The role aligns with modern data platform architecture and engineering best practices to ensure long-term maintainability, flexibility, and data trust. Key Responsibilities Design, build, and maintain data pipelines to support reporting, analytics, and business intelligence. Develop and optimize scalable ETL/ELT processes using modern frameworks (e.g., dbt, Azure Data Factory, Fabric). Implement data models that support reporting and analytics needs (e.g., star schema, slowly changing dimensions). Ensure data quality, lineage, and observability for reliable business use. Collaborate with cross-functional teams to deliver integrated data solutions across the enterprise. Troubleshoot and optimize pipeline performance and SQL queries. Deliver documentation for technical workflows, transformations, and logic. Support governance by applying security, access control, and compliance standards in cloud environments. Core Competencies & Behaviors Technical Excellence: Strong understanding of data architecture, modelling, and transformation flows. Problem Solving: Able to troubleshoot complex performance issues and propose efficient solutions. Collaboration: Works effectively with analysts, engineers, and business teams to deliver end-to-end solutions. Continuous Improvement: Applies CI/CD, version control, and best practices to improve workflow efficiency. Detail Orientation: Ensures data accuracy, completeness, and consistency across systems. Adaptability: Thrives in a modern, cloud-based data environment and adapts to evolving technologies. Experience & Knowledge Experience building and maintaining cloud-based data pipelines (e.g., Azure, Snowflake, Fabric). Hands-on use of orchestration tools and ETL/ELT frameworks (dbt, ADF). Strong knowledge of data modelling principles and data architecture concepts. Experience with CI/CD pipelines, version control (e.g., Git), and modern data stack practices. Familiarity with monitoring and observability tools for pipelines. Understanding of security and access controls in cloud data platforms. Qualifications Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience). Proficiency in SQL and modern data stack tools (dbt, Snowflake, Azure Data Factory, Fabric). Strong technical documentation and communication skills. 
Waste Harmonics Keter Comprehensive Benefits Package Competitive Compensation Annual Bonus Plan at Every Level Continuous Learning and Development Opportunities 401(k) Retirement Savings with Company Match; Immediate Vesting Medical & Dental Insurance Vision Insurance (Company Paid) Life Insurance (Company Paid) Short-term & Long-term Disability (Company paid) Employee Assistance Program Flexible Spending Accounts/Health Savings Accounts Paid Time Off (PTO), Including birthday off, community volunteer hours and a Friday off in the summer 7 Paid Holidays At Waste Harmonics Keter, we celebrate diversity and are committed to creating an inclusive environment for all employees. We welcome candidates from all backgrounds to apply.
    $96k-137k yearly est. Auto-Apply 60d+ ago
  • Principal Data Engineer for AI Platform

    Mastercard (4.7 company rating)

    Data engineer job in Harrison, NY

**Our Purpose**

_Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential._

**Title and Summary**

Principal Data Engineer for AI Platform

About Mastercard

Mastercard is a global technology company in the payments industry, connecting billions of consumers, financial institutions, merchants, governments, and businesses worldwide. We are driving the future of commerce by enabling secure, simple, and smart transactions. Artificial Intelligence is at the core of our strategy to make Mastercard stronger and commerce safer, smarter, and more personal. At Mastercard, we're building next-generation AI-powered platforms to drive innovation and impact.

Role:
- Drive modernization from legacy and on-prem systems to modern, cloud-native, and hybrid data platforms.
- Architect and lead the development of a Multi-Agent ETL Platform for batch and event streaming, integrating AI agents to autonomously manage ETL tasks such as data discovery, schema mapping, and error resolution.
- Define and implement data ingestion, transformation, and delivery pipelines using scalable frameworks (e.g., Apache Airflow, NiFi, dbt, Spark, Kafka, or Dagster).
- Leverage LLMs and agent frameworks (e.g., LangChain, CrewAI, AutoGen) to automate pipeline management and monitoring.
- Ensure robust data governance, cataloging, versioning, and lineage tracking across the ETL platform.
- Define project roadmaps, KPIs, and performance metrics for platform efficiency and data reliability.
- Establish and enforce best practices in data quality, CI/CD for data pipelines, and observability.
- Collaborate closely with cross-functional teams (Data Science, Analytics, and Application Development) to understand requirements and deliver efficient data ingestion and processing workflows.
- Establish and enforce best practices, automation standards, and monitoring frameworks to ensure the platform's reliability, scalability, and security.
- Build relationships and communicate effectively with internal and external stakeholders, including senior executives, to influence data-driven strategies and decisions.
- Continuously engage and improve teams' performance by conducting recurring meetings, knowing your people, managing career development, and understanding who is at risk.
- Oversee deployment, monitoring, and scaling of ETL and agent workloads across multi-cloud environments.
- Continuously improve platform performance, cost efficiency, and automation maturity.

All About You:
- Hands-on experience in data engineering, data platform strategy, or a related technical domain.
- Proven experience leading global data engineering or platform engineering teams.
- Proven experience building and modernizing distributed data platforms using technologies such as Apache Spark, Kafka, Flink, NiFi, and Cloudera/Hadoop.
- Strong experience with one or more data pipeline tools (NiFi, Airflow, dbt, Spark, Kafka, Dagster, etc.) and distributed data processing at scale.
- Experience building and managing AI-augmented or agent-driven systems is a plus.
- Proficiency in Python, SQL, and data ecosystems (Oracle, AWS Glue, Azure Data Factory, BigQuery, Snowflake, etc.).
- Deep understanding of data modeling, metadata management, and data governance principles.
- Proven success in leading technical teams and managing complex, cross-functional projects.
- Passion for staying current in a fast-paced field, with a proven ability to lead innovation in a scaled organization.
- Excellent communication skills, with the ability to tailor technical concepts to executive, operational, and technical audiences.
- Expertise and ability to lead technical decision-making, weighing scalability, cost efficiency, stakeholder priorities, and time to market.
- Proven track record of leading high-performing teams, including coaching director-level reports and experienced individual contributors.
- Advanced degree in Data Science, Computer Science, Information Technology, Business Administration, or a related field. Equivalent experience will also be considered.

Why Join Us?

At Mastercard, you'll help shape the future of AI in global commerce: solving complex challenges at scale, driving financial inclusion, and reinforcing the trust and security that define our brand. You'll work with world-class talent and cutting-edge technologies, and you'll make a lasting impact.

#AI1

Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.

**Corporate Security Responsibility**

All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
+ Abide by Mastercard's security policies and practices;
+ Ensure the confidentiality and integrity of the information being accessed;
+ Report any suspected information security violation or breach; and
+ Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

In line with Mastercard's total compensation philosophy and assuming the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full-time (and certain part-time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, and life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time; 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401(k) with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more. Mastercard benefits for interns generally include: 56 hours of Paid Sick and Safe Time; jury duty leave; and on-site fitness facilities in some locations.

**Pay Ranges**

Arlington, Virginia: $170,000 - $273,000 USD
Purchase, New York: $170,000 - $273,000 USD
San Francisco, California: $178,000 - $284,000 USD
    $84k-114k yearly est. 23d ago
  • Application Support Engineer

    The Phoenix Group 4.8company rating

    Data engineer job in Fairfield, CT

About Us

We are a global investment firm focused on combining financial theory with practical application. Our goal is to deliver long-term results by cutting through market noise, identifying the most impactful factors, and developing ideas that stand up to rigorous testing. Over the years, we have built a reputation as innovators in portfolio management and alternative investment strategies. Our team values intellectual curiosity, honesty, and a commitment to understanding what drives financial markets. Collaboration, transparency, and openness to new ideas are central to our culture, fostering innovation and continuous improvement.

Your Role

We are seeking an Application Support Engineer to operate at the intersection of the technical systems and business processes that power our investment operations. This individual contributor role involves supporting a complex technical environment, resolving production issues, and contributing to projects that enhance systems and processes. You will gain hands-on experience with cloud-deployed portfolio management and research systems and work closely with both business and technical teams. This role is ideal for someone passionate about technology and systems reliability who is looking to grow into a systems reliability or engineering-focused position.

Responsibilities
Develop and maintain expertise in the organization's applications to support internal users.
Manage user expectations and ensure satisfaction with our systems and tools.
Advocate for users with project management and development teams.
Work closely with QA to report and track issues identified by users.
Ensure proper escalation for unresolved issues to maintain user satisfaction.
Participate in production support rotations, including off-hours coverage.
Identify gaps in support processes and create documentation or workflows in collaboration with development and business teams.
Diagnose and resolve system issues, including debugging code, analyzing logs, and investigating performance or resource problems.
Collaborate across teams to resolve complex technical problems quickly and efficiently.
Maintain documentation of system behavior, root causes, and process improvements.
Contribute to strategic initiatives that enhance system reliability and operational efficiency.

Qualifications
Bachelor's degree in Engineering, Computer Science, or equivalent experience.
2+ years of experience supporting complex software systems, collaborating with business users and technical teams.
Hands-on technical skills including SQL and programming/debugging (Python preferred).
Strong written and verbal communication skills.
Ability to work independently and within small teams.
Eagerness to learn new technologies and automate manual tasks to improve system reliability.
Calm under pressure; demonstrates responsibility, maturity, and trustworthiness.

Compensation & Benefits
Salary range: $115,000 - $135,000 (may vary based on experience, location, or organizational needs).
Eligible for annual discretionary bonus.
Comprehensive benefits package including paid time off, medical/dental/vision coverage, 401(k), and other benefits as applicable.

The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
    $115k-135k yearly 3d ago

Learn more about data engineer jobs

How much does a data engineer earn in Bridgeport, CT?

The average data engineer in Bridgeport, CT earns between $73,000 and $131,000 annually, compared with the national data engineer salary range of $80,000 to $149,000.

Average data engineer salary in Bridgeport, CT

$98,000

What are the biggest employers of Data Engineers in Bridgeport, CT?

The biggest employers of Data Engineers in Bridgeport, CT are:
  1. Waters
  2. The Phoenix Group