
Data engineer jobs in Stratford, CT

- 352 jobs
  • Data Engineer

    The Phoenix Group (4.8 company rating)

    Data engineer job in Fairfield, CT

    Data Engineer - Vice President Greenwich, CT About the Firm We are a global investment firm focused on applying financial theory to practical investment decisions. Our goal is to deliver long-term results by analyzing market data and identifying what truly matters. Technology is central to our approach, enabling insights across both traditional and alternative strategies. The Team A new Data Engineering team is being established to work with large-scale datasets across the organization. This team partners directly with researchers and business teams to build and maintain infrastructure for ingesting, validating, and provisioning large volumes of structured and unstructured data. Your Role As a Data Engineer, you will help design and build an enterprise data platform used by research teams to manage and analyze large datasets. You will also create tools to validate data, support back-testing, and extract actionable insights. You will work closely with researchers, portfolio managers, and other stakeholders to implement business requirements for new and ongoing projects. The role involves working with big data technologies and cloud platforms to create scalable, extensible solutions for data-intensive applications. What You'll Bring 6+ years of relevant experience in data engineering or software development Bachelor's, Master's, or PhD in Computer Science, Engineering, or related field Strong coding, debugging, and analytical skills Experience working directly with business stakeholders to design and implement solutions Knowledge of distributed data systems and large-scale datasets Familiarity with big data frameworks such as Spark or Hadoop Interest in quantitative research (no prior finance or trading experience required) Exposure to cloud platforms is a plus Experience with Python, NumPy, pandas, or similar data analysis tools is a plus Familiarity with AI/ML frameworks is a plus Who You Are Thoughtful, collaborative, and comfortable in a fast-paced environment Hard-working, intellectually curious, and eager to learn Committed to transparency, integrity, and innovation Motivated by leveraging technology to solve complex problems and create impact Compensation & Benefits Salary range: $190,000 - $260,000 (subject to experience, skills, and location) Eligible for annual discretionary bonus Comprehensive benefits including paid time off, medical/dental/vision insurance, 401(k), and other applicable benefits We are an Equal Opportunity Employer. EEO/VET/DISABILITY The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
    $190k-260k yearly 3d ago
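The data-validation tooling this listing describes can be illustrated with a few lines of pandas. This is a minimal, hypothetical sketch; the column names (`timestamp`, `close`) and the 50% move threshold are invented for the example, not taken from the posting.

```python
import pandas as pd

def flag_suspect_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows of a price series that fail basic sanity checks."""
    issues = pd.DataFrame(index=df.index)
    issues["missing_close"] = df["close"].isna()
    issues["non_positive_close"] = df["close"] <= 0
    issues["duplicate_timestamp"] = df["timestamp"].duplicated()
    # Flag day-over-day moves larger than 50% for manual review.
    issues["extreme_move"] = df["close"].pct_change().abs() > 0.5
    return df[issues.any(axis=1)]

if __name__ == "__main__":
    sample = pd.DataFrame({
        "timestamp": pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-03", "2024-01-04"]),
        "close": [100.0, 101.5, 101.5, -1.0],
    })
    print(flag_suspect_rows(sample))  # flags the duplicate timestamp and the negative price
```

In practice such checks would run inside the ingestion pipeline rather than ad hoc, but the shape of the work is the same: encode data expectations as code and surface the rows that violate them.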
  • C++ Market Data Engineer

    TBG | The Bachrach Group

    Data engineer job in Stamford, CT

    We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making. What You'll Do: Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design Ensure reliable data delivery with failover, gap recovery, and replay mechanisms Collaborate with researchers and engineers to align data formats for trading and simulation Instrument and test systems for continuous performance improvements What We're Looking For: 3+ years of C++ development experience (low-latency, high-throughput systems) Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH) Strong knowledge of concurrency, memory models, and compiler optimizations Python scripting skills for testing and automation Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
    $84k-114k yearly est. 1d ago
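Since the listing mentions Python scripting for testing and automation alongside the C++ feed-handler work, here is a hedged, minimal Python check for sequence-number gaps in a captured feed. The message format (plain integer sequence numbers) is an assumption made for the example.

```python
from typing import Iterable, List, Tuple

def find_gaps(seq_numbers: Iterable[int]) -> List[Tuple[int, int]]:
    """Return (first_missing, next_received) pairs wherever messages were skipped."""
    gaps = []
    prev = None
    for seq in seq_numbers:
        if prev is not None and seq != prev + 1:
            gaps.append((prev + 1, seq))
        prev = seq
    return gaps

if __name__ == "__main__":
    # A replayed capture where messages 1004-1006 were dropped.
    print(find_gaps([1001, 1002, 1003, 1007, 1008]))  # -> [(1004, 1007)]
```

Gap detection like this is the testing counterpart to the gap-recovery and replay mechanisms the role itself would build in C++.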
  • Lead Data Platform Architect

    Sibitalent Corp

    Data engineer job in Melville, NY

    We are growing our data platform team and are seeking an experienced Data Platform Architect with deep cloud data platform expertise to drive the overall architecture and design of a modern, scalable data platform. This role is responsible for defining and advancing data platform architecture to support a data-driven organization, ensuring solutions are efficient, reusable, and aligned with long-term business and technology objectives. This position carries architectural and strategic responsibility for the design and implementation of the enterprise data platform. The role will support multiple initiatives across the data ecosystem, including data lake design, data engineering, analytics, data architecture, AI/ML, streaming and batch processing, metadata management, and service integrations. DUTIES AND RESPONSIBILITIES: • Lead technical assessments of the current data platform and define the architectural roadmap forward • Collaborate on strategic direction and prioritize data platform architecture to support business and technical objectives • Partner with enterprise and solution architects to ensure consistent standards and best practices across the data platform • Architect and design end-to-end data platform solutions on cloud infrastructure, emphasizing scalability, performance, and reusable design patterns • Design cloud-first, cost-effective data platform architectures • Architect batch, real-time, and unstructured ingestion frameworks with scale and reliability • Enable semantic interoperability of data across multiple sources and structures • Implement automation for lineage, orchestration, and data flows to streamline platform operations • Design and maintain metadata management frameworks to support current and future tools • Continually enhance automation and CI/CD frameworks across the data platform • Architect solutions with security-by-design principles • Monitor industry trends and emerging technologies to continuously improve the data platform architecture • Provide technical leadership and guidance to data platform engineers executing against the roadmap • Own and maintain data platform architecture documentation DUTIES AND RESPONSIBILITIES (CONTINUED): • Support a wide range of data platform use cases, including data engineering, business intelligence, real-time analytics, visualization, AI/ML, and service integrations • Collaborate with third-party vendors and partners on data platform integrations EDUCATION AND EXPERIENCE: • Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field required • Minimum of 10 years of experience designing high-availability data platform architectures • Minimum of 8 years of experience implementing modern cloud-based data platforms • Strong experience with Google Cloud Platform services, including BigQuery, Google Cloud Storage, and Cloud Composer • Minimum of 5 years of experience designing data lake architectures • Deep expertise across modern data platform, database, and streaming technologies (e.g., Kafka, Spark) • Experience with source control and CI/CD pipelines • Experience operationalizing AI/ML models preferred • Experience working with unstructured data preferred • Experience operating within Agile delivery models • Minimum of 3 years of experience with infrastructure as code (Terraform preferred) REQUIRED TECHNICAL EXPERIENCE (ADDED): • Hands-on experience designing and operating data platforms on Google Cloud Platform (GCP) • Strong experience with Databricks for large-scale data
processing and analytics • Experience integrating data from IoT devices and machine monitoring systems is highly preferred • Familiarity with industrial, sensor-based, or operational technology (OT) data pipelines is a plus SKILLS: • Strong cross-functional communication and collaboration skills • Excellent organizational, time management, verbal, and written communication skills • Expertise across modern data platform technologies and best practices (BigQuery, Kafka, Hadoop, Spark) • Strong understanding of semantic layers and data interoperability (e.g., LookML, dbt) • Proven ability to design reusable, automated data platform patterns • Demonstrated leadership in distributed or remote environments • Track record of delivering data platform solutions at enterprise scale • Ability to write testable code and promote solutions into production environments • Experience with Google Cloud Composer or Apache Airflow preferred • Ability to quickly understand complex business systems and data flows • Strong analytical judgment and decision-making capabilities OTHER REQUIREMENTS: • Ability to travel up to 10 percent as required • This role may require access to regulated or controlled information
    $92k-124k yearly est. 16h ago
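For readers unfamiliar with the orchestration tools named in the listing above (Cloud Composer / Apache Airflow), a minimal DAG might look like the sketch below. The bucket, dataset, and table names are placeholders invented for illustration, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="load_daily_sensor_extract",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Land a daily Parquet drop from GCS into a BigQuery raw table.
    load_to_bq = GCSToBigQueryOperator(
        task_id="gcs_to_bq",
        bucket="example-landing-bucket",
        source_objects=["sensors/{{ ds }}/*.parquet"],
        source_format="PARQUET",
        destination_project_dataset_table="analytics.raw_sensor_readings",
        write_disposition="WRITE_TRUNCATE",
    )
```

A real ingestion framework of the kind the role owns would add retries, alerting, lineage capture, and data-quality gates around tasks like this one.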
  • Senior Power BI & Systems Integration Developer - 5498

    Benchmark It-Technology Talent

    Data engineer job in Shelton, CT

    Senior Power BI & Systems Integration Developer Type: Contract-to-Hire or Full-time Our client, a leading precision manufacturing company in Connecticut, is seeking a Senior Power BI & Systems Integration Developer to join their IT team. This strategic role is central to modernizing ERP and MES systems, leading critical integration initiatives, and enhancing data-driven decision-making across the organization. The position offers the opportunity to influence IT strategy, optimize operational workflows, and deliver insights that directly impact business outcomes in a fast-paced, high-visibility environment. Key Responsibilities: Lead the design, development, and optimization of Power BI dashboards and advanced data models to provide actionable insights for senior management and operational teams. Drive ERP and MES integration projects, ensuring accurate real-time visibility into production, Work-In-Progress (WIP), and operational KPIs. Collaborate closely with business and IT leadership to define requirements, architect solutions, and implement high-impact initiatives. Required Skills and Experience: Senior-level expertise: 10+ years of experience in Power BI, SQL, and data integration technologies (APIs, .NET, Python, etc.). Proven experience with ERP systems (Infor LN preferred) and MES platforms (Aegis FactoryLogix preferred). Strong ability to translate complex business needs into technical solutions. Software engineering experience (e.g., .NET) is a strong plus. Exceptional communication skills, with experience presenting insights to executive leadership. On-site presence required; local candidates strongly preferred. This is a full-time position that may start as a contract-to-hire as well….great opportunity to make an immediate impact and grow with a company investing in its next phase of digital transformation. Must be a U.S. Citizen or Green Card holder (federal contract requirement) By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Benchmark IT, LLC and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here: ************************************
    $88k-117k yearly est. 2d ago
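As a rough illustration of the API-based integration work the listing above describes, the sketch below pulls work-in-progress records from a hypothetical MES REST endpoint and writes a flat extract that a Power BI dataset could refresh from. The URL, token handling, and field names are all assumptions for the example.

```python
import pandas as pd
import requests

MES_URL = "https://mes.example.com/api/wip"   # placeholder endpoint
API_TOKEN = "set-me-from-a-secret-store"      # never hard-code real credentials

def extract_wip() -> pd.DataFrame:
    resp = requests.get(
        MES_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    df = pd.json_normalize(resp.json())
    # Keep only the fields the dashboard needs; names are illustrative.
    return df[["work_order", "operation", "qty_started", "qty_completed", "updated_at"]]

if __name__ == "__main__":
    extract_wip().to_csv("wip_extract.csv", index=False)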
  • Principal Software Engineer (Embedded Systems)

    Fustis LLC

    Data engineer job in Norwalk, CT

    Position Type: Full-Time / Direct Hire (W2) Salary: $200K+ base + 13% bonus Experience Required: 10-20 years Domain: Industrial Automation & Robotics Work Authorization: US Citizen or Green Card Interview Process: 2× Teams Interviews → Onsite Interview (expenses paid) “How Many Years With” (Candidate Screening Section) C: C++: RTOS: Embedded Software Development: Device Driver Software Development: Job Description We are seeking a Principal Software Engineer - Embedded Systems to join a high-performance engineering team building next-generation industrial automation and robotics platforms. This role blends hardware, firmware, real-time systems, machine learning components, and high-performance automation into one of the most technically challenging environments. The ideal candidate is passionate about writing software that interacts directly with real machines, drives motion control, solves physical-world problems, and contributes to global-scale automation systems. This role is hands-on, impact-driven, and perfect for someone who wants to see their code operating in motion - not just in a console. Key Responsibilities Design, implement, and optimize embedded software in C/C++ for real-time control systems. Develop and maintain real-time operating system (RTOS)-based applications. Implement low-latency firmware, control loops, and motion-control algorithms. Work with hardware teams to integrate sensors, actuators, and automation components. Architect scalable, high-performance embedded platforms for industrial robotics. Develop device drivers, board support packages (BSPs), and hardware abstraction layers. Own full lifecycle development: requirements → design → implementation → testing → deployment. Develop machine-learning-based modules for system categorization and algorithm organization (experience helpful, not required). Build real-time monitoring tools, diagnostics interfaces, and system health analytics. Troubleshoot complex hardware/software interactions in a real-time environment. Work closely with electrical, mechanical, and controls engineers. Participate in code reviews, architectural discussions, and continuous improvement. Required Qualifications Bachelor's degree in Computer Engineering, Electrical Engineering, Computer Science, or related field (Master's a plus). 10-20 years professional experience in: C and C++ programming Embedded Software Development RTOS-based design (e.g., FreeRTOS, QNX, VxWorks, ThreadX, etc.) Control systems and real-time embedded environments Strong experience with: Device driver development Board bring-up and hardware interfacing Debugging tools (oscilloscopes, logic analyzers, JTAG, etc.) Excellent understanding of: Memory management Multithreading Interrupt-driven systems Communication protocols (UART, SPI, I2C, CAN, Ethernet) Preferred Qualifications Experience with robotics, motion control, industrial automation, or safety-critical systems. Exposure to machine learning integration in embedded platforms. Experience in high-precision or high-speed automation workflows. Target Industries / Domains Ideal candidates may come from: Medical Devices Semiconductor Equipment Aerospace & Defense Industrial Control Systems Robotics & Automation Machinery & Mechatronics Appliances & Devices Embedded Consumer or Industrial Electronics
    $200k yearly 3d ago
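The description above mentions control loops and motion-control algorithms; the sketch below shows a discrete PID loop in Python purely for illustration. The production work would be in C/C++ on an RTOS, and the gains, timestep, and toy plant here are arbitrary example values.

```python
def pid_step(setpoint, measured, state, kp=1.2, ki=0.3, kd=0.05, dt=0.001):
    """One iteration of a discrete PID loop; `state` carries (integral, prev_error)."""
    integral, prev_error = state
    error = setpoint - measured
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

if __name__ == "__main__":
    # Drive a toy first-order plant toward a position setpoint of 1.0.
    position, state = 0.0, (0.0, 0.0)
    for _ in range(10_000):
        command, state = pid_step(1.0, position, state)
        position += command * 0.001   # plant: position integrates the command
    print(f"final position ~ {position:.3f}")
```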
  • Senior Software Engineer (Full Stack), Data Analytics - Pharma

    Nylon Search-Recruitment and Executive Search

    Data engineer job in Ridgefield, CT

    $140-210K + Bonus (*At this time our client cannot sponsor or transfer work visas, including H1B, OPT, F1) Global Pharmaceutical and Healthcare conglomerate seeks dynamic and collaborative Lead Full Stack Software Engineer with 7+ years hands on front and back-end software developer experience designing, developing, testing, and delivering fully functioning Cloud-based, Data Analytics applications and backend services, to help lead application development from ideation and architecture to deployment and optimization, and integrate data science solutions like analytics and machine learning. Must have full stack Data Analytics experience, AWS Cloud, and hands-on experience in Data pipeline creation, ETL/ELT (AWS Glue, Databricks, DBT). This is a visible role that will deliver on key data transformation initiatives, and shape the future of data-driven decision making across the business. Requirements Hands-on experience in Data pipeline creation, ETL/ELT (AWS Glue, Databricks, DBT). Build and maintain robust backend systems using AWS Lambda, API Gateway, and other serverless technologies. Experience with frontend visualization tools like Tableau or PowerBI. Hands-on expertise in Agile Development, Test Automation, IT Security best practices, Continuous Development and deployment tools (Git, Jenkins, Docker, Kubernetes), and functional programming. Familiarity with IT security, container platforms, and software environments across QA, DEV, STAGING, and PROD phases. Demonstrated thought leadership in driving innovation and best practice adoption. Leadership & Collaboration: Strong communication, mentorship, and cross-functional teamwork. Responsibilities include: Application Development: Design, develop, and maintain both the front-end and back-end components of full-fledged applications using state-of-the-art programming languages and frameworks. Architecture Integration: Incorporate API-enabled backend technologies into application architecture, following established architecture frameworks and standards to deliver cohesive and maintainable solutions. Agile Collaboration: Work collaboratively with the team and product owner, following agile methodologies, to deliver secure, scalable, and fully functional applications in iterative cycles. Testing Strategy and Framework Design: Develop and implement comprehensive testing strategies and frameworks to ensure the delivery of high-quality, reliable software. For immediate consideration please submit resume in Word or PDF format ** AN EQUAL OPPORTUNITY EMPLOYER **
    $87k-113k yearly est. 4d ago
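To make the serverless piece of this stack concrete, here is a minimal sketch of an AWS Lambda handler behind an API Gateway proxy integration. The route, path parameter, and payload are hypothetical; a real handler would query Redshift, Athena, or another store rather than return hard-coded numbers.

```python
import json

def lambda_handler(event, context):
    """Return a small analytics payload for a hypothetical GET /metrics/{study_id} route."""
    study_id = (event.get("pathParameters") or {}).get("study_id", "unknown")
    payload = {"study_id": study_id, "enrolled": 128, "sites_active": 12}  # placeholder data
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }
```

The proxy-integration response shape (statusCode, headers, body) is what API Gateway expects back from the function.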
  • Principal Software Engineer

    Smith Arnold Partners (4.0 company rating)

    Data engineer job in Danbury, CT

    If you're an experienced software engineer who wants to build things that actually move - fast, accurately, and at scale - this is a role worth considering. This is a role for engineers who want to build the core - not just bolt things on. If you want to own complex systems, influence product direction, and work in an environment where your expertise is valued and your career can thrive - let's talk. Apply now or message us directly for a confidential conversation. We're a well-established global tech organization that builds the software and systems behind some of the most advanced, high-speed electromechanical equipment in the industry. Our technology helps run the operations behind critical communications and logistics around the world. Principal Software Engineer Location: Norwalk, CT Salary: $170,000 - $190,000 + Bonus Right now, we're looking for a Senior Principal Software Engineer to take a leading role in architecting and delivering the software that powers our next generation of machine control systems. This is a hands-on, senior-level position where you'll have real ownership, technical influence, and direct impact on the business. Why This Role Stands Out: High Visibility: You won't be buried in code no one sees - this role is front and center across engineering, product, and executive teams. Complex, Real-World Problems: This isn't app development. You'll be building control software for high-speed, precision-driven systems that integrate mechanical, electrical, and software components. Stability + Innovation: Join a company that's been around for decades - but continues to evolve. The tech is serious, the teams are strong, and the roadmaps are ambitious. Long-Term Career Growth: This isn't a stepping-stone job. It's a long-term opportunity to lead, grow, and shape the future of how our machines perform. What You'll Do: Design and develop real-time control software in C++ for large-scale, high-performance electromechanical systems Lead cross-functional efforts across software, hardware, systems, and manufacturing Guide architecture and technical strategy for multiple products and platforms Debug and optimize at the system level - from code to motion control to hardware integration Play a key role in roadmap planning and technical decision-making What You Bring: 10+ years of experience in object-oriented software design and full-lifecycle development Deep hands-on experience in C++ and real-time operating systems (such as RTX) Strong background in mechatronics, machine control, or similar system-level environments Ability to lead multidisciplinary teams and drive projects through ambiguity to delivery Excellent communication skills - both with engineers and stakeholders BS or MS in Computer Science or a related field Bonus Points: Experience with motion or servo motor control Exposure to .NET, Java, or ASP.NET Background in SQL Server, Oracle, or web-based service architecture Knowledge of industrial automation or paper-handling/mailing systems
    $115k-143k yearly est. 4d ago
  • Sr. WAN Developer

    Nam Info Inc. (4.3 company rating)

    Data engineer job in Holtsville, NY

    Duties & Responsibilities Strong Expertise in Java and Android Solid Experience with mobile LTE (4G)/5G technologies, especially telephony, HAL, and QCRIL Solid understanding of RIL and Telephony framework Proficiency with debugging in embedded software systems. Familiarity with JTAG. Exposure to one or more telecom networks and technologies like GSM, 3G, LTE, IMS, 5G RAN architecture. Working experience on eSIM. Good working experience on private 5G. Exposure to and working experience with Qualcomm chipsets and tools (QXDM, etc.). Exposure to features and bug fixing related to NA carriers (AT&T, Verizon, and T-Mobile). Mandatory Technical Skills Bachelor's degree and 8+ years' experience Strong Expertise in Java and Android Solid Experience with mobile LTE (4G)/5G technologies, especially telephony, HAL, and QCRIL Solid understanding of RIL and Telephony framework Proficiency with debugging in embedded software systems. Familiarity with JTAG. Exposure to one or more telecom networks and technologies like GSM, 3G, LTE, IMS, 5G RAN architecture
    $103k-127k yearly est. 3d ago
  • Data Scientist - Analytics

    Boxncase

    Data engineer job in Commack, NY

    About the Role We believe that the best decisions are backed by data. We are seeking a curious and analytical Data Scientist to champion our data-driven culture. In this role, you will act as a bridge between technical data and business strategy. You will mine massive datasets, build predictive models, and, most importantly, tell the story behind the numbers to help our leadership team make smarter choices. You are perfect for this role if you are as comfortable with SQL queries as you are with slide decks. What You Will Do Exploratory Analysis: Dive deep into raw data to discover trends, patterns, and anomalies that others miss. Predictive Modeling: Build and test statistical models (Regression, Time-series, Clustering) to forecast business outcomes and customer behavior. Data Visualization: Create clear, impactful dashboards using Tableau, PowerBI, or Python libraries (Matplotlib/Seaborn) to visualize success metrics. Experimentation: Design and analyze A/B tests to optimize product features and marketing campaigns. Data Cleaning: Work with Data Engineers to clean and structure messy data for analysis. Strategy: Present findings to stakeholders, translating complex math into clear, actionable business recommendations. Requirements Experience: 2+ years of experience in Data Science or Advanced Analytics. The Toolkit: Expert proficiency in Python or R for statistical analysis. Data Querying: Advanced SQL skills are non-negotiable (Joins, Window Functions, CTEs). Math Mindset: Strong grasp of statistics (Hypothesis testing, distributions, probability). Visualization: Ability to communicate data visually using Tableau, PowerBI, or Looker. Communication: Excellent verbal and written skills; you can explain a p-value to a non-technical manager. Preferred Tech Stack (Keywords) Languages: Python (Pandas, NumPy), R, SQL Viz Tools: Tableau, PowerBI, Looker, Plotly Machine Learning: Scikit-learn, XGBoost (applied to business problems) Big Data: Spark, Hadoop, Snowflake Benefits Salary Range: $50,000 - $180,000 USD / year (Commensurate with location and experience) Remote Friendly: Work from where you are most productive. Learning Budget: Stipend for data courses (Coursera, DataCamp) and books. (See the illustrative A/B-test sketch after this listing.)
    $50k-180k yearly 6d ago
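The A/B-testing duty above is easy to illustrate: a two-proportion z-test comparing conversion rates between a control and a variant. The counts below are invented for the example.

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 465]       # control, variant (made-up numbers)
visitors = [10_000, 10_000]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
print("significant at 5%" if p_value < 0.05 else "not significant at 5%")
```

In a real experiment the analysis would also account for the planned sample size, multiple metrics, and any peeking during the test.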
  • Principal Data Scientist

    Maximus (4.3 company rating)

    Data engineer job in Bridgeport, CT

    Description & Requirements We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team. You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes. This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work.) This position requires occasional travel to the DC area for client meetings. Essential Duties and Responsibilities: - Make deep dives into the data, pulling out objective insights for business leaders. - Initiate, craft, and lead advanced analyses of operational data. - Provide a strong voice for the importance of data-driven decision making. - Provide expertise to others in data wrangling and analysis. - Convert complex data into visually appealing presentations. - Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners. - Understand the importance of automation and look to implement and initiate automated solutions where appropriate. - Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects. - Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects. - Guide operational partners on product performance and solution improvement/maturity options. - Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization. - Learn new skills in advanced analytics/AI/ML tools, techniques, and languages. - Mentor more junior data analysts/data scientists as needed. - Apply strategic approach to lead projects from start to finish; Job-Specific Minimum Requirements: - Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation. - Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital. - Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning. - Contribute to the development of mathematically rigorous process improvement procedures. - Maintain current knowledge and evaluation of the AI technology landscape and emerging. developments and their applicability for use in production/operational environments. Minimum Requirements - Bachelor's degree in related field required. - 10-12 years of relevant professional experience required. 
Job-Specific Minimum Requirements: - 10+ years of relevant Software Development + AI / ML / DS experience. - Professional Programming experience (e.g. Python, R, etc.). - Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML. - Experience with API programming. - Experience with Linux. - Experience with Statistics. - Experience with Classical Machine Learning. - Experience working as a contributor on a team. Preferred Skills and Qualifications: - Masters or BS in quantitative discipline (e.g. Math, Physics, Engineering, Economics, Computer Science, etc.). - Experience developing machine learning or signal processing algorithms: - Ability to leverage mathematical principles to model new and novel behaviors. - Ability to leverage statistics to identify true signals from noise or clutter. - Experience working as an individual contributor in AI. - Use of state-of-the-art technology to solve operational problems in AI and Machine Learning. - Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles. - Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions. - Ability to build reference implementations of operational AI & Advanced Analytics processing solutions. Background Investigations: - IRS MBI - Eligibility #techjobs #VeteransPage EEO Statement Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics. Pay Transparency Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances. Accommodations Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process-including accessing job postings, completing assessments, or participating in interviews,-please contact People Operations at **************************. Minimum Salary $ 156,740.00 Maximum Salary $ 234,960.00
    $77k-112k yearly est. Easy Apply 2d ago
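As a small, hedged illustration of the "classical machine learning / NLP" combination the listing above asks for (not Maximus's actual tooling), a TF-IDF plus logistic-regression text classifier can be built in a few lines of scikit-learn. The documents and labels are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "claim processed and payment issued",
    "payment issued after review",
    "document missing, request resubmission",
    "resubmission required, form incomplete",
]
labels = ["resolved", "resolved", "action_needed", "action_needed"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(docs, labels)
print(model.predict(["form incomplete, please resubmit"]))  # expected: ['action_needed']
```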
  • Data Scientist

    Drive Devilbiss Healthcare

    Data engineer job in Port Washington, NY

    Job Description The Sales Data Scientist will use data analytics and statistical techniques to generate insights that support sales performance and revenue growth. This role focuses on building and improving reporting tools, analyzing data, and providing actionable recommendations to help the sales organization make informed decisions. Key Responsibilities · Data Analysis & Reporting · Analyze sales data to identify trends, patterns, and opportunities. · Create and maintain dashboards and reports for Sales and leadership teams. · Support root-cause analysis and process improvement initiatives. · Sales Insights · Provide data-driven recommendations for pricing, discount strategies, and sales funnel optimization. · Assist in segmentation analysis to identify key customer groups and markets. · Collaboration · Work closely with Sales, Marketing, Finance, and Product teams to align analytics with business needs. · Present findings in clear, actionable formats to stakeholders. · Data Infrastructure · Ensure data accuracy and integrity across reporting tools. · Help automate reporting processes for efficiency and scalability. Required Qualifications: · 2-4 years of experience in a data analytics or sales operations role. · Strong Excel skills (pivot tables, formulas, data analysis). · Bachelor's degree in Mathematics, Statistics, Economics, Data Science, or related field-or equivalent experience. Preferred Qualifications: · Familiarity with Python, R, SQL, and data visualization tools (e.g., Power BI). · Experience leveraging AI/ML tools and platforms (e.g., predictive analytics, natural language processing, automated insights). · Experience with CRM systems (Salesforce) and marketing automation platforms. · Strong analytical and problem-solving skills with attention to detail. · Ability to communicate insights clearly to non-technical audiences. · Collaborative mindset and willingness to learn new tools and techniques. Why Apply to Drive DeVilbiss… Competitive Benefits, Paid Time Off, 401(k) Savings Plan Pursuant to New York law, Drive Medical provides a salary range in job advertisements. The salary range for this role is $95,000.00 to $125,000.00 per year. Actual salaries may vary depending on factors such as the applicant's experience, specialization, education, as well as the company's requirements. The provided salary range does not include bonuses, incentives, differential pay, or other forms of compensation or benefits which may be offered to the applicant, if eligible according to the company's policies. Drive Medical is an Equal Opportunity Employer and provides equal employment opportunities to all employees and applicants for employment. Drive Medical strictly prohibits and does not tolerate discrimination against employees, applicants, or any other covered person because of race, color, religion, gender, sexual orientation, gender identity, pregnancy and/or parental status, national origin, age, disability status, protected veteran status, genetic information (including family medical history), or any other characteristic protected by federal, state, or local law. Drive Medical complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.
    $95k-125k yearly 1d ago
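A minimal pandas version of the sales reporting this role describes might look like the sketch below; the data frame and column names are invented for illustration.

```python
import pandas as pd

sales = pd.DataFrame({
    "month":   ["2024-01", "2024-01", "2024-02", "2024-02"],
    "region":  ["East", "West", "East", "West"],
    "revenue": [120_000, 95_000, 131_000, 90_500],
})

# Month x region revenue table, plus month-over-month growth by region (%).
pivot = sales.pivot_table(index="month", columns="region", values="revenue", aggfunc="sum")
growth = pivot.pct_change().mul(100).round(1)
print(pivot)
print(growth)
```

The same pivot logic ports directly to the Excel pivot tables and Power BI models the posting mentions.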
  • Junior Data Scientist

    Bexorg

    Data engineer job in New Haven, CT

    About Us Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment. The Job Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments - including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies. Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively. Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery. Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries. Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts. Qualifications and Skills: Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks. Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered. Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus. Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results. 
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers. Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks - whether it's mastering a new analysis method or digging into scientific literature - to push our research forward. Local to New Haven, CT preferred. No relocation offered for this position. Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
    $75k-105k yearly est. 60d+ ago
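A hedged sketch of the high-dimensional analysis this listing describes: reduce a synthetic wide dataset with PCA and fit a simple classifier, loosely analogous to transcriptomic-style work. No real BrainEx data or schema is implied; everything here is generated.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 200 samples x 5,000 features stands in for a gene-expression-like matrix.
X, y = make_classification(n_samples=200, n_features=5000, n_informative=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```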
  • Data Engineer w/ AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation

    Intermedia Group

    Data engineer job in Ridgefield, CT

    OPEN JOB: Data Engineer w AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake **HYBRID - This candidate will work on site 2-3X per week in Ridgefield CT location SALARY: $140,000 to $185,000 2 Openings NOTE: CANDIDATE MUST BE US CITIZEN OR GREEN CARD HOLDER We are seeking a highly skilled and experienced Data Engineer to design, build, and maintain our scalable and robust data infrastructure on a cloud platform. In this pivotal role, you will be instrumental in enhancing our data infrastructure, optimizing data flow, and ensuring data availability. You will be responsible for both the hands-on implementation of data pipelines and the strategic design of our overall data architecture. Seeking a candidate with hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake, Proficiency in Python and SQL and DevOps/CI/CD experience Duties & Responsibilities Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics. Collaborate with data architects, modelers and IT team members to help define and evolve the overall cloud-based data architecture strategy, including data warehousing, data lakes, streaming analytics, and data governance frameworks Collaborate with data scientists, analysts, and other business stakeholders to understand data requirements and deliver solutions. Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift) ensuring data quality, integrity, security, and accessibility. Implement data quality and validation processes to ensure data accuracy and reliability. Develop and maintain documentation for data processes, architecture, and workflows. Monitor and troubleshoot data pipeline performance and resolve issues promptly. Consulting and Analysis: Meet regularly with defined clients and stakeholders to understand and analyze their processes and needs. Determine requirements to present possible solutions or improvements. Technology Evaluation: Stay updated with the latest industry trends and technologies to continuously improve data engineering practices. Requirements Cloud Expertise: Expert-level proficiency in at least one major cloud platform (AWS, Azure, or GCP) with extensive experience in their respective data services (e.g., AWS S3, Glue, Lambda, Redshift, Kinesis; Azure Data Lake, Data Factory, Synapse, Event Hubs; GCP BigQuery, Dataflow, Pub/Sub, Cloud Storage); experience with AWS data cloud platform preferred SQL Mastery: Advanced SQL writing and optimization skills. Data Warehousing: Deep understanding of data warehousing concepts, Kimball methodology, and various data modeling techniques (dimensional, star/snowflake schemas). Big Data Technologies: Experience with big data processing frameworks (e.g., Spark, Hadoop, Flink) is a plus. Database Systems: Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra). DevOps/CI/CD: Familiarity with DevOps principles and CI/CD pipelines for data solutions. Hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation Proficiency in Python and SQL Desired Skills, Experience and Abilities 4+ years of progressive experience in data engineering, with a significant portion dedicated to cloud-based data platforms. ETL/ELT Tools: Hands-on experience with ETL/ELT tools and orchestrators (e.g., Apache Airflow, Azure Data Factory, AWS Glue, dbt). 
Data Governance: Understanding of data governance, data quality, and metadata management principles. AWS Experience: Ability to evaluate AWS cloud applications, make architecture recommendations; AWS solutions architect certification (Associate or Professional) is a plus Familiarity with Snowflake Knowledge of dbt (data build tool) Strong problem-solving skills, especially in data pipeline troubleshooting and optimization If you are interested in pursuing this opportunity, please respond back and include the following: Full CURRENT Resume Required compensation Contact information Availability Upon receipt, one of our managers will contact you to discuss in full STEPHEN FLEISCHNER Recruiting Manager INTERMEDIA GROUP, INC. EMAIL: *******************************
    $140k-185k yearly Easy Apply 47d ago
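One concrete piece of the AWS toolset named in the listing above is querying with Athena from Python. The sketch below uses boto3's Athena client; the database, table, region, and results bucket are placeholders.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str, database: str, output_s3: str) -> str:
    """Start an Athena query and poll until it finishes; returns the final state."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

if __name__ == "__main__":
    print(run_query(
        "SELECT event_date, COUNT(*) FROM events GROUP BY event_date",  # illustrative table
        database="analytics_db",
        output_s3="s3://example-athena-results/",
    ))
```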
  • Senior Market Data Engineer

    WorldQuant (4.6 company rating)

    Data engineer job in Old Greenwich, CT

    WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies - the foundation of a balanced, global investment platform. WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate an outstanding talent. There is no roadmap to future success, so we need people who can help us build it. Technologists at WorldQuant research, design, code, test and deploy firmwide platforms and tooling while working collaboratively with researchers and portfolio managers. Our environment is relaxed yet intellectually driven. We seek people who think in code and are motivated by being around like-minded people. The Role: * Design and build real-time market data processing systems covering global markets and multiple asset classes * Architect and implement high-performance software solutions for processing market data feeds at scale * Drive technical innovation by leveraging emerging technologies to enhance system telemetry, monitoring, and operational efficiency * Provide technical leadership and escalation support for production market data systems * Analyze system performance and design data-driven approaches to optimize market data processing workflows * Lead the design of data governance systems for tracking availability, access patterns, and usage metrics What You Will Bring: * Degree in a quantitative or technical discipline from a top university and strong academic scores * Expert-level C++ proficiency with demonstrated experience in other object-oriented languages (Java, C#) * Experience with scripting languages such as Perl, Python, and shell scripting for automation and data processing * Deep experience with tick-by-tick market data processing, including data normalization, feed handling, and real-time analytics * Excellent communication skills with ability to collaborate effectively across technical and business teams * Experience working in a Linux environment Our Benefits: * Core Benefits: Fully paid medical and dental insurance for employees and dependents, flexible spending account, 401k, fully paid parental leave, generous PTO (paid time off) that consists of: * twenty vacation days that are pro-rated based on the employee's start date, at an accrual of 1.67 days per month, * three personal days, and * ten sick days. * Perks: Employee discounts for gym memberships, wellness activities, healthy snacks, casual dress code * Training: learning and development courses, speakers, team-building off-site * Employee resource groups Pay Transparency: WorldQuant is a total compensation organization where you will be eligible for a base salary, discretionary performance bonus, and benefits. To provide greater transparency to candidates, we share base pay ranges for all US-based job postings regardless of state.
    We set standard base pay ranges for all roles based on job function and level, benchmarked against similar stage organizations. When finalizing an offer, we will take into consideration an individual's experience level and the qualifications they bring to the role to formulate a competitive total compensation package. The base pay range for this position is $175,000 - $250,000 USD. At WorldQuant, we are committed to providing candidates with all necessary information in compliance with pay transparency laws. If you believe any required details are missing from this job posting, please notify us at [email protected], and we will address your concerns promptly. By submitting this application, you acknowledge and consent to the terms of the WorldQuant Privacy Policy. The privacy policy offers an explanation of how and why your data will be collected, how it will be used and disclosed, how it will be retained and secured, and what legal rights are associated with that data (including the rights of access, correction, and deletion). The policy also describes legal and contractual limitations on these rights. The specific rights and obligations of individuals living and working in different areas may vary by jurisdiction. Copyright 2025 WorldQuant, LLC. All Rights Reserved. WorldQuant is an equal opportunity employer and does not discriminate in hiring on the basis of race, color, creed, religion, sex, sexual orientation or preference, age, marital status, citizenship, national origin, disability, military status, genetic predisposition or carrier status, or any other protected characteristic as established by applicable law.
    $175k-250k yearly 60d+ ago
  • Tech Lead, Data & Inference Engineer

    Catalyst Labs

    Data engineer job in Greenwich, CT

    Job Description Our Client A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised twelve million dollars in funding and are transforming how business to business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high match and cross channel segments without the use of cookies. By transforming first party and third party data into precision targetable audiences across platforms such as Meta, Google and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business to consumer activity, they are redefining how business brands scale demand generation and account based efforts. About Us Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations. We collaborate directly with Founders, CTOs, and Heads of AI in these areas who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems. Location: San Francisco Work type: Full Time; Compensation: above-market base + bonus + equity Roles & Responsibilities Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use. Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third party application programming interfaces into trusted and low latency systems. Take full ownership of reliability, cost and service level objectives. This includes achieving 99.9% uptime, maintaining minutes-level latency and optimizing cost per terabyte. Conduct root cause analysis and provide long lasting solutions. Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring and quality assurance using large language models and retrieval augmented generation. Manage version control, caching and evaluation loops. Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes and usage based decision making. Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade offs and reversibility while making practical decisions on whether to build internally or buy externally. Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality and support for both real time and batch oriented use cases. Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization. Qualifications Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics. Excellent written and verbal communication; proactive and collaborative mindset. Comfortable in hybrid or distributed environments with strong ownership and accountability. A founder-level bias for action: able to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes. Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly. Core Experience 6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design. Expert SQL (query optimization on large datasets) and Python skills. Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect). Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability. Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure). Bonus: Strong Node.js skills for faster onboarding and system integration. Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset. (See the DuckDB sketch after this listing.)
    $84k-114k yearly est. 6d ago
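Since the listing names DuckDB and the modern data stack, here is a hedged sketch of a tiny batch transformation with a quality gate. The table, columns, and dedup rule are invented for the example.

```python
import duckdb

con = duckdb.connect()

# Stand-in for a raw extract; in practice this would be read_parquet('s3://...').
con.execute("""
    CREATE TABLE raw_accounts AS
    SELECT * FROM (VALUES
        (1, 'acme.com', 'Acme Corp'),
        (2, 'acme.com', 'Acme Corporation'),
        (3, NULL,       'No Domain LLC')
    ) AS t(account_id, domain, name)
""")

# Transformation: deduplicate accounts by domain, keeping the lowest id.
dedup = con.execute("""
    SELECT MIN(account_id) AS account_id, domain, MIN(name) AS name
    FROM raw_accounts
    WHERE domain IS NOT NULL
    GROUP BY domain
""").df()   # .df() returns a pandas DataFrame

# Quality gate: fail loudly if any surviving row is missing a domain.
assert dedup["domain"].notna().all(), "null domains leaked through the pipeline"
print(dedup)
```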
  • Data Engineer (Hybrid - Ridgefield, CT) - 1760

    Placingit

    Data engineer job in Ridgefield, CT

    Employment Type: Full-time employment - no consulting or corp to corp Salary Range: $140K - $185K + bonus Visa restrictions: US Citizen or Green Card only. This role isn't eligible for H-1B, TN, F1 or OPT Overview We are looking for a hands-on Data Engineer to design, build, and maintain scalable data platforms and pipelines in a modern cloud environment. You will play a key role in shaping our data architecture, optimizing data flow, and ensuring data quality and availability across the organization. This role offers the opportunity to contribute directly to meaningful work that supports the development and delivery of life-changing products. You will collaborate with global teams and be part of a culture that values impact, growth, balance, and well-being. What You'll Do Design, build, and optimize data pipelines and ETL/ELT workflows to support analytics and reporting. Partner with architects and engineering teams to define and evolve our cloud-based data architecture, including data lakes, data warehouses, and streaming data platforms. Work closely with data scientists, analysts, and business partners to understand requirements and deliver reliable, reusable data solutions. Develop and maintain scalable data storage solutions (e.g., AWS S3, Redshift, Snowflake) with a focus on performance, reliability, and security. Implement data quality checks, validation processes, and metadata documentation. Monitor, troubleshoot, and improve pipeline performance and workflow efficiency. Stay current on industry trends and recommend new technologies and approaches. Qualifications Data Engineer (Mid-Level) Strong understanding of data integration, data modeling, and SDLC. Experience working on project teams and delivering within Agile environments. Hands-on experience with AWS data services (e.g., Glue, Lambda, Athena, Step Functions, Lake Formation). Associate degree + 8 years experience, or Bachelor's + 4 years, or Master's + 2 years. Or Associate degree + 4 years experience, or Bachelor's + 2 years, or Master's + 1 year experience. Expert-level proficiency in at least one major cloud platform (AWS preferred). Advanced SQL and strong understanding of data warehousing and data modeling (Kimball/star schema). Experience with big data processing (e.g., Spark, Hadoop, Flink) is a plus. Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra). Familiarity with CI/CD pipelines and DevOps principles. Proficiency in Python and SQL (required). Desired Skills Experience with ETL/ELT tools (e.g., Airflow, dbt, AWS Glue, ADF). Understanding of data governance and metadata management. Experience with Snowflake. AWS certification is a plus. Strong problem-solving skills and ability to troubleshoot pipeline performance issues.
    $84k-114k yearly est. 47d ago
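The Ridgefield listing above centers on ETL/ELT pipelines and names Airflow among the desired tools. As a hedged illustration (not taken from the posting), here is a minimal Airflow DAG skeleton for a daily extract-transform-load job; the DAG id, task bodies, and target systems are hypothetical.

```python
# Minimal sketch of a daily ETL pipeline in Apache Airflow.
# Task bodies, the DAG id, and any source/target names are assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    # e.g. pull yesterday's raw files from object storage into a staging area
    ...

def transform(**_):
    # e.g. clean, deduplicate, and conform the staged records
    ...

def load(**_):
    # e.g. upsert the conformed records into the warehouse (Redshift, Snowflake)
    ...

with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Simple linear dependency chain; data quality checks would slot in as
    # additional tasks between transform and load.
    t_extract >> t_transform >> t_load
```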
  • C++ Market Data Engineer (USA)

    Trexquant 4.0company rating

    Data engineer job in Stamford, CT

    Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run. Responsibilities * Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE). * Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate. * Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems. * Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance. * Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages. * Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning. * Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading. * Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
    $95k-136k yearly est. 7d ago
  • Network Planning Data Scientist (Manager)

    Atlas Air Worldwide Holdings 4.9company rating

    Data engineer job in White Plains, NY

    Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes. This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency. Responsibilities Analyze and Monitor Network Performance Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements. Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness. Modeling & Optimization Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions. Support the design, testing, and implementation of software optimization tools to enhance operational decision-making. Reporting & Governance Develop periodic performance and reliability reports for customers and assist in presentation creation. Produce regular and ad hoc reports to monitor compliance with established operating parameters. Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets. Cross-Functional Collaboration Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution. Provide insights that inform schedule adjustments, fleet utilization, and contingency planning. Innovation & Continuous Improvement Identify opportunities to streamline workflows and automate recurring analyses. Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility. Qualifications Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools and reporting dashboards (e.g., Tableau, Power BI). Bachelor's degree required in Industrial Engineering, Operations Research, Applied Mathematics, Data Science or related quantitative discipline, or equivalent work experience. 5+ years of experience in strategy, operations planning, finance or continuous improvement, ideally with airline network planning. Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation. Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment. Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders. A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement. Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations. Salary Range: $131,500 - $177,500 The financial offer within the stated range will be based on multiple factors, including but not limited to location, relevant experience/level, and skillset. The Company is an Equal Opportunity Employer.
It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity or any other characteristic or status protected in accordance with applicable federal, state and local laws. If you'd like more information about your EEO rights as an applicant under the law, please download the EEO is the Law document at ****************************************** Related postings: the Pay Transparency Statement, the "Know Your Rights: Workplace Discrimination is Illegal" poster, and the "EEO Is The Law" poster.
    $131.5k-177.5k yearly 60d+ ago
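The Atlas Air role above emphasizes KPIs for network delays and reliability, with Python noted as a plus. As a rough, hypothetical sketch of that kind of analysis (the input file and column names are assumptions, not anything from the posting), the snippet below computes average delay and on-time performance by route with pandas.

```python
# Minimal sketch: simple network-reliability KPIs with pandas.
# The file name and columns (route, scheduled_dep, actual_dep) are hypothetical.
import pandas as pd

flights = pd.read_csv("flights.csv", parse_dates=["scheduled_dep", "actual_dep"])

# Departure delay in minutes; negative values indicate an early departure.
flights["delay_min"] = (
    (flights["actual_dep"] - flights["scheduled_dep"]).dt.total_seconds() / 60
)

# On-time performance: share of departures within 15 minutes of schedule.
kpis = flights.groupby("route").agg(
    departures=("delay_min", "size"),
    avg_delay_min=("delay_min", "mean"),
    otp_d15=("delay_min", lambda d: (d <= 15).mean()),
)

# Routes with the weakest on-time performance surface first for review.
print(kpis.sort_values("otp_d15"))
```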
  • Application Support Engineer

    The Phoenix Group 4.8company rating

    Data engineer job in Fairfield, CT

    About Us We are a global investment firm focused on combining financial theory with practical application. Our goal is to deliver long-term results by cutting through market noise, identifying the most impactful factors, and developing ideas that stand up to rigorous testing. Over the years, we have built a reputation as innovators in portfolio management and alternative investment strategies. Our team values intellectual curiosity, honesty, and a commitment to understanding what drives financial markets. Collaboration, transparency, and openness to new ideas are central to our culture, fostering innovation and continuous improvement. Your Role We are seeking an Application Support Engineer to operate at the intersection of technical systems and business processes that power our investment operations. This individual contributor role involves supporting a complex technical environment, resolving production issues, and contributing to projects that enhance systems and processes. You will gain hands-on experience with cloud-deployed portfolio management and research systems and work closely with both business and technical teams. This role is ideal for someone passionate about technology and systems reliability, looking to grow into a systems reliability or engineering-focused position. Responsibilities Develop and maintain expertise in the organization's applications to support internal users. Manage user expectations and ensure satisfaction with our systems and tools. Advocate for users with project management and development teams. Work closely with QA to report and track issues identified by users. Ensure proper escalation for unresolved issues to maintain user satisfaction. Participate in production support rotations, including off-hours coverage. Identify gaps in support processes and create documentation or workflows in collaboration with development and business teams. Diagnose and resolve system issues, including debugging code, analyzing logs, and investigating performance or resource problems. Collaborate across teams to resolve complex technical problems quickly and efficiently. Maintain documentation of system behavior, root causes, and process improvements. Contribute to strategic initiatives that enhance system reliability and operational efficiency. Qualifications Bachelor's degree in Engineering, Computer Science, or equivalent experience. 2+ years of experience supporting complex software systems, collaborating with business users and technical teams. Hands-on technical skills including SQL and programming/debugging (Python preferred). Strong written and verbal communication skills. Ability to work independently and within small teams. Eagerness to learn new technologies and automate manual tasks to improve system reliability. Calm under pressure, with demonstrated responsibility, maturity, and trustworthiness. Compensation & Benefits Salary range: $115,000-$135,000 (may vary based on experience, location, or organizational needs). Eligible for annual discretionary bonus. Comprehensive benefits package including paid time off, medical/dental/vision coverage, 401(k), and other benefits as applicable. The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. 
We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
    $115k-135k yearly 4d ago
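The support role above involves diagnosing production issues by analyzing logs and debugging code in Python. Purely as an illustrative sketch (the log path and line format are assumptions, not anything from the posting), a small script like the following can summarize recurring errors during triage.

```python
# Minimal sketch: summarizing errors from an application log during triage.
# The log path and the "timestamp level component - message" layout are assumed.
import re
from collections import Counter

ERROR_LINE = re.compile(r"^\S+ \S+ ERROR (?P<component>\S+) - (?P<message>.*)$")

def summarize_errors(path: str, top_n: int = 10) -> None:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_LINE.match(line)
            if match:
                # Group by component plus the first few words of the message
                # so near-identical errors roll up together.
                key = (match["component"], " ".join(match["message"].split()[:5]))
                counts[key] += 1
    for (component, message), n in counts.most_common(top_n):
        print(f"{n:6d}  {component}: {message} ...")

summarize_errors("app.log")
```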
  • Data Platform Engineer (USA)

    Trexquant 4.0company rating

    Data engineer job in Stamford, CT

    Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a highly motivated and technically rigorous Data Platform Engineer to help modernize our foundational data infrastructure. As a Data Platform Engineer, you will be at the center of building the systems that ensure the quality, reliability, and discoverability of mission-critical data. Your work will directly impact the data operators and downstream consumers by creating robust tools, monitoring, and workflows that ensure accuracy, validity, and timeliness of data across the firm. Responsibilities * Architect and maintain core components of the Data Platform with a strong focus on reliability and scalability. * Build and maintain tools to manage data feeds, monitor validity, and ensure data timeliness. * Design and implement event-based data orchestration pipelines. * Evaluate and integrate data quality and observability tools via POCs and MVPs. * Stand up a data catalog system to improve data discoverability and lineage tracking. * Collaborate closely with infrastructure teams to support operational excellence and platform uptime. * Write and maintain data quality checks to validate real-time and batch data. * Validate incoming real-time data using custom Python-based validators. * Ensure low-level data correctness and integrity, especially in high-precision environments. * Build robust and extensible systems that will be used by data operators to ensure the health of our data ecosystem. * Own the foundational systems used by analysts and engineers alike to trust and explore our datasets.
    $95k-136k yearly est. 7d ago
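The Data Platform Engineer listing above mentions custom Python-based validators for incoming real-time data and routine data quality checks. The sketch below is a minimal, hypothetical example of such a record-level check; the field names, staleness budget, and record shape are assumptions rather than Trexquant's actual design.

```python
# Minimal sketch of a record-level validator for incoming real-time data.
# Field names, thresholds, and the assumption that "ts" is a timezone-aware
# datetime are all hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ValidationResult:
    ok: bool
    errors: list[str]

def validate_tick(record: dict, max_staleness_s: float = 5.0) -> ValidationResult:
    errors = []

    # Required fields must be present.
    for field in ("symbol", "price", "size", "ts"):
        if field not in record:
            errors.append(f"missing field: {field}")

    # Basic range checks on numeric fields.
    if "price" in record and not record["price"] > 0:
        errors.append(f"non-positive price: {record.get('price')}")
    if "size" in record and record["size"] < 0:
        errors.append(f"negative size: {record.get('size')}")

    # Timeliness: the record should not exceed the staleness budget.
    if "ts" in record:
        age = (datetime.now(timezone.utc) - record["ts"]).total_seconds()
        if age > max_staleness_s:
            errors.append(f"stale record: {age:.1f}s old")

    return ValidationResult(ok=not errors, errors=errors)
```

In practice a check like this would run inside the ingestion pipeline, with failures routed to monitoring rather than printed, but it illustrates the shape of the validation work the posting describes.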

Learn more about data engineer jobs

How much does a data engineer earn in Stratford, CT?

The average data engineer in Stratford, CT earns between $73,000 and $131,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Stratford, CT

$98,000

What are the biggest employers of Data Engineers in Stratford, CT?

The biggest employers of Data Engineers in Stratford, CT are:
  1. The Health Plan
  2. Waters
  3. The Phoenix Group