Data Engineer - Data Integration, IBM Corporation, Armonk, NY and various unanticipated client sites throughout the US: Manage the end-to-end delivery of data migration projects, implementing ETL/ELT concepts and leveraging ETL tools such as Informatica and DataStage, and cloud platforms like Google Cloud.
Design and build end-to-end data pipelines to extract, integrate, transform, and load data from diverse source systems into target environments such as databases, data warehouses, or data marts.
Collaborate with clients to define data mapping and transformation rules, ensuring accurate application prior to loading.
Normalize data and establish relational structures to support system migrations.
Develop processes for data cleaning, filtering, aggregation, and augmentation to maintain data integrity.
Implement validation checks and data quality controls to ensure accuracy and consistency across systems.
Create, maintain, and optimize SQL procedures, functions, triggers, and ETL/ELT processes.
Develop, debug, and maintain ETL jobs while applying query optimization techniques (indexing, clustering, partitioning, and use of analytical functions) to enhance performance on large datasets.
Partner with data analysts, data scientists, and business stakeholders to understand requirements and ensure delivery of the right data.
Capture fallouts and prepare reports using Excel, Power BI, Looker, Crystal Reports, etc.
Perform root cause analysis and resolution.
Ensure the stability and efficiency of data pipelines through regular monitoring, troubleshooting, and performance optimization.
Maintain thorough and up-to-date documentation of all data integration processes, pipelines, and architectures.
Analyze current trends, tools, and technologies in data engineering and integration.
Utilize: Google Cloud Platform (Google BigQuery, Cloud Storage, Google Looker), Procedural Language/Structured Query Language (PL/SQL), Informatica, DataStage, Data Integration, Data Warehousing, Database Design/Modelling, Data Visualization (Power BI/Crystal Reports).
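The SQL and analytical-function work described above is easiest to see in a runnable sketch. The example below uses Python's built-in sqlite3 module (standing in for a production database rather than PL/SQL or BigQuery); the sales table, index, and column names are invented for illustration.

```python
import sqlite3

# In-memory database with a hypothetical sales table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, sale_day TEXT, amount REAL);
CREATE INDEX idx_sales_region ON sales (region, sale_day);  -- supports per-region lookups
INSERT INTO sales VALUES
  ('East', '2024-01-01', 100), ('East', '2024-01-02', 150),
  ('West', '2024-01-01', 200), ('West', '2024-01-02', 50);
""")

# A CTE plus an analytical (window) function: running total per region.
query = """
WITH ordered AS (
  SELECT region, sale_day, amount FROM sales
)
SELECT region, sale_day,
       SUM(amount) OVER (PARTITION BY region ORDER BY sale_day) AS running_total
FROM ordered
ORDER BY region, sale_day;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

The `OVER (PARTITION BY ... ORDER BY ...)` clause computes the running total without a self-join, which is the kind of analytical-function rewrite that pays off on large datasets.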
Required: Master's degree or equivalent in Computer Science or related (employer will accept a bachelor's degree plus five (5) years of progressive experience in lieu of a master's degree) and one (1) year of experience as a Data Engineer or related.
One (1) year of experience must include utilizing Google Cloud Platform (Google BigQuery, Cloud Storage, Google Looker), Procedural Language/Structured Query Language (PL/SQL), Informatica, DataStage, Data Integration, Data Warehousing, Database Design/Modelling, Data Visualization (Power BI/Crystal Reports).
$167,835 to $216,700 per year.
Please send resumes to
Applicants must reference D185 in the subject line.
Keywords: Data Engineer, Location: NORTH CASTLE, NY 10504
$167.8k-216.7k yearly 1d ago
Senior Data Architect - Power & Utilities AI Platforms
Ernst & Young Oman 4.7
Data engineer job in Stamford, CT
A leading global consulting firm is seeking a Senior Manager in Data Architecture for the Power & Utilities sector. This role requires at least 12 years of consulting experience and expertise in data architecture and engineering. The successful candidate will manage technology projects, lead teams, and develop innovative data solutions that drive significant business outcomes. Strong relationship management and communication skills are essential for engaging with clients and stakeholders. Join us to help shape a better working world.
$112k-156k yearly est. 3d ago
Data Scientist - Analytics
Boxncase
Data engineer job in Commack, NY
About the Role
We believe that the best decisions are backed by data. We are seeking a curious and analytical Data Scientist to champion our data-driven culture.
In this role, you will act as a bridge between technical data and business strategy. You will mine massive datasets, build predictive models, and, most importantly, tell the story behind the numbers to help our leadership team make smarter choices. You are perfect for this role if you are as comfortable with SQL queries as you are with slide decks.
### What You Will Do
Exploratory Analysis: Dive deep into raw data to discover trends, patterns, and anomalies that others miss.
Predictive Modeling: Build and test statistical models (regression, time-series, clustering) to forecast business outcomes and customer behavior.
Data Visualization: Create clear, impactful dashboards using Tableau, PowerBI, or Python libraries (Matplotlib/Seaborn) to visualize success metrics.
Experimentation: Design and analyze A/B tests to optimize product features and marketing campaigns.
Data Cleaning: Work with Data Engineers to clean and structure messy data for analysis.
Strategy: Present findings to stakeholders, translating complex math into clear, actionable business recommendations.
Requirements
Experience: 2+ years of experience in Data Science or Advanced Analytics.
The Toolkit: Expert proficiency in Python or R for statistical analysis.
Data Querying: Advanced SQL skills are non-negotiable (joins, window functions, CTEs).
Math Mindset: Strong grasp of statistics (Hypothesis testing, distributions, probability).
Visualization: Ability to communicate data visually using Tableau, PowerBI, or Looker.
Communication: Excellent verbal and written skills; you can explain a p-value to a non-technical manager.
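To make the A/B-testing and p-value requirements above concrete, here is a minimal permutation test in pure Python; the conversion counts are invented, and a real analysis would typically reach for scipy or statsmodels instead.

```python
import random

random.seed(42)

# Hypothetical conversion outcomes from an A/B test (illustrative, not real data):
# 1 = converted, 0 = did not convert.
group_a = [1] * 30 + [0] * 70   # 30% conversion
group_b = [1] * 42 + [0] * 58   # 42% conversion

observed_diff = sum(group_b) / len(group_b) - sum(group_a) / len(group_a)

# Permutation test: shuffle the pooled outcomes many times and count how often
# a lift at least as large as the observed one appears by chance alone.
pooled = group_a + group_b
n_a = len(group_a)
extreme = 0
n_iter = 10_000
for _ in range(n_iter):
    random.shuffle(pooled)
    diff = sum(pooled[n_a:]) / (len(pooled) - n_a) - sum(pooled[:n_a]) / n_a
    if diff >= observed_diff:
        extreme += 1

# The p-value: the probability of seeing this lift if the variants were identical.
p_value = extreme / n_iter
print(observed_diff, p_value)
```

This is also a useful way to explain a p-value to a non-technical manager: "out of 10,000 shuffled worlds where the variants are identical, only a small fraction look as good as our test did."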
### Preferred Tech Stack (Keywords)
Languages: Python (Pandas, NumPy), R, SQL
Viz Tools: Tableau, PowerBI, Looker, Plotly
Machine Learning: Scikit-learn, XGBoost (applied to business problems)
Big Data: Spark, Hadoop, Snowflake
Benefits
Salary Range: $50,000 - $180,000 USD / year (Commensurate with location and experience)
Remote Friendly: Work from where you are most productive.
Learning Budget: Stipend for data courses (Coursera, DataCamp) and books.
$50k-180k yearly 34d ago
Senior Data Engineer
Stratacuity
Data engineer job in Bristol, CT
Description/Comment: Disney Streaming is the leading premium streaming service offering live and on-demand TV and movies, with and without commercials, both in and outside the home. Operating at the intersection of entertainment and technology, Disney Streaming has a unique opportunity to be the number one choice for TV. We captivate and connect viewers with the stories they love, and we're looking for people who are passionate about redefining TV through innovation, unconventional thinking, and embracing fun. Join us and see what this is all about.
The Product Performance Data Solutions team for the Data organization within Disney Streaming (DS), a segment under Disney Media & Entertainment Distribution, is in search of a Senior Data Engineer. As a member of the Product Performance team, you will work on building foundational datasets from clickstream and quality-of-service telemetry data, enabling dozens of engineering and analytical teams to unlock the power of data to drive key business decisions and providing engineering, analytics, and operational teams the critical information necessary to scale the largest streaming service. The Product Performance Data Solutions team is seeking to grow their team of world-class Data Engineers who share their charisma and enthusiasm for making a positive impact.
Responsibilities:
* Contribute to maintaining, updating, and expanding existing data pipelines in Python / Spark while maintaining strict uptime SLAs
* Architect, design, and code shared libraries in Scala and Python that abstract complex business logic to allow consistent functionality across all data pipelines
* Tech stack includes Airflow, Spark, Databricks, Delta Lake, Snowflake, Scala, Python
* Collaborate with product managers, architects, and other engineers to drive the success of the Product Performance Data and key business stakeholders
* Contribute to developing and documenting both internal and external standards for pipeline configurations, naming conventions, partitioning strategies, and more
* Ensure high operational efficiency and quality of datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our partners (Engineering, Data Science, Operations, and Analytics teams)
* Be an active participant and advocate of agile/scrum ceremonies to collaborate and improve processes for our team
* Engage with and understand our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements
* Maintain detailed documentation of your work and changes to support data quality and data governance requirements
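A tiny sketch of the shared-library responsibility above: business rules live in one tested module that every pipeline imports, rather than being re-implemented per pipeline. The event schema and function names here are invented, and the real stack would express the same idea in Scala/Python against Spark.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class Event:
    user_id: str
    event_type: str
    duration_ms: int

def drop_invalid(events: Iterable[Event]) -> list[Event]:
    """Shared rule: discard events with non-positive durations or empty user ids."""
    return [e for e in events if e.user_id and e.duration_ms > 0]

def total_watch_time(events: Iterable[Event]) -> dict[str, int]:
    """Shared aggregation: per-user total duration for playback events."""
    totals: dict[str, int] = {}
    for e in events:
        if e.event_type == "playback":
            totals[e.user_id] = totals.get(e.user_id, 0) + e.duration_ms
    return totals

# Any pipeline reuses the same library functions instead of copying the logic.
raw = [
    Event("u1", "playback", 5000),
    Event("u1", "playback", -10),   # invalid, dropped by the shared rule
    Event("u2", "click", 100),
    Event("u2", "playback", 2500),
]
clean = drop_invalid(raw)
print(total_watch_time(clean))
```

Centralizing the rules this way is what keeps "consistent functionality across all data pipelines" from drifting as teams and pipelines multiply.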
Additional Information: There will be no SPC for this role. Interview process: 4 rounds (1 with the HM, 2 tech rounds, and a final with Product). We need an expert in SQL with extensive Scala experience and a proven self-starter (expected to discover the outcome and then chase after it) who can not only speak technical details but also clearly articulate them to the business.
Preferred Qualifications: Candidates with clickstream and user-browse data experience are highly preferred.
Apex Systems is a world-class IT services company that serves thousands of clients across the globe. When you join Apex, you become part of a team that values innovation, collaboration, and continuous learning. We offer quality career resources, training, certifications, development opportunities, and a comprehensive benefits package. Our commitment to excellence is reflected in many awards, including ClearlyRated's Best of Staffing in Talent Satisfaction in the United States and Great Place to Work in the United Kingdom and Mexico. Apex uses a virtual recruiter as part of the application process.
Apex Benefits Overview: Apex offers a range of supplemental benefits, including medical, dental, vision, life, disability, and other insurance plans that offer an optional layer of financial protection. We offer an ESPP (employee stock purchase program) and a 401K program which allows you to contribute typically within 30 days of starting, with a company match after 12 months of tenure. Apex also offers a HSA (Health Savings Account on the HDHP plan), a SupportLinc Employee Assistance Program (EAP) with up to 8 free counseling sessions, a corporate discount savings program and other discounts. In terms of professional development, Apex hosts an on-demand training program, provides access to certification prep and a library of technical and leadership courses/books/seminars once you have 6+ months of tenure, and certification discounts and other perks to associations that include CompTIA and IIBA. Apex has a dedicated customer service team for our Consultants that can address questions around benefits and other resources, as well as a certified Career Coach. You can access a full list of our benefits, programs, support teams and resources within our 'Welcome Packet' as well, which an Apex team member can provide.
Employee Type:
Contract
Location:
Bristol, CT, US
Job Type:
Date Posted:
January 8, 2026
Pay Range:
$50 - $100 per hour
$50-100 hourly 2d ago
Senior Data Engineer - Product Performance Data -1573
Akube
Data engineer job in Bristol, CT
City: Bristol, CT / NYC
Onsite/Hybrid/Remote: Hybrid (4 days a week onsite)
Duration: 10 months
Rate Range: Up to $96/hr on W2 depending on experience (no C2C, 1099, or sub-contract)
Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B
Must Have:
Advanced SQL expertise
Strong Scala development experience
Python for data engineering
Apache Spark in production
Airflow for orchestration
Databricks platform experience
Cloud data storage experience (S3 or equivalent)
Responsibilities:
Build and maintain large-scale data pipelines with strict SLAs.
Design shared libraries in Scala and Python to standardize data logic.
Develop foundational datasets from clickstream and telemetry data.
Ensure data quality, reliability, and operational efficiency.
Partner with product, engineering, and analytics teams.
Define and document data standards and best practices.
Participate actively in Agile and Scrum ceremonies.
Communicate technical outcomes clearly to business stakeholders.
Maintain detailed technical and data governance documentation.
Qualifications:
5+ years of data engineering experience.
Strong problem-solving and algorithmic skills.
Expert-level SQL with complex analytical queries.
Hands-on experience with distributed systems at scale.
Experience supporting production data platforms.
Self-starter who can define outcomes and drive solutions.
Ability to translate technical concepts for non-technical audiences.
Bachelor's degree or equivalent experience.
$96 hourly 3d ago
Big Data Engineer
Cardinal Integrated 4.4
Data engineer job in Jericho, NY
Title: Big Data Engineer
Location: Jericho, NY (source locally first; can source outside of the state as long as the candidate is willing to relocate at their own expense)
Duration: C2H
Rate: $70 - $80.00 per hour C2C
Visa Type: H1, GC, US Citizen
Interview: 1 or 2 rounds of phone followed by an on-site interview (may be able to provide a Skype or Webex to out-of-state candidates)
Travel: no
Description:
The Big Data Engineer is responsible for the design, architecture, and development of projects powered by Google BigData and the MapR Hadoop distribution.
Must-Have Skills/Experience:
* Bachelor's degree required
* 5+ years of solution architecture in Hadoop
* Demonstrated experience in architecture, engineering, and implementation of enterprise-grade production big data use cases
* Extensive hands-on experience in MapReduce, Hive, Java, HBase, and the following Hadoop eco-system products: Sqoop, Flume, Oozie, Storm, Spark, and/or Kafka.
* Extensive experience in Shell Scripting
* Solid understanding of different file formats and data serialization formats such as ProtoBuf, Avro, JSON.
* Hands-on delivery experience working on popular Hadoop distribution platforms like Cloudera, Hortonworks, or MapR (MapR preferred)
* Excellent communication skills
Nice to have:
* Coordinating the movement of data from original data sources into NoSQL data lakes and cloud environments
* Hands-on experience with Talend used in conjunction with Hadoop MapReduce/Spark/Hive.
* Experience with Google cloud platform (Google BigQuery)
* Source control (preferably Git Hub)
* Knowledge of agile development methodologies
* Experience with IDE frameworks like Hue, Jupyter, Zeppelin
* Solid experience with ETL technologies and data warehouse concepts
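For readers new to the MapReduce model this posting centers on, here is the classic word-count example sketched in pure Python; a real job would run the same three phases (map, shuffle, reduce) distributed across a Hadoop cluster rather than in one process.

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs, as a MapReduce mapper would.
def mapper(line):
    for word in line.lower().split():
        yield (word, 1)

# Shuffle phase: group intermediate pairs by key.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: combine the values for each key (here, sum the counts).
def reducer(key, values):
    return (key, sum(values))

lines = ["big data big pipelines", "data lakes and data marts"]
pairs = [p for line in lines for p in mapper(line)]
counts = dict(reducer(k, v) for k, v in shuffle(pairs).items())
print(counts)
```

The same map/shuffle/reduce structure underlies Hive queries and much of Spark, which is why interviewers for roles like this often start from word count.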
$70-80 hourly 24d ago
Data Scientist
Drive Devilbiss Healthcare
Data engineer job in Port Washington, NY
Who is Drive Medical…
Drive Medical has become a leading manufacturer of medical products with a strong and consistent track record of growth achieved both organically and through acquisitions. We are proud of our high-quality, diverse product portfolio, channel footprint and global operating scale. Our products are sold into the homecare, long-term care, retail, and e-commerce channels in more than 100 countries around the world.
“Leading the World with Innovative Healthcare Solutions that Enhance Lives”
Summary (Major Purpose of the Role):
The Sales Data Scientist will use data analytics and statistical techniques to generate insights that support sales performance and revenue growth. This role focuses on building and improving reporting tools, analyzing data, and providing actionable recommendations to help the sales organization make informed decisions.
Key Responsibilities
Data Analysis & Reporting
Analyze sales data to identify trends, patterns, and opportunities.
Create and maintain dashboards and reports for Sales and leadership teams.
Support root-cause analysis and process improvement initiatives.
Sales Insights
Provide data-driven recommendations for pricing, discount strategies, and sales funnel optimization.
Assist in segmentation analysis to identify key customer groups and markets.
Collaboration
Work closely with Sales, Marketing, Finance, and Product teams to align analytics with business needs.
Present findings in clear, actionable formats to stakeholders.
Data Infrastructure
Ensure data accuracy and integrity across reporting tools.
Help automate reporting processes for efficiency and scalability.
Required Qualifications:
2-4 years of experience in a data analytics or sales operations role.
Strong Excel skills (pivot tables, formulas, data analysis).
Bachelor's degree in Mathematics, Statistics, Economics, Data Science, or a related field, or equivalent experience.
Preferred Qualifications:
Familiarity with Python, R, SQL, and data visualization tools (e.g., Power BI).
Experience leveraging AI/ML tools and platforms (e.g., predictive analytics, natural language processing, automated insights).
Experience with CRM systems (Salesforce) and marketing automation platforms.
Strong analytical and problem-solving skills with attention to detail.
Ability to communicate insights clearly to non-technical audiences.
Collaborative mindset and willingness to learn new tools and techniques.
Why Apply…
Competitive Benefits, Paid Time Off, 401(k) Savings Plan
This position does not offer sponsorship opportunities.
Pursuant to New York law, Drive Medical provides a salary range in job advertisements. The salary range for this role is $95,000.00 to $125,000.00 per year. Actual salaries may vary depending on factors such as the applicant's experience, specialization, education, as well as the company's requirements. The provided salary range does not include bonuses, incentives, differential pay, or other forms of compensation or benefits which may be offered to the applicant, if eligible according to the company's policies.
Drive Medical is an Equal Opportunity Employer and provides equal employment opportunities to all employees and applicants for employment. Drive Medical strictly prohibits and does not tolerate discrimination against employees, applicants, or any other covered person because of race, color, religion, gender, sexual orientation, gender identity, pregnancy and/or parental status, national origin, age, disability status, protected veteran status, genetic information (including family medical history), or any other characteristic protected by federal, state, or local law. Drive Medical complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.
$95k-125k yearly Auto-Apply 30d ago
Data Scientist
The Connecticut Rise Network
Data engineer job in New Haven, CT
RISE Data Scientist
Reports to: Monitoring, Evaluation, and Learning Manager
Salary: Competitive and commensurate with experience
Please note: Due to the upcoming holidays, application review for this position will begin the first week of January. Applicants can expect outreach by the end of the week of January 5.
Overview:
The RISE Network's mission is to ensure all high school students graduate with a plan and the skills and confidence to achieve college and career success. Founded in 2016, RISE partners with public high schools to lead networks where communities work together to use data to learn and improve. Through its core and most comprehensive network, RISE partners with nine high schools and eight districts, serving over 13,000 students in historically marginalized communities.
RISE high schools work together to ensure all students experience success as they transition to, through, and beyond high school by using data to pinpoint needs, form hypotheses, and pursue ideas to advance student achievement. Partner schools have improved Grade 9 promotion rates by nearly 20 percentage points, while also decreasing subgroup gaps and increasing schoolwide graduation and college access rates. In 2021, the RISE Network was honored to receive the Carnegie Foundation's annual Spotlight on Quality in Continuous Improvement recognition. Increasingly, RISE is pursuing opportunities to scale its impact through research publications, consulting partnerships, professional development experiences, and other avenues to drive excellent student outcomes.
Position Summary and Essential Job Functions:
The RISE Data Scientist will play a critical role in leveraging data to support continuous improvement, program evaluation, and research, enhancing the organization's evidence-based learning and decision-making. RISE is seeking a talented and motivated individual to design and conduct rigorous quantitative analyses to assess the outcomes and impacts of programs.
The ideal candidate is an experienced analyst who is passionate about using data to drive social change, with strong skills in statistical modeling, data visualization, and research design. This individual will also lead efforts to monitor and analyze organization-wide data related to mission progress and key performance indicators (KPIs), and communicate these insights in ways that inspire improvement and action. This is an exciting opportunity for an individual who thrives in an entrepreneurial environment and is passionate about closing opportunity gaps and supporting the potential of all students, regardless of life circumstances. The role will report to the Monitoring, Evaluation, and Learning (MEL) Manager and sit on the MEL team.
Responsibilities include, but are not limited to:
1. Research and Evaluation (30%)
Collaborate with MEL and network teams to design and implement rigorous process, outcome, and impact evaluations.
Lead in the development of data collection tools and survey instruments.
Manage survey data collection, reporting, and learning processes.
Develop RISE learning and issue briefs supported by quantitative analysis.
Design and implement causal inference approaches where applicable, including quasi-experimental designs.
Provide technical input on statistical analysis plans, monitoring frameworks, and indicator selection for network programs.
Translate complex findings into actionable insights and policy-relevant recommendations for non-technical audiences.
Report data for RISE leadership and staff, generating new insights to inform program design.
Create written reports, presentations, publications, and communications pieces.
2. Quantitative Analysis and Statistical Modeling (30%)
Clean, transform, and analyze large and complex datasets from internal surveys, the RISE data warehouse, and external data sources such as the National Student Clearinghouse (NSC).
Conduct exploratory research that informs organizational learning.
Lead complex statistical analyses using advanced methods (regression modeling, propensity score matching, difference-in-differences analysis, time-series analysis, etc.).
Contribute to data cleaning and analysis for key performance indicator reporting.
Develop processes that support automation of cleaning and analysis for efficiency.
Develop and maintain analytical code and workflows to ensure reproducibility.
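As a minimal illustration of the difference-in-differences method named above, the basic 2x2 estimator reduces to simple arithmetic; the promotion-rate numbers below are invented for the example.

```python
# 2x2 difference-in-differences with hypothetical outcome means (illustrative):
# rows are groups, columns are periods (pre, post).
treated_pre, treated_post = 62.0, 74.0   # e.g., Grade 9 promotion rate, partner schools
control_pre, control_post = 60.0, 65.0   # comparison schools

# DiD estimate: the treated group's change minus the control group's change,
# which nets out any trend shared by both groups.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)
```

In practice this estimate comes from a regression with group, period, and interaction terms (plus covariates and clustered errors), but the interaction coefficient is exactly this double difference.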
3. Data Visualization and Tool-building (30%)
Work closely with non-technical stakeholders to understand the question(s) they are asking and the use cases they have for specific data visualizations or tools.
Develop well-documented overviews and specifications for new tools.
Create clear, compelling data visualizations and dashboards.
Collaborate with Data Engineering to appropriately and sustainably source data for new tools.
Manage complex projects to build novel and specific tools for internal or external stakeholders.
Maintain custom tools for the duration of their usefulness, including by responding to feedback and requests from project stakeholders.
4. Data Governance and Quality Assurance (10%)
Support data quality assurance protocols and standards across the MEL team.
Ensure compliance with data protection, security, and ethical standards.
Maintain organized, well-documented code and databases.
Collaborate with the Data Engineering team to maintain RISE MEL data infrastructure.
Qualifications
Master's degree (or PhD) in statistics, economics, quantitative social sciences, public policy, data science, or related field.
Minimum of 3 years of professional experience conducting statistical analysis and managing large datasets.
Advanced proficiency in R, Python, or Stata for data analysis and modeling.
Experience designing and implementing quantitative research and evaluation studies.
Strong understanding of inferential statistics, experimental and quasi-experimental methods, and sampling design.
Strong knowledge of survey data collection tools such as Key Surveys, Google Forms, etc.
Excellent data visualization and communication skills.
Experience with data visualization tools; strong preference for Tableau.
Ability to translate complex data into insights for diverse audiences, including non-technical stakeholders.
Ability to cultivate relationships and earn credibility with a diverse range of stakeholders.
Strong organizational and project management skills.
Strong sense of accountability and responsibility for results.
Ability to work in an independent and self-motivated manner.
Demonstrated proficiency with Google Workspace.
Commitment to equity, ethics, and learning in a nonprofit or mission-driven context.
Positive attitude and willingness to work in a collaborative environment.
Strong belief that all students can learn and achieve at high levels.
Preferred
Experience working on a monitoring, evaluation, and learning team.
Familiarity with school data systems and prior experience working in a school, district, or similar K-12 educational context preferred.
Experience working with survey data (e.g., DHS, LSMS), administrative datasets, or real-time digital data sources.
Working knowledge of data engineering or database management (SQL, cloud-based platforms).
Salary Range
$85k - $105k
Most new hires' salaries fall within the first half of the range, allowing team members to grow in their roles. For those who already have significant and aligned experiences at the same level as the role, placement may be at the higher end of the range.
The Connecticut RISE Network is an equal opportunity employer and welcomes candidates from diverse backgrounds.
RISE Interview & Communication Policy
The RISE interview process includes:
A video or phone screening with the hiring manager
Interviews with the hiring panel
A performance task
Reference checks
Applicants will never receive an offer unless they have completed the full interview process.
All official communications with applicants are sent only through ADP or CT RISE email addresses (@ctrise.org). There has been a job offer scam circulating from various email addresses using the domain @careers-ctrise.org; this is not a valid RISE email address.
If you receive an email from anyone claiming to represent RISE with a job offer outside of our official channels, or requesting written screening information, and you have not completed the full interview process, please do not respond and report it to ******************.
$85k-105k yearly Auto-Apply 32d ago
Junior Data Scientist
Bexorg
Data engineer job in New Haven, CT
About Us
Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment.
The Job
Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments - including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies.
Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively.
Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery.
Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries.
Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts.
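As a sketch of the "execute training routines" work described above, here is a minimal gradient-descent training loop in pure Python; in practice this role would use scikit-learn or PyTorch, and the data here is invented.

```python
# Minimal training-loop sketch (pure Python, standing in for scikit-learn/PyTorch):
# fit y ~ w * x by gradient descent on mean squared error, then evaluate.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.1, 3.9, 6.2, 7.9]   # roughly y = 2x, illustrative data

w = 0.0      # single model parameter
lr = 0.01    # learning rate
for epoch in range(500):
    # Gradient of MSE with respect to w: mean of 2 * (w*x - y) * x.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

# Evaluate the fitted model on the training data.
mse = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(round(w, 2), round(mse, 4))
```

Real runs on high-dimensional molecular data replace the scalar `w` with millions of parameters and add held-out evaluation, but the loop structure (compute gradient, update, measure error) is the same.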
Qualifications and Skills:
Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks.
Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered.
Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus.
Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results.
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers.
Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks - whether it's mastering a new analysis method or digging into scientific literature - to push our research forward.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
$75k-105k yearly est.
Data Engineer with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Intermedia Group
Data engineer job in Ridgefield, CT
OPEN JOB: Data Engineer with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
HYBRID: This candidate will work on site 2-3 times per week at the Ridgefield, CT location
SALARY: $140,000 to $185,000
2 Openings
NOTE: CANDIDATE MUST BE US CITIZEN OR GREEN CARD HOLDER
We are seeking a highly skilled and experienced Data Engineer to design, build, and maintain our scalable and robust data infrastructure on a cloud platform. In this pivotal role, you will be instrumental in enhancing our data infrastructure, optimizing data flow, and ensuring data availability. You will be responsible for both the hands-on implementation of data pipelines and the strategic design of our overall data architecture.
Seeking a candidate with hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation; proficiency in Python and SQL; and DevOps/CI/CD experience.
Duties & Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
Collaborate with data architects, modelers, and IT team members to help define and evolve the overall cloud-based data architecture strategy, including data warehousing, data lakes, streaming analytics, and data governance frameworks.
Collaborate with data scientists, analysts, and other business stakeholders to understand data requirements and deliver solutions.
Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift) ensuring data quality, integrity, security, and accessibility.
Implement data quality and validation processes to ensure data accuracy and reliability.
Develop and maintain documentation for data processes, architecture, and workflows.
Monitor and troubleshoot data pipeline performance and resolve issues promptly.
Consulting and Analysis: Meet regularly with defined clients and stakeholders to understand and analyze their processes and needs. Determine requirements to present possible solutions or improvements.
Technology Evaluation: Stay updated with the latest industry trends and technologies to continuously improve data engineering practices.
Requirements
Cloud Expertise: Expert-level proficiency in at least one major cloud platform (AWS, Azure, or GCP) with extensive experience in their respective data services (e.g., AWS S3, Glue, Lambda, Redshift, Kinesis; Azure Data Lake, Data Factory, Synapse, Event Hubs; GCP BigQuery, Dataflow, Pub/Sub, Cloud Storage); experience with AWS data cloud platform preferred
SQL Mastery: Advanced SQL writing and optimization skills.
Data Warehousing: Deep understanding of data warehousing concepts, Kimball methodology, and various data modeling techniques (dimensional, star/snowflake schemas).
Big Data Technologies: Experience with big data processing frameworks (e.g., Spark, Hadoop, Flink) is a plus.
Database Systems: Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
DevOps/CI/CD: Familiarity with DevOps principles and CI/CD pipelines for data solutions.
Hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Proficiency in Python and SQL
Desired Skills, Experience and Abilities
4+ years of progressive experience in data engineering, with a significant portion dedicated to cloud-based data platforms.
ETL/ELT Tools: Hands-on experience with ETL/ELT tools and orchestrators (e.g., Apache Airflow, Azure Data Factory, AWS Glue, dbt).
Data Governance: Understanding of data governance, data quality, and metadata management principles.
AWS Experience: Ability to evaluate AWS cloud applications, make architecture recommendations; AWS solutions architect certification (Associate or Professional) is a plus
Familiarity with Snowflake
Knowledge of dbt (data build tool)
Strong problem-solving skills, especially in data pipeline troubleshooting and optimization
If you are interested in pursuing this opportunity, please respond back and include the following:
Full CURRENT Resume
Required compensation
Contact information
Availability
Upon receipt, one of our managers will contact you to discuss in full
STEPHEN FLEISCHNER
Recruiting Manager
INTERMEDIA GROUP, INC.
EMAIL: *******************************
$140k-185k yearly
Data Engineer I
Epicured, Inc.
Data engineer job in Glen Cove, NY
Job Description
Why Epicured?
Epicured is on a mission to combat and prevent chronic disease, translating scientific research into high-quality food products and healthcare services nationwide. Our evidence-based approach brings together the best of the clinical, culinary, and technology worlds to help people eat better, feel better, and live better one meal at a time.
By joining Epicured's Technology team, you'll help power the data infrastructure that supports Medicaid programs, clinical services, life sciences initiatives, and direct-to-consumer operations - enabling better decisions, better outcomes, and scalable growth.
Role Overview
Epicured is seeking a Data Engineer I to support data ingestion, reporting, and analytics across multiple business lines. Reporting to the SVP of Software Engineering, this role will focus on building and maintaining reliable reporting pipelines, supporting business requests, and managing data from a growing ecosystem of healthcare, operational, and e-commerce systems.
This position is ideal for a self-starter with strong SQL skills who is comfortable working with evolving requirements, healthcare-adjacent data, and modern data platforms such as Microsoft Fabric and Power BI.
Key Responsibilities
Build, maintain, and support reports across all Epicured business lines using Power BI, exports, and Microsoft Fabric.
Ingest and integrate new data sources, including SQL Server, operational systems, and external data exchanges.
Support reporting and analytics requests across Clinical & Life Sciences, Section 1115 Medicaid Waiver programs, Health Information Exchanges (e.g., Healthix), and Self-Pay e-commerce operations.
Handle HIPAA-sensitive data, ensuring proper governance, access control, and compliance standards are maintained.
Manage Shopify and other e-commerce data requests for Epicured's Self-Pay division.
Keep reporting environments organized, documented, and operational while prioritizing incoming requests.
Operate and help scale Epicured's Microsoft Fabric environment, contributing to platform strategy and best practices.
Partner with stakeholders to clarify ambiguous requirements and translate business questions into data solutions.
Qualifications
3+ years of experience in data engineering, analytics, or business intelligence roles.
Strong SQL skills with experience working in relational databases.
Experience with Azure, Microsoft Fabric, Power BI, or similar modern data platforms.
Strong proficiency in Excel / Google Sheets.
Ability to work independently and manage multiple priorities in a fast-growing environment.
Experience working with healthcare or HIPAA-adjacent data, including exposure to health information exchanges.
Familiarity with ETL / ELT pipelines and data modeling best practices.
Experience integrating operational, financial, logistics, and clinical datasets.
Preferred Qualifications
Experience with C#.
Python experience is a plus.
Healthcare or life sciences background.
Experience supporting analytics for Medicaid, payer, or regulated environments.
Compensation & Benefits
Salary Range: $115,000-$130,000 annually, commensurate with experience
Benefits include:
401(k)
Health, Dental, and Vision insurance
Unlimited Paid Time Off (PTO)
Opportunity to grow with Epicured's expanding data and technology organization
Equal Employment Opportunity
Epicured is proud to be an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of age, race, creed, color, national origin, religion, gender, sexual orientation, gender identity or expression, disability, veteran status, or any other protected status under federal, state, or local law.
$115k-130k yearly
Data Engineer
Innovative Rocket Technologies Inc.
Data engineer job in Hauppauge, NY
Job Description
Data is pivotal to our goal of frequent launch and rapid iteration. We're recruiting a Data Engineer at iRocket to build pipelines, analytics, and tools that support propulsion test, launch operations, manufacturing, and vehicle performance.
The Role
Design and build data pipelines for test stands, manufacturing machines, launch telemetry, and operations systems.
Develop dashboards, real-time monitoring, data-driven anomaly detection, performance trending, and predictive maintenance tools.
Work with engineers across propulsion, manufacturing, and operations to translate data needs into data products.
Maintain data architecture, ETL processes, cloud/edge-data systems, and analytics tooling.
Support A/B testing, performance metrics, and feed insights back into design/manufacturing cycles.
Requirements
Bachelor's degree in Computer Science, Data Engineering, or related technical field.
2+ years of experience building data pipelines, ETL/ELT workflows, and analytics systems.
Proficient in Python, SQL, cloud data platforms (AWS, GCP, Azure), streaming/real-time analytics, and dashboarding (e.g., Tableau, PowerBI).
Strong ability to work cross-functionally and deliver data products to engineering and operations teams.
Strong communication, documentation, and a curiosity-driven mindset.
Benefits
Health Care Plan (Medical, Dental & Vision)
Retirement Plan (401k, IRA)
Life Insurance (Basic, Voluntary & AD&D)
Paid Time Off (Vacation, Sick & Public Holidays)
Family Leave (Maternity, Paternity)
Short Term & Long Term Disability
Wellness Resources
$102k-146k yearly est.
ETL/Data Platform Engineer
Clarapath
Data engineer job in Hawthorne, NY
JOB TITLE: ETL/Data Platform Engineer
TYPE: Full time, regular
COMPENSATION: $130,000 - $180,000/yr
Clarapath is a medical robotics company based in Westchester County, NY. Our mission is to transform and modernize laboratory workflows with the goal of improving patient care, decreasing costs, and enhancing the quality and consistency of laboratory processes. SectionStar by Clarapath is a ground-breaking electro-mechanical system designed to elevate and automate the workflow in histology laboratories and provide pathologists with the tissue samples they need to make the most accurate diagnoses. Through the use of innovative technology, data, and precision analytics, Clarapath is paving the way for a new era of laboratory medicine.
Role Summary:
The ETL/Data Platform Engineer will play a key role in designing, building, and maintaining Clarapath's data pipelines and platform infrastructure supporting SectionStar, our advanced electro-mechanical device. This role requires a strong foundation in data engineering, including ETL/ELT development, data modeling, and scalable data platform design. Working closely with cross-functional teams, including software, firmware, systems, and mechanical engineering, this individual will enable reliable ingestion, transformation, and storage of device and operational data. The engineer will help power analytics, system monitoring, diagnostics, and long-term insights that support product performance, quality, and continuous improvement. We are seeking a proactive, detail-oriented engineer who thrives in a fast-paced, rapidly growing environment and is excited to apply data engineering best practices to complex, data-driven challenges in a regulated medical technology setting.
Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines for device, telemetry, and operational data
Build and optimize data models to support analytics, reporting, and system insights
Develop and maintain scalable data platform infrastructure (cloud and/or on-prem)
Ensure data quality, reliability, observability, and performance across pipelines
Support real-time or near real-time data ingestion where applicable
Collaborate with firmware and software teams to integrate device-generated data
Enable dashboards, analytics, and internal tools for engineering, quality, and operations teams
Implement best practices for data security, access control, and compliance
Troubleshoot pipeline failures and improve system resilience
Document data workflows, schemas, and platform architecture
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
3+ years of experience in data engineering, ETL development, or data platform roles
Strong proficiency in SQL and at least one programming language (Python preferred)
Experience building and maintaining ETL/ELT pipelines
Familiarity with data modeling concepts and schema design
Experience with cloud platforms (AWS, GCP, or Azure) or hybrid environments
Understanding of data reliability, monitoring, and pipeline orchestration
Strong problem-solving skills and attention to detail
Experience with streaming data or message-based systems (ex: Kafka, MQTT), a plus
Experience working with IoT, device, or telemetry data, a plus
Familiarity with data warehouses and analytics platforms, a plus
Experience in regulated environments (medical device, healthcare, life sciences), a plus
Exposure to DevOps practices, CI/CD, or infrastructure-as-code, a plus
Company Offers:
Competitive salary, commensurate with experience and education
Comprehensive benefits package available: (healthcare, vision, dental and life insurances; 401k; PTO and holidays)
A collaborative and diverse work environment where our teams thrive on solving complex challenges
Ability to file IP with the company
Connections with world class researchers and their laboratories
Collaboration with strategic leaders in healthcare and pharmaceutical world
A mission driven organization where every team member will be responsible for changing the standards of delivering healthcare
Clarapath is proud to be an equal opportunity employer. We are committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. In addition to federal law requirements, Clarapath complies with applicable state and local laws governing nondiscrimination in employment. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
$130k-180k yearly
Tech Lead, Data & Inference Engineer
Catalyst Labs
Data engineer job in Stamford, CT
Our Client
A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised $12 million in funding and are transforming how business-to-business (B2B) marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google, and YouTube, they enable marketing teams to achieve higher match rates, reduce wasted advertising spend, and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally focused on business-to-consumer activity, they are redefining how B2B brands scale demand generation and account-based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with the Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above market base + bonus + equity
Roles & Responsibilities
Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party APIs into trusted, low-latency systems.
Take full ownership of reliability, cost, and service-level objectives, including achieving 99.9% uptime, maintaining minute-level latency, and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions.
Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring and quality assurance using large language models and retrieval augmented generation. Manage version control, caching and evaluation loops.
Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes and usage based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade-offs, and reversibility while making practical build-versus-buy decisions.
Scale integration with APIs and internal services while ensuring data consistency, high data quality, and support for both real-time and batch-oriented use cases.
Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization.
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: able to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
$84k-114k yearly est.
C++ Market Data Engineer (USA)
Trexquant Investment
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
Requirements
BS/MS/PhD in Computer Science, Electrical Engineering, or related field.
3+ years of professional C++ (C++14/17/20) development experience focused on low-latency, high-throughput systems.
Proven track record building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and time-series databases or in-memory caches.
Comfort with scripting in Python for prototyping, testing, and ops automation.
Excellent problem-solving skills, ownership mindset, and ability to thrive in a fast-paced trading environment.
Familiarity with containerization (Docker/K8s) and public-cloud networking (AWS, GCP).
Benefits
Competitive salary, plus bonus based on individual and company performance.
Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
PPO Health, dental and vision insurance premiums fully covered for you and your dependents.
Pre-Tax Commuter Benefits
Applications are now open for our NYC office, opening in September 2026.
The base salary range is $175,000 - $200,000 depending on the candidate's educational and professional background. Base salary is one component of Trexquant's total compensation, which may also include a discretionary, performance-based bonus.
Trexquant is an Equal Opportunity Employer
$175k-200k yearly
Salesforce Data 360 Architect
Slalom
Data engineer job in White Plains, NY
Who You'll Work With In our Salesforce business, we help our clients bring the most impactful customer experiences to life and we do that in a way that makes our clients the hero of their transformation story. We are passionate about and dedicated to building a diverse and inclusive team, recognizing that diverse team members who are celebrated for bringing their authentic selves to their work build solutions that reach more diverse populations in innovative and impactful ways. Our team is comprised of customer strategy experts, Salesforce-certified experts across all Salesforce capabilities, industry experts, organizational and cultural change consultants, and project delivery leaders. As the 3rd largest Salesforce partner globally and in North America, we are committed to growing and developing our Salesforce talent, offering continued growth opportunities, and exposing our people to meaningful work that aligns to their personal and professional goals.
We're looking for individuals who have experience implementing Salesforce Data Cloud or similar platforms and are passionate about customer data. The ideal candidate has a desire for continuous professional growth and can deliver complex, end-to-end Data Cloud implementations from strategy and design, through to data ingestion, segment creation, and activation; all while working alongside both our clients and other delivery disciplines. Our Global Salesforce team is looking to add a passionate Principal or Senior Principal to take on the role of Data Cloud Architect within our Salesforce practice.
What You'll Do:
Responsible for business requirements gathering, architecture design, data ingestion and modeling, identity resolution setup, calculated insight configuration, segment creation and activation, end-user training, and support procedures
Lead technical conversations with both business and technical client teams; translate those outcomes into well-architected solutions that best utilize Salesforce Data Cloud and the wider Salesforce ecosystem
Ability to direct technical teams, both internal and client-side
Provide subject matter expertise as warranted via customer needs and business demands
Build lasting relationships with key client stakeholders and sponsors
Collaborate with digital specialists across disciplines to innovate and build premier solutions
Participate in compiling industry research, thought leadership and proposal materials for business development activities
Experience with scoping client work
Experience with hyperscale data platforms (ex: Snowflake), robust database modeling and data governance is a plus.
What You'll Bring:
Have been part of at least one Salesforce Data Cloud implementation
Familiarity with Salesforce's technical architecture: APIs, Standard and Custom Objects, APEX. Proficient with ANSI SQL and supported functions in Salesforce Data Cloud
Strong proficiency toward presenting complex business and technical concepts using visualization aids
Ability to conceptualize and craft sophisticated wireframes, workflows, and diagrams
Strong understanding of data management concepts, including data quality, data distribution, data modeling and data governance
Detailed understanding of the fundamentals of digital marketing and complementary Salesforce products that organizations may use to run their business. Experience defining strategy, developing requirements, and implementing practical business solutions.
Experience in delivering projects using Agile-based methodologies
Salesforce Data Cloud certification preferred
Additional Salesforce certifications like Administrator are a plus
Strong interpersonal skills
Bachelor's degree in a related field preferred, but not required
Open to travel (up to 50%)
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this role, we are hiring at the following levels and salary ranges:
East Bay, San Francisco, Silicon Valley:
Principal: $184,000-$225,000
San Diego, Los Angeles, Orange County, Seattle, Boston, Houston, New Jersey, New York City, Washington DC, Westchester:
Principal: $169,000-$206,000
All other locations:
Principal: $155,000-$189,000
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
We are committed to pay transparency and compliance with applicable laws. If you have questions or concerns about the pay range or other compensation information in this posting, please contact us at: ********************.
We will accept applications until January 30, 2025 or until the position is filled.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team or contact ****************************** if you require accommodations during the interview process.
$184k-225k yearly
Network Planning Data Scientist (Manager)
Atlas Air Worldwide Holdings
Data engineer job in White Plains, NY
Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes.
This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency.
Responsibilities
Analyze and Monitor Network Performance
Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements.
Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness.
Modeling & Optimization
Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions.
Support the design, testing, and implementation of software optimization tools to enhance operational decision-making.
Reporting & Governance
Develop periodic performance and reliability reports for customers, assisting in presentation creation
Produce regular and ad hoc reports to monitor compliance with established operating parameters.
Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets.
Cross-Functional Collaboration
Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution.
Provide insights that inform schedule adjustments, fleet utilization, and contingency planning.
Innovation & Continuous Improvement
Identify opportunities to streamline workflows and automate recurring analyses.
Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility.
Qualifications
Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools and reporting dashboards (e.g., Tableau, Power BI)
Bachelor's degree in Industrial Engineering, Operations Research, Applied Mathematics, Data Science, or a related quantitative discipline required, or equivalent work experience.
5+ years of experience in strategy, operations planning, finance or continuous improvement, ideally with airline network planning
Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation.
Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment.
Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders.
A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement.
Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations
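To make the SQL requirement above concrete, here is a minimal sketch of the kind of delay analysis this role describes. The schema, route codes, and the D15 on-time threshold are all invented for illustration; an actual flight-operations table would look different.

```python
import sqlite3

# Hypothetical flight-level operations table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE flights (
    flight_id   INTEGER PRIMARY KEY,
    route       TEXT,
    delay_min   INTEGER   -- departure delay in minutes (0 = on time)
);
INSERT INTO flights (route, delay_min) VALUES
    ('ANC-ORD', 0), ('ANC-ORD', 45), ('ANC-ORD', 10),
    ('CVG-LEJ', 0), ('CVG-LEJ', 0),  ('CVG-LEJ', 12);
""")

# A common reliability KPI: on-time percentage per route, where
# "on time" is taken here to mean a delay of 15 minutes or less (D15).
rows = conn.execute("""
    SELECT route,
           COUNT(*)                                          AS legs,
           ROUND(100.0 * SUM(delay_min <= 15) / COUNT(*), 1) AS d15_pct,
           AVG(delay_min)                                    AS avg_delay
    FROM flights
    GROUP BY route
    ORDER BY d15_pct ASC
""").fetchall()

for route, legs, d15_pct, avg_delay in rows:
    print(route, legs, d15_pct, avg_delay)
# ANC-ORD 3 66.7 18.33...
# CVG-LEJ 3 100.0 4.0
```

Sorting worst-first by D15 surfaces the routes where schedule adjustments would protect reliability most, which is the kind of insight the responsibilities above call for.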
Salary Range: $131,500 - $177,500
Financial offer within the stated range will be based on multiple factors to include but not limited to location, relevant experience/level and skillset.
The Company is an Equal Opportunity Employer. It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity or any other characteristic or status protected in accordance with applicable federal, state and local laws.
If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ******************************************
To view our Pay Transparency Statement, please click here: Pay Transparency Statement
“Know Your Rights: Workplace Discrimination is Illegal” Poster
The "EEO Is The Law" Poster
Data Modeler / Architect
Mindlance 4.6
Data engineer job in Bristol, CT
Job Title: Data Modeler / Architect
Duration: 9 Months
Responsibilities:
Partner with our architecture, data development and application development teams to create relational and dimensional data structures for data management solutions.
Develop and maintain various data subject area data models for Data Warehouse, Operational Data Store and Data Marts.
Define relationships and touch points between data subject areas.
Data mapping /Data Lineage mapping.
Gap analysis.
Analyze complex data structures.
Compose accurate and complete definitions for entities and attributes.
Perform source system analysis and data profiling.
Design and validate data solutions which are achieved through the application of industry proven architectural principles, standards and governance.
Basic Qualifications:
5+ years of experience working as a Data Warehouse Data Modeler / Architect.
Undergraduate degree in applicable area of expertise or equivalent work experience
Experience in building complex, large scale logical data models with Erwin, E/R Studio, or another data modeling tool
Experience in relational and dimensional schemas for Enterprise Data Warehouse
Experience in data modeling (physical and logical), ER diagrams, data dictionary, data mapping
Experience in Data Architecture and Database design
Experience with database technologies (e.g. Oracle)
Excellent analytical and communication skills
Preferred Qualifications:
Exposure to the below technologies and/or skills will be considered a plus:
Oracle 11G, Data movement architectural patterns, and a thorough understanding of system development lifecycles.
Experience in data modeling, data mapping, and data architecture is desirable.
Required Education: BS
Additional Information
Thanks & Regards,
Vikram Bhalla | Team Recruitment | Mindlance, Inc. | W: ************
All your information will be kept confidential according to EEO guidelines.
$89k-124k yearly est.
OFSAA Data Architect
Tectammina
Data engineer job in Norwalk, CT
Mandatory Technical Skills:
Strong in data warehousing concepts and dimensional modeling - min. 6 years' experience.
Experience in OFSAA data modeling - min. 3 years.
Translate business requirements into OFSAA designs and map data elements from models to the OFSAA data model.
Strong troubleshooting skills
Hands-on experience with extracting, loading of data from source systems into the OFSAA model.
Data modeling (star / 3NF / cube), ETL design and build.
Extensive experience in
OFSAA Infrastructure, OFSAA Data Model, & Erwin Data Modeler.
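The star-schema modeling skill listed above can be sketched as follows. This is an illustrative minimal example only: the table and column names are invented and do not reflect the actual OFSAA data model, which is proprietary and far larger.

```python
import sqlite3

# Minimal star schema: one fact table at (account, date) grain,
# joined to denormalized dimension tables. Names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_account (
    account_key  INTEGER PRIMARY KEY,
    account_type TEXT,          -- e.g. 'DEPOSIT', 'LOAN'
    currency     TEXT
);
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20240131
    fiscal_month TEXT
);
-- The fact table carries only measures plus foreign keys
-- to the dimensions; all descriptive attributes live in the dims.
CREATE TABLE fact_balance (
    account_key INTEGER REFERENCES dim_account(account_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    balance     REAL
);
""")

conn.executemany("INSERT INTO dim_account VALUES (?, ?, ?)",
                 [(1, 'DEPOSIT', 'USD'), (2, 'LOAN', 'USD')])
conn.execute("INSERT INTO dim_date VALUES (20240131, '2024-M01')")
conn.executemany("INSERT INTO fact_balance VALUES (?, ?, ?)",
                 [(1, 20240131, 500.0), (2, 20240131, -200.0)])

# Star-schema queries slice the facts by dimension attributes
# with a single join hop per dimension.
total = conn.execute("""
    SELECT SUM(f.balance)
    FROM fact_balance f
    JOIN dim_account a ON a.account_key = f.account_key
    WHERE a.account_type = 'DEPOSIT'
""").fetchone()[0]
print(total)  # 500.0
```

A 3NF model would instead normalize the dimension attributes into separate related tables; the star form trades some redundancy for simpler, faster analytical queries.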
Desirable Technical Skills :
OBIEE Analytics and BI - ETL Knowledge
Mandatory Functional Skills :
Ability to co-ordinate with multiple technical teams, Business users and Customer
Strong communication
Strong troubleshooting skills
Should have a strong understanding of OFSAA LRM, Basel, and OBIEE Analytics.
Desirable Functional Skills :
Banking and finance service Industry
Qualifications
Bachelor's degree or higher
Additional Information
Job Status: Permanent
Share profiles to: *****************************
Contact: ************
Keep the subject line with the job title and location.
$87k-119k yearly est.
Data Architect - Power & Utilities - Senior Manager- Consulting - Location OPEN
Ernst & Young Oman 4.7
Data engineer job in Stamford, CT
At EY, we're all in to shape your future with confidence.
We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
AI & Data - Data Architecture - Senior Manager - Power & Utilities Sector
EY is seeking a motivated professional with solid experience in the utilities sector to serve as a Senior Manager with a robust background in Data Architecture, Data Modernization, end-to-end data capabilities, AI, Gen AI, and Agentic AI, preferably with a power systems / electrical engineering background and a record of delivering business use cases in Transmission / Distribution / Generation / Customer. The ideal candidate will have a history of working for consulting companies and be well-versed in the fast-paced culture of consulting work. This role is dedicated to the utilities sector, where the successful candidate will craft, deploy, and maintain large-scale AI-data-ready architectures.
The opportunity
You will help our clients enable better business outcomes while working in the rapidly growing Power & Utilities sector. You will have the opportunity to lead and develop your skill set to keep up with the ever-growing demands of the modern data platform. During implementation you will solve complex analytical problems to bring data to insights and enable the use of ML and AI at scale for your clients. This is a high growth area and a high visibility role with plenty of opportunities to enhance your skillset and build your career.
As a Senior Manager in Data Architecture, you will have the opportunity to lead transformative technology projects and programs that align with our organizational strategy to achieve impactful outcomes. You will provide assurance to leadership by managing timelines, costs, and quality, and lead both technical and non-technical project teams in the development and implementation of cutting-edge technology solutions and infrastructure. You will have the opportunity to be face to face with external clients and build new and existing relationships in the sector. Your specialized knowledge in project and program delivery methods, including Agile and Waterfall, will be instrumental in coaching others and proposing solutions to technical constraints.
Your key responsibilities
In this pivotal role, you will be responsible for the effective management and delivery of one or more processes, solutions, and projects, with a focus on quality and effective risk management. You will drive continuous process improvement and identify innovative solutions through research, analysis, and best practices. Managing professional employees or supervising team members to deliver complex technical initiatives, you will apply your depth of expertise to guide others and interpret internal/external issues to recommend quality solutions. Your responsibilities will include:
As Data Architect - Senior Manager, you will have an expert understanding of data architecture and data engineering and will be focused on problem-solving to design, architect, and present findings and solutions, leading more junior team members, and working with a wide variety of clients to sell and lead delivery of technology consulting services. You will be the go-to resource for understanding our clients' problems and responding with appropriate methodologies and solutions anchored around data architectures, platforms, and technologies. You are responsible for helping to win new business for EY. You are a trusted advisor with a broad understanding of digital transformation initiatives, the analytic technology landscape, industry trends and client motivations. You are also a charismatic communicator and thought leader, capable of going toe-to-toe with the C-level in our clients and prospects and willing and able to constructively challenge them.
Skills and attributes for success
To thrive in this role, you will need a combination of technical and business skills that will make a significant impact. Your skills will include:
Technical Skills
Applications Integration
Cloud Computing and Cloud Computing Architecture
Data Architecture Design and Modelling
Data Integration and Data Quality
AI/Agentic AI driven data operations
Experience delivering business use cases in Transmission / Distribution / Generation / Customer.
Strong relationship management and business development skills.
Become a trusted advisor to your clients' senior decision makers and internal EY teams by establishing credibility and expertise in both data strategy in general and in the use of analytic technology solutions to solve business problems.
Engage with senior business leaders to understand and shape their goals and objectives and their corresponding information needs and analytic requirements.
Collaborate with cross-functional teams (Data Scientists, Business Analysts, and IT teams) to define data requirements, design solutions, and implement data strategies that align with our clients' objectives.
Organize and lead workshops and design sessions with stakeholders, including clients, team members, and cross-functional partners, to capture requirements, understand use cases, personas, key business processes, brainstorm solutions, and align on data architecture strategies and projects.
Lead the design and implementation of modern data architectures, supporting transactional, operational, analytical, and AI solutions.
Direct and mentor global data architecture and engineering teams, fostering a culture of innovation, collaboration, and continuous improvement.
Establish data governance policies and practices, including data security, quality, and lifecycle management.
Stay abreast of industry trends and emerging technologies in data architecture and management, recommending innovations and improvements to enhance our capabilities.
To qualify for the role, you must have
A Bachelor's degree in a STEM field is required
12+ years professional consulting experience in industry or in technology consulting.
12+ years hands-on experience in architecting, designing, delivering or optimizing data lake solutions.
5+ years' experience with native cloud products and services such as Azure or GCP.
8+ years of experience mentoring and leading teams of data architects and data engineers, fostering a culture of innovation and professional development.
In-depth knowledge of data architecture principles and best practices, including data modelling, data warehousing, data lakes, and data integration.
Demonstrated experience in leading large data engineering teams to design and build platforms with complex architectures and diverse features including various data flow patterns, relational and no-SQL databases, production-grade performance, and delivery to downstream use cases and applications.
Hands-on experience in designing end-to-end architectures and pipelines that collect, process, and deliver data to its destination efficiently and reliably.
Proficiency in data modelling techniques and the ability to choose appropriate architectural design patterns, including Data Fabrics, Data Mesh, Lake Houses, or Delta Lakes.
Manage complex data analysis, migration, and integration of enterprise solutions to modern platforms, including code efficiency and performance optimizations.
Previous hands-on coding skills in languages commonly used in data engineering, such as Python, Java, or Scala.
Ability to design data solutions that can scale horizontally and vertically while optimizing performance.
Experience with containerization technologies like Docker and container orchestration platforms like Kubernetes for managing data workloads.
Experience with version control systems (e.g. Git) and knowledge of DevOps practices for automating data engineering workflows (DataOps).
Practical understanding of data encryption, access control, and security best practices to protect sensitive data.
Experience leading Infrastructure and Security engineers and architects in overall platform build.
Excellent leadership, communication, and project management skills.
Data Security and Database Management
Enterprise Data Management and Metadata Management
Ontology Design and Systems Design
Ideally, you'll also have
Master's degree in Electrical / Power Systems Engineering, Computer science, Statistics, Applied Mathematics, Data Science, Machine Learning or commensurate professional experience.
Experience working at big 4 or a major utility.
Experience with cloud data platforms like Databricks.
Experience in leading and influencing teams, with a focus on mentorship and professional development.
A passion for innovation and the strategic application of emerging technologies to solve real-world challenges.
The ability to foster an inclusive environment that values diverse perspectives and empowers team members.
Building and Managing Relationships
Client Trust and Value and Commercial Astuteness
Communicating With Impact and Digital Fluency
What we look for
We are looking for top performers who demonstrate a blend of technical expertise and business acumen, with the ability to build strong client relationships and lead teams through change. Emotional agility and hybrid collaboration skills are key to success in this dynamic role.
FY26NATAID
What we offer you
At EY, we'll develop you with future-focused skills and equip you with world-class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more.
We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $144,000 to $329,100. The base salary range for New York City Metro Area, Washington State and California (excluding Sacramento) is $172,800 to $374,000. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
Join us in our team‑led and leader‑enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40-60% of the time over the course of an engagement, project or year.
Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well‑being.
Are you ready to shape your future with confidence? Apply today.
EY accepts applications for this position on an on‑going basis.
For those living in California, please click here for additional information.
EY focuses on high‑ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1-800-EY-HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at ************************** .
How much does a data engineer earn in Bridgeport, CT?
The average data engineer in Bridgeport, CT earns between $73,000 and $131,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Bridgeport, CT
$98,000
What are the biggest employers of Data Engineers in Bridgeport, CT?
The biggest employers of Data Engineers in Bridgeport, CT are: