Senior Data Architect - Power & Utilities AI Platforms
Ernst & Young Oman 4.7
Data engineer job in Stamford, CT
A leading global consulting firm is seeking a Senior Manager in Data Architecture for the Power & Utilities sector. This role requires at least 12 years of consulting experience and expertise in data architecture and engineering. The successful candidate will manage technology projects, lead teams, and develop innovative data solutions that drive significant business outcomes. Strong relationship management and communication skills are essential for engaging with clients and stakeholders. Join us to help shape a better working world.
$112k-156k yearly est. 2d ago
Boxncase
Data engineer job in Commack, NY
About the Role
We believe that the best decisions are backed by data. We are seeking a curious and analytical Data Scientist to champion our data-driven culture.
In this role, you will act as a bridge between technical data and business strategy. You will mine massive datasets, build predictive models, and, most importantly, tell the story behind the numbers to help our leadership team make smarter choices. You are perfect for this role if you are as comfortable with SQL queries as you are with slide decks.
### What You Will Do
Exploratory Analysis: Dive deep into raw data to discover trends, patterns, and anomalies that others miss.
Predictive Modeling: Build and test statistical models (Regression, Time-series, Clustering) to forecast business outcomes and customer behavior.
Data Visualization: Create clear, impactful dashboards using Tableau, PowerBI, or Python libraries (Matplotlib/Seaborn) to visualize success metrics.
Experimentation: Design and analyze A/B tests to optimize product features and marketing campaigns.
Data Cleaning: Work with Data Engineers to clean and structure messy data for analysis.
Strategy: Present findings to stakeholders, translating complex math into clear, actionable business recommendations.
Requirements
Experience: 2+ years of experience in Data Science or Advanced Analytics.
The Toolkit: Expert proficiency in Python or R for statistical analysis.
Data Querying: Advanced SQL skills are non-negotiable (Joins, Window Functions, CTEs).
Math Mindset: Strong grasp of statistics (Hypothesis testing, distributions, probability).
Visualization: Ability to communicate data visually using Tableau, PowerBI, or Looker.
Communication: Excellent verbal and written skills; you can explain a p-value to a non-technical manager.
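The statistics bar these requirements describe (hypothesis testing, explaining a p-value, the A/B testing duty above) can be illustrated with a two-proportion z-test. This is a minimal sketch using only the standard library; the conversion counts are invented for the example.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (A/B test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented example: 5.0% vs 6.5% conversion on 4,000 users per arm
z, p = two_proportion_z_test(200, 4000, 260, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice a library routine (e.g. a statsmodels proportions test) would replace the hand-rolled CDF; the interview-relevant part is being able to explain what the resulting p-value does and does not mean.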
### Preferred Tech Stack (Keywords)
Languages: Python (Pandas, NumPy), R, SQL
Viz Tools: Tableau, PowerBI, Looker, Plotly
Machine Learning: Scikit-learn, XGBoost (applied to business problems)
Big Data: Spark, Hadoop, Snowflake
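As a concrete instance of the "advanced SQL" level the requirements call out (CTEs and window functions), here is a small self-contained sketch against an in-memory SQLite database; the table and column names are invented for illustration.

```python
import sqlite3

# Hypothetical orders table to demonstrate a CTE plus a window function
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, order_day INTEGER, amount REAL);
INSERT INTO orders VALUES
  ('alice', 1, 10.0), ('alice', 3, 25.0),
  ('bob',   2, 40.0), ('bob',   5, 15.0);
""")

query = """
WITH ranked AS (                        -- CTE
  SELECT customer, order_day, amount,
         ROW_NUMBER() OVER (            -- window function
           PARTITION BY customer ORDER BY order_day
         ) AS order_rank
  FROM orders
)
SELECT customer, amount FROM ranked WHERE order_rank = 1;
"""
# Each customer's first order amount
first_orders = dict(conn.execute(query).fetchall())
print(first_orders)
```

The same pattern (dedupe or "latest/first per entity" via `ROW_NUMBER` in a CTE) carries over directly to warehouse engines like Snowflake mentioned above.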
Benefits
Salary Range: $50,000 - $180,000 USD / year (Commensurate with location and experience)
Remote Friendly: Work from where you are most productive.
Learning Budget: Stipend for data courses (Coursera, DataCamp) and books.
$50k-180k yearly 43d ago
Principal Data Scientist
Maximus 4.3
Data engineer job in Bridgeport, CT
Description & Requirements
Maximus has an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator, supporting both the enterprise and our clients. We are looking for an accomplished, hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutions to capture new work).
This is a remote position.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Essential Duties and Responsibilities:
- Develop, collaborate, and advance the applied and responsible use of AI, ML, simulation, and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Maintain current knowledge of the AI technology landscape and emerging developments, and evaluate their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements (required skills that align with contract LCAT, verifiable, and measurable):
- 10+ years of relevant Software Development + AI / ML / DS experience
- Professional Programming experience (e.g. Python, R, etc.)
- Experience with AI / Machine Learning
- Experience working as a contributor on a team
- Experience leading AI/DS/or Analytics teams
- Experience mentoring Junior Staff
- Experience with Modeling and Simulation
- Experience with program management
Preferred Skills and Qualifications:
- Masters in quantitative discipline (Math, Operations Research, Computer Science, etc.)
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors
- Ability to leverage statistics to identify true signals from noise or clutter
- Experience working as an individual contributor in AI or modeling and simulation
- Use of state-of-the-art technology to solve operational problems in AI, Machine Learning, or Modeling and Simulation spheres
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions
- Use and development of program automation, CI/CD, DevSecOps, and Agile
- Experience managing technical teams delivering technical solutions for clients.
- Experience working with optimization problems like scheduling
- Experience with Data Analytics and Visualizations
- Cloud certifications (AWS, Azure, or GCP)
- 10+ yrs of related experience in AI, advanced analytics, computer science, or software development
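The "identify true signals from noise or clutter" qualification above can be sketched, in very reduced form, as a z-score outlier check; the readings and threshold are invented for illustration, and a production detector would use robust statistics rather than a raw mean.

```python
from statistics import mean, stdev

def flag_anomalies(series, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    from the series mean."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) > threshold * sigma]

# One spike hidden in otherwise stable sensor readings
readings = [10.1, 9.8, 10.3, 10.0, 55.0, 9.9, 10.2]
print(flag_anomalies(readings, threshold=2.0))
```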
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary: $156,640.00
Maximum Salary: $234,960.00
$77k-112k yearly est. Easy Apply 2d ago
Lead Data Engineer
Launch Potato
Data engineer job in New Haven, CT
WHO ARE WE?
Launch Potato is a profitable digital media company that reaches more than 30M monthly visitors through brands such as FinanceBuzz, All About Cookies, and OnlyInYourState.
As The Discovery and Conversion Company, our mission is to connect consumers with the world's leading brands through data-driven content and technology.
Headquartered in South Florida with a remote-first team spanning over 15 countries, we've built a high-growth, high-performance culture where speed, ownership, and measurable impact drive success.
WHY JOIN US?
At Launch Potato, you'll accelerate your career by owning outcomes, moving fast, and driving impact with a global team of high-performers.
BASE SALARY: $150,000 to $190,000 per year
MUST HAVE:
5+ years of experience in data engineering within fast-paced, cloud-native environments
Deep expertise in Python, SQL, Docker, and AWS (S3, Glue, Kinesis, Athena/Presto)
Experience building and managing scalable ETL pipelines and data lake infrastructure
Familiarity with distributed systems, Spark, and data quality best practices
Strong cross-functional collaboration skills to support BI, analytics, and engineering teams
EXPERIENCE: 5+ years of data engineering experience in an AWS-based environment where data powers decision-making across product, marketing, and operations.
YOUR ROLE
Lead scalable data engineering efforts that empower cross-functional teams with reliable, timely, and actionable data, ensuring Launch Potato's analytics and business intelligence infrastructure fuels strategic growth.
OUTCOMES
Build and optimize scalable, efficient ETL and data lake processes that proactively catch issues before they impact the business
Own the ingestion, modeling, and transformation of structured and unstructured data to support reporting and analysis across all business units
Partner closely with BI and Analytics to deliver clean, query-ready datasets that improve user acquisition, engagement, and revenue growth
Maintain and enhance database monitoring, anomaly detection, and quality assurance workflows
Serve as the internal point of contact for reporting infrastructure-delivering ad hoc data analyses and driving consistent data integrity
Drive adoption and advancement of Looker dashboards by ensuring robust and scalable backend data support
Contribute to the future of Launch Potato's data team by mentoring peers and shaping a high-performance, quality-first engineering culture
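The first outcome above, catching data issues before they impact the business, is commonly implemented as a validation gate in the pipeline. A reduced sketch follows; the field names and rules are invented for illustration.

```python
# Validate each record and quarantine failures with reasons attached,
# so bad rows never reach downstream reporting.
def validate(record):
    errors = []
    if not record.get("user_id"):
        errors.append("missing user_id")
    if record.get("revenue", 0) < 0:
        errors.append("negative revenue")
    return errors

def partition_batch(batch):
    """Split a batch into loadable rows and quarantined (row, reasons) pairs."""
    good, quarantined = [], []
    for rec in batch:
        errs = validate(rec)
        if errs:
            quarantined.append((rec, errs))
        else:
            good.append(rec)
    return good, quarantined

batch = [
    {"user_id": "u1", "revenue": 12.5},
    {"user_id": "", "revenue": 3.0},
    {"user_id": "u3", "revenue": -1.0},
]
good, bad = partition_batch(batch)
print(f"{len(good)} loadable, {len(bad)} quarantined")
```

In an AWS Glue or Spark job the same pattern appears as a filter stage writing rejects to a quarantine location (e.g. an S3 prefix) for later triage.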
COMPETENCIES
Data Engineering Mastery: Demonstrates technical excellence in building data pipelines, troubleshooting distributed systems, and scaling infrastructure using AWS and open-source tools
Cross-Functional Collaboration: Communicates clearly across technical and non-technical teams, translating business needs into robust data solutions
Proactive Ownership: Operates with a strong sense of accountability; identifies and solves data issues independently and efficiently
Quality-Driven Execution: Holds a high bar for data accuracy, auditability, and documentation throughout all systems and workflows
Strategic Thinking: Anticipates how data infrastructure impacts wider company OKRs and proactively suggests improvements and innovations
Growth Mindset: Seeks out opportunities to elevate team capabilities, mentor others, and stay ahead of evolving best practices in data engineering
TOTAL COMPENSATION
Base salary is set according to market rates for the nearest major metro and varies based on Launch Potato's Levels Framework. Your compensation package includes a base salary, profit-sharing bonus, and competitive benefits. Launch Potato is a performance-driven company, which means once you are hired, future increases will be based on company and personal performance, not annual cost of living adjustments.
Want to accelerate your career? Apply now!
Since day one, we've been committed to having a diverse, inclusive team and culture. We are proud to be an Equal Employment Opportunity company. We value diversity, equity, and inclusion.
We do not discriminate based on race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
$150k-190k yearly Auto-Apply 2d ago
Data Engineer with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Intermedia Group
Data engineer job in Ridgefield, CT
OPEN JOB: Data Engineer with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
HYBRID: This candidate will work on site 2-3x per week at the Ridgefield, CT location
SALARY: $140,000 to $185,000
2 Openings
NOTE: CANDIDATE MUST BE US CITIZEN OR GREEN CARD HOLDER
We are seeking a highly skilled and experienced Data Engineer to design, build, and maintain our scalable and robust data infrastructure on a cloud platform. In this pivotal role, you will be instrumental in enhancing our data infrastructure, optimizing data flow, and ensuring data availability. You will be responsible for both the hands-on implementation of data pipelines and the strategic design of our overall data architecture.
Seeking a candidate with hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation; proficiency in Python and SQL; and DevOps/CI/CD experience.
Duties & Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
Collaborate with data architects, modelers and IT team members to help define and evolve the overall cloud-based data architecture strategy, including data warehousing, data lakes, streaming analytics, and data governance frameworks
Collaborate with data scientists, analysts, and other business stakeholders to understand data requirements and deliver solutions.
Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift) ensuring data quality, integrity, security, and accessibility.
Implement data quality and validation processes to ensure data accuracy and reliability.
Develop and maintain documentation for data processes, architecture, and workflows.
Monitor and troubleshoot data pipeline performance and resolve issues promptly.
Consulting and Analysis: Meet regularly with defined clients and stakeholders to understand and analyze their processes and needs. Determine requirements to present possible solutions or improvements.
Technology Evaluation: Stay updated with the latest industry trends and technologies to continuously improve data engineering practices.
Requirements
Cloud Expertise: Expert-level proficiency in at least one major cloud platform (AWS, Azure, or GCP) with extensive experience in their respective data services (e.g., AWS S3, Glue, Lambda, Redshift, Kinesis; Azure Data Lake, Data Factory, Synapse, Event Hubs; GCP BigQuery, Dataflow, Pub/Sub, Cloud Storage); experience with AWS data cloud platform preferred
SQL Mastery: Advanced SQL writing and optimization skills.
Data Warehousing: Deep understanding of data warehousing concepts, Kimball methodology, and various data modeling techniques (dimensional, star/snowflake schemas).
Big Data Technologies: Experience with big data processing frameworks (e.g., Spark, Hadoop, Flink) is a plus.
Database Systems: Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
DevOps/CI/CD: Familiarity with DevOps principles and CI/CD pipelines for data solutions.
Hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Proficiency in Python and SQL
Desired Skills, Experience and Abilities
4+ years of progressive experience in data engineering, with a significant portion dedicated to cloud-based data platforms.
ETL/ELT Tools: Hands-on experience with ETL/ELT tools and orchestrators (e.g., Apache Airflow, Azure Data Factory, AWS Glue, dbt).
Data Governance: Understanding of data governance, data quality, and metadata management principles.
AWS Experience: Ability to evaluate AWS cloud applications, make architecture recommendations; AWS solutions architect certification (Associate or Professional) is a plus
Familiarity with Snowflake
Knowledge of dbt (data build tool)
Strong problem-solving skills, especially in data pipeline troubleshooting and optimization
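The pipeline-troubleshooting emphasis above (monitor, resolve issues promptly, optimize) often comes down to making steps resilient to transient failures. A hypothetical sketch of a retry wrapper with exponential backoff follows; the failing extract step is simulated for the example.

```python
import time

def with_retries(step, attempts=3, base_delay=0.01):
    """Run `step`, retrying with exponential backoff on transient errors."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except RuntimeError:
            if attempt == attempts:
                raise  # out of retries: surface the error to the orchestrator
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated flaky source that succeeds on the third call
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return ["row1", "row2"]

rows = with_retries(flaky_extract)
print(rows, "after", calls["n"], "attempts")
```

Orchestrators named in this listing (Airflow, Step Functions) provide the same retry/backoff behavior declaratively; the sketch just shows the mechanism.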
If you are interested in pursuing this opportunity, please respond back and include the following:
Full CURRENT Resume
Required compensation
Contact information
Availability
Upon receipt, one of our managers will contact you to discuss in full
STEPHEN FLEISCHNER
Recruiting Manager
INTERMEDIA GROUP, INC.
EMAIL: *******************************
$140k-185k yearly Easy Apply 60d+ ago
ETL/Data Platform Engineer
Clarapath
Data engineer job in Hawthorne, NY
JOB TITLE: ETL/Data Platform Engineer
TYPE: Full time, regular
COMPENSATION: $130,000 - $180,000/yr
Clarapath is a medical robotics company based in Westchester County, NY. Our mission is to transform and modernize laboratory workflows with the goal of improving patient care, decreasing costs, and enhancing the quality and consistency of laboratory processes. SectionStar by Clarapath is a ground-breaking electro-mechanical system designed to elevate and automate the workflow in histology laboratories and provide pathologists with the tissue samples they need to make the most accurate diagnoses. Through the use of innovative technology, data, and precision analytics, Clarapath is paving the way for a new era of laboratory medicine.
Role Summary:
The ETL/Data Platform Engineer will play a key role in designing, building, and maintaining Clarapath's data pipelines and platform infrastructure supporting SectionStar, our advanced electro-mechanical device. This role requires a strong foundation in data engineering, including ETL/ELT development, data modeling, and scalable data platform design. Working closely with cross-functional teams including software, firmware, systems, and mechanical engineering, this individual will enable reliable ingestion, transformation, and storage of device and operational data. The engineer will help power analytics, system monitoring, diagnostics, and long-term insights that support product performance, quality, and continuous improvement. We are seeking a proactive, detail-oriented engineer who thrives in a fast-paced, rapidly growing environment and is excited to apply data engineering best practices to complex, data-driven challenges in a regulated medical technology setting.
Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines for device, telemetry, and operational data
Build and optimize data models to support analytics, reporting, and system insights
Develop and maintain scalable data platform infrastructure (cloud and/or on-prem)
Ensure data quality, reliability, observability, and performance across pipelines
Support real-time or near real-time data ingestion where applicable
Collaborate with firmware and software teams to integrate device-generated data
Enable dashboards, analytics, and internal tools for engineering, quality, and operations teams
Implement best practices for data security, access control, and compliance
Troubleshoot pipeline failures and improve system resilience
Document data workflows, schemas, and platform architecture
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
3+ years of experience in dataengineering, ETL development, or data platform roles
Strong proficiency in SQL and at least one programming language (Python preferred)
Experience building and maintaining ETL/ELT pipelines
Familiarity with data modeling concepts and schema design
Experience with cloud platforms (AWS, GCP, or Azure) or hybrid environments
Understanding of data reliability, monitoring, and pipeline orchestration
Strong problem-solving skills and attention to detail
Experience with streaming data or message-based systems (ex: Kafka, MQTT), a plus
Experience working with IoT, device, or telemetry data, a plus
Familiarity with data warehouses and analytics platforms, a plus
Experience in regulated environments (medical device, healthcare, life sciences), a plus
Exposure to DevOps practices, CI/CD, or infrastructure-as-code, a plus
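The device and telemetry data this role centers on typically gets rolled up from raw events into windowed aggregates for monitoring and diagnostics. A toy sketch follows; the device IDs, timestamps, and readings are invented, and a real pipeline would do this in a streaming framework rather than in memory.

```python
from collections import defaultdict

def per_minute_mean(events):
    """Aggregate (device_id, ts_seconds, value) events into per-device,
    per-minute mean readings."""
    buckets = defaultdict(list)
    for device_id, ts_seconds, value in events:
        # Bucket by integer minute since epoch
        buckets[(device_id, ts_seconds // 60)].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

events = [
    ("dev-01", 5, 36.2), ("dev-01", 42, 36.6),
    ("dev-01", 65, 37.0), ("dev-02", 10, 35.0),
]
rollup = per_minute_mean(events)
print(rollup)
```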
Company Offers:
Competitive salary, commensurate with experience and education
Comprehensive benefits package available (healthcare, vision, dental, and life insurance; 401k; PTO and holidays)
A collaborative and diverse work environment where our teams thrive on solving complex challenges
Ability to file IP with the company
Connections with world class researchers and their laboratories
Collaboration with strategic leaders in the healthcare and pharmaceutical world
A mission driven organization where every team member will be responsible for changing the standards of delivering healthcare
Clarapath is proud to be an equal opportunity employer. We are committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. In addition to federal law requirements, Clarapath complies with applicable state and local laws governing nondiscrimination in employment. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
$130k-180k yearly 15d ago
Data Engineer (AI, ML, and Data Science)
Consumer Reports
Data engineer job in Yonkers, NY
Job Description
WHO WE ARE
Consumer Reports is an independent, nonprofit organization dedicated to a fair and just marketplace for all. CR is known for our rigorous testing and trusted ratings on thousands of products and services. We report extensively on consumer trends and challenges, and survey millions of people in the U.S. each year. We leverage our evidence-based approach to advocate for consumer rights, working with policymakers and companies to find solutions for safer products and fair practices.
Our mission starts with you. We offer medical benefits that start on your first day as a CR employee that include behavioral health coverage, family planning and a generous 401K match. Learn more about how CR advocates on behalf of our employees.
OVERVIEW
Data powers everything we do at CR-and it's the foundation for our AI and machine learning efforts that are transforming how we serve consumers.
The Data Engineer (AI/ML & Data Science) will play a critical role in building the data infrastructure that powers advanced AI applications, machine learning models, and analytics systems across CR. Reporting to the Associate Director, AI/ML & Data Science, you will design and maintain robust data pipelines and services that support experimentation, model training, and AI application deployment.
If you're passionate about solving complex data challenges, working with cutting-edge AI technologies, and enabling impactful, data-driven products that support CR's mission, this is the role for you.
This is a hybrid position. This position is not eligible for sponsorship or relocation assistance.
How You'll Make An Impact
As a mission-based organization, CR and our Software team are pursuing an AI strategy that will drive value for our customers, give our employees superpowers, and address AI harms in the digital marketplace. We're looking for an AI/ML engineer to help us execute on our multi-year roadmap around generative AI.
As a Data Engineer (AI/ML & Data Science) you will:
Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data to support AI/ML model and application development, evaluation, and monitoring.
Build and optimize data processing workflows in Databricks, AWS SageMaker, or similar cloud platforms.
Collaborate with AI/ML engineers to deliver clean, reliable datasets for model training and inference.
Implement data quality, observability, and lineage tracking within the ML lifecycle.
Develop Data APIs/microservices to power AI applications and reporting/analytics dashboards.
Support the deployment of AI/ML applications by building and maintaining feature stores and data pipelines optimized for production workloads.
Ensure adherence to CR's data governance, security, and compliance standards across all AI and data workflows.
Work with Product, Engineering and other stakeholders to define project requirements and deliverables.
Integrate data from multiple internal and external systems, including APIs, third-party datasets, and cloud storage.
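The feature-store responsibility above is, at its core, computing features once and serving identical values for model training and inference. A toy in-memory sketch follows; the class, entity IDs, and feature names are invented for illustration, and platforms named in this listing (Databricks, SageMaker) provide managed equivalents.

```python
class InMemoryFeatureStore:
    """Minimal key-value feature store: write features per entity,
    read back a selected subset by name."""

    def __init__(self):
        self._features = {}

    def write(self, entity_id, features):
        self._features[entity_id] = dict(features)

    def read(self, entity_id, names):
        # Missing features come back as None rather than raising,
        # mirroring how serving paths tolerate sparse entities
        row = self._features.get(entity_id, {})
        return {n: row.get(n) for n in names}

store = InMemoryFeatureStore()
store.write("user-42", {"sessions_7d": 5, "avg_rating": 4.2})
print(store.read("user-42", ["sessions_7d", "avg_rating"]))
```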
ABOUT YOU
You'll Be Highly Rated If:
You have the experience. You have 3+ years of experience designing and developing data pipelines, data models/schemas, APIs, or services for analytics or ML workloads.
You have the education. You've earned a Bachelor's degree in Computer Science, Engineering, or a related field.
You have programming skills. You are skilled in Python, SQL, and have experience with PySpark on large-scale datasets.
You have experience with data orchestration tools such as Airflow, dbt and Prefect, plus CI/CD pipelines for data delivery.
You have experience with Data and AI/ML platforms such as Databricks, AWS SageMaker or similar.
You have experience working with Kubernetes on cloud platforms like AWS, GCP, or Azure.
You'll Be One of Our Top Picks If:
You are passionate about automation and continuous improvement.
You have excellent documentation and technical communication skills.
You are an analytical thinker with troubleshooting abilities.
You are self-driven and proactive in solving infrastructure bottlenecks.
FAIR PAY AND A JUST WORKPLACE
At Consumer Reports, we are committed to fair, transparent pay and we strive to provide competitive, market-informed compensation. The target salary range for this position is $100K-$120K. It is anticipated that most qualified candidates will fall near the middle of this range. Compensation for the successful candidate will be informed by the candidate's particular combination of knowledge, skills, competencies, and experience. We have three locations: Yonkers, NY, Washington, DC and Colchester, CT. We are registered to do business in and can only hire from the following states and federal district: Arizona, California, Connecticut, Illinois, Maryland, Massachusetts, Michigan, New Hampshire, New Jersey, New York, Texas, Vermont, Virginia and Washington, DC.
Salary ranges
NY/California: $120K-$140K annually
DMV/Massachusetts: $115K-$135K annually
Colchester, CT and additional approved CR locations: $100K-$120K annually
Consumer Reports is an equal opportunity employer and does not discriminate in employment on the basis of actual or perceived race, color, creed, religion, age, national origin, ancestry, citizenship status, sex or gender (including pregnancy, childbirth, related medical conditions or lactation), gender identity and expression (including transgender status), sexual orientation, marital status, military service or veteran status, protected medical condition as defined by applicable state or local law, disability, genetic information, or any other basis protected by applicable federal, state or local laws. Consumer Reports will provide you with any reasonable assistance or accommodation for any part of the application and hiring process.
$120k-140k yearly 26d ago
Data Engineer I
Epicured, Inc.
Data engineer job in Glen Cove, NY
Job Description
Why Epicured?
Epicured is on a mission to combat and prevent chronic disease, translating scientific research into high-quality food products and healthcare services nationwide. Our evidence-based approach brings together the best of the clinical, culinary, and technology worlds to help people eat better, feel better, and live better one meal at a time.
By joining Epicured's Technology team, you'll help power the data infrastructure that supports Medicaid programs, clinical services, life sciences initiatives, and direct-to-consumer operations - enabling better decisions, better outcomes, and scalable growth.
Role Overview
Epicured is seeking a Data Engineer I to support data ingestion, reporting, and analytics across multiple business lines. Reporting to the SVP of Software Engineering, this role will focus on building and maintaining reliable reporting pipelines, supporting business requests, and managing data from a growing ecosystem of healthcare, operational, and e-commerce systems.
This position is ideal for a self-starter with strong SQL skills who is comfortable working with evolving requirements, healthcare-adjacent data, and modern data platforms such as Microsoft Fabric and Power BI.
Key Responsibilities
Build, maintain, and support reports across all Epicured business lines using Power BI, exports, and Microsoft Fabric.
Ingest and integrate new data sources, including SQL Server, operational systems, and external data exchanges.
Support reporting and analytics requests across Clinical & Life Sciences, Section 1115 Medicaid Waiver programs, Health Information Exchanges (e.g., Healthix), and Self-Pay e-commerce operations.
Handle HIPAA-sensitive data, ensuring proper governance, access control, and compliance standards are maintained.
Manage Shopify and other e-commerce data requests for Epicured's Self-Pay division.
Keep reporting environments organized, documented, and operational while prioritizing incoming requests.
Operate and help scale Epicured's Microsoft Fabric environment, contributing to platform strategy and best practices.
Partner with stakeholders to clarify ambiguous requirements and translate business questions into data solutions.
Qualifications
3+ years of experience in data engineering, analytics, or business intelligence roles.
Strong SQL skills with experience working in relational databases.
Experience with Azure, Microsoft Fabric, Power BI, or similar modern data platforms.
Strong proficiency in Excel / Google Sheets.
Ability to work independently and manage multiple priorities in a fast-growing environment.
Experience working with healthcare or HIPAA-adjacent data, including exposure to health information exchanges.
Familiarity with ETL / ELT pipelines and data modeling best practices.
Experience integrating operational, financial, logistics, and clinical datasets.
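The qualifications above call out ETL/ELT pipelines and SQL work. As a minimal sketch of the pattern (extract raw rows, transform types, load into a reporting table), the snippet below uses an in-memory SQLite database; the table and column names are invented for illustration, not Epicured's actual schema.

```python
import sqlite3
from datetime import date

# Hypothetical raw extract from an e-commerce source system.
raw_orders = [
    {"order_id": "1001", "total": "24.50", "placed": "2024-01-05"},
    {"order_id": "1002", "total": "18.00", "placed": "2024-01-06"},
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fact_orders (order_id INTEGER PRIMARY KEY, "
    "total_cents INTEGER, placed_on TEXT)"
)

def transform(row):
    # Cast types, and store money as integer cents to avoid float drift.
    return (
        int(row["order_id"]),
        round(float(row["total"]) * 100),
        date.fromisoformat(row["placed"]).isoformat(),
    )

conn.executemany(
    "INSERT INTO fact_orders VALUES (?, ?, ?)",
    [transform(r) for r in raw_orders],
)
conn.commit()

total = conn.execute("SELECT SUM(total_cents) FROM fact_orders").fetchone()[0]
print(total)  # 4250
```

In a production pipeline the same extract-transform-load shape would target a warehouse such as Microsoft Fabric rather than SQLite.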
Preferred Qualifications
Experience with C#.
Python experience is a plus.
Healthcare or life sciences background.
Experience supporting analytics for Medicaid, payer, or regulated environments.
Compensation & Benefits
Salary Range: $115,000-$130,000 annually, commensurate with experience
Benefits include:
401(k)
Health, Dental, and Vision insurance
Unlimited Paid Time Off (PTO)
Opportunity to grow with Epicured's expanding data and technology organization
Equal Employment Opportunity
Epicured is proud to be an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of age, race, creed, color, national origin, religion, gender, sexual orientation, gender identity or expression, disability, veteran status, or any other protected status under federal, state, or local law.
$115k-130k yearly 23d ago
Data Scientist
The Connecticut Rise Network
Data engineer job in New Haven, CT
RISE Data Scientist
Reports to: Monitoring, Evaluation, and Learning Manager
Salary: Competitive and commensurate with experience
Please note: Due to the upcoming holidays, application review for this position will begin the first week of January. Applicants can expect outreach by the end of the week of January 5.
Overview:
The RISE Network's mission is to ensure all high school students graduate with a plan and the skills and confidence to achieve college and career success. Founded in 2016, RISE partners with public high schools to lead networks where communities work together to use data to learn and improve. Through its core and most comprehensive network, RISE partners with nine high schools and eight districts, serving over 13,000 students in historically marginalized communities.
RISE high schools work together to ensure all students experience success as they transition to, through, and beyond high school by using data to pinpoint needs, form hypotheses, and pursue ideas to advance student achievement. Partner schools have improved Grade 9 promotion rates by nearly 20 percentage points, while also decreasing subgroup gaps and increasing schoolwide graduation and college access rates. In 2021, the RISE Network was honored to receive the Carnegie Foundation's annual Spotlight on Quality in Continuous Improvement recognition. Increasingly, RISE is pursuing opportunities to scale its impact through research publications, consulting partnerships, professional development experiences, and other avenues to drive excellent student outcomes.
Position Summary and Essential Job Functions:
The RISE Data Scientist will play a critical role in leveraging data to support continuous improvement, program evaluation, and research, enhancing the organization's evidence-based learning and decision-making. RISE is seeking a talented and motivated individual to design and conduct rigorous quantitative analyses to assess the outcomes and impacts of programs.
The ideal candidate is an experienced analyst who is passionate about using data to drive social change, with strong skills in statistical modeling, data visualization, and research design. This individual will also lead efforts to monitor and analyze organization-wide data related to mission progress and key performance indicators (KPIs), and communicate these insights in ways that inspire improvement and action. This is an exciting opportunity for an individual who thrives in an entrepreneurial environment and is passionate about closing opportunity gaps and supporting the potential of all students, regardless of life circumstances. The role will report to the Monitoring, Evaluation, and Learning (MEL) Manager and sit on the MEL team.
Responsibilities include, but are not limited to:
1. Research and Evaluation (30%)
Collaborate with MEL and network teams to design and implement rigorous process, outcome, and impact evaluations.
Lead in the development of data collection tools and survey instruments.
Manage survey data collection, reporting, and learning processes.
Develop RISE learning and issue briefs supported by quantitative analysis.
Design and implement causal inference approaches where applicable, including quasi-experimental designs.
Provide technical input on statistical analysis plans, monitoring frameworks, and indicator selection for network programs.
Translate complex findings into actionable insights and policy-relevant recommendations for non-technical audiences.
Report data for RISE leadership and staff, generating new insights to inform program design.
Create written reports, presentations, publications, and communications pieces.
2. Quantitative Analysis and Statistical Modeling (30%)
Clean, transform, and analyze large and complex datasets from internal surveys, the RISE data warehouse, and external data sources such as the National Student Clearinghouse (NSC).
Conduct exploratory research that informs organizational learning.
Lead complex statistical analyses using advanced methods (regression modeling, propensity score matching, difference-in-differences analysis, time-series analysis, etc.).
Contribute to data cleaning and analysis for key performance indicator reporting.
Develop processes that support automation of cleaning and analysis for efficiency.
Develop and maintain analytical code and workflows to ensure reproducibility.
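One of the methods listed above, difference-in-differences, can be illustrated in a few lines: the control group's pre/post change stands in for the counterfactual trend of the treated group. The outcome values below are invented toy data, not RISE results.

```python
from statistics import mean

# Toy outcome scores (e.g., an attendance index) for two groups,
# before and after an intervention. All numbers are illustrative.
treated_pre, treated_post = [70, 72, 71], [80, 82, 81]
control_pre, control_post = [68, 70, 69], [73, 75, 74]

# DiD = (treated change) - (control change); the control change
# approximates what the treated group would have done untreated.
did = (mean(treated_post) - mean(treated_pre)) - (
    mean(control_post) - mean(control_pre)
)
print(did)
```

In practice this would be estimated as a regression with group, period, and interaction terms so that standard errors and covariates can be handled properly.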
3. Data Visualization and Tool-building (30%)
Work closely with non-technical stakeholders to understand the question(s) they are asking and the use cases they have for specific data visualizations or tools.
Develop well-documented overviews and specifications for new tools.
Create clear, compelling data visualizations and dashboards.
Collaborate with Data Engineering to appropriately and sustainably source data for new tools.
Manage complex projects to build novel and specific tools for internal or external stakeholders.
Maintain custom tools for the duration of their usefulness, including by responding to feedback and requests from project stakeholders.
4. Data Governance and Quality Assurance (10%)
Support data quality assurance protocols and standards across the MEL team.
Ensure compliance with data protection, security, and ethical standards.
Maintain organized, well-documented code and databases.
Collaborate with the Data Engineering team to maintain RISE MEL data infrastructure.
Qualifications
Master's degree (or PhD) in statistics, economics, quantitative social sciences, public policy, data science, or related field.
Minimum of 3 years of professional experience conducting statistical analysis and managing large datasets.
Advanced proficiency in R, Python, or Stata for data analysis and modeling.
Experience designing and implementing quantitative research and evaluation studies.
Strong understanding of inferential statistics, experimental and quasi-experimental methods, and sampling design.
Strong knowledge of survey data collection tools such as Key Surveys, Google Forms, etc.
Excellent data visualization and communication skills.
Experience with data visualization tools; strong preference for Tableau.
Ability to translate complex data into insights for diverse audiences, including non-technical stakeholders.
Ability to cultivate relationships and earn credibility with a diverse range of stakeholders.
Strong organizational and project management skills.
Strong sense of accountability and responsibility for results.
Ability to work in an independent and self-motivated manner.
Demonstrated proficiency with Google Workspace.
Commitment to equity, ethics, and learning in a nonprofit or mission-driven context.
Positive attitude and willingness to work in a collaborative environment.
Strong belief that all students can learn and achieve at high levels.
Preferred
Experience working on a monitoring, evaluation, and learning team.
Familiarity with school data systems and prior experience working in a school, district, or similar K-12 educational context preferred.
Experience working with survey data (e.g., DHS, LSMS), administrative datasets, or real-time digital data sources.
Working knowledge of data engineering or database management (SQL, cloud-based platforms).
Salary Range
$85k - $105k
Most new hires' salaries fall within the first half of the range, allowing team members to grow in their roles. For those who already have significant and aligned experiences at the same level as the role, placement may be at the higher end of the range.
The Connecticut RISE Network is an equal opportunity employer and welcomes candidates from diverse backgrounds.
RISE Interview & Communication Policy
The RISE interview process includes:
A video or phone screening with the hiring manager
Interviews with the hiring panel
A performance task
Reference checks
Applicants will never receive an offer unless they have completed the full interview process.
All official communications with applicants are sent only through ADP or CT RISE email addresses (@ctrise.org). There has been a job offer scam circulating from various email addresses using the domain @careers-ctrise.org; this is not a valid RISE email address.
If you receive an email from anyone claiming to represent RISE with a job offer outside of our official channels, or requesting written screening information, and you have not completed the full interview process, please do not respond and report it to ******************.
$85k-105k yearly Auto-Apply 41d ago
Data Engineer
Innovative Rocket Technologies Inc. 4.3
Data engineer job in Hauppauge, NY
Job Description
Data is pivotal to our goal of frequent launch and rapid iteration. We're recruiting a Data Engineer at iRocket to build pipelines, analytics, and tools that support propulsion test, launch operations, manufacturing, and vehicle performance.
The Role
Design and build data pipelines for test stands, manufacturing machines, launch telemetry, and operations systems.
Develop dashboards, real-time monitoring, data-driven anomaly detection, performance trending, and predictive maintenance tools.
Work with engineers across propulsion, manufacturing, and operations to translate data needs into data products.
Maintain data architecture, ETL processes, cloud/edge-data systems, and analytics tooling.
Support A/B testing, performance metrics, and feed insights back into design/manufacturing cycles.
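The data-driven anomaly detection mentioned above can take many forms; one simple streaming approach is a rolling z-score, which flags a telemetry reading that deviates sharply from its recent window. The sensor values and threshold below are illustrative assumptions, not iRocket's actual method.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose z-score against a rolling window exceeds threshold."""
    recent = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(i)
        recent.append(value)
    return flagged

# Hypothetical test-stand pressure readings with one spike.
pressures = [100, 101, 99, 100, 102, 100, 180, 101, 100]
print(detect_anomalies(pressures))  # [6]
```

A production version would run against a streaming source and account for startup transients, but the windowed-statistics idea is the same.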
Requirements
Bachelor's degree in Computer Science, Data Engineering, or related technical field.
2+ years of experience building data pipelines, ETL/ELT workflows, and analytics systems.
Proficient in Python, SQL, cloud data platforms (AWS, GCP, Azure), streaming/real-time analytics, and dashboarding (e.g., Tableau, PowerBI).
Strong ability to work cross-functionally and deliver data products to engineering and operations teams.
Strong communication, documentation, and a curiosity-driven mindset.
Benefits
Health Care Plan (Medical, Dental & Vision)
Retirement Plan (401k, IRA)
Life Insurance (Basic, Voluntary & AD&D)
Paid Time Off (Vacation, Sick & Public Holidays)
Family Leave (Maternity, Paternity)
Short Term & Long Term Disability
Wellness Resources
$102k-146k yearly est. 12d ago
Hadoop Developer
Roljobs Technology Services
Data engineer job in Lake Success, NY
A big opportunity! Great pay! Love Hadoop? Do you have it in you to become a mentor? Then my client needs YOUR mojo. This is an excellent opportunity for anyone aspiring to join a growth-oriented company. You will be the first person to work on Hadoop in their New York office, responsible for onboarding and educating new team members on Hadoop technologies, with a combination of hands-on and training responsibilities.
Interested? Scroll down for more.
Job Description
Not an exhaustive set of responsibilities, but an overview:
You will architect, design, and develop code that consistently adheres to functional programming principles.
You will design, develop, and maintain high volume data processing batch/streaming jobs using industry standard tools and frameworks in Hadoop ecosystem.
You will help my client continue as an industry leader by exploring new technologies, languages, and techniques in the rapidly evolving world of high-volume data processing.
You will collaborate with team members using Agile techniques, including: pair programming, test driven development (TDD), code reviews, and retrospectives.
If this has piqued your curiosity, please check if YOU have the following:
Experience in the Hadoop ecosystem (MapReduce/Spark/Hive/Oozie/Sqoop, etc.)
Development experience in one of the following - Java, Python OR Scala.
Capable of working with and influencing large, diverse teams.
Experience working with large data sets and high volume data processing.
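The MapReduce model behind the ecosystem tools listed above boils down to a map phase, a shuffle (sort/group by key), and a reduce phase. The classic word-count example is sketched below in plain Python to show the pattern; a real job would run on a Hadoop cluster via MapReduce or Spark.

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit (key, 1) pairs per word.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(key, values):
    # Reduce phase: combine all values seen for one key.
    return (key, sum(values))

lines = ["big data big pipelines", "data data everywhere"]

pairs = [kv for line in lines for kv in mapper(line)]
# Shuffle phase: Hadoop sorts and groups pairs by key between map and reduce.
pairs.sort(key=itemgetter(0))
counts = dict(
    reducer(k, (v for _, v in group))
    for k, group in groupby(pairs, key=itemgetter(0))
)
print(counts)  # {'big': 2, 'data': 3, 'everywhere': 1, 'pipelines': 1}
```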
Here's what we can offer:
A competitive Base Salary of $140K with a Bonus of up to $10K and Benefits.
Note: We are looking for local candidates.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Ping me at ******************** to know more.
$140k yearly Easy Apply 1d ago
Hadoop Developer - II
Workila
Data engineer job in Lake Success, NY
The sheer scale of our capabilities and client engagements and the way we collaborate, operate and deliver value provides an unparalleled opportunity to grow and advance. Choose Workila, and make delivering innovative work part of your extraordinary career.
Job Description
MUST-HAVES
Development exp in Hadoop Ecosystems (MapReduce/Spark/Hive/Oozie/Sqoop etc)
Development experience in one of the following - Java, Python OR Scala
Capable of working with and influencing large diverse teams. Provide example
7+ years of overall development experience (2+ years in Hadoop minimum)
Effective communication and proven leadership/mentoring experience
NICE-TO-HAVES
Have worked with large datasets and high volume data processing
ROLE DESCRIPTION
DESCRIPTION OF DUTIES
The Hadoop Developer will be responsible for architecting, designing, and developing code that consistently adheres to functional programming principles. The successful candidate will balance hands-on work with training and mentoring a team, and will be an effective communicator. Other responsibilities include:
Design, develop, and maintain high volume data processing batch/streaming jobs using industry standard tools and frameworks in Hadoop ecosystem.
Maintain our position as industry leader by exploring new technologies, languages, and techniques in the rapidly evolving world of high-volume data processing.
Collaborate with team members using Agile techniques, including: pair programming, test driven development (TDD), code reviews, and retrospectives.
CULTURE AND PERKS
The Hadoop Developer will be the first person to work on Hadoop in our Lake Success Office. They will be responsible for starting and educating new team members on Hadoop technologies, with a combination of hands on and training responsibilities.
Our company is a dynamic, innovative technology company that revolutionized the automotive retail industry with the first online finance and credit application network in 2001.
Our state-of-the-art, web-based solutions are embraced by all major segments of the automotive retailing trade including dealers, financing sources, original equipment manufacturers (OEMs), third-party retailers, agents and aftermarket providers, fueling our tremendous growth.
We have location-specific programs and amenities that keep team members energized, engaged and doing great work, including:
Health club partial reimbursement
Options for discounts on car purchases with our vendor partners
Local business partnership programs
Volunteer Paid time
Massages
Company Picnic
Company Party
Hackathons
Holiday-inspired Company days
Shuttle Service to and from LIRR station to our building in Lake Success, NY
MORE INFORMATION
Full-time Mid-Level Computer Software
SALARY RANGE $110,000 - $130,000
PERFORMANCE BONUS $0 - $10,000, Bonus Plan
SIGNING BONUS None
BENEFITS
Medical Ins.
Dental Ins.
Other
OPEN UNTIL (MAY 24)
RELOCATION Not offered
DIRECT REPORTS Zero
REPORTS TO Director of Business Intelligence
REMOTE WORK Remote work not available
TRAVEL Travel not required
VISA Candidate visas are supported
Additional Information
Apply online by clicking the green label "I am Interested", or call if you have any questions; however, applying online is the best way to apply.
US # ************
India # 9999 883 470
$110k-130k yearly 1d ago
C++ Market Data Engineer (USA)
Trexquant Investment 4.0
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
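The gap-recovery responsibility above rests on a simple invariant: multicast packets carry sequence numbers, and any jump past the expected number means packets were lost and must be requested for replay. The sketch below shows that detection logic in Python for brevity; a production handler would implement it in C++ on the hot path, and the sequencing scheme here is a generic assumption rather than any specific venue's protocol.

```python
class GapDetector:
    """Track expected sequence numbers and record missing ranges."""

    def __init__(self):
        self.expected = None
        self.gaps = []  # list of (first_missing, last_missing) ranges

    def on_packet(self, seq):
        if self.expected is not None and seq > self.expected:
            # Packets expected..seq-1 were lost; a real handler would
            # queue a retransmission/replay request for this range.
            self.gaps.append((self.expected, seq - 1))
        self.expected = seq + 1

detector = GapDetector()
for seq in [1, 2, 3, 7, 8, 12]:
    detector.on_packet(seq)
print(detector.gaps)  # [(4, 6), (9, 11)]
```

Real feeds add wrinkles (out-of-order arrival across A/B lines, sequence resets) that the production code must arbitrate before declaring a gap.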
Requirements
BS/MS/PhD in Computer Science, Electrical Engineering, or related field.
3+ years of professional C++ (14,17,20) development experience focused on low-latency, high-throughput systems.
Proven track record building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and time-series databases or in-memory caches.
Comfort with scripting in Python for prototyping, testing, and ops automation.
Excellent problem-solving skills, ownership mindset, and ability to thrive in a fast-paced trading environment.
Familiarity with containerization (Docker/K8s) and public-cloud networking (AWS, GCP).
Benefits
Competitive salary, plus bonus based on individual and company performance.
Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
PPO Health, dental and vision insurance premiums fully covered for you and your dependents.
Pre-Tax Commuter Benefits
Applications are open for both Stamford and New York City offices, the latter with a planned opening in October 2026.
The base salary range is $175,000 - $200,000 depending on the candidate's educational and professional background. Base salary is one component of Trexquant's total compensation, which may also include a discretionary, performance-based bonus. This position is classified as overtime-exempt.
Trexquant is an Equal Opportunity Employer
$175k-200k yearly Auto-Apply 60d+ ago
Tech Lead, Data & Inference Engineer
Catalyst Labs
Data engineer job in Stamford, CT
Our Client
A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised twelve million dollars in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above market base + bonus + equity
Roles & Responsibilities
Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third party application programming interfaces into trusted and low latency systems.
Take full ownership of reliability, cost, and service level objectives. This includes achieving 99.9% uptime, maintaining minutes-level latency, and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions.
Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring and quality assurance using large language models and retrieval augmented generation. Manage version control, caching and evaluation loops.
Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes and usage based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade offs and reversibility while making practical decisions on whether to build internally or buy externally.
Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality and support for both real time and batch oriented use cases.
Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization.
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: able to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
$84k-114k yearly est. 60d+ ago
Network Planning Data Scientist (Manager)
Atlas Air 4.9
Data engineer job in White Plains, NY
Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes.
This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency.
Responsibilities
* Analyze and Monitor Network Performance
* Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements.
* Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness.
* Modeling & Optimization
* Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions.
* Support the design, testing, and implementation of software optimization tools to enhance operational decision-making.
* Reporting & Governance
* Develop periodic performance and reliability reports for customers, assisting in presentation creation.
* Produce regular and ad hoc reports to monitor compliance with established operating parameters.
* Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets.
* Cross-Functional Collaboration
* Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution.
* Provide insights that inform schedule adjustments, fleet utilization, and contingency planning.
* Innovation & Continuous Improvement
* Identify opportunities to streamline workflows and automate recurring analyses.
* Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility.
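KPI work of the kind described above often starts with simple aggregates such as on-time performance and fleet utilization. The sketch below computes both from invented flight records; the 15-minute on-time cutoff is a common industry convention used here as an assumption.

```python
# Hypothetical week of flight records: departure delay and block hours.
flights = [
    {"day": d, "delay_min": delay, "block_hours": hrs}
    for d, delay, hrs in [
        (1, 0, 9.5), (1, 22, 8.0), (2, 5, 10.0),
        (2, 0, 9.0), (3, 40, 8.5), (3, 0, 9.5),
    ]
]

# A departure counts as on time if delayed 15 minutes or less
# (assumed cutoff, not an Atlas Air definition).
on_time = sum(f["delay_min"] <= 15 for f in flights)
otp = on_time / len(flights)
avg_utilization = sum(f["block_hours"] for f in flights) / len(flights)

print(round(otp, 3))              # 0.667
print(round(avg_utilization, 2))  # 9.08
```

The same aggregates, computed per route or per tail number over a rolling window, become the trend lines that feed schedule and reliability reviews.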
Qualifications
* Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools and reporting dashboards (e.g., Tableau, Power BI)
* Bachelor's degree required in Industrial Engineering, Operations Research, Applied Mathematics, Data Science or related quantitative discipline or equivalent work experience.
* 5+ years of experience in strategy, operations planning, finance or continuous improvement, ideally with airline network planning
* Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation.
* Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment.
* Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders.
* A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement.
* Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations
Salary Range: $131,500 - $177,500
Financial offer within the stated range will be based on multiple factors to include but not limited to location, relevant experience/level and skillset.
The Company is an Equal Opportunity Employer. It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity or any other characteristic or status protected by applicable federal, state, and local laws.
If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ******************************************
To view our Pay Transparency Statement, please click here: Pay Transparency Statement
"Know Your Rights: Workplace Discrimination is Illegal" Poster
The "EEO Is The Law" Poster
$131.5k-177.5k yearly Auto-Apply 45d ago
Salesforce Data 360 Architect
Slalom 4.6
Data engineer job in White Plains, NY
Who You'll Work With In our Salesforce business, we help our clients bring the most impactful customer experiences to life and we do that in a way that makes our clients the hero of their transformation story. We are passionate about and dedicated to building a diverse and inclusive team, recognizing that diverse team members who are celebrated for bringing their authentic selves to their work build solutions that reach more diverse populations in innovative and impactful ways. Our team is comprised of customer strategy experts, Salesforce-certified experts across all Salesforce capabilities, industry experts, organizational and cultural change consultants, and project delivery leaders. As the 3rd largest Salesforce partner globally and in North America, we are committed to growing and developing our Salesforce talent, offering continued growth opportunities, and exposing our people to meaningful work that aligns to their personal and professional goals.
We're looking for individuals who have experience implementing Salesforce Data Cloud or similar platforms and are passionate about customer data. The ideal candidate has a desire for continuous professional growth and can deliver complex, end-to-end Data Cloud implementations from strategy and design, through to data ingestion, segment creation, and activation; all while working alongside both our clients and other delivery disciplines. Our Global Salesforce team is looking to add a passionate Principal or Senior Principal to take on the role of Data Cloud Architect within our Salesforce practice.
What You'll Do:
Responsible for business requirements gathering, architecture design, data ingestion and modeling, identity resolution setup, calculated insight configuration, segment creation and activation, end-user training, and support procedures
Lead technical conversations with both business and technical client teams; translate those outcomes into well-architected solutions that best utilize Salesforce Data Cloud and the wider Salesforce ecosystem
Ability to direct technical teams, both internal and client-side
Provide subject matter expertise as warranted via customer needs and business demands
Build lasting relationships with key client stakeholders and sponsors
Collaborate with digital specialists across disciplines to innovate and build premier solutions
Participate in compiling industry research, thought leadership and proposal materials for business development activities
Experience with scoping client work
Experience with hyperscale data platforms (ex: Snowflake), robust database modeling and data governance is a plus.
What You'll Bring:
Have been part of at least one Salesforce Data Cloud implementation
Familiarity with Salesforce's technical architecture: APIs, Standard and Custom Objects, and Apex. Proficient with ANSI SQL and the supported functions in Salesforce Data Cloud
Strong proficiency toward presenting complex business and technical concepts using visualization aids
Ability to conceptualize and craft sophisticated wireframes, workflows, and diagrams
Strong understanding of data management concepts, including data quality, data distribution, data modeling and data governance
Detailed understanding of the fundamentals of digital marketing and complementary Salesforce products that organizations may use to run their business. Experience defining strategy, developing requirements, and implementing practical business solutions.
Experience in delivering projects using Agile-based methodologies
Salesforce Data Cloud certification preferred
Additional Salesforce certifications like Administrator are a plus
Strong interpersonal skills
Bachelor's degree in a related field preferred, but not required
Open to travel (up to 50%)
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this role, we are hiring at the following levels and salary ranges:
East Bay, San Francisco, Silicon Valley:
Principal: $184,000-$225,000
San Diego, Los Angeles, Orange County, Seattle, Boston, Houston, New Jersey, New York City, Washington DC, Westchester:
Principal: $169,000-$206,000
All other locations:
Principal: $155,000-$189,000
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
We are committed to pay transparency and compliance with applicable laws. If you have questions or concerns about the pay range or other compensation information in this posting, please contact us at: ********************.
We will accept applications until January 30, 2025 or until the position is filled.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team or contact ****************************** if you require accommodations during the interview process.
$184k-225k yearly Easy Apply 17d ago
Data Architect - Power & Utilities - Senior Manager- Consulting - Location OPEN
Ernst & Young Oman 4.7
Data engineer job in Stamford, CT
At EY, we're all in to shape your future with confidence.
We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
AI & Data - Data Architecture - Senior Manager - Power & Utilities Sector
EY is seeking a motivated professional with solid experience in the utilities sector to serve as a Senior Manager. The ideal candidate possesses a robust background in data architecture, data modernization, end-to-end data capabilities, AI, generative AI, and agentic AI, preferably with a power systems / electrical engineering background and experience delivering business use cases across Transmission, Distribution, Generation, and Customer. A history of working for consulting companies and familiarity with the fast-paced culture of consulting work are expected. This role is dedicated to the utilities sector, where the successful candidate will craft, deploy, and maintain large-scale, AI-ready data architectures.
The opportunity
You will help our clients enable better business outcomes while working in the rapidly growing Power & Utilities sector. You will have the opportunity to lead and develop your skill set to keep up with the ever-growing demands of the modern data platform. During implementation you will solve complex analytical problems to bring data to insights and enable the use of ML and AI at scale for your clients. This is a high growth area and a high visibility role with plenty of opportunities to enhance your skillset and build your career.
As a Senior Manager in Data Architecture, you will have the opportunity to lead transformative technology projects and programs that align with our organizational strategy to achieve impactful outcomes. You will provide assurance to leadership by managing timelines, costs, and quality, and lead both technical and non-technical project teams in the development and implementation of cutting-edge technology solutions and infrastructure. You will have the opportunity to be face to face with external clients and build new and existing relationships in the sector. Your specialized knowledge in project and program delivery methods, including Agile and Waterfall, will be instrumental in coaching others and proposing solutions to technical constraints.
Your key responsibilities
In this pivotal role, you will be responsible for the effective management and delivery of one or more processes, solutions, and projects, with a focus on quality and effective risk management. You will drive continuous process improvement and identify innovative solutions through research, analysis, and best practices. Managing professional employees or supervising team members to deliver complex technical initiatives, you will apply your depth of expertise to guide others and interpret internal/external issues to recommend quality solutions. Your responsibilities will include:
As Data Architect - Senior Manager, you will bring an expert understanding of data architecture and data engineering, with a focus on problem-solving: designing, architecting, and presenting findings and solutions, leading more junior team members, and working with a wide variety of clients to sell and lead delivery of technology consulting services. You will be the go-to resource for understanding our clients' problems and responding with appropriate methodologies and solutions anchored around data architectures, platforms, and technologies. You are responsible for helping to win new business for EY. You are a trusted advisor with a broad understanding of digital transformation initiatives, the analytic technology landscape, industry trends, and client motivations. You are also a charismatic communicator and thought leader, capable of going toe-to-toe with C-level executives at our clients and prospects and willing and able to constructively challenge them.
Skills and attributes for success
To thrive in this role, you will need a combination of technical and business skills that will make a significant impact. Your skills will include:
Technical Skills Applications Integration
Cloud Computing and Cloud Computing Architecture
Data Architecture Design and Modelling
Data Integration and Data Quality
AI/Agentic AI driven data operations
Experience delivering business use cases in Transmission / Distribution / Generation / Customer.
Strong relationship management and business development skills.
Become a trusted advisor to your clients' senior decision makers and internal EY teams by establishing credibility and expertise in both data strategy in general and in the use of analytic technology solutions to solve business problems.
Engage with senior business leaders to understand and shape their goals and objectives and their corresponding information needs and analytic requirements.
Collaborate with cross-functional teams (Data Scientists, Business Analysts, and IT teams) to define data requirements, design solutions, and implement data strategies that align with our clients' objectives.
Organize and lead workshops and design sessions with stakeholders, including clients, team members, and cross-functional partners, to capture requirements, understand use cases, personas, key business processes, brainstorm solutions, and align on data architecture strategies and projects.
Lead the design and implementation of modern data architectures, supporting transactional, operational, analytical, and AI solutions.
Direct and mentor global data architecture and engineering teams, fostering a culture of innovation, collaboration, and continuous improvement.
Establish data governance policies and practices, including data security, quality, and lifecycle management.
Stay abreast of industry trends and emerging technologies in data architecture and management, recommending innovations and improvements to enhance our capabilities.
To qualify for the role, you must have
A Bachelor's degree in a STEM field
12+ years professional consulting experience in industry or in technology consulting.
12+ years hands-on experience in architecting, designing, delivering or optimizing data lake solutions.
5+ years' experience with native cloud products and services such as Azure or GCP.
8+ years of experience mentoring and leading teams of data architects and data engineers, fostering a culture of innovation and professional development.
In-depth knowledge of data architecture principles and best practices, including data modelling, data warehousing, data lakes, and data integration.
Demonstrated experience in leading large data engineering teams to design and build platforms with complex architectures and diverse features, including various data flow patterns, relational and NoSQL databases, production-grade performance, and delivery to downstream use cases and applications.
Hands-on experience in designing end-to-end architectures and pipelines that collect, process, and deliver data to its destination efficiently and reliably.
Proficiency in data modelling techniques and the ability to choose appropriate architectural design patterns, including Data Fabric, Data Mesh, Lakehouse, or Delta Lake architectures.
Experience managing complex data analysis, migration, and integration of enterprise solutions to modern platforms, including code efficiency and performance optimization.
Previous hands-on coding skills in languages commonly used in data engineering, such as Python, Java, or Scala.
Ability to design data solutions that can scale horizontally and vertically while optimizing performance.
Experience with containerization technologies like Docker and container orchestration platforms like Kubernetes for managing data workloads.
Experience with version control systems (e.g., Git) and knowledge of DevOps practices for automating data engineering workflows (DataOps).
Practical understanding of data encryption, access control, and security best practices to protect sensitive data.
Experience leading Infrastructure and Security engineers and architects in overall platform build.
Excellent leadership, communication, and project management skills.
Data Security and Database Management
Enterprise Data Management and Metadata Management
Ontology Design and Systems Design
Ideally, you'll also have
Master's degree in Electrical / Power Systems Engineering, Computer science, Statistics, Applied Mathematics, Data Science, Machine Learning or commensurate professional experience.
Experience working at big 4 or a major utility.
Experience with cloud data platforms like Databricks.
Experience in leading and influencing teams, with a focus on mentorship and professional development.
A passion for innovation and the strategic application of emerging technologies to solve real-world challenges.
The ability to foster an inclusive environment that values diverse perspectives and empowers team members.
Building and Managing Relationships
Client Trust and Value and Commercial Astuteness
Communicating With Impact and Digital Fluency
What we look for
We are looking for top performers who demonstrate a blend of technical expertise and business acumen, with the ability to build strong client relationships and lead teams through change. Emotional agility and hybrid collaboration skills are key to success in this dynamic role.
FY26NATAID
What we offer you
At EY, we'll develop you with future-focused skills and equip you with world-class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more.
We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $144,000 to $329,100. The base salary range for New York City Metro Area, Washington State and California (excluding Sacramento) is $172,800 to $374,000. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
Join us in our team‑led and leader‑enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40-60% of the time over the course of an engagement, project or year.
Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well‑being.
Are you ready to shape your future with confidence? Apply today.
EY accepts applications for this position on an on‑going basis.
For those living in California, please click here for additional information.
EY focuses on high‑ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1-800-EY-HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at ************************** .
#J-18808-Ljbffr
$112k-156k yearly est. 2d ago
ETL/Data Platform Engineer
Clarapath
Data engineer job in Hawthorne, NY
Job Description
JOB TITLE: ETL/Data Platform Engineer
TYPE: Full time, regular
COMPENSATION: $130,000 - $180,000/yr
Clarapath is a medical robotics company based in Westchester County, NY. Our mission is to transform and modernize laboratory workflows with the goal of improving patient care, decreasing costs, and enhancing the quality and consistency of laboratory processes. SectionStar™ by Clarapath is a ground-breaking electro-mechanical system designed to elevate and automate the workflow in histology laboratories and provide pathologists with the tissue samples they need to make the most accurate diagnoses. Through the use of innovative technology, data, and precision analytics, Clarapath is paving the way for a new era of laboratory medicine.
Role Summary:
The ETL/Data Platform Engineer will play a key role in designing, building, and maintaining Clarapath's data pipelines and platform infrastructure supporting SectionStar™, our advanced electro-mechanical device. This role requires a strong foundation in data engineering, including ETL/ELT development, data modeling, and scalable data platform design. Working closely with cross-functional teams-including software, firmware, systems, and mechanical engineering-this individual will enable reliable ingestion, transformation, and storage of device and operational data. The engineer will help power analytics, system monitoring, diagnostics, and long-term insights that support product performance, quality, and continuous improvement. We are seeking a proactive, detail-oriented engineer who thrives in a fast-paced, rapidly growing environment and is excited to apply data engineering best practices to complex, data-driven challenges in a regulated medical technology setting.
Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines for device, telemetry, and operational data
Build and optimize data models to support analytics, reporting, and system insights
Develop and maintain scalable data platform infrastructure (cloud and/or on-prem)
Ensure data quality, reliability, observability, and performance across pipelines
Support real-time or near real-time data ingestion where applicable
Collaborate with firmware and software teams to integrate device-generated data
Enable dashboards, analytics, and internal tools for engineering, quality, and operations teams
Implement best practices for data security, access control, and compliance
Troubleshoot pipeline failures and improve system resilience
Document data workflows, schemas, and platform architecture
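As an illustration of the transform step in a pipeline like those described above (a minimal sketch, not Clarapath's actual stack; the field names `device_id`, `ts_ms`, and `blade_temp_c` are hypothetical), a data-quality gate and timestamp normalization might look like this in Python:

```python
from datetime import datetime, timezone

def transform(raw_records):
    """Validate and normalize raw telemetry records (hypothetical schema).

    Drops records missing required fields and converts epoch-millisecond
    timestamps to ISO-8601 strings, returning rows ready to load.
    """
    required = {"device_id", "ts_ms", "blade_temp_c"}
    clean = []
    for rec in raw_records:
        if not required.issubset(rec):
            continue  # data-quality gate: skip incomplete records
        clean.append({
            "device_id": rec["device_id"],
            "recorded_at": datetime.fromtimestamp(
                rec["ts_ms"] / 1000, tz=timezone.utc
            ).isoformat(),
            "blade_temp_c": float(rec["blade_temp_c"]),
        })
    return clean

raw = [
    {"device_id": "SS-01", "ts_ms": 1700000000000, "blade_temp_c": 21.5},
    {"device_id": "SS-02"},  # incomplete record, will be dropped
]
rows = transform(raw)
```

In a production ELT variant, the same validation would typically run inside the warehouse after loading rather than in application code.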
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
3+ years of experience in data engineering, ETL development, or data platform roles
Strong proficiency in SQL and at least one programming language (Python preferred)
Experience building and maintaining ETL/ELT pipelines
Familiarity with data modeling concepts and schema design
Experience with cloud platforms (AWS, GCP, or Azure) or hybrid environments
Understanding of data reliability, monitoring, and pipeline orchestration
Strong problem-solving skills and attention to detail
Experience with streaming data or message-based systems (ex: Kafka, MQTT), a plus
Experience working with IoT, device, or telemetry data, a plus
Familiarity with data warehouses and analytics platforms, a plus
Experience in regulated environments (medical device, healthcare, life sciences), a plus
Exposure to DevOps practices, CI/CD, or infrastructure-as-code, a plus
Company Offers:
Competitive salary, commensurate with experience and education
Comprehensive benefits package available: (healthcare, vision, dental and life insurances; 401k; PTO and holidays)
A collaborative and diverse work environment where our teams thrive on solving complex challenges
Ability to file IP with the company
Connections with world class researchers and their laboratories
Collaboration with strategic leaders in healthcare and pharmaceutical world
A mission driven organization where every team member will be responsible for changing the standards of delivering healthcare
Clarapath is proud to be an equal opportunity employer. We are committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. In addition to federal law requirements, Clarapath complies with applicable state and local laws governing nondiscrimination in employment. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
$130k-180k yearly 15d ago
Network Planning Data Scientist (Manager)
Atlas Air Worldwide Holdings 4.9
Data engineer job in White Plains, NY
Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes.
This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency.
Responsibilities
Analyze and Monitor Network Performance
Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements.
Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness.
Modeling & Optimization
Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions.
Support the design, testing, and implementation of software optimization tools to enhance operational decision-making.
Reporting & Governance
Develop periodic performance and reliability reports for customers, assisting in presentation creation
Produce regular and ad hoc reports to monitor compliance with established operating parameters.
Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets.
Cross-Functional Collaboration
Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution.
Provide insights that inform schedule adjustments, fleet utilization, and contingency planning.
Innovation & Continuous Improvement
Identify opportunities to streamline workflows and automate recurring analyses.
Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility.
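As a sketch of the delay and reliability analysis described above (synthetic data with a hypothetical `delay_min` field, not Atlas Air's actual schema), an on-time-performance KPI might be computed like this:

```python
def on_time_rate(legs, threshold_min=15):
    """Share of flight legs departing within `threshold_min` of schedule.

    Each leg is a dict with a `delay_min` field (actual minus scheduled
    departure, in minutes); a common airline convention counts a leg as
    on time when the delay is at most 15 minutes.
    """
    if not legs:
        return 0.0
    on_time = sum(1 for leg in legs if leg["delay_min"] <= threshold_min)
    return on_time / len(legs)

legs = [
    {"flight": "GTI8001", "delay_min": 5},
    {"flight": "GTI8002", "delay_min": 42},
    {"flight": "GTI8003", "delay_min": 0},
    {"flight": "GTI8004", "delay_min": 12},
]
kpi = on_time_rate(legs)  # 3 of 4 legs within 15 minutes -> 0.75
```

In practice such KPIs would be computed in SQL against the operational warehouse and surfaced in a Tableau or Power BI dashboard, as the qualifications below suggest.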
Qualifications
Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools, reporting tools, and dashboards (e.g., Tableau, Power BI)
Bachelor's degree in Industrial Engineering, Operations Research, Applied Mathematics, Data Science, or a related quantitative discipline, or equivalent work experience.
5+ years of experience in strategy, operations planning, finance or continuous improvement, ideally with airline network planning
Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation.
Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment.
Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders.
A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement.
Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations
Salary Range: $131,500 - $177,500
Financial offer within the stated range will be based on multiple factors to include but not limited to location, relevant experience/level and skillset.
The Company is an Equal Opportunity Employer. It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity, or any other characteristic or status protected by applicable federal, state, and local laws.
If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ******************************************
To view our Pay Transparency Statement, please click here: Pay Transparency Statement
“Know Your Rights: Workplace Discrimination is Illegal” Poster
The "EEO Is The Law" Poster
$131.5k-177.5k yearly Auto-Apply 60d+ ago
C++ Market Data Engineer (USA)
Trexquant 4.0
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
* Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
* Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
* Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
* Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
* Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
* Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
* Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
* Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
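The gap-recovery responsibility above can be illustrated with a sequence-number tracker (sketched in Python for brevity, where a production feed handler would implement this on the hot path in C++; the message shape is hypothetical): any jump in the multicast sequence marks a range of lost packets to request from the venue's replay service.

```python
class GapDetector:
    """Track per-feed sequence numbers and record gaps for replay requests.

    The logic mirrors what a C++ feed handler would do inline: compare each
    arriving sequence number against the next expected one, and record any
    skipped range as a gap to recover.
    """
    def __init__(self):
        self.expected = None   # next sequence number we expect
        self.gaps = []         # (first_missing, last_missing) ranges

    def on_packet(self, seq):
        if self.expected is not None and seq > self.expected:
            # packets expected..seq-1 were lost; queue the range for replay
            self.gaps.append((self.expected, seq - 1))
        self.expected = seq + 1

det = GapDetector()
for seq in [1, 2, 3, 7, 8, 12]:
    det.on_packet(seq)
# det.gaps now holds the ranges to request via the venue's replay mechanism
```

A real handler would also bound how long it waits before requesting replay and fall back to a snapshot feed when a gap is too large to recover message-by-message.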
How much does a data engineer earn in Norwalk, CT?
The average data engineer in Norwalk, CT earns between $73,000 and $131,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Norwalk, CT
$98,000
What are the biggest employers of Data Engineers in Norwalk, CT?
The biggest employers of Data Engineers in Norwalk, CT are: