Data Engineer - Data Integration, IBM Corporation, Armonk, NY and various unanticipated client sites throughout the US: Manage the end-to-end delivery of data migration projects, implementing ETL/ELT concepts and leveraging ETL tools such as Informatica and DataStage, and cloud platforms like Google Cloud.
Design and build end-to-end data pipelines to extract, integrate, transform, and load data from diverse source systems into target environments such as databases, data warehouses, or data marts.
Collaborate with clients to define data mapping and transformation rules, ensuring accurate application prior to loading.
Normalize data and establish relational structures to support system migrations.
Develop processes for data cleaning, filtering, aggregation, and augmentation to maintain data integrity.
Implement validation checks and data quality controls to ensure accuracy and consistency across systems.
Create, maintain, and optimize SQL procedures, functions, triggers, and ETL/ELT processes.
Develop, debug, and maintain ETL jobs while applying query optimization techniques, such as indexing, clustering, partitioning, and use of analytical functions, to enhance performance on large datasets.
Partner with data analysts, data scientists, and business stakeholders to understand requirements and ensure delivery of the right data.
Capture fallouts and prepare reports using Excel, Power BI, Looker, Crystal Reports, etc.
Perform root cause analysis and resolution.
Ensure the stability and efficiency of data pipelines through regular monitoring, troubleshooting, and performance optimization.
Maintain thorough and up-to-date documentation of all data integration processes, pipelines, and architectures.
Analyze current trends, tools, and technologies in data engineering and integration.
Utilize: Google Cloud Platform (Google BigQuery, Cloud Storage, Google Looker), Procedural Language/Structured Query Language (PL/SQL), Informatica, DataStage, Data Integration, Data Warehousing, Database Design / Modelling, Data Visualization (Power BI / Crystal Reports).
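The validation checks and data-quality controls described above can be sketched in a few lines. This is a minimal illustration in plain Python, not IBM's actual tooling; the rule names, fields, and thresholds are invented for the example:

```python
# Minimal data-quality gate for an ETL load step: validate each row,
# load the clean ones, and capture fallout for reporting.
# Rule names and field expectations are illustrative assumptions.

def validate_row(row):
    """Return a list of rule names the row violates (empty = clean)."""
    failures = []
    if not row.get("customer_id"):
        failures.append("missing_customer_id")
    if row.get("amount") is not None and row["amount"] < 0:
        failures.append("negative_amount")
    if row.get("country") not in {"US", "CA", "MX"}:
        failures.append("unknown_country")
    return failures

def split_batch(rows):
    """Partition a batch into loadable rows and fallout for reporting."""
    clean, fallout = [], []
    for row in rows:
        failures = validate_row(row)
        (clean if not failures else fallout).append((row, failures))
    return [r for r, _ in clean], fallout

rows = [
    {"customer_id": "C1", "amount": 10.0, "country": "US"},
    {"customer_id": "",   "amount": -5.0, "country": "FR"},
]
loadable, fallout = split_batch(rows)
```

In a real pipeline the fallout list would feed the Excel/Power BI fallout reports the posting mentions, with each failed rule as a reporting dimension.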
Required: Master's degree or equivalent in Computer Science or a related field (employer will accept a bachelor's degree plus five (5) years of progressive experience in lieu of a master's degree) and one (1) year of experience as a Data Engineer or in a related role.
The one (1) year of experience must include utilizing Google Cloud Platform (Google BigQuery, Cloud Storage, Google Looker), Procedural Language/Structured Query Language (PL/SQL), Informatica, DataStage, Data Integration, Data Warehousing, Database Design / Modelling, Data Visualization (Power BI / Crystal Reports).
$167,835 to $216,700 per year.
Please send resumes to
Applicants must reference D185 in the subject line.
Keywords: Data Engineer. Location: North Castle, NY 10504.
Data Architect - Power & Utilities - Senior Manager- Consulting - Location OPEN
Ernst & Young Oman 4.7
Data engineer job in Stamford, CT
At EY, we're all in to shape your future with confidence.
We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
AI & Data - Data Architecture - Senior Manager - Power & Utilities Sector
EY is seeking a motivated Senior Manager with solid experience in the utilities sector and a robust background in data architecture, data modernization, end-to-end data capabilities, AI, Gen AI, and Agentic AI, preferably with a power systems / electrical engineering background and a record of delivering business use cases across Transmission, Distribution, Generation, and Customer. The ideal candidate will have a history of working for consulting companies and be well versed in the fast-paced culture of consulting work. This role is dedicated to the utilities sector, where the successful candidate will design, deploy, and maintain large-scale, AI-ready data architectures.
The opportunity
You will help our clients enable better business outcomes while working in the rapidly growing Power & Utilities sector. You will have the opportunity to lead and develop your skill set to keep up with the ever-growing demands of the modern data platform. During implementation you will solve complex analytical problems to bring data to insights and enable the use of ML and AI at scale for your clients. This is a high growth area and a high visibility role with plenty of opportunities to enhance your skillset and build your career.
As a Senior Manager in Data Architecture, you will have the opportunity to lead transformative technology projects and programs that align with our organizational strategy to achieve impactful outcomes. You will provide assurance to leadership by managing timelines, costs, and quality, and lead both technical and non-technical project teams in the development and implementation of cutting-edge technology solutions and infrastructure. You will have the opportunity to be face to face with external clients and build new and existing relationships in the sector. Your specialized knowledge in project and program delivery methods, including Agile and Waterfall, will be instrumental in coaching others and proposing solutions to technical constraints.
Your key responsibilities
In this pivotal role, you will be responsible for the effective management and delivery of one or more processes, solutions, and projects, with a focus on quality and effective risk management. You will drive continuous process improvement and identify innovative solutions through research, analysis, and best practices. Managing professional employees or supervising team members to deliver complex technical initiatives, you will apply your depth of expertise to guide others and interpret internal/external issues to recommend quality solutions. Your responsibilities will include:
As Data Architect - Senior Manager, you will have an expert understanding of data architecture and data engineering, with a focus on problem-solving to design, architect, and present findings and solutions, leading more junior team members, and working with a wide variety of clients to sell and lead the delivery of technology consulting services. You will be the go-to resource for understanding our clients' problems and responding with appropriate methodologies and solutions anchored around data architectures, platforms, and technologies. You are responsible for helping to win new business for EY. You are a trusted advisor with a broad understanding of digital transformation initiatives, the analytic technology landscape, industry trends, and client motivations. You are also a charismatic communicator and thought leader, capable of going toe-to-toe with the C-level at our clients and prospects and willing and able to constructively challenge them.
Skills and attributes for success
To thrive in this role, you will need a combination of technical and business skills that will make a significant impact. Your skills will include:
Technical Skills
Applications Integration
Cloud Computing and Cloud Computing Architecture
Data Architecture Design and Modelling
Data Integration and Data Quality
AI/Agentic AI driven data operations
Experience delivering business use cases in Transmission / Distribution / Generation / Customer.
Strong relationship management and business development skills.
Become a trusted advisor to your clients' senior decision makers and internal EY teams by establishing credibility and expertise in both data strategy in general and in the use of analytic technology solutions to solve business problems.
Engage with senior business leaders to understand and shape their goals and objectives and their corresponding information needs and analytic requirements.
Collaborate with cross-functional teams (Data Scientists, Business Analysts, and IT teams) to define data requirements, design solutions, and implement data strategies that align with our clients' objectives.
Organize and lead workshops and design sessions with stakeholders, including clients, team members, and cross-functional partners, to capture requirements, understand use cases, personas, key business processes, brainstorm solutions, and align on data architecture strategies and projects.
Lead the design and implementation of modern data architectures, supporting transactional, operational, analytical, and AI solutions.
Direct and mentor global data architecture and engineering teams, fostering a culture of innovation, collaboration, and continuous improvement.
Establish data governance policies and practices, including data security, quality, and lifecycle management.
Stay abreast of industry trends and emerging technologies in data architecture and management, recommending innovations and improvements to enhance our capabilities.
To qualify for the role, you must have
A Bachelor's degree in a STEM field is required.
12+ years professional consulting experience in industry or in technology consulting.
12+ years hands-on experience in architecting, designing, delivering or optimizing data lake solutions.
5+ years' experience with native cloud products and services such as Azure or GCP.
8+ years of experience mentoring and leading teams of data architects and data engineers, fostering a culture of innovation and professional development.
In-depth knowledge of data architecture principles and best practices, including data modelling, data warehousing, data lakes, and data integration.
Demonstrated experience in leading large data engineering teams to design and build platforms with complex architectures and diverse features, including various data flow patterns, relational and NoSQL databases, production-grade performance, and delivery to downstream use cases and applications.
Hands-on experience in designing end-to-end architectures and pipelines that collect, process, and deliver data to its destination efficiently and reliably.
Proficiency in data modelling techniques and the ability to choose appropriate architectural design patterns, including Data Fabrics, Data Mesh, Lake Houses, or Delta Lakes.
Manage complex data analysis, migration, and integration of enterprise solutions to modern platforms, including code efficiency and performance optimizations.
Previous hands-on coding skills in languages commonly used in data engineering, such as Python, Java, or Scala.
Ability to design data solutions that can scale horizontally and vertically while optimizing performance.
Experience with containerization technologies like Docker and container orchestration platforms like Kubernetes for managing data workloads.
Experience with version control systems (e.g., Git) and knowledge of DevOps practices for automating data engineering workflows (DataOps).
Practical understanding of data encryption, access control, and security best practices to protect sensitive data.
Experience leading Infrastructure and Security engineers and architects in overall platform build.
Excellent leadership, communication, and project management skills.
Data Security and Database Management
Enterprise Data Management and Metadata Management
Ontology Design and Systems Design
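The Lakehouse / Delta Lake patterns named in the qualifications typically organize data into layered bronze/silver/gold stages. The sketch below is a toy illustration of that layering in plain Python, not EY's methodology; in practice each layer would be a governed table (e.g., Delta Lake), and the source names and fields here are invented:

```python
# Toy bronze/silver/gold layering, a common lakehouse-style pattern:
# land raw data, clean and conform it, then aggregate for consumption.

def bronze(raw_events):
    """Land raw records as-is, tagging ingestion metadata."""
    return [dict(e, _source="meter_feed") for e in raw_events]

def silver(bronze_rows):
    """Clean and conform: drop malformed rows, normalize types."""
    out = []
    for r in bronze_rows:
        if r.get("kwh") is None:
            continue  # quarantine malformed readings
        out.append({"meter": r["meter"], "kwh": float(r["kwh"])})
    return out

def gold(silver_rows):
    """Aggregate to a consumption-per-meter mart for analytics."""
    totals = {}
    for r in silver_rows:
        totals[r["meter"]] = totals.get(r["meter"], 0.0) + r["kwh"]
    return totals

raw = [{"meter": "M1", "kwh": "1.5"}, {"meter": "M1", "kwh": "2.5"},
       {"meter": "M2", "kwh": None}]
mart = gold(silver(bronze(raw)))
```

The design point is that each layer has a single responsibility, so data quality issues can be traced to the stage where they were introduced.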
Ideally, you'll also have
Master's degree in Electrical / Power Systems Engineering, Computer science, Statistics, Applied Mathematics, Data Science, Machine Learning or commensurate professional experience.
Experience working at big 4 or a major utility.
Experience with cloud data platforms like Databricks.
Experience in leading and influencing teams, with a focus on mentorship and professional development.
A passion for innovation and the strategic application of emerging technologies to solve real-world challenges.
The ability to foster an inclusive environment that values diverse perspectives and empowers team members.
Building and Managing Relationships
Client Trust and Value and Commercial Astuteness
Communicating With Impact and Digital Fluency
What we look for
We are looking for top performers who demonstrate a blend of technical expertise and business acumen, with the ability to build strong client relationships and lead teams through change. Emotional agility and hybrid collaboration skills are key to success in this dynamic role.
FY26NATAID
What we offer you
At EY, we'll develop you with future-focused skills and equip you with world-class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more.
We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $144,000 to $329,100. The base salary range for New York City Metro Area, Washington State and California (excluding Sacramento) is $172,800 to $374,000. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
Join us in our team‑led and leader‑enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40-60% of the time over the course of an engagement, project or year.
Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well‑being.
Are you ready to shape your future with confidence? Apply today.
EY accepts applications for this position on an on‑going basis.
For those living in California, please click here for additional information.
EY focuses on high‑ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1-800-EY-HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at ************************** .
Data Scientist
Gartner 4.7
Data engineer job in Stamford, CT
About this role
In Gartner's Services Data Science team, we innovate the way we help clients realize value, so technology leaders can make smarter decisions.
We are searching for a talented data scientist to join our team. You will have access to the best facilities, technology and expertise within the industry and will work on challenging business problems. This is an excellent opportunity to be part of a new venture, in a start-up environment where you can truly develop your skill set and knowledge and bring impact to the team.
What you'll do
* Design and implement state-of-the-art Large Language Model (LLM) based agents that seamlessly synthesize complex information and initiate important actions in a business workflow.
* Use advanced Generative AI techniques to derive actionable insights from unstructured text data, such as call transcripts and emails.
* Predict client interest based on their digital footprint and make relevant recommendations to drive higher client value delivery.
* Leverage statistical and machine learning techniques to extract actionable insights from client retention data.
* Develop customer churn prediction models that proactively identify at-risk clients.
* Build tools to process structured and unstructured data.
* Engineer features and signals to train ML models from diverse data collections.
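A churn model like the one described typically starts from engineered recency/frequency features per client. The sketch below is a hypothetical feature-engineering step in plain Python; the field names and the 30-day at-risk threshold are invented for illustration, and a real model would learn such a boundary rather than hard-code it:

```python
# Turn raw interaction events into per-client recency/frequency
# features, the usual inputs to a churn-prediction model.

from datetime import date

def client_features(events, as_of):
    """events: list of {"client": str, "day": date} interactions."""
    feats = {}
    for e in events:
        f = feats.setdefault(e["client"], {"n_events": 0, "last_day": None})
        f["n_events"] += 1
        if f["last_day"] is None or e["day"] > f["last_day"]:
            f["last_day"] = e["day"]
    for f in feats.values():
        f["days_since_last"] = (as_of - f.pop("last_day")).days
        # crude at-risk flag; a trained model would replace this rule
        f["at_risk"] = f["days_since_last"] > 30
    return feats

events = [
    {"client": "A", "day": date(2024, 1, 1)},
    {"client": "A", "day": date(2024, 1, 20)},
    {"client": "B", "day": date(2023, 11, 1)},
]
feats = client_features(events, as_of=date(2024, 2, 1))
```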
What you'll need
* BS required, MS preferred, in Computer Science or another technology field, Math, Physics, Statistics, or Economics (focus on Natural Language Processing or Information Retrieval a plus)
* 4 years' experience applying data science methodologies to live initiatives or software development; experience working on Gen AI projects
* Minimum 4+ years of experience in Python coding and statistical analysis
* Minimum 2 years working experience in several of the following:
o Prompt Engineering and working with LLMs
o Machine Learning and statistical techniques
o Data mining and recommendation systems
o Natural Language Processing and Information Retrieval
o Experience working with large volumes of data
o User behavior modeling
Who you are
* A team player. You get along well with your colleagues and are always ready to help get things done. You enjoy working on projects with multiple people and share knowledge.
* Passionate about learning. You thrive on complex technical challenges and are always eager to learn the latest technologies.
* Organized and detail-oriented. You think ahead of time about how best to implement new features, and your code is clean, well-organized and properly documented.
* Innovative. You are always proactively looking for opportunities to problem solve using innovative methods that impact the business
What we offer
* A collaborative, positive culture. You'll work with people who are as enthusiastic, smart and driven as you are. You'll be managed by the best too.
* Limitless growth and learning opportunities. We offer the excitement of a fast-paced entrepreneurial workplace and the professional growth opportunities of an established global organization.
About Gartner:
Gartner, Inc. (NYSE: IT) is the world's leading information technology research and advisory company. We deliver the technology-related insight necessary for our clients to make the right decisions, every day. We work with every client to research, analyze and interpret the business of IT within the context of their individual role. Founded in 1979, Gartner is headquartered in Stamford, Connecticut, U.S.A. Visit gartner.com to learn more.
Diversity, inclusion and engagement at Gartner:
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, caste, creed, religion, sex, sexual orientation, gender identity or expression, marital status, citizenship status, age, national origin, ancestry, disability, or any other characteristic protected by applicable law. Gartner affirmatively seeks to advance the principles of equal employment opportunity and values diversity and inclusion.
Gartner is an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified applicant with a disability and unable to or limited in your ability to use or access the Gartner's career webpage as a result of your disability, you may request reasonable accommodations by calling Human Resources at or by sending an email to
Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world.
Our mission relies on expert analysis and bold ideas to deliver actionable, objective business and technology insights, helping enterprise leaders and their teams succeed with their mission-critical priorities.
Since our founding in 1979, we've grown to 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.
What makes Gartner a great place to work?
Our vast, virtually untapped market potential offers limitless opportunities - opportunities that may not even exist right now - for you to grow professionally and flourish personally. How far you go is driven by your passion and performance.
We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients.
Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations.
We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.
What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers.
In our hybrid work environment, we provide the flexibility and support for you to thrive - working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring.
Ready to grow your career with Gartner? Join us.
Gartner believes in fair and equitable pay. A reasonable estimate of the base salary range for this role is 98,000 USD - 133,000 USD. Please note that actual salaries may vary within the range, or be above or below the range, based on factors including, but not limited to, education, training, experience, professional achievement, business need, and location. In addition to base salary, employees will participate in either an annual bonus plan based on company and individual performance, or a role-based, uncapped sales incentive plan. Our talent acquisition team will provide the specific opportunity on our bonus or incentive programs to eligible candidates. We also offer market leading benefit programs including generous PTO, a 401k match up to $7,200 per year, the opportunity to purchase company stock at a discount, and more.
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity.
Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at or by sending an email .
Job Requisition ID:106172
By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: applicant-privacy-policy
For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
Data Architect
Novocure Inc. 4.6
Data engineer job in New Haven, CT
We are seeking an experienced and innovative Data Architect to lead the design, development, and optimization of our enterprise data architecture. This individual will play a critical role in aligning data strategy with business objectives, ensuring data integrity, and driving value from data across multiple platforms. The ideal candidate will have deep expertise in data architecture best practices and technologies, particularly across SAP S/4 HANA, Veeva CRM, Veeva Vault, SaaS platforms, Operational Data Stores (ODS), and Master Data Management (MDM) platforms.
This is a full-time position reporting to the Director, Enterprise Architecture.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
Design, develop, and maintain scalable and secure enterprise data architecture solutions across SAP S/4 HANA, Veeva CRM, and Veeva Vault environments.
Serve as a subject matter expert for Operational Data Stores and Master Data Management architecture, ensuring clean, consistent, and governed data across the enterprise.
Collaborate with cross-functional teams to identify data needs, establish data governance frameworks, and define data integration strategies.
Develop data models, data flows, and system integration patterns that support enterprise analytics and reporting needs.
Evaluate and recommend new tools, platforms, and methodologies for improving data management capabilities.
Ensure architectural alignment with data privacy, regulatory, and security standards.
Provide leadership and mentoring to data engineering and analytics teams on best practices in data modeling, metadata management, and data lifecycle management.
Contribute to data governance initiatives by enforcing standards, policies, and procedures for enterprise data.
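A core MDM pattern behind the duties above is the "golden record": reconciling duplicate records from multiple systems using per-field source-of-truth rules. The sketch below is a plain-Python illustration; the source names, fields, and precedence order are invented, and a real MDM platform would add matching, survivorship auditing, and stewardship workflows:

```python
# Illustrative golden-record merge for one entity's duplicate records.
# PRECEDENCE says which source system wins for each field.

PRECEDENCE = {"address": ["S4HANA", "VEEVA_CRM"],
              "email":   ["VEEVA_CRM", "S4HANA"]}

def golden_record(records):
    """records: list of {"source": str, <field>: value} for one entity."""
    by_source = {r["source"]: r for r in records}
    golden = {}
    for field, sources in PRECEDENCE.items():
        for src in sources:
            value = by_source.get(src, {}).get(field)
            if value:  # first non-empty value in precedence order wins
                golden[field] = value
                break
    return golden

dupes = [
    {"source": "S4HANA",    "address": "1 Main St", "email": ""},
    {"source": "VEEVA_CRM", "address": "1 Main Street",
     "email": "kim@example.com"},
]
master = golden_record(dupes)
```

Field-level precedence (rather than picking one whole winning record) is what lets the merged master take its address from the ERP and its email from the CRM.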
QUALIFICATIONS/KNOWLEDGE:
Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
Minimum of 8+ years of experience in data architecture, data integration, or enterprise data management roles.
Proven experience in designing and implementing data solutions on SAP S/4 HANA, including integration with other enterprise systems.
Strong hands-on experience with SaaS platforms, including data extraction, modeling, and harmonization.
Deep understanding of Operational Data Stores and MDM design patterns, implementation, and governance practices.
Proficiency in data modeling tools (e.g., Erwin, SAP PowerDesigner), ETL tools (e.g., Business Objects Data Services, SAP Data Services), and integration platforms (e.g., MuleSoft).
Familiarity with cloud data architecture (e.g., AWS, Azure, GCP) and hybrid data environments.
Excellent communication and stakeholder management skills.
OTHER:
Experience with pharmaceutical, life sciences, or regulated industry environments.
Knowledge of data privacy regulations such as GDPR, HIPAA, and data compliance frameworks.
Ability to travel domestically and internationally as needed for high-priority projects.
Novocure is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state, or local law. We actively seek qualified candidates who are protected veteran and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Novocure is committed to providing an interview process that is inclusive of our applicant's needs. If you are an individual with a disability and would like to request an accommodation, please email
ABOUT NOVOCURE:
Our vision
Patient-forward: aspiring to make a difference in cancer.
Our patient-forward mission
Together with our patients, we strive to extend survival in some of the most aggressive forms of cancer by developing and commercializing our innovative therapy.
Our patient-forward values
innovation
focus
drive
courage
trust
empathy
Senior Data Engineer
Stratacuity
Data engineer job in Bristol, CT
Description/Comment: Disney Streaming is the leading premium streaming service offering live and on-demand TV and movies, with and without commercials, both in and outside the home. Operating at the intersection of entertainment and technology, Disney Streaming has a unique opportunity to be the number one choice for TV. We captivate and connect viewers with the stories they love, and we're looking for people who are passionate about redefining TV through innovation, unconventional thinking, and embracing fun. Join us and see what this is all about.
The Product Performance Data Solutions team in the Data organization within Disney Streaming (DS), a segment under Disney Media & Entertainment Distribution, is in search of a Senior Data Engineer. As a member of the Product Performance team, you will build foundational datasets from clickstream and quality-of-service telemetry data, enabling dozens of engineering and analytical teams to unlock the power of data to drive key business decisions and providing engineering, analytics, and operational teams the critical information necessary to scale the largest streaming service. The Product Performance Data Solutions team is seeking to grow its team of world-class Data Engineers who share its charisma and enthusiasm for making a positive impact.
Responsibilities:
* Contribute to maintaining, updating, and expanding existing data pipelines in Python / Spark while maintaining strict uptime SLAs
* Architect, design, and code shared libraries in Scala and Python that abstract complex business logic to allow consistent functionality across all data pipelines
* Tech stack includes Airflow, Spark, Databricks, Delta Lake, Snowflake, Scala, Python
* Collaborate with product managers, architects, and other engineers to drive the success of the Product Performance Data and key business stakeholders
* Contribute to developing and documenting both internal and external standards for pipeline configurations, naming conventions, partitioning strategies, and more
* Ensure high operational efficiency and quality of datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our partners (Engineering, Data Science, Operations, and Analytics teams)
* Be an active participant and advocate of agile/scrum ceremonies to collaborate and improve processes for our team
* Engage with and understand our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements
* Maintain detailed documentation of your work and changes to support data quality and data governance requirements
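The partitioning-strategy standards in the responsibilities above usually boil down to a deterministic mapping from each event to a partition path. The sketch below shows such a convention in plain Python; in production this would be Spark writing Delta Lake partitions, and the bucket name, path layout, and fields here are invented:

```python
# Sketch of a date/hour partitioning convention for clickstream
# events: each event maps deterministically to one partition path,
# and a batch is grouped by partition before writing.

from datetime import datetime, timezone

def partition_path(event, root="s3://bucket/clickstream"):
    """Derive a dt=YYYY-MM-DD/hour=HH partition from the event timestamp."""
    ts = datetime.fromtimestamp(event["ts"], tz=timezone.utc)
    return f"{root}/dt={ts:%Y-%m-%d}/hour={ts:%H}"

def bucket_events(events):
    """Group events by their target partition for a batched write."""
    parts = {}
    for e in events:
        parts.setdefault(partition_path(e), []).append(e)
    return parts

events = [{"ts": 0, "page": "home"},        # 1970-01-01 00:00 UTC
          {"ts": 3600, "page": "detail"}]   # 1970-01-01 01:00 UTC
parts = bucket_events(events)
```

Keeping the mapping pure and timezone-pinned (UTC here) is what makes backfills and late-data reprocessing land in the same partitions as the original run.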
Additional Information: Note: there will be no SPC for this role. Interview process: 4 rounds (1 with the HM, 2 tech rounds, and a final with Product). We need an expert in SQL with extensive Scala experience and a proven self-starter (expected to discover the outcome, then chase after it), able not only to speak technical but to clearly articulate that information to the business as well.
Preferred Qualifications: Candidates with clickstream and user-browse data experience are highly preferred.
Apex Systems is a world-class IT services company that serves thousands of clients across the globe. When you join Apex, you become part of a team that values innovation, collaboration, and continuous learning. We offer quality career resources, training, certifications, development opportunities, and a comprehensive benefits package. Our commitment to excellence is reflected in many awards, including ClearlyRated's Best of Staffing in Talent Satisfaction in the United States and Great Place to Work in the United Kingdom and Mexico. Apex uses a virtual recruiter as part of the application process. Click here for more details.
Apex Benefits Overview: Apex offers a range of supplemental benefits, including medical, dental, vision, life, disability, and other insurance plans that offer an optional layer of financial protection. We offer an ESPP (employee stock purchase program) and a 401K program which allows you to contribute typically within 30 days of starting, with a company match after 12 months of tenure. Apex also offers a HSA (Health Savings Account on the HDHP plan), a SupportLinc Employee Assistance Program (EAP) with up to 8 free counseling sessions, a corporate discount savings program and other discounts. In terms of professional development, Apex hosts an on-demand training program, provides access to certification prep and a library of technical and leadership courses/books/seminars once you have 6+ months of tenure, and certification discounts and other perks to associations that include CompTIA and IIBA. Apex has a dedicated customer service team for our Consultants that can address questions around benefits and other resources, as well as a certified Career Coach. You can access a full list of our benefits, programs, support teams and resources within our 'Welcome Packet' as well, which an Apex team member can provide.
Employee Type:
Contract
Location:
Bristol, CT, US
Job Type:
Date Posted:
January 8, 2026
Pay Range:
$50 - $100 per hour
Similar Jobs
* Senior Data Engineer
* Sr Data Engineers x12
* Senior Data Scientist
* Senior Data Engineer - SQL & Reporting
* Data Engineer
$50-100 hourly 3d ago
Senior Data Engineer - Product Performance Data -1573
Akube
Data engineer job in Bristol, CT
City: Bristol, CT / NYC
Onsite/Hybrid/Remote: Hybrid (4 days a week onsite)
Duration: 10 months
Rate Range: Up to $96/hr on W2, depending on experience (no C2C, 1099, or subcontract)
Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B
Must Have:
Advanced SQL expertise
Strong Scala development experience
Python for data engineering
Apache Spark in production
Airflow for orchestration
Databricks platform experience
Cloud data storage experience (S3 or equivalent)
Responsibilities:
Build and maintain large-scale data pipelines with strict SLAs.
Design shared libraries in Scala and Python to standardize data logic.
Develop foundational datasets from clickstream and telemetry data.
Ensure data quality, reliability, and operational efficiency.
Partner with product, engineering, and analytics teams.
Define and document data standards and best practices.
Participate actively in Agile and Scrum ceremonies.
Communicate technical outcomes clearly to business stakeholders.
Maintain detailed technical and data governance documentation.
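The data-quality and SLA responsibilities above can be sketched in miniature. The following is purely illustrative (the field names, the 24-hour SLA window, and the sample rows are invented, not taken from the posting):

```python
# Minimal sketch of a data-quality gate a pipeline like this might run before
# publishing a dataset. All names and thresholds are illustrative.
from datetime import datetime, timedelta, timezone

def quality_gate(rows, required_fields, max_age_hours=24):
    """Return a list of human-readable failures; an empty list means pass."""
    failures = []
    if not rows:
        failures.append("dataset is empty")
        return failures
    # Completeness check: every required field must be populated.
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append(f"row {i}: missing {field!r}")
    # Freshness check: the newest event must fall inside the SLA window.
    newest = max(row["event_time"] for row in rows)
    if datetime.now(timezone.utc) - newest > timedelta(hours=max_age_hours):
        failures.append("data is stale: newest event older than SLA window")
    return failures

rows = [
    {"user_id": "u1", "event": "click", "event_time": datetime.now(timezone.utc)},
    {"user_id": None, "event": "view", "event_time": datetime.now(timezone.utc)},
]
print(quality_gate(rows, required_fields=["user_id", "event"]))
# one failure reported: row 1 is missing user_id
```

In practice checks like these would run inside the orchestrator before downstream consumers see the data, rather than as a standalone script.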
Qualifications:
5+ years of data engineering experience.
Strong problem-solving and algorithmic skills.
Expert-level SQL with complex analytical queries.
Hands-on experience with distributed systems at scale.
Experience supporting production data platforms.
Self-starter who can define outcomes and drive solutions.
Ability to translate technical concepts for non-technical audiences.
Bachelor's degree or equivalent experience.
$96 hourly 3d ago
Data Engineer I
Epicured, Inc.
Data engineer job in Glen Cove, NY
Job DescriptionWhy Epicured?
Epicured is on a mission to combat and prevent chronic disease, translating scientific research into high-quality food products and healthcare services nationwide. Our evidence-based approach brings together the best of the clinical, culinary, and technology worlds to help people eat better, feel better, and live better one meal at a time.
By joining Epicured's Technology team, you'll help power the data infrastructure that supports Medicaid programs, clinical services, life sciences initiatives, and direct-to-consumer operations - enabling better decisions, better outcomes, and scalable growth.
Role Overview
Epicured is seeking a Data Engineer I to support data ingestion, reporting, and analytics across multiple business lines. Reporting to the SVP of Software Engineering, this role will focus on building and maintaining reliable reporting pipelines, supporting business requests, and managing data from a growing ecosystem of healthcare, operational, and e-commerce systems.
This position is ideal for a self-starter with strong SQL skills who is comfortable working with evolving requirements, healthcare-adjacent data, and modern data platforms such as Microsoft Fabric and Power BI.
Key Responsibilities
Build, maintain, and support reports across all Epicured business lines using Power BI, exports, and Microsoft Fabric.
Ingest and integrate new data sources, including SQL Server, operational systems, and external data exchanges.
Support reporting and analytics requests across Clinical & Life Sciences, Section 1115 Medicaid Waiver programs, Health Information Exchanges (e.g., Healthix), and Self-Pay e-commerce operations.
Handle HIPAA-sensitive data, ensuring proper governance, access control, and compliance standards are maintained.
Manage Shopify and other e-commerce data requests for Epicured's Self-Pay division.
Keep reporting environments organized, documented, and operational while prioritizing incoming requests.
Operate and help scale Epicured's Microsoft Fabric environment, contributing to platform strategy and best practices.
Partner with stakeholders to clarify ambiguous requirements and translate business questions into data solutions.
Qualifications
3+ years of experience in data engineering, analytics, or business intelligence roles.
Strong SQL skills with experience working in relational databases.
Experience with Azure, Microsoft Fabric, Power BI, or similar modern data platforms.
Strong proficiency in Excel / Google Sheets.
Ability to work independently and manage multiple priorities in a fast-growing environment.
Experience working with healthcare or HIPAA-adjacent data, including exposure to health information exchanges.
Familiarity with ETL / ELT pipelines and data modeling best practices.
Experience integrating operational, financial, logistics, and clinical datasets.
Preferred Qualifications
Experience with C#.
Python experience is a plus.
Healthcare or life sciences background.
Experience supporting analytics for Medicaid, payer, or regulated environments.
Compensation & Benefits
Salary Range: $115,000-$130,000 annually, commensurate with experience
Benefits include:
401(k)
Health, Dental, and Vision insurance
Unlimited Paid Time Off (PTO)
Opportunity to grow with Epicured's expanding data and technology organization
Equal Employment Opportunity
Epicured is proud to be an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of age, race, creed, color, national origin, religion, gender, sexual orientation, gender identity or expression, disability, veteran status, or any other protected status under federal, state, or local law.
$115k-130k yearly 14d ago
Data Engineer with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Intermedia Group
Data engineer job in Ridgefield, CT
OPEN JOB: Data Engineer with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
HYBRID: This candidate will work on site 2-3x per week at the Ridgefield, CT location.
SALARY: $140,000 to $185,000
2 Openings
NOTE: CANDIDATE MUST BE US CITIZEN OR GREEN CARD HOLDER
We are seeking a highly skilled and experienced Data Engineer to design, build, and maintain our scalable and robust data infrastructure on a cloud platform. In this pivotal role, you will be instrumental in enhancing our data infrastructure, optimizing data flow, and ensuring data availability. You will be responsible for both the hands-on implementation of data pipelines and the strategic design of our overall data architecture.
Seeking a candidate with hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation; proficiency in Python and SQL; and DevOps/CI/CD experience.
Duties & Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
Collaborate with data architects, modelers and IT team members to help define and evolve the overall cloud-based data architecture strategy, including data warehousing, data lakes, streaming analytics, and data governance frameworks
Collaborate with data scientists, analysts, and other business stakeholders to understand data requirements and deliver solutions.
Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift) ensuring data quality, integrity, security, and accessibility.
Implement data quality and validation processes to ensure data accuracy and reliability.
Develop and maintain documentation for data processes, architecture, and workflows.
Monitor and troubleshoot data pipeline performance and resolve issues promptly.
Consulting and Analysis: Meet regularly with defined clients and stakeholders to understand and analyze their processes and needs. Determine requirements to present possible solutions or improvements.
Technology Evaluation: Stay updated with the latest industry trends and technologies to continuously improve data engineering practices.
Requirements
Cloud Expertise: Expert-level proficiency in at least one major cloud platform (AWS, Azure, or GCP) with extensive experience in their respective data services (e.g., AWS S3, Glue, Lambda, Redshift, Kinesis; Azure Data Lake, Data Factory, Synapse, Event Hubs; GCP BigQuery, Dataflow, Pub/Sub, Cloud Storage); experience with AWS data cloud platform preferred
SQL Mastery: Advanced SQL writing and optimization skills.
Data Warehousing: Deep understanding of data warehousing concepts, Kimball methodology, and various data modeling techniques (dimensional, star/snowflake schemas).
Big Data Technologies: Experience with big data processing frameworks (e.g., Spark, Hadoop, Flink) is a plus.
Database Systems: Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
DevOps/CI/CD: Familiarity with DevOps principles and CI/CD pipelines for data solutions.
Hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Proficiency in Python and SQL
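As a small, runnable illustration of the dimensional-modeling concepts listed above (star schemas, fact and dimension tables), here is a toy example using Python's built-in sqlite3 module; all table and column names are invented for the example:

```python
# Illustrative star schema: one dimension table and one fact table keyed to it,
# queried with the join-and-aggregate pattern typical of dimensional models.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales (product_key INTEGER, sale_date TEXT, amount REAL);

    INSERT INTO dim_product VALUES (1, 'widget', 'hardware'), (2, 'ebook', 'digital');
    INSERT INTO fact_sales VALUES (1, '2024-01-02', 10.0), (1, '2024-01-03', 9.5),
                                  (2, '2024-01-03', 4.5);
    """
)
# A typical star-schema query: join the fact to its dimension and aggregate.
rows = conn.execute(
    """
    SELECT p.category, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.category ORDER BY p.category
    """
).fetchall()
print(rows)  # [('digital', 4.5), ('hardware', 19.5)]
conn.close()
```

A real warehouse would add surrogate-key management, slowly changing dimensions, and many more facts, but the fact-joins-dimension shape is the same.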
Desired Skills, Experience and Abilities
4+ years of progressive experience in data engineering, with a significant portion dedicated to cloud-based data platforms.
ETL/ELT Tools: Hands-on experience with ETL/ELT tools and orchestrators (e.g., Apache Airflow, Azure Data Factory, AWS Glue, dbt).
Data Governance: Understanding of data governance, data quality, and metadata management principles.
AWS Experience: Ability to evaluate AWS cloud applications, make architecture recommendations; AWS solutions architect certification (Associate or Professional) is a plus
Familiarity with Snowflake
Knowledge of dbt (data build tool)
Strong problem-solving skills, especially in data pipeline troubleshooting and optimization
If you are interested in pursuing this opportunity, please respond back and include the following:
Full CURRENT Resume
Required compensation
Contact information
Availability
Upon receipt, one of our managers will contact you to discuss in full
STEPHEN FLEISCHNER
Recruiting Manager
INTERMEDIA GROUP, INC.
EMAIL: *******************************
$140k-185k yearly Easy Apply 60d+ ago
Data Engineer (AI, ML, and Data Science)
Consumer Reports
Data engineer job in Yonkers, NY
WHO WE ARE
Consumer Reports is an independent, nonprofit organization dedicated to a fair and just marketplace for all. CR is known for our rigorous testing and trusted ratings on thousands of products and services. We report extensively on consumer trends and challenges, and survey millions of people in the U.S. each year. We leverage our evidence-based approach to advocate for consumer rights, working with policymakers and companies to find solutions for safer products and fair practices.
Our mission starts with you. We offer medical benefits that start on your first day as a CR employee that include behavioral health coverage, family planning and a generous 401K match. Learn more about how CR advocates on behalf of our employees.
OVERVIEW
Data powers everything we do at CR-and it's the foundation for our AI and machine learning efforts that are transforming how we serve consumers.
The Data Engineer (AI/ML & Data Science) will play a critical role in building the data infrastructure that powers advanced AI applications, machine learning models, and analytics systems across CR. Reporting to the Associate Director, AI/ML & Data Science, you will design and maintain robust data pipelines and services that support experimentation, model training, and AI application deployment.
If you're passionate about solving complex data challenges, working with cutting-edge AI technologies, and enabling impactful, data-driven products that support CR's mission, this is the role for you.
This is a hybrid position. This position is not eligible for sponsorship or relocation assistance.
How You'll Make An Impact
As a mission-based organization, CR and our Software team are pursuing an AI strategy that will drive value for our customers, give our employees superpowers, and address AI harms in the digital marketplace. We're looking for an AI/ML engineer to help us execute on our multi-year roadmap around generative AI.
As a Data Engineer (AI/ML & Data Science) you will:
Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data to support AI/ML model and application development, evaluation, and monitoring.
Build and optimize data processing workflows in Databricks, AWS SageMaker, or similar cloud platforms.
Collaborate with AI/ML engineers to deliver clean, reliable datasets for model training and inference.
Implement data quality, observability, and lineage tracking within the ML lifecycle.
Develop Data APIs/microservices to power AI applications and reporting/analytics dashboards.
Support the deployment of AI/ML applications by building and maintaining feature stores and data pipelines optimized for production workloads.
Ensure adherence to CR's data governance, security, and compliance standards across all AI and data workflows.
Work with Product, Engineering and other stakeholders to define project requirements and deliverables.
Integrate data from multiple internal and external systems, including APIs, third-party datasets, and cloud storage.
ABOUT YOU
You'll Be Highly Rated If:
You have the experience. You have 3+ years of experience designing and developing data pipelines, data models/schemas, APIs, or services for analytics or ML workloads.
You have the education. You've earned a Bachelor's degree in Computer Science, Engineering, or a related field.
You have programming skills. You are skilled in Python, SQL, and have experience with PySpark on large-scale datasets.
You have experience with data orchestration tools such as Airflow, dbt and Prefect, plus CI/CD pipelines for data delivery.
You have experience with Data and AI/ML platforms such as Databricks, AWS SageMaker or similar.
You have experience working with Kubernetes on cloud platforms such as AWS, GCP, or Azure.
You'll Be One of Our Top Picks If:
You are passionate about automation and continuous improvement.
You have excellent documentation and technical communication skills.
You are an analytical thinker with troubleshooting abilities.
You are self-driven and proactive in solving infrastructure bottlenecks.
FAIR PAY AND A JUST WORKPLACE
At Consumer Reports, we are committed to fair, transparent pay and we strive to provide competitive, market-informed compensation. The target salary range for this position is $100K-$120K. It is anticipated that most qualified candidates will fall near the middle of this range. Compensation for the successful candidate will be informed by the candidate's particular combination of knowledge, skills, competencies, and experience. We have three locations: Yonkers, NY, Washington, DC and Colchester, CT. We are registered to do business in and can only hire from the following states and federal district: Arizona, California, Connecticut, Illinois, Maryland, Massachusetts, Michigan, New Hampshire, New Jersey, New York, Texas, Vermont, Virginia and Washington, DC.
Salary ranges
NY/California: $120K-$140K annually
DMV/Massachusetts: $115K-$135K annually
Colchester, CT and additional approved CR locations: $100K-$120K annually
Consumer Reports is an equal opportunity employer and does not discriminate in employment on the basis of actual or perceived race, color, creed, religion, age, national origin, ancestry, citizenship status, sex or gender (including pregnancy, childbirth, related medical conditions or lactation), gender identity and expression (including transgender status), sexual orientation, marital status, military service or veteran status, protected medical condition as defined by applicable state or local law, disability, genetic information, or any other basis protected by applicable federal, state or local laws. Consumer Reports will provide you with any reasonable assistance or accommodation for any part of the application and hiring process.
$120k-140k yearly Auto-Apply 47d ago
Data Engineer
Orion Innovation 3.7
Data engineer job in Montvale, NJ
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.
Role Overview
We're seeking Financial Statement Data Developers/Engineers to support client engagements. You'll work with modern data tooling to transform accounting trial balance data into accurate Income Statements and Balance Sheets. This role blends data engineering, financial data transformation, and client-facing consulting.
Key Responsibilities
Ingest & transform trial balance data from various sources (CSV, Excel, ERP extract, database).
Apply chart of accounts mappings, including entity-specific variations.
Utilize purpose-built applications to aggregate outputs into Income Statement & Balance Sheet formats.
Build repeatable Alteryx workflows, SQL transformations, and Python scripts for automation.
Perform data reconciliation, tie‑outs, variance checks, and data quality controls across entities and periods.
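The reconciliation and tie-out work described above can be sketched as a totals comparison between a source extract and the transformed output. This is an illustrative sketch only; the entity names, amounts, and tolerance are hypothetical:

```python
# Toy tie-out: totals by (entity, period) in the source extract must match the
# transformed target within a small tolerance; any difference is a "break".
from collections import defaultdict

def totals_by_key(rows):
    """Sum amounts grouped by (entity, period)."""
    sums = defaultdict(float)
    for r in rows:
        sums[(r["entity"], r["period"])] += r["amount"]
    return dict(sums)

def tie_out(source_rows, target_rows, tolerance=0.01):
    """Return a dict of (entity, period) -> variance for any breaks found."""
    src, tgt = totals_by_key(source_rows), totals_by_key(target_rows)
    breaks = {}
    for key in set(src) | set(tgt):
        variance = src.get(key, 0.0) - tgt.get(key, 0.0)
        if abs(variance) > tolerance:
            breaks[key] = round(variance, 2)
    return breaks

source = [
    {"entity": "US01", "period": "2024-01", "amount": 1000.00},
    {"entity": "US01", "period": "2024-01", "amount": -250.00},
]
target = [{"entity": "US01", "period": "2024-01", "amount": 700.00}]
print(tie_out(source, target))  # {('US01', '2024-01'): 50.0}
```

In a real engagement the same comparison would typically be done in SQL or Alteryx against full trial-balance tables, with decimal (not float) arithmetic for currency.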
Required Technical Skills
Strong proficiency in SQL, Python and Alteryx.
Hands-on experience in accounting database development, data engineering, and analytics.
Accounting fundamentals: trial balance, chart of accounts, journal entries, Income Statement & Balance Sheet.
Experience converting ledger-level data into financial statements and performing reconciliations.
Strong communication skills and client-facing experience; ability to work onsite when required.
Experience with cloud environments (Azure).
Qualifications & Experience
Education: Bachelor's degree in Computer Science, Accounting or a related field (or equivalent experience).
Experience: 8+ years of hands-on experience with database development and scripting
Preferred nice-to-have skills
Exposure to M&A/Deal Advisory workflows: multi-entity consolidation, intercompany eliminations, pro forma reporting.
Proficiency with Power BI reporting and validation dashboards.
ERP familiarity (e.g., SAP, Oracle).
Big-4 or top tier Professional Services experience is a plus.
Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, “Orion,” “we” or “us”) are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) (“Notice”) explains:
What information we collect during our application and recruitment process and why we collect it;
How we handle that information; and
How to access and update that information.
Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
$93k-133k yearly est. Auto-Apply 7d ago
Junior Data Scientist
Bexorg
Data engineer job in New Haven, CT
About Us
Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment.
The Job
Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments - including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies.
Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively.
Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery.
Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries.
Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts.
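As a toy illustration of the statistical insight generation described above, the sketch below computes a Pearson correlation by hand; the expression and perfusion values are invented for the example and do not come from any real dataset:

```python
# Correlating a gene-expression-like measurement with an experimental
# condition. All data here is fabricated for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical: expression level of one transcript across six samples,
# alongside perfusion time (hours) for each sample.
expression = [2.1, 2.4, 3.0, 3.8, 4.1, 4.9]
perfusion_hours = [1, 2, 3, 4, 5, 6]
r = pearson(expression, perfusion_hours)
print(f"Pearson r = {r:.3f}")  # strongly positive, close to 1
```

In day-to-day work you would reach for NumPy/pandas and proper multiple-testing corrections rather than a hand-rolled statistic, but the underlying question (does this measurement track the condition?) is the same.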
Qualifications and Skills:
Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks.
Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered.
Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus.
Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results.
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers.
Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks - whether it's mastering a new analysis method or digging into scientific literature - to push our research forward.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
$75k-105k yearly est. Auto-Apply 60d+ ago
Senior Data Engineer and BI Developer - Contract to Hire
Uszoom
Data engineer job in Montebello, NY
Contract Senior Data Engineer and BI Developer - Contract to Hire
Montebello, New York
iPostal1, the leading provider of Digital Mailbox technology worldwide, is looking for an intelligent and ambitious Senior Data Engineer and BI Developer to join the newly formed data team. Our website, *****************, lists 3,000 addresses, including retail pack-and-ship stores, 1,000 Staples stores, and coworking spaces. Customers choose a mailing address for business or personal use and view and manage their postal mail and packages anywhere with an app or online.
The incumbent will help us build our enterprise data warehouse on Snowflake! Your role will utilize advanced SQL and the Sigma visualization and dashboard-design tool, and you will perform insightful data analysis to meet our corporate growth goals. Our corporate headquarters is in Rockland County, NY, and this person will work in the office as needed.
Responsibilities:
Engineer robust ELT solutions: Design, build, test, and document efficient data pipelines using cutting-edge tools to ensure seamless data integration.
Satisfy business requirements: Leverage your advanced SQL skills to design and implement robust data transformations that directly address business objectives and drive decision-making.
Stay ahead of the curve: Eager to explore and embrace new technologies and algorithms to continuously optimize data solutions.
Deliver results with a user-centric approach: Driven to achieve targets and consistently prioritize user experience in every solution. Meet commitment deadlines.
Qualifications:
8+ years of data expertise: Proven track record in ETL/ELT, data modeling, data warehouse design, development, and population from diverse data sources (databases, unstructured data, APIs).
2+ years with Snowflake: Hands-on experience developing and optimizing within the Snowflake cloud data warehouse environment. Snowpro certification is a plus.
SQL mastery: Advanced-level SQL programming skills with a focus on query and ELT optimization for performance and knowledge of window functions.
Analytical problem solver: Data-driven mindset with sharp analytical skills, comfortable with ambiguity, able to identify and resolve inconsistencies and root causes. Strong business acumen to validate the accuracy of data.
Excellent verbal and written communication skills.
Exceptional grammar, spelling, and attention to detail
Use of ETL/ELT tools such as Fivetran, Matillion, Kettle, Ab Initio, Informatica, SSIS, or similar.
Use of visualization tools such as Tableau, Power BI, Looker (LookML), or Sigma.
Good knowledge of RDBMS like MySQL, Oracle, Postgres, SQL Server
Other Technical know-how: Proficient in software engineering best practices, adept across Unix/Linux/Windows platforms, and experienced with Agile workflows.
Collaboration tools: Basic familiarity with Git for version control and Jira for project tracking.
Education: Proven track record in a technical field, ideally supported by a BS/MS in Computer Science, Information Systems, or similar experience. Strong technical and business skills are key.
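The advanced SQL and window-function knowledge called out above can be demonstrated with a tiny runnable example using Python's bundled sqlite3 module (SQLite 3.25+ supports window functions); the table and data are made up for illustration:

```python
# A per-customer running total: a classic ELT window-function pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, ordered_at TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("alice", "2024-01-05", 20.0),
        ("alice", "2024-02-10", 35.0),
        ("bob", "2024-01-20", 15.0),
    ],
)
rows = conn.execute(
    """
    SELECT customer, ordered_at,
           SUM(total) OVER (
               PARTITION BY customer ORDER BY ordered_at
           ) AS running_total
    FROM orders
    ORDER BY customer, ordered_at
    """
).fetchall()
print(rows)
# [('alice', '2024-01-05', 20.0), ('alice', '2024-02-10', 55.0), ('bob', '2024-01-20', 15.0)]
conn.close()
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` shape carries over directly to Snowflake, where window functions are central to ELT optimization.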
Pluses:
Experience with:
Low-code data extraction tools like Fivetran, Matillion, or Rivery
Semantic layer or data virtualization tools
Data Governance
Agile
Scripting languages like Python, Shell or JavaScript
AWS services (S3, RDS, etc) and networking knowledge
Google Analytics and Facebook Ads data
Data Quality (we use Monte Carlo Data)
Job orchestration tools (Airflow or proprietary)
iPostal1 is an Equal Opportunity Employer and considers all applicants for employment without regard to race, color, religious creed, ancestry, religion, sex, sexual orientation, gender identity and/or expression, pregnancy, age, national origin, marital status, disability, military status, genetic information, or any other category protected by law.
$90k-123k yearly est. 46d ago
ETL/Data Platform Engineer
Clarapath
Data engineer job in Hawthorne, NY
JOB TITLE: ETL/Data Platform Engineer
TYPE: Full time, regular
COMPENSATION: $130,000 - $180,000/yr
Clarapath is a medical robotics company based in Westchester County, NY. Our mission is to transform and modernize laboratory workflows with the goal of improving patient care, decreasing costs, and enhancing the quality and consistency of laboratory processes. SectionStar by Clarapath is a ground-breaking electro-mechanical system designed to elevate and automate the workflow in histology laboratories and provide pathologists with the tissue samples they need to make the most accurate diagnoses. Through the use of innovative technology, data, and precision analytics, Clarapath is paving the way for a new era of laboratory medicine.
Role Summary:
The ETL/Data Platform Engineer will play a key role in designing, building, and maintaining Clarapath's data pipelines and platform infrastructure supporting SectionStar, our advanced electro-mechanical device. This role requires a strong foundation in data engineering, including ETL/ELT development, data modeling, and scalable data platform design. Working closely with cross-functional teams, including software, firmware, systems, and mechanical engineering, this individual will enable reliable ingestion, transformation, and storage of device and operational data. The engineer will help power analytics, system monitoring, diagnostics, and long-term insights that support product performance, quality, and continuous improvement. We are seeking a proactive, detail-oriented engineer who thrives in a fast-paced, rapidly growing environment and is excited to apply data engineering best practices to complex, data-driven challenges in a regulated medical technology setting.
Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines for device, telemetry, and operational data
Build and optimize data models to support analytics, reporting, and system insights
Develop and maintain scalable data platform infrastructure (cloud and/or on-prem)
Ensure data quality, reliability, observability, and performance across pipelines
Support real-time or near real-time data ingestion where applicable
Collaborate with firmware and software teams to integrate device-generated data
Enable dashboards, analytics, and internal tools for engineering, quality, and operations teams
Implement best practices for data security, access control, and compliance
Troubleshoot pipeline failures and improve system resilience
Document data workflows, schemas, and platform architecture
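To give a concrete, purely illustrative flavor of the pipeline and data-quality work described above, here is a minimal Python sketch of a validate-then-transform step. The record fields and checks are hypothetical, not Clarapath's actual telemetry schema:

```python
from dataclasses import dataclass

@dataclass
class TelemetryRecord:
    device_id: str
    metric: str
    value: float

def validate(record: TelemetryRecord) -> bool:
    # Basic quality checks: non-empty identifiers and a non-NaN reading
    # (a NaN is the only float that is not equal to itself).
    return bool(record.device_id) and bool(record.metric) and record.value == record.value

def transform(raw_rows):
    """Extract-transform step: parse raw tuples, keep only rows that pass validation."""
    records = [TelemetryRecord(*row) for row in raw_rows]
    return [r for r in records if validate(r)]

rows = [("dev-1", "blade_temp_c", 21.5), ("", "blade_temp_c", 19.0)]
print(len(transform(rows)))  # 1: the row with an empty device_id is dropped
```

In a real pipeline these checks would typically live in a dedicated validation layer with metrics on rejection rates, rather than inline filtering.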
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
3+ years of experience in data engineering, ETL development, or data platform roles
Strong proficiency in SQL and at least one programming language (Python preferred)
Experience building and maintaining ETL/ELT pipelines
Familiarity with data modeling concepts and schema design
Experience with cloud platforms (AWS, GCP, or Azure) or hybrid environments
Understanding of data reliability, monitoring, and pipeline orchestration
Strong problem-solving skills and attention to detail
Experience with streaming data or message-based systems (ex: Kafka, MQTT), a plus
Experience working with IoT, device, or telemetry data, a plus
Familiarity with data warehouses and analytics platforms, a plus
Experience in regulated environments (medical device, healthcare, life sciences), a plus
Exposure to DevOps practices, CI/CD, or infrastructure-as-code, a plus
Company Offers:
Competitive salary, commensurate with experience and education
Comprehensive benefits package: healthcare, vision, dental, and life insurance; 401k; PTO and holidays
A collaborative and diverse work environment where our teams thrive on solving complex challenges
Ability to file IP with the company
Connections with world class researchers and their laboratories
Collaboration with strategic leaders in the healthcare and pharmaceutical world
A mission-driven organization where every team member is responsible for changing the standards of delivering healthcare
Clarapath is proud to be an equal opportunity employer. We are committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. In addition to federal law requirements, Clarapath complies with applicable state and local laws governing nondiscrimination in employment. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
$130k-$180k yearly
Senior Market Data Engineer
WorldQuant
Data engineer job in Old Greenwich, CT
WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies - the foundation of a balanced, global investment platform.
WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement.
Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate an outstanding talent. There is no roadmap to future success, so we need people who can help us build it.
Technologists at WorldQuant research, design, code, test and deploy firmwide platforms and tooling while working collaboratively with researchers and portfolio managers. Our environment is relaxed yet intellectually driven. We seek people who think in code and are motivated by being around like-minded people.
The Role:
* Design and build real-time market data processing systems covering global markets and multiple asset classes
* Architect and implement high-performance software solutions for processing market data feeds at scale
* Drive technical innovation by leveraging emerging technologies to enhance system telemetry, monitoring, and operational efficiency
* Provide technical leadership and escalation support for production market data systems
* Analyze system performance and design data-driven approaches to optimize market data processing workflows
* Lead the design of data governance systems for tracking availability, access patterns, and usage metrics
What You Will Bring:
* Degree in a quantitative or technical discipline from a top university with strong academic scores
* Expert-level C++ proficiency with demonstrated experience in other object-oriented languages (Java, C#)
* Experience with scripting languages such as Perl, Python, and shell scripting for automation and data processing
* Deep experience with tick-by-tick market data processing, including data normalization, feed handling, and real-time analytics
* Excellent communication skills with ability to collaborate effectively across technical and business teams
* Experience working in a Linux environment
Our Benefits:
* Core Benefits: Fully paid medical and dental insurance for employees and dependents, flexible spending account, 401k, fully paid parental leave, generous PTO (paid time off) that consists of:
* twenty vacation days that are pro-rated based on the employee's start date, at an accrual of 1.67 days per month,
* three personal days, and
* ten sick days.
* Perks: Employee discounts for gym memberships, wellness activities, healthy snacks, casual dress code
* Training: learning and development courses, speakers, team-building off-site
* Employee resource groups
Pay Transparency:
WorldQuant is a total compensation organization where you will be eligible for a base salary, discretionary performance bonus, and benefits.
To provide greater transparency to candidates, we share base pay ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on job function and level, benchmarked against similar stage organizations. When finalizing an offer, we will take into consideration an individual's experience level and the qualifications they bring to the role to formulate a competitive total compensation package.
The Base Pay Range For This Position Is $175,000 - $250,000 USD.
At WorldQuant, we are committed to providing candidates with all necessary information in compliance with pay transparency laws. If you believe any required details are missing from this job posting, please notify us at [email protected], and we will address your concerns promptly.
By submitting this application, you acknowledge and consent to terms of the WorldQuant Privacy Policy. The privacy policy offers an explanation of how and why your data will be collected, how it will be used and disclosed, how it will be retained and secured, and what legal rights are associated with that data (including the rights of access, correction, and deletion). The policy also describes legal and contractual limitations on these rights. The specific rights and obligations of individuals living and working in different areas may vary by jurisdiction.
Copyright 2025 WorldQuant, LLC. All Rights Reserved.
WorldQuant is an equal opportunity employer and does not discriminate in hiring on the basis of race, color, creed, religion, sex, sexual orientation or preference, age, marital status, citizenship, national origin, disability, military status, genetic predisposition or carrier status, or any other protected characteristic as established by applicable law.
$175k-$250k yearly
Tech Lead, Data & Inference Engineer
Catalyst Labs
Data engineer job in Stamford, CT
Our Client
A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised $12 million in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google, and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend, and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above-market base + bonus + equity
Roles & Responsibilities
Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party APIs into trusted, low-latency systems.
Take full ownership of reliability, cost, and service-level objectives. This includes achieving 99.9% uptime, maintaining minutes-level latency, and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions.
Operate inference pipelines that enhance and enrich data, including enrichment, scoring, and quality assurance using large language models and retrieval-augmented generation. Manage version control, caching, and evaluation loops.
Work across teams to deliver data as a product through clear data contracts, ownership models, lifecycle processes, and usage-based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade-offs, and reversibility while making practical build-versus-buy decisions.
Scale integration with APIs and internal services while ensuring data consistency, high data quality, and support for both real-time and batch-oriented use cases.
Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization.
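The "data as a product" and data-contract responsibilities above can be sketched in a few lines of Python. The contract fields below are invented for illustration; production systems would more likely use a schema registry or a library such as Pydantic:

```python
# A declared data contract: field name -> required Python type.
CONTRACT = {"account_id": str, "match_score": float, "channel": str}

def conforms(row: dict, contract: dict = CONTRACT) -> bool:
    """Contract check: every declared field is present with the declared type."""
    return all(isinstance(row.get(field), ftype) for field, ftype in contract.items())

good = {"account_id": "a-42", "match_score": 0.87, "channel": "meta"}
bad = {"account_id": "a-43", "match_score": "high"}  # wrong type, missing field
print(conforms(good), conforms(bad))  # True False
```

The point of a contract is that producers and consumers agree on this shape explicitly, so a schema change becomes a negotiated, versioned event rather than a silent pipeline breakage.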
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: the ability to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
$84k-$114k yearly est.
C++ Market Data Engineer (USA)
Trexquant Investment
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
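As an illustration of the gap-recovery responsibility above: a feed handler typically tracks sequence numbers per channel and flags missing ranges for a replay or retransmission request. A simplified sketch follows (in Python rather than C++, purely for brevity; real handlers do this per multicast channel with wraparound handling):

```python
def detect_gaps(seq_numbers):
    """Scan a stream of message sequence numbers and report any missing
    (inclusive) ranges that a recovery/replay mechanism would need to fill."""
    if not seq_numbers:
        return []
    gaps = []
    expected = seq_numbers[0]
    for seq in seq_numbers:
        if seq > expected:
            gaps.append((expected, seq - 1))  # inclusive missing range
        expected = seq + 1
    return gaps

print(detect_gaps([1, 2, 3, 7, 8, 10]))  # [(4, 6), (9, 9)]
```

Detected gaps would then drive a retransmission request to the venue's recovery feed, or a snapshot rebuild if the gap is too large to replay.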
Requirements
BS/MS/PhD in Computer Science, Electrical Engineering, or related field.
3+ years of professional C++ (C++14/17/20) development experience focused on low-latency, high-throughput systems.
Proven track record building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and time-series databases or in-memory caches.
Comfort with scripting in Python for prototyping, testing, and ops automation.
Excellent problem-solving skills, ownership mindset, and ability to thrive in a fast-paced trading environment.
Familiarity with containerization (Docker/K8s) and public-cloud networking (AWS, GCP).
Benefits
Competitive salary, plus bonus based on individual and company performance.
Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
PPO Health, dental and vision insurance premiums fully covered for you and your dependents.
Pre-Tax Commuter Benefits
Applications are now open for our NYC office, opening in September 2026.
The base salary range is $175,000 - $200,000 depending on the candidate's educational and professional background. Base salary is one component of Trexquant's total compensation, which may also include a discretionary, performance-based bonus.
Trexquant is an Equal Opportunity Employer
$175k-$200k yearly
Onsite Data Engineer Miami
It Search Corp
Data engineer job in Norwood, NJ
Benefits:
401(k) matching
Bonus based on performance
Dental insurance
Health insurance
Data Engineer, Onsite Miami, FL. $130-150K base, 10% bonus. We are seeking an experienced Senior Data Engineer to lead the design, development, and optimization of end-to-end data pipelines and cloud-based solutions. You will be responsible for architecting scalable data and analytics systems, ensuring data integrity, and implementing software engineering best practices and patterns. The ideal candidate has a strong background in ETL, big data technologies, and cloud services, with a proven ability to drive complex projects from concept to production.
Primary Responsibilities/Essential Functions
This job description in no way states or implies that these are the only duties to be performed by the teammate occupying this position. The selected candidate may perform other related duties assigned to meet the ongoing needs of the business.
Qualifications
Required:
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
5+ years of experience as a Data Engineer with expertise in building large-scale data solutions.
Proficiency in Python, SQL, and scripting languages (Bash, PowerShell).
Deep understanding of big data tools (Hadoop, Spark) and ETL processes.
Hands-on experience with cloud platforms (AWS S3, Azure Data Lake, Google BigQuery, Snowflake).
Strong knowledge of database systems (SQL, NoSQL), database design, and query optimization.
Experience designing and managing data warehouses for performance and scalability.
Proficiency in software engineering practices: version control (Git), CI/CD pipelines, and unit testing.
Preferred
Strong experience in software architecture, design patterns, and code optimization.
Expertise in Python-based pipelines and ETL frameworks.
Experience with Azure Data Services and Databricks.
Excellent problem-solving, analytical, and communication skills.
Experience working in agile environments and collaborating with diverse teams.
$82k-$112k yearly est.
Network Planning Data Scientist (Manager)
Atlas Air Worldwide Holdings
Data engineer job in White Plains, NY
Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes.
This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency.
Responsibilities
Analyze and Monitor Network Performance
Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements.
Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness.
Modeling & Optimization
Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions.
Support the design, testing, and implementation of software optimization tools to enhance operational decision-making.
Reporting & Governance
Develop periodic performance and reliability reports for customers, assisting in presentation creation.
Produce regular and ad hoc reports to monitor compliance with established operating parameters.
Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets.
Cross-Functional Collaboration
Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution.
Provide insights that inform schedule adjustments, fleet utilization, and contingency planning.
Innovation & Continuous Improvement
Identify opportunities to streamline workflows and automate recurring analyses.
Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility.
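As a toy example of the kind of operational KPI this role would develop and monitor, consider an on-time departure rate. The 15-minute tolerance and the input shape (a list of delay minutes) are assumptions for illustration, not Atlas Air's actual definitions:

```python
def on_time_rate(departure_delays_min, tolerance_min=15):
    """KPI: share of flights departing within a tolerance of schedule.
    `departure_delays_min` is a list of delay values in minutes."""
    on_time = sum(1 for delay in departure_delays_min if delay <= tolerance_min)
    return on_time / len(departure_delays_min)

print(on_time_rate([0, 5, 22, 40, 3]))  # 3 of 5 flights on time -> 0.6
```

In practice such a KPI would be computed per route, fleet type, and planning window, and tracked against reliability targets in the dashboards mentioned in the qualifications.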
Qualifications
Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools and reporting dashboards (e.g., Tableau, Power BI)
Bachelor's degree required in Industrial Engineering, Operations Research, Applied Mathematics, Data Science or related quantitative discipline or equivalent work experience.
5+ years of experience in strategy, operations planning, finance or continuous improvement, ideally with airline network planning
Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation.
Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment.
Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders.
A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement.
Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations
Salary Range: $131,500 - $177,500
Financial offer within the stated range will be based on multiple factors to include but not limited to location, relevant experience/level and skillset.
The Company is an Equal Opportunity Employer. It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity or any other characteristic or status protected by applicable federal, state, and local laws.
If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ******************************************
To view our Pay Transparency Statement, please click here: Pay Transparency Statement
“Know Your Rights: Workplace Discrimination is Illegal” Poster
The "EEO Is The Law" Poster
$131.5k-$177.5k yearly
Slalom Flex (Project Based)- Microsoft Power Platform Data Architect
Slalom
Data engineer job in White Plains, NY
Data Architect / Microsoft Power Platform Architect (6-Month Project)
Duration: 6 months (with potential extension)
Start: Immediate
Employment Type: W2
About Us
Slalom is a purpose-led, global business and technology consulting company. From strategy to implementation, our approach is fiercely human. In six+ countries and 43+ markets, we deeply understand our customers, and their customers, to deliver practical, end-to-end solutions that drive meaningful impact. Backed by close partnerships with over 400 leading technology providers, our 10,000+ strong team helps people and organizations dream bigger, move faster, and build better tomorrows for all. We're honored to be consistently recognized as a great place to work, including being one of Fortune's 100 Best Companies to Work For seven years running. Learn more at slalom.com.
Overview
We are seeking a Microsoft Power Platform Architect with strong Data Architecture expertise to support an in‑flight initiative focused on building a pricing application. This role will provide immediate capacity to an existing team of six platform developers, performing a full architectural review of the current solution and optimizing an established design spanning Power Platform (front end) and Databricks/Snowflake (backend).
The ideal candidate brings hands-on experience across Power Apps, Power Automate, and modern data engineering, and is comfortable stepping into a partially built solution to strengthen quality, stability, and performance.
Key Responsibilities
Architectural & Technical Leadership
* Perform a full end‑to‑end architectural review of the existing pricing application across Power Platform, Databricks, and Snowflake.
* Optimize the already-defined architecture, identifying gaps, constraints, and opportunities for improvement.
* Provide expert guidance on best practices for Power Platform, data modeling, integration design, and automation patterns.
Development & Optimization
* Enhance and optimize Power Apps, Power Automate flows, and integrations with Databricks and Snowflake.
* Implement improvements to ensure scalability, reliability, and performance of the pricing solution.
* Review and refine backend data pipelines, SQL logic, and data transformations as needed.
Quality Engineering & Delivery Support
* Build and execute end‑to‑end testing strategies, including functional, integration, and regression testing.
* Perform bug resolution and troubleshoot issues across both front-end and back-end layers.
* Collaborate with the existing team of six developers to ensure smooth delivery and alignment with solution architecture.
Cross‑Functional Collaboration
* Work closely with product owners, developers, and data engineers to validate technical decisions.
* Communicate architectural recommendations clearly to both technical and business stakeholders.
* Support Agile ceremonies and sprint planning as needed.
Required Skills & Experience
* Power Automate (expert level)
* Hands-on experience building and optimizing Power Apps
* Experience managing and troubleshooting Power Platform flows
* Strong understanding of Databricks and Snowflake data architecture
* Ability to conduct thorough architectural reviews and produce actionable recommendations
* Strong SQL skills for debugging, optimization, and backend analysis
* Experience working on mid‑to‑large scale applications within the Power Platform ecosystem
Preferred / Nice‑to‑Have Skills
* Experience building custom connectors or plug-ins in C#
* Agile delivery experience
* Power BI (report development or model optimization)
* Background in pricing, financial modeling, or workflow-driven applications
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses.
Slalom is committed to fair and equitable compensation practices. For this position, the base salary pay range is $80/hr to $100/hr. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration
for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements.
Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the
selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
$80 hourly
Applications Programmer Analyst
Provision People
Data engineer job in Poughkeepsie, NY
Our award-winning client is seeking an Applications Programmer Analyst to join their team: a passionate software developer who will leverage their expertise to solve key challenges and contribute to the company's mission.
Responsibilities:
Shape Tomorrow's Technology:
Partner with departments to identify and implement cutting-edge solutions (web, n-tier, advanced reporting, mobile, collaboration) that address their unique needs.
Design, develop, enhance, and test these innovative applications, ensuring quality and business continuity.
Collaborate with external vendors and resources to seamlessly integrate solutions.
Become a User Champion:
Analyze business processes and provide ongoing support for systems and applications, maximizing business impact.
Translate data into actionable insights, supporting informed decision-making across the organization.
Champion continuous improvement by researching emerging technologies and proposing innovative solutions.
Deliver engaging training and guidance on new and existing technology solutions.
Foster Collaboration and Success:
Proactively engage with stakeholders to understand their technology needs and develop impactful solutions.
Communicate project progress transparently and effectively throughout the development cycle.
Present various solution options, including cost and time estimations, to empower stakeholders' decision-making.
Required Qualifications:
A Bachelor's degree in Information Systems (IS) or a 2-year degree in IS with equivalent experience.
At least 4 years of programming and systems analysis experience across all solution development phases.
Expertise in: N-Tier development, CSS, HTML, PHP, JavaScript/TypeScript, NodeJS, C#, SQL, JSON, XML (REST protocol), MS SQL Server, Azure, MS Access, SharePoint, SSIS data integration, MS Visual Studio, SQL Server Mgmt Studio, MS Power platform (PowerBI, PowerApps, Dataverse), Git version control.
Exceptional communication, problem-solving, and analytical skills.
How much does a data engineer earn in Danbury, CT?
The average data engineer in Danbury, CT earns between $73,000 and $131,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Danbury, CT
$98,000
What are the biggest employers of Data Engineers in Danbury, CT?
The biggest employers of Data Engineers in Danbury, CT are: