Senior Data Architect - Power & Utilities AI Platforms
Ernst & Young Oman 4.7
Data engineer job in Stamford, CT
A leading global consulting firm is seeking a Senior Manager in Data Architecture for the Power & Utilities sector. This role requires at least 12 years of consulting experience and expertise in data architecture and engineering. The successful candidate will manage technology projects, lead teams, and develop innovative data solutions that drive significant business outcomes. Strong relationship management and communication skills are essential for engaging with clients and stakeholders. Join us to help shape a better working world.
$112k-156k yearly est. 2d ago
Data Scientist
Gartner 4.7
Data engineer job in Stamford, CT
About this role
In Gartner's Services Data Science team, we innovate how we help clients realize value, enabling technology leaders to make smarter decisions in new ways.
We are searching for a talented data scientist to join our team. You will have access to the best facilities, technology and expertise within the industry and will work on challenging business problems. This is an excellent opportunity to be part of a new venture, in a start-up environment where you can truly develop your skill set and knowledge and bring impact to the team.
What you'll do
* Design and implement state-of-the-art Large Language Model (LLM) based agents that seamlessly synthesize complex information and initiate important actions in a business workflow.
* Use advanced Generative AI techniques to derive actionable insights from unstructured text data, such as call transcripts and emails.
* Predict client interest based on their digital footprint and make relevant recommendations that drive higher client value delivery.
* Leverage statistical and machine learning techniques to extract actionable insights from client retention data.
* Develop customer churn prediction models that proactively identify at-risk clients (a brief modeling sketch follows this list).
* Build tools to process structured and unstructured data.
* Engineer features and signals from diverse data sources to train ML models.
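As an illustration of the churn-modeling work mentioned above, here is a minimal sketch of a customer churn classifier in Python with scikit-learn. The file and feature names are hypothetical placeholders, not Gartner data or tooling.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical client-engagement table; column names are placeholders.
df = pd.read_csv("client_engagement.csv")
features = ["logins_last_90d", "inquiries_last_90d", "tenure_months", "nps_score"]
X, y = df[features], df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Gradient boosting is a common baseline for tabular churn data.
model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Rank clients by churn risk so a retention team can prioritize outreach.
risk = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, risk))
```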
What you'll need
* BS required, MS preferred, in Computer Science or another technology field, Math, Physics, Statistics, or Economics (a focus on Natural Language Processing or Information Retrieval is a plus)
* 4 years' experience applying data science methodologies to live initiatives or software development; experience working on Gen AI projects
* Minimum of 4 years of experience in Python coding and statistical analysis
* Minimum of 2 years of working experience in several of the following:
* Prompt Engineering and working with LLMs
* Machine Learning and statistical techniques
* Data mining and recommendation systems
* Natural Language Processing and Information Retrieval
* Experience working with large volumes of data
* User behavior modeling
Who you are
* A team player. You get along well with your colleagues and are always ready to help get things done. You enjoy working on projects with multiple people and share knowledge.
* Passionate about learning. You thrive on complex technical challenges and are always eager to learn the latest technologies.
* Organized and detail-oriented. You think ahead about how best to implement new features, and your code is clean, well-organized, and properly documented.
* Innovative. You proactively look for opportunities to problem-solve using innovative methods that impact the business.
What we offer
* A collaborative, positive culture. You'll work with people who are as enthusiastic, smart and driven as you are. You'll be managed by the best too.
* Limitless growth and learning opportunities. We offer the excitement of a fast-paced entrepreneurial workplace and the professional growth opportunities of an established global organization.
About Gartner:
Gartner, Inc. (NYSE: IT) is the world's leading information technology research and advisory company. We deliver the technology-related insight necessary for our clients to make the right decisions, every day. We work with every client to research, analyze and interpret the business of IT within the context of their individual role. Founded in 1979, Gartner is headquartered in Stamford, Connecticut, U.S.A - Visit gartner.com to learn more.
Diversity, inclusion and engagement at Gartner:
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, caste, creed, religion, sex, sexual orientation, gender identity or expression, marital status, citizenship status, age, national origin, ancestry, disability, or any other characteristic protected by applicable law. Gartner affirmatively seeks to advance the principles of equal employment opportunity and values diversity and inclusion.
Gartner is an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified applicant with a disability and unable to or limited in your ability to use or access the Gartner's career webpage as a result of your disability, you may request reasonable accommodations by calling Human Resources at or by sending an email to
#LI-Hybrid
#LI-GV1
Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world.
Our mission relies on expert analysis and bold ideas to deliver actionable, objective business and technology insights, helping enterprise leaders and their teams succeed with their mission-critical priorities.
Since our founding in 1979, we've grown to 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.
What makes Gartner a great place to work?
Our vast, virtually untapped market potential offers limitless opportunities - opportunities that may not even exist right now - for you to grow professionally and flourish personally. How far you go is driven by your passion and performance.
We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients.
Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations.
We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.
What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers.
In our hybrid work environment, we provide the flexibility and support for you to thrive - working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring.
Ready to grow your career with Gartner? Join us.
Gartner believes in fair and equitable pay. A reasonable estimate of the base salary range for this role is 98,000 USD - 133,000 USD. Please note that actual salaries may vary within the range, or be above or below the range, based on factors including, but not limited to, education, training, experience, professional achievement, business need, and location. In addition to base salary, employees will participate in either an annual bonus plan based on company and individual performance, or a role-based, uncapped sales incentive plan. Our talent acquisition team will provide the specific opportunity on our bonus or incentive programs to eligible candidates. We also offer market leading benefit programs including generous PTO, a 401k match up to $7,200 per year, the opportunity to purchase company stock at a discount, and more.
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity.
Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at or by sending an email .
Job Requisition ID:106172
By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: applicant-privacy-policy
For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
$78k-103k yearly est. 4d ago
Data Architect
Novocure Inc. 4.6
Data engineer job in New Haven, CT
We are seeking an experienced and innovative Data Architect to lead the design, development, and optimization of our enterprise data architecture. This individual will play a critical role in aligning data strategy with business objectives, ensuring data integrity, and driving value from data across multiple platforms. The ideal candidate will have deep expertise in data architecture best practices and technologies, particularly across SAP S/4 HANA, Veeva CRM, Veeva Vault, SaaS platforms, Operational Data Stores (ODS), and Master Data Management (MDM) platforms.
This is a full-time position reporting to the Director, Enterprise Architecture.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
Design, develop, and maintain scalable and secure enterprise data architecture solutions across SAP S/4 HANA, Veeva CRM, and Veeva Vault environments.
Serve as a subject matter expert for Operational Data Stores and Master Data Management architecture, ensuring clean, consistent, and governed data across the enterprise.
Collaborate with cross-functional teams to identify data needs, establish data governance frameworks, and define data integration strategies.
Develop data models, data flows, and system integration patterns that support enterprise analytics and reporting needs.
Evaluate and recommend new tools, platforms, and methodologies for improving data management capabilities.
Ensure architectural alignment with data privacy, regulatory, and security standards.
Provide leadership and mentoring to data engineering and analytics teams on best practices in data modeling, metadata management, and data lifecycle management.
Contribute to data governance initiatives by enforcing standards, policies, and procedures for enterprise data.
QUALIFICATIONS/KNOWLEDGE:
Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
Minimum of 8 years of experience in data architecture, data integration, or enterprise data management roles.
Proven experience in designing and implementing data solutions on SAP S/4 HANA, including integration with other enterprise systems.
Strong hands-on experience with SaaS platforms, including data extraction, modeling, and harmonization.
Deep understanding of Operational Data Stores and MDM design patterns, implementation, and governance practices.
Proficiency in data modeling tools (e.g., Erwin, SAP PowerDesigner), ETL tools (e.g., Business Objects Data Services, SAP Data Services), and integration platforms (e.g., MuleSoft).
Familiarity with cloud data architecture (e.g., AWS, Azure, GCP) and hybrid data environments.
Excellent communication and stakeholder management skills.
OTHER:
Experience with pharmaceutical, life sciences, or regulated industry environments.
Knowledge of data privacy regulations, such as GDPR and HIPAA, and data compliance frameworks.
Ability to travel domestically and internationally as needed for high-priority projects.
Novocure is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state, or local law. We actively seek qualified candidates who are protected veteran and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Novocure is committed to providing an interview process that is inclusive of our applicant's needs. If you are an individual with a disability and would like to request an accommodation, please email
ABOUT NOVOCURE:
Our vision
Patient-forward: aspiring to make a difference in cancer.
Our patient-forward mission
Together with our patients, we strive to extend survival in some of the most aggressive forms of cancer by developing and commercializing our innovative therapy.
Our patient-forward values
innovation
focus
drive
courage
trust
empathy
#LI-RJ1
$93k-125k yearly est. 2d ago
Staff Data Scientist
Recursion 4.2
Data engineer job in Saltaire, NY
Your work will change lives. Including your own.
The Impact You'll Make
As a member of Recursion's AI-driven drug discovery initiatives, you will be at the forefront of reimagining how biological knowledge is generated, stored, accessed, and reasoned upon by LLMs. You will play a key role in developing the biological reasoning infrastructure, connecting large-scale data and codebases with dynamic, agent-driven AI systems.
You will be responsible for defining the architecture that grounds our agents in biological truth. This involves integrating biomedical resources to enable AI systems to reason effectively and selecting the most appropriate data retrieval strategies to support those insights.
This is a highly collaborative role: you will partner with machine learning engineers, biologists, chemists, and platform teams to build the connective tissue that allows our AI agents to reason like a scientist. The ideal candidate possesses deep expertise in both core bioinformatics/cheminformatics libraries and modern GenAI frameworks (including RAG and MCP), a strong architectural vision, and the ability to translate high-potential prototypes into scalable production workflows.
In this role, you will:
Architect and maintain robust infrastructure to keep critical internal and external biological resources (e.g., ChEMBL, Ensembl, Reactome, proprietary assays) up-to-date and accessible to reasoning agents.
Design sophisticated context retrieval strategies, choosing the most effective approach for each biological use case, whether working with structured, entity-focused data, unstructured RAG, or graph-based representations.
Integrate established bioinformatics/cheminformatics libraries into a GenAI ecosystem, creating interfaces (such as via MCP) that allow agents to autonomously query and manipulate biological data (see the sketch after this list).
Pilot methods for tool use by LLMs, enabling the system to perform complex tasks like pathway analysis on the fly rather than relying solely on memorized weights.
Develop scalable, production-grade systems that serve as the backbone for Recursion's automated scientific reasoning capabilities.
Collaborate cross-functionally with Recursion's core biology, chemistry, data science and engineering teams to ensure our biological data and the reasoning engines are accurately reflecting the complexity of disease biology and drug discovery.
Present technical trade-offs (e.g., graph vs. vector) to leadership and stakeholders in a clear, compelling way that aligns technical reality with product vision.
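As a concrete illustration of wrapping a cheminformatics library as an agent-callable tool (the MCP-style interfaces mentioned above), here is a minimal Python sketch using RDKit. The tool registry and schema are simplified, hypothetical placeholders, not Recursion's actual interface.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def molecular_properties(smiles: str) -> dict:
    """Return basic physicochemical descriptors for a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return {"error": f"could not parse SMILES: {smiles}"}
    return {
        "molecular_weight": Descriptors.MolWt(mol),
        "logp": Descriptors.MolLogP(mol),
        "h_bond_donors": Descriptors.NumHDonors(mol),
        "h_bond_acceptors": Descriptors.NumHAcceptors(mol),
    }

# Toy tool registry: an agent framework would expose this schema (e.g. over MCP)
# so an LLM can call the function instead of relying on memorized weights.
TOOLS = {
    "molecular_properties": {
        "description": "Compute physicochemical descriptors from a SMILES string.",
        "parameters": {"smiles": "str"},
        "handler": molecular_properties,
    }
}

if __name__ == "__main__":
    # Example call with aspirin's SMILES.
    print(TOOLS["molecular_properties"]["handler"]("CC(=O)Oc1ccccc1C(=O)O"))
```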
The Team You'll Join
You'll join a bold, agile team of scientists and engineers dedicated to building comprehensive biological maps by integrating Recursion's in-house datasets, patient data, and external knowledge layers to enable sophisticated agent-based reasoning. Within this cross-functional team, you will design and maintain the biological context and data structures that allow agents to reason accurately and efficiently. You'll collaborate closely with wet-lab biologists and core platform engineers to develop systems that are not only technically robust but also scientifically rigorous. The ideal candidate is curious about emerging AI technologies, passionate about making biological data both machine-readable and machine-understandable, and brings a strong foundation in systems biology, biomedical data analysis, and agentic AI systems.
The Experience You'll Need
PhD in a relevant field (Bioinformatics, Cheminformatics, Computational Biology, Computer Science, Systems Biology) with 5+ years of industry experience, or MS in a relevant field with 7+ years of experience, focusing on biological data representation and retrieval.
Proficiency in utilizing major public biological databases (NCBI, Ensembl, STRING, GO) and using standard bioinformatics/cheminformatics toolkits (e.g., RDKit, samtools, Biopython).
Strong skills in designing and maintaining automated data pipelines that support continuous ingestion, transformation, and refresh of biological data without manual intervention.
Ability to work with knowledge graph data models and query languages (e.g., RDF, SPARQL, OWL) and to translate graph-structured data into relational or other non-graph representations, with strong judgment in evaluating trade-offs between different approaches.
Competence in building and operating GenAI stacks, including RAG systems, vector databases, and optimization of context windows for large-scale LLM deployments.
Hands-on expertise with agentic AI frameworks (e.g., MCP, Google ADK, LangChain, AutoGPT) and familiarity with leading LLMs (e.g., Google Gemini/Gemma) in agentic workflows, including benchmarking and evaluating agent performance on bioinformatics/cheminformatics tasks such as structure prediction, target identification, and pathway mapping.
Strong Python skills and adherence to software engineering best practices, including CI/CD, Git-based version control, and modular design.
Excellent cross-functional communication skills, with the ability to clearly explain complex architectural decisions to both scientific domain experts and technical stakeholders.
Nice to Have
Strong background in machine learning and deep learning, including hands-on experience with foundation models and modern neural architectures.
Fine-tuning LLMs on scientific corpora for domain-specific reasoning.
Integrating LLMs with experimental or proprietary assay data in live scientific workflows.
Background in drug discovery and target identification.
Meaningful contributions to open-source libraries, research codebases, or community-driven tools.
Working Location & Compensation:
This is an office-based, hybrid role in either our Salt Lake City, UT or New York City, NY offices. Employees are expected to work in the office at least 50% of the time.
At Recursion, we believe that every employee should be compensated fairly. Based on the skill and level of experience required for this role, the estimated current annual base range for this role is $200,600 - $238,400. You will also be eligible for an annual bonus and equity compensation, as well as a comprehensive benefits package.
#LI-DNI
The Values We Hope You Share:
We act boldly with integrity. We are unconstrained in our thinking, take calculated risks, and push boundaries, but never at the expense of ethics, science, or trust.
We care deeply and engage directly. Caring means holding a deep sense of responsibility and respect - showing up, speaking honestly, and taking action.
We learn actively and adapt rapidly. Progress comes from doing. We experiment, test, and refine, embracing iteration over perfection.
We move with urgency because patients are waiting. Speed isn't about rushing but about moving the needle every day.
We take ownership and accountability. Through ownership and accountability, we enable trust and autonomy-leaders take accountability for decisive action, and teams own outcomes together.
We are One Recursion. True cross-functional collaboration is about trust, clarity, humility, and impact. Through sharing, we can be greater than the sum of our individual capabilities.
Our values underpin the employee experience at Recursion. They are the character and personality of the company demonstrated through how we communicate, support one another, spend our time, make decisions, and celebrate collectively.
More About Recursion
Recursion (NASDAQ: RXRX) is a clinical stage TechBio company leading the space by decoding biology to radically improve lives. Enabling its mission is the Recursion OS, a platform built across diverse technologies that continuously generate one of the world's largest proprietary biological and chemical datasets. Recursion leverages sophisticated machine-learning algorithms to distill from its dataset a collection of trillions of searchable relationships across biology and chemistry unconstrained by human bias. By commanding massive experimental scale - up to millions of wet lab experiments weekly - and massive computational scale - owning and operating one of the most powerful supercomputers in the world, Recursion is uniting technology, biology and chemistry to advance the future of medicine.
Recursion is headquartered in Salt Lake City, where it is a founding member of BioHive, the Utah life sciences industry collective. Recursion also has offices in Toronto, Montréal, New York, London, Oxford area, and the San Francisco Bay area. Learn more at ****************** or connect on X (formerly Twitter) and LinkedIn.
Recursion is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other characteristic protected under applicable federal, state, local, or provincial human rights legislation.
Accommodations are available on request for candidates taking part in all aspects of the selection process.
Recruitment & Staffing Agencies: Recursion Pharmaceuticals and its affiliate companies do not accept resumes from any source other than candidates. The submission of resumes by recruitment or staffing agencies to Recursion or its employees is strictly prohibited unless contacted directly by Recursion's internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Recursion, and Recursion will not owe any referral or other fees. Our team will communicate directly with candidates who are not represented by an agent or intermediary unless otherwise agreed to prior to interviewing for the job.
$200.6k-238.4k yearly Auto-Apply 29d ago
Data Scientist - Analytics
Boxncase
Data engineer job in Commack, NY
About the Role
We believe that the best decisions are backed by data. We are seeking a curious and analytical Data Scientist to champion our data-driven culture.
In this role, you will act as a bridge between technical data and business strategy. You will mine massive datasets, build predictive models, and-most importantly-tell the story behind the numbers to help our leadership team make smarter choices. You are perfect for this role if you are as comfortable with SQL queries as you are with slide decks.
What You Will Do
Exploratory Analysis: Dive deep into raw data to discover trends, patterns, and anomalies that others miss.
Predictive Modeling: Build and test statistical models (Regression, Time-series, Clustering) to forecast business outcomes and customer behavior.
Data Visualization: Create clear, impactful dashboards using Tableau, PowerBI, or Python libraries (Matplotlib/Seaborn) to visualize success metrics.
Experimentation: Design and analyze A/B tests to optimize product features and marketing campaigns (see the sketch after this list).
Data Cleaning: Work with Data Engineers to clean and structure messy data for analysis.
Strategy: Present findings to stakeholders, translating complex math into clear, actionable business recommendations.
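To illustrate the A/B-test analysis called out above, here is a minimal Python sketch of a two-variant conversion test evaluated with a chi-square test; the counts are made-up placeholders, not Boxncase data.

```python
from scipy.stats import chi2_contingency

# Hypothetical results: [converted, did not convert] per variant.
control = [180, 9820]
variant = [215, 9785]

chi2, p_value, dof, expected = chi2_contingency([control, variant])

control_rate = control[0] / sum(control)
variant_rate = variant[0] / sum(variant)
print(f"control: {control_rate:.2%}, variant: {variant_rate:.2%}, p-value: {p_value:.4f}")

# A small p-value (e.g. < 0.05) suggests the difference in conversion rates
# is unlikely to be due to chance alone.
```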
Requirements
Experience: 2+ years of experience in Data Science or Advanced Analytics.
The Toolkit: Expert proficiency in Python or R for statistical analysis.
Data Querying: Advanced SQL skills are non-negotiable (Joins, Window Functions, CTEs).
Math Mindset: Strong grasp of statistics (Hypothesis testing, distributions, probability).
Visualization: Ability to communicate data visually using Tableau, PowerBI, or Looker.
Communication: Excellent verbal and written skills; you can explain a p-value to a non-technical manager.
Preferred Tech Stack (Keywords)
Languages: Python (Pandas, NumPy), R, SQL
Viz Tools: Tableau, PowerBI, Looker, Plotly
Machine Learning: Scikit-learn, XGBoost (applied to business problems)
Big Data: Spark, Hadoop, Snowflake
Benefits
Salary Range: $50,000 - $180,000 USD / year (Commensurate with location and experience)
Remote Friendly: Work from where you are most productive.
Learning Budget: Stipend for data courses (Coursera, DataCamp) and books.
$50k-180k yearly 32d ago
Data Scientist
Drive Devilbiss Healthcare
Data engineer job in Port Washington, NY
Who is Drive Medical?
Drive Medical has become a leading manufacturer of medical products with a strong and consistent track record of growth achieved both organically and through acquisitions. We are proud of our high-quality, diverse product portfolio, channel footprint and global operating scale. Our products are sold into the homecare, long-term care, retail, and e-commerce channels in more than 100 countries around the world.
“Leading the World with Innovative Healthcare Solutions that Enhance Lives”
Summary (Major Purpose of the Role):
Position Summary
The Sales Data Scientist will use data analytics and statistical techniques to generate insights that support sales performance and revenue growth. This role focuses on building and improving reporting tools, analyzing data, and providing actionable recommendations to help the sales organization make informed decisions.
Key Responsibilities
Data Analysis & Reporting
Analyze sales data to identify trends, patterns, and opportunities.
Create and maintain dashboards and reports for Sales and leadership teams.
Support root-cause analysis and process improvement initiatives.
Sales Insights
Provide data-driven recommendations for pricing, discount strategies, and sales funnel optimization.
Assist in segmentation analysis to identify key customer groups and markets.
Collaboration
Work closely with Sales, Marketing, Finance, and Product teams to align analytics with business needs.
Present findings in clear, actionable formats to stakeholders.
Data Infrastructure
Ensure data accuracy and integrity across reporting tools.
Help automate reporting processes for efficiency and scalability.
Required Qualifications:
2-4 years of experience in a data analytics or sales operations role.
Strong Excel skills (pivot tables, formulas, data analysis).
Bachelor's degree in Mathematics, Statistics, Economics, Data Science, or related field-or equivalent experience.
Preferred Qualifications:
Familiarity with Python, R, SQL, and data visualization tools (e.g., Power BI).
Experience leveraging AI/ML tools and platforms (e.g., predictive analytics, natural language processing, automated insights).
Experience with CRM systems (Salesforce) and marketing automation platforms.
Strong analytical and problem-solving skills with attention to detail.
Ability to communicate insights clearly to non-technical audiences.
Collaborative mindset and willingness to learn new tools and techniques.
Why Apply…
Competitive Benefits, Paid Time Off, 401(k) Savings Plan
This position does not offer sponsorship opportunities.
Pursuant to New York law, Drive Medical provides a salary range in job advertisements. The salary range for this role is $95,000.00 to $125,000.00 per year. Actual salaries may vary depending on factors such as the applicant's experience, specialization, education, as well as the company's requirements. The provided salary range does not include bonuses, incentives, differential pay, or other forms of compensation or benefits which may be offered to the applicant, if eligible according to the company's policies.
Drive Medical is an Equal Opportunity Employer and provides equal employment opportunities to all employees and applicants for employment. Drive Medical strictly prohibits and does not tolerate discrimination against employees, applicants, or any other covered person because of race, color, religion, gender, sexual orientation, gender identity, pregnancy and/or parental status, national origin, age, disability status, protected veteran status, genetic information (including family medical history), or any other characteristic protected by federal, state, or local law. Drive Medical complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.
$95k-125k yearly Auto-Apply 28d ago
Big Data Engineer
Cardinal Integrated 4.4
Data engineer job in Jericho, NY
Title: Big Data Engineer
Location: Jericho, NY (source locally first; can source outside of the state as long as the candidate is willing to relocate at their own expense)
Duration: C2H
Rate: $70 - $80.00 per hour C2C
Visa Type: H1, GC, US Citizen
Interview: 1 or 2 rounds of phone followed by an on-site interview (may be able to provide a Skype or WebEx to out-of-state candidates)
Travel: no
Description:
The Big Data Engineer is responsible for the design, architecture, and development of projects powered by Google BigData and the MapR Hadoop distribution.
Must-Have Skills/Experience:
* Bachelor's degree required
* 5+ years of solution architecture in Hadoop
* Demonstrated experience in architecture, engineering, and implementation of enterprise-grade production big data use cases
* Extensive hands-on experience in MapReduce, Hive, Java, HBase, and the following Hadoop ecosystem products: Sqoop, Flume, Oozie, Storm, Spark, and/or Kafka (a brief PySpark sketch follows the lists below).
* Extensive experience in Shell Scripting
* Solid understanding of different file formats and data serialization formats such as ProtoBuf, Avro, JSON.
* Hands-on delivery experience working on popular Hadoop distribution platforms like Cloudera, Hortonworks, or MapR (MapR preferred)
* Excellent communication skills
Nice to have:
* Coordinating the movement of data from original data sources into NoSQL data lakes and cloud environments
* Hands-on experience with Talend used in conjunction with Hadoop MapReduce/Spark/Hive.
* Experience with Google cloud platform (Google BigQuery)
* Source control (preferably Git Hub)
* Knowledge of agile development methodologies
* Experience with notebook/IDE frameworks like Hue, Jupyter, Zeppelin
* Good experience with ETL technologies and data warehouse concepts
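As a brief illustration of the Hive/Spark work referenced above, here is a minimal PySpark sketch that reads a Hive table and writes an aggregate back; the database and table names are hypothetical placeholders, not this client's environment.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets Spark read tables registered in the Hive metastore.
spark = (
    SparkSession.builder.appName("daily_event_rollup")
    .enableHiveSupport()
    .getOrCreate()
)

events = spark.table("raw_db.click_events")  # hypothetical Hive table

daily_counts = (
    events.groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Overwrite a curated Hive table consumed by downstream reporting.
daily_counts.write.mode("overwrite").saveAsTable("curated_db.daily_event_counts")

spark.stop()
```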
$70-80 hourly 23d ago
Data Engineer
Innovative Rocket Technologies Inc. 4.3
Data engineer job in Hauppauge, NY
Job Description
Data is pivotal to our goal of frequent launch and rapid iteration. We're recruiting a Data Engineer at iRocket to build pipelines, analytics, and tools that support propulsion test, launch operations, manufacturing, and vehicle performance.
The Role
Design and build data pipelines for test stands, manufacturing machines, launch telemetry, and operations systems.
Develop dashboards, real-time monitoring, data-driven anomaly detection, performance trending, and predictive maintenance tools (see the sketch after this list).
Work with engineers across propulsion, manufacturing, and operations to translate data needs into data products.
Maintain data architecture, ETL processes, cloud/edge-data systems, and analytics tooling.
Support A/B testing, performance metrics, and feed insights back into design/manufacturing cycles.
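As a sketch of the data-driven anomaly detection described above, the snippet below flags telemetry samples whose value drifts far from a rolling baseline; the channel name, file, and threshold are illustrative assumptions, not iRocket's actual pipeline.

```python
import pandas as pd

def flag_anomalies(telemetry: pd.DataFrame, window: int = 100, threshold: float = 4.0) -> pd.DataFrame:
    """Flag samples more than `threshold` rolling standard deviations from the rolling mean."""
    baseline = telemetry["chamber_pressure"].rolling(window, min_periods=window).mean()
    spread = telemetry["chamber_pressure"].rolling(window, min_periods=window).std()
    z = (telemetry["chamber_pressure"] - baseline) / spread
    return telemetry.assign(zscore=z, anomaly=z.abs() > threshold)

# Example usage with a hypothetical test-stand export:
# df = pd.read_parquet("test_stand_run_042.parquet")  # columns: timestamp, chamber_pressure
# print(flag_anomalies(df).query("anomaly"))
```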
Requirements
Bachelor's degree in Computer Science, Data Engineering, or related technical field.
2+ years of experience building data pipelines, ETL/ELT workflows, and analytics systems.
Proficient in Python, SQL, cloud data platforms (AWS, GCP, Azure), streaming/real-time analytics, and dashboarding (e.g., Tableau, PowerBI).
Strong ability to work cross-functionally and deliver data products to engineering and operations teams.
Strong communication, documentation, and a curiosity-driven mindset.
Benefits
Health Care Plan (Medical, Dental & Vision)
Retirement Plan (401k, IRA)
Life Insurance (Basic, Voluntary & AD&D)
Paid Time Off (Vacation, Sick & Public Holidays)
Family Leave (Maternity, Paternity)
Short Term & Long Term Disability
Wellness Resources
$102k-146k yearly est. 2d ago
Junior Data Scientist
Bexorg
Data engineer job in New Haven, CT
About Us
Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment.
The Job
Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments - including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies.
Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively.
Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery (a brief sketch follows this list).
Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries.
Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts.
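As a small illustration of the model-execution work described above, here is a minimal scikit-learn sketch for classifying samples from high-dimensional molecular features; the data shapes and labels are synthetic stand-ins, not BrainEx data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a transcriptomic matrix: 200 samples x 5,000 genes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))
y = rng.integers(0, 2, size=200)  # e.g. treated vs. control

# Scale, reduce dimensionality, then classify; cross-validation guards against
# overfitting when features vastly outnumber samples.
model = make_pipeline(StandardScaler(), PCA(n_components=20), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean ROC AUC: {scores.mean():.3f}")
```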
Qualifications and Skills:
Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks.
Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered.
Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus.
Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results.
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers.
Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks - whether it's mastering a new analysis method or digging into scientific literature - to push our research forward.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
$75k-105k yearly est. Auto-Apply 60d+ ago
Data Scientist
The Connecticut Rise Network
Data engineer job in New Haven, CT
RISE Data Scientist
Reports to: Monitoring, Evaluation, and Learning Manager
Salary: Competitive and commensurate with experience
Please note: Due to the upcoming holidays, application review for this position will begin the first week of January. Applicants can expect outreach by the end of the week of January 5.
Overview:
The RISE Network's mission is to ensure all high school students graduate with a plan and the skills and confidence to achieve college and career success. Founded in 2016, RISE partners with public high schools to lead networks where communities work together to use data to learn and improve. Through its core and most comprehensive network, RISE partners with nine high schools and eight districts, serving over 13,000 students in historically marginalized communities.
RISE high schools work together to ensure all students experience success as they transition to, through, and beyond high school by using data to pinpoint needs, form hypotheses, and pursue ideas to advance student achievement. Partner schools have improved Grade 9 promotion rates by nearly 20 percentage points, while also decreasing subgroup gaps and increasing schoolwide graduation and college access rates. In 2021, the RISE Network was honored to receive the Carnegie Foundation's annual Spotlight on Quality in Continuous Improvement recognition. Increasingly, RISE is pursuing opportunities to scale its impact through research publications, consulting partnerships, professional development experiences, and other avenues to drive excellent student outcomes.
Position Summary and Essential Job Functions:
The RISE Data Scientist will play a critical role in leveraging data to support continuous improvement, program evaluation, and research, enhancing the organization's evidence-based learning and decision-making. RISE is seeking a talented and motivated individual to design and conduct rigorous quantitative analyses to assess the outcomes and impacts of programs.
The ideal candidate is an experienced analyst who is passionate about using data to drive social change, with strong skills in statistical modeling, data visualization, and research design. This individual will also lead efforts to monitor and analyze organization-wide data related to mission progress and key performance indicators (KPIs), and communicate these insights in ways that inspire improvement and action. This is an exciting opportunity for an individual who thrives in an entrepreneurial environment and is passionate about closing opportunity gaps and supporting the potential of all students, regardless of life circumstances. The role will report to the Monitoring, Evaluation, and Learning (MEL) Manager and sit on the MEL team.
Responsibilities include, but are not limited to:
1. Research and Evaluation (30%)
Collaborate with MEL and network teams to design and implement rigorous process, outcome, and impact evaluations.
Lead in the development of data collection tools and survey instruments.
Manage survey data collection, reporting, and learning processes.
Develop RISE learning and issue briefs supported by quantitative analysis.
Design and implement causal inference approaches where applicable, including quasi-experimental designs.
Provide technical input on statistical analysis plans, monitoring frameworks, and indicator selection for network programs.
Translate complex findings into actionable insights and policy-relevant recommendations for non-technical audiences.
Report data for RISE leadership and staff, generating new insights to inform program design.
Create written reports, presentations, publications, and communications pieces.
2. Quantitative Analysis and Statistical Modeling (30%)
Clean, transform, and analyze large and complex datasets from internal surveys, the RISE data warehouse, and external data sources such as the National Student Clearinghouse (NSC).
Conduct exploratory research that informs organizational learning.
Lead complex statistical analyses using advanced methods (regression modeling, propensity score matching, difference-in-differences analysis, time-series analysis, etc.); a brief sketch follows this section.
Contribute to data cleaning and analysis for key performance indicator reporting.
Develop processes that support automation of cleaning and analysis for efficiency.
Develop and maintain analytical code and workflows to ensure reproducibility.
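The sketch below illustrates one of the methods named above, a simple difference-in-differences regression in Python with statsmodels; the file and column names are hypothetical placeholders, not RISE's actual data model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-outcome panel: one row per student per year.
# `treated` marks students in partner schools, `post` marks years after program launch.
df = pd.read_csv("student_outcomes.csv")

# The coefficient on treated:post is the difference-in-differences estimate
# of the program's effect on the outcome (e.g. credits earned in Grade 9).
model = smf.ols("credits_earned ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(model.summary())
```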
3. Data Visualization and Tool-building (30%)
Work closely with non-technical stakeholders to understand the question(s) they are asking and the use cases they have for specific data visualizations or tools.
Develop well-documented overviews and specifications for new tools.
Create clear, compelling data visualizations and dashboards.
Collaborate with Data Engineering to appropriately and sustainably source data for new tools.
Manage complex projects to build novel and specific tools for internal or external stakeholders.
Maintain custom tools for the duration of their usefulness, including by responding to feedback and requests from project stakeholders.
4. Data Governance and Quality Assurance (10%)
Support data quality assurance protocols and standards across the MEL team.
Ensure compliance with data protection, security, and ethical standards.
Maintain organized, well-documented code and databases.
Collaborate with the Data Engineering team to maintain RISE MEL data infrastructure.
Qualifications
Master's degree (or PhD) in statistics, economics, quantitative social sciences, public policy, data science, or related field.
Minimum of 3 years of professional experience conducting statistical analysis and managing large datasets.
Advanced proficiency in R, Python, or Stata for data analysis and modeling.
Experience designing and implementing quantitative research and evaluation studies.
Strong understanding of inferential statistics, experimental and quasi-experimental methods, and sampling design.
Strong knowledge of survey data collection tools such as Key Surveys, Google Forms, etc.
Excellent data visualization and communication skills.
Experience with data visualization tools; strong preference for Tableau.
Ability to translate complex data into insights for diverse audiences, including non-technical stakeholders.
Ability to cultivate relationships and earn credibility with a diverse range of stakeholders.
Strong organizational and project management skills.
Strong sense of accountability and responsibility for results.
Ability to work in an independent and self-motivated manner.
Demonstrated proficiency with Google Workspace.
Commitment to equity, ethics, and learning in a nonprofit or mission-driven context.
Positive attitude and willingness to work in a collaborative environment.
Strong belief that all students can learn and achieve at high levels.
Preferred
Experience working on a monitoring, evaluation, and learning team.
Familiarity with school data systems and prior experience working in a school, district, or similar K-12 educational context preferred.
Experience working with survey data (e.g., DHS, LSMS), administrative datasets, or real-time digital data sources.
Working knowledge of data engineering or database management (SQL, cloud-based platforms).
Salary Range
$85k - $105k
Most new hires' salaries fall within the first half of the range, allowing team members to grow in their roles. For those who already have significant and aligned experiences at the same level as the role, placement may be at the higher end of the range.
The Connecticut RISE Network is an equal opportunity employer and welcomes candidates from diverse backgrounds.
RISE Interview & Communication Policy
The RISE interview process includes:
A video or phone screening with the hiring manager
Interviews with the hiring panel
A performance task
Reference checks
Applicants will never receive an offer unless they have completed the full interview process.
All official communications with applicants are sent only through ADP or CT RISE email addresses (@ctrise.org). There has been a job offer scam circulating from various email addresses using the domain @careers-ctrise.org; this is not a valid RISE email address.
If you receive an email from anyone claiming to represent RISE with a job offer outside of our official channels, or requesting written screening information, and you have not completed the full interview process, please do not respond and report it to ******************.
$85k-105k yearly Auto-Apply 30d ago
C++ Market Data Engineer (USA)
Trexquant Investment 4.0
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning (a brief Python sketch of such a check follows this list).
Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
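While the feed handlers themselves are written in C++, the role also calls for Python scripting for testing and ops automation; below is a minimal sketch of the kind of latency regression check implied by the instrumentation work above. The capture file format, column names, and budget are illustrative assumptions, not Trexquant's tooling.

```python
import numpy as np
import pandas as pd

# Hypothetical capture: per-message receive and publish timestamps in nanoseconds.
df = pd.read_csv("feed_handler_timestamps.csv")  # columns: recv_ns, publish_ns
latency_us = (df["publish_ns"] - df["recv_ns"]) / 1_000.0

p50, p99, p999 = np.percentile(latency_us, [50, 99, 99.9])
print(f"p50={p50:.1f}us p99={p99:.1f}us p99.9={p999:.1f}us")

# Fail the regression check if tail latency exceeds an agreed budget (placeholder value).
assert p99 < 50.0, "p99 latency regression: exceeds 50us budget"
```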
Requirements
BS/MS/PhD in Computer Science, Electrical Engineering, or related field.
3+ years of professional C++ (14/17/20) development experience focused on low-latency, high-throughput systems.
Proven track record building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and time-series databases or in-memory caches.
Comfort with scripting in Python for prototyping, testing, and ops automation.
Excellent problem-solving skills, ownership mindset, and ability to thrive in a fast-paced trading environment.
Familiarity with containerization (Docker/K8s) and public-cloud networking (AWS, GCP).
Benefits
Competitive salary, plus bonus based on individual and company performance.
Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
PPO Health, dental and vision insurance premiums fully covered for you and your dependents.
Pre-Tax Commuter Benefits
Applications are now open for our NYC office, opening in September 2026.
The base salary range is $175,000 - $200,000 depending on the candidate's educational and professional background. Base salary is one component of Trexquant's total compensation, which may also include a discretionary, performance-based bonus.
Trexquant is an Equal Opportunity Employer
$175k-200k yearly Auto-Apply 60d+ ago
Data Engineer I
Epicured, Inc.
Data engineer job in Glen Cove, NY
Job DescriptionWhy Epicured?
Epicured is on a mission to combat and prevent chronic disease, translating scientific research into high-quality food products and healthcare services nationwide. Our evidence-based approach brings together the best of the clinical, culinary, and technology worlds to help people eat better, feel better, and live better one meal at a time.
By joining Epicured's Technology team, you'll help power the data infrastructure that supports Medicaid programs, clinical services, life sciences initiatives, and direct-to-consumer operations - enabling better decisions, better outcomes, and scalable growth.
Role Overview
Epicured is seeking a Data Engineer I to support data ingestion, reporting, and analytics across multiple business lines. Reporting to the SVP of Software Engineering, this role will focus on building and maintaining reliable reporting pipelines, supporting business requests, and managing data from a growing ecosystem of healthcare, operational, and e-commerce systems.
This position is ideal for a self-starter with strong SQL skills who is comfortable working with evolving requirements, healthcare-adjacent data, and modern data platforms such as Microsoft Fabric and Power BI.
Key Responsibilities
Build, maintain, and support reports across all Epicured business lines using Power BI, exports, and Microsoft Fabric.
Ingest and integrate new data sources, including SQL Server, operational systems, and external data exchanges (a brief ingestion sketch follows this list).
Support reporting and analytics requests across Clinical & Life Sciences, Section 1115 Medicaid Waiver programs, Health Information Exchanges (e.g., Healthix), and Self-Pay e-commerce operations.
Handle HIPAA-sensitive data, ensuring proper governance, access control, and compliance standards are maintained.
Manage Shopify and other e-commerce data requests for Epicured's Self-Pay division.
Keep reporting environments organized, documented, and operational while prioritizing incoming requests.
Operate and help scale Epicured's Microsoft Fabric environment, contributing to platform strategy and best practices.
Partner with stakeholders to clarify ambiguous requirements and translate business questions into data solutions.
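To illustrate the ingestion-and-reporting work in the list above (see the SQL Server item), here is a minimal Python sketch that pulls order data from a SQL Server database and prepares a daily summary for a report; the connection string, table, and column names are hypothetical placeholders, not Epicured's systems.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical SQL Server connection; in practice credentials come from a secure store.
engine = create_engine(
    "mssql+pyodbc://user:password@server/epicured_dw?driver=ODBC+Driver+17+for+SQL+Server"
)

orders = pd.read_sql(
    "SELECT order_id, order_date, business_line, total_amount FROM dbo.orders",
    engine,
)

# Daily revenue by business line, ready to load into a reporting dataset or export.
daily = (
    orders.assign(order_date=pd.to_datetime(orders["order_date"]).dt.date)
    .groupby(["order_date", "business_line"], as_index=False)["total_amount"]
    .sum()
)
daily.to_csv("daily_revenue_by_business_line.csv", index=False)
```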
Qualifications
3+ years of experience in data engineering, analytics, or business intelligence roles.
Strong SQL skills with experience working in relational databases.
Experience with Azure, Microsoft Fabric, Power BI, or similar modern data platforms.
Strong proficiency in Excel / Google Sheets.
Ability to work independently and manage multiple priorities in a fast-growing environment.
Experience working with healthcare or HIPAA-adjacent data, including exposure to health information exchanges.
Familiarity with ETL / ELT pipelines and data modeling best practices.
Experience integrating operational, financial, logistics, and clinical datasets.
Preferred Qualifications
Experience with C#.
Python experience is a plus.
Healthcare or life sciences background.
Experience supporting analytics for Medicaid, payer, or regulated environments.
Compensation & Benefits
Salary Range: $115,000-$130,000 annually, commensurate with experience
Benefits include:
401(k)
Health, Dental, and Vision insurance
Unlimited Paid Time Off (PTO)
Opportunity to grow with Epicured's expanding data and technology organization
Equal Employment Opportunity
Epicured is proud to be an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of age, race, creed, color, national origin, religion, gender, sexual orientation, gender identity or expression, disability, veteran status, or any other protected status under federal, state, or local law.
$115k-130k yearly 12d ago
Tech Lead, Data & Inference Engineer
Catalyst Labs
Data engineer job in Stamford, CT
Our Client
A fast moving and venture backed advertising technology startup based in San Francisco. They have raised twelve million dollars in funding and are transforming how business to business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high match and cross channel segments without the use of cookies. By transforming first party and third party data into precision targetable audiences across platforms such as Meta, Google and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business to consumer activity, they are redefining how business brands scale demand generation and account based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with Founders, CTOs, and Heads of AI in these areas who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above market base + bonus + equity
Roles & Responsibilities
Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third party application programming interfaces into trusted and low latency systems (a brief orchestration sketch follows this list).
Take full ownership of reliability, cost and service level objectives. This includes achieving ninety nine point nine percent uptime, maintaining minutes level latency and optimizing cost per terabyte. Conduct root cause analysis and provide long lasting solutions.
Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring and quality assurance using large language models and retrieval augmented generation. Manage version control, caching and evaluation loops.
Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes and usage based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade offs and reversibility while making practical decisions on whether to build internally or buy externally.
Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality and support for both real time and batch oriented use cases.
Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization.
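As a rough illustration of the kind of streaming pipeline this role describes, the sketch below consumes raw events, applies a placeholder enrichment step, and republishes the results. It assumes the kafka-python client; the topic names, broker address, and scoring rule are hypothetical, not the company's actual stack.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

# Consume raw events from a hypothetical topic.
consumer = KafkaConsumer(
    "raw_events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="enrichment-workers",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def enrich(record: dict) -> dict:
    # Placeholder enrichment; in practice this step might call an LLM/RAG
    # scoring service with caching and versioned prompts, as described above.
    record["match_score"] = 0.9 if record.get("email_domain") else 0.0
    record["pipeline_version"] = "v1"
    return record

for message in consumer:
    # Publish enriched records to a downstream topic.
    producer.send("enriched_events", enrich(message.value))
```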
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: the ability to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability (see the DuckDB sketch after this list).
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
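For context on the DuckDB item above, here is a minimal sketch of the query-pushdown pattern it implies; the file path and column names are made up for illustration.

```python
import duckdb

con = duckdb.connect()  # in-memory database

# Query a directory of Parquet files directly; DuckDB pushes down the filter
# and reads only the referenced columns, which is the usual first step when
# optimizing queries over large lake-resident datasets.
result = con.execute(
    """
    SELECT account_id,
           count(*)      AS events,
           max(event_ts) AS last_seen
    FROM read_parquet('data/events/*.parquet')
    WHERE event_ts >= DATE '2024-01-01'
    GROUP BY account_id
    ORDER BY events DESC
    LIMIT 20
    """
).fetchdf()
print(result)
```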
$84k-114k yearly est. 60d+ ago
ETL Development and Architecture-----------------Need GC and USC
USM 4.2
Data engineer job in Greenwich, CT
USM Business Systems Inc. is a rapidly growing global systems integrator, software and product development, IT outsourcing, and technology services provider headquartered in Chantilly, VA, with offshore delivery centers in India. We offer world-class expertise, delivering high-quality services through industry best practices designed to provide exceptional value to our customers.
Leveraging our industry knowledge, service-offering expertise, and innovation capabilities, we identify new business and technology trends and develop solutions that help customers around the globe, providing reliable, practical, and cost-effective IT services.
Established in 1999, the organization has core strengths in building and managing business-oriented IT environments, with rich experience in technology innovation, ERP and CRM consulting, product engineering, business intelligence, data management, SOA, BPM, data warehousing, SharePoint consulting, and IT infrastructure. Our other offerings include customized solutions and services in ERP, CRM, enterprise architecture, offshore advisory services, e-commerce, Social, Mobile, Cloud, Analytics (SMAC), and DevOps.
USM, a US-certified Minority Business Enterprise (MBE), is recognized as one of the fastest-growing IT systems integrators in the Washington, DC area. Most recently, USM was ranked #9 on the list of top services companies in the DC Metro Area by the Washington Business Journal (2011). We are a project-driven firm that reliably meets the IT needs of our state and government customers through innovation and business acumen.
ETL Development and Architecture
Greenwich, CT
3 Years contract
Need GC and USC
Top Three Skills:
1. ETL Development and Architecture expertise leveraging SSIS on a SQL Server platform. (SSIS is a must have) Any knowledge of other ETL tools would be a plus as well.
2. Hedge fund / financial services work experience, with an understanding of investment data and terminology from the buy side.
3. Business facing with excellent communication and the ability to gather complex requirements from various teams across the organization.
4. Attribution Analysis and Asset Allocation expertise in prior projects will be a major plus within this organization.
Job Description:
The Information Technology Group of AQR Capital Management is seeking an experienced Data Warehouse and Business Intelligence professional to help the firm expand its use of data and BI tools to meet corporate goals and objectives. The individual will provide optimal DW/ETL architecture and design for the enterprise DW environment. This includes gathering requirements, designing, working with the BI team, and rolling out solutions to end users. This individual will act as an advocate of DW/ETL best practices. Candidates must have effective communication skills and the ability to interact professionally with a diverse group of staff and the user community. Candidates must be able to work productively in teams and to mentor or instruct others while effectively prioritizing their own tasks. Successful candidates will have great technology and problem-solving skills, a solid work ethic, and an eagerness to learn in a highly intellectual, collaborative environment.
ROLE
Responsible for leading the strategic design and maintenance of Data Warehouse and ETL platforms.
Direct, organize, and lead projects in the implementation and use of ETL software tools and systems.
Design, code, test, and document all new or modified ETL processes, applications, and programs.
Manage the design, development, architecture, and delivery of all AQR internal and external reports.
Assist in the design of BI solutions to ensure interoperability with DW/ETL solutions.
Assist in analyzing user requirements and, based on findings, design ETL processes based on the functional specifications.
Adhere to best practices in change control and documentation.
REQUIREMENTS
8+ years of hands-on experience with DW/ETL processes, data architecture and business requirements gathering/analysis.
Expert knowledge of one ETL tool (Informatica, SSIS, Ab Initio etc.) - preferably SSIS
Sound knowledge of data warehouse methodologies and design approaches (Kimball, Inmon, etc.); a minimal star-schema sketch follows this list.
Strong SQL skills.
Good understanding of financial markets and products.
Direct experience in implementing enterprise DW/ETL processes and decision support.
Strong ability to analyze user requirements and build DW/ETL processes to specifications.
Direct hands-on experience with ad hoc query programs and reporting software.
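As a minimal sketch of the Kimball-style star schema and SQL Server skills listed above, the snippet below creates a toy dimension and fact table via pyodbc; the connection string, table names, and columns are placeholders, not AQR's actual model.

```python
import pyodbc

# Connection string values are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dw-server;DATABASE=InvestmentDW;Trusted_Connection=yes;"
)
cur = conn.cursor()

# A toy Kimball-style star schema: one fact table keyed to a conformed
# dimension. Real attribution/allocation models would carry far more detail.
cur.execute("""
IF OBJECT_ID('dbo.DimSecurity') IS NULL
CREATE TABLE dbo.DimSecurity (
    SecurityKey INT IDENTITY PRIMARY KEY,
    Ticker      VARCHAR(20) NOT NULL,
    AssetClass  VARCHAR(50) NOT NULL
);
""")
cur.execute("""
IF OBJECT_ID('dbo.FactPosition') IS NULL
CREATE TABLE dbo.FactPosition (
    DateKey     INT NOT NULL,
    SecurityKey INT NOT NULL REFERENCES dbo.DimSecurity(SecurityKey),
    MarketValue DECIMAL(18, 2) NOT NULL,
    Weight      DECIMAL(9, 6)  NOT NULL
);
""")
conn.commit()
```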
OPTIONAL (Nice to Have)
Knowledge of data mining and predictive analysis
Understanding of .Net (C#) and/or Java
Understanding of fund structures and strategies.
Additional Information
If you are interested, please send your resume to preethib@usmsystems(dot)com or call me directly at ************
$82k-108k yearly est. 60d+ ago
OFSAA Data Architect
Tectammina
Data engineer job in Norwalk, CT
Mandatory Technical Skills :
Strong in data warehousing concepts and dimensional modeling - min. 6 years' experience.
Experience in OFSAA data modeling - min. 3 years.
Translate business requirements into (OFSAA) designs and map data elements from models to the OFSAA data model.
Strong troubleshooting skills
Hands-on experience extracting and loading data from source systems into the OFSAA model.
Data modeling (star / 3NF / cube), ETL design and build.
Extensive experience in OFSAA Infrastructure, OFSAA Data Model, & Erwin Data Modeler.
Desirable Technical Skills : OBIEE Analytics and BI - ETL Knowledge
Mandatory Functional Skills :
Ability to coordinate with multiple technical teams, business users, and customers
Strong communication
Strong troubleshooting skills
Should have a strong understanding of OFSAA LRM, Basel, and OBIEE Analytics.
Desirable Functional Skills : Banking and financial services industry
Qualifications
Bachelor's degree or higher
Additional Information
Job Status: Permanent
Share the Profiles to *****************************
Contact: ************
Keep the subject line with Job Title and Location
$87k-119k yearly est. Easy Apply 60d+ ago
Data Architect - Power & Utilities - Senior Manager- Consulting - Location OPEN
Ernst & Young Oman 4.7
Data engineer job in Stamford, CT
At EY, we're all in to shape your future with confidence.
We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
AI & Data - Data Architecture - Senior Manager - Power & Utilities Sector
EY is seeking a motivated professional with solid experience in the utilities sector to serve as a Senior Manager with a robust background in Data Architecture, Data Modernization, end-to-end data capabilities, AI, Gen AI, and Agentic AI, preferably with a power systems / electrical engineering background and a track record of delivering business use cases in Transmission, Distribution, Generation, or Customer. The ideal candidate will have a history of working for consulting companies and be well-versed in the fast-paced culture of consulting work. This role is dedicated to the utilities sector, where the successful candidate will craft, deploy, and maintain large-scale, AI-ready data architectures.
The opportunity
You will help our clients enable better business outcomes while working in the rapidly growing Power & Utilities sector. You will have the opportunity to lead and develop your skill set to keep up with the ever-growing demands of the modern data platform. During implementation you will solve complex analytical problems to bring data to insights and enable the use of ML and AI at scale for your clients. This is a high growth area and a high visibility role with plenty of opportunities to enhance your skillset and build your career.
As a Senior Manager in Data Architecture, you will have the opportunity to lead transformative technology projects and programs that align with our organizational strategy to achieve impactful outcomes. You will provide assurance to leadership by managing timelines, costs, and quality, and lead both technical and non-technical project teams in the development and implementation of cutting-edge technology solutions and infrastructure. You will have the opportunity to be face to face with external clients and build new and existing relationships in the sector. Your specialized knowledge in project and program delivery methods, including Agile and Waterfall, will be instrumental in coaching others and proposing solutions to technical constraints.
Your key responsibilities
In this pivotal role, you will be responsible for the effective management and delivery of one or more processes, solutions, and projects, with a focus on quality and effective risk management. You will drive continuous process improvement and identify innovative solutions through research, analysis, and best practices. Managing professional employees or supervising team members to deliver complex technical initiatives, you will apply your depth of expertise to guide others and interpret internal/external issues to recommend quality solutions. Your responsibilities will include:
As Data Architect - Senior Manager, you will have an expert understanding of data architecture and data engineering, and will be focused on problem-solving to design, architect, and present findings and solutions, leading more junior team members, and working with a wide variety of clients to sell and lead the delivery of technology consulting services. You will be the go-to resource for understanding our clients' problems and responding with appropriate methodologies and solutions anchored around data architectures, platforms, and technologies. You are responsible for helping to win new business for EY. You are a trusted advisor with a broad understanding of digital transformation initiatives, the analytic technology landscape, industry trends, and client motivations. You are also a charismatic communicator and thought leader, capable of going toe-to-toe with the C-level at our clients and prospects, and willing and able to constructively challenge them.
Skills and attributes for success
To thrive in this role, you will need a combination of technical and business skills that will make a significant impact. Your skills will include:
Technical Skills Applications Integration
Cloud Computing and Cloud Computing Architecture
Data Architecture Design and Modelling
Data Integration and Data Quality
AI/Agentic AI driven data operations
Experience delivering business use cases in Transmission / Distribution / Generation / Customer.
Strong relationship management and business development skills.
Become a trusted advisor to your clients' senior decision makers and internal EY teams by establishing credibility and expertise in both data strategy in general and in the use of analytic technology solutions to solve business problems.
Engage with senior business leaders to understand and shape their goals and objectives and their corresponding information needs and analytic requirements.
Collaborate with cross-functional teams (Data Scientists, Business Analysts, and IT teams) to define data requirements, design solutions, and implement data strategies that align with our clients' objectives.
Organize and lead workshops and design sessions with stakeholders, including clients, team members, and cross-functional partners, to capture requirements, understand use cases, personas, key business processes, brainstorm solutions, and align on data architecture strategies and projects.
Lead the design and implementation of modern data architectures, supporting transactional, operational, analytical, and AI solutions.
Direct and mentor global data architecture and engineering teams, fostering a culture of innovation, collaboration, and continuous improvement.
Establish data governance policies and practices, including data security, quality, and lifecycle management.
Stay abreast of industry trends and emerging technologies in data architecture and management, recommending innovations and improvements to enhance our capabilities.
To qualify for the role, you must have
A Bachelor's degree in a STEM field.
12+ years of professional experience in industry or in technology consulting.
12+ years of hands-on experience architecting, designing, delivering, or optimizing data lake solutions.
5+ years' experience with native cloud products and services such as Azure or GCP.
8+ years of experience mentoring and leading teams of data architects and data engineers, fostering a culture of innovation and professional development.
In-depth knowledge of data architecture principles and best practices, including data modelling, data warehousing, data lakes, and data integration.
Demonstrated experience in leading large data engineering teams to design and build platforms with complex architectures and diverse features, including various data flow patterns, relational and NoSQL databases, production-grade performance, and delivery to downstream use cases and applications.
Hands-on experience in designing end-to-end architectures and pipelines that collect, process, and deliver data to its destination efficiently and reliably.
Proficiency in data modelling techniques and the ability to choose appropriate architectural design patterns, including Data Fabric, Data Mesh, Lakehouse, or Delta Lake (an illustrative sketch follows this list).
Manage complex data analysis, migration, and integration of enterprise solutions to modern platforms, including code efficiency and performance optimizations.
Previous hands-on coding skills in languages commonly used in data engineering, such as Python, Java, or Scala.
Ability to design data solutions that can scale horizontally and vertically while optimizing performance.
Experience with containerization technologies like Docker and container orchestration platforms like Kubernetes for managing data workloads.
Experience with version control systems (e.g., Git) and knowledge of DevOps practices for automating data engineering workflows (DataOps).
Practical understanding of data encryption, access control, and security best practices to protect sensitive data.
Experience leading Infrastructure and Security engineers and architects in overall platform build.
Excellent leadership, communication, and project management skills.
Data Security and Database Management
Enterprise Data Management and Metadata Management
Ontology Design and Systems Design
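To make the lakehouse/Delta Lake pattern above concrete, here is a hedged sketch of a bronze/silver/gold pipeline in PySpark. It assumes a cluster configured with the delta-spark package; the storage paths, schema, and meter-reading example are illustrative only, not an EY or client design.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("meter-readings-lakehouse").getOrCreate()

# Bronze: land raw AMI meter readings as-is (hypothetical path and schema).
raw = spark.read.json("s3://utility-lake/bronze/meter_readings/")

# Silver: enforce types, deduplicate, and derive a partition column.
silver = (
    raw.withColumn("reading_ts", F.to_timestamp("reading_ts"))
       .withColumn("kwh", F.col("kwh").cast("double"))
       .withColumn("reading_date", F.to_date("reading_ts"))
       .dropDuplicates(["meter_id", "reading_ts"])
)
(silver.write.format("delta")
       .mode("append")
       .partitionBy("reading_date")
       .save("s3://utility-lake/silver/meter_readings/"))

# Gold: an aggregate ready for analytics and ML feature pipelines.
daily = silver.groupBy("meter_id", "reading_date").agg(F.sum("kwh").alias("daily_kwh"))
(daily.write.format("delta")
      .mode("overwrite")
      .save("s3://utility-lake/gold/daily_consumption/"))
```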
Ideally, you'll also have
Master's degree in Electrical / Power Systems Engineering, Computer Science, Statistics, Applied Mathematics, Data Science, or Machine Learning, or commensurate professional experience.
Experience working at big 4 or a major utility.
Experience with cloud data platforms like Databricks.
Experience in leading and influencing teams, with a focus on mentorship and professional development.
A passion for innovation and the strategic application of emerging technologies to solve real-world challenges.
The ability to foster an inclusive environment that values diverse perspectives and empowers team members.
Building and Managing Relationships
Client Trust and Value and Commercial Astuteness
Communicating With Impact and Digital Fluency
What we look for
We are looking for top performers who demonstrate a blend of technical expertise and business acumen, with the ability to build strong client relationships and lead teams through change. Emotional agility and hybrid collaboration skills are key to success in this dynamic role.
FY26NATAID
What we offer you
At EY, we'll develop you with future-focused skills and equip you with world-class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more.
We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $144,000 to $329,100. The base salary range for New York City Metro Area, Washington State and California (excluding Sacramento) is $172,800 to $374,000. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
Join us in our team‑led and leader‑enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40-60% of the time over the course of an engagement, project or year.
Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well‑being.
Are you ready to shape your future with confidence? Apply today.
EY accepts applications for this position on an on‑going basis.
For those living in California, please click here for additional information.
EY focuses on high‑ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1-800-EY-HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at ************************** .
#J-18808-Ljbffr
$112k-156k yearly est. 2d ago
Data Engineer
Bexorg
Data engineer job in New Haven, CT
Bexorg is transforming drug discovery by restoring molecular activity in postmortem human brains. Our groundbreaking BrainEx platform enables direct experimentation on functionally preserved human brain tissue, generating massive, high-fidelity molecular datasets that power AI-driven drug discovery for CNS diseases. We are seeking a Data Engineer to help harness this unprecedented data. In this onsite, mid-level role, you will design and optimize the pipelines and cloud infrastructure that turn terabytes of raw experimental data into actionable insights, driving our mission to revolutionize treatments for central nervous system disorders.
The Job:
Data Ingestion & Pipeline Management: Manage and optimize massive data ingestion pipelines from cutting-edge experimental devices, ensuring reliable, real-time capture of complex molecular data.
Cloud Data Architecture: Organize and structure large datasets in Google Cloud Platform, using tools like BigQuery and Cloud Storage to build a scalable data warehouse for fast querying and analysis of brain data (an illustrative sketch follows this list).
Large-Scale Data Processing: Design and implement robust ETL/ELT processes to handle petabyte-scale data, emphasizing speed, scalability, and data integrity at each step of the process.
Internal Data Services: Work closely with our software and analytics teams to expose processed data and insights to internal web applications. Build appropriate APIs or data access layers so that scientists and engineers can seamlessly visualize and interact with the data through our web platform.
Internal Experiment Services: Work with our life science teams to ensure data entry protocols for seamless metadata integration and association with experimental data
Infrastructure Innovation: Recommend and implement cloud infrastructure improvements (such as streaming technologies, distributed processing frameworks, and automation tools) that will future-proof our data pipeline. You will continually assess new technologies and best practices to increase throughput, reduce latency, and support our rapid growth in data volume.
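As an illustrative sketch of the BigQuery-centric ingestion described above, the snippet below loads processed Parquet files from Cloud Storage and runs a simple aggregate query. The project, dataset, bucket, and column names are placeholders, not Bexorg's actual resources.

```python
from google.cloud import bigquery

# Project, dataset, bucket, and table names below are placeholders.
client = bigquery.Client(project="bexorg-research")

table_id = "bexorg-research.molecular.transcript_counts"
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Load a batch of processed Parquet files from Cloud Storage into the
# warehouse table for fast downstream querying.
load_job = client.load_table_from_uri(
    "gs://bexorg-raw/processed/run_042/*.parquet",
    table_id,
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete

# Analysts can then query the warehouse directly.
query = """
    SELECT gene_id, AVG(count) AS mean_count
    FROM `bexorg-research.molecular.transcript_counts`
    GROUP BY gene_id
    ORDER BY mean_count DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.gene_id, row.mean_count)
```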
Qualifications and Skills:
Experience with Google Cloud: Hands-on experience with Google Cloud services (especially BigQuery and related data tools) for managing and analyzing large datasets. You've designed or maintained data systems in a cloud environment and understand how to leverage GCP for big data workloads.
Data Engineering Background: 3+ years of experience in data engineering or a similar role. Proven ability to build and maintain data pipelines dealing with petabyte-scale data. Proficiency in programming (e.g., Python, Java, or Scala) and SQL for developing data processing jobs and queries.
Scalability & Performance Mindset: Familiarity with distributed systems or big data frameworks and a track record of optimizing data workflows for speed and scalability. You can architect solutions that handle exponential data growth without sacrificing performance.
Biology Domain Insight: Exposure to biology or experience working with scientific data (e.g. genomics, bioinformatics, neuroscience) is a strong plus. While deep domain expertise isn't required, you should be excited to learn about our experimental data and comfortable discussing requirements with biologists.
Problem-Solving & Collaboration: Excellent problem-solving skills, attention to detail, and a proactive attitude in tackling technical challenges. Ability to work closely with cross-functional teams (scientists, software engineers, data scientists) and communicate complex data systems in clear, approachable terms.
Passion for the Mission: A strong desire to apply your skills to transform drug discovery. You are inspired by Bexorg's mission and eager to build the data backbone of a platform that could unlock new therapies for CNS diseases.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
$84k-114k yearly est. Auto-Apply 60d+ ago
Senior Data Architect (USA)
Trexquant 4.0
Data engineer job in Stamford, CT
Trexquant is seeking a highly skilled Senior Data Architect to design and lead the next-generation architecture for our research and simulation data ecosystem. This role is central to unifying Trexquant's extensive collection of datasets, sourced from hundreds of vendors, into an accessible, efficient, and scalable data platform that supports simulation, research, and alpha generation across multiple asset classes.
The successful candidate will architect the end-to-end data infrastructure that enables researchers and simulators to seamlessly discover, query, and combine datasets across equities, futures, FX, ETFs, corporate bonds, and options. This person will design data models, storage systems, and researcher-facing interfaces that make it easy to transform raw vendor data into structured, analysis-ready forms, empowering systematic research and robust backtesting.
Responsibilities
* Architect and implement a unified data platform that integrates hundreds of vendor datasets, providing consistent, accessible, and high-quality data to simulators and researchers.
* Design efficient storage and retrieval systems to support both large-scale historical backtesting and high-frequency research workflows.
* Develop intuitive researcher interfaces and APIs that allow users to easily discover variables, explore metadata, and assemble data into standardized stocks × values matrices for rapid hypothesis testing (an illustrative sketch follows this list).
* Collaborate closely with quantitative researchers and simulation teams to understand their workflows, ensuring the data platform meets real-world analytical and performance needs.
* Establish best practices for data modeling, normalization, versioning, and quality control across asset classes and data vendors.
* Work with infrastructure and DevOps teams to optimize data pipelines, caching, and distributed storage for scalability and reliability.
* Prototype and deploy internal data applications that enhance research productivity and data transparency.
* Mentor and guide data engineers to maintain robust, maintainable, and well-documented data systems.
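As a toy illustration of the standardized stocks × values matrices mentioned above, the sketch below pivots long-format vendor data with pandas; the tickers, fields, and values are invented.

```python
import pandas as pd

# Long-format vendor data: one row per (date, ticker, field).
long_df = pd.DataFrame(
    {
        "date": pd.to_datetime(["2024-01-02"] * 4),
        "ticker": ["AAPL", "MSFT", "AAPL", "MSFT"],
        "field": ["close", "close", "volume", "volume"],
        "value": [185.6, 370.9, 52_000_000, 25_400_000],
    }
)

# One matrix per field: rows indexed by date, columns by ticker.
close = (
    long_df[long_df["field"] == "close"]
    .pivot(index="date", columns="ticker", values="value")
)
print(close)
```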
How much does a data engineer earn in Brookhaven, NY?
The average data engineer in Brookhaven, NY earns between $79,000 and $142,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.