Regulatory Engineer
Data engineer job in Cordova, IL
WHO WE ARE
As the nation's largest producer of clean, carbon-free energy, Constellation is focused on our purpose: accelerating the transition to a carbon-free future. We have been the leader in clean energy production for more than a decade, and we are cultivating a workplace where our employees can grow, thrive, and contribute.
Our culture and employee experience make it clear: We are powered by passion and purpose. Together, we're creating healthier communities and a cleaner planet, and our people are the driving force behind our success. At Constellation, you can build a fulfilling career with opportunities to learn, grow and make an impact. By doing our best work and meeting new challenges, we can accomplish great things and help fight climate change. Join us to lead the clean energy future.
The Senior Regulatory Engineer position is based out of our Quad Cities Generating Station in Cordova, IL.
TOTAL REWARDS
Constellation offers a wide range of benefits and rewards to help our employees thrive professionally and personally. We provide competitive compensation and benefits that support both employees and their families, helping them prepare for the future. In addition to highly competitive salaries, we offer a bonus program, a 401(k) with company match, an employee stock purchase program, comprehensive medical, dental and vision benefits (including a robust wellness program), paid time off for vacation, holidays, and sick days, and much more.
***This Engineering role can be filled at the Mid-level or Senior Engineer level. Please see minimum qualifications list below for each level***
Expected salary range varies based on experience and is accompanied by a comprehensive benefits package that includes bonus and 401(k).
Mid-Level - $94,500 - $105,000
Sr Level - $124,200 - $138,000
PRIMARY PURPOSE OF POSITION
Performs advanced regulatory/technical problem solving in support of nuclear plant operations. Responsible for regulatory/technical decisions. Possesses excellent knowledge in functional discipline and its practical application and has detailed knowledge of applicable industry codes and regulations.
PRIMARY DUTIES AND ACCOUNTABILITIES
Provide in-depth regulatory/technical expertise to develop, manage and implement regulatory analyses, activities and programs.
Provide regulatory/technical expertise and consultation through direct involvement to identify and resolve regulatory issues.
Provide complete task management of regulatory issues.
Perform regulatory tasks as assigned by supervision.
Accountable for the accuracy, completeness, and timeliness of work ensuring proper licensing basis management and assuring that standard design criteria, practices, procedures, regulations and codes are used in preparation of products.
Perform independent research, reviews, studies and analyses in support of regulatory/technical projects and programs.
Recommend new concepts and techniques to improve performance, simplify operation, reduce costs, reduce regulatory burden, correct regulatory non-compliances, or comply with changes in codes or regulations.
All other job assignments and/or duties pursuant to company policy or as directed by management, to include but not limited to: Emergency Response duties and/or coverage, Department duty coverage and/or call out, and other assigned positions.
MINIMUM QUALIFICATIONS for Mid-level E02 Engineer
Bachelor's degree in Engineering with 1 year of relevant position experience OR
Associate degree in Engineering with a minimum of 3 years of relevant experience OR
High school diploma (or equivalent) with at least 5 years of relevant experience
Effective written and oral communication skills
Maintain minimum access requirement or unescorted access requirements, as applicable, and favorable medical examination and/or testing in accordance with position duties
MINIMUM QUALIFICATIONS for Senior E03 Engineer
Bachelor's degree in Engineering with 5 years of relevant position experience OR
Associate's degree in Engineering with 7 years of experience OR
High School Diploma or Equivalent with 8 years of experience
Effective written and oral communication skills
Maintain minimum access requirement or unescorted access requirements, as applicable, and favorable medical examination and/or testing in accordance with position duties
PREFERRED QUALIFICATIONS
Previous Senior Reactor Operator (SRO) license/certification
1 year of nuclear power experience
NRC experience
Advanced technical degree or related coursework
Regulatory related work experience or previous experience in a military or other government organization
NG & NGL Engineer
Data engineer job in South Shore, KY
An exciting career awaits you
At MPC, we're committed to being a great place to work - one that welcomes new ideas, encourages diverse perspectives, develops our people, and fosters a collaborative team environment.
MPLX Natural Gas & Natural Gas Liquids (NG & NGL) Operations is seeking an Operations Engineer. The Operations Engineer will provide engineering support, project management, technical stewardship and oversight to gathering, pipeline and compression assets.
This position will report to the Operations Engineering Manager and requires previous experience in the key areas of operations engineering, crude oil/natural gas/natural gas liquids gathering, pipelines and compression/pump facilities. This position will be focused on providing operations engineering support and the development/execution of gathering, pipeline, and compression related projects.
RESPONSIBILITIES:
1. Provides engineering support, technical stewardship, leadership, and oversight to Natural Gas & Natural Gas Liquids operations
2. Troubleshoots operational issues and optimizes processes utilizing sound engineering practices
3. Utilizes modern technical tools and software to perform engineering calculations and modeling
4. Develops, implements, and manages capital and expense projects for business unit while adhering to budgets and project management processes; supervises contract personnel for project development and execution as required
5. Develops project processes, economic evaluations, scoping, costing, and approval documentation
6. Participates in Process Safety Management processes (PHAs, MOCs, etc.) as applicable
7. Partners with company and industry subject matter experts to maintain thorough knowledge and understanding of applicable DOT, OSHA, EPA, and other environmental safety regulations; ensures area of responsibility is compliant with all industry and company standards
MINIMUM QUALIFICATIONS:
Bachelor's degree in engineering from an accredited college or university required
Engineer G&P I: Typically has 0-5 years of relevant experience.
Engineer G&P II: Typically has 4 or more years of relevant experience.
Engineer G&P III: Typically has 7 or more years of relevant experience.
Engineer G&P Sr: Typically has 12 or more years of relevant experience.
#GP #GPOPS
As an energy industry leader, our career opportunities fuel personal and professional growth.
Location:
South Shore, Kentucky
Additional locations:
Job Requisition ID:
00019728
Location Address:
2 MarkWest Dr
Education:
Bachelors (Required)
Employee Group:
Full time
Employee Subgroup:
Regular
Marathon Petroleum Company LP is an Equal Opportunity Employer and gives consideration for employment to qualified applicants without discrimination on the basis of race, color, religion, creed, sex, gender (including pregnancy, childbirth, breastfeeding or related medical conditions), sexual orientation, gender identity, gender expression, reproductive health decision-making, age, mental or physical disability, medical condition or AIDS/HIV status, ancestry, national origin, genetic information, military, veteran status, marital status, citizenship or any other status protected by applicable federal, state, or local laws. If you would like more information about your EEO rights as an applicant, click here.
If you need a reasonable accommodation for any part of the application process at Marathon Petroleum LP, please contact our Human Resources Department at ***************************************. Please specify the reasonable accommodation you are requesting, along with the job posting number in which you may be interested. A Human Resources representative will review your request and contact you to discuss a reasonable accommodation. Marathon Petroleum offers a total rewards program which includes, but is not limited to, access to health, vision, and dental insurance, paid time off, 401k matching program, paid parental leave, and educational reimbursement. Detailed benefit information is available at *****************************. The hired candidate will also be eligible for a discretionary company-sponsored annual bonus program.
Equal Opportunity Employer: Veteran / Disability
We will consider all qualified Applicants for employment, including those with arrest or conviction records, in a manner consistent with the requirements of applicable state and local laws. In reviewing criminal history in connection with a conditional offer of employment, Marathon will consider the key responsibilities of the role.
Data Scientist
Data engineer job in Indianapolis, IN
We are seeking a Junior Data Scientist to join our large Utility client in downtown Indianapolis. This position will be hired as a Full-Time employee. This entry-level position is perfect for individuals eager to tackle real-world energy challenges through data exploration, predictive modeling, and collaborative problem-solving. As part of our team, you'll work closely with seasoned data scientists, analysts, architects, engineers, and governance specialists to generate insights that power smarter decisions and help shape the future of energy.
Key Responsibilities
Partner cross-functionally with data scientists, data architects and engineers, machine learning engineers, data analysts, and data governance experts to deliver integrated data solutions.
Collaborate with business stakeholders and analysts to define clear project requirements.
Collect, clean, and preprocess both structured and unstructured data from utility systems (e.g., meter data, customer data).
Conduct exploratory data analysis to uncover trends, anomalies, and opportunities to enhance grid operations and customer service.
Apply traditional machine learning techniques and generative AI tools to build predictive models that address utility-focused challenges, particularly in the customer domain (e.g., outage restoration, program adoption, revenue assurance).
Present insights to internal stakeholders in a clear, compelling format, including data visualizations that drive predictive decision-making.
Document methodologies, workflows, and results to ensure transparency and reproducibility.
Serve as a champion of data and AI across all levels of the client's US Utilities organization.
Stay informed on emerging industry trends in utility analytics and machine learning.
Requirements
Bachelor's degree in data science, statistics, computer science, engineering, or a related field. Master's degree or Ph.D. is preferred.
1-3 years of experience in a data science or analytics role.
Strong applied analytics and statistics skills, such as distributions, statistical testing, regression, etc.
Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn.
Proficiency in traditional machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVM), convolutional neural networks (CNN), random forest, gradient-boosted trees, etc. (a baseline sketch follows this list).
Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows.
Proficiency in SQL, with experience writing complex queries and working with relational data structures. Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc.
Proficient in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects. Experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred.
Experience with cloud computing platforms (GCP preferred).
Ability to manage multiple priorities in a fast-paced environment.
Interest in learning more about the customer-facing side of the utility industry.
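For a flavor of the baseline modeling work described above, here is a minimal sketch using pandas and scikit-learn. The file name, columns, and prediction target are invented for illustration and are not part of the posting.

```python
# Minimal sketch: a baseline classifier of the kind this role describes.
# All file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical meter/customer extract
df = pd.read_csv("customer_programs.csv")
X = df[["avg_daily_kwh", "outage_count_12m", "tenure_months"]]
y = df["adopted_program"]  # 1 if the customer enrolled, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Report discrimination on the held-out set
probs = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```

In practice a candidate would extend a baseline like this with feature engineering, cross-validation, and the generative AI tooling noted above.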
Compensation: Up to $130,000 annual salary. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
Data Engineer
Data engineer job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four - one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases (see the sketch after this list)
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
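As referenced above, a minimal sketch of the Excel-to-normalized-table step. The workbook, sheet, bucket, and column names are hypothetical, and writing Parquet to S3 assumes pyarrow and s3fs are installed.

```python
# Sketch of one ingestion step: read a messy client workbook, normalize
# column names and types, and land it as Parquet. Names are hypothetical.
import pandas as pd

raw = pd.read_excel("client_financials.xlsx", sheet_name="Q3")

# Normalize headers: strip whitespace, lowercase, snake_case
raw.columns = (
    raw.columns.str.strip().str.lower().str.replace(r"\s+", "_", regex=True)
)

# Coerce types and drop rows with no identifier
raw["invoice_date"] = pd.to_datetime(raw["invoice_date"], errors="coerce")
raw["amount_usd"] = pd.to_numeric(raw["amount_usd"], errors="coerce")
clean = raw.dropna(subset=["invoice_id"])

# Assumes s3fs is available; the bucket is a placeholder
clean.to_parquet("s3://example-staging/invoices/q3.parquet", index=False)
```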
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python + JS
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founder and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Data Scientist
Data engineer job in Chicago, IL
The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S. and has supported over 20,000 healthcare professionals and team members with close to 1,500 health and wellness offices across 48 states in four distinct categories: dental care, urgent care, medical aesthetics, and animal health. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Chapter Aesthetic Studio, and Lovet Pet Health Care. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.
As a reflection of our current needs and planned growth, we are very pleased to offer a new opportunity to join our dedicated team as a Data Scientist.
Join us as a Data Scientist and play a key role in shaping how patients access care across a growing network of brands including Aspen Dental, ClearChoice, WellNow, Chapter Aesthetic Studio, and Lovet. In this role, you won't just analyze data - you'll lead end-to-end initiatives that shape how we optimize revenue across offices, days, and even hours. From designing strategies, deploying solutions and building performance dashboards to partnering with data science on automation and collaborating with teams across Finance, Marketing, Product, Technology and Operations, you'll have a direct hand in driving measurable results.
This role is ideal for someone who excels at turning data into decisions, building repeatable processes, and uncovering insights that drive measurable improvements in revenue performance and patient access. You will lead initiatives across forecasting, scheduling optimization, demand modeling, capacity planning, and revenue strategy, while also shaping how analytics is delivered and scaled across the organization. If you're a builder who loves solving complex problems with data, operational logic, and automation, this opportunity is for you.
Essential Responsibilities:
Revenue Strategy & Optimization
Lead strategy development for optimizing revenue performance at the office, day, and hour level by leveraging forecasting, scheduling, and demand modeling - while balancing access for patients and operational efficiency.
Build analytical frameworks to support pricing, demand forecasting, scheduling, and access optimization.
Identify revenue opportunities through data-driven analysis of booking trends, cancellations, no-shows, and utilization (a toy sketch follows this section).
Monitor and update demand and schedule availability through the analysis of historical and future booking trends, the pricing environment, industry capacity trends, competitive landscape, and other factors.
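A toy sketch of the booking-trend analysis referenced above, assuming a hypothetical appointments table with office_id, start_time, and no_show_flag columns; none of these names come from the posting.

```python
# Toy illustration: no-show rate by office and hour, plus a naive
# trailing-average demand signal. The table and columns are hypothetical.
import pandas as pd

appts = pd.read_parquet("appointments.parquet")  # assumes datetime start_time

# No-show rate by office and hour of day: a basic access/utilization signal
no_show = (
    appts.assign(hour=appts["start_time"].dt.hour)
    .groupby(["office_id", "hour"])["no_show_flag"]
    .mean()
    .rename("no_show_rate")
)

# Naive forecast: mean daily bookings per office over the trailing 28 days
daily = (
    appts.groupby(["office_id", pd.Grouper(key="start_time", freq="D")])
    .size()
    .rename("bookings")
    .reset_index()
)
forecast = (
    daily.sort_values("start_time")
    .groupby("office_id")["bookings"]
    .apply(lambda s: s.tail(28).mean())
)
print(no_show.head(), forecast.head(), sep="\n")
```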
Analytics, Insights & Experimentation
Develop and maintain forecasting, demand models, dashboards, and scenario analyses.
Run experiments and structured tests to evaluate new operational and scheduling strategies.
Create clear, actionable insights that influence senior leaders and cross-functional partners.
Process Building & Automation
Map existing manual workflows and identify opportunities to automate recurring analyses or reporting.
Cross-Functional Leadership
Work closely with Operations, Finance, Product, Marketing, and Clinical teams to align strategies and execution.
Help shape and scale the function by building new playbooks, reports, and best practices.
Act as a subject matter expert in forecasting, demand modeling, and capacity optimization.
Qualifications (Skills-Based):
We welcome candidates with diverse academic and career pathways. You may have gained your skills through industry experience, coursework, certificates, or hands-on practice.
Experience/Education:
5+ years of experience in Revenue Management, Pricing, Operations Research, Supply/Demand Optimization (Airline, Travel, Healthcare, or multi-location service industries preferred).
Bachelor's degree in Business, Finance, Economics, Analytics, or Statistics required; Master's degree a plus.
Experience working alongside data science/engineering teams to automate and scale analytics processes.
Exceptional analytical, problem-solving, and communication skills - with the ability to influence senior stakeholders.
Detail-oriented, self-starter mindset with a passion for driving results.
Strong analytical and quantitative skills, with experience in forecasting, modeling, or optimization.
Strong technical proficiency in SQL and a modern BI platform (e.g., Tableau, Looker).
Familiarity with scripting (e.g., Python or R) or automation tools (e.g., DBT, Airflow) - not required, but helpful.
Additional Job Description:
Base Pay Range: $115,000 - $130,000, plus 10% annual bonus (Actual pay may vary based on experience, performance, and qualifications.)
A generous benefits package that includes paid time off, health, dental, vision, and 401(k) savings plan with match.
If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
Senior Data Engineer
Data engineer job in Indianapolis, IN
Senior Data Engineer - Azure Data Warehouse (5-7+ Years Experience)
Long-term renewing contract supporting Azure-based data warehouse and dashboarding initiatives.
You will work alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.
Key Responsibilities
· Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server (see the sketch after this list)
· Apply Medallion architecture principles and best practices for data lake and warehouse design
· Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
· Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
· Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
· Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
· Support dashboarding initiatives with Power BI and wireframe collaboration
· Ensure auditability, lineage, and performance across SQL Server and Oracle environments
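As a concrete illustration of the Medallion pattern mentioned above, here is a bronze-to-silver promotion step in PySpark on Databricks. The paths, table names, and columns are hypothetical placeholders, not details from the posting.

```python
# Sketch of a bronze -> silver promotion in the Medallion pattern on
# Databricks (PySpark + Delta Lake). Paths and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested events, as landed by an upstream ADF pipeline
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

# Silver: cleansed, deduplicated, and typed for downstream modeling
silver = (
    bronze.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .saveAsTable("lake.silver_events")
)
```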
Required Skills & Experience
· 5-7+ years in data engineering, data warehouse design, and ETL development
· Strong expertise in Azure Data Factory, Databricks, and Python
· Deep understanding of SQL Server, Oracle, PostgreSQL & Cosmos DB and data modeling standards
· Proven experience with Medallion architecture and data lakehouse best practices
· Hands-on with CI/CD, DevOps, and deployment automation
· Agile mindset with ability to manage multiple priorities and deliver on time
· Excellent communication and documentation skills
Bonus Skills
· Experience with GCP or AWS
· Familiarity with Jira, Confluence, and AppDynamics
Data Architect
Data engineer job in Oak Brook, IL
GeoWealth is a Chicago-based fintech firm that offers an award-winning digital advisory platform, including Turnkey Asset Management Platform (“TAMP”) capabilities. We deliver a comprehensive and fully integrated wealth management technology platform to professionals in the financial services industry.
OPPORTUNITY:
We're looking for a Data Architect to join our Engineering Team. In this role, you will oversee the overall data architecture, helping us deliver our best-in-class solutions to our customers. This role will be key in organizing, designing, and leading our team through well-designed data architecture. If you love architecting complex systems, delivering customer-focused software, designing best-in-class systems, and leading data architecture design, this role is for you.
RESPONSIBILITIES:
Own data architecture and oversee data implementation
Set coding/implementation standards
Lead our data warehouse design
Deliver performant, maintainable, and quality software in collaboration with our teams.
Improve our database design to reduce replication and increase performance
Partner with other architects and engineers to produce better designed systems
SKILLS, KNOWLEDGE, AND EXPERIENCE:
5+ years of experience as Data Architect or equivalent role
Bachelor's degree in computer science or equivalent degree
Hands-on experience with Oracle
Designed and implemented data warehouse
Experience with the following is preferred but not required: designing and building monolithic and distributed systems, Postgres, Logi Symphony, PowerBI, Java and JIRA/Confluence
COMPANY CULTURE & PERKS - HIGHLIGHTS:
Investing in Your Growth 🌱
Casual work environment with fun, hard-working, and open-minded coworkers
Competitive salary with opportunity for performance-based annual bonus
Opportunities to up-skill, explore new responsibilities, and network across departments
Defined and undefined career pathways allowing you to grow your own way
Work/Life Balance 🗓️
Flexible PTO and work schedule to ensure our team balances work and life
Hybrid work schedule
Maternity and paternity leave
Taking Care of Your Future ♥️
Medical, Dental, and Vision, Disability insurance
Free access to Spring Health, a comprehensive mental health solution
401(k) with company match and a broad selection of investments
Voluntary insurance: short-term disability, long-term disability, and life insurance
FSA and transit benefits for employees that contribute pre-tax dollars
Other Fun Stuff ⭐
Free on-site gym and parking
Weekly catered lunches in the office, plus monthly happy hours
Stocked kitchen with snacks and drinks
GeoWealth was recognized as “Best Place to Work” by Purpose Jobs in 2025, 2024 and 2022
GeoWealth was recognized as “Best Place to Work” by Built In in 2024, 2023 and 2022
SALARY RANGE:
Starting at $170,000-$220,000 + Benefits + Opportunity for Performance Bonus
This is an estimated range based on the circumstances at the time of posting; however, it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
Senior Data Engineer
Data engineer job in Chicago, IL
This role requires visa-independent candidates.
Note: OPT, CPT, and H1B holders cannot be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue (a minimal job sketch follows this list)
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
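A minimal sketch of the kind of Glue job referenced above, using the standard awsglue job scaffolding. The catalog database, table name, and S3 path are hypothetical placeholders.

```python
# Sketch of a minimal AWS Glue job: read from the Data Catalog, apply a
# transform, and write Parquet to S3. Database, table, and bucket names
# are hypothetical placeholders.
import sys
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_raw", table_name="orders"
)
cleaned = DropNullFields.apply(frame=dyf)

glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()
```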
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Scientist
Data engineer job in Chicago, IL
This role supports a financial services organization by applying advanced data science and machine learning techniques to solve complex business problems using large-scale datasets. The position focuses on end-to-end feature engineering, model development, and writing production-quality code in a fast-paced, collaborative environment. The individual partners closely with product and engineering teams to uncover trends, improve algorithm performance, and drive data-informed decisions.
Key Responsibilities
Independently analyze and aggregate large, complex datasets to identify anomalies that affect model and algorithm performance
Own the full lifecycle of feature engineering, including ideation, development, validation, and selection (see the sketch after this list)
Develop and maintain production-quality code in a fast-paced, agile environment
Solve challenging analytical problems using extremely large (terabyte-scale) datasets
Evaluate and apply a range of machine learning techniques to determine the most effective approach for business use cases
Collaborate closely with product and engineering partners to identify trends, opportunities, and data-driven solutions
Communicate insights, results, and model performance clearly through visualizations and explanations tailored to non-technical stakeholders
Adhere to established standards and practices to ensure the security, integrity, and confidentiality of systems and data
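To make the feature-engineering lifecycle above concrete, a small hypothetical sketch: aggregating raw transactions into per-account features and flagging crude anomalies. The table and column names are invented for illustration.

```python
# Sketch of the feature-engineering step described above: aggregate raw
# transactions into per-account features and flag simple anomalies.
# The input table and column names are hypothetical.
import pandas as pd

tx = pd.read_parquet("transactions.parquet")

features = tx.groupby("account_id").agg(
    txn_count=("amount", "size"),
    avg_amount=("amount", "mean"),
    max_amount=("amount", "max"),
)

# Flag accounts whose peak transaction is far above their own average:
# a crude anomaly signal a model (or an analyst) might start from
features["spike_ratio"] = features["max_amount"] / features["avg_amount"]
features["is_anomalous"] = features["spike_ratio"] > 10

print(features.sort_values("spike_ratio", ascending=False).head())
```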
Minimum Qualifications
Bachelor's degree in Mathematics, Statistics, Computer Science, Operations Research, or a related field
At least 4 years of professional experience in data science, analytics, engineering, or a closely related discipline
Hands-on experience building data science pipelines and workflows using Python, R, or similar programming languages
Strong SQL skills, including query development and performance tuning
Experience working with large-scale, high-volume datasets (terabyte-scale)
Practical experience applying a variety of machine learning methods and understanding the parameters that impact model performance
Familiarity with common machine learning libraries (e.g., scikit-learn, Spark ML, or similar)
Experience with data visualization tools and techniques
Ability to write clean, maintainable, and production-ready code
Strong interest in rapid prototyping, experimentation, and proof-of-concept development
Proven ability to communicate complex analytical findings to non-technical audiences
Ability to meet standard employment screening requirements
Senior Data Engineer
Data engineer job in Indianapolis, IN
Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. The successful candidate will be responsible for supporting a large-scale data modernization initiative and operationalizing the platform moving forward.
RESPONSIBILITIES:
Design, develop, and refine BI-focused data architecture and data platforms
Work with internal teams to gather requirements and translate business needs into technical solutions
Build and maintain data pipelines supporting transformation
Develop technical designs, data models, and roadmaps
Troubleshoot and resolve data quality and processing issues
Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
Mentor and support junior team members
REQUIREMENTS:
5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
5+ years of experience across end-to-end data analysis and development
Experience using Git version control
Advanced SQL skills
Strong experience with AWS cloud
PREFERRED SKILLS:
Experience with Snowflake
Experience with Python or R
Bachelor's degree in an IT-Related field
TERMS:
This is a direct hire opportunity with a salary up to $130K based on experience. They offer medical, dental, and vision benefits along with generous PTO, 401(k) matching, wellness programs, and more.
Senior Data Architect
Data engineer job in Oak Brook, IL
We are seeking a highly skilled and strategic Senior Data Solution Architect to join our IT Enterprise Data Warehouse team. This role is responsible for designing and implementing scalable, secure, and high-performing data solutions that bridge business needs with technical execution. Design solutions for provisioning data to our cloud data platform using ingestion, transformation, and semantic layer techniques. Additionally, this position provides technical thought leadership and guidance to ensure that data platforms and pipelines effectively support ODS, analytics, reporting, and AI initiatives across the organization.
Key Responsibilities:
Architecture & Design:
Design end-to-end data architecture solutions including operational data stores, data warehouses, and real-time data pipelines.
Define standards and best practices for data modeling, integration, and governance.
Evaluate and recommend tools, platforms, and frameworks for data management and analytics.
Collaboration & Leadership:
Partner with business stakeholders, data engineers, data analysts, and other IT teams to translate business requirements into technical solutions.
Lead architecture reviews and provide technical guidance to development teams.
Advocate for data quality, security, and compliance across all data initiatives.
Implementation & Optimization:
Oversee the implementation of data solutions, ensuring scalability, performance, and reliability.
Optimize data workflows and storage strategies for cost and performance efficiency.
Monitor and troubleshoot data systems, ensuring high availability and integrity.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related field.
7+ years of experience in data architecture, data engineering, or related roles.
Strong expertise in cloud platforms (e.g., Azure, AWS, GCP) and modern data stack tools (e.g., Snowflake, Databricks).
Proficiency in SQL, Python, and data modeling techniques (e.g., Data Vault 2.0).
Experience with ETL/ELT tools, APIs, and real-time streaming technologies (e.g., dbt, Coalesce, SSIS, DataStage, Kafka, Spark).
Familiarity with data governance, security, and compliance frameworks.
Preferred Qualifications:
Certifications in cloud architecture or data engineering (e.g., SnowPro Advanced: Architect).
Strong communication and stakeholder management skills.
Why Join Us?
Work on cutting-edge data platforms and technologies.
Collaborate with cross-functional teams to drive data-driven decision-making.
Be part of a culture that values innovation, continuous learning, and impact.
** This is a full-time, W2 position with Hub Group - We are NOT able to provide sponsorship at this time **
Salary: $135,000 - $175,000/year base salary
+ bonus eligibility
This is an estimated range based on the circumstances at the time of posting; however, it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
Benefits
We offer a comprehensive benefits plan including:
Medical
Dental
Vision
Flexible Spending Account (FSA)
Employee Assistance Program (EAP)
Life & AD&D Insurance
Disability
Paid Time Off
Paid Holidays
BEWARE OF FRAUD!
Hub Group has become aware of online recruiting related scams in which individuals who are not affiliated with or authorized by Hub Group are using Hub Group's name in fraudulent emails, job postings, or social media messages. In light of these scams, please bear the following in mind:
Hub Group will never solicit money or credit card information in connection with a Hub Group job application.
Hub Group does not communicate with candidates via online chatrooms such as Signal or Discord, or using email accounts such as Gmail or Hotmail.
Hub Group job postings are posted on our career site: ********************************
About Us
Hub Group is the premier, customer-centric supply chain company offering comprehensive transportation and logistics management solutions. Keeping our customers' needs in focus, Hub Group designs, continually optimizes and applies industry-leading technology to our customers' supply chains for better service, greater efficiency and total visibility. As an award-winning, publicly traded company (NASDAQ: HUBG) with $4 billion in revenue, our 6,000 employees and drivers across the globe are always in pursuit of "The Way Ahead" - a commitment to service, integrity and innovation. We believe the way you do something is just as important as what you do. For more information, visit ****************
Data Scientist
Data engineer job in Chicago, IL
Minimum Qualifications:
● 5+ years of experience leading data science projects that have a direct impact on a company's objectives, or a PhD in a quantitative field such as Statistics, Data Science, or Computer Science with 3+ years of experience.
● 5+ years of experience utilizing data mining techniques and ML models to assist business decision-making.
● Deep expertise in statistical methods and machine learning concepts, with the ability to mentor team members on methodologies, model tuning, and evaluation techniques.
● 2+ years of hands-on experience with deep learning frameworks, LLMs, GenAI tools, and NLP techniques.
● 5+ years of experience using Python to process large, diverse datasets, and to develop and deploy predictive models in cloud-based environments and other computing platforms.
● 5+ years of experience in SQL and cloud-hosted data platforms (Google Cloud Platform, AWS, etc.).
● Demonstrated ability to assist business decision-making through data mining and machine learning.
● Strong communication skills to collaborate effectively with business stakeholders; must be able to interact cross-functionally and drive both business and technical discussions.
● Ability to translate complex business problems into actionable project plans and solve them.
Data engineer job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability (see the stream-and-task sketch after this list).
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
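As referenced in the first responsibility above, one common Snowflake incremental-load pattern combines streams and tasks. The sketch below issues the DDL through snowflake-connector-python; all object names, the schedule, and the merge logic are hypothetical, and connection parameters are elided.

```python
# Sketch of an incremental-load pattern with Snowflake streams and tasks,
# issued through the Python connector. All object names are hypothetical,
# and connection parameters are elided.
import snowflake.connector

conn = snowflake.connector.connect(
    account="...", user="...", password="...",
    warehouse="ETL_WH", database="RESEARCH", schema="STAGING",
)
cur = conn.cursor()

# Capture changes to the raw pricing table
cur.execute("CREATE STREAM IF NOT EXISTS prices_stream ON TABLE raw_prices")

# A task that loads new rows into the curated table every 15 minutes,
# but only when the stream actually has data
cur.execute("""
    CREATE TASK IF NOT EXISTS load_prices
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('PRICES_STREAM')
    AS
      INSERT INTO curated_prices
      SELECT security_id, price, asof_ts
      FROM prices_stream
""")
cur.execute("ALTER TASK load_prices RESUME")
conn.close()
```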
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
Sr. Data Engineer - PERM - MUST BE LOCAL
Data engineer job in Naperville, IL
Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. Candidates must be local to Illinois because a future hybrid onsite schedule in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities and profit-sharing bonus.
This position is focused on building modern data pipelines, integrations and back-end data solutions. Selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts and other engineers to design and deliver data solutions that power business insights and AI products.
Responsibilities:
Design and develop scalable data pipelines for ingestion, transformation and integration using AWS services.
Pull data from PostgreSQL and SQL Server to migrate to AWS (see the sketch after this list).
Create and modify jobs in AWS and modify logic in SQL Server.
Create SQL queries, stored procedures and functions in PostgreSQL and Redshift.
Provide input on data modeling and schema design as needed.
Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
Support inbound/outbound data flows, including APIs, S3 replication and secured data.
Assist with data visualization/reporting as needed.
Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.
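A sketch of the PostgreSQL-to-AWS migration step referenced above: extract with psycopg2, stage to S3 with boto3, and load into Redshift with COPY. Hosts, buckets, tables, and the IAM role are hypothetical placeholders.

```python
# Sketch of one migration step: extract a PostgreSQL table, stage it to S3
# as CSV, then COPY it into Redshift. All connection details, bucket names,
# and the IAM role below are hypothetical placeholders.
import io
import boto3
import psycopg2

# 1) Extract from the source PostgreSQL database
src = psycopg2.connect("dbname=ops user=etl host=pg.internal")
buf = io.StringIO()
with src.cursor() as cur:
    cur.copy_expert("COPY shipments TO STDOUT WITH CSV HEADER", buf)

# 2) Stage to S3
boto3.client("s3").put_object(
    Bucket="example-stage",
    Key="shipments/shipments.csv",
    Body=buf.getvalue().encode(),
)

# 3) Load into Redshift via COPY (runs inside Redshift, reading from S3)
rs = psycopg2.connect("dbname=dw user=etl host=redshift.internal port=5439")
rs.autocommit = True
with rs.cursor() as cur:
    cur.execute("""
        COPY analytics.shipments
        FROM 's3://example-stage/shipments/shipments.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        CSV IGNOREHEADER 1
    """)
```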
Qualifications:
5+ years of data engineering experience.
Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
Strong experience with SQL, Python and PySpark.
A background in supply chain, logistics or distribution would be a plus.
Experience with Power BI is a plus.
Data Architect - Pharma
Data engineer job in Chicago, IL
MathCo
Role - Data/AI Engineering Manager
Onsite - Chicago - 4 days in office (Mandatory)
Industry - Pharma (Mandatory)
As platform architect/owner, you will:
Lead the end-to-end architecture, lifecycle, and governance of the AI/Analytics platform, defining standards, reusable components, and integration patterns.
Partner with AI/Data architects to enable scalable model deployment and enhance agentic orchestration.
Translate business needs into platform features, manage onboarding, documentation, and cross-functional collaboration for platform adoption.
Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability.
Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations
Stay current on key AI platform developments and assess their impact on architecture and client strategy
Coach others, recognize their strengths, and encourage them to take ownership of their personal development
Skills Required
Experience in designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP)
Proven ability to carry out complex Proof of Concept (POC) work, pilot projects, and limited production rollouts for AI use cases, focusing on developing new or improved techniques and procedures.
Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices
Years of Experience
Minimum of 8 years of relevant experience, preferably with a consulting background and experience with Pharma clients
Cloud Data Architect
Data engineer job in Chicago, IL
Must Have Technical/Functional Skills
· The Cloud Data Architect in Data & Analytics will play a key role in driving solution architecture design, evaluation, and selection, buy vs. build decisions, and early-phase project estimates which contribute to the business case.
· Partner with Key Roles (Business and Technology) to shape the Cloud Data & Analytics strategy, provide hands on guidance to convert the strategy to architecture/Solutions that are aligned to architecture standards and principles, leverage common solutions and services, and meet the financial targets (costs and benefits) and actionable plan.
· Furthermore, shape large Data and Analytics opportunities for TCS. Prior experience in Banking, Financial Services and Insurance combined with strong articulation skills, strategic-thinking exposure, leadership traits, strong business acumen, customer focus, and social skills is essential.
· Must-have skills: Cloud Data Architect, Snowflake, demonstrated hands-on experience with Databricks (Delta Lake, Unity Catalog), Cortex AI, Fivetran, and dbt.
· On the data front, hands-on in Data Quality, Data Governance, MDM, and Data Modeling (Data Products, Data Mesh, Data Vault)
Roles & Responsibilities
· Strong experience in Cloud Data Architecture and Roadmap and effectively engaging various client Stakeholders on the architecture proposition.
· Strong experience in Solution Architecture (Data & Analytics)
· Must have prior experience building large scale enterprise data architectures using commercial and/or open source Data Analytics technologies. Must demonstrate knowledge of data architecture patterns
· Strong cloud architecture expertise (Azure, AWS, GCP) across large projects, with a minimum of 2-3 years of experience delivering cloud projects
· Strong solution implementation experience (platform engineering, developer experience)
· Data modelling and architecting skills including strong foundation in data warehousing concepts, data normalization, dimensional data modelling and a variety of data models including data vault
· Demonstrated experience across different data platforms such as Databricks (Delta Lake, Unity Catalog), Snowflake, Cortex AI, Fivetran, dbt, and data virtualization tools
· Demonstrated experience in Big Data, data warehouse, data analytics projects, and/or any Information Management related projects
· Lead transformation efforts for large opportunities, i.e., build strategy and a multi-year transformation roadmap for our customers
· Strong ability and experience in engaging with both technical and non-technical stakeholders
· Strong teaming, communications and analytical skills
· Dedicated, resourceful and able to work under pressure
· As part of presales support, should be able to shape large multi-million-value deals, both proactive and reactive, by driving TCS positioning with customers and building business for TCS
· Develop Point of views, solution approaches & pitches for new opportunities
· Define Data and Technology Architecture and patterns on-prem/hybrid/Cloud using native and other partners/tools with recommendations that suits customer's context and objectives
· On the data front, strong in Data Governance, MDM, and Data Modeling
· Drive and Participate in Architecture reviews both internal and client teams
· Oversee data management, warehousing, integration, data movement, and delivery across assigned data initiatives.
· Examine data, processes, and technologies to determine the current state and critical problems of the customer's data and analytics journey
· Research and make recommendations for training, equipment, and technology to improve data use
· Build connections with the client's senior leadership (CXOs/D&A Heads) and work with them to develop a clear Data and Analytics Strategy and Architecture that supports their business goals and objectives.
Data Architect
Data engineer job in Chicago, IL
Job Title: Architect / Senior Data Engineer
We are seeking a highly skilled Architect / Senior Data Engineer to design, build, and optimize our modern data ecosystem. The ideal candidate will have deep experience with AWS cloud services, Snowflake, and dbt, along with a strong understanding of scalable data architecture, ETL/ELT development, and data modeling best practices.
Key Responsibilities
Architect, design, and implement scalable, reliable, and secure data solutions using AWS, Snowflake, and dbt.
Develop end-to-end data pipelines (batch and streaming) to support analytics, machine learning, and business intelligence needs.
Lead the modernization and migration of legacy data systems to cloud-native architectures.
Define and enforce data engineering best practices including coding standards, CI/CD, testing, and monitoring.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
Optimize Snowflake performance through query tuning, warehouse sizing, and cost management.
Establish and maintain data governance, security, and compliance standards across the data platform.
Mentor and guide junior data engineers, providing technical leadership and direction.
Required Skills & Qualifications
8+ years of experience in Data Engineering, with at least 3+ years in a cloud-native data environment.
Hands-on expertise in AWS services such as S3, Glue, Lambda, Step Functions, Redshift, and IAM.
Strong experience with Snowflake - data modeling, warehouse design, performance optimization, and cost governance.
Proven experience with dbt (data build tool) - model development, documentation, and deployment automation.
Proficient in SQL, Python, and ETL/ELT pipeline development.
Experience with CI/CD pipelines, version control (Git), and workflow orchestration tools (Airflow, Dagster, Prefect, etc.).
Familiarity with data governance and security best practices, including role-based access control and data masking.
Strong understanding of data modeling techniques (Kimball, Data Vault, etc.) and data architecture principles.
Preferred Qualifications
AWS Certification (e.g., AWS Certified Data Analytics - Specialty, Solutions Architect).
Strong communication and collaboration skills, with a track record of working in agile environments.
Data Architect
Data engineer job in Springfield, IL
Krasan Consulting Services is a Chicago boutique management consulting and technology solutions integrator. As a certified BEP WBE with the State of Illinois and a certified WBE, MBE, and DBE recognized by the City of Chicago, Krasan provides complex technology solutions and services for clients in the Public and Commercial sectors. With decades of experience, Krasan specializes in end-to-end technology solutions, integration, and implementation services and is proud to enable the success of our customers.
Role Title: Data Architect
Location - Remote - Springfield, IL
Job Description:
Determine where the data resides.
Create an architectural design for migration.
Plan the migration path from start to finish.
Collaborate with the IEPA during system design.
Confirm data security and integrity during migration.
Design and integrate the business and technical components of the EIMS, including future enhancements.
Confirm the solution meets all IEPA requirements and integrates seamlessly with other environmental information systems.
Provide technical leadership and guidance to development teams.
Confirm system compliance with relevant regulations
Minimum seven (7) years of experience in solution architecture and system integration.
Knowledge of environmental information or permitting systems and interoperability.
Experience with technical leadership and project management.
Knowledge of CROMERR and FedRAMP solution design.
Experience serving as Offeror's business architect (or similar role) for at least two (2) full life cycle implementations of the offered solution end-to-end.
Senior Back End Developer - Distributed Systems (C# or Golang)
Data engineer job in Chicago, IL
Our client, a fast-growing organization developing secure, scalable technologies for next-generation AI applications, is seeking a Backend Engineer to join their core platform team.
In this role, you'll help build and refine the foundational services that power authentication, observability, data flows, and high-availability systems across a distributed ecosystem. This is an opportunity to work on complex backend challenges while shaping the infrastructure that supports mission-critical applications.
What You'll Do
Develop, enhance, and support backend services that form the foundation of the platform.
Build and maintain core authentication and authorization capabilities.
Apply principles of Domain-Driven Design to guide how services and components evolve over time.
Architect, extend, and support event-sourced systems to ensure durable, consistent operations at scale.
Participate in API design and integration efforts across internal and external stakeholders.
Implement and support messaging frameworks (e.g., NATS) to enable reliable service-to-service communication (see the sketch after this list).
Maintain and improve observability tooling, including metrics, tracing, and logging, to ensure healthy system performance.
Work closely with infrastructure, DevOps, and engineering teams to ensure robust, secure, and maintainable operations.
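As referenced above, a compact illustration of NATS-style publish/subscribe messaging. The posting's stack is C# or Go; this sketch uses the Python client (nats-py) purely for brevity, and the subject name and payload are hypothetical.

```python
# Compact illustration of NATS service-to-service messaging using the
# Python client (nats-py). Treat this as a language-agnostic sketch;
# the subject and payload are hypothetical.
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def on_order(msg):
        # A downstream service reacting to the event
        print(f"received {msg.subject}: {msg.data.decode()}")

    await nc.subscribe("orders.created", cb=on_order)
    await nc.publish("orders.created", b'{"order_id": 42}')
    await nc.flush()
    await asyncio.sleep(0.1)  # let the callback fire
    await nc.drain()

asyncio.run(main())
```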
What You Bring
3-6+ years of experience as a backend engineer.
Strong knowledge of distributed systems and microservices.
Proficiency in at least one modern backend programming language (C#, Go, Rust, etc.).
Practical experience with IAM concepts and authentication/authorization frameworks.
Exposure to event-sourcing patterns, DDD, and common messaging systems (e.g., NATS, Kafka, SNS, RabbitMQ).
Familiarity with Redis or similar in-memory caching technologies.
Experience working with observability tools such as Prometheus, Jaeger, ELK, or Application Insights.
Understanding of cloud-native environments and deployment workflows (AWS, Azure, or GCP).
Why This Role Is Compelling
You'll contribute directly to a foundational platform used across an entire organization-impacting performance, reliability, and security at every layer. If you enjoy solving distributed-system challenges and working on complex, high-scale backend services, this is a strong match.
#BackendEngineering #DistributedSystems #PlatformEngineering #CloudNative #SoftwareJobs
Backend Software Engineer
Data engineer job in Indianapolis, IN
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
We enable #HumanFirstDIGITAL
Backend Software Engineer
Who We Are
We are passionate about transforming patients' lives, and we are bold in both decision and action - we believe that good business means a better world. That is why we come to work every single day. We commit ourselves to scientific rigor, unassailable ethics, and access to medical innovations for all. We do this today to build a better tomorrow.
Role Purpose (Summary of position)
Developing software is great, but developing software with a purpose is even better! As a Principal Backend Software Engineer, you'll work on a product that helps people with the most precious thing they have - their health. In collaborative teams of engineers, designers, product owners, and QA experts, you'll experience best-in-class software development practices daily and contribute to software that meets the highest expectations - we do not put our users' lives at risk!
Here's what we're looking for:
We are looking for an experienced, motivated Principal Backend Software Engineer who will work closely with their backend colleagues, and who ideally has built digital products and platforms. As a code-magician, you will support our efforts to improve the digital health ecosystem. You will contribute with your knowledge of Java, Spring Boot, relational databases & REST within our agile and cross-functional teams. As a flexible and open-minded person with a passion for clean code, you will be a perfect addition to our team. We are committed to quality, dedicating time to code reviews, test coverage, quality days and CI/CD principles. If this resonates with you, we would love to hear from you!
You will be part of the Platform Engineering chapter working on our navify platform.
Essentials skills for your mission:
You have the required years of experience as specified by your educational background:
At least 10 years of experience working as a software engineer with a Bachelor's degree, including 7-8 years as a backend engineer.
At least 6 years of experience working as a software engineer with a Master's degree, including 5 years as a backend engineer.
At least 3 years of experience working as a software engineer for candidates with a PhD.
Equivalent work experience, which includes at least 8 years as a software engineer and 5 years as a backend engineer.
You are familiar with the following backend technologies: Java 21+ and frameworks like Spring Boot 3+
SQL and relational databases (e.g., PostgreSQL) are second nature to you
You have experience with the OpenID Connect standard and Keycloak or another open-source product that enables single sign-on with identity and access management
You enjoy developing clean, stable, testable, and performant backend code, serving our beautiful applications
You are passionate about solid technical design, clean code, and future-proof architectures
You have experience with Amazon Web Services (AWS) or other cloud providers
You enjoy guiding and sharing your knowledge with other engineers
Great written and verbal communication in English
Bonus skills:
Experienced in automated testing with Selenium or Selenide
Knowledge of Infrastructure as Code, Terraform, and GitHub Actions
Understanding of medical, security, and privacy regulations
Knowledge of the diabetes industry or other comparable health industries
Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read our Job Applicant Privacy Policy here: Job Applicant Privacy Policy (apexon.com)