Senior Data Scientist (Senior Consultant)
Senior data scientist job in Chicago, IL
Job Family:
Data Science Consulting
Travel Required:
Up to 10%
Clearance Required:
Ability to Obtain Public Trust
About our AI and Data Capability Team
Our consultants on the AI and Data Analytics Capability team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science services, from data architecture and storage, to data engineering and querying, to data visualization and dashboarding, to predictive analytics, machine learning, artificial intelligence, and intelligent automation. Our services enable our clients to define their information strategy, enable mission-critical insights and data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.
What You Will Do:
Data Collection & Management: Identify, gather, and manage data from primary and secondary sources, ensuring its accuracy and integrity.
Data Cleaning & Preprocessing: Clean raw data by identifying and addressing inconsistencies, missing values, and errors to prepare it for analysis.
Data Analysis & Interpretation: Apply statistical techniques and analytical methods to explore datasets, discover trends, find patterns, and derive insights.
Data Visualization & Reporting: Develop reports, dashboards, and visualizations using tools like Tableau or Power BI to present complex findings clearly to stakeholders.
Collaboration & Communication: Work with cross-functional teams, understand business requirements, and effectively communicate insights to support data-driven decision-making.
Problem Solving: Address specific business challenges by using data to identify underperforming processes, pinpoint areas for growth, and determine optimal strategies.
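The data cleaning and preprocessing duties above can be sketched in miniature. The records, field names, and mean-imputation rule below are hypothetical, standard-library-only illustrations, not a prescribed workflow:

```python
from statistics import mean

# Hypothetical raw records with casing inconsistencies and a missing value
raw = [
    {"id": 1, "revenue": "1200.50", "region": "midwest"},
    {"id": 2, "revenue": None,      "region": "Midwest"},
    {"id": 3, "revenue": "980.00",  "region": " MIDWEST "},
]

def clean(records):
    """Normalize region casing/whitespace; impute missing revenue with the mean."""
    known = [float(r["revenue"]) for r in records if r["revenue"] is not None]
    fill = mean(known)
    return [{
        "id": r["id"],
        "revenue": float(r["revenue"]) if r["revenue"] is not None else fill,
        "region": r["region"].strip().lower(),
    } for r in records]

cleaned = clean(raw)
```

In practice the same steps would run in pandas or SQL, but the logic (detect, standardize, impute) is the same.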
What You Will Need:
US Citizenship is required
Bachelor's degree is required
Minimum of three (3) years of experience using Power BI, Tableau, and other visualization tools to develop intuitive, user-friendly dashboards and visualizations.
Skilled in SQL, R, and other languages to assist in database querying and statistical programming.
Strong foundational knowledge and experience in statistics, probability, and experimental design.
Familiarity with cloud platforms (e.g., Amazon Web Services, Azure, or Google Cloud) and containerization (e.g., Docker).
Experience applying data governance concepts and techniques to assure greater data quality and reliability.
The curiosity and creativity to uncover hidden patterns and opportunities.
Strong communication skills to bridge technical and business worlds.
What Would Be Nice To Have:
Hands-on experience with Python, SQL, and modern ML frameworks.
Experience in data and AI system development, with a proven ability to design scalable architectures and implement reliable models.
Expertise in Python or Java for data processing.
Demonstrated work experience within the public sector.
Ability to support business development including RFP/RFQ/RFI responses involving data science / analytics.
The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Sr. Data Scientist
Senior data scientist job in Chicago, IL
The Senior Data Scientist, Clinical Data Science (HOS & HRA) plays a key role in advancing analytics that improve Medicare Advantage member outcomes and CMS Star Ratings performance. This position supports the design, implementation, and automation of analytic solutions for the Health Outcomes Survey (HOS) and Health Risk Assessment (HRA) programs, two core domains in Aetna's Medicare clinical strategy.
The ideal candidate combines strong technical depth in data science and statistical modeling with the ability to translate complex findings into actionable insights for non-technical audiences. This individual will automate recurring data science workflows, conduct robust impact and descriptive analyses, and collaborate closely with clinical, quality, and operations teams to identify emerging opportunities that improve member experience and population health outcomes.
Clinical Data Science & Analytics
Lead the development of analytic models and descriptive frameworks supporting HOS and HRA performance improvement across Medicare Advantage.
Conduct impact analyses, trend identification, and segmentation to explain drivers of performance and inform strategy.
Automate recurring analytics and reporting pipelines to increase reliability, efficiency, and reproducibility of insights.
Apply advanced statistical, predictive, and causal inference methods to evaluate intervention effectiveness and identify member-level opportunities.
Develop and refine tools for data visualization and storytelling to communicate results clearly to non-technical stakeholders.
Partner with business leaders to translate analytic results into actionable recommendations for program design, member outreach, and care interventions.
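As a simplified illustration of the impact analyses listed above, a naive pre/post comparison of intervention scores; the numbers and the pooled effect-size formula are illustrative assumptions, not Aetna's actual methodology:

```python
from statistics import mean, stdev

# Hypothetical survey-style scores before and after a member outreach intervention
pre  = [72, 68, 75, 70, 71, 69]
post = [74, 73, 78, 72, 75, 74]

# Naive average lift (a real impact analysis would adjust for confounders)
effect = mean(post) - mean(pre)

# Standardized effect size (Cohen's d with equal group sizes, illustrative only)
pooled_sd = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
cohens_d = effect / pooled_sd
```

Causal-inference methods (difference-in-differences, matching, etc.) would refine this estimate; the sketch only shows the shape of the calculation.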
Collaboration & Consultation
Serve as a bridge between technical and non-technical teams, ensuring analytic outputs are interpretable and actionable.
Collaborate cross-functionally with Clinical Operations, Member Experience, and Quality teams to align analytics with enterprise goals.
Support enterprise data modernization and automation initiatives by identifying repeatable use cases for scalable analytics and workflow improvement.
Mentor junior data scientists and analysts on best practices for data integrity, modeling, and automation.
Technical & Operational Excellence
Design and maintain automated analytic processes leveraging Python, SQL, and modern cloud environments (e.g., GCP).
Ensure accuracy, consistency, and explainability of models and metrics through disciplined version control and validation.
Contribute to the team's continuous improvement culture by recommending new methods, tools, or data sources that enhance analytic precision and speed.
Required Skills & Experience
5+ years of hands-on experience in data science, advanced analytics, or statistical modeling in healthcare, life sciences, or managed care.
Strong proficiency in Python, SQL, and data science libraries (e.g., pandas, scikit-learn, statsmodels).
Demonstrated ability to automate data workflows and standardize recurring analyses or reporting.
Experience applying statistical and descriptive analytics to clinical or quality measurement problems (e.g., HOS, HRA, CAHPS, or HEDIS).
Proven success communicating complex findings to non-technical business partners and influencing decision-making.
Ability to work effectively in a fast-paced, cross-functional environment.
Nice to Have Skills & Experience
Master's or PhD in Data Science, Statistics, Epidemiology, Public Health, or a related quantitative field.
Familiarity with Medicare Advantage, CMS Star Ratings methodology, and clinical quality measures.
Experience working within modern cloud environments (e.g., Google Cloud Platform, Databricks) and with workflow orchestration tools (Airflow, dbt).
Background in impact measurement, causal inference, or time-series analysis in healthcare contexts.
Senior Data Scientist
Senior data scientist job in Chicago, IL
Role: Senior Data Scientist
· We are seeking a hands-on Senior Data Scientist to join our Insurance Analytics & AI Vertical. The ideal candidate will bring a blend of insurance domain expertise (preferably P&C), a consulting mindset, and strong data science skills. This is a mid-senior-level role focused on delivering value through analytics, stakeholder engagement, and logical problem solving, rather than people management.
· The role involves working closely with EXL teams and clients on reporting, data engineering, transformation, and advanced analytics projects. While strong technical skills are important, we are looking for someone who can engage directly with clients, translate business needs into analytical solutions, and drive measurable impact.
Key Responsibilities
· Collaborate with EXL and client stakeholders to design and deliver data-driven solutions across reporting, analytics, and transformation initiatives.
· Apply traditional statistical methods, machine learning, deep learning, and NLP techniques to solve business problems.
· Support insurance-focused analytics use cases (with preference for P&C lines of business).
· Work in a consulting setup: conduct requirement gathering, structure problem statements, and communicate insights effectively to senior stakeholders.
· Ensure data quality, governance, and compliance with Data Privacy and Protection Guidelines.
· Independently research, analyze, and present findings, ensuring client-ready deliverables.
· Contribute to continuous improvement initiatives and support business development activities where required.
Key Skillsets & Experience
· 7-12 years of experience in analytics, reporting, dashboarding, ETL, Python/R, and associated data management.
· Proficiency in machine learning, deep learning algorithms (e.g., neural networks), and text analytics techniques (NLTK, Gensim, LDA, word embeddings like Word2Vec, FastText, GloVe).
· Strong consulting background with structured problem-solving and stakeholder management skills.
· Excellent communication and presentation skills with the ability to influence and engage senior business leaders.
· Hands-on role with ability to independently manage client deliverables and operate in cross-cultural, global environments.
Data Management Skills
· Strong familiarity with advanced analytics tools (Python, R), BI tools (Tableau, Power BI), and related software applications.
· Good knowledge of SQL, Informatica, Hadoop/Spark, ETL tools.
· Ability to translate business/functional requirements into technical specifications.
· Exposure to cloud data management and AWS services (preferred).
Candidate Profile
· Bachelor's/Master's degree in Economics, Mathematics, Computer Science/Engineering, Operations Research, or related analytical fields.
· Prior insurance industry experience (P&C preferred) strongly desired.
· Superior analytical, logical, and problem-solving skills.
· Outstanding written and verbal communication abilities with a consultative orientation.
· Flexible to work in a fast-paced, evolving environment with occasional visits to the client's Chicago office.
Data Scientist
Senior data scientist job in Peoria, IL
Typical task breakdown:
- Assist with monthly reporting on team metrics, cost savings and tariff analysis
- Lead development of data analytics to assist category teams in making strategic sourcing decisions
Interaction with team:
- Will work as a support to multiple category teams
Team Structure
- Report to the MC&H Strategy Manager and collaborate with Category Managers and buyers
Work environment:
Office environment
Education & Experience Required:
- Years of experience: 3-5
- Degree requirement: Bachelor's degree
- Do you accept internships as job experience: Yes
Top 3 Skills
· Communicates effectively to develop standard procedures
· Applies problem-solving techniques across diverse procurement scenarios
· Analyzes procurement data to generate actionable insights
Additional Technical Skills
(Required)
- Proficient in Power BI, PROcure, and tools like CICT, Lognet, MRC, PO Inquiry, AoS
- Expertise in Snowflake and data mining
(Desired)
- Prior experience in Procurement
- Familiarity with monthly reporting processes, including ABP (Annual Business Plan) and RBM (Rolling Business Management)
- Demonstrated expertise in cost savings initiatives
- Machine Learning and AI
Soft Skills
(Required)
- Strong written and verbal communication skills
- Balances speed with accuracy in task execution
- Defines problems and evaluates their impact
(Desired)
- Emotional Intelligence
- Leadership and team management capabilities
Senior Data Architect
Senior data scientist job in Oak Brook, IL
We are seeking a highly skilled and strategic Senior Data Solution Architect to join our IT Enterprise Data Warehouse team. This role is responsible for designing and implementing scalable, secure, and high-performing data solutions that bridge business needs with technical execution. The role designs solutions for provisioning data to our cloud data platform using ingestion, transformation, and semantic-layer techniques. Additionally, this position provides technical thought leadership and guidance to ensure that data platforms and pipelines effectively support ODS, analytics, reporting, and AI initiatives across the organization.
Key Responsibilities:
Architecture & Design:
Design end-to-end data architecture solutions including operational data stores, data warehouses, and real-time data pipelines.
Define standards and best practices for data modeling, integration, and governance.
Evaluate and recommend tools, platforms, and frameworks for data management and analytics.
Collaboration & Leadership:
Partner with business stakeholders, data engineers, data analysts, and other IT teams to translate business requirements into technical solutions.
Lead architecture reviews and provide technical guidance to development teams.
Advocate for data quality, security, and compliance across all data initiatives.
Implementation & Optimization
Oversee the implementation of data solutions, ensuring scalability, performance, and reliability.
Optimize data workflows and storage strategies for cost and performance efficiency.
Monitor and troubleshoot data systems, ensuring high availability and integrity.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related field.
7+ years of experience in data architecture, data engineering, or related roles.
Strong expertise in cloud platforms (e.g., Azure, AWS, GCP) and modern data stack tools (e.g., Snowflake, Databricks).
Proficiency in SQL, Python, and data modeling techniques (e.g., Data Vault 2.0).
Experience with ETL/ELT tools, APIs, and real-time streaming technologies (e.g., dbt, Coalesce, SSIS, Datastage, Kafka, Spark).
Familiarity with data governance, security, and compliance frameworks
Preferred Qualifications:
Certifications in cloud architecture or data engineering (e.g., SnowPro Advanced: Architect).
Strong communication and stakeholder management skills.
Why Join Us?
Work on cutting-edge data platforms and technologies.
Collaborate with cross-functional teams to drive data-driven decision-making.
Be part of a culture that values innovation, continuous learning, and impact.
** This is a full-time, W2 position with Hub Group - We are NOT able to provide sponsorship at this time **
Salary: $135,000 - $175,000/year base salary
+ bonus eligibility
This is an estimated range based on circumstances at the time of posting; it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
Benefits
We offer a comprehensive benefits plan including:
Medical
Dental
Vision
Flexible Spending Account (FSA)
Employee Assistance Program (EAP)
Life & AD&D Insurance
Disability
Paid Time Off
Paid Holidays
BEWARE OF FRAUD!
Hub Group has become aware of online recruiting-related scams in which individuals who are not affiliated with or authorized by Hub Group are using Hub Group's name in fraudulent emails, job postings, or social media messages. In light of these scams, please bear the following in mind:
Hub Group will never solicit money or credit card information in connection with a Hub Group job application.
Hub Group does not communicate with candidates via online chatrooms such as Signal or Discord using email accounts such as Gmail or Hotmail.
Hub Group job postings are posted on our career site: ********************************
About Us
Hub Group is the premier, customer-centric supply chain company offering comprehensive transportation and logistics management solutions. Keeping our customers' needs in focus, Hub Group designs, continually optimizes and applies industry-leading technology to our customers' supply chains for better service, greater efficiency and total visibility. As an award-winning, publicly traded company (NASDAQ: HUBG) with $4 billion in revenue, our 6,000 employees and drivers across the globe are always in pursuit of "The Way Ahead" - a commitment to service, integrity and innovation. We believe the way you do something is just as important as what you do. For more information, visit ****************
Big Data Consultant
Senior data scientist job in Chicago, IL
Job Title: Big Data Engineer
Employment Type: W2 Contract
Detailed Job Description:
We are seeking a skilled Big Data Platform Engineer with 7+ years of experience and a strong background in both development and administration of big data ecosystems. The ideal candidate will be responsible for designing, building, maintaining, and optimizing scalable data platforms that support advanced analytics, machine learning, and real-time data processing.
Key Responsibilities:
Platform Engineering & Administration:
• Install, configure, and manage big data tools such as Hadoop, Spark, Kafka, Hive, HBase, and others.
• Monitor cluster performance, troubleshoot issues, and ensure high availability and reliability.
• Implement security policies, access controls, and data governance practices.
• Manage upgrades, patches, and capacity planning for big data infrastructure.
Development & Data Engineering:
• Design and develop scalable data pipelines using tools like Apache Spark, Flink, NiFi, or Airflow.
• Build ETL/ELT workflows to ingest, transform, and load data from various sources.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with data scientists and analysts to support model deployment and data exploration.
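The ETL/ELT workflow responsibilities above reduce to an extract-transform-load loop. A toy sketch with in-memory stand-ins follows; the CSV source and field names are hypothetical, and production pipelines would use Spark, NiFi, or Airflow rather than plain functions:

```python
import csv
import io

# Extract: hypothetical CSV source (stand-in for a file, API, or Kafka topic)
SOURCE = "order_id,amount\n1,19.99\n2,\n3,5.00\n"

def extract(text):
    """Read raw rows from the source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing amounts and cast columns to proper types."""
    return [{"order_id": int(r["order_id"]), "amount": float(r["amount"])}
            for r in rows if r["amount"]]

def load(rows, sink):
    """Append the cleaned rows to the target (stand-in for a warehouse table)."""
    sink.extend(rows)

warehouse = []
load(transform(extract(SOURCE)), warehouse)
```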
Data Engineer
Senior data scientist job in Chicago, IL
Data Engineer - Build the Data Engine Behind AI Execution - Starting Salary $150,000
You'll be part architect, part systems designer, part execution partner - someone who thrives at the intersection of engineering precision, scalability, and impact.
As the builder behind the AI data platform, you'll turn raw, fragmented data into powerful, reliable systems that feed intelligent products. You'll shape how data flows, how it scales, and how it powers decision-making across AI, analytics, and product teams.
Your work won't be behind the scenes - it will be the foundation of everything we build.
You'll be joining a company built for builders. Our model combines AI consulting, venture building, and company creation into one execution flywheel. Here, you won't just build data pipelines - you'll build the platforms that power real products and real companies.
You know that feeling when a data system scales cleanly under real-world pressure, when latency drops below target, when complexity turns into clarity - and everything just flows? That's exactly what you'll build here.
Ready to engineer the platform that powers AI execution? Let's talk.
No up-to-date resume required.
Data Engineer
Senior data scientist job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four: one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python + JS
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
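Schema mapping and normalization of the kind described above might look like this in miniature; the legacy column names, canonical schema, and date format are hypothetical illustrations:

```python
from datetime import datetime

# Hypothetical mapping from legacy column names to a canonical schema
COLUMN_MAP = {"cust_nm": "customer_name", "rev_amt": "revenue", "dt": "date"}

def normalize(record):
    """Rename legacy columns and coerce values into canonical types."""
    out = {canonical: record.get(legacy)
           for legacy, canonical in COLUMN_MAP.items()}
    out["revenue"] = float(out["revenue"])
    # Assume the legacy system stores dates as MM/DD/YYYY; canonical form is ISO 8601
    out["date"] = datetime.strptime(out["date"], "%m/%d/%Y").date().isoformat()
    return out

row = normalize({"cust_nm": "Acme Co", "rev_amt": "1250.75", "dt": "03/04/2024"})
```

The same rename-and-coerce pattern scales up to mapping whole legacy tables onto a modern warehouse schema.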
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founder and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Senior Data Engineer
Senior data scientist job in Indianapolis, IN
Senior Data Engineer - Azure Data Warehouse (5-7+ Years Experience)
Long term renewing contract
You will support Azure-based data warehouse and dashboarding initiatives, working alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.
Key Responsibilities
· Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server
· Apply Medallion architecture principles and best practices for data lake and warehouse design
· Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
· Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
· Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
· Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
· Support dashboarding initiatives with Power BI and wireframe collaboration
· Ensure auditability, lineage, and performance across SQL Server and Oracle environments
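The Medallion (bronze/silver/gold) principles cited above can be sketched as successive refinement layers. The records and rules below are illustrative stand-ins for Delta tables, not the client's actual design:

```python
# Bronze: raw ingested records, kept as-is for auditability
bronze = [
    {"id": "1", "temp_f": "68.0", "site": "A"},
    {"id": "1", "temp_f": "68.0", "site": "A"},   # duplicate from a replayed feed
    {"id": "2", "temp_f": "bad",  "site": "B"},   # unparseable measurement
]

def to_silver(rows):
    """Silver layer: de-duplicated, type-enforced records."""
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen:
            continue
        try:
            out.append({"id": int(r["id"]),
                        "temp_f": float(r["temp_f"]),
                        "site": r["site"]})
            seen.add(r["id"])
        except ValueError:
            pass  # a real pipeline would quarantine bad rows, not drop them silently
    return out

def to_gold(rows):
    """Gold layer: business-level aggregate ready for dashboards."""
    return {"avg_temp_f": sum(r["temp_f"] for r in rows) / len(rows)}

silver = to_silver(bronze)
gold = to_gold(silver)
```

In Databricks the same flow would be expressed as Delta tables with change data feeds between layers; the point is the one-way refinement from raw to curated.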
Required Skills & Experience
· 5-7+ years in data engineering, data warehouse design, and ETL development
· Strong expertise in Azure Data Factory, Databricks, and Python
· Deep understanding of SQL Server, Oracle, PostgreSQL, and Cosmos DB, plus data modeling standards
· Proven experience with Medallion architecture and data lakehouse best practices
· Hands-on with CI/CD, DevOps, and deployment automation
· Agile mindset with ability to manage multiple priorities and deliver on time
· Excellent communication and documentation skills
Bonus Skills
· Experience with GCP or AWS
· Familiarity with Jira, Confluence, and AppDynamics
Sr. Data Engineer - PERM - MUST BE LOCAL
Senior data scientist job in Naperville, IL
Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. Candidates must be local to Illinois, as a future hybrid onsite schedule in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities, and a profit-sharing bonus.
This position is focused on building modern data pipelines, integrations and back-end data solutions. Selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts and other engineers to design and deliver data solutions that power business insights and AI products.
Responsibilities:
Design and develop scalable data pipelines for ingestion, transformation and integration using AWS services.
Pull data from PostgreSQL and SQL Server to migrate to AWS.
Create and modify jobs in AWS and modify logic in SQL Server.
Create SQL queries, stored procedures and functions in PostgreSQL and RedShift.
Provide input on data modeling and schema design as needed.
Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
Support inbound/outbound data flows, including APIs, S3 replication and secured data.
Assist with data visualization/reporting as needed.
Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.
Qualifications:
5+ years of data engineering experience.
Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
Strong experience with SQL, Python and PySpark.
A background in supply chain, logistics or distribution would be a plus.
Experience with Power BI is a plus.
Data Engineer
Senior data scientist job in Itasca, IL
Primary Location: Itasca, IL Hybrid in Chicago's Northwest Suburbs
2 Days In-Office, 3 Days WFH
TYPE: Direct Hire / Permanent Role
MUST BE a U.S. Citizen or Green Card holder
The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.
What You Bring to the Role (Ideal Experience)
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
5+ years of experience as a Data Engineer.
3+ years of experience with the following:
Building and supporting data lakehouse architectures using Delta Lake and change data feeds.
Working with PySpark and Python, with strong Object-Oriented Programming (OOP) experience to extend existing frameworks.
Designing data warehouse table architecture such as star schema or Kimball method.
Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code.
Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets.
Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments.
Nice to Haves
Hands-on experience implementing data solutions using Microsoft Fabric.
Experience with machine learning/ML and data science tools.
Knowledge of data governance and security best practices.
Experience in a larger IT environment with 3,000+ users and multiple domains.
Current industry certifications from Microsoft cloud/data platforms or equivalent certifications. One or more of the following is preferred:
Microsoft Certified: Fabric Data Engineer Associate
Microsoft Certified: Azure Data Scientist Associate
Microsoft Certified: Azure Data Fundamentals
Google Professional Data Engineer
Certified Data Management Professional (CDMP)
IBM Certified Data Architect - Big Data
What You'll Do (Skills Used in this Position)
Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
Extend and enhance existing OOP-based frameworks developed in Python and PySpark.
Partner with data scientists and analysts to define requirements and design robust data analytics solutions.
Ensure data quality and integrity through data cleansing, validation, and automated testing procedures.
Develop and maintain technical documentation, including requirements, design specifications, and test plans.
Implement and manage data integrations from multiple internal and external sources.
Optimize data workflows to improve performance, reliability, and reduce cloud consumption.
Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery.
Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric.
Provide technical leadership and coordination for global development and support teams.
Participate in creating a safe and healthy workplace by adhering to organizational safety protocols.
Support additional projects and initiatives as assigned by management.
Data Engineer
Data engineer job in Chicago, IL
We are seeking a highly skilled Data Engineer with strong expertise in Scala, AWS, and Apache Spark. The ideal candidate will have 7+ years of hands-on experience building scalable data pipelines, distributed processing systems, and cloud-native data solutions.
Key Responsibilities
Design, build, and optimize large-scale data pipelines using Scala and Spark.
Develop and maintain ETL/ELT workflows across AWS services.
Work on distributed data processing using Spark, Hadoop, or similar.
Build data ingestion, transformation, cleansing, and validation routines.
Optimize pipeline performance and ensure reliability in production environments.
Collaborate with cross-functional teams to understand requirements and deliver robust solutions.
Implement CI/CD best practices, testing, and version control.
Troubleshoot and resolve issues in complex data flow systems.
Required Skills & Experience
7+ years of Data Engineering experience.
Strong programming experience with Scala (must-have).
Hands-on experience with Apache Spark (core, SQL, streaming).
Solid experience with AWS cloud services (Glue, EMR, Lambda, S3, EC2, IAM, etc.).
High proficiency in SQL and relational/NoSQL data stores.
Strong understanding of data modeling, data architecture, and distributed systems.
Experience with workflow orchestration tools (Airflow, Step Functions, etc.).
Strong communication and problem-solving skills.
Preferred Skills
Experience with Kafka, Kinesis, or other streaming platforms.
Knowledge of containerization tools like Docker or Kubernetes.
Background in data warehousing or modern data lake architectures.
Senior Data Engineer
Senior data engineer job in Chicago, IL
Requires visa-independent candidates.
Note: OPT, CPT, and H1B visa holders cannot be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Engineer
Data engineer job in Chicago, IL
Job Title: Data Engineer - Workflow Automation
Employment Type: Contract to Hire or Full-Time
Department: Project Scion / Information Management Solutions
Key Responsibilities:
Design, build, and manage workflows using Automic or similar tools such as AutoSys, Apache Airflow, or Cybermation.
Orchestrate workflows across multi-cloud ecosystems (AWS, Azure, Snowflake, Databricks, Redshift).
Monitor and troubleshoot workflow execution, ensuring high availability, reliability, and performance.
Administer and maintain workflow platforms.
Collaborate with architecture and infrastructure teams to align workflows with cloud strategies.
Support migrations, upgrades, and workflow optimization efforts.
Required Skills:
5+ years of IT experience managing production-grade systems.
Hands-on experience with Automic or similar enterprise workflow automation tools.
Strong analytical and problem-solving skills.
Good communication and documentation skills.
Familiarity with cloud platforms and technologies (e.g., AWS, Azure, Snowflake, Databricks).
Scripting proficiency (e.g., Shell, Python).
Ability to manage workflows across hybrid environments and optimize performance.
Experience managing production operations and support activities.
Preferred Skills:
Experience with CI/CD pipeline integration.
Knowledge of cloud-native orchestration tools
Exposure to monitoring and alerting systems.
Data Engineer
Data engineer job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability.
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
Snowflake Data Engineer
Snowflake data engineer job in Chicago, IL
Join a dynamic team focused on building innovative data solutions that drive strategic insights for the business. This is an opportunity to leverage your expertise in Snowflake, ETL processes, and data integration.
Key Responsibilities
Develop Snowflake-based data models to support enterprise-level reporting.
Design and implement batch ETL pipelines for efficient data ingestion from legacy systems.
Collaborate with stakeholders to gather and understand data requirements.
Required Qualifications
Hands-on experience with Snowflake for data modeling and schema design.
Proven track record in developing ETL pipelines and understanding transformation logic.
Solid SQL skills to perform complex data transformations and optimization.
If you are passionate about building cutting-edge data solutions and want to make a significant impact, we would love to see your application!
#11290
Senior Analyst, Data and Insights
Senior data and insights analyst job in Chicago, IL
The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S., supporting 15,000 healthcare professionals and team members at more than 1,000 health and wellness offices across 47 states in three distinct categories: Dental care, urgent care, and medical aesthetics. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Lovet Animal Hospitals and Chapter Aesthetic Studios. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.
As a reflection of our current needs and planned growth we are very pleased to offer a new opportunity to join our dedicated team as a Senior Analyst, Data & Insights supporting our WellNow Urgent Care brand. This role is responsible for leveraging data and transforming it into insights and reporting that can be utilized across all levels of the organization. This role will be engaged in driving the business both through support of our business partners in both the offices and field teams along with informing strategy for our Executive Leadership Team.
Responsibilities
As a Senior Analyst, Data & Insights you will be responsible for:
Supporting the WellNow brand through the development of a common approach and infrastructure for data sources, built to support enterprise-wide reporting
Capturing and translating business requirements for reporting from executive leadership
Developing key data sources in BigQuery through use of SQL, Power BI, or other reporting languages and tools
Synthesizing insights from various data sources and presenting data in an easy-to-read manner
Becoming the organizational expert on data sources and how to extract data from all systems
Combining multiple data sources, with strong attention to detail and data integrity
Working across departments to understand how their work impacts business performance, deriving metrics to measure results
Identifying key opportunities to drive transparency and turn data into insights and action
Leading the organization in implementing a standardized, consistent approach to reporting, with a strong focus on user experience to drive usage
Utilizing data to uncover trends and insights, connecting changes in operational metrics to broader business performance, and crafting compelling narratives that inform stakeholders and drive strategic decision-making
Taking on ad-hoc analytical projects as needed
Minimum Education and Experience
BA or BS in Data Analytics, Finance, Business or other degree with equivalent work experience in analysis or insights-based roles.
3+ years of experience in data analytics or similar analysis-driven roles required. Required experience partnering with both key business stakeholders and IT departments.
Experience writing SQL (e.g., in BigQuery) is required. Experience using data visualization software like Tableau or Power BI is required.
Ability to find and query appropriate data from databases, along with validating and reviewing data and reports for accuracy and completeness
Excellent communication and interpersonal skills are required. Experience managing cross-functional projects with multiple stakeholders is desirable.
Advanced skills with Microsoft Excel and PowerPoint are required.
Ability to excel in a fast-paced environment, take direction, and handle multiple priorities
Annual Salary Range: $87,500 to $105,000, with a generous benefits package that includes paid time off, health, dental, vision, and 401(k) savings plan with match
If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
View CA Privacy Policy
Distinguished Data Engineer
Distinguished data engineer job in Chicago, IL
Distinguished Data Engineers are individual contributors who strive to be diverse in thought so we visualize the problem space. At Capital One, we believe diversity of thought strengthens our ability to influence, collaborate and provide the most innovative solutions across organizational boundaries. Distinguished Engineers will significantly impact our trajectory and devise clear roadmaps to deliver next generation technology solutions.
Deep technical experts and thought leaders that help accelerate adoption of the very best engineering practices, while maintaining knowledge on industry innovations, trends and practices
Visionaries, collaborating on Capital One's toughest issues, to deliver on business needs that directly impact the lives of our customers and associates
Role models and mentors, helping to coach and strengthen the technical expertise and know-how of our engineering and product community
Evangelists, both internally and externally, helping to elevate the Distinguished Engineering community and establish themselves as a go-to resource on given technologies and technology-enabled capabilities
The Distinguished Data Engineering role will be responsible for the architectural design and technical patterns that enable a high-performing, reliable data platform for Card authorizations. The focus of the work includes advancing data observability, the Spend Data Product, data standardization, and the core data pipelines that power authorization processing and decisioning. The role is expected to be hands-on, partnering closely with engineering teams and authorizations partners to help drive work forward.
Responsibilities:
Build awareness, increase knowledge and drive adoption of modern technologies, sharing consumer and engineering benefits to gain buy-in
Strike the right balance between lending expertise and providing an inclusive environment where others' ideas can be heard and championed; leverage expertise to grow skills in the broader Capital One team
Promote a culture of engineering excellence, using opportunities to reuse and innersource solutions where possible
Effectively communicate with and influence key stakeholders across the enterprise, at all levels of the organization
Operate as a trusted advisor for a specific technology, platform or capability domain, helping to shape use cases and implementation in a unified manner
Lead the way in creating next-generation talent for Tech, mentoring internal talent and actively recruiting external talent to bolster Capital One's Tech talent
Basic Qualifications:
Bachelor's Degree
At least 7 years of experience in data engineering
At least 3 years of experience in data architecture
At least 2 years of experience building applications in AWS
Preferred Qualifications:
Master's Degree
9+ years of experience in data engineering
3+ years of data modeling experience
2+ years of experience with ontology standards for defining a domain
2+ years of experience using Python, SQL or Scala
1+ year of experience deploying machine learning models
3+ years of experience implementing big data processing solutions on AWS (S3, DynamoDB, Lambda, Glue, Flink)
2+ years of experience with Orchestration Technologies (Airflow, Step functions)
2+ years of experience with Caching and In-memory Data stores
Capital One will consider sponsoring a new qualified applicant for employment authorization for this position
The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed upon number of hours to be regularly worked.
Chicago, IL: $239,900 - $273,800 for Distinguished Data Engineer
McLean, VA: $263,900 - $301,200 for Distinguished Data Engineer
Richmond, VA: $239,900 - $273,800 for Distinguished Data Engineer
Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter.
This role is also eligible to earn performance based incentive compensation, which may include cash bonus(es) and/or long term incentives (LTI). Incentives could be discretionary or non discretionary depending on the plan.
Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being. Learn more at the Capital One Careers website . Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level.
This role is expected to accept applications for a minimum of 5 business days. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
For technical support or questions about Capital One's recruiting process, please send an email to
Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site.
Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Data Scientist - Operations Research
Data scientist job in Chicago, IL
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.
Description
United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.
Job overview and responsibilities
Provides mathematical modeling and analysis services to support critical financial, operational, and/or strategic planning decisions. Supports project teams in value-added activities that generate practical solutions to complex business problems, explores new business alternatives, and drives improvement in business decisions.
* Develops the approach and methods to define and solve management problems through quantitative analysis and analytical models using operations research, machine learning, and structured programming languages
* Identifies, researches, or solves large complex problems using big data and operations research and machine learning principles
* Leverages understanding of the business process to identify and implement operations research solutions that will result in significant bottom-line contributions
* Builds and develops operations research/optimization mathematical model applications, and provides client support leveraging operations research knowledge
* Participates in model design, prototype, and model development for several efforts that occur simultaneously, and interfaces with product delivery groups
* Raises concerns when the scope of analysis may not align with the time available, and chooses an appropriate scope of analysis that balances ROI against time available
* Designs analytic plan/develop hypotheses to test; understands limitations of analysis (what it can and cannot be used for)
* Anticipates working-team questions about the data and approach
* Identifies solution quality risks and on-time risks
* Understands the business value, process, and expectations before focusing on choice of a technical solution
* Understands the intuition behind the numbers (i.e. does it make sense?)
* Provides on-going analytical services to client organizations
* Communicates results to management and clients
* Contributes deck content and builds the story for the deck with guidance to summarize findings
* Develops and delivers presentations aligned with AI standards
* Speaks in a manner appropriate for the working team and one level above
* Keeps informed about the latest analytical methods and research in the operations research and analytics fields
Qualifications
What's needed to succeed (Minimum Qualifications):
* Master's degree in Operations Research or another related quantitative discipline involving quantitative analysis and application of advanced operations research principles
* Coursework or work experience with mathematical programming techniques
* Coursework or work experience in model prototyping through use of optimization toolkit(s) including CPLEX, AMPL, or OPL
* Coursework or work experience with C, C++, Java, R, Python, or other structured programming language
* Good business, technical, verbal/written communication, presentation, and sales skills. Adaptability to a changing business environment
* Good interpersonal skills and ability to interact with clients
* Proficient with MS Office
* Successful completion of interview required to meet job qualifications
* Must be legally authorized to work in the United States for any employer without sponsorship
* Reliable, punctual attendance is an essential function of the position
What will help you propel from the pack (Preferred Qualifications):
* Minor in computer science and/or formal advanced computer science coursework preferred
* 1+ years of professional experience in analytical field
* 1+ years designing and programming/coding data structures for large-scale computer models
* Experience with Julia programming language
* Knowledge of United/industry data sources
* Structured programming for large-scale computer models
* Demonstrated ability to create business value
The base pay range for this role is $91,770.00 to $119,514.00.
The base salary range/hourly rate listed is dependent on job-related factors such as experience, education, and skills. This position is also eligible for bonus and/or long-term incentive compensation awards.
You may be eligible for the following competitive benefits: medical, dental, vision, life, accident & disability, parental leave, employee assistance program, commuter, paid holidays, paid time off, 401(k) and flight privileges.
United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status and other protected status as required by applicable law. Equal Opportunity Employer - Minorities/Women/Veterans/Disabled/LGBT.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions. Please contact JobAccommodations@united.com to request accommodation.
Advisory, Data Scientist - CMC Data Products
Data scientist job in Indianapolis, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview: The Bioproduct Research and Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS: S3, RDS, Lambda, Glue; Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
Additional Preferences
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (******************************************************** for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is
$126,000 - $244,200
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly