Senior Data Scientist
Senior data scientist job in Chicago, IL
Be a part of our success story. At Launch, we offer talented and driven professionals the opportunity to deliver high-impact work in a dynamic, fast-growing environment. We invest in our people through competitive compensation, exceptional benefits, internal growth pathways, and meaningful community involvement. For senior professionals ready to shape the future of AI and Data Science, your next big opportunity starts here.
The Role:
As a Senior Data Scientist, you will lead the design and development of advanced machine learning and AI solutions that solve complex business challenges. You'll apply a deep technical foundation in data science, spanning statistics, modeling, software engineering, and domain expertise, while helping set technical direction, mentor team members, and influence decision-making across the organization. You will work closely with stakeholders and cross-functional teams to define use cases, architect scalable solutions, and drive innovation through GenAI, LLMs, NLP, computer vision, and other cutting-edge techniques.
Key Responsibilities:
Lead advanced Data Science initiatives from ideation to deployment, ensuring alignment with business strategy and measurable outcomes.
Develop, implement, and optimize AI/ML models across a range of domains, including predictive analytics, GenAI, LLMs, time series, deep learning, and NLP.
Conduct rigorous exploratory data analysis and feature engineering to uncover insights and support robust model development.
Architect data preprocessing pipelines that ensure clean, high-quality, and well-structured data for training and evaluation.
Apply experimental design best practices (e.g., A/B testing, cross-validation, randomization) to ensure statistical rigor and reduce model bias/overfitting.
Own the deployment and monitoring of ML solutions on modern cloud platforms (Databricks, AWS SageMaker, Azure ML, Snowflake, etc.).
Serve as a technical advisor and thought leader, guiding internal stakeholders on AI capabilities, data strategy, and industry best practices.
Mentor and support the development of junior and mid-level data scientists through code reviews, pair programming, and knowledge sharing.
Communicate effectively across technical and non-technical audiences, including executive leadership, to influence and drive adoption of data-driven solutions.
Required Qualifications:
7+ years of hands-on experience in Data Science, with a strong track record of delivering end-to-end AI/ML solutions in production environments.
Deep expertise in Python, including libraries such as scikit-learn, pandas, NumPy, TensorFlow, PyTorch, spaCy, and/or Hugging Face.
Strong understanding of statistical modeling, regression, classification, clustering, and deep learning architectures.
Demonstrated experience applying GenAI, LLMs, NLP, or computer vision techniques to solve real-world business problems.
Proven ability to write advanced SQL and work with complex data structures across relational and non-relational databases.
Experience with data pipeline orchestration and preprocessing of structured and unstructured data at scale.
Hands-on experience with cloud data science platforms such as Databricks, AWS SageMaker, Azure ML, Snowflake Snowpark, or Palantir Foundry.
Strong stakeholder management skills and the ability to thrive in a high-paced, constantly evolving environment.
Excellent verbal and written communication skills, including the ability to present complex technical concepts clearly and persuasively.
Preferred Qualifications:
Experience with project delivery frameworks and tools such as Azure DevOps, JIRA, or Agile methodologies.
Experience working with Electronic Medical Record systems and healthcare data file types (e.g., HL7, FHIR, CCD).
Certifications related to Epic, FHIR, or other health IT platforms are a plus.
Familiarity with data architecture tools (e.g., Miro, Visio, LucidChart) for technical design documentation.
Knowledge of modern cloud-based data solutions such as Azure Synapse, Snowflake, Redshift, or BigQuery.
Benefits & Compensation:
As an employee at Launch, you will grow your skills and experience through a variety of exciting project work (across industries and technologies) with some of the top companies in the world! Our employees receive full benefits - medical, dental, vision, short-term disability, long-term disability, life insurance, and a matched 401(k). We also have an uncapped, take-what-you-need PTO policy. The anticipated wage range for this role is $140,000-$180,000. Education and experience will be taken into consideration, and we are happy to discuss your wage expectations in more detail throughout our interview process.
Sr. Data Scientist
Senior data scientist job in Chicago, IL
The Senior Data Scientist, Clinical Data Science (HOS & HRA) plays a key role in advancing analytics that improve Medicare Advantage member outcomes and CMS Star Ratings performance. This position supports the design, implementation, and automation of analytic solutions for the Health Outcomes Survey (HOS) and Health Risk Assessment (HRA) programs, two core domains in Aetna's Medicare clinical strategy.
The ideal candidate combines strong technical depth in data science and statistical modeling with the ability to translate complex findings into actionable insights for non-technical audiences. This individual will automate recurring data science workflows, conduct robust impact and descriptive analyses, and collaborate closely with clinical, quality, and operations teams to identify emerging opportunities that improve member experience and population health outcomes.
Clinical Data Science & Analytics
Lead the development of analytic models and descriptive frameworks supporting HOS and HRA performance improvement across Medicare Advantage.
Conduct impact analyses, trend identification, and segmentation to explain drivers of performance and inform strategy.
Automate recurring analytics and reporting pipelines to increase reliability, efficiency, and reproducibility of insights.
Apply advanced statistical, predictive, and causal inference methods to evaluate intervention effectiveness and identify member-level opportunities.
Develop and refine tools for data visualization and storytelling to communicate results clearly to non-technical stakeholders.
Partner with business leaders to translate analytic results into actionable recommendations for program design, member outreach, and care interventions.
Collaboration & Consultation
Serve as a bridge between technical and non-technical teams, ensuring analytic outputs are interpretable and actionable.
Collaborate cross-functionally with Clinical Operations, Member Experience, and Quality teams to align analytics with enterprise goals.
Support enterprise data modernization and automation initiatives by identifying repeatable use cases for scalable analytics and workflow improvement.
Mentor junior data scientists and analysts on best practices for data integrity, modeling, and automation.
Technical & Operational Excellence
Design and maintain automated analytic processes leveraging Python, SQL, and modern cloud environments (e.g., GCP).
Ensure accuracy, consistency, and explainability of models and metrics through disciplined version control and validation.
Contribute to the team's continuous improvement culture by recommending new methods, tools, or data sources that enhance analytic precision and speed.
Required Skills & Experience
5+ years of hands-on experience in data science, advanced analytics, or statistical modeling in healthcare, life sciences, or managed care.
Strong proficiency in Python, SQL, and data science libraries (e.g., pandas, scikit-learn, statsmodels).
Demonstrated ability to automate data workflows and standardize recurring analyses or reporting.
Experience applying statistical and descriptive analytics to clinical or quality measurement problems (e.g., HOS, HRA, CAHPS, or HEDIS).
Proven success communicating complex findings to non-technical business partners and influencing decision-making.
Ability to work effectively in a fast-paced, cross-functional environment.
Nice to Have Skills & Experience
Master's or PhD in Data Science, Statistics, Epidemiology, Public Health, or a related quantitative field.
Familiarity with Medicare Advantage, CMS Star Ratings methodology, and clinical quality measures.
Experience working within modern cloud environments (e.g., Google Cloud Platform, Databricks) and with workflow orchestration tools (Airflow, dbt).
Background in impact measurement, causal inference, or time-series analysis in healthcare contexts.
Senior Data Scientist
Senior data scientist job in Chicago, IL
Role: Senior Data Scientist
· We are seeking a hands-on Senior Data Scientist to join our Insurance Analytics & AI Vertical. The ideal candidate will bring a blend of insurance domain expertise (preferably P&C), consulting mindset, and strong data science skills. This is a mid-senior level role focused on delivering value through analytics, stakeholder engagement, and logical problem solving, rather than people management.
· The role involves working closely with EXL teams and clients on reporting, data engineering, transformation, and advanced analytics projects. While strong technical skills are important, we are looking for someone who can engage directly with clients, translate business needs into analytical solutions, and drive measurable impact.
Key Responsibilities
· Collaborate with EXL and client stakeholders to design and deliver data-driven solutions across reporting, analytics, and transformation initiatives.
· Apply traditional statistical methods, machine learning, deep learning, and NLP techniques to solve business problems.
· Support insurance-focused analytics use cases (with preference for P&C lines of business).
· Work in a consulting setup: conduct requirement gathering, structure problem statements, and communicate insights effectively to senior stakeholders.
· Ensure data quality, governance, and compliance with Data Privacy and Protection Guidelines.
· Independently research, analyze, and present findings, ensuring client-ready deliverables.
· Contribute to continuous improvement initiatives and support business development activities where required.
Key Skillsets & Experience
· 7-12 years of experience in analytics, reporting, dashboarding, ETL, Python/R, and associated data management.
· Proficiency in machine learning, deep learning algorithms (e.g., neural networks), and text analytics techniques (NLTK, Gensim, LDA, word embeddings like Word2Vec, FastText, GloVe).
· Strong consulting background with structured problem-solving and stakeholder management skills.
· Excellent communication and presentation skills with the ability to influence and engage senior business leaders.
· Hands-on role with ability to independently manage client deliverables and operate in cross-cultural, global environments.
Data Management Skills
· Strong familiarity with advanced analytics tools (Python, R), BI tools (Tableau, Power BI), and related software applications.
· Good knowledge of SQL, Informatica, Hadoop/Spark, ETL tools.
· Ability to translate business/functional requirements into technical specifications.
· Exposure to cloud data management and AWS services (preferred).
Candidate Profile
· Bachelor's/Master's degree in Economics, Mathematics, Computer Science/Engineering, Operations Research, or related analytical fields.
· Prior insurance industry experience (P&C preferred) strongly desired.
· Superior analytical, logical, and problem-solving skills.
· Outstanding written and verbal communication abilities with a consultative orientation.
· Flexible to work in a fast-paced, evolving environment with occasional visits to the client's Chicago office.
Data Scientist
Senior data scientist job in Peoria, IL
Typical task breakdown:
- Assist with monthly reporting on team metrics, cost savings and tariff analysis
- Lead development of data analytics to assist category teams in making strategic sourcing decisions
Interaction with team:
- Will work as a support to multiple category teams
Team Structure
- Report to the MC&H Strategy Manager and collaborate with Category Managers and buyers
Work environment:
Office environment
Education & Experience Required:
- Years of experience: 3-5
- Degree requirement: Bachelor's degree
- Do you accept internships as job experience: Yes
Top 3 Skills
· Communicates effectively to develop standard procedures
· Applies problem-solving techniques across diverse procurement scenarios
· Analyzes procurement data to generate actionable insights
Additional Technical Skills
(Required)
- Proficient in PowerBI, PROcure, and tools like CICT, Lognet, MRC, PO Inquiry, AoS
- Expertise in Snowflake and data mining
(Desired)
- Prior experience in Procurement
- Familiarity with monthly reporting processes, including ABP (Annual Business Plan) and RBM (Rolling Business Management)
- Demonstrated expertise in cost savings initiatives
- Machine Learning and AI
Soft Skills
(Required)
- Strong written and verbal communication skills
- Balances speed with accuracy in task execution
- Defines problems and evaluates their impact
(Desired)
- Emotional Intelligence
- Leadership and team management capabilities
Big Data Consultant
Senior data scientist job in Chicago, IL
Job Title: Big Data Engineer
Employment Type: W2 Contract
Detailed Job Description:
We are seeking a skilled and experienced Big Data Platform Engineer with 7+ years of experience and a strong background in both development and administration of big data ecosystems. The ideal candidate will be responsible for designing, building, maintaining, and optimizing scalable data platforms that support advanced analytics, machine learning, and real-time data processing.
Key Responsibilities:
Platform Engineering & Administration:
• Install, configure, and manage big data tools such as Hadoop, Spark, Kafka, Hive, HBase, and others.
• Monitor cluster performance, troubleshoot issues, and ensure high availability and reliability.
• Implement security policies, access controls, and data governance practices.
• Manage upgrades, patches, and capacity planning for big data infrastructure.
Development & Data Engineering:
• Design and develop scalable data pipelines using tools like Apache Spark, Flink, NiFi, or Airflow.
• Build ETL/ELT workflows to ingest, transform, and load data from various sources.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with data scientists and analysts to support model deployment and data exploration.
Data Engineer
Senior data scientist job in Chicago, IL
Data Engineer - Build the Data Engine Behind AI Execution - Starting Salary $150,000
You'll be part architect, part systems designer, part execution partner - someone who thrives at the intersection of engineering precision, scalability, and impact.
As the builder behind the AI data platform, you'll turn raw, fragmented data into powerful, reliable systems that feed intelligent products. You'll shape how data flows, how it scales, and how it powers decision-making across AI, analytics, and product teams.
Your work won't be behind the scenes - it will be the foundation of everything we build.
You'll be joining a company built for builders. Our model combines AI consulting, venture building, and company creation into one execution flywheel. Here, you won't just build data pipelines - you'll build the platforms that power real products and real companies.
You know that feeling when a data system scales cleanly under real-world pressure, when latency drops below target, when complexity turns into clarity - and everything just flows? That's exactly what you'll build here.
Ready to engineer the platform that powers AI execution? Let's talk.
No up-to-date resume required.
Data Engineer
Senior data scientist job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four: one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python + JS
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founder and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Data Engineer
Senior data scientist job in Itasca, IL
Primary Location: Itasca, IL (Hybrid in Chicago's Northwest Suburbs)
2 Days In-Office, 3 Days WFH
TYPE: Direct Hire / Permanent Role
Must be a U.S. Citizen or Green Card holder.
The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.
What You Bring to the Role (Ideal Experience)
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
5+ years of experience as a Data Engineer.
3+ years of experience with the following:
Building and supporting data lakehouse architectures using Delta Lake and change data feeds.
Working with PySpark and Python, with strong Object-Oriented Programming (OOP) experience to extend existing frameworks.
Designing data warehouse table architecture such as star schema or Kimball method.
Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code.
Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets.
Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments.
Nice to Haves
Hands-on experience implementing data solutions using Microsoft Fabric.
Experience with machine learning/ML and data science tools.
Knowledge of data governance and security best practices.
Experience in a larger IT environment with 3,000+ users and multiple domains.
Current industry certifications from Microsoft cloud/data platforms or equivalent certifications. One or more of the following is preferred:
Microsoft Certified: Fabric Data Engineer Associate
Microsoft Certified: Azure Data Scientist Associate
Microsoft Certified: Azure Data Fundamentals
Google Professional Data Engineer
Certified Data Management Professional (CDMP)
IBM Certified Data Architect - Big Data
What You'll Do (Skills Used in this Position)
Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
Extend and enhance existing OOP-based frameworks developed in Python and PySpark.
Partner with data scientists and analysts to define requirements and design robust data analytics solutions.
Ensure data quality and integrity through data cleansing, validation, and automated testing procedures.
Develop and maintain technical documentation, including requirements, design specifications, and test plans.
Implement and manage data integrations from multiple internal and external sources.
Optimize data workflows to improve performance, reliability, and reduce cloud consumption.
Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery.
Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric.
Provide technical leadership and coordination for global development and support teams.
Participate in creating a safe and healthy workplace by adhering to organizational safety protocols.
Support additional projects and initiatives as assigned by management.
Data Engineer
Senior data scientist job in Chicago, IL
We are seeking a highly skilled Data Engineer with strong expertise in Scala, AWS, and Apache Spark. The ideal candidate will have 7+ years of hands-on experience building scalable data pipelines, distributed processing systems, and cloud-native data solutions.
Key Responsibilities
Design, build, and optimize large-scale data pipelines using Scala and Spark.
Develop and maintain ETL/ELT workflows across AWS services.
Work on distributed data processing using Spark, Hadoop, or similar.
Build data ingestion, transformation, cleansing, and validation routines.
Optimize pipeline performance and ensure reliability in production environments.
Collaborate with cross-functional teams to understand requirements and deliver robust solutions.
Implement CI/CD best practices, testing, and version control.
Troubleshoot and resolve issues in complex data flow systems.
Required Skills & Experience
7+ years of Data Engineering experience.
Strong programming experience with Scala (must-have).
Hands-on experience with Apache Spark (core, SQL, streaming).
Solid experience with AWS cloud services (Glue, EMR, Lambda, S3, EC2, IAM, etc.).
High proficiency in SQL and relational/NoSQL data stores.
Strong understanding of data modeling, data architecture, and distributed systems.
Experience with workflow orchestration tools (Airflow, Step Functions, etc.).
Strong communication and problem-solving skills.
Preferred Skills
Experience with Kafka, Kinesis, or other streaming platforms.
Knowledge of containerization tools like Docker or Kubernetes.
Background in data warehousing or modern data lake architectures.
Junior Data Engineer
Senior data scientist job in Chicago, IL
Job Title - Junior Data Engineer
Duration - Full-time
No. of Positions - 8
Interview Process - Imocha test & 1 CG Interview
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
Design and develop Python scripts for data transformation, automation, and integration tasks.
Develop and optimize SQL queries for data extraction, transformation, and loading.
Collaborate with data scientists, analysts, and business stakeholders
Ensure data integrity, security, and compliance with organizational standards.
Participate in code reviews and contribute to best practices in data engineering
Required Skills
3-5 years of professional experience in data engineering or related roles.
Strong proficiency in Databricks (including Spark-based data processing).
Strong programming skills in Python
Advanced knowledge of SQL for querying and data modeling.
Familiarity with Azure cloud and ADF
Understanding of ETL frameworks, data governance, and performance tuning.
Knowledge of CI/CD practices and version control (Git).
Exposure to BI tools (Power BI, Tableau) for data visualization
Mandatory Skills
Python, Databricks, SQL, ETL, Power BI & Tableau (good to have)
If you're interested, kindly share your resume at:
****************************
Life At Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Salary Transparency
Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is $56,186 to $87,556 per year. This role may be eligible for other compensation including variable compensation, bonus, or commission. Full time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
Senior Data Engineer
Senior data scientist job in Chicago, IL
Requires visa-independent candidates.
Note: OPT, CPT, and H1B holders will not be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Engineer
Senior data scientist job in Chicago, IL
Job Title: Data Engineer - Workflow Automation
Employment Type: Contract to Hire or Full-Time
Department: Project Scion / Information Management Solutions
Key Responsibilities:
Design, build, and manage workflows using Automic or experience with similar tools like Autosys, Apache Airflow, or Cybermation.
Orchestrate workflows across multi-cloud ecosystems (AWS, Azure, Snowflake, Databricks, Redshift).
Monitor and troubleshoot workflow execution, ensuring high availability, reliability, and performance.
Administer and maintain workflow platforms.
Collaborate with architecture and infrastructure teams to align workflows with cloud strategies.
Support migrations, upgrades, and workflow optimization efforts
Required Skills:
5+ years of experience in IT managing production-grade systems.
Hands-on experience with Automic or similar enterprise workflow automation tools.
Strong analytical and problem-solving skills.
Good communication and documentation skills.
Familiarity with cloud platforms and technologies (e.g., AWS, Azure, Snowflake, Databricks).
Scripting proficiency (e.g., Shell, Python).
Ability to manage workflows across hybrid environments and optimize performance.
Experience managing production operations & support activities
Preferred Skills:
Experience with CI/CD pipeline integration.
Knowledge of cloud-native orchestration tools
Exposure to monitoring and alerting systems.
Data Engineer
Senior data scientist job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability.
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
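The data validation and quality-framework responsibilities above can be sketched, under the assumption of a simple pricing table (the column names `security_id`, `price_date`, and `price` are invented for illustration), as a handful of pandas checks run before data is published:

```python
import pandas as pd

def validate_prices(df: pd.DataFrame) -> list:
    """Return a list of data-quality issues found in a pricing frame."""
    issues = []
    if df["security_id"].isna().any():
        issues.append("null security_id")
    if df.duplicated(subset=["security_id", "price_date"]).any():
        issues.append("duplicate (security_id, price_date) rows")
    if (df["price"] <= 0).any():
        issues.append("non-positive prices")
    return issues

# Deliberately flawed sample: a duplicate row and a negative price
prices = pd.DataFrame({
    "security_id": ["A1", "A1", "B2"],
    "price_date": ["2024-01-02", "2024-01-02", "2024-01-02"],
    "price": [101.5, 101.5, -3.0],
})
problems = validate_prices(prices)
```

In practice these checks would run inside the pipeline (e.g., as a Snowflake task or pre-load gate) and feed the monitoring and alerting layer rather than a return value.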
Snowflake Data Engineer
Senior data scientist job in Chicago, IL
Join a dynamic team focused on building innovative data solutions that drive strategic insights for the business. This is an opportunity to leverage your expertise in Snowflake, ETL processes, and data integration.
Key Responsibilities
Develop Snowflake-based data models to support enterprise-level reporting.
Design and implement batch ETL pipelines for efficient data ingestion from legacy systems.
Collaborate with stakeholders to gather and understand data requirements.
Required Qualifications
Hands-on experience with Snowflake for data modeling and schema design.
Proven track record in developing ETL pipelines and understanding transformation logic.
Solid SQL skills to perform complex data transformations and optimization.
If you are passionate about building cutting-edge data solutions and want to make a significant impact, we would love to see your application!
#11290
Senior Analyst, Data and Insights
Senior data scientist job in Chicago, IL
The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S., supporting 15,000 healthcare professionals and team members at more than 1,000 health and wellness offices across 47 states in three distinct categories: Dental care, urgent care, and medical aesthetics. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Lovet Animal Hospitals and Chapter Aesthetic Studios. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.
As a reflection of our current needs and planned growth, we are very pleased to offer a new opportunity to join our dedicated team as a Senior Analyst, Data & Insights supporting our WellNow Urgent Care brand. This role is responsible for leveraging data and transforming it into insights and reporting that can be used across all levels of the organization. The role drives the business both by supporting our business partners in the offices and field teams and by informing strategy for our Executive Leadership Team.
Responsibilities
As a Senior Analyst, Data & Insights you will be responsible for:
Supporting the WellNow brand through the development of a common approach and infrastructure for data sources built to support enterprise-wide reporting
Capturing and translating business requirements for reporting from executive leadership
Developing key data sources in BigQuery through use of SQL, Power BI, or other reporting languages and tools
Synthesizing insights from various data sources and presenting data in an easy-to-read manner
Becoming the organizational expert on data sources and how to extract data from all systems
Combining multiple data sources with strong attention to detail and data integrity
Working across departments to understand how their work impacts business performance, deriving metrics to measure results
Identifying key opportunities to drive transparency and turn data into insights and action
Leading the organization in implementing a standardized, consistent approach to reporting, with a strong focus on user experience to drive usage
Using data to uncover trends and insights, connecting changes in operational metrics to broader business performance, and crafting compelling narratives that inform stakeholders and drive strategic decision-making
Supporting ad-hoc analytical projects as needed
Minimum Education and Experience
BA or BS in Data Analytics, Finance, Business, or another degree with equivalent work experience in analysis or insights-based roles.
3+ years of experience in data analytics or similar analysis-driven roles, including experience partnering with both key business stakeholders and IT departments, is required.
Experience writing SQL (e.g., in BigQuery) is required, as is experience with data visualization software such as Tableau or Power BI.
Ability to find and query appropriate data from databases, along with validating and reviewing data and reports for accuracy and completeness.
Excellent communication and interpersonal skills are required. Experience managing cross-functional projects with multiple stakeholders is desirable.
Advanced skills with Microsoft Excel and PowerPoint are required.
Ability to excel in a fast-paced environment, take direction, and handle multiple priorities
Annual Salary Range: $87,500-$105,000, with a generous benefits package that includes paid time off, health, dental, vision, and a 401(k) savings plan with match
If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
Senior Data Analyst - PowerBI Dashboards
Senior data scientist job in Chicago, IL
W2 or C2C
Local candidates preferred - hybrid role
Must have experience creating Power BI dashboards
A Data Analyst interprets data and turns it into information that can offer ways to improve a business, thus affecting business decisions.
Data Analysts gather information from various sources and interpret patterns and trends; as such, a Data Analyst job description should highlight the analytical nature of the role.
Once data has been gathered and interpreted, the Data Analyst reports the findings back to the wider business and relevant colleagues in a comprehensive study.
Primary Responsibilities:
Collaborate to design and develop schemas, data acquisition transformations, and data integration.
Develop and utilize software to interface between big data and relational solutions.
Design and implement solutions for metadata, data quality, and privacy management.
Support and consult with the development staff.
Collect and interpret data, and analyze results.
Report the results back to the relevant members of the business.
Identify patterns and trends in data sets.
Work alongside teams within the business or the management team to establish business needs.
Define new data collection and analysis processes.
Qualifications:
Technical Degree or related work experience
Ability to analyze existing tools and databases and provide software solution recommendations.
Ability to translate business requirements into non-technical lay terms.
High-level experience in methodologies and processes for managing large scale databases.
Demonstrated experience in handling large data sets and relational databases.
Data Scientist - Operations Research
Senior data scientist job in Chicago, IL
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.
**Description**
United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.
**Job overview and responsibilities**
Provides mathematical modeling and analysis services to support critical financial, operational, and/or strategic planning decisions. Supports project teams in value-added activities that generate practical solutions to complex business problems, explores new business alternatives, and drives improvement in business decisions.
+ Develops the approach and methods to define and solve management problems through quantitative analysis and analytical models using operations research, machine learning, and structured programming languages
+ Identifies, researches, or solves large complex problems using big data and operations research and machine learning principles
+ Leverages understanding of the business process to identify and implement operations research solutions that will result in significant bottom-line contributions
+ Builds and develops operations research/optimization mathematical model applications, and provides client support leveraging operations research knowledge
+ Participates in model design, prototype, and model development for several efforts that occur simultaneously, and interfaces with product delivery groups
+ Raises concerns when scope of analysis may not align with time available and can choose an appropriate scope of analysis to conduct balancing ROI to time available
+ Designs analytic plan/develop hypotheses to test; understands limitations of analysis (what it can and cannot be used for)
+ Anticipates working-team questions about data and approach
+ Identifies solution quality risks and on-time risks
+ Understands the business value, process, and expectations before focusing on choice of a technical solution
+ Understands the intuition behind the numbers (i.e. does it make sense?)
+ Provides on-going analytical services to client organizations
+ Communicates results to management and clients
+ Contributes deck content and builds the story for the deck with guidance to summarize findings
+ Develops and delivers presentations aligned with Ai standards
+ Speaks in a manner appropriate for working team and their level +1
+ Keeps informed about the latest analytical methods and research in the operations research and analytics fields
**Qualifications**
**What's needed to succeed (Minimum Qualifications):**
+ Master's degree in Operations Research or another related quantitative discipline involving quantitative analysis and the application of advanced operations research principles
+ Coursework or work experience with mathematical programming techniques
+ Coursework or work experience in model prototyping through use of optimization toolkit(s) including CPLEX, AMPL, or OPL
+ Coursework or work experience with C, C++, Java, R, Python, or other structured programming language
+ Good business, technical, verbal/written communication, presentation and sales skills. Adaptability to changing business environment
+ Good interpersonal skills and ability to interact with clients
+ Proficient with MS Office
+ Successful completion of interview required to meet job qualifications
+ Must be legally authorized to work in the United States for any employer without sponsorship
+ Reliable, punctual attendance is an essential function of the position
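The mathematical-programming background listed above is typically exercised with commercial toolkits such as CPLEX; as a small open-source stand-in, the same style of linear program can be sketched with SciPy (the product and constraint numbers below are invented for illustration):

```python
from scipy.optimize import linprog

# Maximize profit 20x + 30y for two products x and y,
# subject to: 2x + 4y <= 100  (machine hours available)
#             3x + 2y <= 90   (labor hours available)
#             x, y >= 0
# linprog minimizes, so we negate the objective coefficients.
res = linprog(
    c=[-20, -30],
    A_ub=[[2, 4], [3, 2]],
    b_ub=[100, 90],
    bounds=[(0, None), (0, None)],
    method="highs",
)
x, y = res.x
profit = -res.fun  # undo the negation to recover the maximized profit
```

The same model expressed in OPL or AMPL would hand the constraint matrix to CPLEX instead; the modeling discipline (decision variables, objective, constraints) is identical.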
**What will help you propel from the pack (Preferred Qualifications):**
+ Minor in computer science and/or formal advanced computer science coursework preferred
+ 1+ years of professional experience in analytical field
+ 1+ years designing and programming/coding data structures for large-scale computer models
+ Experience with Julia programming language
+ Knowledge of United/industry data sources
+ Structured programming for large-scale computer models
+ Demonstrated ability to create business value
The base pay range for this role is $91,770.00 to $119,514.00.
The base salary range/hourly rate listed is dependent on job-related factors such as experience, education, and skills. This position is also eligible for bonus and/or long-term incentive compensation awards.
You may be eligible for the following competitive benefits: medical, dental, vision, life, accident & disability, parental leave, employee assistance program, commuter, paid holidays, paid time off, 401(k) and flight privileges.
United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status and other protected status as required by applicable law. Equal Opportunity Employer - Minorities/Women/Veterans/Disabled/LGBT.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions. Please contact JobAccommodations@united.com to request accommodation.
Senior Data Scientist
Senior data scientist job in Chicago, IL
The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S., supporting 15,000 healthcare professionals and team members at more than 1,200 health and wellness offices across 47 states in four distinct categories: Dental care, urgent care, veterinary care, and medical aesthetics. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Lovet Animal Hospitals and Chapter Aesthetic Studios. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.
As a reflection of our current needs and planned growth, we are very pleased to offer a new opportunity to join our dedicated team as a Senior Data Scientist. This role is responsible for transforming raw data from various sources into high-quality machine learning and forecasting models that inform the growth strategy across our brands and for TAG as a whole.
Responsibilities
As a Senior Data Scientist, you will be responsible for:
Model Development
Identify opportunities within the business and proactively partner with business leaders to develop models using advanced ML and AI techniques to unlock stronger, more streamlined business performance
Build, validate, and deploy predictive forecasting models to predict demand, revenue, and market trends
Design and execute A/B tests and multivariate experiments to measure the impact of pricing changes and promotional strategies
Build elasticity models and scenario simulations to forecast revenue, margin, and adoption under different pricing and promotional strategies
Apply statistical and machine learning methods to identify patterns, trends, and insights
Understand business objectives and translate them into data-driven solutions
Gather, clean, and validate structured and unstructured data from multiple sources
Conduct rigorous testing of models, ensuring reasonable results and reporting back to leadership on outcomes
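One common way to build the elasticity models described above is a log-log regression, where the fitted slope on log price estimates price elasticity directly. A minimal sketch on synthetic data (the elasticity, price range, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_elasticity = -1.5  # ground truth used to generate the synthetic data

# Synthetic observations: demand = k * price^elasticity * multiplicative noise
price = rng.uniform(5.0, 50.0, size=500)
demand = 1000.0 * price**true_elasticity * rng.lognormal(0.0, 0.05, size=500)

# In log space the model is linear: log(demand) = log(k) + elasticity * log(price),
# so an OLS fit recovers elasticity as the slope.
slope, intercept = np.polyfit(np.log(price), np.log(demand), 1)

# Scenario simulation: predicted demand change for a 10% price increase
pct_demand_change = 1.10**slope - 1.0
```

The same fitted slope feeds scenario simulations: with an elasticity near -1.5, a 10% price increase implies roughly a 13% drop in demand, which can then be rolled into revenue and margin forecasts.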
Model Ownership
Monitor and report on model performance
Develop recommendations to improve model accuracy
Translate complex analytical findings into clear, actionable insights with recommendations to improve performance
Partner with leaders in Product, IT, and Brand leadership to implement tools that leverage the models, changing data outputs into a fully functioning product
Maintain well-documented, reproducible code and workflows
Continuous Learning & Development
Stay current with industry trends, tools, and best practices in data science
Proactively suggest new approaches and methodologies to enhance decision-making by leadership
Collaborate with cross-functional teams (IT, product, finance, operations) to evolve data science approaches that integrate the latest business insights and trends
Minimum Education and Experience
BA or BS in Data Science, Computer Science, Mathematics, or other degree with equivalent work experience in advanced analytics or business forecasting roles. Advanced Degree is preferred.
4-6 years of experience in roles that leverage data analysis, statistical modeling, or machine learning, including experience leading projects, is required
Hands-on experience leveraging statistical packages in Python for statistical analysis and leveraging SQL to query data is required. Experience using data visualization software like Tableau or PowerBI is a plus.
Experience working with funnel optimization, user segmentation, cohort analyses, time series analyses, and regression models
Excellent communication and interpersonal skills are required. Experience managing cross-functional projects with multiple stakeholders is desirable.
Proven track record of partnering with executive-level audiences to identify opportunities, build out the framework for potential models, and deliver a recommended approach to the business
Ability to excel in a fast-paced environment, take direction, and handle multiple priorities
Annual Salary Range: $120,000-$140,000/year, with a generous benefits package that includes paid time off, health, dental, vision, and 401(k) savings plan with match.
If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
Senior Data Scientist
Senior data scientist job in Springfield, IL
**What Data Science contributes to Cardinal Health**
The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze, and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytics products; the access, design, and implementation of reporting/business intelligence solutions; and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets and integrating multiple systems.
This role will support the Major Rugby business unit, a legacy supplier of multi-source, generic pharmaceuticals for over 60 years. Major Rugby provides over 1,000 high-quality, Rx, OTC and vitamin, mineral and supplement products to the acute, retail, government and consumer markets. This role will focus on leveraging advanced analytics, machine learning, and optimization techniques to solve complex challenges related to demand forecasting, inventory optimization, logistics efficiency and risk mitigation. Our goal is to uncover insights and drive meaningful deliverables to improve decision making and business outcomes.
**Responsibilities:**
+ Leads the design, development, and deployment of advanced analytics and machine learning models to solve complex business problems
+ Collaborates cross-functionally with product, engineering, operations, and business teams to identify opportunities for data-driven decision-making
+ Translates business requirements into analytical solutions and delivers insights that drive strategic initiatives
+ Develops and maintains scalable data science solutions, ensuring reproducibility, performance, and maintainability
+ Evaluates and implements new tools, frameworks, and methodologies to enhance the data science toolkit
+ Drives experimentation and A/B testing strategies to optimize business outcomes
+ Mentors junior data scientists and contributes to the development of a high-performing analytics team
+ Ensures data quality, governance, and compliance with organizational and regulatory standards
+ Stays current with industry trends, emerging technologies, and best practices in data science and AI
+ Contributes to the development of internal knowledge bases, documentation, and training materials
**Qualifications:**
+ 8-12 years of experience in data science, analytics, or a related field (preferred)
+ Advanced degree (Master's or Ph.D.) in Data Science, Computer Science, Engineering, Operations Research, Statistics, or a related discipline preferred
+ Strong programming skills in Python and SQL
+ Proficiency in data visualization tools such as Tableau or Looker, with a proven ability to translate complex data into clear, actionable business insights
+ Deep understanding of machine learning, statistical modeling, predictive analytics, and optimization techniques
+ Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is highly desirable
+ Excellent communication and storytelling skills, with the ability to influence stakeholders and present findings to both technical and non-technical audiences
+ Experience in supervised and unsupervised machine learning, including classification, forecasting, anomaly detection, pattern detection, and text mining, using a variety of techniques such as decision trees, time series analysis, bagging and boosting algorithms, neural networks, deep learning, and natural language processing (NLP)
+ Experience with PyTorch or other deep learning frameworks
+ Strong understanding of RESTful APIs and/or data streaming is a big plus
+ Experience with modern version control (GitHub, Bitbucket) is required
+ Hands-on experience with containerization (Docker, Kubernetes, etc.)
+ Experience with product discovery and design thinking
+ Experience with Gen AI
+ Experience with supply chain analytics is preferred
**Anticipated salary range:** $123,400 - $176,300
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before pay day with my FlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 12/02/2025. If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
\#LI-Remote
\#LI-AP4
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Data Scientist/Engineer
Senior data scientist job in Chicago, IL
InfoTech Resources is seeking a Data Scientist for a hybrid position based in the Chicago Loop, working 2-3 days per week onsite. This opportunity is open to both contract and direct-hire candidates.
The selected candidate will play a key role in designing, developing, and optimizing data-driven AI solutions within a modern cloud-based environment (Google Cloud). The client is a large enterprise company in the insurance sector and this role will join their first dedicated AI/ML team charged with designing and laying a proper foundation for AI/ML solutions. The Data Scientist will use Python and leading AI frameworks to build, train, and evaluate models that support intelligent applications and data analytics initiatives leveraging Google Vertex AI, Gemini, and OpenAI integrations. Candidates must have expertise designing/developing models, agents, frameworks, etc. within a greenfield context and hands-on experience in evaluating model performance using structured testing methodologies.
Key Responsibilities
Develop, train, and validate predictive and generative AI models using Python and industry-standard libraries such as NumPy, pandas, scikit-learn, matplotlib, and TensorFlow/PyTorch
Design and execute structured experiments to evaluate LLM and model performance using quantitative testing methodologies
Collaborate with AI engineers and software developers to operationalize models within Google Vertex AI or similar environments
Apply data wrangling, feature engineering, and exploratory data analysis (EDA) to ensure data quality and model interpretability
Build and visualize insights using Power BI, matplotlib, or other data visualization tools
Implement best practices in prompt engineering, context filtering, and AI model evaluation
Contribute to continuous improvement of the organization's AI and machine learning capabilities through research and adoption of emerging methods
Communicate findings and recommendations clearly to technical and business stakeholders
Required Skills & Experience
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field
5+ years of hands-on experience in Python for data science and AI solution development
Background in cloud-based data environments (Google Cloud Platform, Azure, or AWS)
Familiarity with LLM prompt design, AI observability, or model monitoring frameworks
Proven experience with Google Vertex AI, Gemini, or OpenAI models
Proficiency with NumPy, pandas, scikit-learn, or similar Python libraries and visualization libraries such as matplotlib or seaborn
Strong foundation in machine learning techniques, including regression, classification, clustering, and evaluation metrics
Experience designing and executing experiments to evaluate model performance
Excellent analytical, problem-solving, and communication skills
Experience presenting findings to cross-functional or business leadership audiences
Exposure to deep learning or generative AI workflows (e.g., PyTorch, TensorFlow, Hugging Face Transformers) is preferred
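The machine learning foundation listed above (regression, classification, clustering, and evaluation metrics) can be illustrated with a standard scikit-learn train-and-evaluate loop; the synthetic dataset below is a stand-in for real model inputs, not the client's data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data with a fixed seed for reproducibility
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a baseline classifier on the training split only
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on held-out data: hard-label accuracy and ranking quality (AUC)
accuracy = accuracy_score(y_test, model.predict(X_test))
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```

The same held-out-evaluation pattern generalizes to the structured LLM experiments mentioned in the responsibilities: define a test set up front, score predictions with agreed metrics, and compare candidates quantitatively.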