Data Scientist
Data scientist job in Chicago, IL
The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S. and has supported over 20,000 healthcare professionals and team members with close to 1,500 health and wellness offices across 48 states in four distinct categories: dental care, urgent care, medical aesthetics, and animal health. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Chapter Aesthetic Studio, and Lovet Pet Health Care. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.
As a reflection of our current needs and planned growth, we are pleased to offer a new opportunity to join our dedicated team as a Data Scientist.
Join us as a Data Scientist and play a key role in shaping how patients access care across a growing network of brands including Aspen Dental, ClearChoice, WellNow, Chapter Aesthetic Studio, and Lovet. In this role, you won't just analyze data - you'll lead end-to-end initiatives that shape how we optimize revenue across offices, days, and even hours. From designing strategies, deploying solutions and building performance dashboards to partnering with data science on automation and collaborating with teams across Finance, Marketing, Product, Technology and Operations, you'll have a direct hand in driving measurable results.
This role is ideal for someone who excels at turning data into decisions, building repeatable processes, and uncovering insights that drive measurable improvements in revenue performance and patient access. You will lead initiatives across forecasting, scheduling optimization, demand modeling, capacity planning, and revenue strategy, while also shaping how analytics is delivered and scaled across the organization. If you're a builder who loves solving complex problems with data, operational logic, and automation, this opportunity is for you.
Essential Responsibilities:
Revenue Strategy & Optimization
Lead strategy development for optimizing revenue performance at the office, day, and hour level by leveraging forecasting, scheduling, and demand modeling - while balancing access for patients and operational efficiency.
Build analytical frameworks to support pricing, demand forecasting, scheduling, and access optimization.
Identify revenue opportunities through data-driven analysis of booking trends, cancellations, no-shows, and utilization.
Monitor and update demand and schedule availability through the analysis of historical and future booking trends, the pricing environment, industry capacity trends, competitive landscape, and other factors.
Analytics, Insights & Experimentation
Develop and maintain forecasting and demand models, dashboards, and scenario analyses (a minimal forecasting sketch follows this subsection).
Run experiments and structured tests to evaluate new operational and scheduling strategies.
Create clear, actionable insights that influence senior leaders and cross-functional partners.
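The forecasting and demand-model work above could be prototyped in many ways; purely as a hedged illustration (not Aspen's actual method), here is a seasonal-naive baseline for hourly appointment demand in pandas. The column names, the 28-day lookback, and the weekly-seasonality assumption are all hypothetical.

```python
# Hypothetical sketch: a seasonal-naive baseline for hourly appointment demand.
# Assumes a DataFrame with columns office_id, timestamp (hourly), and bookings;
# none of these names come from the posting.
import pandas as pd

def seasonal_naive_forecast(history: pd.DataFrame, horizon_hours: int = 24 * 7) -> pd.DataFrame:
    """Forecast each office's next `horizon_hours` of bookings by repeating the
    average of the same weekday/hour slot over the trailing four weeks."""
    df = history.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df["dow"] = df["timestamp"].dt.dayofweek
    df["hour"] = df["timestamp"].dt.hour

    # Average demand per office for each (weekday, hour) slot over the last 28 days.
    recent = df[df["timestamp"] >= df["timestamp"].max() - pd.Timedelta(days=28)]
    profile = (
        recent.groupby(["office_id", "dow", "hour"])["bookings"]
        .mean()
        .rename("forecast")
        .reset_index()
    )

    # Build the future hourly index and join the seasonal profile onto it.
    start = df["timestamp"].max() + pd.Timedelta(hours=1)
    future = pd.MultiIndex.from_product(
        [df["office_id"].unique(), pd.date_range(start, periods=horizon_hours, freq="h")],
        names=["office_id", "timestamp"],
    ).to_frame(index=False)
    future["dow"] = future["timestamp"].dt.dayofweek
    future["hour"] = future["timestamp"].dt.hour
    return future.merge(profile, on=["office_id", "dow", "hour"], how="left")
```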
Process Building & Automation
Map existing manual workflows and identify opportunities to automate recurring analyses or reporting.
Cross-Functional Leadership
Work closely with Operations, Finance, Product, Marketing, and Clinical teams to align strategies and execution.
Help shape and scale the function by building new playbooks, reports, and best practices.
Act as a subject matter expert in forecasting, demand modeling, and capacity optimization.
Qualifications (Skills-Based):
We welcome candidates with diverse academic and career pathways. You may have gained your skills through industry experience, coursework, certificates, or hands-on practice.
Experience/Education:
5+ years of experience in Revenue Management, Pricing, Operations Research, Supply/Demand Optimization (Airline, Travel, Healthcare, or multi-location service industries preferred).
Bachelor's degree in Business, Finance, Economics, Analytics, or Statistics required; Master's degree a plus.
Experience working alongside data science/engineering teams to automate and scale analytics processes.
Exceptional analytical, problem-solving, and communication skills - with the ability to influence senior stakeholders.
Detail-oriented, self-starter mindset with a passion for driving results.
Strong analytical and quantitative skills, with experience in forecasting, modeling, or optimization.
Strong technical proficiency in SQL and a modern BI platform (e.g., Tableau, Looker).
Familiarity with scripting (e.g., Python or R) or automation tools (e.g., DBT, Airflow) - not required, but helpful.
Additional Job Description:
Base Pay Range: $115,000 - $130,000, plus 10% annual bonus (Actual pay may vary based on experience, performance, and qualifications.)
A generous benefits package that includes paid time off, health, dental, vision, and 401(k) savings plan with match.
If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
Data Scientist
Data scientist job in Indianapolis, IN
We are seeking a Junior Data Scientist to join our large Utility client in downtown Indianapolis. This position will be hired as a Full-Time employee. This entry-level position is perfect for individuals eager to tackle real-world energy challenges through data exploration, predictive modeling, and collaborative problem-solving. As part of our team, you'll work closely with seasoned data scientists, analysts, architects, engineers, and governance specialists to generate insights that power smarter decisions and help shape the future of energy.
Key Responsibilities
Partner cross-functionally with data scientists, data architects and engineers, machine learning engineers, data analysts, and data governance experts to deliver integrated data solutions.
Collaborate with business stakeholders and analysts to define clear project requirements.
Collect, clean, and preprocess both structured and unstructured data from utility systems (e.g., meter data, customer data).
Conduct exploratory data analysis to uncover trends, anomalies, and opportunities to enhance grid operations and customer service.
Apply traditional machine learning techniques and generative AI tools to build predictive models that address utility-focused challenges, particularly in the customer domain (e.g., outage restoration, program adoption, revenue assurance).
Present insights to internal stakeholders in a clear, compelling format, including data visualizations that drive predictive decision-making.
Document methodologies, workflows, and results to ensure transparency and reproducibility.
Serve as a champion of data and AI across all levels of the client's US Utilities organization.
Stay informed on emerging industry trends in utility analytics and machine learning.
Requirements
Bachelor's degree in data science, statistics, computer science, engineering, or a related field. Master's degree or Ph.D. is preferred.
1-3 years of experience in a data science or analytics role.
Strong applied analytics and statistics skills, such as distributions, statistical testing, regression, etc.
Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn (see the sketch after this requirements list).
Proficiency in traditional machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVM), convolutional neural networks (CNN), random forest, gradient-boosted trees, etc.
Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows.
Proficiency in SQL, with experience writing complex queries and working with relational data structures. Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc.
Proficient in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects. Experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred.
Experience with cloud computing platforms (GCP preferred).
Ability to manage multiple priorities in a fast-paced environment.
Interest in learning more about the customer-facing side of the utility industry.
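As a purely illustrative companion to the requirements above, and not part of the client's stack or data, a minimal scikit-learn sketch of a gradient-boosted classifier with cross-validated evaluation might look like this; the synthetic dataset stands in for preprocessed meter or customer features.

```python
# Hypothetical sketch: gradient-boosted classification with cross-validated evaluation.
# The synthetic features and binary label are placeholders, not actual utility data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for preprocessed meter/customer features and a binary outcome
# (e.g., whether a customer adopts a program).
X, y = make_classification(n_samples=5_000, n_features=20, n_informative=8, random_state=0)

model = GradientBoostingClassifier(random_state=0)

# 5-fold cross-validation on ROC AUC keeps the evaluation honest on imbalanced outcomes.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```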
Compensation: Up to $130,000 annual salary. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
Data Scientist
Data scientist job in Chicago, IL
This role supports a financial services organization by applying advanced data science and machine learning techniques to solve complex business problems using large-scale datasets. The position focuses on end-to-end feature engineering, model development, and writing production-quality code in a fast-paced, collaborative environment. The individual partners closely with product and engineering teams to uncover trends, improve algorithm performance, and drive data-informed decisions.
Key Responsibilities
Independently analyze and aggregate large, complex datasets to identify anomalies that affect model and algorithm performance
Own the full lifecycle of feature engineering, including ideation, development, validation, and selection (a sketch of this lifecycle follows the responsibilities list)
Develop and maintain production-quality code in a fast-paced, agile environment
Solve challenging analytical problems using extremely large (terabyte-scale) datasets
Evaluate and apply a range of machine learning techniques to determine the most effective approach for business use cases
Collaborate closely with product and engineering partners to identify trends, opportunities, and data-driven solutions
Communicate insights, results, and model performance clearly through visualizations and explanations tailored to non-technical stakeholders
Adhere to established standards and practices to ensure the security, integrity, and confidentiality of systems and data
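For illustration only, and not the firm's actual pipeline, one pass through the feature-engineering lifecycle referenced above could be sketched roughly as follows; all column names and thresholds are hypothetical.

```python
# Hypothetical sketch of one feature-engineering pass: derive candidate features,
# validate them for nulls/constants, and select by model-based importance.
# All column names are placeholders, not from the posting.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

def engineer_features(txns: pd.DataFrame) -> pd.DataFrame:
    """Derive candidate features from a raw transactions table."""
    feats = pd.DataFrame(index=txns.index)
    feats["log_amount"] = np.log1p(txns["amount"])
    feats["is_weekend"] = pd.to_datetime(txns["timestamp"]).dt.dayofweek >= 5
    cust_mean = txns.groupby("customer_id")["amount"].transform("mean")
    feats["amount_vs_cust_mean"] = txns["amount"] / cust_mean.clip(lower=1e-9)
    return feats

def validate_features(feats: pd.DataFrame) -> pd.DataFrame:
    """Basic validation: drop features that are mostly null or constant."""
    keep = [c for c in feats.columns
            if feats[c].isna().mean() < 0.2 and feats[c].nunique(dropna=True) > 1]
    return feats[keep].fillna(0.0)

def select_features(feats: pd.DataFrame, target: pd.Series) -> list[str]:
    """Model-based selection: keep features above the median importance."""
    selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0),
                               threshold="median")
    selector.fit(feats, target)
    return list(feats.columns[selector.get_support()])
```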
Minimum Qualifications
Bachelor's degree in Mathematics, Statistics, Computer Science, Operations Research, or a related field
At least 4 years of professional experience in data science, analytics, engineering, or a closely related discipline
Hands-on experience building data science pipelines and workflows using Python, R, or similar programming languages
Strong SQL skills, including query development and performance tuning
Experience working with large-scale, high-volume datasets (terabyte-scale)
Practical experience applying a variety of machine learning methods and understanding the parameters that impact model performance
Familiarity with common machine learning libraries (e.g., scikit-learn, Spark ML, or similar)
Experience with data visualization tools and techniques
Ability to write clean, maintainable, and production-ready code
Strong interest in rapid prototyping, experimentation, and proof-of-concept development
Proven ability to communicate complex analytical findings to non-technical audiences
Ability to meet standard employment screening requirements
Data Scientist
Data scientist job in Chicago, IL
Minimum Qualifications:
● 5+ years of experience leading data science projects that have a direct impact on a company's objectives, or a PhD in a quantitative field such as Statistics, Data Science, or Computer Science with 3+ years of experience
● 5+ years of experience utilizing data mining techniques and ML models to assist business decision making; hands-on experience with deep learning frameworks, LLMs, GenAI tools, and NLP techniques
● Deep expertise in statistical methods and machine learning concepts, with the ability to mentor team members on methodologies, model tuning, and evaluation techniques
● 2+ years of hands-on experience with deep learning frameworks, LLMs, GenAI tools, and NLP techniques
● 5+ years of experience using Python to process large, diverse datasets and to develop and deploy predictive models in cloud-based environments and other computing platforms
● 5+ years of experience in SQL and cloud-hosted data platforms (Google Cloud Platform, AWS, etc.)
● Demonstrated ability to assist business decision-making through data mining and machine learning
● Strong communication skills to collaborate effectively with business stakeholders; must be able to interact cross-functionally and drive both business and technical discussions
● Ability to translate complex business problems into actionable project plans and solve
Data Architect
Data scientist job in Oak Brook, IL
GeoWealth is a Chicago-based fintech firm that offers an award-winning digital advisory platform, including Turnkey Asset Management Platform (“TAMP”) capabilities. We deliver a comprehensive and fully integrated wealth management technology platform to professionals in the financial services industry.
OPPORTUNITY:
We're looking for a Data Architect to join our Engineering Team. In this role, you will oversee the overall data architecture, helping us deliver our best-in-class solutions to our customers. This role will be key in organizing, designing, and leading our team through well-designed data architecture. If you love architecting complex systems, delivering customer-focused software, designing best-in-class systems, and leading data architecture design, this role is for you.
RESPONSIBILITIES:
Own data architecture and oversee data implementation
Set coding/implementation standards
Lead our data warehouse design
Deliver performant, maintainable, and quality software in collaboration with our teams.
Improve our database design to reduce replication and increase performance
Partner with other architects and engineers to produce better designed systems
SKILLS, KNOWLEDGE, AND EXPERIENCE:
5+ years of experience as a Data Architect or in an equivalent role
Bachelor's degree in Computer Science or an equivalent degree
Hands-on experience with Oracle
Experience designing and implementing a data warehouse
Experience with the following is preferred but not required: designing and building monolithic and distributed systems, Postgres, Logi Symphony, PowerBI, Java and JIRA/Confluence
COMPANY CULTURE & PERKS - HIGHLIGHTS:
Investing in Your Growth 🌱
Casual work environment with fun, hard-working, and open-minded coworkers
Competitive salary with opportunity for performance-based annual bonus
Opportunities to up-skill, explore new responsibilities, and network across departments
Defined and undefined career pathways allowing you to grow your own way
Work/Life Balance 🗓️
Flexible PTO and work schedule to ensure our team balances work and life
Hybrid work schedule
Maternity and paternity leave
Taking Care of Your Future ♥️
Medical, Dental, and Vision, Disability insurance
Free access to Spring Health, a comprehensive mental health solution
401(k) with company match and a broad selection of investments
Voluntary insurance: short-term disability, long-term disability, and life insurance
FSA and transit benefits for employees that contribute pre-tax dollars
Other Fun Stuff ⭐
Free on-site gym and parking
Weekly catered lunches in the office, plus monthly happy hours
Stocked kitchen with snacks and drinks
GeoWealth was recognized as “Best Place to Work” by Purpose Jobs in 2025, 2024 and 2022
GeoWealth was recognized as “Best Place to Work” by Built In in 2024, 2023 and 2022
SALARY RANGE:
Starting at $170,000-$220,000 + Benefits + Opportunity for Performance Bonus
This is an estimated range based on the circumstances at the time of posting; however, it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
Senior Data Engineer
Data scientist job in Indianapolis, IN
Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. This successful resource will be responsible for supporting the large-scale data modernization initiative and for operationalizing the platform moving forward.
RESPONSIBILITIES:
Design, develop, and refine BI focused data architecture and data platforms
Work with internal teams to gather requirements and translate business needs into technical solutions
Build and maintain data pipelines supporting transformation
Develop technical designs, data models, and roadmaps
Troubleshoot and resolve data quality and processing issues
Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
Mentor and support junior team members
REQUIREMENTS:
5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
5+ years of experience across end-to-end data analysis and development
Experience using Git version control
Advanced SQL skills
Strong experience with AWS cloud
PREFERRED SKILLS:
Experience with Snowflake
Experience with Python or R
Bachelor's degree in an IT-Related field
TERMS:
This is a direct hire opportunity with a salary up to $130K based on experience. They offer benefits including medical, dental, and vision along with generous PTO, 401K matching, wellness programs, and other benefits.
Systems Data Analyst
Data scientist job in Indianapolis, IN
Beacon Hill Technologies is seeking a proactive, data-driven analyst with strong initiative: someone who can expand on existing frameworks, validate data, and independently build tools that elevate team performance. You communicate clearly, think critically, and enjoy transforming complex technical information into meaningful business insights. You thrive in fast-paced environments and are comfortable working hands-on with evolving data systems. This position is hybrid!
Required Skills:
Bachelor's degree in Information Systems, Business Analytics, IT, or a related field (or equivalent experience).
3-5 years of experience in data analysis, IT operations, or an A/V-adjacent environment.
Proficiency with:
Tableau (strongly preferred; team's primary tool)
ServiceNow reporting
Excel (advanced formulas, macros)
Python (especially for Tableau-based scripting)
Experience working with large datasets and multiple data sources.
Ability to validate, test, and ensure data accuracy and integrity.
Strong communication skills; able to translate technical data into clear business insights.
Demonstrated ability to independently build new reports, dashboards, or tools when standard solutions are not available.
Desired Skills:
Experience with Cisco Spaces, digital room utilization analytics, or space-management tools.
Familiarity with A/V environments, technologies, or governance frameworks (big plus, but not required).
Experience developing or managing lifecycle models, performance metrics, or executive-level reporting dashboards.
Knowledge of AI-assisted reporting or automation tools.
Experience with procurement forecasting, budgeting data, or operational strategy analytics.
Beacon Hill is an equal opportunity employer and individuals with disabilities and/or protected veterans are encouraged to apply.
California residents: Qualified applications with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.
If you would like to complete our voluntary self-identification form, please click here or copy and paste the following link into an open window in your browser: *****************************************
Completion of this form is voluntary and will not affect your opportunity for employment, or the terms or conditions of your employment. This form will be used for reporting purposes only and will be kept separate from all other records.
Company Profile:
Beacon Hill Technologies, a premier National Information Technology Staffing Group, provides world class technology talent across all industries utilizing a complete suite of staffing services. Beacon Hill Technologies' dedicated team of recruiting and staffing experts consistently delivers quality IT professionals to solve our customers' technical and business needs.
Beacon Hill Technologies covers a broad spectrum of IT positions, including Project Management and Business Analysis, Programming/Development, Database, Infrastructure, Quality Assurance, Production/Support and ERP roles.
Learn more about Beacon Hill and our specialty divisions, Beacon Hill Associates, Beacon Hill Financial, Beacon Hill HR, Beacon Hill Legal, Beacon Hill Life Sciences and Beacon Hill Technologies by visiting *************
Benefits Information:
Beacon Hill offers a robust benefit package including, but not limited to, medical, dental, vision, and federal and state leave programs as required by applicable agency regulations to those that meet eligibility. Upon successfully being hired, details will be provided related to our benefit offerings.
We look forward to working with you.
Beacon Hill. Employing the Future™
Data Engineer
Data scientist job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four: one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics (see the sketch after this list)
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
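As a hedged illustration of the kind of ingestion endpoint described above, and not Scaylor's actual API or schema, a minimal FastAPI sketch that accepts an Excel upload and normalizes its columns might look like this.

```python
# Hypothetical sketch: a small FastAPI endpoint that accepts an Excel upload,
# normalizes column names, and reports what it parsed. The path and column
# handling are illustrative only.
import io

import pandas as pd
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def normalize_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Lower-case, trim, and snake_case column names so downstream mappings line up."""
    df = df.copy()
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    return df

@app.post("/ingest/excel")
async def ingest_excel(file: UploadFile = File(...)):
    raw = await file.read()
    df = normalize_columns(pd.read_excel(io.BytesIO(raw)))
    # In a real pipeline, schema mapping and validation would run here before
    # the cleaned rows are written to the warehouse.
    return {
        "filename": file.filename,
        "rows": int(len(df)),
        "columns": list(df.columns),
    }
```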
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python + JS
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founder and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Data Analyst
Data scientist job in Lake Zurich, IL
Senior Data Analyst on W2
Duration: 3 months (may extend/convert)
We are seeking a Senior Data Analyst to ensure the accuracy, consistency, and integrity of master data across SAP and other platforms for our medical device and pharmaceutical business. This role involves data governance, process optimization, and cross-functional collaboration to drive operational efficiencies and implement best practices.
Key Responsibilities:
Manage day-to-day activities of the Customer Master Data team.
Support data governance, ensure data quality, and optimize processes.
Assist in pricing, licensing updates, rebates, and regulatory compliance.
Serve as SME for new system integrations and enhancements.
Participate in process improvement initiatives and special projects.
Requirements:
Bachelor's degree + 5+ yrs experience (3+ yrs in pharma/medical device).
SAP experience, strong analytical and problem-solving skills.
Expert in Excel and handling large datasets.
Strong communication, collaboration, and leadership skills.
Ability to work independently in a fast-paced environment.
Interview Process:
1st round: Sr. Manager & Sr. Director (Virtual)
2nd round: Team Lead (Virtual)
Senior Data Engineer
Data scientist job in Indianapolis, IN
Senior Data Engineer - Azure Data Warehouse (5-7+ Years Experience)
Long-term renewing contract
This role supports Azure-based data warehouse and dashboarding initiatives. You will work alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.
Key Responsibilities
· Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server
· Apply Medallion architecture principles and best practices for data lake and warehouse design (a bronze-to-silver sketch follows the responsibilities list)
· Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
· Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
· Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
· Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
· Support dashboarding initiatives with Power BI and wireframe collaboration
· Ensure auditability, lineage, and performance across SQL Server and Oracle environments
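As a minimal sketch of the Medallion promotion pattern mentioned above, and not the client's actual implementation, a bronze-to-silver step in PySpark could look roughly like this; paths, columns, and table names are placeholders.

```python
# Hypothetical bronze-to-silver promotion in PySpark on Databricks.
# Paths and column names are placeholders, not the client's.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw, append-only landing zone (schema-on-read).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

# Silver: typed, de-duplicated, conformed records ready for modeling and BI.
silver = (
    bronze
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["event_id"])
    .filter(F.col("event_id").isNotNull())
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("/mnt/lake/silver/events")
)
```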
Required Skills & Experience
· 5-7+ years in data engineering, data warehouse design, and ETL development
· Strong expertise in Azure Data Factory, Databricks, and Python
· Deep understanding of SQL Server, Oracle, PostgreSQL, and Cosmos DB, and of data modeling standards
· Proven experience with Medallion architecture and data lakehouse best practices
· Hands-on with CI/CD, DevOps, and deployment automation
· Agile mindset with ability to manage multiple priorities and deliver on time
· Excellent communication and documentation skills
Bonus Skills
· Experience with GCP or AWS
· Familiarity with Jira, Confluence, and AppDynamics
Sr. Data Engineer - PERM - MUST BE LOCAL
Data scientist job in Naperville, IL
Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. Candidates must be local to Illinois because future hybrid onsite work in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities and a profit-sharing bonus.
This position is focused on building modern data pipelines, integrations and back-end data solutions. Selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts and other engineers to design and deliver data solutions that power business insights and AI products.
Responsibilities:
Design and develop scalable data pipelines for ingestion, transformation and integration using AWS services.
Pull data from PostgreSQL and SQL Server to migrate to AWS.
Create and modify jobs in AWS and modify logic in SQL Server.
Create SQL queries, stored procedures and functions in PostgreSQL and RedShift.
Provide input on data modeling and schema design as needed.
Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
Support inbound/outbound data flows, including APIs, S3 replication and secured data.
Assist with data visualization/reporting as needed.
Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.
Qualifications:
5+ years of data engineering experience.
Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
Strong experience with SQL, Python and PySpark.
A background in supply chain, logistics or distribution would be a plus.
Experience with Power BI is a plus.
Data Analyst
Data scientist job in Deerfield, IL
Our client is currently seeking a Data Analyst
Hybrid to the northern suburbs of Chicago
W2 Only
In this role, you'll evaluate usage, adoption, and performance of IIN systems, identify data mining opportunities, and integrate findings with key performance indicators. You'll collaborate with cross-functional teams to deliver dashboards, reports, and scorecards that inform strategy.
What You'll Do
Analyze usage, adoption, and efficacy of IIN systems.
Identify and execute data mining opportunities.
Integrate insights with traditional KPIs.
Provide analytical support for projects and stakeholders.
Develop and share dashboards, reports, and scorecards.
Ensure data accuracy, format, and availability.
What We're Looking For
Bachelor's degree in Statistics, Data Science, Mathematics, or related field.
Strong analytical skills and ability to interpret complex datasets.
Advanced Excel skills (pivot tables, VLOOKUP, Power Query, visualization).
Expertise in Tableau for dashboard design and optimization.
Hands-on experience with Snowflake and SQL for data warehousing and analysis.
Strong business acumen and ability to align data strategies with goals.
Excellent communication and collaboration skills.
Nice to Have
Experience with KPI reporting and BI tools.
Familiarity with advanced data visualization techniques.
Rate: $65-90/HR
Data Architect - Pharma
Data scientist job in Chicago, IL
MathCo
Role - Data/AI Engineering Manager
Onsite - Chicago - 4 days in office (Mandatory)
Industry - Pharma (Mandatory)
As platform architect/owner, you will:
Lead the end-to-end architecture, lifecycle, and governance of the AI/Analytics platform, defining standards, reusable components, and integration patterns.
Partner with AI/Data architects to enable scalable model deployment and enhance agentic orchestration.
Translate business needs into platform features, manage onboarding, documentation, and cross-functional collaboration for platform adoption.
Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability.
Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations.
Stay current on key AI platform developments and assess their impact on architecture and client strategy.
Coach others, recognize their strengths, and encourage them to take ownership of their personal development.
Skills Required
Experience in designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP)
Proven ability to carry out complex Proof of Concept (POC), pilot projects, and limited production rollouts for AI use-cases, focusing on developing new or improved techniques and procedures.
Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices
Years of Experience
Minimum of 8 years of relevant experience, preferably with a consulting background and experience with Pharma clients
Data Architect
Data scientist job in Chicago, IL
Job Title: Architect / Senior Data Engineer
We are seeking a highly skilled Architect / Senior Data Engineer to design, build, and optimize our modern data ecosystem. The ideal candidate will have deep experience with AWS cloud services, Snowflake, and dbt, along with a strong understanding of scalable data architecture, ETL/ELT development, and data modeling best practices.
Key Responsibilities
Architect, design, and implement scalable, reliable, and secure data solutions using AWS, Snowflake, and dbt.
Develop end-to-end data pipelines (batch and streaming) to support analytics, machine learning, and business intelligence needs.
Lead the modernization and migration of legacy data systems to cloud-native architectures.
Define and enforce data engineering best practices including coding standards, CI/CD, testing, and monitoring.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
Optimize Snowflake performance through query tuning, warehouse sizing, and cost management.
Establish and maintain data governance, security, and compliance standards across the data platform.
Mentor and guide junior data engineers, providing technical leadership and direction.
Required Skills & Qualifications
8+ years of experience in Data Engineering, with at least 3+ years in a cloud-native data environment.
Hands-on expertise in AWS services such as S3, Glue, Lambda, Step Functions, Redshift, and IAM.
Strong experience with Snowflake - data modeling, warehouse design, performance optimization, and cost governance.
Proven experience with dbt (data build tool) - model development, documentation, and deployment automation.
Proficient in SQL, Python, and ETL/ELT pipeline development.
Experience with CI/CD pipelines, version control (Git), and workflow orchestration tools (Airflow, Dagster, Prefect, etc.).
Familiarity with data governance and security best practices, including role-based access control and data masking.
Strong understanding of data modeling techniques (Kimball, Data Vault, etc.) and data architecture principles.
Preferred Qualifications
AWS Certification (e.g., AWS Certified Data Analytics - Specialty, Solutions Architect).
Strong communication and collaboration skills, with a track record of working in agile environments.
Data Architect
Data scientist job in Springfield, IL
Krasan Consulting Services is a Chicago boutique management consulting and technology solutions integrator. As a certified BEP WBE with the State of Illinois and a certified WBE, MBE, and DBE recognized by the City of Chicago, Krasan provides complex technology solutions and services for clients in the Public and Commercial sectors. With decades of experience, Krasan specializes in end-to-end technology solutions, integration, and implementation services and is proud to enable the success of our customers.
Role Title: Data Architect
Location - Remote - Springfield, IL
Job Description:
Determine where the data resides.
Create an architectural design for migration.
Plan the migration path from start to finish.
Collaborate with the IEPA during system design.
Confirm data security and integrity during migration.
Design and integrate the business and technical components of the EIMS, including future enhancements.
Confirm the solution meets all IEPA requirements and integrates seamlessly with other environmental information systems.
Provide technical leadership and guidance to development teams.
Confirm system compliance with relevant regulations.
Minimum seven (7) years of experience in solution architecture and system integration.
Knowledge of environmental information or permitting systems and interoperability.
Experience with technical leadership and project management.
Knowledge of CROMERR and FedRAMP solution design.
Experience serving as Offeror's business architect (or similar role) for at least two (2) full life cycle implementations of the offered solution end-to-end.
Senior Data Engineer
Data scientist job in Chicago, IL
This role requires visa-independent candidates.
Note: OPT, CPT, and H1B holders cannot be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue (see the Glue job sketch after the responsibilities list)
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
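As a minimal sketch only, with an invented database, table, and bucket that are not the client's, a Glue job of the kind described above might be structured like this.

```python
# Hypothetical AWS Glue job skeleton: read from the Data Catalog, clean, write Parquet to S3.
# Database, table, and bucket names are placeholders, not from the posting.
import sys

from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a cataloged source table into a DynamicFrame.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Basic cleanup: drop fully null fields, then de-duplicate via the Spark DataFrame API.
cleaned = DropNullFields.apply(frame=source)
deduped = cleaned.toDF().dropDuplicates(["order_id"])

# Write the result back to S3 as Parquet for downstream consumers (e.g., Redshift Spectrum).
deduped.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```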
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Engineer
Data scientist job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability (a minimal load-and-transform sketch follows the responsibilities list).
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
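Purely as illustration, and not the firm's actual pipeline or configuration, a small Snowflake load-and-transform step in Python could look like the following; the feed URL, credentials, and table names are placeholders.

```python
# Hypothetical sketch: land an external pricing feed in Snowflake, then curate it with SQL.
# Connection parameters, the feed URL, and all table/column names are placeholders.
import os

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Stand-in for a vendor pricing feed or API extract.
prices = pd.read_csv("https://example.com/feeds/daily_prices.csv")
prices["load_ts"] = pd.Timestamp.now(tz="UTC").isoformat()
prices.columns = [c.strip().upper() for c in prices.columns]  # match unquoted Snowflake identifiers

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="MARKET_DATA",
    schema="RAW",
)

try:
    # Land raw rows, then keep only the latest load per security/date in a curated table.
    write_pandas(conn, prices, "DAILY_PRICES_RAW", auto_create_table=True)
    conn.cursor().execute("""
        CREATE OR REPLACE TABLE CURATED.DAILY_PRICES AS
        SELECT SECURITY_ID, PRICE_DATE, CLOSE_PRICE, LOAD_TS
        FROM RAW.DAILY_PRICES_RAW
        QUALIFY ROW_NUMBER() OVER (
            PARTITION BY SECURITY_ID, PRICE_DATE ORDER BY LOAD_TS DESC
        ) = 1
    """)
finally:
    conn.close()
```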
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
Data Scientist - Operations Research
Data scientist job in Chicago, IL
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.
Description
United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.
Job overview and responsibilities
Provides mathematical modeling and analysis services to support critical financial, operational, and/or strategic planning decisions; engages in supporting project teams in value-added activities that generate practical solutions to complex business problems, explores new business alternatives, and drives improvement in business decisions
* Develops the approach and methods to define and solve management problems through quantitative analysis and analytical models using operations research, machine learning, and structured programming languages
* Identifies, researches, or solves large complex problems using big data and operations research and machine learning principles
* Leverages understanding of the business process to identify and implement operations research solutions that will result in significant bottom-line contributions
* Builds and develops operations research/optimization mathematical model applications, and provides client support leveraging operations research knowledge (a toy linear-programming sketch follows the responsibilities list)
* Participates in model design, prototype, and model development for several efforts that occur simultaneously, and interfaces with product delivery groups
* Raises concerns when scope of analysis may not align with time available and can choose an appropriate scope of analysis to conduct balancing ROI to time available
* Designs analytic plan/develop hypotheses to test; understands limitations of analysis (what it can and cannot be used for)
* Anticipates working-team questions about data and approach
* Identifies solution quality risks and on-time risks
* Understands the business value, process, and expectations before focusing on choice of a technical solution
* Understands the intuition behind the numbers (i.e. does it make sense?)
* Provides on-going analytical services to client organizations
* Communicates results to management and clients
* Contributes deck content and builds the story for the deck with guidance to summarize findings
* Develops and delivers presentations aligned with Ai standards
* Speaks in a manner appropriate for working team and their level +1
* Keeps informed about the latest analytical methods and research in the operations research and analytics fields
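Since the posting emphasizes optimization model prototyping, here is a toy linear program solved with SciPy, offered only as a generic illustration; the routes, coefficients, and constraints are invented and do not reflect United's data or methods.

```python
# Hypothetical sketch: a tiny linear program of the kind an OR workflow might prototype.
# All numbers are invented for illustration.
from scipy.optimize import linprog

# Decision variables: x0, x1 = daily frequencies added on two candidate routes.
# Maximize expected revenue 30*x0 + 24*x1 (in $K), i.e., minimize the negative.
c = [-30.0, -24.0]

# Constraints:
#   aircraft-hours:  5*x0 + 3*x1 <= 60   (fleet time available)
#   crew pairings:   2*x0 + 4*x1 <= 40   (crew availability)
A_ub = [[5.0, 3.0],
        [2.0, 4.0]]
b_ub = [60.0, 40.0]

# Each route can take between 0 and 10 added frequencies.
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10), (0, 10)], method="highs")

print("optimal frequencies:", result.x)
print("expected revenue ($K):", -result.fun)
```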
Qualifications
What's needed to succeed (Minimum Qualifications):
* Master's degree in Operations Research or another related quantitative discipline involving quantitative analysis and application of advanced operations research principles
* Coursework or work experience with mathematical programming techniques
* Coursework or work experience in model prototyping through use of optimization toolkit(s) including CPLEX, AMPL, or OPL
* Coursework or work experience with C, C++, Java, R, Python, or other structured programming language
* Good business, technical, verbal/written communication, presentation and sales skills. Adaptability to changing business environment
* Good interpersonal skills and ability to interact with clients
* Proficient with MS Office
* Successful completion of interview required to meet job qualifications
* Must be legally authorized to work in the United States for any employer without sponsorship
* Reliable, punctual attendance is an essential function of the position
What will help you propel from the pack (Preferred Qualifications):
* Minor in computer science and/or formal advanced computer science coursework preferred
* 1+ years of professional experience in analytical field
* 1+ years designing and programming/coding data structures for large-scale computer models
* Experience with Julia programming language
* Knowledge of United/industry data sources
* Structured programming for large-scale computer models
* Demonstrated ability to create business value
The base pay range for this role is $91,770.00 to $119,514.00.
The base salary range/hourly rate listed is dependent on job-related factors such as experience, education, and skills. This position is also eligible for bonus and/or long-term incentive compensation awards.
You may be eligible for the following competitive benefits: medical, dental, vision, life, accident & disability, parental leave, employee assistance program, commuter, paid holidays, paid time off, 401(k) and flight privileges.
United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status and other protected status as required by applicable law. Equal Opportunity Employer - Minorities/Women/Veterans/Disabled/LGBT.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions. Please contact JobAccommodations@united.com to request accommodation.
Data Scientist/Engineer
Data scientist job in Chicago, IL
InfoTech Resources is seeking a Data Scientist for a hybrid position based in the Chicago Loop, working 2-3 days per week onsite. This opportunity is open to both contract and direct-hire candidates.
The selected candidate will play a key role in designing, developing, and optimizing data-driven AI solutions within a modern cloud-based environment (Google Cloud). The client is a large enterprise company in the insurance sector and this role will join their first dedicated AI/ML team charged with designing and laying a proper foundation for AI/ML solutions. The Data Scientist will use Python and leading AI frameworks to build, train, and evaluate models that support intelligent applications and data analytics initiatives leveraging Google Vertex AI, Gemini, and OpenAI integrations. Candidates must have expertise designing/developing models, agents, frameworks, etc. within a greenfield context and hands-on experience in evaluating model performance using structured testing methodologies.
Key Responsibilities
Develop, train, and validate predictive and generative AI models using Python and industry-standard libraries such as NumPy, pandas, scikit-learn, matplotlib, and TensorFlow/PyTorch
Design and execute structured experiments to evaluate LLM and model performance using quantitative testing methodologies
Collaborate with AI engineers and software developers to operationalize models within Google Vertex AI or similar environments
Apply data wrangling, feature engineering, and exploratory data analysis (EDA) to ensure data quality and model interpretability
Build and visualize insights using Power BI, matplotlib, or other data visualization tools
Implement best practices in prompt engineering, context filtering, and AI model evaluation
Contribute to continuous improvement of the organization's AI and machine learning capabilities through research and adoption of emerging methods
Communicate findings and recommendations clearly to technical and business stakeholders
Required Skills & Experience
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field
5+ years of hands-on experience in Python for data science and AI solution development
Background in cloud-based data environments (Google Cloud Platform, Azure, or AWS)
Familiarity with LLM prompt design, AI observability, or model monitoring frameworks
Proven experience with Google Vertex AI, Gemini, or OpenAI models
Proficiency with NumPy, pandas, scikit-learn, or similar Python libraries and visualization libraries such as matplotlib or seaborn
Strong foundation in machine learning techniques, including regression, classification, clustering, and evaluation metrics
Experience designing and executing experiments to evaluate model performance
Excellent analytical, problem-solving, and communication skills
Experience presenting findings to cross-functional or business leadership audiences
Exposure to deep learning or generative AI workflows (e.g., PyTorch, TensorFlow, Hugging Face Transformers) is preferred
Advisory, Data Scientist - CMC Data Products
Data scientist job in Indianapolis, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview: The Bioproduct Research and Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS- S3, RDS, Lambda/Glue, Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
Additional Preferences
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (********************************************************) for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is $126,000 - $244,200.
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly