
Data scientist jobs in Cedar Rapids, IA

1,064 jobs
  • Sr. Data Scientist

    Insight Global

    Data scientist job in Chicago, IL

    The Senior Data Scientist, Clinical Data Science (HOS & HRA) plays a key role in advancing analytics that improve Medicare Advantage member outcomes and CMS Star Ratings performance. This position supports the design, implementation, and automation of analytic solutions for the Health Outcomes Survey (HOS) and Health Risk Assessment (HRA) programs, two core domains in Aetna's Medicare clinical strategy. The ideal candidate combines strong technical depth in data science and statistical modeling with the ability to translate complex findings into actionable insights for non-technical audiences. This individual will automate recurring data science workflows, conduct robust impact and descriptive analyses, and collaborate closely with clinical, quality, and operations teams to identify emerging opportunities that improve member experience and population health outcomes.

    Clinical Data Science & Analytics
    - Lead the development of analytic models and descriptive frameworks supporting HOS and HRA performance improvement across Medicare Advantage.
    - Conduct impact analyses, trend identification, and segmentation to explain drivers of performance and inform strategy.
    - Automate recurring analytics and reporting pipelines to increase reliability, efficiency, and reproducibility of insights.
    - Apply advanced statistical, predictive, and causal inference methods to evaluate intervention effectiveness and identify member-level opportunities.
    - Develop and refine tools for data visualization and storytelling to communicate results clearly to non-technical stakeholders.
    - Partner with business leaders to translate analytic results into actionable recommendations for program design, member outreach, and care interventions.

    Collaboration & Consultation
    - Serve as a bridge between technical and non-technical teams, ensuring analytic outputs are interpretable and actionable.
    - Collaborate cross-functionally with Clinical Operations, Member Experience, and Quality teams to align analytics with enterprise goals.
    - Support enterprise data modernization and automation initiatives by identifying repeatable use cases for scalable analytics and workflow improvement.
    - Mentor junior data scientists and analysts on best practices for data integrity, modeling, and automation.

    Technical & Operational Excellence
    - Design and maintain automated analytic processes leveraging Python, SQL, and modern cloud environments (e.g., GCP).
    - Ensure accuracy, consistency, and explainability of models and metrics through disciplined version control and validation.
    - Contribute to the team's continuous improvement culture by recommending new methods, tools, or data sources that enhance analytic precision and speed.

    Required Skills & Experience
    - 5+ years of hands-on experience in data science, advanced analytics, or statistical modeling in healthcare, life sciences, or managed care.
    - Strong proficiency in Python, SQL, and data science libraries (e.g., pandas, scikit-learn, statsmodels).
    - Demonstrated ability to automate data workflows and standardize recurring analyses or reporting.
    - Experience applying statistical and descriptive analytics to clinical or quality measurement problems (e.g., HOS, HRA, CAHPS, or HEDIS).
    - Proven success communicating complex findings to non-technical business partners and influencing decision-making.
    - Ability to work effectively in a fast-paced, cross-functional environment.

    Nice to Have Skills & Experience
    - Master's or PhD in Data Science, Statistics, Epidemiology, Public Health, or a related quantitative field.
    - Familiarity with Medicare Advantage, CMS Star Ratings methodology, and clinical quality measures.
    - Experience working within modern cloud environments (e.g., Google Cloud Platform, Databricks) and with workflow orchestration tools (Airflow, dbt).
    - Background in impact measurement, causal inference, or time-series analysis in healthcare contexts.
    $78k-106k yearly est. 1d ago
  • Senior Data Scientist

    KTek Resourcing (4.1 company rating)

    Data scientist job in Chicago, IL

    Role: Senior Data Scientist

    We are seeking a hands-on Senior Data Scientist to join our Insurance Analytics & AI Vertical. The ideal candidate will bring a blend of insurance domain expertise (preferably P&C), a consulting mindset, and strong data science skills. This is a mid-senior-level role focused on delivering value through analytics, stakeholder engagement, and logical problem solving, rather than people management.

    The role involves working closely with EXL teams and clients on reporting, data engineering, transformation, and advanced analytics projects. While strong technical skills are important, we are looking for someone who can engage directly with clients, translate business needs into analytical solutions, and drive measurable impact.

    Key Responsibilities
    - Collaborate with EXL and client stakeholders to design and deliver data-driven solutions across reporting, analytics, and transformation initiatives.
    - Apply traditional statistical methods, machine learning, deep learning, and NLP techniques to solve business problems.
    - Support insurance-focused analytics use cases (with preference for P&C lines of business).
    - Work in a consulting setup: conduct requirement gathering, structure problem statements, and communicate insights effectively to senior stakeholders.
    - Ensure data quality, governance, and compliance with Data Privacy and Protection Guidelines.
    - Independently research, analyze, and present findings, ensuring client-ready deliverables.
    - Contribute to continuous improvement initiatives and support business development activities where required.

    Key Skillsets & Experience
    - 7-12 years of experience in analytics, reporting, dashboarding, ETL, Python/R, and associated data management.
    - Proficiency in machine learning, deep learning algorithms (e.g., neural networks), and text analytics techniques (NLTK, Gensim, LDA, word embeddings such as Word2Vec, FastText, GloVe).
    - Strong consulting background with structured problem-solving and stakeholder management skills.
    - Excellent communication and presentation skills with the ability to influence and engage senior business leaders.
    - Hands-on role with the ability to independently manage client deliverables and operate in cross-cultural, global environments.

    Data Management Skills
    - Strong familiarity with advanced analytics tools (Python, R), BI tools (Tableau, Power BI), and related software applications.
    - Good knowledge of SQL, Informatica, Hadoop/Spark, and ETL tools.
    - Ability to translate business/functional requirements into technical specifications.
    - Exposure to cloud data management and AWS services (preferred).

    Candidate Profile
    - Bachelor's/Master's degree in Economics, Mathematics, Computer Science/Engineering, Operations Research, or related analytical fields.
    - Prior insurance industry experience (P&C preferred) strongly desired.
    - Superior analytical, logical, and problem-solving skills.
    - Outstanding written and verbal communication abilities with a consultative orientation.
    - Flexibility to work in a fast-paced, evolving environment, with occasional visits to the client's Chicago office.
    $88k-119k yearly est. 4d ago
  • Data Scientist/Analyst(Purchasing and Procurement)

    DSM-H Consulting

    Data scientist job in Peoria, IL

    Overview of the role: This is a Purchasing Analyst position supporting analytics within procurement. Candidates familiar with Power BI who have a background in procurement systems may qualify even without a data analyst title. The real focus is someone with procurement experience who is also very strong in data analytics: pulling together multiple data sources to identify opportunities to improve supplier performance, tracking monthly supplier metrics, gathering information from across a large organization, and preparing communications for management. The analyst will examine spend, including single-source spend with suppliers (how much spend is single source and what types of products are involved, by attribute) to identify potential dual-source opportunities; determine where tariffs are being paid and to which suppliers; and propose solutions by running simulations and case studies of how a change in supplier footprint would affect tariff exposure. Dashboards will monitor and consolidate data from various systems to give a full picture of how the team is performing.

    Top Skills
    - Power BI
    - Procurement tools (CICT, Lognet, MRC, PO Inquiry, AoS, or similar)
    - Analyzes procurement data to generate actionable insights
    - Snowflake and data mining
    - Applies problem-solving techniques across diverse procurement scenarios

    Typical task breakdown
    - Assist with monthly reporting on team metrics, cost savings, and tariff analysis
    - Lead development of data analytics to assist category teams in making strategic sourcing decisions

    Interaction with team: Will work as a support to multiple category teams

    Team structure: Report to the MC&H Strategy Manager and collaborate with Category Managers and buyers

    Work environment: Office environment

    Education & Experience Required
    - Years of experience: 3-5
    - Degree requirement: Bachelor's degree
    - Internships accepted as job experience: Yes

    Top 3 Skills
    - Communicates effectively to develop standard procedures
    - Applies problem-solving techniques across diverse procurement scenarios
    - Analyzes procurement data to generate actionable insights

    Additional Technical Skills
    Required:
    - Proficient in Power BI, PROcure, and tools like CICT, Lognet, MRC, PO Inquiry, AoS
    - Expertise in Snowflake and data mining
    Desired:
    - Prior experience in procurement
    - Familiarity with monthly reporting processes, including ABP (Annual Business Plan) and RBM (Rolling Business Management)
    - Demonstrated expertise in cost savings initiatives
    - Machine learning and AI

    Soft Skills
    Required:
    - Strong written and verbal communication skills
    - Balances speed with accuracy in task execution
    - Defines problems and evaluates their impact
    Desired:
    - Emotional intelligence
    - Leadership and team management capabilities
    $69k-95k yearly est. 4d ago
  • Data Architect

    Geowealth

    Data scientist job in Oak Brook, IL

    GeoWealth is a Chicago-based fintech firm that offers an award-winning digital advisory platform, including Turnkey Asset Management Platform ("TAMP") capabilities. We deliver a comprehensive and fully integrated wealth management technology platform to professionals in the financial services industry.

    OPPORTUNITY: We're looking for a Data Architect to join our Engineering Team. In this role, you will oversee the overall data architecture, helping us deliver best-in-class solutions to our customers. This role will be key in organizing, designing, and leading our team through well-designed data architecture. If you love architecting complex systems, delivering customer-focused software, and leading data architecture design, this role is for you.

    RESPONSIBILITIES:
    - Own data architecture and oversee data implementation
    - Set coding/implementation standards
    - Lead our data warehouse design
    - Deliver performant, maintainable, quality software in collaboration with our teams
    - Improve our database design to reduce replication and increase performance
    - Partner with other architects and engineers to produce better-designed systems

    SKILLS, KNOWLEDGE, AND EXPERIENCE:
    - 5+ years of experience as a Data Architect or in an equivalent role
    - Bachelor's degree in computer science or an equivalent degree
    - Hands-on experience with Oracle
    - Has designed and implemented a data warehouse
    - Preferred but not required: designing and building monolithic and distributed systems, Postgres, Logi Symphony, Power BI, Java, and JIRA/Confluence

    COMPANY CULTURE & PERKS - HIGHLIGHTS:
    Investing in Your Growth 🌱
    - Casual work environment with fun, hard-working, and open-minded coworkers
    - Competitive salary with opportunity for a performance-based annual bonus
    - Opportunities to up-skill, explore new responsibilities, and network across departments
    - Defined and undefined career pathways allowing you to grow your own way
    Work/Life Balance 🗓️
    - Flexible PTO and work schedule to ensure our team balances work and life
    - Hybrid work schedule
    - Maternity and paternity leave
    Taking Care of Your Future ♥️
    - Medical, dental, vision, and disability insurance
    - Free access to Spring Health, a comprehensive mental health solution
    - 401(k) with company match and a broad selection of investments
    - Voluntary insurance: short-term disability, long-term disability, and life insurance
    - FSA and transit benefits for employees who contribute pre-tax dollars
    Other Fun Stuff ⭐
    - Free on-site gym and parking
    - Weekly catered lunches in the office, plus monthly happy hours
    - Stocked kitchen with snacks and drinks
    - GeoWealth was recognized as a "Best Place to Work" by Purpose Jobs in 2025, 2024, and 2022, and by Built In in 2024, 2023, and 2022

    SALARY RANGE: Starting at $170,000-$220,000 + benefits + opportunity for performance bonus. This is an estimated range based on circumstances at the time of posting and may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
    $170k-220k yearly 3d ago
  • Big Data Consultant

    Clifyx

    Data scientist job in Chicago, IL

    Job Title: Big Data Engineer
    Employment Type: W2 Contract

    Detailed Job Description: We are seeking a skilled and experienced Big Data Platform Engineer with 7+ years of experience and a strong background in both development and administration of big data ecosystems. The ideal candidate will be responsible for designing, building, maintaining, and optimizing scalable data platforms that support advanced analytics, machine learning, and real-time data processing.

    Key Responsibilities:
    Platform Engineering & Administration
    - Install, configure, and manage big data tools such as Hadoop, Spark, Kafka, Hive, HBase, and others.
    - Monitor cluster performance, troubleshoot issues, and ensure high availability and reliability.
    - Implement security policies, access controls, and data governance practices.
    - Manage upgrades, patches, and capacity planning for big data infrastructure.
    Development & Data Engineering
    - Design and develop scalable data pipelines using tools like Apache Spark, Flink, NiFi, or Airflow.
    - Build ETL/ELT workflows to ingest, transform, and load data from various sources.
    - Optimize data storage and retrieval for performance and cost-efficiency.
    - Collaborate with data scientists and analysts to support model deployment and data exploration.
    $74k-99k yearly est. 5d ago
  • Data Engineer

    Scaylor

    Data scientist job in Chicago, IL

    Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used. Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust. We're a small team of four: one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.

    The Role
    You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients. You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.

    What You'll Work On
    - Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
    - Design schemas and transformation logic for structured and semi-structured data
    - Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
    - Help connect backend services to our frontend dashboards (React, Node.js, or similar)
    - Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
    - Collaborate with clients to understand their data problems and design workflows that fix them

    You'd Be Great Here If You
    - Have 3-6 years of experience in data engineering, backend, or full-stack roles
    - Write clean, maintainable code in Python and JavaScript
    - Understand ETL, data normalization, and schema mapping
    - Have experience with SQL and working with legacy databases or systems
    - Are comfortable managing cloud services and debugging data pipelines
    - Enjoy solving messy data problems and care about building things that last

    Nice to Have
    - Familiarity with GCP or SQL databases
    - Understanding of enterprise data flows (ERP, CRM, or financial systems)
    - Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
    - Interest in lightweight ML or LLM-assisted data transformation

    Why Join Scaylor
    - Be one of the first five team members shaping the product and the company
    - Work directly with the founders and help define Scaylor's technical direction
    - Build infrastructure that solves real problems for real companies
    - Earn meaningful equity and have a say in how the company grows

    Compensation
    - $130k-$150k, with raises based on set revenue triggers
    - 0.4% equity
    - Relocation to Chicago, IL required
    $130k-150k yearly 3d ago
  • Data Engineer

    Saragossa

    Data scientist job in Chicago, IL

    Data Engineer - Build the Data Engine Behind AI Execution - Starting Salary $150,000

    You'll be part architect, part systems designer, part execution partner: someone who thrives at the intersection of engineering precision, scalability, and impact. As the builder behind the AI data platform, you'll turn raw, fragmented data into powerful, reliable systems that feed intelligent products. You'll shape how data flows, how it scales, and how it powers decision-making across AI, analytics, and product teams. Your work won't be behind the scenes; it will be the foundation of everything we build.

    You'll be joining a company built for builders. Our model combines AI consulting, venture building, and company creation into one execution flywheel. Here, you won't just build data pipelines; you'll build the platforms that power real products and real companies. You know that feeling when a data system scales cleanly under real-world pressure, when latency drops below target, when complexity turns into clarity, and everything just flows? That's exactly what you'll build here. Ready to engineer the platform that powers AI execution? Let's talk. No up-to-date resume required.
    $150k yearly 1d ago
  • Sr. Data Engineer - PERM - MUST BE LOCAL

    Resource 1, Inc.

    Data scientist job in Naperville, IL

    Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. Candidates must be local to Illinois because future hybrid onsite work in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities, and a profit-sharing bonus. This position is focused on building modern data pipelines, integrations, and back-end data solutions. The selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts, and other engineers to design and deliver data solutions that power business insights and AI products.

    Responsibilities:
    - Design and develop scalable data pipelines for ingestion, transformation, and integration using AWS services.
    - Pull data from PostgreSQL and SQL Server to migrate to AWS.
    - Create and modify jobs in AWS and modify logic in SQL Server.
    - Create SQL queries, stored procedures, and functions in PostgreSQL and Redshift.
    - Provide input on data modeling and schema design as needed.
    - Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
    - Support inbound/outbound data flows, including APIs, S3 replication, and secured data.
    - Assist with data visualization/reporting as needed.
    - Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.

    Qualifications:
    - 5+ years of data engineering experience.
    - Experience with AWS and its associated offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
    - Strong experience with SQL, Python, and PySpark.
    - A background in supply chain, logistics, or distribution is a plus.
    - Experience with Power BI is a plus.
    $75k-100k yearly est. 3d ago
  • Data Analyst

    The Judge Group (4.7 company rating)

    Data scientist job in Deerfield, IL

    Our client is currently seeking a Data Analyst. Hybrid in the northern suburbs of Chicago; W2 only. In this role, you'll evaluate usage, adoption, and performance of IIN systems, identify data mining opportunities, and integrate findings with key performance indicators. You'll collaborate with cross-functional teams to deliver dashboards, reports, and scorecards that inform strategy.

    What You'll Do
    - Analyze usage, adoption, and efficacy of IIN systems.
    - Identify and execute data mining opportunities.
    - Integrate insights with traditional KPIs.
    - Provide analytical support for projects and stakeholders.
    - Develop and share dashboards, reports, and scorecards.
    - Ensure data accuracy, format, and availability.

    What We're Looking For
    - Bachelor's degree in Statistics, Data Science, Mathematics, or a related field.
    - Strong analytical skills and ability to interpret complex datasets.
    - Advanced Excel skills (pivot tables, VLOOKUP, Power Query, visualization).
    - Expertise in Tableau for dashboard design and optimization.
    - Hands-on experience with Snowflake and SQL for data warehousing and analysis.
    - Strong business acumen and ability to align data strategies with goals.
    - Excellent communication and collaboration skills.

    Nice to Have
    - Experience with KPI reporting and BI tools.
    - Familiarity with advanced data visualization techniques.

    Rate: $65-90/hr
    $63k-84k yearly est. 5d ago
  • Data Engineer

    FAC Services, LLC

    Data scientist job in Madison, WI

    About FAC Services
    Want to build your career helping those who build the world? At FAC Services, we handle the business side so architecture, engineering, and construction firms can focus on shaping the future. Our trusted, high-quality solutions empower our partners, and our people, to achieve excellence with integrity, precision, and a personal touch.

    Job Purpose
    FAC Services is investing in a modern data platform to enable trustworthy, timely, and scalable data for analytics, operations, and product experiences. The Data Engineer will design, build, and maintain core data pipelines and models for Power BI reporting, application programming interfaces (APIs), and downstream integrations. This role partners closely with Infrastructure, Quality Assurance (QA), the Database Administrator, and application teams to deliver production-grade, automated data workflows with strong reliability, governance, observability, and Infrastructure as Code (IaC) for resource orchestration.

    Primary Responsibilities:
    Data Architecture & Modeling
    - Design and evolve canonical data models, marts, and lake/warehouse structures to support analytics, APIs, and applications.
    - Establish standards for naming, partitioning, schema evolution, and Change Data Capture (CDC).
    Pipeline Development (ETL/ELT)
    - Build resilient, testable pipelines across Microsoft Fabric Data Factory, notebooks (Apache Spark), and Lakehouse tables for batch and streaming workloads.
    - Design Lakehouse tables (Delta/Parquet) in OneLake; optimize Direct Lake models for Power BI.
    - Implement reusable ingestion and transformation frameworks emphasizing modularity, idempotency, and performance.
    Integration & APIs
    - Engineer reliable data services and APIs to feed web applications, Power BI, and partner integrations.
    - Publish consumer-facing data contracts (Swagger) and implement change notification (webhooks/eventing).
    - Use semantic versioning for breaking changes and maintain a deprecation policy for endpoints and table schemas.
    - Ensure secure connectivity and least-privilege access in coordination with the DBA.
    Infrastructure as Code (IaC) - Resource Orchestration & Security
    - Author and maintain IaC modules to deploy and configure core resources.
    - Use Bicep/ARM (and, where appropriate, Terraform/Ansible) with CI/CD to promote changes across environments.
    DevOps, CI/CD & Testing
    - Own CI/CD pipelines (Git-based promotion) for data code, configurations, and infrastructure.
    - Practice test-driven development with QA (unit, integration, regression) and embed data validations throughout pipelines; collaborate with the Data Quality Engineer to maximize coverage.
    Observability & Reliability
    - Instrument pipelines and datasets for lineage, logging, metrics, and alerts; define Service Level Agreements (SLAs) for data freshness and quality.
    - Perform performance tuning (e.g., Spark optimization, partition strategies) and cost management across cloud services.
    Data Quality & Governance
    - Implement rules for deduplication, reconciliation, and anomaly detection across environments (Microsoft Fabric Lakehouse and Power BI).
    - Contribute to standards for sensitivity labels, Role-Based Access Control (RBAC), auditability, and secure data movement aligned with Infrastructure and Security.
    Collaboration & Leadership
    - Work cross-functionally with Infrastructure, QA, and application teams; mentor peers in modern data engineering practices; contribute to documentation and knowledge sharing.
    - Hand off to the Data Quality Engineer for release gating; coordinate with the Database Administrator on backup/restore posture, access roles, High Availability / Disaster Recovery (HA/DR), and source CDC readiness.

    Qualifications
    To perform this job successfully, an individual must be able to perform each primary duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required.

    Experience (Required)
    - 3+ years designing and operating production ETL/ELT pipelines and data models.
    - Apache Spark (Fabric notebooks, Synapse Spark pools, or Databricks).
    - Advanced T-SQL and Python; experience with orchestration, scheduling, and dependency management.
    - Azure Event Hubs (or Kafka) for streaming; Change Data Capture (CDC).
    - Infrastructure as Code (Bicep/ARM/Terraform); CI/CD (Azure DevOps).
    - API design for data services (REST/OpenAPI), including versioning, pagination, error handling, authentication, and authorization.

    Experience (Preferred)
    - Lakehouse design patterns on Microsoft Fabric; optimization of Power BI with Direct Lake models.
    - Kusto Query Language (KQL), Eventstream, and Eventhouse familiarity.
    - Experience with lineage/metadata platforms and cost governance.
    $76k-101k yearly est. 3d ago
  • Data Engineer

    Binarybees Business Solutions LLC

    Data scientist job in Itasca, IL

    Primary Location: Itasca, IL (hybrid in Chicago's northwest suburbs: 2 days in-office, 3 days WFH)
    Type: Direct hire / permanent role
    Must be a U.S. citizen or green card holder.

    The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.

    What You Bring to the Role (Ideal Experience)
    - Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
    - 5+ years of experience as a Data Engineer.
    - 3+ years of experience with the following:
      - Building and supporting data lakehouse architectures using Delta Lake and change data feeds.
      - Working with PySpark and Python, with strong object-oriented programming (OOP) experience to extend existing frameworks.
      - Designing data warehouse table architecture such as star schema or the Kimball method.
      - Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code.
      - Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets.
      - Establishing scalable and maintainable data integrations and pipelines in Databricks environments.

    Nice to Haves
    - Hands-on experience implementing data solutions using Microsoft Fabric.
    - Experience with machine learning and data science tools.
    - Knowledge of data governance and security best practices.
    - Experience in a larger IT environment with 3,000+ users and multiple domains.
    - Current industry certifications from Microsoft cloud/data platforms or equivalent; one or more of the following is preferred:
      - Microsoft Certified: Fabric Data Engineer Associate
      - Microsoft Certified: Azure Data Scientist Associate
      - Microsoft Certified: Azure Data Fundamentals
      - Google Professional Data Engineer
      - Certified Data Management Professional (CDMP)
      - IBM Certified Data Architect - Big Data

    What You'll Do (Skills Used in This Position)
    - Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
    - Extend and enhance existing OOP-based frameworks developed in Python and PySpark.
    - Partner with data scientists and analysts to define requirements and design robust data analytics solutions.
    - Ensure data quality and integrity through data cleansing, validation, and automated testing procedures.
    - Develop and maintain technical documentation, including requirements, design specifications, and test plans.
    - Implement and manage data integrations from multiple internal and external sources.
    - Optimize data workflows to improve performance and reliability and reduce cloud consumption.
    - Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery.
    - Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric.
    - Provide technical leadership and coordination for global development and support teams.
    - Participate in creating a safe and healthy workplace by adhering to organizational safety protocols.
    - Support additional projects and initiatives as assigned by management.
    $75k-100k yearly est. 2d ago
  • Data Architect - Pharma

    Mathco

    Data scientist job in Chicago, IL

    MathCo Role: Data/AI Engineering Manager
    Onsite: Chicago, 4 days in office (mandatory)
    Industry: Pharma (mandatory)

    As platform architect/owner, you will:
    - Lead the end-to-end architecture, lifecycle, and governance of the AI/analytics platform, defining standards, reusable components, and integration patterns.
    - Partner with AI/data architects to enable scalable model deployment and enhance agentic orchestration.
    - Translate business needs into platform features; manage onboarding, documentation, and cross-functional collaboration for platform adoption.
    - Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability.
    - Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations.
    - Stay current on key AI platform developments and assess their impact on architecture and client strategy.
    - Coach others, recognize their strengths, and encourage them to take ownership of their personal development.

    Skills Required
    - Experience designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP).
    - Proven ability to carry out complex proofs of concept (POCs), pilot projects, and limited production rollouts for AI use cases, focusing on developing new or improved techniques and procedures.
    - Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices.

    Years of Experience
    Minimum of 8 years of relevant experience, preferably with a consulting background and experience with Pharma clients.
    $83k-113k yearly est. 5d ago
  • Senior Data Engineer

    Programmers.Io 3.8company rating

    Data scientist job in Chicago, IL

Note: This role requires visa-independent candidates (OPT, CPT, and H-1B holders cannot be considered at this time).
Responsibilities
+ Design, develop, and maintain scalable ETL pipelines using AWS Glue.
+ Collaborate with data engineers and analysts to understand data requirements.
+ Build and manage data extraction, transformation, and loading processes.
+ Optimize and troubleshoot existing Glue jobs and workflows.
+ Ensure data quality, integrity, and security throughout the ETL process.
+ Integrate AWS Glue with other AWS services such as S3, Lambda, Redshift, and Step Functions.
+ Maintain documentation of data workflows and processes.
+ Stay updated with the latest AWS tools and best practices.
Required Skills
+ Strong hands-on experience with AWS Glue, PySpark, and Python.
+ Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet).
+ Experience with data warehousing concepts and tools.
+ Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash).
+ Solid understanding of data modeling, data integration, and data management.
+ Exposure to AWS Batch, Step Functions, and Data Catalogs.
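As an illustration of the extract-transform-load pattern this listing centers on: in a real AWS Glue job the transform logic would run in PySpark over DynamicFrames with S3 and Parquet at either end, but the toy below uses only the standard library and invented field names to show the shape of the logic.

```python
import csv
import io
import json

# Toy extract -> transform -> load: read CSV text, normalize records, emit
# JSON lines. The S3/Parquet/catalog steps of a real Glue job are out of scope.

RAW_CSV = """order_id,amount,currency
1001,19.99,usd
1002,5.00,eur
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Cast types and standardize casing -- the kind of logic a Glue
    # ApplyMapping/Map step would carry in PySpark.
    return [
        {"order_id": int(r["order_id"]),
         "amount": float(r["amount"]),
         "currency": r["currency"].upper()}
        for r in rows
    ]

def load(rows):
    # Serialize to JSON lines; a real job would write Parquet to S3 instead.
    return "\n".join(json.dumps(r) for r in rows)

output = load(transform(extract(RAW_CSV)))
print(output)
```

Keeping extract, transform, and load as separate functions is what makes transform logic unit-testable independently of any Glue runtime.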
    $81k-112k yearly est. 1d ago
  • Data Engineer

    Acuity Analytics

    Data scientist job in Chicago, IL

The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation. Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
+ Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability.
+ Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
+ Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
+ Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
+ Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
+ Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
+ Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
+ Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
Requirements
+ 10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
+ Expert-level SQL, including complex queries, schema design, and performance optimization.
+ Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
+ Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
+ Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
+ Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
+ Experience with Domino or willingness to work within a Domino-based model environment.
+ Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
+ Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
    $75k-100k yearly est. 4d ago
  • Data Engineer

    Mastek

    Data scientist job in Chicago, IL

Job Title: Data Engineer - Workflow Automation
Employment Type: Contract to Hire or Full-Time
Department: Project Scion / Information Management Solutions
Key Responsibilities:
+ Design, build, and manage workflows using Automic or similar tools such as Autosys, Apache Airflow, or Cybermation.
+ Orchestrate workflows across multi-cloud ecosystems (AWS, Azure, Snowflake, Databricks, Redshift).
+ Monitor and troubleshoot workflow execution, ensuring high availability, reliability, and performance.
+ Administer and maintain workflow platforms.
+ Collaborate with architecture and infrastructure teams to align workflows with cloud strategies.
+ Support migrations, upgrades, and workflow optimization efforts.
Required Skills:
+ 5+ years of experience in IT managing production-grade systems.
+ Hands-on experience with Automic or similar enterprise workflow automation tools.
+ Strong analytical and problem-solving skills.
+ Good communication and documentation skills.
+ Familiarity with cloud platforms and technologies (e.g., AWS, Azure, Snowflake, Databricks).
+ Scripting proficiency (e.g., Shell, Python).
+ Ability to manage workflows across hybrid environments and optimize performance.
+ Experience managing production operations and support activities.
Preferred Skills:
+ Experience with CI/CD pipeline integration.
+ Knowledge of cloud-native orchestration tools.
+ Exposure to monitoring and alerting systems.
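Underneath schedulers like Automic, Autosys, and Airflow, a workflow is a dependency DAG executed in topological order; orchestrators add scheduling, retries, and monitoring on top of that core. The sketch below (hypothetical job names, no scheduling or retry logic) shows just that ordering problem using Python's standard library.

```python
from graphlib import TopologicalSorter

# A workflow as a DAG: each job maps to the set of jobs it depends on.
# Job names here are invented for the illustration.
workflow = {
    "extract_sales": set(),
    "extract_refunds": set(),
    "transform": {"extract_sales", "extract_refunds"},
    "load_warehouse": {"transform"},
}

def run(workflow, runner=print):
    """Execute jobs in dependency order; return the order actually used."""
    order = list(TopologicalSorter(workflow).static_order())
    for job in order:
        runner(f"running {job}")
    return order

order = run(workflow)
```

In a real orchestrator the two extract jobs could run in parallel since neither depends on the other; `static_order()` flattens that into one valid serial order.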
    $75k-100k yearly est. 3d ago
  • Snowflake Data Engineer

    VC5 Consulting

    Data scientist job in Chicago, IL

Join a dynamic team focused on building innovative data solutions that drive strategic insights for the business. This is an opportunity to leverage your expertise in Snowflake, ETL processes, and data integration.
Key Responsibilities
+ Develop Snowflake-based data models to support enterprise-level reporting.
+ Design and implement batch ETL pipelines for efficient data ingestion from legacy systems.
+ Collaborate with stakeholders to gather and understand data requirements.
Required Qualifications
+ Hands-on experience with Snowflake for data modeling and schema design.
+ Proven track record in developing ETL pipelines and understanding transformation logic.
+ Solid SQL skills to perform complex data transformations and optimization.
If you are passionate about building cutting-edge data solutions and want to make a significant impact, we would love to see your application! #11290
    $75k-100k yearly est. 4d ago
  • Distinguished Data Engineer

    Capital One 4.7company rating

    Data scientist job in Chicago, IL

Distinguished Data Engineers are individual contributors who strive to be diverse in thought so we can fully visualize the problem space. At Capital One, we believe diversity of thought strengthens our ability to influence, collaborate, and provide the most innovative solutions across organizational boundaries. Distinguished Engineers will significantly impact our trajectory and devise clear roadmaps to deliver next-generation technology solutions. They are:
+ Deep technical experts and thought leaders who help accelerate adoption of the very best engineering practices, while maintaining knowledge of industry innovations, trends, and practices
+ Visionaries, collaborating on Capital One's toughest issues, to deliver on business needs that directly impact the lives of our customers and associates
+ Role models and mentors, helping to coach and strengthen the technical expertise and know-how of our engineering and product community
+ Evangelists, both internally and externally, helping to elevate the Distinguished Engineering community and establish themselves as a go-to resource on given technologies and technology-enabled capabilities
The Distinguished Data Engineer role will be responsible for the architectural design and technical patterns that enable a high-performing, reliable data platform for Card authorizations. The focus of the work includes advancing data observability, the Spend Data Product, data standardization, and the core data pipelines that power authorization processing and decisioning. The role is expected to be hands-on, partnering closely with engineering teams and authorization partners to help drive work forward.
Responsibilities:
+ Build awareness, increase knowledge, and drive adoption of modern technologies, sharing consumer and engineering benefits to gain buy-in
+ Strike the right balance between lending expertise and providing an inclusive environment where others' ideas can be heard and championed; leverage expertise to grow skills in the broader Capital One team
+ Promote a culture of engineering excellence, using opportunities to reuse and innersource solutions where possible
+ Effectively communicate with and influence key stakeholders across the enterprise, at all levels of the organization
+ Operate as a trusted advisor for a specific technology, platform, or capability domain, helping to shape use cases and implementation in a unified manner
+ Lead the way in creating next-generation talent for Tech, mentoring internal talent and actively recruiting external talent to bolster Capital One's Tech talent
Basic Qualifications:
+ Bachelor's Degree
+ At least 7 years of experience in data engineering
+ At least 3 years of experience in data architecture
+ At least 2 years of experience building applications in AWS
Preferred Qualifications:
+ Master's Degree
+ 9+ years of experience in data engineering
+ 3+ years of data modeling experience
+ 2+ years of experience with ontology standards for defining a domain
+ 2+ years of experience using Python, SQL, or Scala
+ 1+ year of experience deploying machine learning models
+ 3+ years of experience implementing big data processing solutions on AWS (S3, DynamoDB, Lambda, Glue, Flink)
+ 2+ years of experience with orchestration technologies (Airflow, Step Functions)
+ 2+ years of experience with caching and in-memory data stores
Capital One will consider sponsoring a new qualified applicant for employment authorization for this position. The minimum and maximum full-time annual salaries for this role are listed below, by location.
Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed-upon number of hours to be regularly worked.
Chicago, IL: $239,900 - $273,800 for Distinguished Data Engineer
McLean, VA: $263,900 - $301,200 for Distinguished Data Engineer
Richmond, VA: $239,900 - $273,800 for Distinguished Data Engineer
Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter. This role is also eligible to earn performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI). Incentives could be discretionary or non-discretionary depending on the plan. Capital One offers a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being. Learn more at the Capital One Careers website. Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level. This role is expected to accept applications for a minimum of 5 business days. No agencies, please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace.
Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
    $75k-97k yearly est. 19h ago
  • Data Scientist - Operations Research

    United Airlines 4.6company rating

    Data scientist job in Chicago, IL

    Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together. **Description** United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. **Job overview and responsibilities** Provides mathematical modeling and analysis services to support critical financial, operational, and/or strategic planning decisions and engages in supporting project teams in value added activities that generate practical solutions to complex business problems, explores new business alternatives, and drives improvement in business decisions + Develops the approach and methods to define and solve management problems through quantitative analysis and analytical models using operations research, machine learning, and structured programming languages + Identifies, researches, or solves large complex problems using big data and operations research and machine learning principles + Leverages understanding of the business process to identify and implement operations research solutions that will result in significant bottom-line contributions + Builds and develops operations research/optimization mathematical model applications, and provides client support leveraging operations research knowledge + Participates in model design, prototype, and model development for several efforts that occur simultaneously, and interfaces with product delivery groups + Raises concerns when scope of analysis may not align with time available and can choose an appropriate scope of analysis to conduct 
balancing ROI to time available + Designs analytic plan/develops hypotheses to test; understands limitations of analysis (what it can and cannot be used for) + Anticipates working team questions about data and approach + Identifies solution quality risks and on-time risks + Understands the business value, process, and expectations before focusing on choice of a technical solution + Understands the intuition behind the numbers (i.e., does it make sense?) + Provides ongoing analytical services to client organizations + Communicates results to management and clients + Contributes deck content and builds the story for the deck with guidance to summarize findings + Develops and delivers presentations aligned with AI standards + Speaks in a manner appropriate for the working team and their level +1 + Keeps informed about the latest analytical methods and research in the operations research and analytics fields **Qualifications** **What's needed to succeed (Minimum Qualifications):** + Master's in Operations Research or another related quantitative discipline involving quantitative analysis and application of advanced operations research principles + Coursework or work experience with mathematical programming techniques + Coursework or work experience in model prototyping through use of optimization toolkit(s) including CPLEX, AMPL, or OPL + Coursework or work experience with C, C++, Java, R, Python, or another structured programming language + Good business, technical, verbal/written communication, presentation, and sales skills.
Adaptability to a changing business environment + Good interpersonal skills and ability to interact with clients + Proficient with MS Office + Successful completion of interview required to meet job qualifications + Must be legally authorized to work in the United States for any employer without sponsorship + Reliable, punctual attendance is an essential function of the position **What will help you propel from the pack (Preferred Qualifications):** + Minor in computer science and/or formal advanced computer science coursework preferred + 1+ years of professional experience in an analytical field + 1+ years designing and programming/coding data structures for large-scale computer models + Experience with the Julia programming language + Knowledge of United/industry data sources + Structured programming for large-scale computer models + Demonstrated ability to create business value The base pay range for this role is $91,770.00 to $119,514.00. The base salary range/hourly rate listed is dependent on job-related factors such as experience, education, and skills. This position is also eligible for bonus and/or long-term incentive compensation awards. You may be eligible for the following competitive benefits: medical, dental, vision, life, accident & disability, parental leave, employee assistance program, commuter, paid holidays, paid time off, 401(k) and flight privileges. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status and other protected status as required by applicable law. Equal Opportunity Employer - Minorities/Women/Veterans/Disabled/LGBT. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process and to perform essential job functions.
Please contact JobAccommodations@united.com to request accommodation.
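Roles like this typically express optimization models in toolkits such as CPLEX, AMPL, or OPL; as a self-contained illustration of the underlying idea (choosing activities under a resource limit), here is a tiny 0/1 knapsack solved with dynamic programming in plain Python. The item names, weights, and values are made up for the example.

```python
# 0/1 knapsack by dynamic programming over used capacity. Each DP state maps
# capacity consumed -> (best value, chosen item names). Iterating the snapshot
# of states in descending capacity order ensures each item is used at most once.

def knapsack(items, capacity):
    """items: list of (name, weight, value); returns (best_value, chosen)."""
    best = {0: (0, [])}  # used capacity -> (value, names)
    for name, w, v in items:
        for used, (val, names) in sorted(best.items(), reverse=True):
            if used + w <= capacity:
                cand = (val + v, names + [name])
                if cand[0] > best.get(used + w, (-1, []))[0]:
                    best[used + w] = cand
    return max(best.values())

items = [("A", 3, 4), ("B", 4, 5), ("C", 2, 3)]
value, chosen = knapsack(items, 6)
print(value, chosen)  # → 8 ['B', 'C']
```

A production OR model would state this declaratively (decision variables, an objective, and constraints) and hand it to a solver; the DP above is just the smallest runnable stand-in for that modeling mindset.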
    $91.8k-119.5k yearly 60d+ ago
  • Data Scientist/Engineer

    Info Tech Resources 4.4company rating

    Data scientist job in Chicago, IL

    InfoTech Resources is seeking a Data Scientist for a hybrid position based in the Chicago Loop, working 2-3 days per week onsite. This opportunity is open to both contract and direct-hire candidates. The selected candidate will play a key role in designing, developing, and optimizing data-driven AI solutions within a modern cloud-based environment (Google Cloud). The client is a large enterprise company in the insurance sector and this role will join their first dedicated AI/ML team charged with designing and laying a proper foundation for AI/ML solutions. The Data Scientist will use Python and leading AI frameworks to build, train, and evaluate models that support intelligent applications and data analytics initiatives leveraging Google Vertex AI, Gemini, and OpenAI integrations. Candidates must have expertise designing/developing models, agents, frameworks, etc. within a greenfield context and hands-on experience in evaluating model performance using structured testing methodologies. 
Key Responsibilities
+ Develop, train, and validate predictive and generative AI models using Python and industry-standard libraries such as NumPy, pandas, scikit-learn, matplotlib, and TensorFlow/PyTorch.
+ Design and execute structured experiments to evaluate LLM and model performance using quantitative testing methodologies.
+ Collaborate with AI engineers and software developers to operationalize models within Google Vertex AI or similar environments.
+ Apply data wrangling, feature engineering, and exploratory data analysis (EDA) to ensure data quality and model interpretability.
+ Build and visualize insights using Power BI, matplotlib, or other data visualization tools.
+ Implement best practices in prompt engineering, context filtering, and AI model evaluation.
+ Contribute to continuous improvement of the organization's AI and machine learning capabilities through research and adoption of emerging methods.
+ Communicate findings and recommendations clearly to technical and business stakeholders.
Required Skills & Experience
+ Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
+ 5+ years of hands-on experience in Python for data science and AI solution development.
+ Background in cloud-based data environments (Google Cloud Platform, Azure, or AWS).
+ Familiarity with LLM prompt design, AI observability, or model monitoring frameworks.
+ Proven experience with Google Vertex AI, Gemini, or OpenAI models.
+ Proficiency with NumPy, pandas, scikit-learn, or similar Python libraries, and visualization libraries such as matplotlib or seaborn.
+ Strong foundation in machine learning techniques, including regression, classification, clustering, and evaluation metrics.
+ Experience designing and executing experiments to evaluate model performance.
+ Excellent analytical, problem-solving, and communication skills.
+ Experience presenting findings to cross-functional or business leadership audiences.
+ Exposure to deep learning or generative AI workflows (e.g., PyTorch, TensorFlow, Hugging Face Transformers) is preferred.
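The model-evaluation work these roles describe ultimately reduces to comparing predictions against labels. As an illustration, here is a stdlib computation of precision, recall, and F1 for a binary classifier; the labels are toy data invented for the example, and in practice scikit-learn's `precision_recall_fscore_support` reports the same quantities.

```python
# Binary-classification metrics from scratch: count true positives, false
# positives, and false negatives, then derive precision, recall, and F1.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]
precision, recall, f1 = binary_metrics(y_true, y_pred)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # → 0.75 0.75 0.75
```

Guarding the divisions against empty denominators matters in real evaluation harnesses, where a model that predicts no positives would otherwise crash the metrics step.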
    $67k-89k yearly est. 60d+ ago
  • Data Scientist

    Kohler Co 4.5company rating

    Data scientist job in Kohler, WI

_Work Mode: Onsite_ **Opportunity** To serve as a data-driven innovator and strategic enabler, transforming business decisions through advanced analytics, machine learning, and AI. This role combines statistical and technical expertise with business insight to unlock patterns, forecast trends, and deliver actionable intelligence that accelerates growth and operational excellence. The Data Scientist will champion the use of data as a competitive advantage: designing robust analytical models, ensuring data integrity, and driving scalable solutions that align with enterprise goals. **Key Responsibilities** **Analytical Modeling and AI Development** + Design experiments, test hypotheses, and build predictive models to uncover actionable insights. + Develop and refine algorithms to identify patterns and trends in large, complex datasets. + Validate findings through rigorous statistical methods and iterative testing. + Continuously monitor and optimize models for accuracy, scalability, and business impact. + Prototype and deploy AI-driven solutions that align with strategic business objectives and deliver measurable value. **Data Exploration & Architecture** + Independently collate, analyze, and interpret large, complex datasets to generate actionable insights. + Consult on data architecture and governance to ensure quality, accuracy, and readiness for advanced analytics. + Identify relevant internal and external data sources and ensure compliance with data standards. + Drive improvements in data pipelines and integration to enable faster, more reliable analytics. **Decision Intelligence & Transformation** + Partner with stakeholders to translate business requirements into analytical solutions that drive process and product improvements. + Support creation of scalable AI/ML solutions aligned with strategic goals. + Model and frame business scenarios that influence critical decisions and outcomes.
+ Provide clear metrics and KPIs to demonstrate the impact of analytics initiatives on business performance. + Identify opportunities for automation and optimization to accelerate enterprise-wide transformation. **Mentorship & Evangelism** + Mentor functional teams globally on AI tools, prompt engineering, and experimentation. Lead workshops, proofs-of-concept, and training sessions to build momentum and literacy across cultures and time zones. + Share personal and professional learnings to foster a culture of innovation and continuous improvement. + Advocate for responsible AI use and digital fluency across the enterprise. **Skills/Requirements** + Bachelor's degree in Mathematics, Statistics, Computer Science, or a related field (Master's preferred). + 1+ years of relevant quantitative and qualitative research and analytics experience. + Demonstrated experience in using, building, or configuring AI tools, bots, or agents, especially by those who actively explore and experiment with emerging technologies, including in personal contexts. Strong familiarity with large language models (LLMs), prompt engineering, and commonly used AI tools. **Technical Fluency** + Proficiency in Python or R + Strong skills in SQL + Experience with cloud platforms (Azure and Databricks preferred) + Familiarity with prompt engineering and agent-based tools (Copilot Studio, Azure AI Foundry, or similar) + Knowledge of data visualization tools (Power BI, Tableau, or similar) **Core Competencies** + Creative problem solving: Works with full competence to find practical solutions for unexpected stakeholder problems. Typically works without supervision + Brings a cross-functional collaborative mindset across business and technical teams; thrives in cross-disciplinary environments to align AI efforts across groups. + Communication and storytelling: Tailors communication content and style to the needs of others.
Pays attention to others' input and perspectives, asks questions, and summarizes to confirm understanding + Nimble learning: Learns through experimentation when tackling new problems, using both successes and failures as learning fodder. Swiftly incorporates new concepts and principles into own expertise; skillfully uses these fresh insights to solve problems + Cultivates innovation: Creates new and better ways for the organization to be successful. Offers creative ideas, finds unique connections between previously unrelated elements. Builds upon and strengthens new solutions in a positive and collaborative manner. #LI-DNI **_Applicants must be authorized to work in the US without requiring sponsorship now or in the future._** _We believe in supporting you from the moment you join us, which is why Kohler offers day 1 benefits. This means you'll have access to your applicable benefit programs from your first day on the job, with no waiting period. The salary range for this position is $64,750 - $98,350. The specific salary offered to a candidate may be influenced by a variety of factors including the candidate's experience, their education, and the work location._ **Why Choose Kohler?** We empower each associate to #BecomeMoreAtKohler with a competitive total rewards package to support your health and wellbeing, access to career growth and development opportunities, a diverse and inclusive workplace, and a strong culture of innovation. With more than 30,000 bold leaders across the globe, we're driving meaningful change in our mission to help people live gracious, healthy, and sustainable lives. **About Us** It is Kohler's policy to recruit, hire, and promote qualified applicants without regard to race, creed, religion, age, sex, sexual orientation, gender identity or expression, marital status, national origin, disability or status as a protected veteran.
If, as an individual with a disability, you need reasonable accommodation during the recruitment process, please contact ********************* . Kohler Co. is an equal opportunity/affirmative action employer.
    $64.8k-98.4k yearly 5d ago

Learn more about data scientist jobs

How much does a data scientist earn in Cedar Rapids, IA?

The average data scientist in Cedar Rapids, IA earns between $56,000 and $102,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Cedar Rapids, IA

$76,000

What are the biggest employers of Data Scientists in Cedar Rapids, IA?

The biggest employers of Data Scientists in Cedar Rapids, IA are:
  1. Sedgwick LLP
  2. Molina Healthcare