Data scientist jobs in Champaign, IL

- 982 jobs
  • Senior Data Scientist (Senior Consultant)

    Guidehouse (3.7 company rating)

    Data scientist job in Chicago, IL

    Job Family: Data Science Consulting. Travel Required: Up to 10%. Clearance Required: Ability to obtain Public Trust.

    About our AI and Data Capability Team: Our consultants on the AI and Data Analytics Capability team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science services, from data architecture and storage, to data engineering and querying, to data visualization and dashboarding, to predictive analytics, machine learning, and artificial intelligence, as well as intelligent automation. Our services enable our clients to define their information strategy, enable mission-critical insights and data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.

    What You Will Do:
    - Data Collection & Management: Identify, gather, and manage data from primary and secondary sources, ensuring its accuracy and integrity.
    - Data Cleaning & Preprocessing: Clean raw data by identifying and addressing inconsistencies, missing values, and errors to prepare it for analysis (a minimal illustrative sketch follows this listing).
    - Data Analysis & Interpretation: Apply statistical techniques and analytical methods to explore datasets, discover trends, find patterns, and derive insights.
    - Data Visualization & Reporting: Develop reports, dashboards, and visualizations using tools like Tableau or Power BI to present complex findings clearly to stakeholders.
    - Collaboration & Communication: Work with cross-functional teams, understand business requirements, and effectively communicate insights to support data-driven decision-making.
    - Problem Solving: Address specific business challenges by using data to identify underperforming processes, pinpoint areas for growth, and determine optimal strategies.

    What You Will Need:
    - US citizenship is required.
    - Bachelor's degree is required.
    - Minimum three (3) years of experience using Power BI, Tableau, and other visualization tools to develop intuitive, user-friendly dashboards and visualizations.
    - Skilled in SQL, R, and other languages to assist in database querying and statistical programming.
    - Strong foundational knowledge and experience in statistics, probability, and experimental design.
    - Familiarity with cloud platforms (e.g., Amazon Web Services, Azure, or Google Cloud) and containerization (e.g., Docker).
    - Experience applying data governance concepts and techniques to assure greater data quality and reliability.
    - The curiosity and creativity to uncover hidden patterns and opportunities.
    - Strong communication skills to bridge technical and business worlds.

    What Would Be Nice To Have:
    - Hands-on experience with Python, SQL, and modern ML frameworks.
    - Experience in data and AI system development, with a proven ability to design scalable architectures and implement reliable models.
    - Expertise in Python or Java for data processing.
    - Demonstrated work experience within the public sector.
    - Ability to support business development, including RFP/RFQ/RFI responses involving data science and analytics.

    The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
    What We Offer: Guidehouse offers a comprehensive total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include:
    - Medical, Rx, Dental & Vision Insurance
    - Personal and Family Sick Time & Company Paid Holidays
    - Position may be eligible for a discretionary variable incentive bonus
    - Parental Leave and Adoption Assistance
    - 401(k) Retirement Plan
    - Basic Life & Supplemental Life
    - Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
    - Short-Term & Long-Term Disability
    - Student Loan PayDown
    - Tuition Reimbursement, Personal Development & Learning Opportunities
    - Skills Development & Certifications
    - Employee Referral Program
    - Corporate Sponsored Events & Community Outreach
    - Emergency Back-Up Childcare Program
    - Mobility Stipend

    About Guidehouse: Guidehouse is an Equal Opportunity Employer: Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinances of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.

    All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $113k-188k yearly Auto-Apply 1d ago
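
The Guidehouse listing above names data cleaning and preprocessing (inconsistencies, missing values, errors) as a core duty. Here is a minimal pandas sketch of that step; the toy records, column names, and cleaning rules are hypothetical illustrations, not anything from the posting:

```python
import pandas as pd

# Toy records with the defect types the listing describes: duplicates,
# inconsistent labels, missing values, and an impossible (negative) amount.
raw = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "state": ["IL", "il ", "il ", "Illinois", None],
    "monthly_spend": [120.0, 95.0, 95.0, 87.5, -40.0],
})

df = raw.drop_duplicates(subset="customer_id").copy()  # remove duplicate customers
df["state"] = (
    df["state"].str.strip().str.upper()                # normalize inconsistent labels
               .replace({"ILLINOIS": "IL"})
               .fillna("UNKNOWN")
)
df["monthly_spend"] = df["monthly_spend"].mask(df["monthly_spend"] < 0)  # error -> NaN
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())

print(df)
```

In practice each rule (what counts as a duplicate, how to impute) would come from the business context rather than being hard-coded like this.
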
  • Data Scientist

    Insight Global

    Data scientist job in Indianapolis, IN

    We are seeking a Junior Data Scientist to join our large utility client in downtown Indianapolis. This position will be hired as a full-time employee. This entry-level position is perfect for individuals eager to tackle real-world energy challenges through data exploration, predictive modeling, and collaborative problem-solving. As part of our team, you'll work closely with seasoned data scientists, analysts, architects, engineers, and governance specialists to generate insights that power smarter decisions and help shape the future of energy.

    Key Responsibilities:
    - Partner cross-functionally with data scientists, data architects and engineers, machine learning engineers, data analysts, and data governance experts to deliver integrated data solutions.
    - Collaborate with business stakeholders and analysts to define clear project requirements.
    - Collect, clean, and preprocess both structured and unstructured data from utility systems (e.g., meter data, customer data).
    - Conduct exploratory data analysis to uncover trends, anomalies, and opportunities to enhance grid operations and customer service.
    - Apply traditional machine learning techniques and generative AI tools to build predictive models that address utility-focused challenges, particularly in the customer domain (e.g., outage restoration, program adoption, revenue assurance); a small model-training sketch follows this listing.
    - Present insights to internal stakeholders in a clear, compelling format, including data visualizations that drive predictive decision-making.
    - Document methodologies, workflows, and results to ensure transparency and reproducibility.
    - Serve as a champion of data and AI across all levels of the client's US Utilities organization.
    - Stay informed on emerging industry trends in utility analytics and machine learning.

    Requirements:
    - Bachelor's degree in data science, statistics, computer science, engineering, or a related field; a Master's degree or Ph.D. is preferred.
    - 1-3 years of experience in a data science or analytics role.
    - Strong applied analytics and statistics skills, such as distributions, statistical testing, regression, etc.
    - Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn.
    - Proficiency in traditional machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVM), convolutional neural networks (CNN), random forest, gradient-boosted trees, etc.
    - Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows.
    - Proficiency in SQL, with experience writing complex queries and working with relational data structures. Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc.
    - Proficient in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects. Experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred.
    - Experience with cloud computing platforms (GCP preferred).
    - Ability to manage multiple priorities in a fast-paced environment.
    - Interest in learning more about the customer-facing side of the utility industry.

    Compensation: Up to $130,000 annual salary. Exact compensation may vary based on several factors, including skills, experience, and education.
Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
    $130k yearly 3d ago
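
The Insight Global role calls for traditional ML such as random forests applied to utility problems like outage restoration. Below is a minimal scikit-learn sketch in that spirit; the features, synthetic labels, and thresholds are invented for illustration and do not reflect real utility data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000
# Hypothetical outage features: hours elapsed, crew distance, storm flag.
X = np.column_stack([
    rng.exponential(2.0, n),   # hours since outage began
    rng.uniform(0, 50, n),     # crew distance in miles
    rng.integers(0, 2, n),     # storm in progress (0/1)
])
# Synthetic target: whether power is restored within four hours.
y = ((X[:, 0] < 2.5) & (X[:, 1] < 30)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```
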
  • Senior Data Scientist

    KTek Resourcing (4.1 company rating)

    Data scientist job in Chicago, IL

    Role: Senior Data Scientist

    We are seeking a hands-on Senior Data Scientist to join our Insurance Analytics & AI Vertical. The ideal candidate will bring a blend of insurance domain expertise (preferably P&C), a consulting mindset, and strong data science skills. This is a mid-senior level role focused on delivering value through analytics, stakeholder engagement, and logical problem solving, rather than people management.

    The role involves working closely with EXL teams and clients on reporting, data engineering, transformation, and advanced analytics projects. While strong technical skills are important, we are looking for someone who can engage directly with clients, translate business needs into analytical solutions, and drive measurable impact.

    Key Responsibilities:
    · Collaborate with EXL and client stakeholders to design and deliver data-driven solutions across reporting, analytics, and transformation initiatives.
    · Apply traditional statistical methods, machine learning, deep learning, and NLP techniques to solve business problems (a small word-embedding sketch follows this listing).
    · Support insurance-focused analytics use cases (with a preference for P&C lines of business).
    · Work in a consulting setup: conduct requirement gathering, structure problem statements, and communicate insights effectively to senior stakeholders.
    · Ensure data quality, governance, and compliance with data privacy and protection guidelines.
    · Independently research, analyze, and present findings, ensuring client-ready deliverables.
    · Contribute to continuous improvement initiatives and support business development activities where required.

    Key Skillsets & Experience:
    · 7-12 years of experience in analytics, reporting, dashboarding, ETL, Python/R, and associated data management.
    · Proficiency in machine learning, deep learning algorithms (e.g., neural networks), and text analytics techniques (NLTK, Gensim, LDA, and word embeddings such as Word2Vec, FastText, and GloVe).
    · Strong consulting background with structured problem-solving and stakeholder management skills.
    · Excellent communication and presentation skills with the ability to influence and engage senior business leaders.
    · Hands-on role with the ability to independently manage client deliverables and operate in cross-cultural, global environments.

    Data Management Skills:
    · Strong familiarity with advanced analytics tools (Python, R), BI tools (Tableau, Power BI), and related software applications.
    · Good knowledge of SQL, Informatica, Hadoop/Spark, and ETL tools.
    · Ability to translate business/functional requirements into technical specifications.
    · Exposure to cloud data management and AWS services (preferred).

    Candidate Profile:
    · Bachelor's/Master's degree in Economics, Mathematics, Computer Science/Engineering, Operations Research, or related analytical fields.
    · Prior insurance industry experience (P&C preferred) strongly desired.
    · Superior analytical, logical, and problem-solving skills.
    · Outstanding written and verbal communication abilities with a consultative orientation.
    · Flexible to work in a fast-paced, evolving environment, with occasional visits to the client's Chicago office.
    $88k-119k yearly est. 3d ago
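
The KTek/EXL listing names Gensim word embeddings (Word2Vec and similar) among its text analytics techniques. A minimal Word2Vec sketch on a toy, hypothetical corpus of claim notes might look like this (parameter names follow gensim 4.x):

```python
from gensim.models import Word2Vec

# Invented, pre-tokenized claim notes; real input would come from an
# NLTK/spaCy tokenization step over actual claims text.
sentences = [
    ["water", "damage", "kitchen", "pipe", "burst"],
    ["pipe", "burst", "basement", "water", "claim"],
    ["vehicle", "collision", "rear", "bumper", "claim"],
    ["hail", "damage", "roof", "shingles", "claim"],
]

# Train small embeddings; many epochs compensate for the tiny corpus.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=1, epochs=50)
print(model.wv.most_similar("pipe", topn=3))
```

With a corpus this small the neighbors are noisy; the point is only the API shape, not the quality of the vectors.
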
  • Data Scientist

    DSM-H Consulting

    Data scientist job in Peoria, IL

    Typical task breakdown:
    - Assist with monthly reporting on team metrics, cost savings, and tariff analysis
    - Lead development of data analytics to assist category teams in making strategic sourcing decisions

    Interaction with team: Will work as a support to multiple category teams.
    Team structure: Report to the MC&H Strategy Manager and collaborate with Category Managers and buyers.
    Work environment: Office environment.

    Education & Experience Required:
    - Years of experience: 3-5
    - Degree requirement: Bachelor's degree
    - Internships accepted as job experience: Yes

    Top 3 Skills:
    · Communicates effectively to develop standard procedures
    · Applies problem-solving techniques across diverse procurement scenarios
    · Analyzes procurement data to generate actionable insights

    Additional Technical Skills:
    Required:
    - Proficient in Power BI, PROcure, and tools like CICT, Lognet, MRC, PO Inquiry, AoS
    - Expertise in Snowflake and data mining
    Desired:
    - Prior experience in procurement
    - Familiarity with monthly reporting processes, including ABP (Annual Business Plan) and RBM (Rolling Business Management)
    - Demonstrated expertise in cost-savings initiatives
    - Machine learning and AI

    Soft Skills:
    Required:
    - Strong written and verbal communication skills
    - Balances speed with accuracy in task execution
    - Defines problems and evaluates their impact
    Desired:
    - Emotional intelligence
    - Leadership and team management capabilities
    $69k-95k yearly est. 3d ago
  • Data Architect

    Geowealth

    Data scientist job in Oak Brook, IL

    GeoWealth is a Chicago-based fintech firm that offers an award-winning digital advisory platform, including Turnkey Asset Management Platform (“TAMP”) capabilities. We deliver a comprehensive and fully integrated wealth management technology platform to professionals in the financial services industry.

    OPPORTUNITY: We're looking for a Data Architect to join our Engineering Team. In this role, you will oversee the overall data architecture, helping us deliver our best-in-class solutions to our customers. This role will be key in organizing, designing, and leading our team through well-designed data architecture. If you love architecting complex systems, delivering customer-focused software, designing best-in-class systems, and leading data architecture design, this role is for you.

    RESPONSIBILITIES:
    - Own data architecture and oversee data implementation
    - Set coding and implementation standards
    - Lead our data warehouse design
    - Deliver performant, maintainable, quality software in collaboration with our teams
    - Improve our database design to reduce replication and increase performance
    - Partner with other architects and engineers to produce better-designed systems

    SKILLS, KNOWLEDGE, AND EXPERIENCE:
    - 5+ years of experience as a Data Architect or in an equivalent role
    - Bachelor's degree in computer science or an equivalent degree
    - Hands-on experience with Oracle
    - Experience designing and implementing a data warehouse
    - Preferred but not required: designing and building monolithic and distributed systems, Postgres, Logi Symphony, Power BI, Java, and JIRA/Confluence

    COMPANY CULTURE & PERKS - HIGHLIGHTS:
    Investing in Your Growth 🌱
    - Casual work environment with fun, hard-working, and open-minded coworkers
    - Competitive salary with opportunity for a performance-based annual bonus
    - Opportunities to up-skill, explore new responsibilities, and network across departments
    - Defined and undefined career pathways allowing you to grow your own way
    Work/Life Balance 🗓️
    - Flexible PTO and work schedule to ensure our team balances work and life
    - Hybrid work schedule
    - Maternity and paternity leave
    Taking Care of Your Future ♥️
    - Medical, dental, vision, and disability insurance
    - Free access to Spring Health, a comprehensive mental health solution
    - 401(k) with company match and a broad selection of investments
    - Voluntary insurance: short-term disability, long-term disability, and life insurance
    - FSA and transit benefits for employees who contribute pre-tax dollars
    Other Fun Stuff ⭐
    - Free on-site gym and parking
    - Weekly catered lunches in the office, plus monthly happy hours
    - Stocked kitchen with snacks and drinks
    GeoWealth was recognized as a “Best Place to Work” by Purpose Jobs in 2025, 2024, and 2022, and by Built In in 2024, 2023, and 2022.

    SALARY RANGE: Starting at $170,000-$220,000 + benefits + opportunity for a performance bonus. This is an estimated range based on the circumstances at the time of posting; it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
    $170k-220k yearly 2d ago
  • Big Data Consultant

    Clifyx

    Data scientist job in Chicago, IL

    Job Title: Big Data Engineer
    Employment Type: W2 Contract

    Detailed Job Description: We are seeking a skilled Big Data Platform Engineer with 7+ years of experience and a strong background in both development and administration of big data ecosystems. The ideal candidate will be responsible for designing, building, maintaining, and optimizing scalable data platforms that support advanced analytics, machine learning, and real-time data processing.

    Key Responsibilities:
    Platform Engineering & Administration:
    • Install, configure, and manage big data tools such as Hadoop, Spark, Kafka, Hive, HBase, and others.
    • Monitor cluster performance, troubleshoot issues, and ensure high availability and reliability.
    • Implement security policies, access controls, and data governance practices.
    • Manage upgrades, patches, and capacity planning for big data infrastructure.
    Development & Data Engineering:
    • Design and develop scalable data pipelines using tools like Apache Spark, Flink, NiFi, or Airflow (a small Spark sketch follows this listing).
    • Build ETL/ELT workflows to ingest, transform, and load data from various sources.
    • Optimize data storage and retrieval for performance and cost-efficiency.
    • Collaborate with data scientists and analysts to support model deployment and data exploration.
    $74k-99k yearly est. 4d ago
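
The Clifyx role centers on Spark pipelines that ingest, cleanse, and validate data. A minimal PySpark sketch of that pattern, assuming a local Spark session and an invented events schema; a real job would read from S3, HDFS, or Kafka rather than an in-memory DataFrame:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Hypothetical raw events, including a null amount and a malformed date.
raw = spark.createDataFrame(
    [("u1", "2024-01-05", "120.5"), ("u2", "2024-01-05", None), ("u1", "bad-date", "9.0")],
    ["user_id", "event_date", "amount"],
)

clean = (
    raw.withColumn("event_date", F.to_date("event_date", "yyyy-MM-dd"))  # bad dates -> null
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("event_date").isNotNull() & F.col("amount").isNotNull())  # validation
)
daily = clean.groupBy("event_date").agg(F.sum("amount").alias("total_amount"))
daily.show()
```
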
  • Systems Data Analyst

    Beacon Hill (3.9 company rating)

    Data scientist job in Indianapolis, IN

    Beacon Hill Technologies is seeking a proactive, data-driven analyst with strong initiative: someone who can expand on existing frameworks, validate data, and independently build tools that elevate team performance. You communicate clearly, think critically, and enjoy transforming complex technical information into meaningful business insights. You thrive in fast-paced environments and are comfortable working hands-on with evolving data systems. This position is hybrid!

    Required Skills:
    - Bachelor's degree in Information Systems, Business Analytics, IT, or a related field (or equivalent experience).
    - 3-5 years of experience in data analysis, IT operations, or an A/V-adjacent environment.
    - Proficiency with: Tableau (strongly preferred; the team's primary tool), ServiceNow reporting, Excel (advanced formulas, macros), and Python (especially for Tableau-based scripting).
    - Experience working with large datasets and multiple data sources.
    - Ability to validate, test, and ensure data accuracy and integrity.
    - Strong communication skills; able to translate technical data into clear business insights.
    - Demonstrated ability to independently build new reports, dashboards, or tools when standard solutions are not available.

    Desired Skills:
    - Experience with Cisco Spaces, digital room utilization analytics, or space-management tools.
    - Familiarity with A/V environments, technologies, or governance frameworks (a big plus, but not required).
    - Experience developing or managing lifecycle models, performance metrics, or executive-level reporting dashboards.
    - Knowledge of AI-assisted reporting or automation tools.
    - Experience with procurement forecasting, budgeting data, or operational strategy analytics.

    Beacon Hill is an equal opportunity employer, and individuals with disabilities and/or protected veterans are encouraged to apply. California residents: Qualified applications with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. If you would like to complete our voluntary self-identification form, please click here or copy and paste the following link into an open window in your browser: ***************************************** Completion of this form is voluntary and will not affect your opportunity for employment, or the terms or conditions of your employment. This form will be used for reporting purposes only and will be kept separate from all other records.

    Company Profile: Beacon Hill Technologies, a premier national Information Technology staffing group, provides world-class technology talent across all industries utilizing a complete suite of staffing services. Beacon Hill Technologies' dedicated team of recruiting and staffing experts consistently delivers quality IT professionals to solve our customers' technical and business needs. Beacon Hill Technologies covers a broad spectrum of IT positions, including Project Management and Business Analysis, Programming/Development, Database, Infrastructure, Quality Assurance, Production/Support, and ERP roles. Learn more about Beacon Hill and our specialty divisions, Beacon Hill Associates, Beacon Hill Financial, Beacon Hill HR, Beacon Hill Legal, Beacon Hill Life Sciences, and Beacon Hill Technologies, by visiting *************

    Benefits Information: Beacon Hill offers a robust benefit package including, but not limited to, medical, dental, vision, and federal and state leave programs as required by applicable agency regulations to those that meet eligibility. Upon successfully being hired, details will be provided related to our benefit offerings. We look forward to working with you. Beacon Hill. Employing the Future™
    $49k-76k yearly est. 4d ago
  • Data Engineer

    Scaylor

    Data scientist job in Chicago, IL

    Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used. Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust. We're a small team of four: one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.

    The Role: You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients. You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.

    What You'll Work On:
    • Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases (a small normalization sketch follows this listing)
    • Design schemas and transformation logic for structured and semi-structured data
    • Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
    • Help connect backend services to our frontend dashboards (React, Node.js, or similar)
    • Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
    • Collaborate with clients to understand their data problems and design workflows that fix them

    You'd Be Great Here If You:
    • Have 3-6 years of experience in data engineering, backend, or full-stack roles
    • Write clean, maintainable code in Python and JS
    • Understand ETL, data normalization, and schema mapping
    • Have experience with SQL and working with legacy databases or systems
    • Are comfortable managing cloud services and debugging data pipelines
    • Enjoy solving messy data problems and care about building things that last

    Nice to Have:
    • Familiarity with GCP or SQL databases
    • Understanding of enterprise data flows (ERP, CRM, or financial systems)
    • Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
    • Interest in lightweight ML or LLM-assisted data transformation

    Why Join Scaylor:
    • Be one of the first five team members shaping the product and the company
    • Work directly with the founders and help define Scaylor's technical direction
    • Build infrastructure that solves real problems for real companies
    • Earn meaningful equity and have a say in how the company grows

    Compensation:
    • $130k-$150k, with raises tied to set revenue triggers
    • 0.4% equity
    • Relocation to Chicago, IL required
    $130k-150k yearly 2d ago
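
Scaylor's pitch is normalizing fragmented legacy data into a canonical schema. One minimal way to sketch that in pandas, with an entirely hypothetical legacy-to-canonical column mapping; real inputs would come from pd.read_excel or database extracts:

```python
import pandas as pd

# Hypothetical mapping from a legacy export's columns to a canonical schema.
LEGACY_TO_CANONICAL = {
    "CUST_NM": "customer_name",
    "ACCT_NO": "account_id",
    "REV_USD": "revenue",
}

def normalize(frame: pd.DataFrame) -> pd.DataFrame:
    """Rename legacy columns, coerce types, and drop rows missing a key."""
    out = frame.rename(columns=LEGACY_TO_CANONICAL)
    out["revenue"] = pd.to_numeric(out["revenue"], errors="coerce")  # "n/a" -> NaN
    return out.dropna(subset=["account_id"])

legacy = pd.DataFrame({
    "CUST_NM": ["Acme Co", "Globex"],
    "ACCT_NO": ["A-100", None],
    "REV_USD": ["1250.00", "n/a"],
})
print(normalize(legacy))
```
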
  • Data Engineer

    Saragossa

    Data scientist job in Chicago, IL

    Data Engineer - Build the Data Engine Behind AI Execution - Starting Salary $150,000

    You'll be part architect, part systems designer, part execution partner: someone who thrives at the intersection of engineering precision, scalability, and impact. As the builder behind the AI data platform, you'll turn raw, fragmented data into powerful, reliable systems that feed intelligent products. You'll shape how data flows, how it scales, and how it powers decision-making across AI, analytics, and product teams. Your work won't be behind the scenes; it will be the foundation of everything we build.

    You'll be joining a company built for builders. Our model combines AI consulting, venture building, and company creation into one execution flywheel. Here, you won't just build data pipelines; you'll build the platforms that power real products and real companies.

    You know that feeling when a data system scales cleanly under real-world pressure, when latency drops below target, when complexity turns into clarity, and everything just flows? That's exactly what you'll build here.

    Ready to engineer the platform that powers AI execution? Let's talk. No up-to-date resume required.
    $150k yearly 23h ago
  • Senior Data Engineer

    Brooksource (4.1 company rating)

    Data scientist job in Indianapolis, IN

    Senior Data Engineer - Azure Data Warehouse (5-7+ years of experience). Long-term renewing contract supporting Azure-based data warehouse and dashboarding initiatives. Work alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.

    Key Responsibilities:
    · Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server
    · Apply Medallion architecture principles and best practices for data lake and warehouse design (a small bronze/silver/gold sketch follows this listing)
    · Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
    · Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
    · Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
    · Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
    · Support dashboarding initiatives with Power BI and wireframe collaboration
    · Ensure auditability, lineage, and performance across SQL Server and Oracle environments

    Required Skills & Experience:
    · 5-7+ years in data engineering, data warehouse design, and ETL development
    · Strong expertise in Azure Data Factory, Databricks, and Python
    · Deep understanding of SQL Server, Oracle, PostgreSQL, and Cosmos DB, plus data modeling standards
    · Proven experience with Medallion architecture and data lakehouse best practices
    · Hands-on with CI/CD, DevOps, and deployment automation
    · Agile mindset with the ability to manage multiple priorities and deliver on time
    · Excellent communication and documentation skills

    Bonus Skills:
    · Experience with GCP or AWS
    · Familiarity with Jira, Confluence, and AppDynamics
    $77k-104k yearly est. 1d ago
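
The Brooksource listing asks for Medallion (bronze/silver/gold) architecture on Databricks. Here is a schematic PySpark sketch of the three layers; the paths, schema, and the Delta-enabled Spark environment are all assumptions for illustration, not the client's actual design:

```python
from pyspark.sql import SparkSession, functions as F

# Assumes a Spark environment with Delta Lake available (e.g., Databricks).
spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw data as-is (hypothetical path and schema).
bronze = spark.read.json("/lake/bronze/readings/")

# Silver: cleanse and conform types; keep only valid rows.
silver = (
    bronze.withColumn("reading_ts", F.to_timestamp("reading_ts"))
          .withColumn("kwh", F.col("kwh").cast("double"))
          .filter(F.col("kwh").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/readings/")

# Gold: business-level aggregate ready for dashboards.
gold = (
    silver.groupBy(F.to_date("reading_ts").alias("day"))
          .agg(F.sum("kwh").alias("total_kwh"))
)
gold.write.format("delta").mode("overwrite").save("/lake/gold/daily_usage/")
```

The layer boundaries are the design choice that matters: raw data is never mutated in bronze, silver enforces types and validity, and gold holds only business-ready aggregates.
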
  • Sr. Data Engineer - PERM - MUST BE LOCAL

    Resource 1, Inc.

    Data scientist job in Naperville, IL

    Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. Candidates must be local to Illinois because a future hybrid onsite schedule in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities, and a profit-sharing bonus. This position is focused on building modern data pipelines, integrations, and back-end data solutions. The selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts, and other engineers to design and deliver data solutions that power business insights and AI products.

    Responsibilities:
    - Design and develop scalable data pipelines for ingestion, transformation, and integration using AWS services.
    - Pull data from PostgreSQL and SQL Server to migrate to AWS (a small extract sketch follows this listing).
    - Create and modify jobs in AWS and modify logic in SQL Server.
    - Create SQL queries, stored procedures, and functions in PostgreSQL and Redshift.
    - Provide input on data modeling and schema design as needed.
    - Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
    - Support inbound/outbound data flows, including APIs, S3 replication, and secured data.
    - Assist with data visualization/reporting as needed.
    - Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.

    Qualifications:
    - 5+ years of data engineering experience.
    - Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
    - Strong experience with SQL, Python, and PySpark.
    - A background in supply chain, logistics, or distribution would be a plus.
    - Experience with Power BI is a plus.
    $75k-100k yearly est. 2d ago
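
The Resource 1 role involves pulling data from PostgreSQL for migration to AWS. Below is a minimal extract sketch using psycopg2, a common PostgreSQL driver chosen here for illustration and not named in the posting; the connection details, table, and watermark value are placeholders:

```python
import psycopg2

# Placeholder credentials; in AWS these would normally come from
# Secrets Manager rather than being hard-coded.
conn = psycopg2.connect(
    host="example-host", dbname="orders", user="etl_user", password="..."
)
try:
    with conn.cursor() as cur:
        # Incremental extract: only rows changed since the last watermark.
        cur.execute(
            "SELECT order_id, status, updated_at FROM orders WHERE updated_at > %s",
            ("2024-01-01",),
        )
        for order_id, status, updated_at in cur.fetchall():
            print(order_id, status, updated_at)
finally:
    conn.close()
```
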
  • Data Engineer

    Binarybees Business Solutions LLC

    Data scientist job in Itasca, IL

    Primary Location: Itasca, IL (Chicago's northwest suburbs). Hybrid: 2 days in-office, 3 days WFH.
    Type: Direct hire / permanent role. Candidates must be U.S. citizens or Green Card holders.

    The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.

    What You Bring to the Role (Ideal Experience):
    - Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
    - 5+ years of experience as a Data Engineer.
    - 3+ years of experience with the following:
      - Building and supporting data lakehouse architectures using Delta Lake and change data feeds (a small change-data-feed sketch follows this listing).
      - Working with PySpark and Python, with strong object-oriented programming (OOP) experience to extend existing frameworks.
      - Designing data warehouse table architecture, such as star schema or the Kimball method.
      - Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code.
      - Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets.
    - Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments.

    Nice to Haves:
    - Hands-on experience implementing data solutions using Microsoft Fabric.
    - Experience with machine learning/ML and data science tools.
    - Knowledge of data governance and security best practices.
    - Experience in a larger IT environment with 3,000+ users and multiple domains.
    - Current industry certifications from Microsoft cloud/data platforms or equivalent; one or more of the following is preferred: Microsoft Certified: Fabric Data Engineer Associate; Microsoft Certified: Azure Data Scientist Associate; Microsoft Certified: Azure Data Fundamentals; Google Professional Data Engineer; Certified Data Management Professional (CDMP); IBM Certified Data Architect - Big Data.

    What You'll Do (Skills Used in this Position):
    - Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
    - Extend and enhance existing OOP-based frameworks developed in Python and PySpark.
    - Partner with data scientists and analysts to define requirements and design robust data analytics solutions.
    - Ensure data quality and integrity through data cleansing, validation, and automated testing procedures.
    - Develop and maintain technical documentation, including requirements, design specifications, and test plans.
    - Implement and manage data integrations from multiple internal and external sources.
    - Optimize data workflows to improve performance and reliability and reduce cloud consumption.
    - Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery.
    - Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric.
    - Provide technical leadership and coordination for global development and support teams.
    - Participate in creating a safe and healthy workplace by adhering to organizational safety protocols.
    - Support additional projects and initiatives as assigned by management.
    $75k-100k yearly est. 1d ago
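
The Binarybees listing specifically mentions Delta Lake change data feeds. A small sketch of reading a change feed with the Delta Lake reader options; the table name, starting version, and the CDF-enabled Databricks/Fabric-style environment are assumptions:

```python
from pyspark.sql import SparkSession

# Assumes a Spark environment with Delta Lake, and a table created with
# delta.enableChangeDataFeed = true; "sales.orders" is hypothetical.
spark = SparkSession.builder.appName("cdf-sketch").getOrCreate()

changes = (
    spark.read.format("delta")
         .option("readChangeFeed", "true")
         .option("startingVersion", 5)   # read changes committed since version 5
         .table("sales.orders")
)
# _change_type distinguishes insert / update_preimage / update_postimage / delete.
changes.select("order_id", "_change_type", "_commit_version").show()
```

Downstream consumers typically use the `_change_type` column to apply incremental merges instead of reprocessing the whole table.
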
  • Data Analyst - Statistician

    The Judge Group (4.7 company rating)

    Data scientist job in Chicago, IL

    We're looking for a data-driven professional to join our team as a Data Analyst - Statistician, responsible for turning complex data into actionable insights that drive business decisions. In this role, you'll evaluate the usage, adoption, and efficacy of systems, uncover data mining opportunities, and integrate findings with traditional KPIs. You'll also support analytical projects, guide data formatting and availability, and deliver impactful reports, scorecards, and dashboards.

    What You'll Do:
    - Analyze and interpret large datasets to identify trends and opportunities.
    - Evaluate system performance and effectiveness, integrating insights with KPIs (a small scorecard sketch follows this listing).
    - Provide analytical support for cross-functional projects.
    - Design and share reports, dashboards, and scorecards to inform stakeholders.
    - Collaborate with teams to ensure data accessibility and quality.
    - Perform additional duties as assigned by your manager.

    Core Competencies:
    - Data Analysis & Interpretation: Ability to transform complex data into actionable insights.
    - Excel Expertise: Advanced skills in pivot tables, VLOOKUP, Power Query, and data visualization.
    - Tableau: Skilled in creating interactive dashboards and optimizing performance.
    - Snowflake: Hands-on experience with data warehousing and efficient SQL queries.
    - SQL Proficiency: Strong ability to write and optimize queries for data extraction and analysis.
    - Business Acumen: Align data strategies with organizational goals.
    - Collaboration & Communication: Work effectively with product owners, engineers, and stakeholders.
    $63k-84k yearly est. 4d ago
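
The Judge Group role is about turning usage and adoption data into KPI scorecards. A tiny pandas sketch of that kind of rollup, using invented data; a real version would query Snowflake and feed Tableau rather than printing:

```python
import pandas as pd

# Hypothetical usage log for a scorecard.
usage = pd.DataFrame({
    "team":    ["Sales", "Sales", "Support", "Support", "Sales"],
    "month":   ["2024-01", "2024-02", "2024-01", "2024-02", "2024-02"],
    "logins":  [120, 150, 80, 95, 30],
    "adopted": [True, True, False, True, False],
})

# Scorecard: logins per team per month, plus an adoption-rate KPI.
scorecard = usage.pivot_table(index="team", columns="month",
                              values="logins", aggfunc="sum")
adoption_rate = usage.groupby("team")["adopted"].mean().rename("adoption_rate")

print(scorecard)
print(adoption_rate)
```
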
  • Data Engineer

    Cliff Services Inc. (4.8 company rating)

    Data scientist job in Chicago, IL

    We are seeking a highly skilled Data Engineer with strong expertise in Scala, AWS, and Apache Spark. The ideal candidate will have 7+ years of hands-on experience building scalable data pipelines, distributed processing systems, and cloud-native data solutions.

    Key Responsibilities:
    - Design, build, and optimize large-scale data pipelines using Scala and Spark.
    - Develop and maintain ETL/ELT workflows across AWS services.
    - Work on distributed data processing using Spark, Hadoop, or similar.
    - Build data ingestion, transformation, cleansing, and validation routines.
    - Optimize pipeline performance and ensure reliability in production environments.
    - Collaborate with cross-functional teams to understand requirements and deliver robust solutions.
    - Implement CI/CD best practices, testing, and version control.
    - Troubleshoot and resolve issues in complex data flow systems.

    Required Skills & Experience:
    - 7+ years of data engineering experience.
    - Strong programming experience with Scala (must-have).
    - Hands-on experience with Apache Spark (core, SQL, streaming).
    - Solid experience with AWS cloud services (Glue, EMR, Lambda, S3, EC2, IAM, etc.).
    - High proficiency in SQL and relational/NoSQL data stores.
    - Strong understanding of data modeling, data architecture, and distributed systems.
    - Experience with workflow orchestration tools (Airflow, Step Functions, etc.).
    - Strong communication and problem-solving skills.

    Preferred Skills:
    - Experience with Kafka, Kinesis, or other streaming platforms.
    - Knowledge of containerization tools like Docker or Kubernetes.
    - Background in data warehousing or modern data lake architectures.
    $91k-120k yearly est. 4d ago
  • Data Architect - Pharma

    Mathco

    Data scientist job in Chicago, IL

    MathCo role: Data/AI Engineering Manager. Onsite in Chicago, 4 days in office (mandatory). Industry: Pharma (mandatory).

    As platform architect/owner, you will:
    - Lead the end-to-end architecture, lifecycle, and governance of the AI/analytics platform, defining standards, reusable components, and integration patterns.
    - Partner with AI/data architects to enable scalable model deployment and enhance agentic orchestration.
    - Translate business needs into platform features; manage onboarding, documentation, and cross-functional collaboration for platform adoption.
    - Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability.
    - Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations.
    - Stay current on key AI platform developments and assess their impact on architecture and client strategy.
    - Coach others, recognize their strengths, and encourage them to take ownership of their personal development.

    Skills Required:
    - Experience designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP).
    - Proven ability to carry out complex proofs of concept (POCs), pilot projects, and limited production rollouts for AI use cases, focusing on developing new or improved techniques and procedures.
    - Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices.

    Years of Experience: Minimum of 8 years of relevant experience, preferably with a consulting background and experience with Pharma clients.
    $83k-113k yearly est. 4d ago
  • Senior Data Engineer

    Programmers.Io (3.8 company rating)

    Data scientist job in Chicago, IL

    Note: This role requires visa-independent candidates (OPT, CPT, and H1B holders cannot be considered at this time).

    Responsibilities:
    - Design, develop, and maintain scalable ETL pipelines using AWS Glue (a small Glue job sketch follows this listing)
    - Collaborate with data engineers and analysts to understand data requirements
    - Build and manage data extraction, transformation, and loading processes
    - Optimize and troubleshoot existing Glue jobs and workflows
    - Ensure data quality, integrity, and security throughout the ETL process
    - Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
    - Maintain documentation of data workflows and processes
    - Stay updated with the latest AWS tools and best practices

    Required Skills:
    - Strong hands-on experience with AWS Glue, PySpark, and Python
    - Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
    - Experience with data warehousing concepts and tools
    - Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
    - Solid understanding of data modeling, data integration, and data management
    - Exposure to AWS Batch, Step Functions, and Data Catalogs
    $81k-112k yearly est. 23h ago
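
This listing is built around AWS Glue ETL jobs in PySpark. Here is a minimal Glue job skeleton using the standard awsglue scaffolding; it only runs inside a Glue job environment, and the catalog database, table, and S3 bucket names are hypothetical:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table),
# drop rows missing the key, and write Parquet to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)
df = dyf.toDF().dropna(subset=["order_id"])
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```
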
  • Data Engineer

    Acuity Analytics

    Data scientist job in Chicago, IL

    The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation. Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.

    Responsibilities:
    - Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability (a small connector sketch follows this listing).
    - Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
    - Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
    - Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
    - Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
    - Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
    - Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
    - Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.

    Requirements:
    - 10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
    - Expert-level SQL, including complex queries, schema design, and performance optimization.
    - Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
    - Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
    - Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
    - Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
    - Experience with Domino or willingness to work within a Domino-based model environment.
    - Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
    - Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
    $75k-100k yearly est. 3d ago
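
The Acuity role leans on Snowflake plus Python for ETL/ELT. A minimal sketch using the snowflake-connector-python package; the account, credentials, and the daily_prices table are placeholders, and production code would use key-pair auth or a secrets manager rather than literals:

```python
import snowflake.connector

# Placeholder connection parameters for illustration only.
conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="...",
    warehouse="ANALYTICS_WH", database="MARKET", schema="PRICING",
)
try:
    cur = conn.cursor()
    # Hypothetical pricing query: latest price date per security.
    cur.execute("""
        SELECT security_id, MAX(price_date) AS last_priced
        FROM daily_prices
        GROUP BY security_id
        ORDER BY last_priced DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```
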
  • Snowflake Data Engineer

    VC5 Consulting

    Data scientist job in Chicago, IL

    Join a dynamic team focused on building innovative data solutions that drive strategic insights for the business. This is an opportunity to leverage your expertise in Snowflake, ETL processes, and data integration.

    Key Responsibilities:
    - Develop Snowflake-based data models to support enterprise-level reporting.
    - Design and implement batch ETL pipelines for efficient data ingestion from legacy systems.
    - Collaborate with stakeholders to gather and understand data requirements.

    Required Qualifications:
    - Hands-on experience with Snowflake for data modeling and schema design.
    - Proven track record in developing ETL pipelines and understanding transformation logic.
    - Solid SQL skills to perform complex data transformations and optimization.

    If you are passionate about building cutting-edge data solutions and want to make a significant impact, we would love to see your application! #11290
    $75k-100k yearly est. 3d ago
  • Data Engineer

    Mastek

    Data scientist job in Chicago, IL

    Job Title: Data Engineer - Workflow Automation
    Employment Type: Contract-to-hire or full-time
    Department: Project Scion / Information Management Solutions

    Key Responsibilities:
    - Design, build, and manage workflows using Automic or similar tools such as Autosys, Apache Airflow, or Cybermation (a small Airflow sketch follows this listing).
    - Orchestrate workflows across multi-cloud ecosystems (AWS, Azure, Snowflake, Databricks, Redshift).
    - Monitor and troubleshoot workflow execution, ensuring high availability, reliability, and performance.
    - Administer and maintain workflow platforms.
    - Collaborate with architecture and infrastructure teams to align workflows with cloud strategies.
    - Support migrations, upgrades, and workflow optimization efforts.

    Required Skills:
    - 5+ years of experience in IT managing production-grade systems.
    - Hands-on experience with Automic or similar enterprise workflow automation tools.
    - Strong analytical and problem-solving skills.
    - Good communication and documentation skills.
    - Familiarity with cloud platforms and technologies (e.g., AWS, Azure, Snowflake, Databricks).
    - Scripting proficiency (e.g., Shell, Python).
    - Ability to manage workflows across hybrid environments and optimize performance.
    - Experience managing production operations and support activities.

    Preferred Skills:
    - Experience with CI/CD pipeline integration.
    - Knowledge of cloud-native orchestration tools.
    - Exposure to monitoring and alerting systems.
    $75k-100k yearly est. 2d ago
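
The Mastek role involves workflow orchestration with tools like Apache Airflow. A minimal two-task Airflow DAG sketch (using the Airflow 2.4+ `schedule` parameter); the task bodies are stand-ins for real extract/load logic:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")      # placeholder for real extraction

def load():
    print("loading to warehouse")     # placeholder for real load

# Two-step workflow with a dependency, retries, and a daily schedule.
with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load               # load runs only after extract succeeds
```

Tools like Automic or Autosys express the same idea (tasks, dependencies, schedules, retries) through job definitions rather than Python code.
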
  • Data Engineer

    Creative Planning Inc. (4.6 company rating)

    Data scientist job in Kansas, IL

    For more information, visit: *************************com/wp-content/uploads/2025/05/Data-Engineer-1.pdf
    $85k-123k yearly est. 2d ago

Learn more about data scientist jobs

How much does a data scientist earn in Champaign, IL?

The average data scientist in Champaign, IL earns between $60,000 and $110,000 annually. For comparison, the national range for data scientists is $75,000 to $148,000 (a quick arithmetic check follows below).

Average data scientist salary in Champaign, IL

$81,000
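
The page does not state how the $81,000 average is derived from the $60,000-$110,000 range. As a purely illustrative check, the quoted figure sits close to the geometric midpoint of the range rather than the arithmetic one; this is an observation, not a statement of the site's methodology:

```python
from math import sqrt

low, high = 60_000, 110_000

arithmetic_mid = (low + high) / 2   # 85,000
geometric_mid = sqrt(low * high)    # about 81,240, near the quoted $81,000

print(f"arithmetic midpoint: ${arithmetic_mid:,.0f}")
print(f"geometric midpoint:  ${geometric_mid:,.0f}")
```
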