
Data scientist jobs in Fort Campbell North, KY

1,094 jobs
  • Data Scientist

The Aspen Group (TAG)

    Data scientist job in Chicago, IL

    The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S. It has supported over 20,000 healthcare professionals and team members, with close to 1,500 health and wellness offices across 48 states in four distinct categories: dental care, urgent care, medical aesthetics, and animal health. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Chapter Aesthetic Studio, and Lovet Pet Health Care. Each brand has access to a deep community of experts, tools, and resources to grow its practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.

    As a reflection of our current needs and planned growth, we are very pleased to offer a new opportunity to join our dedicated team as a Data Scientist, playing a key role in shaping how patients access care across a growing network of brands including Aspen Dental, ClearChoice, WellNow, Chapter Aesthetic Studio, and Lovet. In this role, you won't just analyze data - you'll lead end-to-end initiatives that shape how we optimize revenue across offices, days, and even hours. From designing strategies, deploying solutions, and building performance dashboards to partnering with data science on automation and collaborating with teams across Finance, Marketing, Product, Technology, and Operations, you'll have a direct hand in driving measurable results. This role is ideal for someone who excels at turning data into decisions, building repeatable processes, and uncovering insights that drive measurable improvements in revenue performance and patient access. You will lead initiatives across forecasting, scheduling optimization, demand modeling, capacity planning, and revenue strategy, while also shaping how analytics is delivered and scaled across the organization. If you're a builder who loves solving complex problems with data, operational logic, and automation, this opportunity is for you.

    Essential Responsibilities:

    Revenue Strategy & Optimization: Lead strategy development for optimizing revenue performance at the office, day, and hour level by leveraging forecasting, scheduling, and demand modeling, while balancing patient access and operational efficiency. Build analytical frameworks to support pricing, demand forecasting, scheduling, and access optimization. Identify revenue opportunities through data-driven analysis of booking trends, cancellations, no-shows, and utilization. Monitor and update demand and schedule availability through analysis of historical and future booking trends, the pricing environment, industry capacity trends, the competitive landscape, and other factors.

    Analytics, Insights & Experimentation: Develop and maintain forecasting, demand models, dashboards, and scenario analyses. Run experiments and structured tests to evaluate new operational and scheduling strategies. Create clear, actionable insights that influence senior leaders and cross-functional partners.

    Process Building & Automation: Map existing manual workflows and identify opportunities to automate recurring analyses or reporting.

    Cross-Functional Leadership: Work closely with Operations, Finance, Product, Marketing, and Clinical teams to align strategies and execution. Help shape and scale the function by building new playbooks, reports, and best practices. Act as a subject matter expert in forecasting, demand modeling, and capacity optimization.

    Qualifications (Skills-Based): We welcome candidates with diverse academic and career pathways. You may have gained your skills through industry experience, coursework, certificates, or hands-on practice.

    Experience/Education: 5+ years of experience in Revenue Management, Pricing, Operations Research, or Supply/Demand Optimization (airline, travel, healthcare, or multi-location service industries preferred). Bachelor's degree in Business, Finance, Economics, Analytics, or Statistics required; Master's degree a plus. Experience working alongside data science/engineering teams to automate and scale analytics processes. Exceptional analytical, problem-solving, and communication skills, with the ability to influence senior stakeholders. Detail-oriented, self-starter mindset with a passion for driving results. Strong analytical and quantitative skills, with experience in forecasting, modeling, or optimization. Strong technical proficiency in SQL and a modern BI platform (e.g., Tableau, Looker). Familiarity with scripting (e.g., Python or R) or automation tools (e.g., dbt, Airflow) is not required, but helpful.

    Additional Job Description: Base Pay Range: $115,000-$130,000, plus 10% annual bonus. (Actual pay may vary based on experience, performance, and qualifications.) A generous benefits package that includes paid time off, health, dental, vision, and a 401(k) savings plan with match. If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
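
    As a sketch of the booking-trend analysis this listing describes (no-show rate by office and hour), assuming hypothetical appointment records and column names:

```python
import pandas as pd

# Hypothetical appointment records; a real feed would span offices, days, hours
appts = pd.DataFrame({
    "office": ["CHI01", "CHI01", "CHI01", "CHI02", "CHI02", "CHI02"],
    "hour":   [9, 9, 10, 14, 14, 15],
    "status": ["completed", "no_show", "completed",
               "completed", "cancelled", "no_show"],
})

# No-show rate by office and hour: the kind of utilization signal that feeds
# the scheduling and capacity decisions the posting mentions
no_show_rate = (appts.assign(no_show=appts["status"].eq("no_show"))
                     .groupby(["office", "hour"])["no_show"]
                     .mean()
                     .rename("no_show_rate")
                     .reset_index())
print(no_show_rate)
```
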
    $115k-130k yearly 4d ago
  • Data Scientist

    Talent (4.8 company rating)

    Data scientist job in Chicago, IL

    This role supports a financial services organization by applying advanced data science and machine learning techniques to solve complex business problems using large-scale datasets. The position focuses on end-to-end feature engineering, model development, and writing production-quality code in a fast-paced, collaborative environment. The individual partners closely with product and engineering teams to uncover trends, improve algorithm performance, and drive data-informed decisions.

    Key Responsibilities:
    - Independently analyze and aggregate large, complex datasets to identify anomalies that affect model and algorithm performance
    - Own the full lifecycle of feature engineering, including ideation, development, validation, and selection
    - Develop and maintain production-quality code in a fast-paced, agile environment
    - Solve challenging analytical problems using extremely large (terabyte-scale) datasets
    - Evaluate and apply a range of machine learning techniques to determine the most effective approach for business use cases
    - Collaborate closely with product and engineering partners to identify trends, opportunities, and data-driven solutions
    - Communicate insights, results, and model performance clearly through visualizations and explanations tailored to non-technical stakeholders
    - Adhere to established standards and practices to ensure the security, integrity, and confidentiality of systems and data

    Minimum Qualifications:
    - Bachelor's degree in Mathematics, Statistics, Computer Science, Operations Research, or a related field
    - At least 4 years of professional experience in data science, analytics, engineering, or a closely related discipline
    - Hands-on experience building data science pipelines and workflows using Python, R, or similar programming languages
    - Strong SQL skills, including query development and performance tuning
    - Experience working with large-scale, high-volume (terabyte-scale) datasets
    - Practical experience applying a variety of machine learning methods and understanding the parameters that impact model performance
    - Familiarity with common machine learning libraries (e.g., scikit-learn, Spark ML, or similar)
    - Experience with data visualization tools and techniques
    - Ability to write clean, maintainable, and production-ready code
    - Strong interest in rapid prototyping, experimentation, and proof-of-concept development
    - Proven ability to communicate complex analytical findings to non-technical audiences
    - Ability to meet standard employment screening requirements
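
    A minimal sketch of the feature-engineering-to-model workflow this listing describes, using scikit-learn (named in the posting); the dataset, columns, and model choice are hypothetical:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical transaction-level data standing in for a terabyte-scale source
df = pd.DataFrame({
    "amount":  [12.0, 250.5, 7.3, 99.9, 310.0, 5.5, 88.0, 170.2],
    "channel": ["web", "pos", "web", "atm", "pos", "web", "atm", "pos"],
    "label":   [0, 1, 0, 1, 1, 0, 0, 1],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["amount", "channel"]], df["label"],
    test_size=0.25, stratify=df["label"], random_state=0)

# Feature engineering (scaling + one-hot encoding) and the model live in one
# pipeline, so the same transformations run at training and scoring time
pipeline = Pipeline([
    ("features", ColumnTransformer([
        ("num", StandardScaler(), ["amount"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel"]),
    ])),
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))
```
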
    $71k-100k yearly est. 3d ago
  • Data Scientist

    Alined Consulting Group

    Data scientist job in Chicago, IL

    Minimum Qualifications:
    ● 5+ years of experience leading data science projects that have a direct impact on a company's objectives, or a PhD in a quantitative field such as Statistics, Data Science, or Computer Science with 3+ years of experience
    ● 5+ years of experience utilizing data mining techniques and ML models to assist business decision-making
    ● Deep expertise in statistical methods and machine learning concepts, with the ability to mentor team members on methodologies, model tuning, and evaluation techniques
    ● 2+ years of hands-on experience with deep learning frameworks, LLMs, GenAI tools, and NLP techniques
    ● 5+ years of experience using Python to process large, diverse datasets and to develop and deploy predictive models in cloud-based environments and other computing platforms
    ● 5+ years of experience with SQL and cloud-hosted data platforms (Google Cloud Platform, AWS, etc.)
    ● Demonstrated ability to assist business decision-making through data mining and machine learning
    ● Strong communication skills to collaborate effectively with business stakeholders; must be able to interact cross-functionally and drive both business and technical discussions
    ● Ability to translate complex business problems into actionable project plans and solve them
    $70k-97k yearly est. 4d ago
  • Data Scientist

    Astec Digital

    Data scientist job in Chattanooga, TN

    BUILT TO CONNECT

    At Astec, we believe in the power of connection and the importance of building long-lasting relationships with our employees, customers, and the communities we call home. With a team more than 4,000 strong, our employees are our #1 advantage. We invest in skills training and provide opportunities for career development to help you grow along with the business. We offer programs that support physical safety, as well as benefits and resources to enhance total health and wellbeing, so you can be your best at work and at home. Our equipment is used to build the roads and infrastructure that connect us to each other and to the goods and services we use. We are an industry leader known for delivering innovative solutions that create value for our customers. As our industry evolves, we are using new technology and data like never before. We're looking for creative problem solvers to build the future with us. Connect with us today and build your career at Astec.

    LOCATION: Chattanooga, TN - On-site / Hybrid (role must report on-site regularly)

    ABOUT THE POSITION

    The Data Scientist will play a key role in establishing the analytical foundation of Astec Smart Services. This individual will lead efforts to build pipelines from source to cloud, define data workflows, build predictive models, and help guide the team's approach to turning data into customer value. He or she will work closely within Smart Services and cross-functionally to ensure insights are actionable and impactful. The role blends data architecture, data engineering, and data science to help build Smart Services' analytical foundation. This person will be instrumental in helping to build Astec's digital transformation and aftermarket strategy.

    Deliverables & Responsibilities:
    - Data Engineering: Build and maintain robust data pipelines for ingestion, transformation, and storage. Optimize ETL processes for scalability and performance.
    - Data Architecture: Design and implement data models that support analytics and operational needs. Define standards for data governance, security, and integration.
    - Data Science: Develop predictive models and advanced analytics to support business decisions. Apply statistical and machine learning techniques to large datasets.
    - Bring strong business acumen to understand decision drivers with internal and external customers.
    - Collaborate with individuals and departments across the company to ensure insights are aligned with customer needs and drive value.

    To be successful in this role, your experience and competencies are:
    - Bachelor's degree in data science, engineering, or a related field (advanced degrees a plus).
    - 5+ years of experience in data science, including at least 3 years in industrial or operational environments.
    - Strong communication and project management skills are critical.
    - Proficiency in data pipeline tools (e.g., Spark, Airflow) and cloud platforms (Azure, AWS, GCP).
    - Strong understanding of data modeling principles and database technologies (SQL/NoSQL).
    - Hands-on experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and statistical analysis.
    - Ability to work across data architecture design and data science experimentation.
    - Programming: Python, SQL, and optionally Scala or Java.
    - Familiarity with distributed systems and big data technologies.
    - Strong communication skills for translating technical insights into business value.
    - Ability to work across technical, commercial, and customer-facing teams.

    Supervisor and Leadership Expectations: This role will not have supervisory or managerial responsibilities. It will have program management responsibilities.

    Our Culture and Values: Employees who become part of Astec embody the values below throughout their work: continuous devotion to meeting the needs of our customers; honesty and integrity in all aspects of business; respect for all individuals; preserving entrepreneurial spirit and innovation; safety, quality, and productivity as means to ensure success.

    EQUAL OPPORTUNITY EMPLOYER: As an Equal Opportunity Employer, Astec does not discriminate on the basis of race, creed, color, religion, gender (sex), sexual orientation, gender identity, marital status, national origin, ancestry, age, disability, citizenship status, a person's veteran status, or any other characteristic protected by law or executive order.
    $68k-94k yearly est. 1d ago
  • Data Science SME

    Marathon TS

    Data scientist job in Knoxville, TN

    This person is responsible for extracting insights and building predictive models from data. They analyze curated data, build ML models, and validate them with experiments. The data scientist researches and defines business problems and translates them into analytical or ML tasks, and conducts exploratory data analysis to identify patterns, correlations, and anomalies. They will be in charge of designing, training, and evaluating statistical, ML, or AI models. They work with SMEs to determine use cases, model requirements, and alternative solutions; define requirements and specifications from which Data Specialists build data structures; and work closely with Data Engineers and customers to ensure correct implementation.

    Required Experience: Bachelor's in Mathematics, Operations Research, Business, Economics, or a related field preferred. Minimum 7+ years designing and implementing mathematical and statistical models. Strong experience with Python/R, PyTorch, Scikit-Learn, Jupyter, visualization, and A/B testing. SQL (Oracle, MySQL, or Hive); Oracle DB and Hadoop experience preferred. Ability to work independently and under deadlines. Strong communication and problem-solving skills.

    Clearance Requirement: Must be a US Citizen with an active DoD Secret security clearance.
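
    Since the listing pairs Python with A/B testing, here is a minimal significance-test sketch with SciPy; the conversion counts are made up:

```python
# Two-variant A/B test on conversion counts (hypothetical numbers):
# chi-squared test of independence on the 2x2 outcome table.
from scipy import stats

control = {"conversions": 480, "visitors": 10_000}
variant = {"conversions": 560, "visitors": 10_000}

table = [
    [control["conversions"], control["visitors"] - control["conversions"]],
    [variant["conversions"], variant["visitors"] - variant["conversions"]],
]
chi2, p_value, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # small p suggests a real lift
```
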
    $69k-95k yearly est. 3d ago
  • Data Architect

GeoWealth

    Data scientist job in Oak Brook, IL

    GeoWealth is a Chicago-based fintech firm that offers an award-winning digital advisory platform, including Turnkey Asset Management Platform ("TAMP") capabilities. We deliver a comprehensive and fully integrated wealth management technology platform to professionals in the financial services industry.

    OPPORTUNITY: We're looking for a Data Architect to join our Engineering Team. In this role, you will oversee the overall data architecture, helping us deliver our best-in-class solutions to our customers. This role will be key in organizing, designing, and leading our team through well-designed data architecture. If you love architecting complex systems, delivering customer-focused software, designing best-in-class systems, and leading data architecture design, this role is for you.

    RESPONSIBILITIES: Own data architecture and oversee data implementation. Set coding/implementation standards. Lead our data warehouse design. Deliver performant, maintainable, and quality software in collaboration with our teams. Improve our database design to reduce replication and increase performance. Partner with other architects and engineers to produce better-designed systems.

    SKILLS, KNOWLEDGE, AND EXPERIENCE: 5+ years of experience as a Data Architect or in an equivalent role. Bachelor's degree in computer science or an equivalent degree. Hands-on experience with Oracle. Has designed and implemented a data warehouse. Experience with the following is preferred but not required: designing and building monolithic and distributed systems, Postgres, Logi Symphony, Power BI, Java, and Jira/Confluence.

    COMPANY CULTURE & PERKS - HIGHLIGHTS:
    Investing in Your Growth 🌱 Casual work environment with fun, hard-working, and open-minded coworkers. Competitive salary with opportunity for a performance-based annual bonus. Opportunities to up-skill, explore new responsibilities, and network across departments. Defined and undefined career pathways allowing you to grow your own way.
    Work/Life Balance 🗓️ Flexible PTO and work schedule to ensure our team balances work and life. Hybrid work schedule. Maternity and paternity leave.
    Taking Care of Your Future ♥️ Medical, dental, vision, and disability insurance. Free access to Spring Health, a comprehensive mental health solution. 401(k) with company match and a broad selection of investments. Voluntary insurance: short-term disability, long-term disability, and life insurance. FSA and transit benefits for employees who contribute pre-tax dollars.
    Other Fun Stuff ⭐ Free on-site gym and parking. Weekly catered lunches in the office, plus monthly happy hours. Stocked kitchen with snacks and drinks. GeoWealth was recognized as a "Best Place to Work" by Purpose Jobs in 2025, 2024, and 2022, and by Built In in 2024, 2023, and 2022.

    SALARY RANGE: Starting at $170,000-$220,000 + benefits + opportunity for a performance bonus. This is an estimated range based on the circumstances at the time of posting and may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
    $170k-220k yearly 3d ago
  • AI Data Strategy Internship

    Catholic Extension Society (3.6 company rating)

    Data scientist job in Chicago, IL

    Chicago-based, no relocation support; we are looking to fill this role ASAP. *This role is intended to be converted to a full-time position for the right candidate. *No ChatGPT during the interview.

    The AI Data Strategy Intern will play a key role in advancing data-driven decision-making processes while contributing to the development of AI models and supporting and enhancing the organization's AI environment. Reporting to the Director of Data Strategy, the intern will work with diverse datasets, explore their sources, and utilize the data warehouse for data extraction, transformation, and analysis. This role offers an excellent opportunity to gain hands-on experience in AI and data analytics while supporting the mission to strengthen faith communities and make a meaningful impact.

    Essential Functions & Job Performance Criteria: Analyze donor data to uncover insights and patterns. Utilize SQL to cleanse, transform, and extract donor information. Assist in developing and maintaining fundraising performance reports using Power BI. Contribute to the development, maintenance, and optimization of AI models to support predictive fundraising outcomes and other organizational initiatives. Support and enhance the organization's AI environment, ensuring its effective use in analytics and decision-making. Perform data management tasks such as deduplication, migration, and addressing data integrity issues. Adhere to security standards and protocols in handling donor information. Troubleshoot and resolve data-related challenges effectively. Identify opportunities for process enhancement and efficiency within fundraising operations. Assist with other duties as needed to support fundraising and AI-related initiatives.

    Qualifications: Pursuing or holding a bachelor's degree in Data Science, Computer Science, or Engineering. Proficiency in data structures and SQL, with strong data manipulation skills. Familiarity with data analytics concepts and AI/ML fundamentals. Interest in AI development and its applications. Strong problem-solving skills with a technical mindset. Ability to communicate analysis results to both technical and non-technical audiences. Demonstrates a growth mindset and a positive attitude. Interest in working with data for performance measurement, decision-making, and AI applications. Ability to collaborate effectively in a team-oriented, fast-paced environment at our downtown Chicago location. Strong organizational and prioritization skills. Capacity to work independently while maintaining open communication. Self-motivated with excellent organizational abilities. Demonstrates personal integrity, credibility, and a dedication to the organization's mission. Appreciation for and understanding of the teachings and traditions of the Roman Catholic Church. A passion for your field and a sense of humor.

    This is a full-time position, but part-time is optional if needed (students only).
    $42k-68k yearly est. 2d ago
  • Data Engineer

    Scaylor

    Data scientist job in Chicago, IL

    Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used. Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust. We're a small team of four - one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.

    The Role: You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients. You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.

    What You'll Work On:
    • Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
    • Design schemas and transformation logic for structured and semi-structured data
    • Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
    • Help connect backend services to our frontend dashboards (React, Node.js, or similar)
    • Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
    • Collaborate with clients to understand their data problems and design workflows that fix them

    You'd Be Great Here If You:
    • Have 3-6 years of experience in data engineering, backend, or full-stack roles
    • Write clean, maintainable code in Python and JS
    • Understand ETL, data normalization, and schema mapping
    • Have experience with SQL and working with legacy databases or systems
    • Are comfortable managing cloud services and debugging data pipelines
    • Enjoy solving messy data problems and care about building things that last

    Nice to Have:
    • Familiarity with GCP or SQL databases
    • Understanding of enterprise data flows (ERP, CRM, or financial systems)
    • Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
    • Interest in lightweight ML or LLM-assisted data transformation

    Why Join Scaylor:
    • Be one of the first five team members shaping the product and the company
    • Work directly with the founders and help define Scaylor's technical direction
    • Build infrastructure that solves real problems for real companies
    • Earn meaningful equity and have a say in how the company grows

    Compensation:
    • $130k-$150k with a raise based on set revenue triggers
    • 0.4% equity
    • Relocation to Chicago, IL required
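
    As a sketch of the data-access API layer this posting mentions (FastAPI is one of the stacks it names), a minimal endpoint; the model and records are hypothetical:

```python
# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Account(BaseModel):
    account_id: str
    normalized_name: str
    source_system: str

# Stand-in for a cleaned, normalized table produced by the pipelines
ACCOUNTS = [
    Account(account_id="A1", normalized_name="Acme Corp", source_system="erp"),
    Account(account_id="A2", normalized_name="Acme Corp", source_system="crm"),
]

@app.get("/accounts", response_model=list[Account])
def list_accounts() -> list[Account]:
    """Return normalized accounts from the single source of truth."""
    return ACCOUNTS
```
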
    $130k-150k yearly 3d ago
  • Sr. Data Engineer - PERM - MUST BE LOCAL

    Resource 1, Inc.

    Data scientist job in Naperville, IL

    Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. The candidate must be local to Illinois because future hybrid onsite work in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities, and a profit-sharing bonus. This position is focused on building modern data pipelines, integrations, and back-end data solutions. The selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts, and other engineers to design and deliver data solutions that power business insights and AI products.

    Responsibilities: Design and develop scalable data pipelines for ingestion, transformation, and integration using AWS services. Pull data from PostgreSQL and SQL Server to migrate to AWS. Create and modify jobs in AWS and modify logic in SQL Server. Create SQL queries, stored procedures, and functions in PostgreSQL and Redshift. Provide input on data modeling and schema design as needed. Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS. Support inbound/outbound data flows, including APIs, S3 replication, and secured data. Assist with data visualization/reporting as needed. Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.

    Qualifications: 5+ years of data engineering experience. Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum). Strong experience with SQL, Python, and PySpark. A background in supply chain, logistics, or distribution is a plus. Experience with Power BI is a plus.
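
    A minimal sketch of the extract-and-stage step this posting describes (PostgreSQL out, S3 in, ready for a Redshift COPY); the connection details, table, and bucket names are hypothetical:

```python
import csv
import io

import boto3
import psycopg2

# Hypothetical source database and query
conn = psycopg2.connect(host="db.internal", dbname="orders",
                        user="etl", password="...")
cur = conn.cursor()
cur.execute(
    "SELECT order_id, sku, qty, shipped_at FROM shipments WHERE shipped_at >= %s",
    ("2025-01-01",))

# Write rows to an in-memory CSV, header taken from cursor metadata
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([col[0] for col in cur.description])
writer.writerows(cur.fetchall())

# Land the file in S3, where a Redshift COPY (or Spectrum) can pick it up
boto3.client("s3").put_object(
    Bucket="etl-staging",
    Key="shipments/2025-01-01.csv",
    Body=buf.getvalue().encode("utf-8"))
```
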
    $75k-100k yearly est. 3d ago
  • Data Engineer

    Binarybees Business Solutions LLC

    Data scientist job in Itasca, IL

    Primary Location: Itasca, IL (Chicago's northwest suburbs); hybrid - 2 days in-office, 3 days WFH. Type: Direct hire / permanent role. Must be a U.S. citizen or green card holder.

    The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.

    What You Bring to the Role (Ideal Experience): Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). 5+ years of experience as a Data Engineer. 3+ years of experience with the following: building and supporting data lakehouse architectures using Delta Lake and change data feeds; working with PySpark and Python, with strong object-oriented programming (OOP) experience to extend existing frameworks; designing data warehouse table architecture such as star schema or the Kimball method; writing and maintaining versioned Python wheel packages to manage dependencies and distribute code; creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets. Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments.

    Nice to Haves: Hands-on experience implementing data solutions using Microsoft Fabric. Experience with machine learning/ML and data science tools. Knowledge of data governance and security best practices. Experience in a larger IT environment with 3,000+ users and multiple domains. Current industry certifications from Microsoft cloud/data platforms or equivalent certifications. One or more of the following is preferred: Microsoft Certified: Fabric Data Engineer Associate; Microsoft Certified: Azure Data Scientist Associate; Microsoft Certified: Azure Data Fundamentals; Google Professional Data Engineer; Certified Data Management Professional (CDMP); IBM Certified Data Architect - Big Data.

    What You'll Do (Skills Used in this Position): Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data. Extend and enhance existing OOP-based frameworks developed in Python and PySpark. Partner with data scientists and analysts to define requirements and design robust data analytics solutions. Ensure data quality and integrity through data cleansing, validation, and automated testing procedures. Develop and maintain technical documentation, including requirements, design specifications, and test plans. Implement and manage data integrations from multiple internal and external sources. Optimize data workflows to improve performance and reliability and reduce cloud consumption. Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery. Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric. Provide technical leadership and coordination for global development and support teams. Participate in creating a safe and healthy workplace by adhering to organizational safety protocols. Support additional projects and initiatives as assigned by management.
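
    A minimal PySpark sketch of reading a Delta Lake change data feed, per the lakehouse experience this listing leads with; the table name and starting version are hypothetical, and the table is assumed to have delta.enableChangeDataFeed enabled:

```python
# Assumes a Spark session with the Delta Lake extensions configured
# (e.g., a Databricks cluster or delta-spark installed locally).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdf-demo").getOrCreate()

changes = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 2)   # hypothetical starting version
           .table("sales.orders"))         # hypothetical table

# _change_type distinguishes insert / update_preimage / update_postimage / delete;
# drop pre-images to keep only the post-change state of each row
changes.filter("_change_type != 'update_preimage'").show()
```
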
    $75k-100k yearly est. 3d ago
  • Data Analyst

    Southern Orthodontic Partners

    Data scientist job in Nashville, TN

    Southern Orthodontic Partners is a fast-paced, entrepreneurial organization offering a premier network of orthodontic care. We provide business management expertise to our partner practices, enabling orthodontic professionals to focus on delivering the best possible care to their patients. Our partner practices benefit from our centralized support system, operational best practices, recruiting network, and marketing expertise. Our mission is to positively impact the orthodontic profession through a partnership of clinical and service excellence with the country's leading doctors.

    At Southern Orthodontic Partners we rely heavily on data from our practices, not only to track progress toward operational objectives but also to provide insight for strategic and timely decisions related to the operations of the business. We are seeking a skilled Data Analyst with strong SQL development capabilities and hands-on experience in data engineering workflows. This role combines analytical expertise with technical proficiency in data transformation, pipeline management, and business intelligence tools. The ideal candidate will bridge the gap between data engineering and analytics, ensuring reliable data flows while delivering actionable insights.

    In this role you will: Build and maintain data pipelines using Azure Data Factory, Python, and other ETL tools. Monitor data ingestion processes and troubleshoot issues proactively to ensure data availability. Design, develop, and optimize SQL queries to extract, transform, and analyze large datasets. Perform ad hoc analysis to support business questions and operational needs. Collaborate with data engineers and business stakeholders to understand requirements and deliver solutions. Document data workflows, transformations, and troubleshooting procedures. Ensure data quality through validation, testing, and monitoring procedures. Create visualizations and reports using Power BI, Tableau, or similar tools. Support data warehouse development and maintenance as needed.

    Technical Skills:
    - SQL Development: Intermediate to advanced proficiency in SQL for complex querying, performance optimization, and database management.
    - Data Transformation & ETL: Hands-on experience managing data load and transformation processes using Python for data manipulation and scripting, Azure Data Factory for orchestrating data pipelines, and other industry-standard ETL/ELT tools and frameworks.
    - API Integration: Practical experience working with REST APIs, including consuming data from external APIs using authentication methods (OAuth, API keys, tokens); pushing data to external systems via API endpoints; handling API pagination, rate limiting, and error responses; and building automated data workflows that leverage APIs for data ingestion and distribution.
    - Problem-Solving: Demonstrated ability to perform ad hoc querying and troubleshoot data ingestion issues with minimal guidance.
    - Data Quality: Experience validating data integrity and implementing error-handling procedures.

    Core Competencies: Strong analytical and problem-solving mindset. Ability to work independently on technical troubleshooting. Effective communication skills for documenting processes and collaborating with stakeholders. Attention to detail in data validation and quality assurance.

    Preferred Job Skills (Highly Desired): Experience with Databricks for big data processing and analytics. Proficiency in Azure Data Factory for cloud-based data integration. Business intelligence expertise using Power BI and/or Tableau for visualization and reporting.
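
    A minimal sketch of the paginated, rate-limit-aware REST ingestion described under Technical Skills; the endpoint, token, and pagination fields are hypothetical:

```python
import time

import requests

BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # hypothetical bearer token

def fetch_all(url: str) -> list[dict]:
    """Walk a paginated API, backing off on 429 rate-limit responses."""
    records, page = [], 1
    while True:
        resp = requests.get(url, headers=HEADERS,
                            params={"page": page}, timeout=30)
        if resp.status_code == 429:
            # Rate limited: honor Retry-After if present, then retry same page
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        resp.raise_for_status()                  # surface other API errors
        payload = resp.json()
        records.extend(payload["results"])       # hypothetical response shape
        if not payload.get("next_page"):         # hypothetical pagination flag
            break
        page += 1
    return records

rows = fetch_all(BASE_URL)
print(f"ingested {len(rows)} records")
```
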
    $51k-72k yearly est. 1d ago
  • Principal Data Architect

    Independence Pet Holdings

    Data scientist job in Chicago, IL

    Established in 2021, Independence Pet Holdings is a corporate holding company that manages a diverse and broad portfolio of modern pet health brands and services, including insurance, pet education, lost recovery services, and more throughout North America. We believe pet insurance is more than a financial product and build solutions to simplify the pet parenting journey and help improve the well-being of pets. As a leading authority in the pet category, we operate with a full stack of resources, capital, and services to support pet parents. Our multi-brand and omni-channel approach includes our own insurance carrier, insurance brands, and partner brands.

    Role Overview: In close collaboration with the CDO team, this role leads enterprise data architecture and strategy for IPH, driving the architecture and governance of the unified data platform and enabling advanced analytics and AI/ML capabilities across all pet insurance and non-insurance brands. It combines data platform, architecture, engineering, data governance, and business intelligence architecture to ensure a unified, secure, and scalable data ecosystem.

    Key Focus Areas: Building a unified Customer Data Platform (CDP) across multiple zones/domains/lines of business; enabling data monetization through cross-sell and up-sell insights and customer journey analytics; driving Gen-AI/agentic AI adoption (agents, skills, RAG) with data as the foundation; handling non-standard data from various sources (pet metadata, third-party data with formatting issues); and transforming non-insurance data into pet insurance actionable insights and vice versa.

    Key Responsibilities:
    Calandra Data Platform Architecture (30%): Architect, design, and support/govern the implementation of the Azure-based enterprise data platform using cloud-native services. Architect a medallion lakehouse (Bronze/Silver/Gold tiers) for structured and unstructured data. Enable real-time streaming and batch processing for analytics and operational insights. Architect a multi-region, multi-AZ architecture with multiple instances per zone for high availability. Ensure a unified CDP across multiple zones/domains/lines of business.
    Data Governance & Master Data Management (20%): With the CDO team, establish an enterprise data governance framework and implement a data catalog for discovery and lineage. Define an MDM strategy to ensure a single source of truth for critical data domains (Customer, Product, Policy). Implement data quality monitoring and remediation processes. Handle non-standard input data with formatting issues from various sources.
    Analytics & BI Architecture (20%): Build a Customer Data Platform (CDP) for all brands, including enrichment pipelines and analytical models. Enable self-service BI through Power BI and semantic modeling. Define architecture for reporting and advanced analytics capabilities. Drive data monetization strategies for cross-sell/up-sell and customer journey insights.
    Integration & Standards (15%): Define data integration patterns, pipeline architecture, and API standards. Ensure compliance with SOC 2, PCI DSS, and internal security baselines. Align with the Calandra Toolkit and target architecture standards.
    Strategic Leadership (15%): With the CDO team, develop a future-state data architecture roadmap aligned with IPH's digital transformation goals. Partner with business and technology leaders to drive adoption and maturity of data capabilities. Drive the organization toward 90% agentic-driven operations through AI/ML adoption.

    Data Architecture Process Workflow (Expected Participation):
    As Solution Architect: Define high-level design, scope, and intended outcomes. Align to business goals, governance standards, and target architecture (Calandra Toolkit). Hand off requirements to Data Architecture.
    As Data Architect: Translate the solution into data models, integrations, security, and governance patterns. Validate scalability, regulatory compliance, and long-term roadmap alignment. Produce clear technical specs for engineering.
    Architecture Review: Conduct a "Solution Architecture + Data Architecture" review in collaboration with the CDO team. Support implementation, design, and go-live; performance, security, and governance; reusability and standards compliance. Ensure required fixes are addressed before proceeding.

    Technical Requirements - Required Platforms & Tools:
    - Data Platforms: Databricks, Azure Synapse, Microsoft Fabric
    - BI & Analytics: Power BI, self-service BI, semantic modeling
    - CDP: Customer Data Platform architecture, enrichment pipelines
    - Cloud: Azure (primary), AWS
    - Streaming: Kafka, Kinesis, Spark Streaming, real-time pipelines
    - Data Governance: data catalog, lineage, MDM, quality frameworks

    Gen-AI & Agentic Requirements (Nice to Have, Becoming Critical):
    - Machine Learning: model building, deployment, ML pipelines
    - Gen-AI Agents & Skills: multi-agent pipelines, agentic workflows
    - RAG: Retrieval-Augmented Generation for data insights
    - Data Monetization: end-to-end framework with data as a base and AI on top to deliver customer value
    - 90% Agentic-Driven: vision to drive the organization toward agentic automation

    Client Technology Stack (Familiarity Expected):
    - MS Dynamics: internal client CRM
    - New Portal: external client portal (in-house, React + MS stack)
    - DocGen: document generation (MS stack)
    - EIS Insurance Platform: middle layer - policy/group admin
    - Earnix: third-party rating/pricing engine

    General Qualifications: 10+ years of enterprise data architecture experience; deep expertise with Azure data platforms. 8+ years in analytics/BI architecture and Power BI at enterprise scale.

    Preferred Qualifications: Azure Data Engineer or Solutions Architect Expert certification. Experience with Databricks, Synapse, and Microsoft Fabric. Strong knowledge of lakehouse patterns, streaming architectures, and data governance frameworks. Insurance industry experience is a plus. Experience building end-to-end frameworks with data as a foundation and AI on top. Hands-on with data, Gen-AI agents, RAG, and agentic workflows.

    Key Success Metrics:
    - Unified CDP: single customer 360 view across all brands/zones
    - Data Quality: 95%+ quality scores
    - Agentic Automation: progress toward 90% agentic-driven operations
    - Cross-sell/Up-sell: data-driven insights driving revenue
    - Compliance: SOC 2 and PCI DSS compliance maintained
    - Platform Adoption: self-service BI adoption across business units

    All of our jobs come with great benefits, including healthcare, parental leave, and opportunities for career advancement. Some offerings depend on your work location and can include the following: comprehensive medical, dental, and vision insurance; basic life insurance at no cost to the employee; company-paid short-term and long-term disability; 12 weeks of 100% paid parental leave; Health Savings Account (HSA); Flexible Spending Accounts (FSA); retirement savings plan; personal paid time off; paid holidays and a company-wide Wellness Day off; paid time off to volunteer at nonprofit organizations; pet-friendly office environment; commuter benefits; group pet insurance; on-the-job training and skills development; Employee Assistance Program (EAP).
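
    A minimal PySpark sketch of the Bronze-to-Silver hop in the medallion lakehouse this posting describes; storage paths, table, and columns are hypothetical:

```python
# Assumes a Spark session with Delta Lake and Azure storage credentials
# configured (e.g., Databricks or Synapse Spark).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw records landed as-is from source systems (hypothetical path)
bronze = spark.read.json(
    "abfss://lake@account.dfs.core.windows.net/bronze/claims/")

# Silver: typed, timestamped, and deduplicated - one row per claim
silver = (bronze
          .withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
          .withColumn("ingested_at", F.current_timestamp())
          .dropDuplicates(["claim_id"]))

(silver.write.format("delta").mode("overwrite")
       .save("abfss://lake@account.dfs.core.windows.net/silver/claims/"))
```
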
    $83k-113k yearly est. 2d ago
  • Cloud Data Architect

    Centraprise

    Data scientist job in Chicago, IL

    Must-Have Technical/Functional Skills:
    · The Cloud Data Architect in Data & Analytics will play a key role in driving solution architecture design, evaluation, and selection, buy-vs.-build decisions, and early-phase project estimates that contribute to the business case.
    · Partner with key roles (business and technology) to shape the Cloud Data & Analytics strategy; provide hands-on guidance to convert the strategy into architecture/solutions that are aligned to architecture standards and principles, leverage common solutions and services, and meet the financial targets (costs and benefits) and actionable plan.
    · Furthermore, shape large Data and Analytics opportunities for TCS. Prior experience in banking, financial services, and insurance combined with strong articulation skills, strategic-thinking exposure, leadership traits, strong business acumen, customer focus, and social skills is essential.
    · Must-have skills: Cloud Data Architect; Snowflake; demonstrated hands-on experience with Databricks (Delta Lake, Unity Catalog), Snowflake, Cortex AI, FiveTran, and dbt.
    · On the data front, hands-on in data quality, data governance, MDM, and data modeling (data products, data mesh, Data Vault).

    Roles & Responsibilities:
    · Strong experience in cloud data architecture and roadmaps, and in effectively engaging various client stakeholders on the architecture proposition.
    · Strong experience in solution architecture (Data & Analytics).
    · Must have prior experience building large-scale enterprise data architectures using commercial and/or open-source data analytics technologies. Must demonstrate knowledge of data architecture patterns.
    · Strong cloud architecture expertise (Azure, AWS, GCP) across large projects, with a minimum of 2-3 years of experience on cloud projects.
    · Strong solution implementation experience (platform engineering, developer experience).
    · Data modeling and architecting skills, including a strong foundation in data warehousing concepts, data normalization, dimensional data modeling, and a variety of data models including Data Vault.
    · Demonstrated experience across different data platforms such as Databricks (Delta Lake, Unity Catalog), Snowflake and Cortex AI, FiveTran, dbt, and data virtualization tools.
    · Demonstrated experience in big data, data warehouse, data analytics, and/or information management projects.
    · Lead transformation efforts for large opportunities, i.e., build strategy and multi-year transformation roadmaps for our customers.
    · Strong ability and experience in engaging with both technical and non-technical stakeholders.
    · Strong teaming, communication, and analytical skills.
    · Dedicated, resourceful, and able to work under pressure.
    · As part of presales support, should be able to shape large multi-million-value deals, both proactive and reactive, by driving TCS positioning with customers and building business for TCS.
    · Develop points of view, solution approaches, and pitches for new opportunities.
    · Define data and technology architecture and patterns on-prem/hybrid/cloud using native and other partners/tools, with recommendations that suit the customer's context and objectives.
    · On the data front, strong in data governance, MDM, and data modeling.
    · Drive and participate in architecture reviews with both internal and client teams.
    · Oversee data management, warehousing, integration, data movement, and delivery across assigned data initiatives.
    · Examine data, processes, and technologies to determine the current state and critical problems of the customer's data and analytics journey.
    · Research and make recommendations for training, equipment, and technology to improve data use.
    · Build connections with the client's senior leadership (CXOs/D&A heads) and work with them to develop a clear data and analytics strategy and architecture that supports their business goals and objectives.
    $83k-113k yearly est. 1d ago
  • Data Architect

    Mastek

    Data scientist job in Chicago, IL

    Job Title: Architect / Senior Data Engineer

    We are seeking a highly skilled Architect / Senior Data Engineer to design, build, and optimize our modern data ecosystem. The ideal candidate will have deep experience with AWS cloud services, Snowflake, and dbt, along with a strong understanding of scalable data architecture, ETL/ELT development, and data modeling best practices.

    Key Responsibilities: Architect, design, and implement scalable, reliable, and secure data solutions using AWS, Snowflake, and dbt. Develop end-to-end data pipelines (batch and streaming) to support analytics, machine learning, and business intelligence needs. Lead the modernization and migration of legacy data systems to cloud-native architectures. Define and enforce data engineering best practices, including coding standards, CI/CD, testing, and monitoring. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions. Optimize Snowflake performance through query tuning, warehouse sizing, and cost management. Establish and maintain data governance, security, and compliance standards across the data platform. Mentor and guide junior data engineers, providing technical leadership and direction.

    Required Skills & Qualifications: 8+ years of experience in data engineering, with at least 3+ years in a cloud-native data environment. Hands-on expertise in AWS services such as S3, Glue, Lambda, Step Functions, Redshift, and IAM. Strong experience with Snowflake: data modeling, warehouse design, performance optimization, and cost governance. Proven experience with dbt (data build tool): model development, documentation, and deployment automation. Proficient in SQL, Python, and ETL/ELT pipeline development. Experience with CI/CD pipelines, version control (Git), and workflow orchestration tools (Airflow, Dagster, Prefect, etc.). Familiarity with data governance and security best practices, including role-based access control and data masking. Strong understanding of data modeling techniques (Kimball, Data Vault, etc.) and data architecture principles.

    Preferred Qualifications: AWS Certification (e.g., AWS Certified Data Analytics - Specialty, Solutions Architect). Strong communication and collaboration skills, with a track record of working in agile environments.
    $83k-113k yearly est. 5d ago
  • Data Architect

    Krasan Consulting Services

    Data scientist job in Springfield, IL

    Krasan Consulting Services is a Chicago boutique management consulting and technology solutions integrator. As a certified BEP WBE with the State of Illinois and a certified WBE, MBE, and DBE recognized by the City of Chicago, Krasan provides complex technology solutions and services for clients in the public and commercial sectors. With decades of experience, Krasan specializes in end-to-end technology solutions, integration, and implementation services and is proud to enable the success of our customers.

    Role Title: Data Architect. Location: Remote - Springfield, IL.

    Job Description: Determine where the data resides. Create an architectural design for migration. Plan the migration path from start to finish. Collaborate with the IEPA during system design. Confirm data security and integrity during migration. Design and integrate the business and technical components of the EIMS, including future enhancements. Confirm the solution meets all IEPA requirements and integrates seamlessly with other environmental information systems. Provide technical leadership and guidance to development teams. Confirm system compliance with relevant regulations.

    Requirements: Minimum seven (7) years of experience in solution architecture and system integration. Knowledge of environmental information or permitting systems and interoperability. Experience with technical leadership and project management. Knowledge of CROMERR and FedRAMP solution design. Experience serving as the Offeror's business architect (or in a similar role) for at least two (2) full life cycle implementations of the offered solution end-to-end.
    $82k-111k yearly est. 5d ago
  • Senior Data Engineer

    Programmers.Io (3.8 company rating)

    Data scientist job in Chicago, IL

    Requires visa-independent candidates (OPT, CPT, and H1B holders will not be considered at this time).

    Responsibilities: Design, develop, and maintain scalable ETL pipelines using AWS Glue. Collaborate with data engineers and analysts to understand data requirements. Build and manage data extraction, transformation, and loading processes. Optimize and troubleshoot existing Glue jobs and workflows. Ensure data quality, integrity, and security throughout the ETL process. Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions. Maintain documentation of data workflows and processes. Stay updated with the latest AWS tools and best practices.

    Required Skills: Strong hands-on experience with AWS Glue, PySpark, and Python. Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet). Experience with data warehousing concepts and tools. Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash). Solid understanding of data modeling, data integration, and data management. Exposure to AWS Batch, Step Functions, and Data Catalogs.
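
    A minimal AWS Glue job skeleton of the kind this listing centers on: read a Data Catalog table, drop malformed rows, and write Parquet to S3. The database, table, and bucket names are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: job name arrives as a job parameter
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (hypothetical names)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Drop a malformed-record column and write curated Parquet back to S3
cleaned = dyf.drop_fields(["_corrupt_record"])
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```
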
    $81k-112k yearly est. 1d ago
  • Data Engineer

    Acuity Analytics

    Data scientist job in Chicago, IL

    The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation. Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.

    Responsibilities: Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability. Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data. Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs). Collaborate closely with product leads, quant developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications. Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs. Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets. Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls. Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.

    Requirements: 10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments. Expert-level SQL, including complex queries, schema design, and performance optimization. Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing. Strong Python capabilities for ETL/ELT development, data processing, and workflow automation. Experience integrating APIs and working with structured, semi-structured, and unstructured datasets. Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus. Experience with Domino or willingness to work within a Domino-based model environment. Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred. Ability to thrive in a rapid-prototyping environment with evolving requirements and close partnership with front-office teams.
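
    Since the listing calls out Snowflake streams and tasks, here is a minimal sketch of that pattern issued from Python via the Snowflake connector; the account, credentials, table names, and schedule are hypothetical:

```python
import snowflake.connector

# Hypothetical connection parameters
conn = snowflake.connector.connect(
    account="myorg-myaccount", user="etl_user", password="...",
    warehouse="ETL_WH", database="RESEARCH", schema="MARKETDATA",
)
cur = conn.cursor()

# Stream: capture row-level changes on the raw pricing table
cur.execute("CREATE OR REPLACE STREAM prices_stream ON TABLE raw_prices")

# Task: every 5 minutes, fold newly captured inserts into the curated table,
# but only run when the stream actually has data
cur.execute("""
    CREATE OR REPLACE TASK merge_prices
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('PRICES_STREAM')
    AS
      INSERT INTO curated_prices
      SELECT security_id, price, loaded_at
      FROM prices_stream
      WHERE METADATA$ACTION = 'INSERT'
""")

# Tasks are created suspended; resume to start the schedule
cur.execute("ALTER TASK merge_prices RESUME")
```
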
    $75k-100k yearly est. 4d ago
  • Lead Data Engineer

    Atos (4.7 company rating)

    Data scientist job in Nashville, TN

    Minimum 10+ years of experience in data engineering. Experience with at least one ETL/ELT development tool (e.g., Matillion, Nexla, Airflow, Azure WebJobs/ADF, or another leading ETL tool) is preferred. Extensive experience with data ingestion, reporting, and analytics for structured and semi-structured data on cloud platforms like Azure, AWS, or GCP. Experience working with REST and SOAP-based APIs, and with various file formats like CSV, PSV, JSON, or XML, to extract data for data pipelines. Knowledge of the retail/e-commerce domain is highly preferred. Collaborate with cross-functional teams to design and implement data pipelines. Understand the business objectives of the customers and translate their explicit and implicit requirements into optimal, scalable, high-performance cloud DW solutions. Must possess excellent programming skills with SQL. Experience using code/version management tools like GitHub and TFS in a continuous integration/continuous delivery (CI/CD) environment is preferred. Communicates effectively with business stakeholders and architects. Experience in Agile/Scrum/Scaled Agile application development using Jira, MS Project, or any other project management tool. Collaborate with business stakeholders to gather requirements and create comprehensive technical solutions and documentation.
    $64k-83k yearly est. 2d ago
  • Data Scientist - Operations Research

    United Airlines (4.6 company rating)

    Data scientist job in Chicago, IL

    Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

    **Description**

    United's Digital Technology team comprises many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

    **Job overview and responsibilities**

    Provides mathematical modeling and analysis services to support critical financial, operational, and/or strategic planning decisions, and engages in supporting project teams in value-added activities that generate practical solutions to complex business problems, explore new business alternatives, and drive improvement in business decisions.

    + Develops the approach and methods to define and solve management problems through quantitative analysis and analytical models using operations research, machine learning, and structured programming languages
    + Identifies, researches, or solves large complex problems using big data and operations research and machine learning principles
    + Leverages understanding of the business process to identify and implement operations research solutions that will result in significant bottom-line contributions
    + Builds and develops operations research/optimization mathematical model applications, and provides client support leveraging operations research knowledge
    + Participates in model design, prototyping, and model development for several efforts that occur simultaneously, and interfaces with product delivery groups
    + Raises concerns when the scope of analysis may not align with the time available, and can choose an appropriate scope of analysis to conduct, balancing ROI against time available
    + Designs the analytic plan and develops hypotheses to test; understands limitations of the analysis (what it can and cannot be used for)
    + Anticipates working-team questions about data and approach
    + Identifies solution-quality risks and on-time risks
    + Understands the business value, process, and expectations before focusing on the choice of a technical solution
    + Understands the intuition behind the numbers (i.e., does it make sense?)
    + Provides ongoing analytical services to client organizations
    + Communicates results to management and clients
    + Contributes deck content and builds the story for the deck with guidance to summarize findings
    + Develops and delivers presentations aligned with Ai standards
    + Speaks in a manner appropriate for the working team and their level +1
    + Keeps informed about the latest analytical methods and research in the operations research and analytics fields

    **Qualifications**

    **What's needed to succeed (Minimum Qualifications):**

    + Master's in Operations Research or another related quantitative discipline involving quantitative analysis and application of advanced operations research principles
    + Coursework or work experience with mathematical programming techniques
    + Coursework or work experience in model prototyping through use of optimization toolkit(s) including CPLEX, AMPL, or OPL
    + Coursework or work experience with C, C++, Java, R, Python, or another structured programming language
    + Good business, technical, verbal/written communication, presentation, and sales skills
    + Adaptability to a changing business environment
    + Good interpersonal skills and ability to interact with clients
    + Proficient with MS Office
    + Successful completion of an interview is required to meet job qualifications
    + Must be legally authorized to work in the United States for any employer without sponsorship
    + Reliable, punctual attendance is an essential function of the position

    **What will help you propel from the pack (Preferred Qualifications):**

    + Minor in computer science and/or formal advanced computer science coursework preferred
    + 1+ years of professional experience in an analytical field
    + 1+ years designing and programming/coding data structures for large-scale computer models
    + Experience with the Julia programming language
    + Knowledge of United/industry data sources
    + Structured programming for large-scale computer models
    + Demonstrated ability to create business value

    The base pay range for this role is $91,770.00 to $119,514.00. The base salary range/hourly rate listed is dependent on job-related factors such as experience, education, and skills. This position is also eligible for bonus and/or long-term incentive compensation awards. You may be eligible for the following competitive benefits: medical, dental, vision, life, accident & disability, parental leave, employee assistance program, commuter, paid holidays, paid time off, 401(k), and flight privileges.

    United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law. Equal Opportunity Employer - Minorities/Women/Veterans/Disabled/LGBT. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process and to perform crucial job functions. Please contact JobAccommodations@united.com to request accommodation.
    $91.8k-119.5k yearly 60d+ ago
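
As an illustration of the kind of mathematical programming the posting above calls for, here is a minimal linear-programming sketch in Python. It uses scipy.optimize.linprog as a freely available stand-in for the commercial toolkits the posting actually names (CPLEX, AMPL, OPL); the route, fleet, seat, and cost figures are hypothetical and chosen only to make the example run.

```python
# Minimal sketch of an operations-research style optimization model:
# choose daily flight counts for two hypothetical fleet types on one
# route, minimizing operating cost subject to demand and crew limits.
# scipy's linprog stands in for CPLEX/AMPL/OPL; all numbers are made up.
from scipy.optimize import linprog

# Decision variables: x[0], x[1] = daily flights for fleet A and fleet B.
# Objective: minimize operating cost in $k per flight.
cost = [18.0, 25.0]

# Inequality constraints in the form A_ub @ x <= b_ub:
#   seats must cover demand:  150*x0 + 220*x1 >= 1000  ->  -(...) <= -1000
#   crew hours available:       5*x0 +   7*x1 <= 40
A_ub = [[-150.0, -220.0],
        [   5.0,    7.0]]
b_ub = [-1000.0, 40.0]

# Each fleet type can fly at most 6 times per day.
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 6), (0, 6)])

# Note: this LP allows fractional flights; a real schedule would need an
# integer program, which is where a MILP solver such as CPLEX comes in.
print(res.x, res.fun)  # optimal flight counts and total cost
```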
  • Visiting Data Scientist - Gies College of Business

    University of Illinois Urbana-Champaign, IL

    Data scientist job in Urbana, IL

    Data Science Research Services

The Data Scientist will support the analytical, research, and technical needs of DSRS in helping faculty and researchers. This role is responsible for conducting data analysis, building datasets, developing models, supporting research projects, and contributing to internal tools that enhance data-driven decision-making within the Gies College of Business.

Duties & Responsibilities

* Data Analysis & Research Support
  a. Clean, prepare, and analyze datasets to support faculty research and DSRS initiatives.
  b. Conduct exploratory data analysis, statistical testing, and visualizations.
  c. Translate research questions into clear analytical approaches and deliverable outputs.
* Modeling & Methodology Development
  a. Build and evaluate statistical or machine learning models based on project needs.
  b. Develop reproducible workflows for model training, testing, and interpretation.
  c. Document methodologies and maintain organized, transparent code for future reference.
* Data Pipeline & Workflow Development
  a. Assist in building and maintaining robust data pipelines used across DSRS projects.
  b. Ensure data processing steps are consistent, scalable, and aligned with best practices.
  c. Collaborate with technical staff to integrate data processes into the DSRS framework.
* Internal Tools & Reporting
  a. Contribute to dashboards, internal applications, or automation processes used by DSRS.
  b. Generate clear summaries, reports, and visual outputs for faculty and internal stakeholders.
  c. Support development of templates, documentation, and shared resources.
  d. Support and contribute to infrastructure development underpinning DSRS services.
* Cross-Functional Collaboration
  a. Work closely with faculty, staff, project managers, and interns to deliver high-quality analytical work.
  b. Participate in project scoping, requirement gathering, and progress discussions.
  c. Provide technical guidance or support to junior team members when needed.
  d. Mentor and train students in data science applications.
* Miscellaneous
  a. Provide ad-hoc assistance as and when required.
  b. Support additional tasks or emerging needs that fall outside the primary responsibilities, as the role evolves with DSRS.

Additional Information

This position supports both research-based and operational projects within the college. Work may include collaboration with multiple faculty members, administrative units, and cross-functional teams. The role requires flexibility, professionalism, and adherence to university data, privacy, and security standards.

Minimum Qualifications

1. Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Information Systems, or a related field.
2. Proficiency in Python and commonly used data science libraries.
3. Experience with data cleaning, analysis, and visualization.
4. Experience applying statistical or machine learning methods to real datasets.

Preferred Qualifications

1. Experience working with large or research datasets.
2. Experience with large computing systems.
3. Experience with SQL or relational databases.
4. Familiarity with cloud tools or workflow automation.
5. Experience supporting academic research or collaborative analytic projects.
6. Familiarity with emerging AI/ML techniques (e.g., embeddings, LLM-based workflows).

Knowledge, Skills and Abilities

Strong analytical and problem-solving skills. Ability to write clean, well-documented, and reproducible code. Strong communication skills and ability to explain technical concepts clearly.

Appointment Information

This is a 100% full-time Academic Professional position, appointed on a 12-month basis. The expected start date is as soon as possible after 2/16/2026. Salary is competitive and commensurate with qualifications and experience, while also considering internal equity. The budgeted salary range for the position is $90,000 to $100,000. Sponsorship for work authorization is not available for this position. For more information about Gies Business, visit ******************************************************

Application Procedures & Deadline Information

Applications must be received by 6:00 pm (Central Time) on January 15, 2026. Apply for this position using the Apply Now button at the top or bottom of this posting, and upload your cover letter, resume (months and years of employment must be included), and names/contact information for three professional references. Please remember to include specific information on how you meet the qualifications of the position in your resume, including all relevant experience. Applications not submitted through ************************* will not be considered. For further information about this specific position, please contact *******************. For questions regarding the application process, please contact ************.

This position is intended to be eligible for benefits, including health, dental, vision, and life insurance, a retirement plan, paid time off, and tuition waivers for employees and dependents. The University of Illinois System is an equal opportunity employer, including but not limited to disability and/or veteran status, and complies with all applicable state and federal employment mandates. Please visit Required Employment Notices and Posters to view our non-discrimination statement and find additional information about required background checks, sexual harassment/misconduct disclosures, and employment eligibility review through E-Verify. Applicants with disabilities are encouraged to apply and may request a reasonable accommodation under the Americans with Disabilities Act (2008) to complete the application and/or interview process. Accommodations may also be requested on the basis of pregnancy, childbirth, and related conditions, or religion. Requests may be submitted through the reasonable accommodations portal, or by contacting the Accessibility & Accommodations Division of the Office for Access and Equity at ************, or by emailing accessibility@illinois.edu.

Requisition ID: 1034068
Job Category: Research
Apply at: *************************
    $90k-100k yearly Easy Apply 7d ago
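
The Gies posting above centers on cleaning data, applying statistical or machine learning methods, and keeping workflows reproducible. The sketch below shows one common way to structure such a workflow in Python with pandas and scikit-learn; the file name, column names, and model choice are hypothetical illustrations, not part of the posting.

```python
# Minimal sketch of a reproducible clean -> split -> model -> report
# workflow of the kind the posting describes. The dataset, columns,
# and model here are hypothetical stand-ins.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# 1. Load and clean: drop duplicates and handle missing values explicitly.
df = pd.read_csv("survey_responses.csv")          # hypothetical file
df = df.drop_duplicates().dropna(subset=["outcome"])
df["age"] = df["age"].fillna(df["age"].median())

# 2. Split features/target with a fixed seed so results are reproducible.
X = df[["age", "tenure_years", "score"]]          # hypothetical columns
y = df["outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# 3. A Pipeline couples preprocessing and model, so the exact workflow
#    can be re-run or handed to another researcher unchanged.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# 4. Evaluate and report in a form collaborators can read directly.
print(classification_report(y_test, model.predict(X_test)))
```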

Learn more about data scientist jobs

How much does a data scientist earn in Fort Campbell North, KY?

The average data scientist in Fort Campbell North, KY earns between $55,000 and $100,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Fort Campbell North, KY

$74,000

What are the biggest employers of Data Scientists in Fort Campbell North, KY?

The biggest employers of Data Scientists in Fort Campbell North, KY are:
  1. General Dynamics