Data Scientist
Data scientist job in Wisconsin
We are seeking a Data Scientist to deliver predictive analytics and actionable insights that enhance financial forecasting and supply chain performance. This role will partner with business leaders and analysts to design models that inform strategic decisions. You will work primarily within Microsoft Fabric, leveraging Delta Lake/OneLake and Medallion Architecture (Bronze-Silver-Gold) to build scalable solutions and lay the groundwork for future AI-driven capabilities.
This is a full-time, direct-hire role that will be onsite in Mendota Heights, MN. Local candidates only. Target salary is between $120,000 and $140,000.
Candidates must be eligible to work in the United States without sponsorship both now and in the future. No C2C or third parties.
Key Responsibilities
Develop and deploy machine learning models for cost modeling, sales forecasting, and long-term work order projections.
Analyze large, complex datasets to uncover trends, anomalies, and opportunities for operational improvement.
Collaborate with finance, supply chain, and business teams to translate challenges into data-driven solutions.
Work with engineering teams to create robust pipelines for data ingestion, transformation, and modeling using cloud-native tools.
Utilize Azure services (Data Lake, Synapse, ML Studio) to operationalize models and manage workflows.
Present insights through clear visualizations and executive-level presentations.
Contribute to governance standards, audit trails, and model documentation.
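The Medallion Architecture (Bronze-Silver-Gold) mentioned above can be illustrated with a minimal, self-contained sketch. This is not the employer's actual stack; it shrinks the pattern down to SQLite with hypothetical table names and columns: raw data lands in Bronze untouched, Silver cleans and deduplicates it, and Gold holds business-level aggregates ready for reporting.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Bronze: raw ingested records, kept as-is (including duplicates and bad rows).
cur.execute("CREATE TABLE bronze_orders (order_id TEXT, amount TEXT, region TEXT)")
cur.executemany(
    "INSERT INTO bronze_orders VALUES (?, ?, ?)",
    [("A1", "100.0", "WI"), ("A1", "100.0", "WI"), ("A2", "bad", "MN"), ("A3", "50.5", "MN")],
)

# Silver: cleaned and deduplicated, with types enforced.
cur.execute("CREATE TABLE silver_orders (order_id TEXT PRIMARY KEY, amount REAL, region TEXT)")
cur.execute(
    """
    INSERT INTO silver_orders
    SELECT DISTINCT order_id, CAST(amount AS REAL), region
    FROM bronze_orders
    WHERE amount GLOB '[0-9]*'
    """
)

# Gold: business-level aggregate ready for a report or dashboard.
cur.execute(
    "CREATE TABLE gold_region_totals AS "
    "SELECT region, SUM(amount) AS total FROM silver_orders GROUP BY region"
)
totals = dict(cur.execute("SELECT region, total FROM gold_region_totals ORDER BY region"))
print(totals)
```

In a real Fabric/Delta Lake deployment the same three layers would be Lakehouse tables rather than SQLite tables, but the contract between layers is the same idea.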
Qualifications
Education & Certifications
Bachelor's degree required; Master's in Computer Science, IT, or related field preferred.
Cloud certifications (Azure or similar) are a plus.
Experience & Skills
5+ years as a Data Scientist or similar role.
Hands-on experience with Microsoft Fabric, Azure Synapse, and related cloud technologies.
Proficiency in Python, R, SQL, and visualization tools (Power BI, Tableau).
Strong background in financial modeling, cost allocation, and supply chain analytics.
Familiarity with Oracle and Salesforce UI navigation is helpful.
Excellent business acumen and ability to communicate complex concepts to senior leadership.
Strong problem-solving skills and ability to design scalable solutions.
Preferred
Experience with Azure Machine Learning.
Knowledge of Jitterbit is a plus.
All qualified applicants will receive consideration for employment without regard to race, color, national origin, age, ancestry, religion, sex, sexual orientation, gender identity, gender expression, marital status, disability, medical condition, genetic information, pregnancy, or military or veteran status. We consider all qualified applicants, including those with criminal histories, in a manner consistent with state and local laws, including the California Fair Chance Act, City of Los Angeles' Fair Chance Initiative for Hiring Ordinance, and Los Angeles County Fair Chance Ordinance. For unincorporated Los Angeles county, to the extent our customers require a background check for certain positions, the Company faces a significant risk to its business operations and business reputation unless a review of criminal history is conducted for those specific job positions.
Data Analyst
Data scientist job in Warren, MI
The main function of a Data Analyst is to coordinate changes to computer databases, and to test and implement the database, applying knowledge of database management systems.
Job Responsibilities:
• Work with senior management, technical, and client teams to determine data requirements, business data implementation approaches, and best practices for advanced data manipulation, storage, and analysis strategies
• Write and code logical and physical database descriptions and specify identifiers of the database to the management system, or direct others in coding descriptions
• Design, implement, automate and maintain large scale enterprise data ETL processes
• Modify existing databases and database management systems and/or direct programmers and analysts to make changes
• Test programs or databases, correct errors and make necessary modifications
Qualifications:
• Bachelor's degree in a technical field such as computer science, computer engineering or related field required
• 2-4 years of applicable experience required
• Experience with database technologies
• Knowledge of the ETL process
• Knowledge of at least one scripting language
• Strong written and oral communication skills
• Strong troubleshooting and problem solving skills
• Demonstrated history of success
• Desire to work with data and help businesses make better data-driven decisions
Data Analyst
Data scientist job in Troy, MI
Data Analyst & Backend Developer with AI
The Digital Business Team develops promising digital solutions for global products and processes. It aims to organize Applus+ Laboratories' information, making it useful, fast, and reliable for both the Applus+ Group and its clients. The team's mission is to be recognized as the most digital, innovative, and customer-oriented company, reducing digital operations costs while increasing the value and portfolio of services.
We are looking for a Data Science / AI Engineer to join our Digital team and contribute to the development of evolving data products and applications.
Responsibilities:
Collect, organize, and analyze structured and unstructured data to extract actionable insights, generate code, train models, validate results, and draw meaningful conclusions.
Demonstrate advanced proficiency in Power BI, including data modeling, DAX, the creation of interactive dashboards, and connecting to diverse data sources.
Possess a strong mathematical and statistical foundation, with experience in numerical methods and a wide range of machine learning algorithms.
Exhibit hands-on experience in Natural Language Processing (NLP) and foundation models, with a thorough understanding of transformers, tokenization, and encoder-decoder processes.
Apply Explainable AI (XAI) techniques using Python libraries such as SHAP, LIME, or similar tools.
Develop and integrate AI models into backend systems utilizing frameworks such as FastAPI, Flask, or Django.
Demonstrate logical and organized thinking with attention to detail, capable of identifying data or code anomalies and effectively communicating findings through clear documentation and well-commented code.
Maintain a proactive mindset for optimizing analytical workflows and continuously improving models and tools.
Exhibit creativity and innovation in problem-solving, with a practical and results-oriented approach.
Technical Requirements:
Demonstrated experience coding in Python, specifically on data science projects using the most common ML libraries.
Previous experience working with generative AI and explainable AI is welcome.
Expertise in Power BI: data modeling, DAX, dashboards, integration with multiple sources.
Proficient in SQL for querying, transforming, and optimizing databases.
Experienced in Python for data analysis, automation, and machine learning.
Knowledgeable in data analytics, KPIs, business intelligence practices.
Skilled in translating business requirements into insights and visualizations.
Our current tech stack is:
Power BI (DAX)
SQL
Python
Commonly used ML/AI libraries.
Azure AI (OpenAI)
Education
Degree in Computer Science, Software Engineering, Applied Mathematics, or a related field.
A master's degree in data science or AI Engineering is an advantage.
Languages
English
If you are passionate about analytics, advanced AI algorithms, and challenging yourself, this is the right job for you!
AWS Data Architect
Data scientist job in Neenah, WI
We are seeking a highly skilled AWS Data Architect to design, build, and optimize cloud-based data platforms that enable scalable analytics and business intelligence. The ideal candidate will have deep expertise in AWS cloud services, data modeling, data lakes, ETL pipelines, and big data ecosystems.
Key Responsibilities:
Design and implement end-to-end data architectures on AWS (data lakes, data warehouses, and streaming solutions).
Define data ingestion, transformation, and storage strategies using AWS native services (Glue, Lambda, EMR, S3, Redshift, Athena, etc.).
Architect ETL/ELT pipelines and ensure efficient, secure, and reliable data flow.
Collaborate with data engineers, analysts, and business stakeholders to translate business needs into scalable data solutions.
Establish data governance, security, and compliance frameworks following AWS best practices (IAM, KMS, Lake Formation).
Optimize data systems for performance, cost, and scalability.
Lead data migration projects from on-prem or other clouds to AWS.
Provide technical guidance and mentorship to data engineering teams.
Required Skills & Qualifications
10+ years of experience in data architecture, data engineering, or cloud architecture.
Strong hands-on experience with AWS services:
Storage & Compute: S3, EC2, Lambda, ECS, EKS
Data Processing: Glue, EMR, Kinesis, Step Functions
Data Architect
Data scientist job in Detroit, MI
Millennium Software is looking for a Data Architect for one of its direct clients based in Michigan. It is an onsite role.
Title: Data Architect
Tax term: W2 only, no C2C
Description:
All of the below are must-haves:
Senior Data Architect with 12+ years of experience in Data Modeling.
Develop conceptual, logical, and physical data models.
Experience with GCP Cloud
Data Engineer
Data scientist job in Madison, WI
About FAC Services
Want to build your career helping those who build the world?
At FAC Services, we handle the business side so architecture, engineering, and construction firms can focus on shaping the future. Our trusted, high-quality solutions empower our partners, and our people, to achieve excellence with integrity, precision, and a personal touch.
Job Purpose
FAC Services is investing in a modern data platform to enable trustworthy, timely, and scalable data for analytics, operations, and product experiences. The Data Engineer will design, build, and maintain core data pipelines and models for Power BI reporting, application programming interfaces (APIs), and downstream integrations. This role partners closely with Infrastructure, Quality Assurance (QA), the Database Administrator, and application teams to deliver production-grade, automated data workflows with strong reliability, governance, observability, and Infrastructure as Code (IaC) for resource orchestration.
Primary Responsibilities:
Data Architecture & Modeling
Design and evolve canonical data models, marts, and lake/warehouse structures to support analytics, APIs, and applications.
Establish standards for naming, partitioning, schema evolution, and Change Data Capture (CDC).
Pipeline Development (ETL/ELT)
Build resilient, testable pipelines across Microsoft Fabric Data Factory, notebooks (Apache Spark), and Lakehouse tables for batch and streaming workloads.
Design Lakehouse tables (Delta/Parquet) in OneLake. Optimize Direct Lake models for Power BI.
Implement reusable ingestion and transformation frameworks emphasizing modularity, idempotency, and performance.
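Idempotency, named in the bullet above, means a pipeline step can be replayed without changing the result; a failed batch can simply be rerun. A minimal sketch of the idea, with a plain Python dict standing in for a Lakehouse/Delta table and an assumed record shape (this is illustrative, not the FAC Services framework):

```python
def upsert(table: dict, batch: list, key: str = "id") -> dict:
    """Merge a batch of records keyed on `key`; last write per key wins.
    Replaying the same batch leaves the table unchanged (idempotent)."""
    merged = dict(table)
    for record in batch:
        merged[record[key]] = record
    return merged

# Hypothetical batch with a late correction for id=1.
batch = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}, {"id": 1, "qty": 9}]
once = upsert({}, batch)
twice = upsert(once, batch)  # rerunning the batch is a no-op
assert once == twice
```

In Fabric or Spark this same property is usually obtained with a keyed `MERGE INTO` against a Delta table rather than an in-memory dict.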
Integration & APIs
Engineer reliable data services and APIs to feed web applications, Power BI, and partner integrations.
Publish consumer-facing data contracts (Swagger) and implement change notifications (webhooks/eventing).
Use semantic versioning for breaking changes and maintain a deprecation policy for endpoints and table schemas.
Ensure secure connectivity and least-privilege access in coordination with the DBA.
Infrastructure as Code (IaC) - Resource Orchestration
Resource Orchestration & Security: Author and maintain IaC modules to deploy and configure core resources.
Use Bicep/ARM (and, where appropriate, Terraform/Ansible) with CI/CD to promote changes across environments.
DevOps, CI/CD & Testing
Own CI/CD pipelines (Git-based promotion) for data code, configurations, and infrastructure. Practice test-driven development with QA (unit, integration, regression) and embed data validations throughout pipelines; collaborate with the Data Quality Engineer to maximize coverage.
Observability & Reliability
Instrument pipelines and datasets for lineage, logging, metrics, and alerts; define Service Level Agreements (SLAs) for data freshness and quality.
Perform performance tuning (e.g., Spark optimization, partition strategies) and cost management across cloud services.
Data Quality & Governance
Implement rules for deduplication, reconciliation, and anomaly detection across environments (Microsoft Fabric Lakehouse and Power BI).
Contribute to standards for sensitivity labels, Role-Based Access Control (RBAC), auditability, and secure data movement aligned with Infrastructure and Security.
Collaboration & Leadership
Work cross-functionally with Infrastructure, QA, and application teams; mentor peers in modern data engineering practices; contribute to documentation and knowledge sharing. Hand off to the Data Quality Engineer for release gating; coordinate with the Database Administrator on backup/restore posture, access roles, High Availability / Disaster Recovery (HA/DR), and source CDC readiness.
Qualifications
To perform this job successfully, an individual must be able to perform each primary duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required.
Experience (Required)
3+ years designing and operating production ETL/ELT pipelines and data models.
Apache Spark (Fabric notebooks, Synapse Spark pools, or Databricks).
Advanced T-SQL and Python; experience with orchestration, scheduling, and dependency management.
Azure Event Hubs (or Kafka) for streaming; Change Data Capture (CDC)
Infrastructure as Code (Bicep/ARM/Terraform); CI/CD (Azure DevOps)
API design for data services (REST/OpenAPI), including versioning, pagination, error handling, authentication, and authorization.
Experience (Preferred)
Lakehouse design patterns on Microsoft Fabric; optimization of Power BI with Direct Lake models.
Kusto Query Language (KQL), Eventstream and Eventhouse familiarity.
Experience with lineage/metadata platforms and cost governance.
GCP Data Architect (only W2 Position - No C2C Accepted) 11.18.2025
Data scientist job in Dearborn, MI
Description: STG is an SEI CMMI Level 5 company with several Fortune 500 and State Government clients. STG has an opening for a GCP Data Architect.
Please note that this project assignment is with our own direct clients. We do not go through any vendors. STG only does business with direct end clients. This is expected to be a long-term position. STG will provide immigration and permanent residency sponsorship assistance to those candidates who need it.
Job Description:
Employees in this job function are responsible for designing, building and maintaining reliable, efficient and scalable data architecture and data models that serve as a foundation for all data solutions. They also closely collaborate with senior leadership and IT teams to ensure alignment of data strategy with overall business goals.
Key Responsibilities:
Align data strategy to business goals to support a mix of business strategy, improved decision-making, operations efficiency and risk management
Ensure data assets are available, consumable and secure for end users across the enterprise - applications, platforms and infrastructure - within the confines of enterprise and security architecture
Design and build reliable, efficient and scalable data architecture to be used by the organization for all data solutions
Implement and maintain scalable architectural data patterns, solutions and tooling to support business strategy
Design, build, and launch shared data services and APIs to support and expose data-driven solutions in line with enterprise architecture standards
Research and optimize data architecture technologies to enhance and support enterprise technology and data strategy
Skills Required:
PowerBuilder, PostgreSQL, GCP, BigQuery
Senior Specialist Exp.: 10+ years in IT; 7+ years in concentration
Must have experience presenting technical material to business users
Must be able to envision larger strategies and anticipate where possible synergies can be realized
Experience acting as the voice of the architecture/data model and defending its relevancy to ensure adherence to its principles and purpose
Data Modeling and Design: Develop conceptual, logical, and physical data models for business intelligence, analytics, and reporting solutions. Transform requirements into scalable, flexible, and efficient data structures that can support advanced analytics.
Requirement Analysis: Collaborate with business analysts, stakeholders, and subject matter experts to gather and interpret requirements for new data initiatives. Translate business questions into data models that can answer these questions.
Data Integration: Work closely with data engineers to integrate data from multiple sources, ensuring consistency, accuracy, and reliability. Map data flows and document relationships between datasets.
Database Architecture: Design and optimize database schemas using the medallion architecture which includes relational, star schema and denormalized data sets for BI and ML data consumers.
Metadata Management: Partner with the data governance team so that detailed documentation on data definitions, data lineage, and data quality statistics is available to data consumers.
Data Quality Assurance: Establish master data management data modeling so that customer, provider, and other party data is consolidated into a single version of the truth, with the history of how it was consolidated preserved.
Collaboration and Communication: Serve as a bridge between technical teams and business units, clearly communicating the value and limitations of various data sources and structures.
Continuous Improvement: Stay abreast of emerging trends in data modeling, analytics platforms, and big data technologies. Recommend enhancements to existing data models and approaches.
Performance Optimization: Monitor and optimize data models for query performance and scalability. Troubleshoot and resolve performance bottlenecks in collaboration with database administrators.
Governance and Compliance: Ensure that data models and processes adhere to regulatory standards and organizational policies regarding privacy, access, and security.
GCP Data Architect is based in Dearborn, MI. A great opportunity to experience the corporate environment leading personal career growth.
Resume Submittal Instructions: Interested/qualified candidates should email their word formatted resumes to Vasavi Konda - vasavi.konda(.@)stgit.com and/or contact @(Two-Four-Eight) Seven- One-Two - Six-Seven-Two-Five (@*************. In the subject line of the email please include: First and Last Name: GCP Data Architect.
For more information about STG, please visit us at **************
Sincerely,
Vasavi Konda| Recruiting Specialist
“Opportunities don't happen, you create them.”
Systems Technology Group (STG)
3001 W. Big Beaver Road, Suite 500
Troy, Michigan 48084
Phone: @(Two-Four-Eight) Seven- One-Two - Six-Seven-Two-Five: @************(O)
Email: vasavi.konda(.@)stgit.com
GCP Data Engineer
Data scientist job in Dearborn, MI
Experience Required: 8+ years
Work Status: Hybrid
We're seeking an experienced GCP Data Engineer who can build a cloud analytics platform to meet ever-expanding business requirements with speed and quality using lean Agile practices. You will analyze and manipulate large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in the Google Cloud Platform (GCP). You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications to GCP. Experience with large-scale solutions and operationalization of data warehouses, data lakes, and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design the right solutions with an appropriate combination of GCP and 3rd-party technologies for deploying on Google Cloud Platform.
You will:
Work in a collaborative environment, including pairing and mobbing with other cross-functional engineers
Work on a small agile team to deliver working, tested software
Work effectively with fellow data engineers, product owners, data champions and other technical experts
Demonstrate technical knowledge/leadership skills and advocate for technical excellence
Develop exceptional Analytics data products using streaming and batch ingestion patterns in the Google Cloud Platform with solid Data Warehouse principles
Be the Subject Matter Expert in Data Engineering and GCP tool technologies
Skills Required:
BigQuery
Skills Preferred:
N/A
Experience Required:
In-depth understanding of Google's product technology (or other cloud platform) and underlying architectures
5+ years of analytics application development experience required
5+ years of SQL development experience
3+ years of Cloud experience (GCP preferred) with solutions designed and implemented at production scale
Experience working in GCP-based Big Data deployments (Batch/Real-Time) leveraging Terraform, BigQuery, Google Cloud Storage, Pub/Sub, Dataflow, Dataproc, Airflow, etc.
2+ years of professional development experience in Java or Python, and Apache Beam
Extracting, loading, transforming, cleaning, and validating data
Designing pipelines and architectures for data processing
1+ year of designing and building CI/CD pipelines
Experience Preferred:
Experience building Machine Learning solutions using TensorFlow, BigQuery ML, AutoML, Vertex AI
Experience building solution architecture, provisioning infrastructure, and delivering secure and reliable data-centric services and applications in GCP
Experience with Dataplex is preferred
Experience with development ecosystems such as Git, Jenkins and CI/CD
Exceptional problem-solving and communication skills
Experience working with dbt/Dataform
Experience working with Agile and Lean methodologies
Team player with attention to detail
Performance tuning experience
Education Required:
Bachelor's Degree
Education Preferred:
Master's Degree
Additional Safety Training/Licensing/Personal Protection Requirements:
Additional Information:
***POSITION IS HYBRID***
Primary Skills Required:
Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment
Implement methods for automation of all parts of the pipeline to minimize labor in development and production
Experience analyzing complex data, organizing raw data, and integrating massive datasets from multiple data sources to build subject areas and reusable data products
Experience working with architects to evaluate and productionalize appropriate GCP tools for data ingestion, integration, presentation, and reporting
Experience working with all stakeholders to formulate business problems as technical data requirements, identifying and implementing technical solutions while ensuring key business drivers are captured in collaboration with product management; this includes designing and deploying a pipeline with automated data lineage
Identify, develop, evaluate, and summarize Proofs of Concept to prove out solutions; test and compare competing solutions and report out a point of view on the best solution
Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine
Additional Skills Preferred:
Strong drive for results and ability to multi-task and work independently
Self-starter with proven innovation skills
Ability to communicate and work with cross-functional teams and all levels of management
Demonstrated commitment to quality and project timing
Demonstrated ability to document complex systems
Experience in creating and executing detailed test plans
Additional Education Preferred:
GCP Professional Data Engineer Certified
In-depth software engineering knowledge
AZCO Data Scientist - IT (Appleton, WI)
Data scientist job in Appleton, WI
The Staff Data Scientist plays a critical role in leveraging data to drive business insights, optimize operations, and support strategic decision-making. You will be responsible for designing and implementing advanced analytical models, developing data pipelines, and applying statistical and machine learning techniques to solve complex business challenges. This position requires a balance of technical expertise, business acumen, and communication skills to translate data findings into actionable recommendations.
+ Develop and apply data solutions for cleansing data to remove errors and ensure consistency.
+ Perform analysis of data to discover information, business value, patterns, and trends that guide development of asset business solutions.
+ Gather data, find patterns and relationships and create prediction models to evaluate client assets.
+ Conduct research and apply existing data science methods to business line problems.
+ Monitor client assets and perform predictive and root cause analysis to identify adverse trends; choose best fit methods, define algorithms, and validate and deploy models to achieve desired results.
+ Produce reports and visualizations to communicate technical results and interpretation of trends; effectively communicate findings and recommendations to all areas of the business.
+ Collaborate with cross-functional stakeholders to assess needs, provide assistance and resolve problems.
+ Translate business problems into data science solutions.
+ Perform other duties as assigned
+ Comply with all policies and standards
**Qualifications**
+ Bachelor's Degree in Analytics, Computer Science, Information Systems, Statistics, Math, or a related field from an accredited program and 4 years of related experience required; applicable experience may be substituted for the degree requirement
+ Experience in data mining and predictive analytics.
+ Strong problem-solving skills, analytical thinking, attention to detail and hypothesis-driven approach.
+ Excellent verbal/written communication, and the ability to present and explain technical concepts to business audiences.
+ Proficiency with data visualization tools (Power BI, Tableau, or Python libraries).
+ Experience with Azure Machine Learning, Databricks, or similar ML platforms.
+ Expert proficiency in Python with pandas, scikit-learn, and statistical libraries.
+ Advanced SQL skills and experience with large datasets.
+ Experience with predictive modeling, time series analysis, and statistical inference.
+ Knowledge of A/B testing, experimental design, and causal inference.
+ Familiarity with computer vision for image/video analysis.
+ Understanding of NLP techniques for document processing.
+ Experience with optimization algorithms and operations research techniques preferred.
+ Knowledge of machine learning algorithms, feature engineering, and model evaluation.
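The A/B testing and statistical inference knowledge listed above can be made concrete with a small worked example: a two-proportion z-test comparing conversion rates between a control and a variant. The counts below are invented for illustration, not company data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# Hypothetical experiment: 5.0% vs 6.5% conversion on 2,400 users each.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

With p below the conventional 0.05 threshold, this made-up variant would be judged a significant lift; proper experimental design (randomization, pre-registered sample size) still matters more than the arithmetic.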
This job posting will remain open a minimum of 72 hours and on an ongoing basis until filled.
EEO/Disabled/Veterans
**Job** Information Technology
**Primary Location** US-WI-Appleton
**Schedule:** Full-time
**Travel:** Yes, 5% of the Time
**Req ID:** 253790
Data Scientist III
Data scientist job in Pontiac, MI
Job Description
Ready to join thousands of talented team members who are making the dream of home ownership possible for more Americans? It's all happening on UWM's campus, where our award-winning workplace packs plenty of perks and amenities that keep the atmosphere buzzing with energy and excitement.
It's no wonder that out of our six pillars, People Are Our Greatest Asset is number one. It's at the very heart of how we treat each other, our clients and our community. Whether it's providing elite client service or continuously striving to improve, our pillars provide a pathway to a more successful personal and professional life.
From the team member that holds a door open to the one that helps guide your career, you'll feel the encouragement and support on day one. No matter your race, creed, gender, age, sexual orientation and ethnicity, you'll be welcomed here. Accepted here. And empowered to Be You Here.
More reasons you'll love working here include:
Paid Time Off (PTO) after just 30 days
Additional parental and maternity leave benefits after 12 months
Adoption reimbursement program
Paid volunteer hours
Paid training and career development
Medical, dental, vision and life insurance
401k with employer match
Mortgage discount and area business discounts
Free membership to our large, state-of-the-art fitness center, including exercise classes such as yoga and Zumba, various sports leagues and a full-size basketball court
Wellness area, including an in-house primary-care physician's office, full-time massage therapist and hair salon
Gourmet cafeteria featuring homemade breakfast and lunch
Convenience store featuring healthy grab-and-go snacks
In-house Starbucks and Dunkin
Indoor/outdoor café with Wi-Fi
Responsibilities
Work with stakeholders throughout the organization to identify opportunities for leveraging company data to increase efficiency or improve the bottom line.
Analyze UWM data sets to identify areas of optimization and improvement of business strategies.
Assess the effectiveness and accuracy of new data sources and data gathering techniques.
Develop custom data models, algorithms, simulations, and predictive modeling to support insights and opportunities for improvement.
Develop A/B testing framework and test model quality.
Coordinate with different business areas to implement models and monitor outcomes.
Develop processes and tools to monitor and analyze model performance and data accuracy
Qualifications
Must Have
Bachelor's degree in Finance, Statistics, Economics, Data Science, Computer Science, Engineering or Mathematics, or related field
5+ years of experience in statistical analysis, and/or machine learning
5+ years of experience with one or more of the following tools: machine learning (Python, MATLAB), data wrangling skills/tools (Hadoop, Teradata, SAS, or other), statistical analysis (Python, R, SAS) and/or visualization skills/tools (PowerBI, Tableau, Qlikview)
3+ years of experience collaborating with teams (either internal or external) to develop analytics solutions
Strong problem solving skills
Strong communication skills (interpersonal, written, and presentation)
Nice to Have
Master's degree in Finance, Statistics, Economics, Data Science, Computer Science, Mathematics or related field
3+ years of experience with R, SQL, Tableau, MATLAB, Python
3+ years of professional experience in machine learning, data mining, statistical analysis, modeling, optimization
Experience in Accounting, Finance, and Economics
Data Scientist
Data scientist job in Detroit, MI
Please review and apply for this position through the QCI system using the link below (copy and paste): http://tinyurl.com/nzn6msu *You can apply through Indeed using mobile devices with this link.
Job Description
The Data Scientist will delve into the recesses of large data sets of structured, semi-structured, and unstructured data to discover hidden knowledge about our business and develop methods to leverage that knowledge within our line of business. The successful candidate will combine strengths in mathematics and applied statistics, computer science, visualization capabilities, and a healthy sense of exploration and knowledge acquisition. You must have USA/Canadian Citizenship or your Green Card/EAD.
Responsibilities
Work closely with various teams across the company to identify and solve business challenges utilizing large structured, semi-structured, and unstructured data in a distributed processing environment.
Develop predictive statistical, behavioral or other models via supervised and unsupervised machine learning, statistical analysis, and other predictive modeling techniques.
Drive the collection of new data and the refinement of existing data sources.
Analyze and interpret the results of product experiments.
Collaborate with the engineering and product teams to develop and support our internal data platform to support ongoing analyses.
Requirements
M.S. or Ph.D. in a relevant technical field (e.g., applied mathematics, statistics, physics, computer science, operations research), or 3+ years of experience in a relevant role.
Extensive experience solving analytics problems using quantitative approaches.
A proven passion for generating insights from data.
Strong knowledge of statistical methods generally, and particularly in the areas of modeling and business analytics.
Comfort manipulating and analyzing complex, high-volume, high-dimensionality data from varying sources.
Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner.
Fluency with at least one programming language such as Python, Java, or C/C++.
Familiarity with relational databases and SQL.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Lead Data Scientist
Data scientist job in Detroit, MI
OneMagnify is a global performance marketing organization working at the intersection of brand marketing, technology, and analytics. The Company's core offerings accelerate business, amplify real-time results, and help set their clients apart from their competitors. OneMagnify partners with clients to design, implement and manage marketing and brand strategies using analytical and predictive data models that provide valuable customer insights to drive higher levels of sales conversion.
OneMagnify's commitment to employee growth and development extends far beyond typical approaches. We take great pride in fostering an environment where each of our 700+ colleagues can thrive and achieve their personal best. OneMagnify has been recognized as a Top Workplace, Best Workplace and Cool Workplace in the United States for 10 consecutive years and recently was recognized as a Top Workplace in India.
You'll be joining our RXA Data Science team, a group dedicated to leveraging advanced analytics, predictive modeling, and machine learning to drive smarter marketing and business decisions. As Lead Data Scientist, you will play a critical role in delivering impactful, data-driven solutions. In this role, you will bridge strategy and execution, translating complex business problems into analytically sound solutions while ensuring technical excellence, timely delivery, and cross-functional collaboration.
The Lead Data Scientist is responsible for leading the execution of end-to-end data science projects, from scoping and modeling to operationalization and insight delivery. You will partner with clients, internal teams, and technical stakeholders to develop and deploy scalable solutions that drive measurable business value.
What you'll do:
Lead the design, development, and deployment of statistical models, machine learning algorithms, and custom analytics solutions
Collaborate consistently with team members to understand the purpose, focus, and objectives of each data analysis project, ensuring alignment and meaningful support
Translate client goals into clear modeling strategies, project plans, and deliverables
Guide the development of production-level model pipelines using tools such as Databricks and Azure ML
Collaborate with engineering, marketing, and strategic partners to integrate models into real-world applications
Monitor and improve model performance, ensuring high standards for reliability and business relevance
Present complex analytical results to technical and non-technical audiences in a clear, actionable format
Support innovation by identifying new tools, methods, and data sources, including the use of Snowflake for modern data architecture
Promote best practices in model governance, data ethics, and responsible AI
What you need:
Minimum 5-7 years of experience in data science, analytics, or predictive modeling
Experience leading all aspects of sophisticated data science initiatives with a solid foundation in technical strategy and execution
Strong programming skills in Python, R, or SAS for modeling and data analysis
Advanced SQL capabilities and experience working in cloud-based environments (e.g., Azure, AWS)
Hands-on experience with Databricks, Azure Machine Learning, and Snowflake strongly preferred
Experience applying the modeling rigor and documentation standards required in regulated industries such as financial services is a strong plus
Expertise in regression, classification, clustering, A/B testing, and audience segmentation
Proficiency with Tableau, Power BI, and Excel for data visualization and communication
Strong communication skills and the ability to translate complex technical findings into business insight
Bachelor's degree in Data Science, Statistics, Computer Science, or a related quantitative field (Master's preferred)
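To make the A/B testing expertise listed above concrete, here is a minimal, hedged sketch of a two-proportion z-test in plain Python; the function name and all numbers are illustrative, not OneMagnify's code or data:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B conversion experiment.

    Returns (z, two-sided p-value) under the pooled-variance
    normal approximation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative experiment: variant B converts at 12% vs. 10% for A,
# with 5,000 users per arm.
z, p = ab_test_z(500, 5000, 600, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these toy numbers the lift is significant at conventional thresholds; real experiments would also account for multiple testing and sample-size planning.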
Benefits
We offer a comprehensive benefits package including medical, dental, 401(k), paid holidays, vacations, and more.
About us
Whether it's awareness, advocacy, engagement, or efficacy, we move brands forward with work that connects with audiences and delivers results. Through meaningful analytics, engaging communications and innovative technology solutions, we help clients tackle their most ambitious projects and overcome their biggest challenges.
We are an equal opportunity employer
We believe that Innovative ideas and solutions start with unique perspectives. That's why we're committed to providing every employee a workplace that's free of discrimination and intolerance. We're proud to be an equal opportunity employer and actively search for like-minded people to join our team.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform job functions, and to receive benefits and privileges of employment. Please contact us to request accommodation.
Data Scientist
Data scientist job in Zeeland, MI
Why join us? Our purpose is to design for the good of humankind. It's the ideal we strive toward each day in everything we do. Being a part of MillerKnoll means being a part of something larger than your work team, or even your brand. We are redefining modern for the 21st century. And our success allows MillerKnoll to support causes that align with our values, so we can build a more sustainable, equitable, and beautiful future for everyone.
About the Role
We're looking for an experienced and adaptable Data Scientist to join our growing AI & Data Science team. You'll be part of a small, highly technical group focused on delivering impactful machine learning, forecasting, and generative AI solutions.
In this role, you'll work closely with stakeholders to translate business challenges into well-defined analytical problems, design and validate models, and communicate results in clear, actionable terms. You'll collaborate extensively with our ML Engineer to transition solutions from experimentation to production, ensuring models are both effective and robust in real-world environments. You'll be expected to quickly prototype and iterate on solutions, adapt to new tools and approaches, and share knowledge with the broader organization. This is a hands-on role with real impact and room to innovate.
Key Responsibilities
* Partner with business stakeholders to identify, scope, and prioritize data science opportunities.
* Translate complex business problems into structured analytical tasks and hypotheses.
* Design, develop, and evaluate machine learning, forecasting, and statistical models, considering fairness, interpretability, and business impact.
* Perform exploratory data analysis, feature engineering, and data preprocessing.
* Rapidly prototype solutions to assess feasibility before scaling.
* Interpret model outputs and clearly communicate findings, implications, and recommendations to both technical and non-technical audiences.
* Collaborate closely with the ML Engineer to transition models from experimentation into scalable, production-ready systems.
* Develop reproducible code, clear documentation, and reusable analytical workflows to support org-wide AI adoption.
* Stay up to date with advances in data science, AI/ML, and generative AI, bringing innovative approaches to the team.
Required Technical Skills
* Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Computer Science, or a related quantitative field, with 3+ years of applied experience in data science.
* Strong foundation in statistics, probability, linear algebra, and optimization.
* Proficiency with Python and common data science libraries (Pandas, NumPy, Scikit-learn, XGBoost, PyTorch or TensorFlow).
* Experience with time series forecasting, regression, classification, clustering, or recommendation systems.
* Familiarity with GenAI concepts and tools (LLM APIs, embeddings, prompt engineering, evaluation methods).
* Strong SQL skills and experience working with large datasets and cloud-based data warehouses (Snowflake, BigQuery, etc.).
* Solid understanding of experimental design and model evaluation metrics beyond accuracy.
* Experience with data visualization and storytelling tools (Plotly, Tableau, Power BI, or Streamlit).
* Exposure to MLOps/LLMOps concepts and working in close collaboration with engineering teams.
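On the requirement above about evaluation metrics beyond accuracy, a minimal stdlib-Python sketch (illustrative labels, not MillerKnoll code) shows why precision and recall matter on imbalanced data:

```python
def classification_report(y_true, y_pred):
    """Precision, recall, and F1 for the positive class (label 1).

    On imbalanced data these reveal failure modes that raw
    accuracy hides.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# 90% negatives: a model that misses most positives still scores
# high accuracy, but recall exposes the problem.
y_true = [1] * 10 + [0] * 90
y_pred = [1] * 2 + [0] * 8 + [0] * 90
report = classification_report(y_true, y_pred)
print(report)
```

Here accuracy is 0.92 while recall is only 0.20, which is exactly the gap that metric choice is meant to surface.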
Soft Skills & Qualities
* Excellent communication skills with the ability to translate analysis into actionable business recommendations.
* Strong problem-solving abilities and business acumen.
* High adaptability to evolving tools, frameworks, and industry practices.
* Curiosity and continuous learning mindset.
* Stakeholder empathy and ability to build trust while introducing AI solutions.
* Strong collaboration skills and comfort working in ambiguous, fast-paced environments.
* Commitment to clear documentation and knowledge sharing.
Who We Hire
Simply put, we hire qualified applicants representing a wide range of backgrounds and abilities. MillerKnoll is comprised of people of all abilities, gender identities and expressions, ages, ethnicities, sexual orientations, veterans from every branch of military service, and more. Here, you can bring your whole self to work. We're committed to equal opportunity employment, including veterans and people with disabilities.
This organization participates in E-Verify Employment Eligibility Verification. In general, MillerKnoll positions are closed within 45 days and are open for applications for a minimum of 5 days. We encourage our prospective candidates to submit their application(s) expediently so as not to miss out on our opportunities. We frequently post new opportunities and encourage prospective candidates to check back often for new postings.
MillerKnoll complies with applicable disability laws and makes reasonable accommodations for applicants and employees with disabilities. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact MillerKnoll Talent Acquisition at careers_********************.
Data Scientist
Data scientist job in Luxemburg, WI
Your career at Deutsche Börse Group
Your Area of Work
Join Clearstream Fund Services as a Data Scientist to design and prototype data products that empower data monetization and business users through curated datasets, semantic models, and advanced analytics. You'll work across the data stack, from pipelines to visualizations, and contribute to the evolution of AI-driven solutions.
Your Responsibilities
* Prototype data products including curated datasets and semantic models to support data democratization and self-service BI
* Design semantic layers to simplify data access and usability
* Develop and optimize data pipelines using data engineering tools (e.g., Databricks)
* Use SQL, Python, and PySpark for data processing and transformation
* Create Power BI dashboards to support prototyping and reporting
* Apply ML/AI techniques to support early-stage modeling and future product innovation
* Collaborate with data product managers, functional analysts, engineers, and business stakeholders
* Ensure data quality, scalability, and performance in all deliverables
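A minimal sketch of the data-quality gate implied by the last responsibility above, run before a curated dataset is published; the record fields and rules are hypothetical, not Clearstream's actual schema:

```python
def run_quality_checks(rows, checks):
    """Run declarative data-quality checks over a curated dataset
    (a list of dict records) and collect failing row indices per rule.
    """
    failures = {name: [] for name in checks}
    for i, row in enumerate(rows):
        for name, predicate in checks.items():
            if not predicate(row):
                failures[name].append(i)
    # Keep only rules that actually failed.
    return {name: idxs for name, idxs in failures.items() if idxs}

# Hypothetical fund-services records; field names are illustrative.
rows = [
    {"fund_id": "F001", "nav": 102.5, "currency": "EUR"},
    {"fund_id": "F002", "nav": -1.0, "currency": "EUR"},   # bad NAV
    {"fund_id": "", "nav": 98.1, "currency": "USD"},       # missing id
]
checks = {
    "fund_id_present": lambda r: bool(r["fund_id"]),
    "nav_positive": lambda r: r["nav"] > 0,
    "currency_iso": lambda r: len(r["currency"]) == 3,
}
result = run_quality_checks(rows, checks)
print(result)
```

In practice the same declarative pattern maps onto Databricks/PySpark expectations; the stdlib version just keeps the idea self-contained.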
Your Profile
* Master's in Data Science, Computer Science, Engineering, or related field
* 3+ years of experience in data pipeline development and prototyping in financial services or fund administration
* Proficiency in SQL, Python, and PySpark
* Hands-on experience with Databricks
* Experience building Power BI dashboards and semantic models
* Strong analytical and communication skills
* Fluent in English
Senior Data Scientist - Metrics
Data scientist job in Ann Arbor, MI
May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that literally reimagines the way AVs think.
Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We're building the world's best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces. Since our founding in 2017, we've given more than 300,000 autonomy-enabled rides to real people around the globe. And we're just getting started. We're hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us.
May Mobility is experiencing a period of significant growth as we expand our autonomous shuttle and mobility services nationwide. As we advance toward widespread deployment, the ability to measure safety and comfort objectively, accurately, and at scale is critical. The Senior Data Scientist in this role will shape how we evaluate AV performance, uncover system vulnerabilities, and ensure that every driving decision meets the highest standards of safety and passenger experience. Your work will directly influence product readiness, inform engineering priorities, and accelerate the path to building trustworthy, human-centered autonomous driving systems.
Responsibilities
* Develop and refine safety and comfort metrics for evaluating autonomous vehicle performance across real-world and simulation data.
* Build ML and non-ML models to detect unsafe, uncomfortable, or anomalous behaviors.
* Analyze large-scale drive logs and simulation datasets to identify patterns, regressions, and system gaps.
* Collaborate with perception, prediction, behavior, and simulation teams to integrate metrics into workflows.
* Communicate insights and recommendations to engineering leaders and cross-functional teams.
Skills
Success in this role typically requires the following competencies:
* Strong proficiency in Python, SQL, and data analysis tools (e.g., Pandas, NumPy, Spark).
* Strong understanding of vehicle dynamics, kinematics, agent interactions, and road/traffic elements.
* Expertise in analyzing high-dimensional or time-series data from sensors, logs, and simulation systems.
* Excellent technical communication skills with the ability to clearly present complex model designs and results to both technical and non-technical stakeholders.
* Detail-oriented with a focus on validation, testing, and error detection.
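As a hedged sketch of the kind of comfort metric this role develops, longitudinal jerk can be derived from a logged speed trace by differentiating twice; the threshold and trace below are illustrative, not May Mobility's actual limits or data:

```python
def comfort_violations(speeds_mps, dt, jerk_limit=2.0):
    """Flag ride-comfort violations in a vehicle speed trace.

    Differentiates speed twice to get longitudinal jerk (m/s^3)
    and returns the jerk-sample indices where |jerk| exceeds
    the limit.
    """
    accel = [(v2 - v1) / dt for v1, v2 in zip(speeds_mps, speeds_mps[1:])]
    jerk = [(a2 - a1) / dt for a1, a2 in zip(accel, accel[1:])]
    return [i for i, j in enumerate(jerk) if abs(j) > jerk_limit]

# Smooth acceleration, then an abrupt brake between samples 4 and 5.
trace = [0.0, 1.0, 2.0, 3.0, 4.0, 1.0, 1.0, 1.0]
violations = comfort_violations(trace, dt=1.0)
print(violations)
```

Production metrics would of course work on noisy, high-rate sensor data and need filtering before differentiation; the sketch only shows the shape of the computation.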
Qualifications and Experience
Required
* B.S., M.S., or Ph.D. degree in Engineering, Data Science, Computer Science, Math, or a related quantitative field.
* 5+ years of experience in data science, applied machine learning, robotics, or autonomous systems.
* 2+ years working in AV, ADAS, robotics, or another safety-critical domain involving vehicle behavior analysis.
* Demonstrated experience developing or evaluating safety and/or comfort metrics for autonomous or robotic systems.
* Hands-on experience working with real-world driving logs and/or simulation data.
Desired
* Background in motion planning, behavior prediction, or multi-agent interaction modeling.
* Experience designing metric-driven development, KPIs, and automated triaging pipelines.
Benefits and Perks
* Comprehensive healthcare suite including medical, dental, vision, life, and disability plans. Domestic partners who have been residing together at least one year are also eligible to participate.
* Health Savings and Flexible Spending Healthcare and Dependent Care Accounts available.
* Rich retirement benefits, including an immediately vested employer safe harbor match.
* Generous paid parental leave as well as a phased return to work.
* Flexible vacation policy in addition to paid company holidays.
* Total Wellness Program providing numerous resources for overall wellbeing.
Don't meet every single requirement? Studies have shown that women and/or people of color are less likely to apply to a job unless they meet every qualification. At May Mobility, we're committed to building a diverse, inclusive, and authentic workforce, so if you're excited about this role but your previous experience doesn't align perfectly with every qualification, we encourage you to apply anyway! You may be the perfect candidate for this or another role at May.
Want to learn more about our culture & benefits? Check out our website!
May Mobility is an equal opportunity employer. All applicants for employment will be considered without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity or expression, veteran status, genetics or any other legally protected basis. Below, you have the opportunity to share your preferred gender pronouns, gender, ethnicity, and veteran status with May Mobility to help us identify areas of improvement in our hiring and recruitment processes. Completion of these questions is entirely voluntary. Any information you choose to provide will be kept confidential, and will not impact the hiring decision in any way. If you believe that you will need any type of accommodation, please let us know.
Note to Recruitment Agencies: May Mobility does not accept unsolicited agency resumes. Furthermore, May Mobility does not pay placement fees for candidates submitted by any agency other than its approved partners.
Salary Range
$163,477-$240,408 USD
Senior Data Scientist
Data scientist job in Milwaukee, WI
Sun Life U.S. is one of the largest providers of employee and government benefits, helping approximately 50 million Americans access the care and coverage they need. Through employers, industry partners and government programs, Sun Life U.S. offers a portfolio of benefits and services, including dental, vision, disability, absence management, life, supplemental health, medical stop-loss insurance, and healthcare navigation. We have more than 6,400 employees and associates in our partner dental practices and operate nationwide.
Visit our website to discover how Sun Life is making life brighter for our customers, partners and communities.
Job Description:
Sun Life embraces a hybrid work model that balances in-office collaboration with the flexibility of virtual work. Internal candidates are not required to relocate near an office.
The opportunity: The Senior Data Scientist provides advanced analytics support within the Business Analytics function that applies the power of data with machine learning to enhance risk-based decisions across the Health and Risk Solutions business. This team is expected to work closely with the other functional teams, in particular the Pricing, Underwriting, and Clinical teams. This position reports to the Director, Data Science within the Health and Risk Solutions business.
Responsibilities include developing and monitoring predictive models to support our pricing, underwriting, and clinical review processes, as well as supporting the implementation of these models. Additional responsibilities may include applying machine learning techniques to streamline and automate aspects of these processes and bringing in industry-standard best practices for developing and maintaining MLOps.
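As one illustration of the model-monitoring work mentioned above, a Population Stability Index (PSI) check is a common industry baseline for detecting score drift; this stdlib-Python sketch is generic, not Sun Life's implementation, and its thresholds are conventional rules of thumb:

```python
import math

def population_stability_index(expected, actual, n_bins=10):
    """Population Stability Index between a training-time score
    distribution and a production sample.

    Rule of thumb: PSI < 0.1 is read as stable; > 0.25 as
    significant drift warranting model review.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / n_bins or 1.0

    def histogram(xs):
        counts = [0] * n_bins
        for x in xs:
            idx = min(int((x - lo) / width), n_bins - 1)
            counts[max(idx, 0)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                    # uniform scores
shifted = [min(i / 100 + 0.3, 0.999) for i in range(100)]   # drifted scores
psi_self = population_stability_index(baseline, baseline)
psi_drift = population_stability_index(baseline, shifted)
print(f"self PSI: {psi_self:.4f}, drift PSI: {psi_drift:.4f}")
```

A scheduled job computing this against each day's scored population is often the first automated guardrail in an MLOps setup.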
How you will contribute:
* Apply advanced data science techniques to solve business problems across a broad range of data analysis functions including predictive analysis, data modeling, visualization and data profiling.
* Apply expertise with data wrangling, feature engineering, model training and model evaluation.
* Identify the right algorithms and statistical techniques for a specific project, as well as the best features for a model.
* Align with best practices for data mining and modeling as established by the team
* Partner with key subject matter experts across various functional teams within the Health & Risk Solutions business to help develop predictive modeling and analytic solutions based on an understanding of business needs and opportunities.
* Develop and maintain high-quality, robust predictive models using advanced analytic techniques
* Extract and analyze internal and external data sources to help answer key business problems related to risk assessment.
What you will bring with you:
* Ability to work with a diverse range of people
* 4-5 years of experience in developing and implementing data science techniques
* BS/MS/PhD in a statistical, mathematical, or technical field (e.g., data science, computer science, actuarial science)
* Experience with actuarial/pricing, underwriting or related concepts utilized in the health insurance field preferred. Experience with pricing/underwriting advanced analytics application within property & casualty insurance sector is welcomed.
* Business knowledge of insurance sector preferred.
* Strong communication skills, with an ability to explain technical concepts to a non-technical audience.
* Attention to detail, with accuracy and clarity in communicating results and insights
* Proactive ownership of problem solving with team members and subject matter experts
* Experience with a broad range of statistical programming languages, applications and data environments (e.g., R, Python, SQL, Tableau)
* Strong knowledge of data science including conditioning, modeling, integration and visualization
* Strong knowledge of the fundamental statistical and AI underpinnings of data science
* Commitment to data compliance, model governance and security protocols
* Strong business acumen to understand why and how the work we do will impact our business stakeholders
Salary:
$97,400-$146,100
At our company, we are committed to pay transparency and equity. The salary range for this role is competitive nationwide, and we strive to ensure that compensation is fair and equitable. Your actual base salary will be determined based on your unique skills, qualifications, experience, education, and geographic location. In addition to your base salary, this position is eligible for a discretionary annual incentive award based on your individual performance as well as the overall performance of the business. We are dedicated to creating a work environment where everyone is rewarded for their contributions.
Not ready to apply yet but want to stay in touch? Join our talent community to stay connected until the time is right for you!
We are committed to fostering an inclusive environment where all employees feel they belong, are supported and empowered to thrive. We are dedicated to building teams with varied experiences, backgrounds, perspectives and ideas that benefit our colleagues, clients, and the communities where we operate. We encourage applications from qualified individuals from all backgrounds.
Life is brighter when you work at Sun Life
At Sun Life, we prioritize your well-being with comprehensive benefits, including generous vacation and sick time, market-leading paid family, parental and adoption leave, medical coverage, company paid life and AD&D insurance, disability programs and a partially paid sabbatical program. Plan for your future with our 401(k) employer match, stock purchase options and an employer-funded retirement account. Enjoy a flexible, inclusive and collaborative work environment that supports career growth. We're proud to be recognized in our communities as a top employer. Proudly Great Place to Work Certified in Canada and the U.S., we've also been recognized as a "Top 10" employer by the Boston Globe's "Top Places to Work" for two years in a row. Visit our website to learn more about our benefits and recognition within our communities.
We will make reasonable accommodations to the known physical or mental limitations of otherwise-qualified individuals with disabilities or special disabled veterans, unless the accommodation would impose an undue hardship on the operation of our business. Please email ************************* to request an accommodation.
For applicants residing in California, please read our employee California Privacy Policy and Notice.
We do not require or administer lie detector tests as a condition of employment or continued employment.
Sun Life will consider for employment all qualified applicants, including those with criminal histories, in a manner consistent with the requirements of applicable state and local laws, including applicable fair chance ordinances.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Job Category:
Advanced Analytics
Posting End Date:
11/01/2026
Senior Data Scientist
Data scientist job in Lansing, MI
**_What Data Science contributes to Cardinal Health_**
The Data & Analytics Function oversees the analytics life-cycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets and integrating multiple systems.
At Cardinal Health's Artificial Intelligence Center of Excellence (AI CoE), we are pushing the boundaries of healthcare with cutting-edge Data Science and Artificial Intelligence (AI). Our mission is to leverage the power of data to create innovative solutions that improve patient outcomes, streamline operations, and enhance the overall healthcare experience.
We are seeking a highly motivated and experienced Senior Data Scientist to join our team as a thought leader and architect of our AI strategy. You will play a critical role in fulfilling our vision through delivery of impactful solutions that drive real-world change.
**_Responsibilities_**
+ Lead the Development of Innovative AI solutions: Be responsible for designing, implementing, and scaling sophisticated AI solutions that address key business challenges within the healthcare industry by leveraging your expertise in areas such as Machine Learning, Generative AI, and RAG Technologies.
+ Develop advanced ML models for forecasting, classification, risk prediction, and other critical applications.
+ Explore and leverage the latest Generative AI (GenAI) technologies, including Large Language Models (LLMs), for applications like summarization, generation, classification and extraction.
+ Build robust Retrieval Augmented Generation (RAG) systems to integrate LLMs with vast repositories of healthcare and business data, ensuring accurate and relevant outputs.
+ Shape Our AI Strategy: Work closely with key stakeholders across the organization to understand their needs and translate them into actionable AI-driven or AI-powered solutions.
+ Act as a champion for AI within Cardinal Health, influencing the direction of our technology roadmap and ensuring alignment with our overall business objectives.
+ Guide and mentor a team of skilled and geographically distributed Data Scientists and ML Engineers, providing technical guidance, mentorship, and support while fostering a collaborative and innovative environment that encourages continuous learning and growth.
+ Embrace an AI-Driven Culture: foster a culture of data-driven decision-making, promoting the use of AI insights to drive business outcomes and improve customer experience and patient care.
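To make the RAG responsibilities above concrete, here is a minimal, hedged sketch of the retrieval step only: rank stored chunks by cosine similarity to a query embedding, then pack the top-k into the LLM prompt as grounding context. The toy 3-dimensional "embeddings" stand in for a real embedding model, and none of this is Cardinal Health code:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, store, k=2):
    """Retrieval step of a RAG system: return the k stored chunks
    most similar to the query embedding."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return [item["text"] for item in ranked[:k]]

# Toy healthcare-flavored corpus with hand-made 3-d vectors.
store = [
    {"text": "formulary coverage rules", "vec": [0.9, 0.1, 0.0]},
    {"text": "claims adjudication workflow", "vec": [0.1, 0.9, 0.1]},
    {"text": "patient demographics schema", "vec": [0.0, 0.2, 0.9]},
]
context = retrieve([0.8, 0.2, 0.1], store, k=2)
print(context)
```

A production system would swap in an embedding model and a vector database for the list scan, but the ranking logic is the same.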
**_Qualifications_**
+ 8-12 years of experience, with a minimum of 4 years in data science and a strong track record of success in developing and deploying complex AI/ML solutions, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ GenAI Proficiency: Deep understanding of Generative AI concepts, including LLMs, RAG technologies, embedding models, prompting techniques, and vector databases, along with evaluating retrievals from RAG and GenAI model outputs without ground truth
+ Experience building production-ready Generative AI applications involving RAG, LLMs, vector databases, and embedding models.
+ Extensive knowledge of healthcare data, including clinical data, patient demographics, and claims data. Understanding of HIPAA and other relevant regulations, preferred.
+ Experience working with cloud platforms like Google Cloud Platform (GCP) for data processing, model training, evaluation, monitoring, deployment and support preferred.
+ Proven ability to lead data science projects, mentor colleagues, and effectively communicate complex technical concepts to both technical and non-technical audiences preferred.
+ Proficiency in Python, statistical programming languages, machine learning libraries (Scikit-learn, TensorFlow, PyTorch), cloud platforms, and data engineering tools preferred.
+ Experience with Cloud Functions, Vertex AI, MLflow, Storage Buckets, IAM principles, and service accounts preferred.
+ Experience in building end-to-end ML pipelines, from data ingestion and feature engineering to model training, deployment, and scaling preferred.
+ Experience in building and implementing CI/CD pipelines for ML models and other solutions, ensuring seamless integration and deployment in production environments preferred.
+ Familiarity with RESTful API design and implementation, including building robust APIs to integrate your ML models and GenAI solutions with existing systems preferred.
+ Working understanding of software engineering patterns, solutions architecture, information architecture, and security architecture with an emphasis on ML/GenAI implementations preferred.
+ Experience working in Agile development environments, including Scrum or Kanban, and a strong understanding of Agile principles and practices preferred.
+ Familiarity with DevSecOps principles and practices, incorporating coding standards and security considerations into all stages of the development lifecycle preferred.
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $121,600 - $173,700
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before payday with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 11/05/2025
*If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice, click here._
Assistant Actuary
Data scientist job in Milwaukee, WI
Hybrid role in Milwaukee, part of Northwestern Mutual's Actuarial Leadership Development Program. In this rotational role, individuals will use their actuarial training and background to carry out in-depth analysis of a wide variety of business issues. Typical activities may include development of complex actuarial models, pricing one of many insurance product lines, forecasting future financial circumstances, analyzing product experience, assessing regulatory issues, consulting on product or other risk management issues, or providing significant assistance on corporate projects. The result of the work will frequently influence management toward action that can have a sizeable impact on products, markets, underwriting, pricing, risk management, and profitability.
Primary Duties and Responsibilities
Investigate and analyze technical issues of an actuarial nature.
Assess risk by determining the impact and the financial benefit of varied options.
Provide actuarial consultation to internal clients.
Qualifications
Undergraduate degree in Actuarial Science, Mathematics, Finance, or related field.
Attainment of Associate of the Society of Actuaries (ASA) designation at a minimum.
Most candidates will have made progress toward attaining the Fellowship (FSA) designation.
A minimum of three years of actuarial experience.
Skilled in using software tools that are used for mathematical and/or financial modeling and analysis.
Strong communication skills.
A strong understanding of the actuarial aspects of NM product lines.
A good grasp of risk management issues tied to our products and finances.
Experience in multiple actuarial disciplines (e.g. pricing, modeling, valuation), typically attained through rotational assignments is preferred.
Compensation Range:
Pay Range - Start: $92,750.00
Pay Range - End: $172,250.00
Geographic Specific Pay Structure:
205 - Structure 110: 102,060.00 USD - 189,540.00 USD
205 - Structure 115: 106,680.00 USD - 198,120.00 USD
We believe in fairness and transparency. It's why we share the salary range for most of our roles. However, final salaries are based on a number of factors, including the skills and experience of the candidate; the current market; location of the candidate; and other factors uncovered in the hiring process. The standard pay structure is listed but if you're living in California, New York City or other eligible location, geographic specific pay structures, compensation and benefits could be applicable, click here to learn more.
Grow your career with a best-in-class company that puts our clients' interests at the center of all we do. Get started now!
Northwestern Mutual is an equal opportunity employer who welcomes and encourages diversity in the workforce. We are committed to creating and maintaining an environment in which each employee can contribute creative ideas, seek challenges, assume leadership and continue to focus on meeting and exceeding business and personal objectives.
Sr Data Scientist
Data scientist job in Waterford, MI
We are Lennar
Lennar is one of the nation's leading homebuilders, dedicated to making an impact and creating an extraordinary experience for our Homeowners, Communities, and Associates by building quality homes and providing exceptional customer service, giving back to the communities in which we work and live, and fostering a culture of opportunity and growth for our Associates throughout their career. Lennar has been recognized as a Fortune 500 company and consistently ranked among the top homebuilders in the United States.
A Career that Empowers You to Build Your Future
As a Senior Data Scientist at Lennar, you will design, build, and deploy advanced models and AI agents that shape how Lennar prices, sells, and personalizes experiences for customers across 40+ divisions. You'll work end to end, from research and experimentation to production deployment and monitoring, delivering measurable business impact in pricing, sales, operations, and customer engagement. You'll collaborate across teams, navigate ambiguity, and drive innovation in a rapidly evolving AI ecosystem.
Your Responsibilities on the Team
Design, build, and deploy pricing recommendation models to optimize sales velocity, revenue, and division-level targets.
Develop sales forecasting and demand prediction models to support pricing and inventory decisions.
Build personalization algorithms for tailored product recommendations and communications across email, text, and digital platforms.
Apply machine vision and feature extraction on home attributes (photos, plans, finishes) to inform premium pricing and personalization strategies.
Design, build, and deploy autonomous AI agents using frameworks like Amazon Bedrock and AgentCore to solve business problems in pricing, sales, operations, and customer interactions.
Engineer and maintain data pipelines and systems supporting all models and agents, ensuring scalability and reliability.
Integrate agents with enterprise systems and protocols (MCP servers, A2A protocol, internal APIs).
Design and run experiments (A/B tests, multi-armed bandits, uplift models) to measure and optimize model and agent performance.
Ensure observability and reliability of deployed agents, including logging, evaluation, monitoring, and drift detection.
Proactively gather feedback from stakeholders and adapt solutions for adoption and measurable impact.
Translate complex data science and statistical concepts into clear recommendations, stories, and visualizations for executives and non-technical audiences.
Favor incremental, explainable solutions that deliver quick wins and scale over time.
Drive experimentation with new tools and approaches, ensuring robustness, governance, and scalability in production deployments.
Share learnings with the broader team to raise the bar on data science and agentic development across the organization.
Manage timelines and expectations transparently with both the data science team and business stakeholders.
Your Toolbox
Bachelor's or Master's degree in Statistics, Economics, Math, Computer Science, Data Science, Machine Learning, or related field (or equivalent experience).
5+ years of relevant experience (1+ with PhD, 3+ with MS) as a data scientist, ML engineer, or applied AI developer delivering production-ready models and systems.
Strong proficiency in Python and SQL, with experience owning the full data science stack (data pipelines + models + deployment).
Experience with pricing optimization, revenue management, economic modeling, and price elasticity/demand modeling.
Experience building and deploying large-scale recommender systems (collaborative filtering, embeddings, contextual bandits).
Hands-on experience with AI development frameworks (LangChain, Strands, Amazon Bedrock, AgentCore, or equivalent).
Experience with experimentation frameworks (A/B testing, uplift modeling, multi-armed bandits, causal ML).
Exposure to machine vision techniques (CNNs, transfer learning, embeddings) and NLP techniques (embeddings, transformers, prompt engineering).
Familiarity with real-time or near-real-time systems (Kafka, Kinesis, Flink, or similar) for scalable personalization.
Understanding of AI agent observability (evaluation frameworks like LangFuse, RAGAS, Weights & Biases, custom monitoring).
Experience with system integrations: APIs, A2A protocol, MCP servers, orchestration pipelines.
Comfort working with large-scale, imperfect real-world datasets and making progress despite complexity.
Strong engineering skills: ability to design and maintain production pipelines, microservices, and scalable systems.
Proven ability to navigate ambiguity, rapidly prototype, and move solutions into production.
Collaborative communicator who can align technical solutions with business priorities across diverse stakeholders.
Bonus: experience with RAG pipelines, LLM fine-tuning, RLHF, multi-agent orchestration, feature stores, survival analysis/churn modeling, and attribution modeling.
Bonus: background in real estate analytics, revenue management systems, or retail pricing optimization.
Physical & Office/Site Presence Requirements:
This is primarily a sedentary office position which requires the incumbent to have the ability to operate computer equipment, speak, hear, bend, stoop, reach, lift, move, and carry up to 25 lbs. Finger dexterity is necessary.
This description outlines the basic responsibilities and requirements for the position noted. This is not a comprehensive listing of all job duties of the Associates. Duties, responsibilities and activities may change at any time with or without notice.
Lennar is an equal opportunity employer and complies with all applicable federal, state, and local fair employment practices laws
#LI-KB2
Life at Lennar
At Lennar, we are committed to fostering a supportive and enriching environment for our Associates, offering a comprehensive array of benefits designed to enhance their well-being and professional growth. Our Associates have access to robust health insurance plans, including Medical, Dental, and Vision coverage, ensuring their health needs are well taken care of. Our 401(k) Retirement Plan, complete with a $1 for $1 Company Match up to 5%, helps secure their financial future, while Paid Parental Leave and an Associate Assistance Plan provide essential support during life's critical moments. To further support our Associates, we provide an Education Assistance Program and up to $30,000 in Adoption Assistance, underscoring our commitment to their diverse needs and aspirations. From the moment of hire, they can enjoy up to three weeks of vacation annually, alongside generous Holiday, Sick Leave, and Personal Day policies. Additionally, we offer a New Hire Referral Bonus Program, significant Home Purchase Discounts, and unique opportunities such as the Everyone's Included Day. At Lennar, we believe in investing in our Associates, empowering them to thrive both personally and professionally. Lennar Associates will have access to these benefits as outlined by Lennar's policies and applicable plan terms. Visit Lennartotalrewards.com to view our suite of benefits.
Join the fun and follow us on social media to see what's happening at our company, and don't forget to connect with us on Lennar: Overview | LinkedIn for the latest job opportunities.
Lennar is an equal opportunity employer and complies with all applicable federal, state, and local fair employment practices laws.
Data Warehouse Engineer I
Data scientist job in Menasha, WI
The Data Warehouse Engineer I is part of a team dedicated to supporting Network Health's Enterprise Data Warehouse. This individual will perform development, analysis, testing, debugging, documentation, implementation, and maintenance of interfaces to support the Enterprise Data Warehouse and related applications. They will consult with other technical resources and key departmental users on solutions and best practices. They will monitor performance and effectiveness of the data warehouse and recommend changes as appropriate.
Location: Candidates must reside in the state of Wisconsin for consideration. This position is eligible to work at your home office (reliable internet is required), at our office in Brookfield or Menasha, or a combination of both in our hybrid workplace model.
Hours: 1.0 FTE, 40 hours per week, 8am-5pm Monday through Friday, may be required to work later hours when system changes are being implemented or problems arise
Check out our 2024 Community Report to learn a little more about the difference our employees make in the communities where we live and work. As an employee, you will have the opportunity to work hard and have fun while getting paid to volunteer in your local neighborhood. You, too, can be part of the team and make a difference. Apply to this position to learn more about our team.
Job Responsibilities:
Perform end-to-end delivery of data interfaces in various stages of the Enterprise Data Warehouse in accordance with professional standards and industry best practice
Perform all phases of the development lifecycle including solution design, creation of acceptance criteria, implementation, technical documentation, development and execution of test cases, performance monitoring, troubleshooting, data analysis, and profiling
Consult with Developers, Engineers, DBAs, key departmental stakeholders, data governance and leadership on technical solutions and best practice
Monitor and audit the Enterprise Data Warehouse for effectiveness, throughput, and responsiveness. Recommend changes as appropriate. Troubleshoot customer complaints related to system performance issues
Maintain effective communication with customers from all departments for system development, implementation, and problem resolution
Required to take call to assist in resolution of technical problems
Other duties and responsibilities as assigned
Job Requirements:
Requires Associate Degree in Computer Science, Business, or related technical field; equivalent years of experience may be substituted
Minimum of 1 year of experience in program interfacing required
Experience with T-SQL development, SSIS development, and database troubleshooting skills required
Network Health is an Equal Opportunity Employer