Data Scientist
Data scientist job in Indianapolis, IN
We are seeking a Junior Data Scientist to join our large Utility client in downtown Indianapolis. This position will be hired as a Full-Time employee. This entry-level position is perfect for individuals eager to tackle real-world energy challenges through data exploration, predictive modeling, and collaborative problem-solving. As part of our team, you'll work closely with seasoned data scientists, analysts, architects, engineers, and governance specialists to generate insights that power smarter decisions and help shape the future of energy.
Key Responsibilities
Partner cross-functionally with data scientists, data architects and engineers, machine learning engineers, data analysts, and data governance experts to deliver integrated data solutions.
Collaborate with business stakeholders and analysts to define clear project requirements.
Collect, clean, and preprocess both structured and unstructured data from utility systems (e.g., meter data, customer data).
Conduct exploratory data analysis to uncover trends, anomalies, and opportunities to enhance grid operations and customer service.
Apply traditional machine learning techniques and generative AI tools to build predictive models that address utility-focused challenges, particularly in the customer domain (e.g., outage restoration, program adoption, revenue assurance).
Present insights to internal stakeholders in a clear, compelling format, including data visualizations that drive predictive decision-making.
Document methodologies, workflows, and results to ensure transparency and reproducibility.
Serve as a champion of data and AI across all levels of the client's US Utilities organization.
Stay informed on emerging industry trends in utility analytics and machine learning.
Requirements
Bachelor's degree in data science, statistics, computer science, engineering, or a related field. Master's degree or Ph.D. is preferred.
1-3 years of experience in a data science or analytics role.
Strong applied analytics and statistics skills, such as distributions, statistical testing, regression, etc.
Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn.
Proficiency in traditional machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVM), convolutional neural networks (CNN), random forest, gradient-boosted trees, etc.
Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows.
Proficiency in SQL, with experience writing complex queries and working with relational data structures. Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc.
Proficient in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects. Experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred.
Experience with cloud computing platforms (GCP preferred).
Ability to manage multiple priorities in a fast-paced environment.
Interest in learning more about the customer-facing side of the utility industry.
Compensation: Up to $130,000 annual salary. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
Senior Data Engineer
Data scientist job in Indianapolis, IN
Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. The successful candidate will be responsible for supporting the large-scale data modernization initiative and operationalizing the platform moving forward.
RESPONSIBILITIES:
Design, develop, and refine BI focused data architecture and data platforms
Work with internal teams to gather requirements and translate business needs into technical solutions
Build and maintain data pipelines supporting transformation
Develop technical designs, data models, and roadmaps
Troubleshoot and resolve data quality and processing issues
Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
Mentor and support junior team members
REQUIREMENTS:
5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
5+ years of experience across end-to-end data analysis and development
Experience using GIT version control
Advanced SQL skills
Strong experience with AWS cloud
PREFERRED SKILLS:
Experience with Snowflake
Experience with Python or R
Bachelor's degree in an IT-Related field
TERMS:
This is a direct hire opportunity with a salary up to $130K based on experience. They offer benefits including medical, dental, and vision along with generous PTO, 401K matching, wellness programs, and other benefits.
Systems Data Analyst
Data scientist job in Indianapolis, IN
Beacon Hill Technologies is seeking a proactive, data-driven analyst with strong initiative: someone who can expand on existing frameworks, validate data, and independently build tools that elevate team performance. You communicate clearly, think critically, and enjoy transforming complex technical information into meaningful business insights. You thrive in fast-paced environments and are comfortable working hands-on with evolving data systems. This position is hybrid!
Required Skills:
Bachelor's degree in Information Systems, Business Analytics, IT, or a related field (or equivalent experience).
3-5 years of experience in data analysis, IT operations, or an A/V-adjacent environment.
Proficiency with:
Tableau (strongly preferred; team's primary tool)
ServiceNow reporting
Excel (advanced formulas, macros)
Python (especially for Tableau-based scripting)
Experience working with large datasets and multiple data sources.
Ability to validate, test, and ensure data accuracy and integrity.
Strong communication skills; able to translate technical data into clear business insights.
Demonstrated ability to independently build new reports, dashboards, or tools when standard solutions are not available.
Desired Skills:
Experience with Cisco Spaces, digital room utilization analytics, or space-management tools.
Familiarity with A/V environments, technologies, or governance frameworks (big plus, but not required).
Experience developing or managing lifecycle models, performance metrics, or executive-level reporting dashboards.
Knowledge of AI-assisted reporting or automation tools.
Experience with procurement forecasting, budgeting data, or operational strategy analytics.
Beacon Hill is an equal opportunity employer and individuals with disabilities and/or protected veterans are encouraged to apply.
California residents: Qualified applications with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.
If you would like to complete our voluntary self-identification form, please click here or copy and paste the following link into an open window in your browser: *****************************************
Completion of this form is voluntary and will not affect your opportunity for employment, or the terms or conditions of your employment. This form will be used for reporting purposes only and will be kept separate from all other records.
Company Profile:
Beacon Hill Technologies, a premier National Information Technology Staffing Group, provides world class technology talent across all industries utilizing a complete suite of staffing services. Beacon Hill Technologies' dedicated team of recruiting and staffing experts consistently delivers quality IT professionals to solve our customers' technical and business needs.
Beacon Hill Technologies covers a broad spectrum of IT positions, including Project Management and Business Analysis, Programming/Development, Database, Infrastructure, Quality Assurance, Production/Support and ERP roles.
Learn more about Beacon Hill and our specialty divisions, Beacon Hill Associates, Beacon Hill Financial, Beacon Hill HR, Beacon Hill Legal, Beacon Hill Life Sciences and Beacon Hill Technologies by visiting *************
Benefits Information:
Beacon Hill offers a robust benefit package including, but not limited to, medical, dental, vision, and federal and state leave programs as required by applicable agency regulations to those that meet eligibility. Upon successfully being hired, details will be provided related to our benefit offerings.
We look forward to working with you.
Beacon Hill. Employing the Future™
Data Architect
Data scientist job in Detroit, MI
Millennium Software is looking for a Data Architect for one of its direct clients based in Michigan. This is an onsite role.
Title: Data Architect
Tax term: W2 only, no C2C
Description:
All of the below are must-haves:
Senior Data Architect with 12+ years of experience in Data Modeling.
Develop conceptual, logical, and physical data models.
Experience with GCP Cloud
Data Analyst
Data scientist job in Troy, MI
Data Analyst & Backend Developer with AI
The Digital Business Team develops promising digital solutions for global products and processes. It aims to organize Applus+ Laboratories' information, making it useful, fast, and reliable for both the Applus+ Group and its clients. The team's mission is to be recognized as the most digital, innovative, and customer-oriented company, reducing digital operations costs while increasing the value and portfolio of services.
We are looking for a Data Science / AI Engineer to join our Digital team and contribute to the development of evolving data products and applications.
Responsibilities:
Collect, organize, and analyze structured and unstructured data to extract actionable insights, generate code, train models, validate results, and draw meaningful conclusions.
Demonstrate advanced proficiency in Power BI, including data modeling, DAX, the creation of interactive dashboards, and connecting to diverse data sources.
Possess a strong mathematical and statistical foundation, with experience in numerical methods and a wide range of machine learning algorithms.
Exhibit hands-on experience in Natural Language Processing (NLP) and foundation models, with a thorough understanding of transformers, tokenization, and encoding-decoding processes.
Apply Explainable AI (XAI) techniques using Python libraries such as SHAP, LIME, or similar tools.
Develop and integrate AI models into backend systems utilizing frameworks such as FastAPI, Flask, or Django.
Demonstrate logical and organized thinking with attention to detail, capable of identifying data or code anomalies and effectively communicating findings through clear documentation and well-commented code.
Maintain a proactive mindset for optimizing analytical workflows and continuously improving models and tools.
Exhibit creativity and innovation in problem-solving, with a practical and results-oriented approach.
Technical Requirements:
Demonstrated experience coding in Python, specifically in data science projects using the most common ML libraries.
Previous experience working with generative AI and explainable AI is welcome.
Expertise in Power BI: data modeling, DAX, dashboards, integration with multiple sources.
Proficient in SQL for querying, transforming, optimizing databases.
Experienced in Python for data analysis, automation, machine learning.
Knowledgeable in data analytics, KPIs, business intelligence practices.
Skilled in translating business requirements into insights and visualizations.
Our current tech stack is:
Power BI (DAX)
SQL
Python
Commonly used ML/AI libraries.
Azure AI (OpenAI)
Education
Degree in Computer Science, Software Engineering, Applied Mathematics, or a related field.
A master's degree in data science or AI Engineering is an advantage.
Languages
English
If you are passionate about analytics, advanced AI algorithms, and challenging yourself, this is the right job for you!
Senior Data Engineer
Data scientist job in Indianapolis, IN
Senior Data Engineer - Azure Data Warehouse (5-7+ Years Experience)
Long-term renewing contract.
You will support Azure-based data warehouse and dashboarding initiatives, working alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.
Key Responsibilities
· Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server
· Apply Medallion architecture principles and best practices for data lake and warehouse design
· Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
· Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
· Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
· Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
· Support dashboarding initiatives with Power BI and wireframe collaboration
· Ensure auditability, lineage, and performance across SQL Server and Oracle environments
Required Skills & Experience
· 5-7+ years in data engineering, data warehouse design, and ETL development
· Strong expertise in Azure Data Factory, Databricks, and Python
· Deep understanding of SQL Server, Oracle, PostgreSQL, and Cosmos DB, and of data modeling standards
· Proven experience with Medallion architecture and data Lakehouse best practices
· Hands-on with CI/CD, DevOps, and deployment automation
· Agile mindset with ability to manage multiple priorities and deliver on time
· Excellent communication and documentation skills
Bonus Skills
· Experience with GCP or AWS
· Familiarity with Jira, Confluence, and AppDynamics
GCP Data Architect
Data scientist job in Dearborn, MI
Title: GCP Data Architect
Description: STG is a fast-growing Digital Transformation services company providing Fortune 500 companies with Digital Transformation, Mobility, Analytics and Cloud Integration services in both information technology and engineering product lines. STG has a 98% repeat business rate from existing clients and has achieved industry awards and recognition for our services.
Responsibilities:
Data Modeling and Design: Develop conceptual, logical, and physical data models for business intelligence, analytics, and reporting solutions. Transform requirements into scalable, flexible, and efficient data structures that can support advanced analytics.
Requirement Analysis: Collaborate with business analysts, stakeholders, and subject matter experts to gather and interpret requirements for new data initiatives. Translate business questions into data models that can answer these questions.
Data Integration: Work closely with data engineers to integrate data from multiple sources, ensuring consistency, accuracy, and reliability. Map data flows and document relationships between datasets.
Database Architecture: Design and optimize database schemas using the medallion architecture which includes relational, star schema and denormalized data sets for BI and ML data consumers.
Metadata Management: Team with the data governance team so that detailed documentation on data definitions, data lineage, and data quality statistics is available to data consumers.
Data Quality Assurance: Establish master data management data modeling so that the history of how customer, provider, and other party data is consolidated into a single version of the truth is captured.
Collaboration and Communication: Serve as a bridge between technical teams and business units, clearly communicating the value and limitations of various data sources and structures.
Governance and Compliance: Ensure that data models and processes adhere to regulatory standards and organizational policies regarding privacy, access, and security.
Experience Required:
Specialist Exp: 10+ yrs in IT; 7+ yrs as Data Architect
PowerBuilder
PostgreSQL
GCP
BigQuery
The GCP Data Architect position is based at our corporate office in Dearborn, Michigan. A great opportunity to experience the corporate environment while advancing personal career growth.
Resume Submittal Instructions: Interested/qualified candidates should email their word formatted resumes to Ms. Shweta Huria at ********************** and/or contact at ************. In the subject line of the email please include: First and Last Name (GCP Data Architect).
For more information about STG, please visit us at **************
GCP Data Engineer
Data scientist job in Dearborn, MI
Experience Required: 8+ years
Work Status: Hybrid
We're seeking an experienced GCP Data Engineer who can build a cloud analytics platform to meet ever-expanding business requirements with speed and quality using lean Agile practices. You will analyze and manipulate large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in the Google Cloud Platform (GCP). You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications to GCP. Experience with large-scale solutions and the operationalization of data warehouses, data lakes, and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design the right solutions with an appropriate combination of GCP and third-party technologies for deploying on Google Cloud Platform.
You will:
Work in a collaborative environment, including pairing and mobbing with other cross-functional engineers
Work on a small agile team to deliver working, tested software
Work effectively with fellow data engineers, product owners, data champions, and other technical experts
Demonstrate technical knowledge/leadership skills and advocate for technical excellence
Develop exceptional analytics data products using streaming and batch ingestion patterns in the Google Cloud Platform with solid data warehouse principles
Be the subject matter expert in data engineering and GCP tool technologies
Skills Required:
BigQuery
Skills Preferred:
N/A
Experience Required:
In-depth understanding of Google's product technology (or other cloud platform) and underlying architectures
5+ years of analytics application development experience
5+ years of SQL development experience
3+ years of cloud experience (GCP preferred) with solutions designed and implemented at production scale
Experience working in GCP-based big data deployments (batch/real-time) leveraging Terraform, BigQuery, Google Cloud Storage, Pub/Sub, Dataflow, Dataproc, Airflow, etc.
2+ years of professional development experience in Java or Python, and Apache Beam
Extracting, loading, transforming, cleaning, and validating data
Designing pipelines and architectures for data processing
1+ year of designing and building CI/CD pipelines
Experience Preferred:
Experience building machine learning solutions using TensorFlow, BigQuery ML, AutoML, Vertex AI
Experience in building solution architecture, provisioning infrastructure, and delivering secure and reliable data-centric services and applications in GCP
Experience with Dataplex is preferred
Experience with development ecosystems such as Git, Jenkins, and CI/CD
Exceptional problem-solving and communication skills
Experience working with dbt/Dataform
Experience working with Agile and Lean methodologies
Team player with attention to detail
Performance tuning experience
Education Required:
Bachelor's Degree
Education Preferred:
Master's Degree
Additional Safety Training/Licensing/Personal Protection Requirements:
Additional Information:
***POSITION IS HYBRID***
Primary Skills Required:
Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment
Implement methods for automation of all parts of the pipeline to minimize labor in development and production
Experience analyzing complex data, organizing raw data, and integrating massive datasets from multiple data sources to build subject areas and reusable data products
Experience working with architects to evaluate and productionalize appropriate GCP tools for data ingestion, integration, presentation, and reporting
Experience working with all stakeholders to formulate business problems as technical data requirements, and identifying and implementing technical solutions while ensuring key business drivers are captured in collaboration with product management; this includes designing and deploying a pipeline with automated data lineage
Identify, develop, evaluate, and summarize proofs of concept to prove out solutions; test and compare competing solutions and report out a point of view on the best solution
Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine
Additional Skills Preferred:
Strong drive for results and ability to multi-task and work independently
Self-starter with proven innovation skills
Ability to communicate and work with cross-functional teams and all levels of management
Demonstrated commitment to quality and project timing
Demonstrated ability to document complex systems
Experience creating and executing detailed test plans
Additional Education Preferred:
GCP Professional Data Engineer Certified
In-depth software engineering knowledge
Data Scientist III
Data scientist job in Pontiac, MI
Team members in the Data Scientist role at UWM are responsible for modeling complex problems, discovering insights, and identifying opportunities using statistics, algorithms, machine learning, and visualization techniques. Data Scientists work closely with executives, product owners, SMEs, and other business teams to leverage data and help inform critical business decisions. Data Scientists at UWM need to be creative thinkers who propose innovative ways to look at problems by examining and discovering new patterns within our datasets and collaborating with our business stakeholders.
They will need to validate their results using an experimental and iterative approach. Perhaps most importantly, they will need to be able to communicate their insights and results to the business in a clear, concise, and approachable way. They need to be storytellers of their work.
These professionals will need a combination of business focus, data programming knowledge, and strong analytical and problem-solving skills to be able to quickly develop and test hypotheses and provide conclusions in a clear, structured manner. This role includes the full data science lifecycle, from analytic problem definition through data wrangling, analysis, model development, reporting/visualization development, testing, deployment, and feedback.
WHAT YOU WILL BE DOING
* Work with stakeholders throughout the organization to identify opportunities for leveraging company data to increase efficiency or improve the bottom line.
* Analyze UWM data sets to identify areas of optimization and improvement of business strategies.
* Assess the effectiveness and accuracy of new data sources and data gathering techniques.
* Develop custom data models, algorithms, simulations, and predictive modeling to support insights and opportunities for improvement.
* Develop A/B testing framework and test model quality.
* Coordinate with different business areas to implement models and monitor outcomes.
* Develop processes and tools to monitor and analyze model performance and data accuracy
WHAT WE NEED FROM YOU
Must Have
* Bachelor's degree in Finance, Statistics, Economics, Data Science, Computer Science, Engineering or Mathematics, or related field
* 5+ years of experience in statistical analysis, and/or machine learning
* 5+ years of experience with one or more of the following tools: machine learning (Python, MATLAB), data wrangling skills/tools (Hadoop, Teradata, SAS, or other), statistical analysis (Python, R, SAS), and/or visualization skills/tools (Power BI, Tableau, QlikView)
* 3+ years of experience collaborating with teams (either internal or external) to develop analytics solutions
* Strong problem-solving skills
* Strong communication skills (interpersonal, written, and presentation)
Nice to Have
* Master's degree in Finance, Statistics, Economics, Data Science, Computer Science, Mathematics or related field
* 3+ years of experience with R, SQL, Tableau, MATLAB, Python
* 3+ years of professional experience in machine learning, data mining, statistical analysis, modeling, optimization
* Experience in Accounting, Finance, and Economics
THE PLACE & THE PERKS
Ready to join thousands of talented team members who are making the dream of home ownership possible for more Americans? It's all happening on UWM's campus, where our award-winning workplace packs plenty of perks and amenities that keep the atmosphere buzzing with energy and excitement.
It's no wonder that out of our six pillars, People Are Our Greatest Asset is number one. It's at the very heart of how we treat each other, our clients and our community. Whether it's providing elite client service or continuously striving to improve, our pillars provide a pathway to a more successful personal and professional life.
From the team member that holds a door open to the one that helps guide your career, you'll feel the encouragement and support on day one. No matter your race, creed, gender, age, sexual orientation and ethnicity, you'll be welcomed here. Accepted here. And empowered to Be You Here.
More reasons you'll love working here include:
* Paid Time Off (PTO) after just 30 days
* Additional parental and maternity leave benefits after 12 months
* Adoption reimbursement program
* Paid volunteer hours
* Paid training and career development
* Medical, dental, vision and life insurance
* 401k with employer match
* Mortgage discount and area business discounts
* Free membership to our large, state-of-the-art fitness center, including exercise classes such as yoga and Zumba, various sports leagues and a full-size basketball court
* Wellness area, including an in-house primary-care physician's office, full-time massage therapist and hair salon
* Gourmet cafeteria featuring homemade breakfast and lunch
* Convenience store featuring healthy grab-and-go snacks
* In-house Starbucks and Dunkin
* Indoor/outdoor café with Wi-Fi
DISCLAIMER
All the above duties and responsibilities are essential job functions subject to reasonable accommodation and change. All job requirements listed indicate the minimum level of knowledge, skills and/or ability deemed necessary to perform the job proficiently. Team members may be required to perform other or different job-related duties as requested by their team lead, subject to reasonable accommodation. This document does not create an employment contract, implied or otherwise. Employment with UWM is "at-will." UWM is an Equal Opportunity Employer. By selecting "Apply for this job online" you provide consent to UWM to record phone call conversations between you and UWM to be used for quality control purposes.
Data Scientist
Data scientist job in Zeeland, MI
Why join us? Our purpose is to design for the good of humankind. It's the ideal we strive toward each day in everything we do. Being a part of MillerKnoll means being a part of something larger than your work team, or even your brand. We are redefining modern for the 21st century. And our success allows MillerKnoll to support causes that align with our values, so we can build a more sustainable, equitable, and beautiful future for everyone.
About the Role
We're looking for an experienced and adaptable Data Scientist to join our growing AI & Data Science team. You'll be part of a small, highly technical group focused on delivering impactful machine learning, forecasting, and generative AI solutions.
In this role, you'll work closely with stakeholders to translate business challenges into well-defined analytical problems, design and validate models, and communicate results in clear, actionable terms. You'll collaborate extensively with our ML Engineer to transition solutions from experimentation to production, ensuring models are both effective and robust in real-world environments. You'll be expected to quickly prototype and iterate on solutions, adapt to new tools and approaches, and share knowledge with the broader organization. This is a hands-on role with real impact and room to innovate.
Key Responsibilities
* Partner with business stakeholders to identify, scope, and prioritize data science opportunities.
* Translate complex business problems into structured analytical tasks and hypotheses.
* Design, develop, and evaluate machine learning, forecasting, and statistical models, considering fairness, interpretability, and business impact.
* Perform exploratory data analysis, feature engineering, and data preprocessing.
* Rapidly prototype solutions to assess feasibility before scaling.
* Interpret model outputs and clearly communicate findings, implications, and recommendations to both technical and non-technical audiences.
* Collaborate closely with the ML Engineer to transition models from experimentation into scalable, production-ready systems.
* Develop reproducible code, clear documentation, and reusable analytical workflows to support org-wide AI adoption.
* Stay up to date with advances in data science, AI/ML, and generative AI, bringing innovative approaches to the team.
Required Technical Skills
* Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Computer Science, or a related quantitative field, with 3+ years of applied experience in data science.
* Strong foundation in statistics, probability, linear algebra, and optimization.
* Proficiency with Python and common data science libraries (Pandas, NumPy, Scikit-learn, XGBoost, PyTorch or TensorFlow).
* Experience with time series forecasting, regression, classification, clustering, or recommendation systems.
* Familiarity with GenAI concepts and tools (LLM APIs, embeddings, prompt engineering, evaluation methods).
* Strong SQL skills and experience working with large datasets and cloud-based data warehouses (Snowflake, BigQuery, etc.).
* Solid understanding of experimental design and model evaluation metrics beyond accuracy.
* Experience with data visualization and storytelling tools (Plotly, Tableau, Power BI, or Streamlit).
* Exposure to MLOps/LLMOps concepts and working in close collaboration with engineering teams.
Soft Skills & Qualities
* Excellent communication skills with the ability to translate analysis into actionable business recommendations.
* Strong problem-solving abilities and business acumen.
* High adaptability to evolving tools, frameworks, and industry practices.
* Curiosity and continuous learning mindset.
* Stakeholder empathy and ability to build trust while introducing AI solutions.
* Strong collaboration skills and comfort working in ambiguous, fast-paced environments.
* Commitment to clear documentation and knowledge sharing.
Who We Hire
Simply put, we hire qualified applicants representing a wide range of backgrounds and abilities. MillerKnoll is comprised of people of all abilities, gender identities and expressions, ages, ethnicities, sexual orientations, veterans from every branch of military service, and more. Here, you can bring your whole self to work. We're committed to equal opportunity employment, including veterans and people with disabilities.
This organization participates in E-Verify Employment Eligibility Verification. In general, MillerKnoll positions are closed within 45 days and are open for applications for a minimum of 5 days. We encourage our prospective candidates to submit their application(s) expediently so as not to miss out on our opportunities. We frequently post new opportunities and encourage prospective candidates to check back often for new postings.
MillerKnoll complies with applicable disability laws and makes reasonable accommodations for applicants and employees with disabilities. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact MillerKnoll Talent Acquisition at careers_********************.
Data Scientist
Data scientist job in Zeeland, MI
Why join us?
Our purpose is to design for the good of humankind. It's the ideal we strive toward each day in everything we do. Being a part of MillerKnoll means being a part of something larger than your work team, or even your brand. We are redefining modern for the 21st century. And our success allows MillerKnoll to support causes that align with our values, so we can build a more sustainable, equitable, and beautiful future for everyone.
About the Role
We're looking for an experienced and adaptable Data Scientist to join our growing AI & Data Science team. You'll be part of a small, highly technical group focused on delivering impactful machine learning, forecasting, and generative AI solutions.
In this role, you'll work closely with stakeholders to translate business challenges into well-defined analytical problems, design and validate models, and communicate results in clear, actionable terms. You'll collaborate extensively with our ML Engineer to transition solutions from experimentation to production, ensuring models are both effective and robust in real-world environments. You'll be expected to quickly prototype and iterate on solutions, adapt to new tools and approaches, and share knowledge with the broader organization. This is a hands-on role with real impact and room to innovate.
Key Responsibilities
Partner with business stakeholders to identify, scope, and prioritize data science opportunities.
Translate complex business problems into structured analytical tasks and hypotheses.
Design, develop, and evaluate machine learning, forecasting, and statistical models, considering fairness, interpretability, and business impact.
Perform exploratory data analysis, feature engineering, and data preprocessing.
Rapidly prototype solutions to assess feasibility before scaling.
Interpret model outputs and clearly communicate findings, implications, and recommendations to both technical and non-technical audiences.
Collaborate closely with the ML Engineer to transition models from experimentation into scalable, production-ready systems.
Develop reproducible code, clear documentation, and reusable analytical workflows to support org-wide AI adoption.
Stay up to date with advances in data science, AI/ML, and generative AI, bringing innovative approaches to the team.
Required Technical Skills
Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Computer Science, or a related quantitative field, with 3+ years of applied experience in data science.
Strong foundation in statistics, probability, linear algebra, and optimization.
Proficiency with Python and common data science libraries (Pandas, NumPy, Scikit-learn, XGBoost, PyTorch or TensorFlow).
Experience with time series forecasting, regression, classification, clustering, or recommendation systems.
Familiarity with GenAI concepts and tools (LLM APIs, embeddings, prompt engineering, evaluation methods).
Strong SQL skills and experience working with large datasets and cloud-based data warehouses (Snowflake, BigQuery, etc.).
Solid understanding of experimental design and model evaluation metrics beyond accuracy.
Experience with data visualization and storytelling tools (Plotly, Tableau, Power BI, or Streamlit).
Exposure to MLOps/LLMOps concepts and working in close collaboration with engineering teams.
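The skills list above asks for "model evaluation metrics beyond accuracy." As a minimal illustration of what that means in practice (synthetic imbalanced data, scikit-learn assumed available; not part of the posting), accuracy alone can look strong on a 9:1 class split while precision and recall tell a different story:

```python
# Illustrative only: evaluating a classifier with metrics beyond accuracy,
# using scikit-learn on a synthetic, imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
)

# Synthetic data with a roughly 9:1 class imbalance, where accuracy is misleading.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]  # scores for the positive class

print(f"accuracy:  {accuracy_score(y_test, pred):.2f}")
print(f"precision: {precision_score(y_test, pred):.2f}")
print(f"recall:    {recall_score(y_test, pred):.2f}")
print(f"f1:        {f1_score(y_test, pred):.2f}")
print(f"roc_auc:   {roc_auc_score(y_test, proba):.2f}")
```

On imbalanced problems like outage or churn prediction, recall and ROC AUC on the minority class are usually the numbers stakeholders actually care about.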
Soft Skills & Qualities
Excellent communication skills with the ability to translate analysis into actionable business recommendations.
Strong problem-solving abilities and business acumen.
High adaptability to evolving tools, frameworks, and industry practices.
Curiosity and continuous learning mindset.
Stakeholder empathy and ability to build trust while introducing AI solutions.
Strong collaboration skills and comfort working in ambiguous, fast-paced environments.
Commitment to clear documentation and knowledge sharing.
Who We Hire?
Simply put, we hire qualified applicants representing a wide range of backgrounds and abilities. MillerKnoll is comprised of people of all abilities, gender identities and expressions, ages, ethnicities, sexual orientations, veterans from every branch of military service, and more. Here, you can bring your whole self to work. We're committed to equal opportunity employment, including veterans and people with disabilities.
This organization participates in E-Verify Employment Eligibility Verification. In general, MillerKnoll positions are closed within 45 days and are open for applications for a minimum of 5 days. We encourage our prospective candidates to submit their application(s) expediently so as not to miss out on our opportunities. We frequently post new opportunities and encourage prospective candidates to check back often for new postings.
MillerKnoll complies with applicable disability laws and makes reasonable accommodations for applicants and employees with disabilities. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact MillerKnoll Talent Acquisition at careers_********************.
Manager, Data Scientist, DMP
Data scientist job in Indiana
Work Type: Office Working. Employment Type: Permanent. Job Description: This role sits within the Deposit Pricing Analytics team in SCMAC. The primary focus of the role is:
* Develop fit-for-purpose AI solutions by leveraging advanced data and analytical tools and technology within WRB. The individual will own end-to-end analytics solution development, deployment, and performance assessment, and will produce high-quality data science conclusions, backed by results, for the WRB business.
* Take end-to-end responsibility for translating business questions into data science requirements and actions. Ensure model governance, including documentation, validation, and maintenance.
* Perform AI solution development and delivery to enable high-impact marketing use cases across products and segments in WRB markets.
* Align with country and Group product and segment teams on the key business use cases to address with AI solutions, in accordance with the model governance framework.
* Develop pricing and optimization solutions for markets.
* Conceptualize and build high-impact use cases for the deposits portfolio.
* Implement and track use cases in markets, and lead discussions with the governance team on model approvals.
Key Responsibilities
Business
* Analyse and agree on the solution design for analytics projects
* Develop and deliver analytical solutions and models based on the agreed methodology
* Partner with the project owner to create the implementation plan, including model benefits
* Support deployment of initiatives, including scoring or implementation through any system
* Consolidate and track model performance for periodic assessment
* Create the technical and review documents for approval
* Client lifecycle management (acquire, activation, cross-sell/up-sell, retention & win-back)
* Enable scientific "test and learn" for direct-to-client campaigns
* Pricing analytics and optimization
* Digital analytics including social media data analytics for any new methodologies
* Channel optimization
* Client wallet utilization prediction both off-us and on-us
* Client and product profitability prediction
Processes
* Continuously improve the operational efficiency and effectiveness of processes
* Ensure effective management of operational risks within the function and compliance with applicable internal policies, and external laws and regulations
Key stakeholders
* Group/Region Analytics teams
* Group / Region/Country Product & Segment Teams
* Group / Region / Country Channels/distribution
* Group / Region / Country Risk Analytics Teams
* Group / Regional / Country Business Teams
* Support functions including Finance, Technology, Analytics Operation
Skills and Experience
* Data Science
* Anti Money Laundering Policies & procedures
* Modelling: Data, Process, Events, Objects
* Banking Product
* 2-4 years of experience (overall)
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
* Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
* Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
* Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term
What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
* Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
* Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (up to 12 months) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days.
* Flexible working options based around home and office locations, with flexible working patterns.
* Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
* A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
* Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
Data Scientist
Data scientist job in Indianapolis, IN
Who We Are and What We Do
At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture - leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
We are seeking a highly skilled Data Scientist with experience in bioprocess control to join our global Data Science team. This role will focus on applying artificial intelligence, machine learning, and statistical modeling to develop active bioprocess control algorithms, optimize bioprocess workflows, improve operational efficiency, and enable data-driven decision-making in manufacturing and R&D environments.
What You'll Do:
Design and implement active process control strategies, leveraging online measurements and dynamic parameter adjustment for improved productivity and safety.
Develop and deploy predictive models for bioprocess optimization, including fermentation and downstream processing.
Partner with engineers and scientists to translate process insights into actionable improvements, such as yield enhancement and cost reduction.
Analyze high-complexity datasets from bioprocess operations, including sensor data, batch records, and experimental results.
Collaborate with cross-functional teams to integrate data science solutions into plant operations, ensuring scalability and compliance.
Communicate findings through clear visualizations, reports, and presentations to technical and non-technical stakeholders.
Contribute to continuous improvement of data pipelines and modeling frameworks for bioprocess control.
What Skills You Need:
M.S. + 3 years' experience or Ph.D. in Data Science, Computer Science, Chemical Engineering, Bioprocess Engineering, Statistics, or related quantitative field.
Strong foundation in machine learning, statistical modeling, and process control.
Proficiency in Python, R, or another programming language.
Excellent communication and collaboration skills; ability to work in multidisciplinary teams.
Preferred Skills:
Familiarity with bioprocess workflows, fermentation, and downstream processing.
Hands-on experience with bioprocess optimization models and active process control strategies.
Experience with industrial data systems and cloud platforms.
Knowledge of reinforcement learning or adaptive experimentation for process improvement.
Benefits - How We'll Support You:
Numerous development opportunities offered to build your skills
Be part of a company with a higher purpose and contribute to making the world a better place
Health benefits for you and your family on your first day of employment
Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
Excellent parental leave which includes a minimum of 16 weeks for mother and father
Future planning with our competitive retirement savings plan and tuition reimbursement program
Learn more about our total rewards package here - Corteva Benefits
Check out life at Corteva! *************************************
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to embracing our differences to enrich lives, advance innovation, and boost company performance. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, military or veteran status, pregnancy related conditions (including pregnancy, childbirth, or related medical conditions), disability or any other protected status in accordance with federal, state, or local laws.
Senior Data Scientist - Metrics
Data scientist job in Ann Arbor, MI
May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that literally reimagines the way AVs think.
Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We're building the world's best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces. Since our founding in 2017, we've given more than 300,000 autonomy-enabled rides to real people around the globe. And we're just getting started. We're hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us.
May Mobility is experiencing a period of significant growth as we expand our autonomous shuttle and mobility services nationwide. As we advance toward widespread deployment, the ability to measure safety and comfort objectively, accurately, and at scale is critical. The Senior Data Scientist in this role will shape how we evaluate AV performance, uncover system vulnerabilities, and ensure that every driving decision meets the highest standards of safety and passenger experience. Your work will directly influence product readiness, inform engineering priorities, and accelerate the path to building trustworthy, human-centered autonomous driving systems.
Responsibilities
* Develop and refine safety and comfort metrics for evaluating autonomous vehicle performance across real-world and simulation data.
* Build ML and non-ML models to detect unsafe, uncomfortable, or anomalous behaviors.
* Analyze large-scale drive logs and simulation datasets to identify patterns, regressions, and system gaps.
* Collaborate with perception, prediction, behavior, and simulation teams to integrate metrics into workflows.
* Communicate insights and recommendations to engineering leaders and cross-functional teams.
Skills
Success in this role typically requires the following competencies:
* Strong proficiency in Python, SQL, and data analysis tools (e.g., Pandas, NumPy, Spark).
* Strong understanding of vehicle dynamics, kinematics, agent interactions, and road/traffic elements.
* Expertise in analyzing high-dimensional or time-series data from sensors, logs, and simulation systems.
* Excellent technical communication skills with the ability to clearly present complex model designs and results to both technical and non-technical stakeholders.
* Detail-oriented with a focus on validation, testing, and error detection.
Qualifications and Experience
Required
* B.S, M.S. or Ph.D. Degree in Engineering, Data Science, Computer Science, Math, or a related quantitative field.
* 5+ years of experience in data science, applied machine learning, robotics, or autonomous systems.
* 2+ years working in AV, ADAS, robotics, or another safety-critical domain involving vehicle behavior analysis.
* Demonstrated experience developing or evaluating safety and/or comfort metrics for autonomous or robotic systems.
* Hands-on experience working with real-world driving logs and/or simulation data.
Desired
* Background in motion planning, behavior prediction, or multi-agent interaction modeling.
* Experience designing metric-driven development, KPIs, and automated triaging pipelines.
Benefits and Perks
* Comprehensive healthcare suite including medical, dental, vision, life, and disability plans. Domestic partners who have been residing together at least one year are also eligible to participate.
* Health Savings and Flexible Spending Healthcare and Dependent Care Accounts available.
* Rich retirement benefits, including an immediately vested employer safe harbor match.
* Generous paid parental leave as well as a phased return to work.
* Flexible vacation policy in addition to paid company holidays.
* Total Wellness Program providing numerous resources for overall wellbeing.
Don't meet every single requirement? Studies have shown that women and/or people of color are less likely to apply to a job unless they meet every qualification. At May Mobility, we're committed to building a diverse, inclusive, and authentic workforce, so if you're excited about this role but your previous experience doesn't align perfectly with every qualification, we encourage you to apply anyway! You may be the perfect candidate for this or another role at May.
Want to learn more about our culture & benefits? Check out our website!
May Mobility is an equal opportunity employer. All applicants for employment will be considered without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity or expression, veteran status, genetics or any other legally protected basis. Below, you have the opportunity to share your preferred gender pronouns, gender, ethnicity, and veteran status with May Mobility to help us identify areas of improvement in our hiring and recruitment processes. Completion of these questions is entirely voluntary. Any information you choose to provide will be kept confidential, and will not impact the hiring decision in any way. If you believe that you will need any type of accommodation, please let us know.
Note to Recruitment Agencies: May Mobility does not accept unsolicited agency resumes. Furthermore, May Mobility does not pay placement fees for candidates submitted by any agency other than its approved partners.
Salary Range
$163,477-$240,408 USD
Advisory, Data Scientist - CMC Data Products
Data scientist job in Indianapolis, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview: The Bioproduct Research and Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS- S3, RDS, Lambda/Glue, Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
Additional Preferences
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (******************************************************** for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is
$126,000 - $244,200
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly
Sr Data Scientist
Data scientist job in Indiana
What's the role?
As a Senior Data Scientist within the Quality and Business Planning team, you are passionate about delivering quality products. This involves deploying statistical and analytic methods, measurements, and machine learning models to measure and impact the quality of our map products. You are responsible for developing sampling and test designs, data and computing workflows, and analytics centered on Map and Location quality requirements.
Expected to be an expert on statistical and experimental design methods for Map and Location quality testing processes
Interpret project objectives and requirements, create enabling data science solutions and produce impactful outcomes
Develop sampling plans for data collection, quality evaluation, and the production of training data, along with technical estimators for the evaluation of map quality, and A/B experiments to validate optimization and solution approaches
Build and test statistical and machine learning models to support improvement of a wide variety of data-driven processes for map making
Design and develop data and computing tools to enable processing of quality testing data and results
Work with map and location experts, engineering teams, and other teams across the company to enable high-quality map products
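The responsibilities above mention A/B experiments to validate optimization approaches. As a minimal, standard-library sketch (hypothetical counts, not from the posting), the readout of a simple A/B test on conversion rates is a two-proportion z-test:

```python
# Illustrative only: two-proportion z-test of the kind used to read out a
# simple A/B experiment. Counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided
    return z, p_value

# Variant A converts 6.0% of 5,000 users vs 5.0% of 5,000 for control.
z, p = two_proportion_ztest(300, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these counts the difference clears the conventional 0.05 threshold; in practice sample sizes would be chosen up front via a power calculation rather than after the fact.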
Who are you?
Master's or PhD degree required. Degree in statistics, mathematics, computer sciences, natural sciences, econometrics, or related fields.
5+ years of related work experience
Team player with good communication, presentation, and people skills
Proficiency and understanding of sampling methods and experimental design
Proficiency in machine learning and modeling techniques, including analytic methods such as regression, classifiers, clustering, association rules, decision trees, etc.
Proficiency with analysis and programming in Python and SQL, or similar packages (R, Matlab, SAS, etc.)
Knowledge and experience with using GIS tools for spatial data analysis
Experience understanding, specifying, and explaining measurement and analytic systems, and working with other teams of data scientists and engineers to execute projects delivering those solutions
Knowledge of tools for working with big data in Hadoop or Spark for data extraction and preparation for analysis will be a plus
Knowledge and experience in Quality management, End-to-End quality test strategy and Six Sigma will be a plus
What we offer
HERE offers an opportunity to work in a cutting-edge technology environment with challenging problems to solve! You can make a direct impact on the delivery of the company's strategic goals, with the freedom to decide how to perform your work. We will support you in delivering your day-to-day tasks and achieving your personal goals and developing your skills. Personal development is highly encouraged at HERE. You can take different courses and training at our online Learning Campus and join cross-functional team projects within our Talent Platform.
HERE is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, age, gender identity, sexual orientation, marital status, parental status, religion, sex, national origin, disability, veteran status, and other legally protected characteristics.
Who are we?
HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes - from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely.
At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity and foster inclusion to improve people's lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube Channel.
Our team has partnered closely with customers to build a robust quality measurement system for HD maps, ensuring both safety and availability in automated driving. Through multi-step review processes, we generate independent reference data for map features and attributes, guaranteeing high data integrity. This system not only evaluates map content quality against agreed KPIs but also quantifies the reliability and uncertainty of these measurements, providing timely feedback that supports critical use cases.
Senior Data Scientist
Data scientist job in Indianapolis, IN
**What Data Science contributes to Cardinal Health**
The Data & Analytics Function oversees the analytics lifecycle to identify, analyze, and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytics products; the access, design, and implementation of reporting/business intelligence solutions; and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets that integrate multiple systems.
This role will support the Major Rugby business unit, a legacy supplier of multi-source generic pharmaceuticals for over 60 years. Major Rugby provides over 1,000 high-quality Rx, OTC, and vitamin, mineral, and supplement products to the acute, retail, government, and consumer markets. This role will focus on leveraging advanced analytics, machine learning, and optimization techniques to solve complex challenges in demand forecasting, inventory optimization, logistics efficiency, and risk mitigation. Our goal is to uncover insights and drive meaningful deliverables that improve decision-making and business outcomes.
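To give a flavor of the demand-forecasting work described above, a seasonal-naive baseline is often the first benchmark any forecasting model must beat. The sketch below is purely illustrative (plain Python, made-up demand numbers, not Cardinal Health's actual methodology): it forecasts each day of the coming week from the same weekday one season earlier and scores accuracy with mean absolute error.

```python
def seasonal_naive_forecast(history, season=7, horizon=7):
    """Forecast each future step with the value one full season earlier."""
    return [history[-season + (h % season)] for h in range(horizon)]

def mae(actual, predicted):
    """Mean absolute error, a common forecast-accuracy score."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Two weeks of hypothetical daily unit demand with a weekly pattern
demand = [120, 135, 130, 150, 170, 90, 80,
          122, 138, 128, 155, 175, 95, 78]

forecast = seasonal_naive_forecast(demand, season=7, horizon=7)
# The forecast simply repeats the most recently observed week
```

Any candidate model (gradient boosting, ARIMA, neural networks) would then be evaluated against this baseline with the same `mae` score on held-out weeks.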
**Responsibilities:**
+ Leads the design, development, and deployment of advanced analytics and machine learning models to solve complex business problems
+ Collaborates cross-functionally with product, engineering, operations, and business teams to identify opportunities for data-driven decision-making
+ Translates business requirements into analytical solutions and delivers insights that drive strategic initiatives
+ Develops and maintains scalable data science solutions, ensuring reproducibility, performance, and maintainability
+ Evaluates and implements new tools, frameworks, and methodologies to enhance the data science toolkit
+ Drives experimentation and A/B testing strategies to optimize business outcomes
+ Mentors junior data scientists and contributes to the development of a high-performing analytics team
+ Ensures data quality, governance, and compliance with organizational and regulatory standards
+ Stays current with industry trends, emerging technologies, and best practices in data science and AI
+ Contributes to the development of internal knowledge bases, documentation, and training materials
**Qualifications:**
+ 8-12 years of experience in data science, analytics, or a related field (preferred)
+ Advanced degree (Master's or Ph.D.) in Data Science, Computer Science, Engineering, Operations Research, Statistics, or a related discipline preferred
+ Strong programming skills in Python and SQL
+ Proficiency in data visualization tools such as Tableau or Looker, with a proven ability to translate complex data into clear, actionable business insights
+ Deep understanding of machine learning, statistical modeling, predictive analytics, and optimization techniques
+ Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is highly desirable
+ Excellent communication and storytelling skills, with the ability to influence stakeholders and present findings to both technical and non-technical audiences
+ Experience in supervised and unsupervised machine learning, including classification, forecasting, anomaly detection, pattern detection, and text mining, using a variety of techniques such as decision trees, time series analysis, bagging and boosting algorithms, neural networks, deep learning, and natural language processing (NLP)
+ Experience with PyTorch or other deep learning frameworks
+ Strong understanding of RESTful APIs and/or data streaming is a big plus
+ Required: experience with modern version control (GitHub, Bitbucket)
+ Hands-on experience with containerization (Docker, Kubernetes, etc.)
+ Experience with product discovery and design thinking
+ Experience with Gen AI
+ Experience with supply chain analytics is preferred
**Anticipated salary range:** $123,400 - $176,300
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before payday with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 12/02/2025. If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
\#LI-Remote
\#LI-AP4
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Enrolled Actuary
Data scientist job in Indianapolis, IN
Ascensus is the leading independent technology and service platform powering savings plans across America, providing products and expertise that help nearly 16 million people save for a better today and tomorrow.
High-level review to ensure the quality of client deliverables, including research of pension law and regulations. Responsible for ensuring service standards remain aligned with the expectations, guidance, and direction provided by the FuturePlan Lead Actuary and organizational standards. Serve as an expert in the valuation, administration, and compliance of Defined Benefit and Cash Balance plans.
Section 2: Job Functions, Essential Duties and Responsibilities
Review, prepare and/or certify client projects and ensure accurate client deliverables including:
- Actuarial valuations and reports
- Benefit statements
- Plan documents, amendments, employee communications (SPDs, etc.)
- Work performed by other teams as necessary (e.g., Benefit calculations, Form 5500s and PBGC premium forms)
- IRS determination filings and PBGC (Form 500 series) termination submissions
- ASC 715/960
- Perform and review combined plan non-discrimination testing
Answer technical questions posed by staff
Research the Internal Revenue Code and other technical resources to resolve client issues
Provide technical training for staff
Provide back up to other teams as necessary
Responsible for protecting, securing, and proper handling of all confidential data held by Ascensus to ensure against unauthorized access, improper transmission, and/or unapproved disclosure of information that could result in harm to Ascensus or our clients.
Our I-Client service philosophy and our Core Values of People Matter, Quality First, and Integrity Always should be visible in your actions on a day-to-day basis, showing your support of our organizational culture.
Assist with other tasks and projects as assigned
Supervision
N/A
Section 3: Experience, Skills, Knowledge Requirements
Bachelor's degree in actuarial science, math, or a math-related field
Enrolled Actuary designation with 5 years of industry experience
Advanced understanding of:
- Pension law
- Plan documents and IRS favorable determination letters
- Actuarial valuations and software systems
Excellent technical/analytical skills
Ability to research technical issues
Ability to explain technical issues to all levels of associates
Good verbal and written communication skills
Demonstrated expertise with Microsoft Office, particularly Excel applications
Demonstrated expertise with industry recognized pension valuation software
Detail and accuracy oriented
Good time management skills
Team Player
We are proud to be an Equal Opportunity Employer
Be aware of employment fraud. All email communications from Ascensus or its hiring managers originate from ****************** email addresses. We will never ask you for payment or require you to purchase any equipment. If you are suspicious or unsure about the validity of a job posting, we strongly encourage you to apply directly through our website.
Sr Data Scientist
Data scientist job in Waterford, MI
We are Lennar
Lennar is one of the nation's leading homebuilders, dedicated to making an impact and creating an extraordinary experience for our Homeowners, Communities, and Associates by building quality homes and providing exceptional customer service, giving back to the communities in which we work and live, and fostering a culture of opportunity and growth for our Associates throughout their career. Lennar has been recognized as a Fortune 500 company and consistently ranked among the top homebuilders in the United States.
A Career that Empowers You to Build Your Future
As a Senior Data Scientist at Lennar, you will design, build, and deploy advanced models and AI agents that shape how Lennar prices, sells, and personalizes experiences for customers across 40+ divisions. You'll work end to end, from research and experimentation to production deployment and monitoring, delivering measurable business impact in pricing, sales, operations, and customer engagement. You'll collaborate across teams, navigate ambiguity, and drive innovation in a rapidly evolving AI ecosystem.
Your Responsibilities on the Team
Design, build, and deploy pricing recommendation models to optimize sales velocity, revenue, and division-level targets.
Develop sales forecasting and demand prediction models to support pricing and inventory decisions.
Build personalization algorithms for tailored product recommendations and communications across email, text, and digital platforms.
Apply machine vision and feature extraction on home attributes (photos, plans, finishes) to inform premium pricing and personalization strategies.
Design, build, and deploy autonomous AI agents using frameworks like Amazon Bedrock and AgentCore to solve business problems in pricing, sales, operations, and customer interactions.
Engineer and maintain data pipelines and systems supporting all models and agents, ensuring scalability and reliability.
Integrate agents with enterprise systems and protocols (MCP servers, A2A protocol, internal APIs).
Design and run experiments (A/B tests, multi-armed bandits, uplift models) to measure and optimize model and agent performance.
Ensure observability and reliability of deployed agents, including logging, evaluation, monitoring, and drift detection.
Proactively gather feedback from stakeholders and adapt solutions for adoption and measurable impact.
Translate complex data science and statistical concepts into clear recommendations, stories, and visualizations for executives and non-technical audiences.
Favor incremental, explainable solutions that deliver quick wins and scale over time.
Drive experimentation with new tools and approaches, ensuring robustness, governance, and scalability in production deployments.
Share learnings with the broader team to raise the bar on data science and agentic development across the organization.
Manage timelines and expectations transparently with both the data science team and business stakeholders.
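One of the experimentation techniques named in the responsibilities above, the multi-armed bandit, can be sketched in a few lines. This is an illustrative epsilon-greedy simulation with made-up conversion rates, not Lennar's production system:

```python
import random

def epsilon_greedy(true_rates, steps=20_000, epsilon=0.1, seed=42):
    """Simulate an epsilon-greedy bandit over arms with known success rates."""
    rng = random.Random(seed)
    n = len(true_rates)
    counts = [0] * n     # pulls per arm
    values = [0.0] * n   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                        # explore a random arm
        else:
            arm = max(range(n), key=lambda a: values[a])  # exploit current best
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

# Three hypothetical price variants; the middle one truly converts best
counts, values = epsilon_greedy([0.02, 0.10, 0.04])
```

Over enough steps the bandit concentrates traffic on the best-performing variant while still spending a small epsilon share on exploration, which is how it trades learning off against revenue compared with a fixed-split A/B test.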
Your Toolbox
Bachelor's or Master's degree in Statistics, Economics, Math, Computer Science, Data Science, Machine Learning, or related field (or equivalent experience).
5+ years of relevant experience (1+ with PhD, 3+ with MS) as a data scientist, ML engineer, or applied AI developer delivering production-ready models and systems.
Strong proficiency in Python and SQL, with experience owning the full data science stack (data pipelines + models + deployment).
Experience with pricing optimization, revenue management, economic modeling, and price elasticity/demand modeling.
Experience building and deploying large-scale recommender systems (collaborative filtering, embeddings, contextual bandits).
Hands-on experience with AI development frameworks (LangChain, Strands, Amazon Bedrock, AgentCore, or equivalent).
Experience with experimentation frameworks (A/B testing, uplift modeling, multi-armed bandits, causal ML).
Exposure to machine vision techniques (CNNs, transfer learning, embeddings) and NLP techniques (embeddings, transformers, prompt engineering).
Familiarity with real-time or near-real-time systems (Kafka, Kinesis, Flink, or similar) for scalable personalization.
Understanding of AI agent observability (evaluation frameworks like LangFuse, RAGAS, Weights & Biases, custom monitoring).
Experience with system integrations: APIs, A2A protocol, MCP servers, orchestration pipelines.
Comfort working with large-scale, imperfect real-world datasets and making progress despite complexity.
Strong engineering skills: ability to design and maintain production pipelines, microservices, and scalable systems.
Proven ability to navigate ambiguity, rapidly prototype, and move solutions into production.
Collaborative communicator who can align technical solutions with business priorities across diverse stakeholders.
Bonus: experience with RAG pipelines, LLM fine-tuning, RLHF, multi-agent orchestration, feature stores, survival analysis/churn modeling, and attribution modeling.
Bonus: background in real estate analytics, revenue management systems, or retail pricing optimization.
Physical & Office/Site Presence Requirements:
This is primarily a sedentary office position which requires the incumbent to have the ability to operate computer equipment, speak, hear, bend, stoop, reach, lift, and move and carry up to 25 lbs. Finger dexterity is necessary.
This description outlines the basic responsibilities and requirements for the position noted. This is not a comprehensive listing of all job duties of the Associates. Duties, responsibilities and activities may change at any time with or without notice.
Lennar is an equal opportunity employer and complies with all applicable federal, state, and local fair employment practices laws.
#LI-KB2
Life at Lennar
At Lennar, we are committed to fostering a supportive and enriching environment for our Associates, offering a comprehensive array of benefits designed to enhance their well-being and professional growth. Our Associates have access to robust health insurance plans, including Medical, Dental, and Vision coverage, ensuring their health needs are well taken care of. Our 401(k) Retirement Plan, complete with a $1 for $1 Company Match up to 5%, helps secure their financial future, while Paid Parental Leave and an Associate Assistance Plan provide essential support during life's critical moments. To further support our Associates, we provide an Education Assistance Program and up to $30,000 in Adoption Assistance, underscoring our commitment to their diverse needs and aspirations. From the moment of hire, they can enjoy up to three weeks of vacation annually, alongside generous Holiday, Sick Leave, and Personal Day policies. Additionally, we offer a New Hire Referral Bonus Program, significant Home Purchase Discounts, and unique opportunities such as the Everyone's Included Day. At Lennar, we believe in investing in our Associates, empowering them to thrive both personally and professionally. Lennar Associates will have access to these benefits as outlined by Lennar's policies and applicable plan terms. Visit Lennartotalrewards.com to view our suite of benefits.
Join the fun and follow us on social media to see what's happening at our company, and don't forget to connect with us on Lennar: Overview | LinkedIn for the latest job opportunities.
Data Engineer (AI-RPA)
Data scientist job in Grandville, MI
Title: Data Engineer
YOUR ROLE
PADNOS is seeking a Data Engineer on our Data and Software team who thrives at the intersection of data, automation, and applied AI. This role builds intelligent data pipelines and robotic process automations (RPAs) that connect systems, streamline operations, and unlock efficiency across the enterprise.
You'll design and develop pipelines using Python, SQL Server, and modern APIs, integrating services such as OpenAI, Anthropic, and Azure ML to drive automation and accelerate business processes. Your work will extend beyond traditional data engineering, applying AI models and API logic to eliminate manual effort and make data more actionable across teams.
You will report directly to the IT Manager at PADNOS Corporate in Grandville, MI. This is an in-person role based in Grandville, Michigan; you must reside within daily commuting distance. We do not relocate, sponsor visas, or consider remote applicants.
ACCOUNTABILITIES
Design and develop automated data pipelines that integrate AI and machine learning services to process, enrich, and deliver high-value data for analytics and automation use cases.
Build, maintain, and optimize SQL Server ELT workflows and Python-based automation scripts.
Connect to external APIs (OpenAI, Anthropic, Azure ML, and other SaaS systems) to retrieve, transform, and post data as part of end-to-end workflows.
Partner with business stakeholders to identify manual workflows and translate them into AI-enabled automations.
Work with software developers to integrate automation logic directly into enterprise applications.
Implement and monitor data quality, reliability, and observability metrics across pipelines.
Apply performance tuning and best practices for database and process efficiency.
Develop and maintain reusable Python modules and configuration standards for automation scripts.
Support data governance and version control processes to ensure consistency and transparency across environments.
Collaborate closely with analytics, software, and operations teams to prioritize and deliver automation solutions that create measurable business impact.
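The accountabilities above describe a recurring pattern: pull a JSON payload from an API, apply a data-quality transform, and load the result into a relational table. A minimal sketch follows; the payload shape, field names, and `scale_reads` table are hypothetical, and `sqlite3` stands in for SQL Server so the example is self-contained (a real pipeline would use `requests` and `pyodbc`):

```python
import json
import sqlite3  # stand-in for SQL Server so the sketch is self-contained

# Hypothetical API payload; in production this would come from
# requests.get(url, headers=...).json()
raw = json.loads("""
{"records": [
  {"id": 1, "ts": "2024-05-01T10:00:00", "weight_lbs": "1520.5"},
  {"id": 2, "ts": "2024-05-01T10:05:00", "weight_lbs": null}
]}
""")

def transform(records):
    """Clean one API page: drop rows missing a weight, cast types."""
    cleaned = []
    for rec in records:
        if rec["weight_lbs"] is None:
            continue  # data-quality rule: skip incomplete rows
        cleaned.append((rec["id"], rec["ts"], float(rec["weight_lbs"])))
    return cleaned

rows = transform(raw["records"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scale_reads (id INTEGER, ts TEXT, weight_lbs REAL)")
conn.executemany("INSERT INTO scale_reads VALUES (?, ?, ?)", rows)
loaded = conn.execute("SELECT COUNT(*) FROM scale_reads").fetchone()[0]
```

Keeping the transform a pure function of the raw records makes the data-quality rules unit-testable independently of the API and the database, which supports the observability and reliability goals listed above.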
MEASUREMENTS
Reduction in manual hours across teams through implemented automations.
Reliable and reusable data pipelines supporting AI and analytics workloads.
Delivery of production-ready automation projects in collaboration with business units.
Adherence to data quality and reliability standards.
Continuous improvement in data pipeline performance and maintainability.
QUALIFICATIONS/EXPERIENCE
Bachelor's degree or equivalent experience in data engineering, computer science, or software development.
Must have personally owned an automated pipeline end-to-end (design → build → deploy → maintain).
Minimum 3 years hands-on experience building production data pipelines using Python and SQL Server. Contract, academic, bootcamp, or coursework experience does not qualify.
Intermediate to advanced Python development skills, particularly for data and API automation.
Experience working with RESTful APIs and JSON data structures.
Familiarity with AI/ML API services (OpenAI, Anthropic, Azure ML, etc.) and their integration into data workflows.
Experience with modern data stack components such as Fivetran, dbt, or similar tools preferred.
Knowledge of SQL Server performance tuning and query optimization.
Familiarity with Git and CI/CD workflows for data pipeline deployment.
Bonus: Experience deploying or maintaining RPA or AI automation solutions.
PADNOS is an Equal Opportunity Employer and does not discriminate on the basis of race, color, religion, sex, age, national origin, disability, veteran status, sexual orientation or any other classification protected by Federal, State or local law.