Data Scientist
Data scientist job in Phoenix, AZ
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
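Purely for illustration (this is not the employer's actual model), the Python sketch below shows one way sales data could feed an item-level pricing recommendation: fit a demand model, then score a grid of candidate prices and keep the one with the highest expected revenue. All column names and the synthetic data are invented assumptions.

# Minimal sketch of an item-level pricing model: fit a demand model, then
# pick the candidate price that maximizes expected revenue for an item/store.
# Column names and the synthetic data below are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 5_000
sales = pd.DataFrame({
    "item_id": rng.integers(1, 50, n),
    "store_id": rng.integers(1, 10, n),
    "price": rng.uniform(2.0, 10.0, n),
    "promo": rng.integers(0, 2, n),
})
# Synthetic demand: higher price -> fewer units sold, promotions lift demand.
sales["units"] = np.maximum(
    0, 30 - 2.5 * sales["price"] + 5 * sales["promo"] + rng.normal(0, 3, n)
)

features = ["item_id", "store_id", "price", "promo"]
model = GradientBoostingRegressor().fit(sales[features], sales["units"])

# Score a grid of candidate prices for one item/store and pick the revenue maximizer.
grid = pd.DataFrame({
    "item_id": 7, "store_id": 3,
    "price": np.arange(2.0, 10.01, 0.25), "promo": 0,
})
grid["expected_revenue"] = model.predict(grid[features]) * grid["price"]
print(grid.loc[grid["expected_revenue"].idxmax(), ["price", "expected_revenue"]])

In practice the demand features, validation, and business guardrails would come from the employer's own data and pricing rules; the point here is only the predict-then-optimize shape of the task.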
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments
Data Engineer
Data scientist job in Tempe, AZ
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks (see the sketch after this list)
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
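To make the first two bullets concrete, here is a minimal, hypothetical PySpark/Delta Lake sketch of a bronze-to-silver ELT step as it might appear in a Databricks notebook. The table and column names are invented for illustration and are not from this posting.

# Minimal PySpark/Delta Lake sketch of an ELT step in a Databricks notebook.
# Table and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Read the raw (bronze) orders table and build a cleaned, deduplicated silver table.
bronze = spark.table("bronze.orders_raw")

silver = (
    bronze
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("silver.orders")
)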
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Senior Data Engineer
Data scientist job in Phoenix, AZ
Job Title: Sr. Data Engineer
Job Type: Full Time
Compensation: $130,000 - $150,000 D.O.E.
This role is eligible for medical, dental, vision, and life insurance coverage, and PTO
ROLE OVERVIEW
The Senior Data Engineer is responsible for designing, building, and maintaining scalable data platforms that support analytics, reporting, and advanced data-driven initiatives. This is a hands-on engineering role focused on developing reliable, high-performing data solutions while contributing to architectural standards, data quality, and governance practices.
The ideal candidate has strong experience with modern data architectures, data modeling, and pipeline development, and is comfortable collaborating across technical and business teams to deliver trusted, production-ready datasets.
KEY RESPONSIBILITIES
Design and maintain data models across analytical and operational use cases to support reporting and advanced analytics.
Build and manage data pipelines that ingest, transform, and deliver structured and unstructured data at scale.
Contribute to data governance practices, including data quality controls, metadata management, lineage, and stewardship.
Develop and maintain cloud-based data platforms, including data lakes, analytical stores, and curated datasets.
Implement and optimize batch and near-real-time data ingestion and transformation processes.
Support data migration and modernization efforts while ensuring accuracy, performance, and reliability.
Partner with analytics, engineering, and business teams to understand data needs and deliver high-quality solutions.
Enable reporting and visualization use cases by providing clean, well-structured datasets for downstream tools.
Apply security, privacy, and compliance best practices throughout the data lifecycle.
Establish standards for performance tuning, scalability, reliability, and maintainability of data solutions.
Implement automation, testing, and deployment practices to improve data pipeline quality and consistency.
QUALIFICATIONS
Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent professional experience.
5+ years of experience in data engineering or related roles.
Strong hands-on experience with:
Data modeling, schema design, and pipeline development
Cloud-based data platforms and services
Data ingestion, transformation, and optimization techniques
Familiarity with modern data architecture patterns, including lakehouse-style designs and governance frameworks.
Experience supporting analytics, reporting, and data science use cases.
Proficiency in one or more programming languages commonly used in data engineering (e.g., Python, SQL, or similar).
Solid understanding of data structures, performance optimization, and scalable system design.
Experience integrating data from APIs and distributed systems.
Exposure to CI/CD practices and automated testing for data workflows.
Familiarity with streaming or event-driven data processing concepts preferred.
Experience working in Agile or iterative delivery environments.
Strong communication skills with the ability to document solutions and collaborate across teams.
Life Actuary
Data scientist job in Phoenix, AZ
Why USAA?
At USAA, our mission is to empower our members to achieve financial security through highly competitive products, exceptional service and trusted advice. We seek to be the #1 choice for the military community and their families.
Embrace a fulfilling career at USAA, where our core values - honesty, integrity, loyalty and service - define how we treat each other and our members. Be part of what truly makes us special and impactful.
The Opportunity
We are seeking a qualified Life Actuary to join our diverse team. The ideal candidate will possess strong risk management skills, with a particular focus on Interest Rate Risk Management and broader financial risk experience. This role requires an individual who has acquired their ASA designation or FSA designation and has a few years of meaningful experience.
Key responsibilities will include experience in Asset-Liability Management (ALM), encompassing liquidity management, asset allocation, cashflow matching, and duration targeting. You will also be responsible for conducting asset adequacy testing to ensure the sufficiency of assets to meet future obligations. Experience in product pricing, especially for annuity products, is also expected. Furthermore, an understanding of Risk-Based Capital (RBC) frameworks and methodologies is required. Proficiency with actuarial software platforms, with a strong preference for AXIS, is highly advantageous.
We offer a flexible work environment that requires an individual to be in the office 4 days per week. This position can be based in one of the following locations: San Antonio, TX, Plano, TX, Phoenix, AZ, Colorado Springs, CO, Charlotte, NC, or Tampa, FL.
Relocation assistance is not available for this position.
What you'll do:
Performs complex work assignments using actuarial modeling software driven models for pricing, valuation, and/or risk management.
Reviews laws and regulations to ensure all processes are compliant; and provides recommendations for improvements and monitors industry communications regarding potential changes to existing laws and regulations.
Runs models, generates reports, and presents recommendations and detailed analysis of all model runs to Actuarial Leadership.
May make recommendations for model adjustments and improvements, when appropriate.
Shares knowledge with team members, serves as a resource to the team on raised issues, and navigates obstacles to deliver work product.
Leads or participates as a key resource on moderately complex projects through concept, planning, execution, and implementation phases with minimal guidance, involving cross functional actuarial areas.
Develops exhibits and reports that help explain proposals/findings and provides information in an understandable and usable format for partners.
Identifies and provides recommended solutions to business problems independently, often presenting recommendation to leadership.
Maintains accurate price level, price structure, data availability and other requirements to achieve profitability and competitive goals.
Identifies critical assumptions to monitor and suggest timely remedies to correct or prevent unfavorable trends.
Tests impact of assumptions by identifying sources of gain and loss, the appropriate premiums, interest margins, reserves, and cash values for profitability and viability of new and existing products.
Advises management on issues and serves as a primary resource for their individual team members on raised issues.
Ensures risks associated with business activities are identified, measured, monitored, and controlled in accordance with risk and compliance policies and procedures.
What you have:
Bachelor's degree; OR 4 years of related experience (in addition to the minimum years of experience required) may be substituted in lieu of degree.
4 years relevant actuarial or analytical experience and attainment of Fellow within the Society of Actuaries; OR 8 years relevant actuarial experience and attainment of Associate within the Society of Actuaries.
Experience performing complex work assignments using actuarial modeling software driven models for pricing, valuation, and/or risk management.
Experience presenting complex actuarial analysis and recommendations to technical and non-technical audiences.
What sets you apart:
Asset-Liability Management (ALM): Experience in ALM, including expertise in liquidity management, asset allocation, cashflow matching, and duration targeting.
Asset Adequacy Testing: Experience conducting asset adequacy testing to ensure the sufficiency of assets to meet future obligations.
Product Pricing: Experience in pricing financial products, with a particular emphasis on annuity products.
Risk-Based Capital (RBC): Experience with risk-based capital frameworks and methodologies.
Actuarial Software Proficiency: Familiarity with actuarial software platforms. Experience with AXIS is considered a significant advantage.
Actuarial Designations: Attainment of Society of Actuaries Associateship (ASA) or Fellowship (FSA).
Compensation range: The salary range for this position is: $127,310 - $243,340.
USAA does not provide visa sponsorship for this role. Please do not apply for this role if at any time (now or in the future) you will need immigration support (i.e., H-1B, TN, STEM OPT Training Plans, etc.).
Compensation: USAA has an effective process for assessing market data and establishing ranges to ensure we remain competitive. You are paid within the salary range based on your experience and market data of the position. The actual salary for this role may vary by location.
Employees may be eligible for pay incentives based on overall corporate and individual performance and at the discretion of the USAA Board of Directors.
The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job.
Benefits: At USAA our employees enjoy best-in-class benefits to support their physical, financial, and emotional wellness. These benefits include comprehensive medical, dental and vision plans, 401(k), pension, life insurance, parental benefits, adoption assistance, paid time off program with paid holidays plus 16 paid volunteer hours, and various wellness programs. Additionally, our career path planning and continuing education assists employees with their professional goals.
For more details on our outstanding benefits, visit our benefits page on USAAjobs.com.
Applications for this position are accepted on an ongoing basis, and this posting will remain open until the position is filled. Interested candidates are therefore encouraged to apply the same day they view this posting.
USAA is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Data Governance Engineer
Data scientist job in Phoenix, AZ
Role: Data Governance Engineer
Experience Required - 6+ Years
Must Have Technical/Functional Skills
• Understanding of Data Management and Data Governance concepts (metadata, lineage, data quality, etc.) and prior experience.
• 2 - 5 years of Data Quality Management experience.
• Intermediate competency in SQL & Python or related programming language.
• Strong familiarity with data architecture and/or data modeling concepts
• 2 - 5 years of experience with Agile or SAFe project methodologies
Roles & Responsibilities
• Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.
• Identify data quality issues, perform root-cause-analysis of data quality issues and drive remediation of audit and regulatory feedback.
• Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business.
• Responsible for holistic platform data quality monitoring, including but not limited to critical data elements.
• Collaborate with and influence product managers to ensure all new use cases are managed according to policies.
• Influence and contribute to strategic improvements to data assessment processes and analytical tools.
• Responsible for monitoring data quality issues, communicating issues, and driving resolution.
• Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.
• Subject matter expertise on multiple platforms.
• Responsible to partner with the Data Steward Manager in developing and managing the data compliance roadmap.
Generic Managerial Skills, If any
• Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions.
• Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals, accountable to bring in ideas, information, suggestions, and expertise from others outside & inside the immediate team.
• Communication: Influences and holds others accountable and has ability to convince others. Identifies the specific data governance requirements and is able to communicate clearly and in a compelling way.
Interested candidates, please share your updated resume at *******************
Salary Range - $100,000 to $120,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Engineer (GIS)
Data scientist job in Scottsdale, AZ
About the Role
We're partnering with a large, operations-focused organization to hire a Data Scientist (GIS) to support analytics initiatives within their operations function. This role applies geospatial data and advanced analytics to help improve operational efficiency, service reliability, and planning decisions.
The work is highly analytical and engineering-focused, with models built directly in Snowflake and used as inputs into downstream optimization and planning systems.
What You'll Work On
Geospatial Modeling & Time Estimation
Develop data-driven models to estimate operational timing across different service and facility interactions
Leverage GPS data and geofencing techniques to understand behavior across locations
Incorporate contextual variables such as:
Geography and location characteristics
Customer and service attributes
Site complexity and external conditions (e.g., weather, time-based patterns)
Produce reliable, explainable time estimates that support planning and decision-making
Facility & Location Analytics
Model turnaround and processing time across different types of locations
Analyze performance variability based on operational and environmental factors
Apply polygon- and radius-based geofencing to capture location-specific behavior
Quantify how conditions impact operational flow and timing outcomes
Technical Environment
Primary development and modeling in Snowflake
Build and engineer transformations and analytical processes directly in Snowflake
Modeling approaches may include:
Percentile-based time estimates (a sketch follows this section)
Aggregations such as averages and medians by service and location attributes
Data sources include:
Latitude/longitude data
High-frequency GPS signals
Location and facility reference data
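As a hedged illustration of the percentile-based approach listed above (not the client's actual implementation), the pandas sketch below derives dwell-time estimates from geofence enter/exit events; in Snowflake the same aggregation could be expressed with PERCENTILE_CONT(...) WITHIN GROUP (ORDER BY ...) grouped by site attributes. All names and data are invented.

# Minimal sketch of percentile-based dwell-time estimates from geofence
# enter/exit events. Site types, timestamps, and distributions are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2_000
events = pd.DataFrame({
    "site_type": rng.choice(["dock", "yard", "store"], n),
    "enter_ts": pd.Timestamp("2024-01-01")
    + pd.to_timedelta(rng.uniform(0, 720, n), unit="h"),
})
# Dwell time in minutes, skewed like real service times.
events["exit_ts"] = events["enter_ts"] + pd.to_timedelta(rng.gamma(2.0, 20.0, n), unit="m")
events["dwell_min"] = (events["exit_ts"] - events["enter_ts"]).dt.total_seconds() / 60

# Percentile-based estimates (median and P80) per site type feed planning systems.
estimates = events.groupby("site_type")["dwell_min"].quantile([0.5, 0.8]).unstack()
estimates.columns = ["p50_min", "p80_min"]
print(estimates.round(1))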
What We're Looking For
Strong hands-on experience with Snowflake
Advanced SQL skills
Python for analytics and data engineering
Solid understanding of core GIS concepts, including:
Spatial joins
Polygons
Geofencing
Experience with traditional GIS tools (e.g., ArcGIS) is a plus, but this is not a cartography or visualization-focused role
Background in geospatial data engineering and modeling is key
Interview Process
Two one-hour video interviews
Data Engineer
Data scientist job in Phoenix, AZ
Hi,
We have a job opportunity for a Data Analyst / Data Engineer role.
Data Analyst / Data Engineer
Expectations: Our project is data analysis heavy, and we are looking for someone who can grasp business functionality and translate that into working technical solutions.
Job location: Phoenix, Arizona.
Type - Hybrid model (3 days a week in office)
Job Description: Data Analyst / Data Engineer (6+ Years relevant Experience with required skill set)
Summary:
We are seeking a Data Analyst / Data Engineer with a minimum of 6 years of experience in data engineering, data analysis, and data design. The ideal candidate will have strong hands-on expertise in Python and relational databases such as Postgres, SQL Server, or MySQL, along with a good understanding of data modeling theory and normalization forms.
Required Skills:
6+ years of experience in data engineering, data analysis, and data design
Describe your approach to data analysis in your previous/current role and the methods or techniques you used to extract insights from large datasets.
Good proficiency in Python
Do you have any formal training or education in data modeling? If so, please provide details about the course, program, or certification you completed, including when you received it.
Strong experience with relational databases: Postgres, SQL Server, or MySQL.
What are the essential factors that contribute to a project's success, and how do you plan to leverage your skills and expertise to ensure our project meets its objectives?
Expertise in writing complex SQL queries and optimizing database performance
Solid understanding of data modeling theory and normalization forms.
Good communicator with the ability to articulate business problems for technical solutions.
Key Responsibilities:
Analyze complex datasets to derive actionable insights and support business decisions.
Model data solutions for high performance and reliability.
Work extensively with Python for data processing and automation.
Develop and optimize SQL queries for Postgres, SQL Server, or MySQL databases.
Ensure data integrity, security, and compliance across all data solutions.
Collaborate with cross-functional teams to understand data requirements and deliver solutions.
Communicate effectively with stakeholders and articulate business problems to drive technical solutions.
Secondary Skills:
Experience deploying applications in Kubernetes.
API development using FastAPI or Django.
Familiarity with containerization (Docker) and CI/CD tools.
Regards,
Suhas Gharge
ORACLE CLOUD DATA ENGINEER
Data scientist job in Phoenix, AZ
Hiring: Oracle Cloud Data Engineer / Technology Lead
We're looking for a hands-on Oracle Cloud Data Engineer (Technology Lead) to drive OCI-based data engineering and Power BI analytics initiatives. This role combines technical leadership with active development in a high-impact data program.
Location: Phoenix, AZ (Hybrid)
Duration: 6+ Months (Contract)
Work Authorization: USC & Green Card holders ONLY (Strict Requirement)
Job Summary
This role focuses on building scalable data pipelines on Oracle Cloud Infrastructure (OCI) while leading Power BI dashboard and reporting development. You'll apply Medallion Architecture, enforce data governance, and collaborate closely with business stakeholders. Utility industry experience is a strong plus.
Must-Have (Non-Negotiable) Skills
8-10 years of experience in Data Engineering & Business Intelligence
3+ years of hands-on OCI experience
Strong expertise in OCI Data Services, including:
OCI Data Integration, OCI Data Flow, OCI Streaming
Autonomous Data Warehouse, Oracle Exadata, OCI Object Storage
Hands-on experience with Medallion Architecture (Bronze, Silver, Gold layers)
Power BI expertise: dashboards, reports, DAX, Power Query, data modeling, RLS
Strong coding skills in SQL, PL/SQL, Python
Experience with Terraform, Ansible, and CI/CD pipelines
Bachelor's or Master's degree in a related field
Power BI Certification - Required
Hands-on development is mandatory
Key Responsibilities
Design and implement secure, scalable OCI data pipelines
Lead Power BI dashboard and reporting development
Build inbound/outbound integration patterns (APIs, files, streaming)
Implement Audit, Balance, and Control (ABC) frameworks
Ensure data quality, governance, lineage, and monitoring
Mentor engineers and BI developers
Drive agile delivery and stakeholder collaboration
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* or call us at *****************
Data Engineer
Data scientist job in Phoenix, AZ
Hybrid - 2-3 days on site
Phoenix, AZ
We're looking for a Data Engineer to help build the cloud-native data pipelines that power critical insights across our organization. You'll work with modern technologies, solve real-world data challenges, and support analytics and reporting systems that drive smarter decision-making in the transportation space.
What You'll Do
Build and maintain data pipelines using Databricks, Azure Data Factory, and Microsoft Fabric
Implement incremental and real-time ingestion using medallion architecture (see the sketch after this list)
Develop and optimize complex SQL and Python transformations
Support legacy platforms (SSIS, SQL Server) while contributing to modernization efforts
Troubleshoot data quality and integration issues
Participate in proof-of-concepts and recommend technical solutions
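For illustration only, here is a minimal sketch of incremental bronze-to-silver ingestion with Spark Structured Streaming on Databricks, one common way to apply the medallion pattern. The table names and checkpoint path are placeholders, not this employer's actual pipeline.

# Hedged sketch of incremental bronze -> silver ingestion using Structured
# Streaming over Delta tables. Table names and the checkpoint path are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Continuously read new rows appended to the bronze table.
bronze_stream = spark.readStream.table("bronze.trip_events")

silver_stream = (
    bronze_stream
    .filter(F.col("event_id").isNotNull())
    .withColumn("ingested_at", F.current_timestamp())
)

# Append cleaned rows to the silver table; the checkpoint tracks progress.
(
    silver_stream.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/silver_trip_events")  # placeholder path
    .outputMode("append")
    .toTable("silver.trip_events")
)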
What You Bring
5+ years designing and building data solutions
Strong SQL and Python skills
Experience with ETL pipelines and Data Lake architecture
Ability to collaborate and adapt in a fast-moving environment
Preferred: Azure services, cloud ETL tools, Power BI/Tableau, event-driven systems, NoSQL databases
Bonus: Experience with Data Science or Machine Learning
Benefits
Medical, dental, and vision from day one · PTO & holidays · 401(k) with match · Lifestyle account · Tuition reimbursement · Voluntary benefits · Employee Assistance Program · Well-being & culture programs · Professional development support
Senior Data Engineer (PySpark / Python) (Only USC or GC on W2)
Data scientist job in Phoenix, AZ
Job Title: Senior Data Engineer (PySpark / Python)
Employment Type: Contract
Must Have Skills
PySpark and Python development; data engineering
Hands-on knowledge of PySpark, Hadoop, and Python
GitHub and backend API integration knowledge (JSON, REST)
Certifications Needed: No (GCP certification is good to have)
Top 3 responsibilities you would expect the subcontractor to shoulder and execute:
Individual contributor
Strong development experience, including leading a development module
Work directly with the client
Data Governance Engineer
Data scientist job in Phoenix, AZ
Job Title : Data Governance Engineer
Phoenix, AZ - Complete Onsite
Full-Time Permanent
Experience Required - 6+ Years
Must Have Technical/Functional Skills
Understanding of Data Management and Data Governance concepts (metadata, lineage, data quality, etc.) and prior experience.
2 - 5 years of Data Quality Management experience.
Intermediate competency in SQL & Python or related programming language.
Strong familiarity with data architecture and/or data modeling concepts
2 - 5 years of experience with Agile or SAFe project methodologies
Roles & Responsibilities
Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.
Identify data quality issues, perform root-cause-analysis of data quality issues and drive remediation of audit and regulatory feedback.
Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business.
Responsible for holistic platform data quality monitoring, including but not limited to critical data elements.
Collaborate with and influence product managers to ensure all new use cases are managed according to policies.
Influence and contribute to strategic improvements to data assessment processes and analytical tools.
Responsible for monitoring data quality issues, communicating issues, and driving resolution.
Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.
Subject matter expertise on multiple platforms.
Responsible to partner with the Data Steward Manager in developing and managing the data compliance roadmap.
Generic Managerial Skills, If any
Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions.
Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals, accountable to bring in ideas, information, suggestions, and expertise from others outside & inside the immediate team.
Communication: Influences and holds others accountable and has ability to convince others. Identifies the specific data governance requirements and is able to communicate clearly and in a compelling way.
AI Data Engineer
Data scientist job in Phoenix, AZ
Echelix is a leading AI consulting company helping businesses design, build, and scale intelligent systems. We partner with organizations to make artificial intelligence practical, powerful, and easy to adopt. Our team blends deep technical skill with real-world business sense to deliver AI that drives measurable results.
The Role
We're looking for a Senior Data Engineer to architect, optimize, and manage database systems that power AI-driven solutions and enterprise applications. You'll lead the design of scalable, secure, and high-performance data infrastructure across cloud platforms, ensuring our clients' data foundations are built for the future.
This role is ideal for database professionals who have evolved beyond traditional DBA work into cloud-native architectures, API-driven data access layers, and modern DevOps practices. You'll work with cutting-edge technologies like GraphQL, Hasura, and managed cloud databases while mentoring engineers on data architecture best practices.
What You'll Do
Design, tune, and manage PostgreSQL, SQL Server, and cloud-managed databases (AWS RDS/Aurora, Azure SQL Database/Cosmos DB)
Architect and implement GraphQL APIs using Hasura or equivalent technologies for real-time data access (see the sketch after this list)
Lead cloud database migrations and deployments across AWS and Azure environments
Automate database CI/CD pipelines using tools like GitHub Actions, Azure DevOps, or AWS Code Pipeline
Develop and maintain data access layers and APIs that integrate with AI and application workloads
Monitor, secure, and optimize database performance using cloud-native tools (AWS CloudWatch, Azure Monitor, Datadog)
Implement database security best practices including encryption, access controls, and compliance requirements
Mentor engineers on database design, data modeling, and architecture best practices
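As a rough sketch of the GraphQL/Hasura pattern mentioned above (not a real deployment), the Python snippet below posts a query to a Hasura endpoint; Hasura auto-generates a GraphQL API over Postgres tables. The URL, admin secret, and table/column names are all placeholders.

# Hedged sketch of querying a Hasura GraphQL endpoint from Python.
# Endpoint, secret, and schema below are hypothetical placeholders.
import requests

HASURA_URL = "https://example.hasura.app/v1/graphql"   # placeholder
HEADERS = {"x-hasura-admin-secret": "<admin-secret>"}   # placeholder

query = """
query RecentOrders($since: timestamptz!) {
  orders(where: {created_at: {_gte: $since}}, order_by: {created_at: desc}, limit: 10) {
    id
    total_amount
    created_at
  }
}
"""

resp = requests.post(
    HASURA_URL,
    json={"query": query, "variables": {"since": "2024-01-01T00:00:00Z"}},
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["orders"])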
Requirements
5+ years of experience designing and managing production database systems
Deep expertise in PostgreSQL and SQL Server, including performance tuning and query optimization
Hands-on experience with cloud database services (AWS RDS, Aurora, Azure SQL Database, Azure Cosmos DB)
Experience with GraphQL and API development, preferably with Hasura or similar platforms
Strong background in database CI/CD automation and Infrastructure as Code (Terraform, CloudFormation, Bicep)
Proficiency in scripting languages (Python, Bash) for automation and tooling
Solid understanding of data modeling, schema design, and database normalization
Strong communication and mentoring skills
US citizen and must reside in the United States
Nice to Have
Experience with NoSQL databases (MongoDB, DynamoDB, Redis)
Knowledge of data streaming platforms (Kafka, AWS Kinesis, Azure Event Hubs)
Experience with data warehousing solutions (Snowflake, Redshift, Azure Synapse)
Background in AI/ML data pipelines and feature stores
Relevant certifications (AWS Database Specialty, Azure Database Administrator, PostgreSQL Professional)
Why Join Echelix
You'll join a fast-moving team that's shaping how AI connects people and data. We value curiosity, precision, and practical innovation. You'll work on real projects with real impact, not just proofs of concept.
Tools Developer, Data Scientist
Data scientist job in Arizona
Sustainable Talent is partnering with Nvidia, a global leader that has been transforming computer graphics, PC gaming, and accelerated computing for over 25 years. We are looking for a Tools Developer, Data Scientist to support our client's Systems Product team. This is a W-2 full-time contract based onsite in Santa Clara, CA, Austin, TX, or Phoenix, AZ, with hybrid work options. We offer competitive pay of $65/hr - $75/hr, based on factors like experience, education, and location, and provide full benefits, PTO, and an amazing company culture!
We are seeking a skilled Tools Developer/Data Scientist/Data Analyst to develop innovative solutions that enable teams within the Systems Product Team (SPT) organization to successfully execute product bring-up activities. This role also involves creating visually compelling, user-friendly dashboards that support strategic executive decision-making. Your work will focus on a wide range of data sources, including engineering logs, factory shop floor metrics, inventory data, material inputs, and production planning. You will contribute to high-impact projects that are critical to new product introductions, as well as key supply chain and manufacturing decisions.
What You'll Be Doing:
Build tools to be used by various engineering and program management teams.
Work with raw log data (e.g., from Splunk, SAP, and other databases)
Analyze engineering data, product bring-up execution data, and other Systems Product Team data:
Build plan
NPI Supply/Demand/Allocation
Factory Shop Floor and test data
Lab asset data
Collaborate with other tools team members to create and deploy solutions (e.g., automated workflows, insightful dashboards)
Big Data Analytics & Data Visualization Specialists
Core Skills & Qualifications:
Strong programming skills in data manipulation, analysis, and automation (Python, or similar)
Experience with data processing libraries (pandas, NumPy, etc.)
Experience with data visualization libraries (Matplotlib, Plotly, etc.)
Experience in big data analytics and data visualization.
Proficiency with tools such as Splunk, Tableau, or Power BI.
Strong SQL skills and experience with data modeling.
Background in data processing pipelines (Spark, Kafka, or similar a plus).
Preferred Experience:
Background in statistical analysis (regressions, hypothesis testing, probability distributions, etc.)
Experience in data preparation (ETL processes, data transformation, etc.)
Experience in cybersecurity-related analytics or operational data reporting.
Building and automating dashboards for executive and technical audiences
Sustainable Talent is an M/F+, disabled, and veteran equal employment opportunity and affirmative action employer.
Data Scientist
Data scientist job in Phoenix, AZ
Data Scientist Position (U.S. Applicants Only)
US-based, remote
Our team is committed to using data to reach findings that inspire fresh ideas. For us, data is far more than numbers: it is the key to revealing hidden opportunities, overcoming tough obstacles, and charting the future of our industry. We want a data-driven data scientist who can explore datasets, identify new trends, and apply their knowledge to make a real difference. If you enjoy the chance to turn raw data into stories, join our team!
What you'll do:
Examine challenging data like a detective, looking for trends, connections, and patterns that can support corporate strategy, and apply critical thinking to investigate closely and develop sound answers.
Build and refine machine learning models that forecast consumer trends and behavior, contributing directly to corporate decision-making.
Collaborate creatively: work with marketing, engineering, and product teams to understand business goals and craft data-driven solutions to real problems.
Build automated data pipelines: construct efficient, adaptable pipelines that move data automatically, improving its availability and value.
Share knowledge: present complex findings clearly and compellingly to technical teams and non-technical stakeholders, so that every piece of data is easy to understand.
Stay ahead of the curve by continually learning new approaches, tools, and algorithms, keeping us at the forefront of data practice and ensuring we remain competitive.
Find and resolve discrepancies to ensure that the data used for analysis is accurate, high quality, and uncompromised.
What we're looking for:
Only U.S. applicants are eligible for this position.
A minimum of two to three years of relevant professional experience in data science, machine learning, or a similar discipline; statistical techniques, data analysis, and model development come naturally to you.
Technical mastery: you are fluent in programming languages such as Python, R, and SQL, and comfortable with machine learning libraries including PyTorch, TensorFlow, or scikit-learn.
Visualization: you excel at presenting complex data in a visually appealing, understandable way using Tableau, Power BI, and Matplotlib.
Analytical mindset: you spot potential insights by closely reviewing raw data, and you constantly ask "why" about the figures to sharpen your critical thinking.
Creativity: you turn facts into strategic advantage and generate original solutions to problems.
Collaboration: you enjoy working with others, whether developing new ideas or presenting your results to non-technical stakeholders.
Attention to detail: your care ensures the data you handle is of the highest quality and accuracy.
Why you'll enjoy being part of our team:
Remote work within the US gives you the flexibility to balance your job and personal life.
Join a team that celebrates individuality and creativity, welcomes fresh ideas, and offers opportunities to take on leadership responsibilities.
Professional development: we provide many chances to pick up new skills, broaden your knowledge in a constantly evolving field, and progress in your career.
We offer competitive pay, a full benefits package, and wellness incentives to support your health and happiness.
You will directly influence the course of our company; your efforts will help shape its future.
How to apply: ready to harness the power of data and make a difference? We'd love to hear from you. Include a brief cover letter outlining your interest in data science and how your background fits the position.
Note: Only US applicants are eligible for this position.
Data Scientist
Data scientist job in Phoenix, AZ
Summary/Objective: We are seeking a highly skilled Data Scientist to focus on building and deploying predictive models that identify customer churn risk and upsell opportunities. This role will play a key part in driving revenue growth and retention strategies by leveraging advanced machine learning, statistical modeling, and large-scale data capabilities within Databricks.
Why Join Us?
Be at the forefront of using Databricks AI/ML capabilities to solve real-world business challenges.
Directly influence customer retention and revenue growth through applied data science.
Work in a collaborative environment where experimentation and innovation are encouraged.
Core Job Duties:
Model Development
* Design, develop, and deploy predictive models for customer churn and upsell propensity using Databricks ML capabilities.
* Evaluate and compare algorithms (e.g., logistic regression, gradient boosting, random forest, deep learning) to optimize predictive performance.
* Incorporate feature engineering pipelines that leverage customer behavior, transaction history, and product usage data.
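Purely as an illustration of the model-development duties above (not isolved's actual model), here is a minimal churn-propensity sketch in Python using scikit-learn with MLflow tracking, as available in the Databricks ML Runtime. The features and synthetic data are invented assumptions.

# Minimal churn-propensity sketch: train a gradient-boosted classifier and
# log the metric and model with MLflow. All features/data are synthetic.
import mlflow
import mlflow.sklearn
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 10_000
X = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, n),
    "monthly_usage": rng.gamma(2.0, 50.0, n),
    "support_tickets_90d": rng.poisson(1.0, n),
})
# Synthetic churn label: short tenure and many support tickets raise churn risk.
logit = -1.5 - 0.03 * X["tenure_months"] + 0.4 * X["support_tickets_90d"]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

with mlflow.start_run(run_name="churn_gbm_sketch"):
    model = GradientBoostingClassifier().fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")
    print(f"test AUC: {auc:.3f}")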
Data Engineering & Pipeline Ownership
* Build and maintain scalable data pipelines in Databricks (using PySpark, Delta Lake, and MLflow) to enable reliable model training and scoring.
* Collaborate with data engineers to ensure proper data ingestion, transformation, and governance.
Experimentation & Validation
* Conduct A/B tests and backtesting to validate model effectiveness.
* Apply techniques for model monitoring, drift detection, and retraining in production.
Business Impact & Storytelling
* Translate complex analytical outputs into clear recommendations for business stakeholders.
* Partner with Product and Customer Success teams to design strategies that reduce churn, increase upsell and improve customer retention KPIs.
Minimum Qualifications:
* Master's or PhD in Data Science, Statistics, Computer Science, or related field (or equivalent industry experience).
* 3+ years of experience building predictive models in a production environment.
* Strong proficiency in Python (pandas, scikit-learn, PySpark) and SQL.
* Demonstrated expertise using Databricks for:
* Data manipulation and distributed processing with PySpark.
* Building and managing models with MLflow.
* Leveraging Delta Lake for efficient data storage and retrieval.
* Implementing scalable ML pipelines within Databricks' ML Runtime.
* Experience with feature engineering for behavioral and transactional datasets.
* Strong understanding of customer lifecycle analytics, including churn modeling and upsell/recommendation systems.
* Ability to communicate results and influence decision-making across technical and non-technical teams.
Preferred Qualifications:
* Experience with cloud platforms (Azure Databricks, AWS, or GCP).
* Familiarity with Unity Catalog for data governance and security.
* Knowledge of deep learning frameworks (TensorFlow, PyTorch) within Databricks.
* Exposure to MLOps best practices (CI/CD for ML, model versioning, monitoring).
* Background in SaaS, subscription-based businesses, or customer analytics.
Physical Demands
Prolonged periods of sitting at a desk and working on a computer. Must be able to lift up to 15 pounds.
Travel Required: Limited
Work Authorization: Employees must be legally authorized to work in the United States.
FLSA Classification: Exempt
Location: Any
Effective Date: 9/16/2025
About isolved
isolved is a provider of human capital management (HCM) solutions that help organizations recruit, retain and elevate their workforce. More than 195,000 employers and 8 million employees rely on isolved's software and services to streamline human resource (HR) operations and deliver employee experiences that matter. isolved People Cloud is a unified yet modular HCM platform with built-in artificial intelligence (AI) and analytics that connects HR, payroll, benefits, and workforce and talent management into a single solution that drives better business outcomes. Through the Sidekick Advantage, isolved also provides expert guidance, embedded services and an engaged community that empowers People Heroes to grow their companies and careers. Learn more at *******************
isolved is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. isolved is a progressive and open-minded meritocracy. If you are smart and good at what you do, come as you are. Visit ************************** for more information regarding our incredible culture and focus on our employee experience. Visit ************************* for a comprehensive list of our employee total rewards offerings.
Data Scientist
Data scientist job in Mesa, AZ
Title: Data Scientist
Department: Baseball Operations
Reporting to: Assistant GM, Baseball Development & Technology
Job Classification: Full-time, Exempt
Full-time Location (City, State): Mesa, AZ
About the A's:
The A's are a baseball team founded in 1901. They have a rich history, having won nine World Series championships and 15 American League pennants. The A's are known for pioneering the "Moneyball" approach to team-building, which focuses on using statistical analysis to identify undervalued players.
In addition to their success on the field, the A's also have a positive and dynamic work culture. They have twice been recognized by Front Office Sports as one of the Best Employers in Sports.
The A's are defined by their core pillars of being Dynamic, Innovative, and Inclusive. Working for the A's offers the opportunity to be part of an innovative organization that values its employees and strives to create a positive work environment.
Description:
The A's are hiring for a full-time Data Scientist for the Baseball Operations Department. The Data Scientist will construct statistical models that inform decision-making in all facets of Baseball Operations. This position requires strong experience in statistics, data analytics, and computer science. This position is primarily based out of Mesa, AZ.
Responsibilities:
Design, build, and maintain predictive models to support player evaluation, acquisition, development, and performance optimization.
Collaborate with Baseball Analytics staff to integrate analytical findings into decision-making tools and ensure seamless implementation.
Analyze and synthesize large-scale data, creating actionable insights for stakeholders within Baseball Operations.
Research and implement advanced statistical methods, including time series modeling, spatial statistics, boosting models, and Bayesian regression, to stay on the cutting edge of sabermetric research.
Develop and maintain robust data modeling pipelines and workflows in cloud environments to ensure scalability and reliability of analytical outputs.
Produce clear, concise written reports and compelling data visualizations to communicate insights effectively across diverse audiences.
Stay current with advancements in data science, statistical methodologies, and player evaluation techniques to identify and propose new opportunities for organizational improvement.
Mentor team members within the Baseball Operations department, fostering a collaborative and innovative research environment.
Requirements:
PhD in Mathematics, Statistics, Computer Science, or a related quantitative field.
Proficiency in SQL, R, Python, or other similar programming languages.
Strong understanding of modern statistical and machine learning methods, including experience with predictive modeling techniques.
Proven experience productionizing machine learning models in cloud environments.
Ability to communicate complex analytical concepts effectively to both technical and non-technical audiences.
Demonstrated ability to independently design, implement, and present rigorous quantitative research.
Passion for sabermetric research and baseball analytics with a deep understanding of player evaluation methodologies.
Strong interpersonal and mentoring skills with a demonstrated ability to work collaboratively in a team-oriented environment.
Preferred Qualifications:
Expertise in time series modeling, spatial statistics, boosting models, and Bayesian regression.
Previous experience in sports analytics, ideally baseball, is a plus.
Familiarity with integrating biomechanical data into analytical frameworks.
The A's Diversity Statement:
Diversity, Equity, and Inclusion are in our organizational DNA. Our commitment to these values is unwavering - on and off the field. Together, we continue to build an inclusive, innovative, and dynamic culture that encourages, supports, and celebrates belonging and amplifies diverse voices. Combining a collaborative and innovative work environment with talented and diverse team members, we've created a workforce in which every team member has the tools to reach their full potential.
Equal Opportunity Consideration:
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, age, disability, gender identity, marital or veteran status, or any other protected class.
Data Scientist
Data scientist job in Arizona City, AZ
Java Full Stack Developer (Job Code: J2EE)
3 to 10 years of experience developing web based applications in Java/J2EE technologies
Knowledge of RDBMS and NoSQL data stores and polyglot persistence (Oracle, MongoDB etc.)
Knowledge of event sourcing and distributed message systems (Kafka, RabbitMQ)
AngularJS, React, Backbone or other client-side MVC experience
Experience with JavaScript build tools and dependency management (npm, bower, grunt, gulp)
Experience creating responsive designs (Bootstrap, mobile, etc.)
Experience with unit and automation testing (Jasmine, Protractor, JUnit)
Expert knowledge of build tools and dependency management (gradle, maven)
Knowledge of Domain-Driven Design concepts and microservices
Participate in software design and development using modern Java and web technology stack.
Should be proficient in Spring Boot and Angular
Sound understanding of Microservices architecture
Good understanding of event driven architecture
Experience building Web Services (REST/SOAP)
Experience in writing JUnit tests
Good to have experience in TDD
Expert in developing highly responsive web applications using Angular 4 or above
Good Knowledge of HTML/HTML5/CSS, JavaScript/AJAX, and XML
Good understanding of SQL, relational databases, and NoSQL databases
Familiarity with design patterns and should be able to design small to medium complexity modules independently
Experience with Agile or similar development methodologies
Experience with a versioning system (e.g., CVS/SVN/Git)
Experience with agile development methodologies including TDD, Scrum and Kanban
Strong verbal communication and cross-group collaboration skills; analytical, structured, and strategic thinking.
Great interpersonal skills, cultural awareness, belief in teamwork
Collaborating with product owners, stakeholders and potentially globally distributed teams
Work cross-functional in an Agile environment
Excellent problem-solving, organizational and analytical skills
Qualification : BE / B.Tech / MCA / ME / M.Tech
****************************
Auto-ApplyData Scientist Lead, Vice President
Data scientist job in Tempe, AZ
Job ID: 210686904
Job Schedule: Full time
Job Shift: Day
Join a powerhouse team at the forefront of Home Lending Data & Analytics, where we partner with Product, Marketing, and Sales to solve the most urgent and complex business challenges. Our team thrives in a fast-paced, matrixed environment, driving transformative impact through bold analytics and innovative data science solutions. We are relentless in our pursuit of actionable insights, seamlessly engaging with stakeholders and redefining what's possible through strategic collaboration and visionary problem solving. If you're ready to shape the future of home lending with breakthrough ideas and data-driven leadership, this is the team for you.
We are seeking a senior Data Scientist Lead to join our Home Lending Data & Analytics team, supporting Originations Product Team. This strategic role requires a visionary leader with a consulting background who excels at translating complex data into actionable business insights. The ideal candidate will be a recognized thought leader, demonstrating exceptional critical thinking and problem-solving skills, and a proven ability to craft and deliver compelling data-driven stories that influence decision-making at all levels. Success in this role requires not only technical expertise, but also the ability to inspire others, drive innovation, and communicate insights in a way that resonates with both technical and non-technical audiences.
Key Responsibilities:
* Identify, quantify, and solve obstacles to business goals using advanced business analysis and data science skillsets.
* Recognize and communicate meaningful trends and patterns in data, delivering clear, compelling narratives to drive business decisions.
* Serve as a data expert and consultant to the predictive modeling team, identifying and validating data sources.
* Advise business and technology partners on data-driven opportunities to increase efficiency and improve customer experience.
* Proactively interface with, and gather information from, other areas of the business (Operations, Technology, Finance, Marketing).
* Extract and analyze data from various sources and technologies using complex SQL queries.
* Summarize discoveries with solid data support and quick turnaround, tailoring messages for technical and non-technical audiences.
* Influence upward and downward: mentor junior team members and interface with business leaders to drive strategic initiatives.
* Foster a culture of innovation, attention to detail, and results within the team.
Qualifications:
* 6+ years of experience in business strategy, analytics, or data science.
* 2+ years of experience in business management consulting.
* Strong experience with SQL (query/procedure writing).
* Proficiency in at least one versatile, cross-technology tool/language: Python, SAS, R, or Alteryx.
* Demonstrated ability to craft compelling stories from data and present insights that influence decision-making.
* Clear and succinct written and verbal communication skills, able to frame and present messages for different audiences.
* Critical and analytical thinking, with the ability to maintain detail focus and retain big picture perspective.
* Strong Microsoft Excel skills.
* Ability to work independently, manage shifting priorities and projects, and thrive in a fast-paced, competitive environment.
* Excellent interpersonal skills to work effectively with a variety of individuals, departments, and organizations.
* Experience mentoring or leading teams is highly desirable.
Preferred Background:
* Experience in Mortgage Banking or Financial Services industry preferred.
* Previous experience in consulting, with exposure to a variety of industries and business challenges.
* Track record of successful stakeholder engagement and change management.
* Recognized as a thought leader, with experience driving strategic initiatives and innovation.
Senior Data Scientist (Experimentation & Machine Learning)
Data scientist job in Scottsdale, AZ
Recognized as the No. 1 site trusted by real estate professionals, Realtor.com has been at the forefront of online real estate for over 25 years, connecting buyers, sellers, and renters with trusted insights and expert guidance to find their perfect home. Through its robust suite of tools, Realtor.com not only makes a significant impact on the real estate industry at large, but for consumers, navigating the biggest purchase they will make in their life, by providing a user experience that is easy to use, easy to understand, and most of all, easy to make decisions.
Join us on our mission to empower more people to find their way home by breaking barriers to entry, making the right connections, and building confidence through expert guidance.
Senior Data Scientist
The Data Science and Analytics organization at ****************** sits at the heart of our mission. We process and analyze terabytes of data every day that enable decisions for millions of home buyers, sellers, renters, dreamers, and real estate professionals. Our goal is to use this data to make the home buying experience a breeze for our consumers. We empower them with the most up-to-date information on properties, help them find their dream homes in the least amount of time, and match them with the most suitable realtor to meet their unique, individual needs.
Role Description
We are seeking a Senior Data Scientist with a strong background in experimentation, media analytics, and cross-functional stakeholder support to join our Client Analytics team. In this role, you will analyze large-scale product and media experiments, consolidate and rebuild business-critical metrics, and deliver clear recommendations that inform high-impact decisions across product, media, and finance. The ideal candidate is a detail-oriented executor who thrives on repeatable analytics, enjoys collaborating with partners in product and media, and brings expertise in Python, SQL, and Amplitude (or similar analytics platforms).
Responsibilities
* Partner with business stakeholders to translate experiment questions and business needs into actionable analytics, providing timely and accurate answers on A/B test outcomes, media impact, and product changes.
* Analyze and report on dozens of product and media experiments each quarter, using Python (pandas, numpy, scipy), SQL, and Excel to clean, aggregate, and interpret data from sources such as Google Ad Manager and Amplitude.
* Apply standard statistical testing (e.g., t-tests) to assess significance and produce clear, actionable recommendations (including "no effect" findings) for product, media, and business teams (see the sketch after this list).
* Set up, monitor, and analyze live experiments in Amplitude or similar product analytics platforms, ensuring correct instrumentation, sample assignment, and data quality.
* Lead metric consolidation and calculation projects joining multiple data sources and building SQL pipelines for executive-ready business metrics.
* Document methodologies, assumptions, and recommendations clearly for both technical and non-technical audiences.
* Respond to ad hoc and recurring requests for experiment analysis, media reporting, and metric deep-dives with precision, speed, and reliability.
* Balance high experiment throughput with ad hoc media reporting, regularly prioritizing work across multiple stakeholders.
* Foster a culture of accountability and transparency by ensuring reproducibility, traceability, and clear code documentation in all analytics work.
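To make the experiment read-out above concrete, here is a minimal sketch of a two-group significance check, assuming a hypothetical per-user table exported from a product analytics tool; the column names, the use of Welch's t-test, and the 0.05 threshold are illustrative assumptions, not requirements of the role.

```python
# Minimal sketch: two-sample significance check on per-user outcomes from an A/B test.
# Column names ("variant", "converted") and the alpha level are illustrative assumptions.
import pandas as pd
from scipy import stats

def summarize_ab_test(df: pd.DataFrame, alpha: float = 0.05) -> dict:
    """Compare a metric between control and treatment and report a recommendation."""
    control = df.loc[df["variant"] == "control", "converted"]
    treatment = df.loc[df["variant"] == "treatment", "converted"]

    # Welch's t-test (does not assume equal variances between the two groups).
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    lift = treatment.mean() - control.mean()
    return {
        "control_rate": control.mean(),
        "treatment_rate": treatment.mean(),
        "absolute_lift": lift,
        "p_value": p_value,
        "recommendation": "ship" if (p_value < alpha and lift > 0) else "no detectable effect",
    }

if __name__ == "__main__":
    # Toy data only, to show the shape of the input and output.
    toy = pd.DataFrame({
        "variant": ["control"] * 5 + ["treatment"] * 5,
        "converted": [0, 1, 0, 0, 1, 1, 1, 0, 1, 1],
    })
    print(summarize_ab_test(toy))
```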
Minimum Qualifications
* Typically requires a minimum of 5 years of related experience with a Bachelor's degree; or 3 years and a Master's degree; or a PhD without experience; or equivalent work experience.
* Degree in a quantitative field (e.g., Statistics, Data Science, Applied Mathematics, Economics, Engineering, Computer Science).
* Relevant experience as a Data Scientist, Data Analyst, Product Analyst, or similar role, using SQL and Python (pandas, numpy, scipy).
* Proven track record managing and delivering on high-volume experimentation, media analytics, or product analytics projects with multiple stakeholders.
* Experience with Amplitude, Mixpanel, or similar product analytics platforms.
* Strong SQL skills, including experience building and joining complex pipelines; ability to handle large, messy, multi-source data (see the consolidation sketch after this list).
* Proficient in Excel/Google Sheets for quick reporting and ad hoc analysis.
* Sound understanding of statistical methods for experimentation (randomization, t-tests, confidence intervals, etc.).
* Excellent written and verbal communication skills, with experience presenting to diverse technical and business audiences.
* Self-motivated and self-managing, with strong time management, documentation, and organizational skills.
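As a companion to the multi-source data point above, a minimal consolidation sketch, written in pandas to keep the examples in this document in one language; in practice the same join-and-aggregate step would often live in SQL. The source names, columns, and dedup rule are hypothetical.

```python
# Minimal sketch: consolidating a daily revenue-per-session metric from two
# hypothetical sources. Source names, columns, and the dedup rule are assumptions.
import numpy as np
import pandas as pd

def consolidate_revenue(ad_server: pd.DataFrame, web_events: pd.DataFrame) -> pd.DataFrame:
    """Join ad-server revenue with session counts and roll up to one row per day."""
    # Normalize join keys that commonly disagree across sources (types, duplicates).
    ad_server = ad_server.assign(date=pd.to_datetime(ad_server["date"]))
    web_events = (
        web_events.assign(date=pd.to_datetime(web_events["date"]))
        .drop_duplicates(subset=["date", "page_id"])
    )

    daily = (
        ad_server.merge(web_events, on=["date", "page_id"], how="left")
        .groupby("date", as_index=False)
        .agg(revenue=("revenue", "sum"), sessions=("sessions", "sum"))
    )
    # Guard against divide-by-zero on days with revenue but no recorded sessions.
    daily["revenue_per_session"] = daily["revenue"] / daily["sessions"].replace(0, np.nan)
    return daily
```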
Preferred Qualifications
* Master's or Ph.D. degree in a quantitative field (e.g., Statistics, Data Science, Applied Mathematics, Economics, Engineering, Computer Science).
* Experience in ad tech, media analytics, or digital advertising environments.
* Familiarity with revenue or monetization analytics.
* Experience with dashboarding tools (e.g., Tableau, Looker).
* Exposure to real estate, marketplace, or consumer product analytics.
Do the best work of your life at Realtor.com
Here, you'll partner with a diverse team of experts as you use leading-edge tech to empower everyone to meet a crucial goal: finding their way home. And you'll find your way home too. At Realtor.com, you'll bring your full self to work as you innovate with speed, serve our consumers, and champion your teammates. In return, we'll provide you with a warm, welcoming, and inclusive culture; intellectual challenges; and the development opportunities you need to grow.
Diversity is important to us, therefore, Realtor.com is an Equal Opportunity Employer regardless of age, color, national origin, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, marital status, status as a disabled veteran and/or veteran of the Vietnam Era or any other characteristic protected by federal, state or local law. In addition, Realtor.com will provide reasonable accommodations for otherwise qualified disabled individuals.
Senior Data Scientist
Data scientist job in Phoenix, AZ
**_What Data Science contributes to Cardinal Health_**
The Data & Analytics Function oversees the analytics life-cycle in order to identify, analyze, and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms; the access, design, and implementation of reporting/business intelligence solutions; and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets that integrate multiple systems.
At Cardinal Health's Artificial Intelligence Center of Excellence (AI CoE), we are pushing the boundaries of healthcare with cutting-edge Data Science and Artificial Intelligence (AI). Our mission is to leverage the power of data to create innovative solutions that improve patient outcomes, streamline operations, and enhance the overall healthcare experience.
We are seeking a highly motivated and experienced Senior Data Scientist to join our team as a thought leader and architect of our AI strategy. You will play a critical role in fulfilling our vision through delivery of impactful solutions that drive real-world change.
**_Responsibilities_**
+ Lead the Development of Innovative AI Solutions: design, implement, and scale sophisticated AI solutions that address key business challenges within the healthcare industry, leveraging your expertise in areas such as Machine Learning, Generative AI, and RAG technologies.
+ Develop advanced ML models for forecasting, classification, risk prediction, and other critical applications.
+ Explore and leverage the latest Generative AI (GenAI) technologies, including Large Language Models (LLMs), for applications like summarization, generation, classification and extraction.
+ Build robust Retrieval Augmented Generation (RAG) systems to integrate LLMs with vast repositories of healthcare and business data, ensuring accurate and relevant outputs (a minimal retrieval sketch follows this list).
+ Shape Our AI Strategy: Work closely with key stakeholders across the organization to understand their needs and translate them into actionable AI-driven or AI-powered solutions.
+ Act as a champion for AI within Cardinal Health, influencing the direction of our technology roadmap and ensuring alignment with our overall business objectives.
+ Guide and mentor a team of skilled, geographically distributed Data Scientists and ML Engineers, providing technical guidance and support while fostering a collaborative and innovative environment that encourages continuous learning and growth.
+ Embrace an AI-Driven Culture: foster a culture of data-driven decision-making, promoting the use of AI insights to drive business outcomes and improve customer experience and patient care.
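As a rough illustration of the RAG responsibility above, a minimal sketch of the retrieval and prompt-assembly steps only; the embedding model, LLM, and vector database are intentionally left out, since those choices depend on the team's stack, and the cosine-similarity scan below simply stands in for a vector store.

```python
# Minimal sketch of the retrieval and prompt-assembly steps in a RAG system.
# Embedding and LLM calls are intentionally omitted; document and question
# vectors are assumed to be precomputed upstream.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(question_vec: np.ndarray, doc_vecs: list[np.ndarray], docs: list[str], k: int = 3) -> list[str]:
    """Return the k documents whose embeddings are closest to the question."""
    scores = [cosine_similarity(question_vec, v) for v in doc_vecs]
    top_k = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top_k]

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Ground the model's answer in retrieved context to keep outputs relevant."""
    context = "\n\n".join(context_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```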
**_Qualifications_**
+ 8-12 years of experience, including a minimum of 4 years in data science, with a strong track record of success developing and deploying complex AI/ML solutions, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ GenAI Proficiency: Deep understanding of Generative AI concepts, including LLMs, RAG technologies, embedding models, prompting techniques, and vector databases, along with experience evaluating RAG retrievals and GenAI model outputs without ground truth
+ Experience building production-ready Generative AI applications involving RAG, LLMs, vector databases, and embedding models.
+ Extensive knowledge of healthcare data, including clinical data, patient demographics, and claims data. Understanding of HIPAA and other relevant regulations, preferred.
+ Experience working with cloud platforms like Google Cloud Platform (GCP) for data processing, model training, evaluation, monitoring, deployment and support preferred.
+ Proven ability to lead data science projects, mentor colleagues, and effectively communicate complex technical concepts to both technical and non-technical audiences preferred.
+ Proficiency in Python, statistical programming languages, machine learning libraries (Scikit-learn, TensorFlow, PyTorch), cloud platforms, and data engineering tools preferred.
+ Experience with Cloud Functions, Vertex AI, MLflow, Storage Buckets, IAM principles, and Service Accounts preferred.
+ Experience in building end-to-end ML pipelines, from data ingestion and feature engineering to model training, deployment, and scaling, preferred (see the pipeline sketch after this list).
+ Experience in building and implementing CI/CD pipelines for ML models and other solutions, ensuring seamless integration and deployment in production environments preferred.
+ Familiarity with RESTful API design and implementation, including building robust APIs to integrate your ML models and GenAI solutions with existing systems preferred.
+ Working understanding of software engineering patterns, solutions architecture, information architecture, and security architecture with an emphasis on ML/GenAI implementations preferred.
+ Experience working in Agile development environments, including Scrum or Kanban, and a strong understanding of Agile principles and practices preferred.
+ Familiarity with DevSecOps principles and practices, incorporating coding standards and security considerations into all stages of the development lifecycle preferred.
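Since the qualifications above name Scikit-learn and end-to-end pipelines, here is a minimal sketch of preprocessing and training bundled into a single pipeline object so that serving matches training; the feature names, target, and model choice are hypothetical, not the team's actual stack.

```python
# Minimal sketch of an end-to-end training pipeline with scikit-learn.
# Feature names, target column, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

NUMERIC = ["age", "prior_claims"]          # hypothetical numeric features
CATEGORICAL = ["region", "plan_type"]      # hypothetical categorical features

def train(df: pd.DataFrame, target: str = "high_risk") -> Pipeline:
    """Fit preprocessing and model as one object so serving matches training."""
    features = ColumnTransformer([
        ("num", StandardScaler(), NUMERIC),
        ("cat", OneHotEncoder(handle_unknown="ignore"), CATEGORICAL),
    ])
    model = Pipeline([("features", features), ("clf", GradientBoostingClassifier())])

    X_train, X_test, y_train, y_test = train_test_split(
        df[NUMERIC + CATEGORICAL], df[target], test_size=0.2, random_state=42
    )
    model.fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))
    return model
```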
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $121,600 - $173,700
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before pay day with my FlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 11/05/2025
*If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._