Staff Data Scientist
Data scientist job in Fremont, CA
Staff Data Scientist | San Francisco | $250K-$300K + Equity
We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI.
In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.
What you'll do:
Drive deep-dive analyses on user behavior, product performance, and growth drivers
Design and interpret A/B tests to measure product impact at scale (see the analysis sketch after this list)
Build scalable data models, pipelines, and dashboards for company-wide use
Partner with Product and Engineering to embed experimentation best practices
Evaluate ML models, ensuring business relevance, performance, and trade-off clarity
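As a rough illustration of the experimentation work above, here is a minimal sketch of how a two-variant conversion test might be read out. It assumes statsmodels is available; the metric, conversion counts, and sample sizes are hypothetical.

```python
# Minimal sketch: reading out a two-variant conversion A/B test.
# Assumes statsmodels; counts and sample sizes are illustrative, not real data.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

conversions = np.array([1320, 1415])   # control, treatment successes (hypothetical)
exposures = np.array([25000, 24800])   # users assigned to each variant (hypothetical)

# Two-sided z-test on the difference in conversion rates.
z_stat, p_value = proportions_ztest(conversions, exposures)

# Per-variant confidence intervals to report alongside the p-value.
lo_c, hi_c = proportion_confint(conversions[0], exposures[0], alpha=0.05)
lo_t, hi_t = proportion_confint(conversions[1], exposures[1], alpha=0.05)

print(f"control rate:   {conversions[0] / exposures[0]:.4f} (95% CI {lo_c:.4f}-{hi_c:.4f})")
print(f"treatment rate: {conversions[1] / exposures[1]:.4f} (95% CI {lo_t:.4f}-{hi_t:.4f})")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

In practice the test would also be sized up front with a power analysis, and the readout reported against the pre-registered metric and minimum detectable effect.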
What we're looking for:
5+ years in data science or product analytics at scale (consumer or marketplace preferred)
Advanced SQL and Python skills, with strong foundations in statistics and experimental design
Proven record of designing, running, and analyzing large-scale experiments
Ability to analyze and reason about ML models (classification, recommendation, LLMs)
Strong communicator with a track record of influencing cross-functional teams
If you're excited by the sound of this challenge, apply today and we'll be in touch.
Data Scientist
Data scientist job in Pleasanton, CA
Key Responsibilities
Design and develop marketing-focused machine learning models, including:
Customer segmentation
Propensity, churn, and lifetime value (LTV) models (see the churn-propensity sketch after this list)
Campaign response and uplift models
Attribution and marketing mix models (MMM)
Build and deploy NLP solutions for:
Customer sentiment analysis
Text classification and topic modeling
Social media, reviews, chat, and voice-of-customer analytics
Apply advanced statistical and ML techniques to solve real-world business problems.
Work with structured and unstructured data from multiple marketing channels (digital, CRM, social, email, web).
Translate business objectives into analytical frameworks and actionable insights.
Partner with stakeholders to define KPIs, success metrics, and experimentation strategies (A/B testing).
Optimize and productionize models using MLOps best practices.
Mentor junior data scientists and provide technical leadership.
Communicate complex findings clearly to technical and non-technical audiences.
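To make the propensity and churn modeling above concrete, here is a minimal churn-propensity sketch. It assumes scikit-learn and pandas; the CSV path, column names, and churn label are hypothetical.

```python
# Minimal sketch: a churn-propensity model on tabular marketing data.
# Assumes scikit-learn and pandas; file path and column names are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customers.csv")                       # hypothetical extract
numeric = ["tenure_months", "orders_90d", "avg_order_value"]
categorical = ["acquisition_channel", "plan"]
X, y = df[numeric + categorical], df["churned"]         # 1 = churned within the horizon

pre = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
model = Pipeline([("pre", pre), ("clf", LogisticRegression(max_iter=1000))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
model.fit(X_tr, y_tr)
print("holdout AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

The same scaffolding extends to propensity and LTV targets by swapping the label and, for LTV, moving to a regression estimator.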
Required Skills & Qualifications
7+ years of experience in Data Science, with a strong focus on marketing analytics.
Strong expertise in Machine Learning (supervised & unsupervised techniques).
Hands-on experience with NLP techniques, including:
Text preprocessing and feature extraction
Word embeddings (Word2Vec, GloVe, Transformers)
Large Language Models (LLMs) is a plus
Proficiency in Python (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch).
Experience with SQL and large-scale data processing.
Strong understanding of statistics, probability, and experimental design.
Experience working with cloud platforms (AWS, Azure, or GCP).
Ability to translate data insights into business impact.
Nice to Have
Experience with marketing automation or CRM platforms.
Knowledge of MLOps, model monitoring, and deployment pipelines.
Familiarity with GenAI/LLM-based NLP use cases for marketing.
Prior experience in consumer, e-commerce, or digital marketing domains.
EEO
Centraprise is an equal opportunity employer. Your application and candidacy will be considered without regard to race, color, sex, religion, creed, sexual orientation, gender identity, national origin, disability, genetic information, pregnancy, veteran status, or any other characteristic protected by federal, state, or local laws.
Data Scientist
Data scientist job in Fremont, CA
Key Responsibilities
Design and productionize models for opportunity scanning, anomaly detection, and significant change detection across CRM, streaming, ecommerce, and social data.
Define and tune alerting logic (thresholds, SLOs, precision/recall) to minimize noise while surfacing high-value marketing actions.
Partner with marketing, product, and data engineering to operationalize insights into campaigns, playbooks, and automated workflows, with clear monitoring and experimentation.
Required Qualifications
Strong proficiency in Python (pandas, NumPy, scikit-learn; plus experience with PySpark or similar for large-scale data) and SQL on modern warehouses (e.g., BigQuery, Snowflake, Redshift).
Hands-on experience with time-series modeling and anomaly / changepoint / significant-movement detection (e.g., STL decomposition, EWMA/CUSUM, Bayesian/Prophet-style models, isolation forests, robust statistics); see the EWMA sketch after this list.
Experience building and deploying production ML pipelines (batch and/or streaming), including feature engineering, model training, CI/CD, and monitoring for performance and data drift.
Solid background in statistics and experimentation: hypothesis testing, power analysis, A/B testing frameworks, uplift/propensity modeling, and basic causal inference techniques.
Familiarity with cloud platforms (GCP/AWS/Azure), orchestration tools (e.g., Airflow/Prefect), and dashboarding/visualization tools to expose alerts and model outputs to business users.
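As a rough illustration of the significant-movement detection described above, here is a minimal EWMA-band sketch. It assumes only pandas; the metric series, span, and threshold are illustrative rather than tuned values.

```python
# Minimal sketch: flagging significant movements in a daily metric with an EWMA band.
# Assumes pandas; the series and the k threshold are illustrative, not tuned.
import pandas as pd

def ewma_anomalies(series: pd.Series, span: int = 14, k: float = 3.0) -> pd.DataFrame:
    """Flag points more than k smoothed standard deviations from the EWMA of prior values."""
    history = series.shift(1)                            # compare each point to its past only
    mean = history.ewm(span=span, adjust=False).mean()
    std = history.ewm(span=span, adjust=False).std()
    z = (series - mean) / std
    return pd.DataFrame({"value": series, "ewma": mean, "z": z, "alert": z.abs() > k})

daily = pd.Series(
    [100, 103, 98, 101, 99, 102, 100, 97, 160, 101],     # hypothetical daily signups
    index=pd.date_range("2025-01-01", periods=10, freq="D"),
)
print(ewma_anomalies(daily)[lambda d: d["alert"]])        # only rows that breach the band
```

Tuning span and k against labeled incidents is how the precision/recall trade-off mentioned in the responsibilities would typically be managed.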
Senior Data Scientist
Data scientist job in Pleasanton, CA
Net2Source is a Global Workforce Solutions Company headquartered in NJ, USA, with branch offices in the Asia Pacific region. We are one of the fastest-growing IT consulting companies across the USA, and we are hiring a Senior Data Scientist for one of our clients. We offer a wide gamut of consulting solutions customized to our 450+ clients, ranging from Fortune 500/1000 companies to start-ups across various verticals such as Technology, Financial Services, Healthcare, Life Sciences, Oil & Gas, Energy, Retail, Telecom, Utilities, Manufacturing, the Internet, and Engineering.
Position: Senior Data Scientist
Location: Pleasanton, CA (Onsite) - Locals Only
Type: Contract
Exp Level - 10+ Years
Required Skills
Design, develop, and deploy advanced marketing models.
Build and productionize NLP solutions.
Partner with Marketing and Business stakeholders to translate business objectives into data science solutions.
Work with large-scale structured and unstructured datasets using SQL, Python, and distributed systems.
Evaluate and implement state-of-the-art ML/NLP techniques to improve model performance and business impact.
Communicate insights, results, and recommendations clearly to both technical and non-technical audiences.
Required Qualifications
5+ years of experience in data science or applied machine learning, with a strong focus on marketing analytics.
Hands-on experience building predictive marketing models (e.g., segmentation, attribution, personalization).
Strong expertise in NLP techniques and libraries (e.g., spaCy, NLTK, Hugging Face, Gensim).
Proficiency in Python, SQL, and common data science libraries (pandas, NumPy, scikit-learn).
Solid understanding of statistics, machine learning algorithms, and model evaluation.
Experience deploying models into production environments.
Strong communication and stakeholder management skills.
Why Work With Us?
We believe in more than just jobs; we build careers. At Net2Source, we champion leadership at all levels, celebrate diverse perspectives, and empower you to make an impact. Think work-life balance, professional growth, and a collaborative culture where your ideas matter.
Our Commitment to Inclusion & Equity
Net2Source is an equal opportunity employer, dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all, regardless of age, gender, ethnicity, disability, or other protected characteristics.
Awards & Recognition
America's Most Honored Businesses (Top 10%)
Fastest-Growing Staffing Firm by Staffing Industry Analysts
INC 5000 List for Eight Consecutive Years
Top 100 by Dallas Business Journal
Spirit of Alliance Award by Agile1
Maddhuker Singh
Sr Account & Delivery Manager
***********************
Staff Data Engineer
Data scientist job in Fremont, CA
🌎 San Francisco (Hybrid)
💼 Founding/Staff Data Engineer
💵 $200-300k base
Our client is an elite applied AI research and product lab building AI-native systems for finance, pushing frontier models into real production environments. Their work sits at the intersection of data, research, and high-stakes financial decision-making.
As the Founding Data Engineer, you will own the data platform that powers everything: models, experiments, and user-facing products relied on by demanding financial customers. You'll make foundational architectural decisions, work directly with researchers and product engineers, and help define how data is built, trusted, and scaled from day one.
What you'll do:
Design and build the core data platform, ingesting, transforming, and serving large-scale financial and alternative datasets.
Partner closely with researchers and ML engineers to ship production-grade data and feature pipelines that power cutting-edge models.
Establish data quality, observability, lineage, and reproducibility across both experimentation and production workloads.
Deploy and operate data services using Docker and Kubernetes in a modern cloud environment (AWS, GCP, or Azure).
Make foundational choices on tooling, architecture, and best practices that will define how data works across the company.
Continuously simplify and evolve systems, rewriting pipelines or infrastructure when it's the right long-term decision.
Ideal candidate:
Have owned or built high-performance data systems end-to-end, directly supporting production applications and ML models.
Are strongest in backend and data infrastructure, with enough frontend literacy to integrate cleanly with web products when needed.
Can design and evolve backend services and pipelines (Node.js or Python) to support new product features and research workflows.
Are an expert in at least one statically typed language, with a strong bias toward type safety, correctness, and maintainable systems.
Have deployed data workloads and services using Docker and Kubernetes on a major cloud provider.
Are comfortable making hard calls-simplifying, refactoring, or rebuilding legacy pipelines when quality and scalability demand it.
Use AI tools to accelerate your work, but rigorously review and validate AI-generated code, insisting on sound system design.
Thrive in a high-bar, high-ownership environment with other exceptional engineers.
Love deep technical problems in data infrastructure, distributed systems, and performance.
Nice to have:
Experience working with financial data (market, risk, portfolio, transactional, or alternative datasets).
Familiarity with ML infrastructure, such as feature stores, experiment tracking, or model serving systems.
Background in a high-growth startup or a foundational infrastructure role.
Compensation & setup:
Competitive salary and founder-level equity
Hybrid role based in San Francisco, with close collaboration and significant ownership
Small, elite team building core infrastructure with outsized impact
AI Data Engineer
Data scientist job in Fremont, CA
Member of Technical Staff - AI Data Engineer
San Francisco (In-Office)
$150K to $225K + Equity
A high-growth, AI-native startup coming out of stealth is hiring AI Data Engineers to build the systems that power production-grade AI. The company has recently signed a Series A term sheet and is scaling rapidly. This role is central to unblocking current bottlenecks across data engineering, context modeling, and agent performance.
Responsibilities:
• Build distributed, reliable data pipelines using Airflow, Temporal, and n8n
• Model SQL, vector, and NoSQL databases (Postgres, Qdrant, etc.)
• Build API and function-based services in Python
• Develop custom automations (Playwright, Stagehand, Zapier)
• Work with AI researchers to define and expose context as services (see the service sketch after this list)
• Identify gaps in data quality and drive changes to upstream processes
• Ship fast, iterate, and own outcomes end-to-end
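As a rough illustration of exposing context as a service, here is a minimal sketch. It assumes FastAPI and uvicorn; the endpoint shape, in-memory store, and fields are hypothetical stand-ins for a real backing store such as Postgres or Qdrant.

```python
# Minimal sketch: exposing "context" to agents as a small Python service.
# Assumes FastAPI + uvicorn; store, route shape, and fields are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="context-service")

# Stand-in for a real backing store (Postgres, Qdrant, etc.).
CONTEXT_STORE = {
    "acct_123": {"plan": "enterprise", "open_tickets": 2, "last_login": "2025-11-01"},
}

class ContextResponse(BaseModel):
    entity_id: str
    context: dict

@app.get("/context/{entity_id}", response_model=ContextResponse)
def get_context(entity_id: str) -> ContextResponse:
    """Return the context document an agent should see for this entity."""
    doc = CONTEXT_STORE.get(entity_id)
    if doc is None:
        raise HTTPException(status_code=404, detail="unknown entity")
    return ContextResponse(entity_id=entity_id, context=doc)

# Run locally with: uvicorn context_service:app --reload
```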
Required Experience:
• Strong background in data engineering
• Hands-on experience working with LLMs or LLM-powered applications
• Data modeling skills across SQL and vector databases
• Experience building distributed systems
• Experience with Airflow, Temporal, n8n, or similar workflow engines
• Python experience (API/services)
• Startup mindset and bias toward rapid execution
Nice To Have:
• Experience with stream processing (Flink)
• dbt or Clickhouse experience
• CDC pipelines
• Experience with context construction, RAG, or agent workflows
• Analytical tooling (Posthog)
What You Can Expect:
• High-intensity, in-office environment
• Fast decision-making and rapid shipping cycles
• Real ownership over architecture and outcomes
• Opportunity to work on AI systems operating at meaningful scale
• Competitive compensation package
• Meals provided plus full medical, dental, and vision benefits
If this sounds like you, please apply now.
Senior Data Engineer
Data scientist job in Fremont, CA
We're hiring a Senior/Lead Data Engineer to join a fast-growing AI startup. The team comes from a billion-dollar AI company and has raised a $40M+ seed round.
You'll need to be comfortable transforming and moving data from legacy sources into a new 'group-level' data warehouse, and you'll have a strong data modeling background.
Proven proficiency in modern data transformation tools, specifically dbt and/or SQLMesh.
Exceptional ability to apply systems thinking and complex problem-solving to ambiguous challenges. Experience within a high-growth startup environment is highly valued.
Deep, practical knowledge of the entire data lifecycle, from generation and governance through to advanced downstream applications (e.g., fueling AI/ML models, LLM consumption, and core product features).
Outstanding ability to communicate technical complexity clearly, synthesizing information into actionable frameworks for executive and cross-functional teams.
Data Engineer
Data scientist job in Fremont, CA
Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community.
Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.
Core Responsibilities:
Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets (see the data quality sketch after this list)
Partner with Finance, Product and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments
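As a rough illustration of the automated data quality checks mentioned above, here is a minimal sketch. It assumes pandas; the column names and rules are illustrative, not the team's actual checks.

```python
# Minimal sketch: automated data quality checks on a revenue extract before it
# lands in the analytics environment. Assumes pandas; columns and rules are illustrative.
import pandas as pd

def check_revenue_extract(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df["invoice_id"].duplicated().any():
        failures.append("duplicate invoice_id values")
    if df["amount_usd"].lt(0).any():
        failures.append("negative amounts present")
    if df["invoice_date"].isna().any():
        failures.append("missing invoice dates")
    if not df["currency"].isin(["USD"]).all():
        failures.append("non-USD rows in a USD-normalized extract")
    return failures

batch = pd.DataFrame({
    "invoice_id": ["a1", "a2", "a2"],
    "amount_usd": [120.0, -5.0, 40.0],
    "invoice_date": pd.to_datetime(["2025-01-03", None, "2025-01-04"]),
    "currency": ["USD", "USD", "USD"],
})
for problem in check_revenue_extract(batch):
    print("FAIL:", problem)
```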
Required Qualifications:
3-5+ years experience in data engineering, analytics engineering, or similar role with demonstrated ability to work with large-scale datasets
Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
Proficiency in at least one programming language (Python, R) for data manipulation and analysis
Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
Ability to communicate technical concepts clearly to non-technical stakeholders
Data Engineer / Analytics Specialist
Data scientist job in Fremont, CA
Citizenship Requirement: U.S. Citizens Only
ITTConnect is seeking a Data Engineer / Analytics Specialist to work for one of our clients, a major technology consulting firm headquartered in Europe. They are experts in tailored technology consulting and services for banks, investment firms, and other financial-sector clients.
Job location: San Francisco Bay area or NY City.
Work Model: Ability to come into the office as requested
Seniority: 10+ years of total experience
About the role:
The Data Engineer / Analytics Specialist will support analytics, product insights, and AI initiatives. You will build robust data pipelines, integrate data sources, and enhance the organization's analytical foundations.
Responsibilities:
Build and operate Snowflake-based analytics environments.
Develop ETL/ELT pipelines (DBT, Airflow, etc.); see the DAG sketch after this list.
Integrate APIs, external data sources, and streaming inputs.
Perform query optimization, basic data modeling, and analytics support.
Enable downstream GenAI and analytics use cases.
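As a rough illustration of the ETL/ELT pipeline work above, here is a minimal Airflow DAG sketch in the TaskFlow style. It assumes a recent Airflow (2.4+); the task bodies, staging path, and table names are placeholders.

```python
# Minimal sketch: a daily ELT DAG in Airflow's TaskFlow style.
# Assumes Airflow 2.4+; task bodies, paths, and table names are placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["elt"])
def daily_elt():

    @task
    def extract() -> str:
        # Pull from an API or external source; return a staging location (hypothetical).
        return "s3://staging/orders/2025-01-01.parquet"

    @task
    def load(staging_path: str) -> str:
        # Copy the staged file into the warehouse raw schema (placeholder logic).
        print(f"COPY INTO raw.orders FROM '{staging_path}'")
        return "raw.orders"

    @task
    def transform(raw_table: str) -> None:
        # In practice this step might shell out to `dbt run` instead.
        print(f"building analytics models from {raw_table}")

    transform(load(extract()))

daily_elt()
```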
Requirements:
10+ years of overall technology experience
3+ years hands-on AWS experience required
Strong SQL and Snowflake experience.
Hands-on pipeline engineering with DBT, Airflow, or similar.
Experience with API integrations and modern data architectures.
Senior Data Engineer - Spark, Airflow
Data scientist job in Fremont, CA
We are seeking an experienced Data Engineer to design and optimize scalable data pipelines that drive our global data and analytics initiatives.
In this role, you will leverage technologies such as Apache Spark, Airflow, and Python to build high performance data processing systems and ensure data quality, reliability, and lineage across Mastercard's data ecosystem.
The ideal candidate combines strong technical expertise with hands-on experience in distributed data systems, workflow automation, and performance tuning to deliver impactful, data-driven solutions at enterprise scale.
Responsibilities:
Design and optimize Spark-based ETL pipelines for large-scale data processing.
Build and manage Airflow DAGs for scheduling, orchestration, and checkpointing.
Implement partitioning and shuffling strategies to improve Spark performance.
Ensure data lineage, quality, and traceability across systems.
Develop Python scripts for data transformation, aggregation, and validation.
Execute and tune Spark jobs using spark-submit.
Perform DataFrame joins and aggregations for analytical insights (see the PySpark sketch after this list).
Automate multi-step processes through shell scripting and variable management.
Collaborate with data, DevOps, and analytics teams to deliver scalable data solutions.
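As a rough illustration of the join, aggregation, and partitioning work above, here is a minimal PySpark sketch; the paths, column names, and partition count are illustrative.

```python
# Minimal sketch: a PySpark join + aggregation with explicit repartitioning.
# Assumes PySpark; paths and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn-aggregation").getOrCreate()

txns = spark.read.parquet("s3://lake/raw/transactions/")       # hypothetical path
merchants = spark.read.parquet("s3://lake/raw/merchants/")

# Repartition on the join key so the shuffle distributes evenly before the join.
daily_by_category = (
    txns.repartition(200, "merchant_id")
        .join(merchants.select("merchant_id", "category"), on="merchant_id", how="inner")
        .groupBy("category", F.to_date("txn_ts").alias("txn_date"))
        .agg(F.count("*").alias("txn_count"), F.sum("amount").alias("total_amount"))
)

daily_by_category.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://lake/curated/daily_category_spend/"
)
```

A job like this would typically be packaged and launched with spark-submit, where executor count, memory, and shuffle partitions become the main tuning levers.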
Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or related field (or equivalent experience).
At least 7 years of experience in data engineering or big data development.
Strong expertise in Apache Spark architecture, optimization, and job configuration.
Proven experience authoring, scheduling, checkpointing, and monitoring Airflow DAGs.
Skilled in data shuffling, partitioning strategies, and performance tuning in distributed systems.
Expertise in Python programming including data structures and algorithmic problem-solving.
Hands-on with Spark DataFrames and PySpark transformations, including joins, aggregations, and filters.
Proficient in shell scripting, including managing and passing variables between scripts.
Experienced with spark-submit for deployment and tuning.
Solid understanding of ETL design, workflow automation, and distributed data systems.
Excellent debugging and problem-solving skills in large-scale environments.
Experience with AWS Glue, EMR, Databricks, or similar Spark platforms.
Knowledge of data lineage and data quality frameworks like Apache Atlas.
Familiarity with CI/CD pipelines, Docker/Kubernetes, and data governance tools.
Data Modeler
Data scientist job in Sacramento, CA
We are seeking a senior, hands-on Data Analyst /Data Modeler with strong communication skills for a long-term hybrid assignment with our Sacramento, California-based client. This position requires you to be able to work on-site in Sacramento, California, on Mondays and Wednesdays each week. Candidates must currently live within 60 miles of Sacramento, CA.
Requirements:
Senior, hands-on data modeler with strong communication skills.
Expert-level command of the ER/Studio Data Architect modeling application
Strong ability to articulate data modeling principles and gather requirements from non-technical business stakeholders
Excellent presentation skills for different audiences (business and technical), ranging from senior-level leadership to operational staff, working without supervision
Ability to translate business and functional requirements into technical requirements for technical team members.
Candidate needs to be able to demonstrate direct hands-on recent practical experience in the areas identified with specific examples.
Qualifications:
Mandatory:
Minimum of ten (10) years demonstrable experience in the data management space, with at least 5 years specializing in database design and at least 5 years in data modeling.
Minimum of five (5) years of experience as a data analyst or in other quantitative analysis or related disciplines, such as a researcher or data engineer, supportive of key duties/responsibilities identified above.
Minimum of five (5) years of demonstrated experience with ER/Studio data modeling application
Minimum of five (5) years of relevant experience in relational data modeling and dimensional data modeling, statistical analysis, and machine learning, supportive of key duties/responsibilities identified above.
Excellent communication and collaboration skills to work effectively with stakeholders and team members.
At least 2 years of experience working on Star, Snowflake, and/or Hybrid schemas
Oracle/ODI/OCI/ADW experience required
Desired:
At least 2 years of experience working on Oracle Autonomous Data Warehouse (ADW), specifically installed in an OCI environment.
Expert-level Kimball Dimensional Data Modeling experience
Expert-level experience developing in Oracle SQL Developer or ER/Studio Data Architect for Oracle.
Ability to develop and perform Extract, Transform, and Load (ETL) activities using Oracle tools and PL/SQL, with at least 2 years of experience.
Ability to provide hands-on technical leadership of an Oracle data warehouse team, including but not limited to ETL, requirements solicitation, DBA, data warehouse administration, and data analysis.
Sr Data Platform Engineer
Data scientist job in Elk Grove, CA
Hybrid role 3X a week in office in Elk Grove, CA; no remote capabilities
This is a direct hire opportunity.
We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
Implement ETL workflows with Spark, Delta Lake, and cloud‑native tools.
Support big data platforms (Databricks, Snowflake, GCP) in production.
Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
Ensure governance, security, and compliance across data systems.
Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
Collaborate cross‑functionally to translate business needs into technical solutions.
Qualifications:
7+ years in data engineering with production pipeline experience.
Expertise in Spark ecosystem, Databricks, Snowflake, GCP.
Strong skills in PySpark, Python, SQL.
Experience with RAG systems, semantic search, and LLM integration.
Familiarity with Kafka, Pub/Sub, vector databases.
Proven ability to optimize ETL jobs and troubleshoot production issues.
Agile team experience and excellent communication skills.
Certifications in Databricks, Snowflake, GCP, or Azure.
Exposure to Airflow, BI tools (Power BI, Looker Studio).
Data Engineer
Data scientist job in Pleasanton, CA
Job Title: Data Engineer
The hiring manager prefers the candidate to be on site in Pleasanton.
Proficiency in Spark, Python, and SQL is essential for this role. 10+ years of experience with relational databases such as Oracle, NoSQL databases including MongoDB and Cassandra, and big data technologies, particularly Databricks, is required. Strong knowledge of data modeling techniques is necessary for designing efficient and scalable data structures. Familiarity with APIs and web services, including REST and SOAP, is important for integrating various data sources and ensuring seamless data flow. This role involves leveraging these technical skills to build and maintain robust data pipelines and support advanced data analytics.
SKILLS:
- Spark/Python/SQL
- Relational Database (Oracle) / NoSQL Database (MongoDB/ Cassandra) / Databricks
- Big Data technologies - Databricks preferred
- Data modelling techniques
- APIs and web services (REST/ SOAP)
If interested, please share the details below along with an updated resume:
Full Name:
Phone:
E-mail:
Rate:
Location:
Visa Status:
Availability:
SSN (Last 4 digit):
Date of Birth:
LinkedIn Profile:
Availability for the interview:
Availability for the project:
Data Scientist 4
Data scientist job in Fremont, CA
Analyze large, complex datasets from diverse sources to uncover insights and identify opportunities for innovation. Design, build, and deploy robust machine learning models with meaningful uncertainty quantification. Perform rigorous data engineering and model evaluation, including feature engineering, hyperparameter tuning, and model selection.
Collaborate with engineering teams to integrate models into production codebases, promoting best practices in code quality and maintainability.
Communicate findings and technical results clearly to both technical and non-technical stakeholders.
Master's degree with 6+ years of experience, or Ph.D. with 3+ years, in Computer Science, Engineering, Physics, Applied Mathematics, Statistics, or a related quantitative field.
Machine Learning Expertise: Strong theoretical foundation and hands-on experience in ML algorithms, deep learning, AI, statistics, or optimization.
Programming Skills: Proficient in Python, with motivation to write efficient, maintainable, testable, and well-documented code.
ML Frameworks: Experience with modern ML frameworks such as PyTorch, JAX, or TensorFlow.
Problem Solving: Demonstrated analytical and critical thinking skills, with a track record of delivering impactful R&D solutions.
Team Collaboration: Proven success working in cross-functional teams with strong execution and communication skills.
Domain expertise in semiconductor engineering, Bayesian statistics, process engineering, multi-physics modeling, or numerical simulation.
Familiarity with Linux/Unix operating systems.
Experience with MLOps tools and principles (e.g., Docker, CI/CD pipelines).
Senior Data Scientist (Forecasting and ML Ops)
Data scientist job in Pleasanton, CA
At Clorox, we champion people to be well and thrive by doing the right thing, putting people at the center, and playing to win. Led by our IGNITE strategy, we build brands that make a positive difference in people's lives around the world. And we know that success requires head, heart, AND guts - all three, every day - coming together to work simpler, faster, bolder, and more inclusively. Interested? Join us to #IgniteYourCareer!
Your role at Clorox:
Clorox is transforming its data & technology capabilities and culture to accelerate purpose-driven business growth. We are looking for a passionate Data Scientist to join our team and help us build the future of forecasting and business planning. In this role, you will be responsible for developing and deploying forecasting models using a variety of techniques, including machine learning, statistics, and data mining. You will also be responsible for designing and implementing data pipelines to collect, clean, and prepare data for forecasting models. The Forecasting team drives this transformation by embedding predictive analytics and data science capabilities into global decisions, empowering business units with scalable, reusable models. This position reports to the Associate Director, Enterprise Analytics.
In this role, you will:
Deliver end-to-end analytics and data science solutions from idea conception through planning, requirements, design, development, testing, production, and deployment/business process integration
Oversee the solution's ML Ops from a model standpoint and own the management of data & model drift (see the drift-check sketch after this list)
Retrain the models regularly and upgrade the models and associated technical pipelines, for example by adding signals or adapting the models to a change in input source or format
Perform first-line and second-line model maintenance, including advanced debugging whenever necessary; coordinate with engineers on specialized back-end and UI upgrades and debugging
Collaborate with cross-functional stakeholders/SMEs throughout product solution lifecycle to ensure requirements are met and solutions are fully integrated in business processes to deliver value
Evaluate existing and new data science tools and techniques, lead the development of data science best practices across the enterprise
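As a rough illustration of data-drift monitoring, here is a minimal population stability index (PSI) sketch. It assumes NumPy, uses synthetic data, and the warn/investigate thresholds quoted in the comments are common rules of thumb rather than Clorox-specific settings.

```python
# Minimal sketch: a PSI check for feature drift between a training baseline and the
# latest scoring batch. Assumes NumPy; data is synthetic and thresholds are rules of thumb.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Higher PSI = larger shift; ~0.1 warn, ~0.25 investigate (rule of thumb)."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                  # catch out-of-range values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)               # avoid division by zero
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 50_000)               # hypothetical training data
latest_feature = rng.normal(0.3, 1.1, 5_000)               # drifted scoring batch
print(f"PSI = {psi(train_feature, latest_feature):.3f}")
```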
What we look for:
8+ years of overall data science experience
3+ years of business experience in a Data Science or Advanced Analytics role in industry (must have demonstrated experience working with business units, not just academic work)
Deep understanding and experience with advanced statistics, time series forecasting, machine-learning models, best practice application of data science in a business context (e.g., back-testing & piloting), model architecture, and use cases
Experience in data management, e.g., wrangling, extraction, normalization
Ability to build industrialized data pipelines
Proficiency in SQL and Python (required); R / Scala a plus
Knowledge of big data frameworks like Spark is preferred
Ability to navigate, collaborate and deliver production-grade code in a complex industrialized code base
Experience with standard SDLC process and DevOps including version-control (GitHub/SVN) and CI/CD
Experience using business intelligence tools like Power BI / Tableau
Experience in Azure and Databricks are a plus
Understanding of design and architecture principles is a plus
Good communication and presentation skills: ability to synthesize, simplify, and explain complex problems to different audiences across functions and levels; ability to convey insight through storytelling
Strong project management skills to stay on top of the timelines and deliverables
Autonomy and creativity with an ability to design suitable technical solutions to solve business problems
Bachelor's degree in Statistics, Data Science, Applied Mathematics, Computer Science, Business Analytics, or related quantitative disciplines (Master's degree preferred)
Workplace type:
Hybrid or Remote
We seek out and celebrate diverse backgrounds and experiences. We're looking for fresh perspectives, a desire to bring your best, and a non-stop drive to keep growing and learning.
At Clorox, we have a Culture of Inclusion. We believe our values-based culture connects to our purpose and helps our people be the best versions of themselves, professionally and personally. This means building a workplace where every person can feel respected, valued, and fully able to participate in our Clorox community. Learn more about our I&D program & initiatives here.
[U.S.] Additional Information:
At Clorox, we champion people to be well and thrive, starting with our own people. To help make this possible, we offer comprehensive, competitive benefits that prioritize all aspects of wellbeing and provide flexibility for our teammates' unique needs. This includes robust health plans, a market-leading 401(k) program with a company match, flexible time off benefits (including half-day summer Fridays depending on location), inclusive fertility / adoption benefits, and more.
We are committed to fair and equitable pay and are transparent with current and future teammates about our full salary ranges. We use broad salary ranges that reflect the competitive market for similar jobs, provide sufficient opportunity for growth as you gain experience and expand responsibilities, while also allowing for differentiation based on performance. Based on the breadth of our ranges, most new hires will start at Clorox in the first half of the applicable range. Your starting pay will depend on job-related factors, including relevant skills, knowledge, experience and location. The applicable salary range for every role in the U.S. is based on your work location and is aligned to one of three zones according to the cost of labor in your area.
Zone A: $116,700 - $229,800
Zone B: $107,000 - $210,700
Zone C: $97,200 - $191,500
All ranges are subject to change in the future. Your recruiter can share more about the specific salary range for your location during the hiring process.
This job is also eligible for participation in Clorox's incentive plans, subject to the terms of the applicable plan documents and policies.
Please apply directly to our job postings and do not submit your resume to any person via text message. Clorox does not conduct text-based interviews and encourages you to be cautious of anyone posing as a Clorox recruiter via unsolicited texts during these uncertain times.
Data Scientist
Data scientist job in San Ramon, CA
Job Title: Data Scientist
Contract: 6 Months
Need: 4+ years of work experience with heavy deep learning, machine learning, and algorithm experience.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Data Scientist II
Data scientist job in Folsom, CA
About the Role
The Forecasting Team at Gap Inc. applies data analysis and machine learning techniques to drive business benefits for Gap Inc. and its brands. The team's focus is to shape the company's inventory management strategy through advanced data science and forecasting techniques. The successful candidate will lead the development of advanced forecasting models across various business functions, time horizons, and product hierarchies.
Areas of expertise include forecasting, time series, predictive modeling, supply chain analytics, and inventory management. You will support the team to build and deploy data and predictive analytics capabilities, in partnership with GapTech, PDM, Central Marketing & business partners across our brands.
What You'll Do
Build, validate, and maintain AI (Machine Learning (ML) / Deep Learning) models; diagnose and optimize model performance; and develop statistical models and analyses for ad hoc, business-focused analysis.
Develop software programs, algorithms and automated processes that cleanse, integrate, and evaluate large data sets from multiple disparate sources.
Manipulate large amounts of data across a diverse set of subject areas, collaborating with other data scientists and data engineers to prepare data pipelines for various modeling protocols.
Deliver sound, data-backed recommendations tied to business results, industry insights, and overall Gap Inc. ecosystem of technology, platform, and resources.
Communicate compelling, data-driven recommendations as well as potential trade-offs, backed by data analysis and/or model outputs to influence leaders' and stakeholders' decisions.
Build networks across the organization and partners to anticipate leader requests and influence data-driven decision making.
Guide discussions and empower more junior team members to identify the best solutions.
Who You Are
Experience in developing advanced algorithms using machine learning (ML), statistical, and optimization methods to enhance various business components in the retail sector.
Hands-on experience with forecasting models, what-if simulations, and prescriptive analytics (see the forecasting sketch after this list).
Experience with time series analysis, predictive modeling, hierarchical Bayesian, causal ML, and transformer-based algorithms.
Experience with creating business impact in supply chain, merchandise, inventory planning, or vendor management using advanced forecasting techniques.
Experience working directly with cross-functional teams such as product management, engineers, business partners.
Advanced proficiency in modern analytics tools and languages such as Python, R, Spark, SQL.
Advanced proficiency using SQL for efficient manipulation of large datasets in on prem and cloud distributed computing environments, such as Azure environments.
Ability to work both at a detailed level as well as to summarize findings and extrapolate knowledge to make strong recommendations for change.
Ability to collaborate with cross functional teams and influence product and analytics roadmap, with a demonstrated proficiency in relationship building.
Ability to assess relatively complex situations and analyze data to make judgments and recommend solutions.
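As a rough illustration of the forecasting work above, here is a minimal seasonal Holt-Winters baseline. It assumes statsmodels and pandas; the weekly demand series is synthetic and purely illustrative.

```python
# Minimal sketch: a seasonal Holt-Winters baseline forecast for weekly demand.
# Assumes statsmodels and pandas; the series is synthetic and illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2023-01-01", periods=104, freq="W")
demand = pd.Series(
    200 + 0.5 * np.arange(104) + 30 * np.sin(2 * np.pi * np.arange(104) / 52),
    index=idx,
)

model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=52
).fit()
forecast = model.forecast(13)          # next quarter at weekly grain
print(forecast.round(1).head())
```

A baseline like this is typically back-tested against holdout periods before more elaborate hierarchical or ML-based forecasters are layered on top.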
Required
BS with 7+ years of experience (or MS with 5+ years) in Data Science, Computer Science, Machine Learning, Applied Mathematics, or equivalent quantitative field.
People mentoring experience, ability to work independently on large scale projects.
Proven ability to lead teams in solving unstructured technical problems to achieve business impact.
Full stack experience across analytics, data science, machine learning, and data engineering
Educator Preparation Data Scientist
Data scientist job in Sacramento, CA
Responsibilities
Under the general direction of the Director, Educator Quality Center, the Educator Preparation Data Scientist will perform duties as outlined below:
Data Systems Project Management
* Design, develop, and manage systems and processes for collecting, extracting, loading, and integrating high-priority credential program and student data into the EdQ operational data store.
* Lead the development of dashboards and reporting tools that support CSU educator preparation programs.
* Collaborate with a wide range of stakeholders, including campus and Chancellor's Office staff, internal IT and IR&A teams, and external partners such as state and national education agencies, funding organizations, consultants, and vendors, to ensure data systems meet strategic and operational needs.
Strategic Data Analysis
* Conduct strategic data analyses to generate valid and reliable evidence that supports continuous improvement in CSU educator preparation programs and addresses California's educator workforce needs.
* Translate analytical findings into actionable insights and communicate them clearly through data visualizations, storytelling, and presentations tailored to non-technical audiences.
* Evaluate the impact of key initiatives using quantitative evidence and support data-informed decision-making across the CSU system.
Support Effective Use of Data
* Develop and refine systems and processes that improve the quality, consistency, and usability of existing educator preparation metrics administered by EdQ.
* Define and monitor key performance indicators (KPIs) in collaboration with educator preparation faculty and practitioners to reflect their goals and values.
* Promote the standardization and adoption of shared data tools, software platforms, and protocols across CSU educator preparation programs.
Qualifications
This position requires:
* Demonstrated interest in improving educational outcomes, particularly in educator preparation.
* Master's degree or higher in a technical, computational, or quantitative field (e.g., Data Science, Statistics, Computer Science, Economics, Educational Measurement, or related discipline).
* A minimum of four years of professional experience, including at least one year of hands-on experience in data science, analytics, or applied research.
* Proven ability to deliver data products, tools, or original research, preferably in an educational or public-sector context, using large or complex datasets.
* Strong quantitative skills, including experience with statistical methods used in education research.
* Proficiency with one or more statistical software tools or programming languages (e.g., R, Python, Stata).
* Working knowledge of SQL, relational databases, and data visualization tools (e.g., Tableau, Power BI).
* Experience with data mining, exploration, and visualization techniques.
* Familiarity with data systems design, development, and management.
* Strong project management skills and ability to work collaboratively with cross-functional teams.
* Experience administering surveys using platforms such as Qualtrics, SurveyMonkey, or QuestionPro.
* Experience working with sensitive or confidential data in a secure computing environment.
Preferred Qualifications
* Experience with ETL (Extract, Transform, Load) processes and data warehousing solutions.
* Proficiency with version control systems such as Git.
* Familiarity with data governance practices, especially in educational or public-sector environments.
* Experience using AI or machine learning tools (e.g., OpenAI) to enhance data analysis, automation, or reporting workflows.
* Knowledge of educator preparation policy, accreditation processes, or workforce analytics.
Application Period
Priority consideration will be given to candidates who apply by December 2, 2025. Applications will be accepted until the job posting is removed.
How To Apply
Please click "Apply Now" to complete the California State University, Chancellor's Office online employment application.
Equal Employment Opportunity
Consistent with California law and federal civil rights laws, the CSU provides equal opportunity in education and employment without unlawful discrimination or preferential treatment based on race, sex, color, ethnicity, or national origin. Reasonable accommodations will be provided for qualified applicants with disabilities who self-disclose by contacting the Senior Human Resources Manager at **************.
Title IX
Please view the Notice of Non-Discrimination on the Basis of Gender or Sex and Contact Information for Title IX Coordinator at: *********************************
E-Verify
This position requires new hire employment verification to be processed through the E-Verify program administered by the Department of Homeland Security, U.S. Citizenship and Immigration Services (DHS/USCIS), in partnership with the Social Security Administration (SSA).
If hired, you will be required to furnish proof that you are legally authorized to work in the United States. The CSU Chancellor's Office is not a sponsoring agency for staff and Management positions (i.e., H1-B VISAS).
COVID-19 Vaccination Policy
Per the CSU COVID-19 Vaccination Policy, it is strongly recommended that all Chancellor's Office employees who are accessing office and campus facilities follow COVID-19 vaccine recommendations adopted by the U.S. Centers for Disease Control and Prevention (CDC) and the California Department of Public Health (CDPH) applicable to their age, medical condition, and other relevant indications.
Mandated Reporter Per CANRA
The person holding this position is considered a 'mandated reporter' under the California Child Abuse and Neglect Reporting Act and is required to comply with the requirements set forth in CSU Executive Order 1083 as a condition of employment.
CSU Out of State Employment Policy
California State University, Office of the Chancellor, as part of the CSU system, is a State of California Employer. As such, the University requires all employees upon date of hire to reside in the State of California. As of January 1, 2022, the CSU Out-of-State Employment Policy prohibits the hiring of employees to perform CSU-related work outside the state of California.
Background
The Chancellor's Office policy requires that the selected candidate successfully complete a full background check (including a criminal records check) prior to assuming this position.
Advertised: Nov 18 2025 Pacific Standard Time
Applications close:
Senior Data Scientist
Data scientist job in Sacramento, CA
**_What Data Science contributes to Cardinal Health_** The Data & Analytics Function oversees the analytics life-cycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling.
Data Science applies base, scientific methodologies from various disciplines, techniques and tools that extracts knowledge and insight from data to solve complex business problems on large data sets, integrating multiple systems.
At Cardinal Health's Artificial Intelligence Center of Excellence (AI CoE), we are pushing the boundaries of healthcare with cutting-edge Data Science and Artificial Intelligence (AI). Our mission is to leverage the power of data to create innovative solutions that improve patient outcomes, streamline operations, and enhance the overall healthcare experience.
We are seeking a highly motivated and experienced Senior Data Scientist to join our team as a thought leader and architect of our AI strategy. You will play a critical role in fulfilling our vision through delivery of impactful solutions that drive real-world change.
**_Responsibilities_**
+ Lead the Development of Innovative AI solutions: Be responsible for designing, implementing, and scaling sophisticated AI solutions that address key business challenges within the healthcare industry by leveraging your expertise in areas such as Machine Learning, Generative AI, and RAG Technologies.
+ Develop advanced ML models for forecasting, classification, risk prediction, and other critical applications.
+ Explore and leverage the latest Generative AI (GenAI) technologies, including Large Language Models (LLMs), for applications like summarization, generation, classification and extraction.
+ Build robust Retrieval Augmented Generation (RAG) systems to integrate LLMs with vast repositories of healthcare and business data, ensuring accurate and relevant outputs (see the retrieval sketch after this list).
+ Shape Our AI Strategy: Work closely with key stakeholders across the organization to understand their needs and translate them into actionable AI-driven or AI-powered solutions.
+ Act as a champion for AI within Cardinal Health, influencing the direction of our technology roadmap and ensuring alignment with our overall business objectives.
+ Guide and mentor a team of Data Scientists and ML Engineers by providing technical guidance, mentorship, and support to a team of skilled and geographically distributed data scientists, while fostering a collaborative and innovative environment that encourages continuous learning and growth.
+ Embrace an AI-Driven Culture: foster a culture of data-driven decision-making, promoting the use of AI insights to drive business outcomes and improve customer experience and patient care.
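As a rough illustration of the retrieval step in a RAG system, here is a minimal sketch that uses TF-IDF as a stand-in for a dense embedding model and vector database. It assumes scikit-learn; the documents, question, and prompt template are hypothetical.

```python
# Minimal sketch of RAG retrieval, using TF-IDF as a stand-in for a dense embedding
# model + vector database. Assumes scikit-learn; documents and prompt are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Claims with a missing NDC code are rejected at intake and returned to the submitter.",
    "Formulary update: drug X moves to tier 2 effective March 1.",
    "Cold-chain shipments require temperature logging every 30 minutes.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_matrix = vectorizer.transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sims.argsort()[::-1][:k]
    return [documents[i] for i in ranked]

question = "Why was my claim rejected?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)   # this grounded prompt would then be sent to the LLM
```

In production the TF-IDF retriever would be swapped for an embedding model and a vector database, with retrieval quality evaluated before answers reach users.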
**_Qualifications_**
+ 8-12 years of experience with a minimum of 4 years of experience in data science, with a strong track record of success in developing and deploying complex AI/ML solutions, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ GenAI Proficiency: Deep understanding of Generative AI concepts, including LLMs, RAG technologies, embedding models, prompting techniques, and vector databases, along with evaluating retrievals from RAGs and GenAI models without ground truth
+ Experience building production-ready Generative AI applications involving RAG, LLMs, vector databases, and embedding models.
+ Extensive knowledge of healthcare data, including clinical data, patient demographics, and claims data. Understanding of HIPAA and other relevant regulations, preferred.
+ Experience working with cloud platforms like Google Cloud Platform (GCP) for data processing, model training, evaluation, monitoring, deployment and support preferred.
+ Proven ability to lead data science projects, mentor colleagues, and effectively communicate complex technical concepts to both technical and non-technical audiences preferred.
+ Proficiency in Python, statistical programming languages, machine learning libraries (Scikit-learn, TensorFlow, PyTorch), cloud platforms, and data engineering tools preferred.
+ Experience in Cloud Functions, VertexAI, MLFlow, Storage Buckets, IAM Principles and Service Accounts preferred.
+ Experience in building end-to-end ML pipelines, from data ingestion and feature engineering to model training, deployment, and scaling preferred.
+ Experience in building and implementing CI/CD pipelines for ML models and other solutions, ensuring seamless integration and deployment in production environments preferred.
+ Familiarity with RESTful API design and implementation, including building robust APIs to integrate your ML models and GenAI solutions with existing systems preferred.
+ Working understanding of software engineering patterns, solutions architecture, information architecture, and security architecture with an emphasis on ML/GenAI implementations preferred.
+ Experience working in Agile development environments, including Scrum or Kanban, and a strong understanding of Agile principles and practices preferred.
+ Familiarity with DevSecOps principles and practices, incorporating coding standards and security considerations into all stages of the development lifecycle preferred.
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $121,600 - $173,700
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before payday with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 11/05/2025
*If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Senior Data Engineer
Data scientist job in Fremont, CA
If you're hands-on with modern data platforms, cloud tech, and big data tools, and you like building solutions that are secure, repeatable, and fast, this role is for you.
As a Senior Data Engineer, you will design, build, and maintain scalable data pipelines that transform raw information into actionable insights. The ideal candidate will have strong experience across modern data platforms, cloud environments, and big data technologies, with a focus on building secure, repeatable, and high-performing solutions.
Responsibilities:
Design, develop, and maintain secure, scalable data pipelines to ingest, transform, and deliver curated data into the Common Data Platform (CDP).
Participate in Agile rituals and contribute to delivery within the Scaled Agile Framework (SAFe).
Ensure quality and reliability of data products through automation, monitoring, and proactive issue resolution.
Deploy alerting and auto-remediation for pipelines and data stores to maximize system availability.
Apply a security first and automation-driven approach to all data engineering practices.
Collaborate with cross-functional teams (data scientists, analysts, product managers, and business stakeholders) to align infrastructure with evolving data needs.
Stay current on industry trends and emerging tools, recommending improvements to strengthen efficiency and scalability.
Qualifications:
Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience).
At least 3 years of experience with Python and PySpark, including Jupyter notebooks and unit testing.
At least 2 years of experience with Databricks, Collibra, and Starburst.
Proven work with relational and NoSQL databases, including STAR and dimensional modeling approaches.
Hands-on experience with modern data stacks: object stores (S3), Spark, Airflow, lakehouse architectures, and cloud warehouses (Snowflake, Redshift).
Strong background in ETL and big data engineering (on-prem and cloud).
Work within enterprise cloud platforms (CFS2, Cloud Foundational Services 2/EDS) for governance and compliance.
Experience building end-to-end pipelines for structured, semi-structured, and unstructured data using Spark.