Staff Data Scientist
Data scientist job in San Francisco, CA
Staff Data Scientist | San Francisco | $250K-$300K + Equity
We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI.
In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.
What you'll do:
Drive deep-dive analyses on user behavior, product performance, and growth drivers
Design and interpret A/B tests to measure product impact at scale
Build scalable data models, pipelines, and dashboards for company-wide use
Partner with Product and Engineering to embed experimentation best practices
Evaluate ML models, ensuring business relevance, performance, and trade-off clarity
What we're looking for:
5+ years in data science or product analytics at scale (consumer or marketplace preferred)
Advanced SQL and Python skills, with strong foundations in statistics and experimental design
Proven record of designing, running, and analyzing large-scale experiments
Ability to analyze and reason about ML models (classification, recommendation, LLMs)
Strong communicator with a track record of influencing cross-functional teams
If you're excited by the sound of this challenge, apply today and we'll be in touch.
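For candidates wondering what "design and interpret A/B tests" looks like in practice, here is a minimal, purely illustrative sketch of a two-proportion z-test. The function name and the numbers are hypothetical and are not drawn from this company's stack:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal survival function
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical experiment: 4.0% control vs 4.6% treatment conversion
z, p = two_proportion_ztest(400, 10_000, 460, 10_000)
```

At these sample sizes the 0.6-point lift is statistically significant at the conventional 5% level; in real experimentation work this sits inside a framework handling power analysis, sequential peeking, and multiple comparisons.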
Data Scientist
Data scientist job in Long Beach, CA
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India. We are seeking a highly analytical and technically skilled Data Scientist to transform complex, multi-source data into unified, actionable insights used for executive reporting and decision-making.
This role requires expertise in business intelligence design, data modeling, metadata management, data integrity validation, and the development of dashboards, reports, and analytics used across operational and strategic environments.
The ideal candidate thrives in a fast-paced environment, demonstrates strong investigative skills, and can collaborate effectively with technical teams, business stakeholders, and leadership.
Essential Duties & Responsibilities
As a Data Scientist, participate across the full solution lifecycle: business case, planning, design, development, testing, migration, and production support.
Analyze large and complex datasets with accuracy and attention to detail.
Collaborate with users to develop effective metadata and data relationships.
Identify reporting and dashboard requirements across business units.
Determine strategic placement of business logic within ETL or metadata models.
Build enterprise data warehouse metadata/semantic models.
Design and develop unified dashboards, reports, and data extractions from multiple data sources.
Develop and execute testing methodologies for reports and metadata models.
Document BI architecture, data lineage, and project report requirements.
Provide technical specifications and data definitions to support the enterprise data dictionary.
Apply analytical skills and Data Science techniques to understand business processes, financial calculations, data flows, and application interactions.
Identify and implement improvements, workarounds, or alternative solutions related to ETL processes, ensuring integrity and timeliness.
Create UI components or portal elements (e.g., SharePoint) for dynamic or interactive stakeholder reporting.
As a Data Scientist, download and process SQL database information to build Power BI or Tableau reports (including cybersecurity awareness campaigns).
Utilize SQL, Python, R, or similar languages for data analysis and modeling.
Support process optimization through advanced modeling, leveraging experience as a Data Scientist where needed.
Required Knowledge & Attributes
Highly self-motivated with strong organizational skills and ability to manage multiple verbal and written assignments.
Experience collaborating across organizational boundaries for data sourcing and usage.
Analytical understanding of business processes, forecasting, capacity planning, and data governance.
Proficient with BI tools (Power BI, Tableau, PBIRS, SSRS, SSAS).
Strong Microsoft Office skills (Word, Excel, Visio, PowerPoint).
High attention to detail and accuracy.
Ability to work independently, demonstrate ownership, and ensure high-quality outcomes.
Strong communication, interpersonal, and stakeholder engagement skills.
Deep understanding that data integrity and consistency are essential for adoption and trust.
Ability to shift priorities and adapt within fast-paced environments.
Required Education & Experience
Bachelor's degree in Computer Science, Mathematics, or Statistics (or equivalent experience).
3+ years of BI development experience.
3+ years with Power BI and supporting Microsoft stack tools (SharePoint 2019, PBIRS/SSRS, Excel 2019/2021).
3+ years of experience with SDLC/project lifecycle processes.
3+ years of experience with data warehousing methodologies (ETL, Data Modeling).
3+ years of VBA experience in Excel and Access.
Strong ability to write SQL queries and work with SQL Server 2017-2022.
Experience with BI tools including PBIRS, SSRS, SSAS, Tableau.
Strong analytical skills in business processes, financial modeling, forecasting, and data flow understanding.
Critical thinking and problem-solving capabilities.
Experience producing high-quality technical documentation and presentations.
Excellent communication and presentation skills, with the ability to explain insights to leadership and business teams.
Benefits
Medical coverage and Health Savings Account (HSA) through Anthem
Dental/Vision/Various Ancillary coverages through Unum
401(k) retirement savings plan
Paid-time-off options
Company-paid Employee Assistance Program (EAP)
Discount programs through ADP WorkforceNow
Additional Details
The base range for this contract position is $73-$83 per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Qualified applicants with arrest or conviction records will be considered.
About Us
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States and globally with offices in Los Angeles, Atlanta, New York, Mexico, Japan, India, and more. STAND 8 focuses on the "bleeding edge" of technology and leverages automation, process, marketing, and over fifteen years of success and growth to provide a world-class experience for our customers, partners, and employees.
Our mission is to impact the world positively by creating success through PEOPLE, PROCESS, and TECHNOLOGY.
Check out more at ************** and reach out today to explore opportunities to grow together!
By applying to this position, your data will be processed in accordance with the STAND 8 Privacy Policy.
Data Scientist
Data scientist job in San Francisco, CA
We're working with a Series A health tech start-up pioneering a revolutionary approach to healthcare AI, developing neurosymbolic systems that combine statistical learning with structured medical knowledge. Their technology is being adopted by leading health systems and insurers to enhance patient outcomes through advanced predictive analytics.
We're seeking Machine Learning Engineers who excel at the intersection of data science, modeling, and software engineering. You'll design and implement models that extract insights from longitudinal healthcare data, balancing analytical rigor, interpretability, and scalability.
This role offers a unique opportunity to tackle foundational modeling challenges in healthcare, where your contributions will directly influence clinical, actuarial, and policy decisions.
Key Responsibilities
Develop predictive models to forecast disease progression, healthcare utilization, and costs using temporal clinical data (claims, EHR, laboratory results, pharmacy records)
Design interpretable and explainable ML solutions that earn the trust of clinicians, actuaries, and healthcare decision-makers
Research and prototype innovative approaches leveraging both classical and modern machine learning techniques
Build robust, scalable ML pipelines for training, validation, and deployment in distributed computing environments
Collaborate cross-functionally with data engineers, clinicians, and product teams to ensure models address real-world healthcare needs
Communicate findings and methodologies effectively through visualizations, documentation, and technical presentations
Required Qualifications
Strong foundation in statistical modeling, machine learning, or data science, with preference for experience in temporal or longitudinal data analysis
Proficiency in Python and ML frameworks (PyTorch, JAX, NumPyro, PyMC, etc.)
Proven track record of transitioning models from research prototypes to production systems
Experience with probabilistic methods, survival analysis, or Bayesian inference (highly valued)
Bonus Qualifications
Experience working with clinical data and healthcare terminologies (ICD, CPT, SNOMED CT, LOINC)
Background in actuarial modeling, claims forecasting, or risk adjustment methodologies
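As an illustration of the survival-analysis experience this posting highly values, here is a minimal Kaplan-Meier estimator sketch. The data is hypothetical; real work on longitudinal clinical data would typically use a library such as lifelines or a Bayesian framework from the list above:

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve for right-censored data.

    durations: time to event or censoring
    observed:  1 if the event occurred, 0 if the subject was censored
    """
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    survival, s = {}, 1.0
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = at_t = 0
        # Group all subjects tied at time t
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            at_t += 1
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk
            survival[t] = s
        n_at_risk -= at_t
    return survival

# Hypothetical cohort: times in months, 0 = censored follow-up
curve = kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 0])
```

Censored subjects still count toward the at-risk denominator until they leave the study, which is what distinguishes this from a naive empirical survival fraction.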
Lead Data Scientist - Computer Vision
Data scientist job in Santa Clara, CA
Lead Data Scientist - Computer Vision/Image Processing
About the Role
We are seeking a Lead Data Scientist to drive the strategy and execution of data science initiatives, with a particular focus on computer vision systems and image processing. The ideal candidate has deep expertise in techniques including filtering, binary morphology, perspective/affine transformation, and edge detection.
Qualifications
Solid knowledge of computer vision programs and image processing techniques: Filtering, Binary Morphology, Perspective/Affine Transformation, Edge Detection
Strong understanding of machine learning: Regression, Supervised and Unsupervised Learning
Proficiency in Python and libraries such as OpenCV, NumPy, scikit-learn, TensorFlow/PyTorch.
Familiarity with version control (Git) and collaborative development practices
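As a small, purely illustrative example of the affine-transformation knowledge listed above: an affine warp is just a 2x3 matrix applied to homogeneous point coordinates. The matrix and points below are hypothetical; production code would use OpenCV's warpAffine on whole images:

```python
def apply_affine(points, m):
    """Apply a 2x3 affine matrix (the format OpenCV's warpAffine expects)
    to a list of (x, y) points."""
    (a, b, tx), (c, d, ty) = m
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

# Hypothetical: 90-degree rotation about the origin plus a shift of (10, 0)
m = [(0.0, -1.0, 10.0),
     (1.0,  0.0,  0.0)]
out = apply_affine([(1.0, 0.0), (0.0, 1.0)], m)
```

The same 2x3 form covers translation, rotation, scaling, and shear; perspective transforms need a full 3x3 homography with a divide step.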
Data Scientist
Data scientist job in Thousand Oaks, CA
Axtria is a leading global provider of cloud software and data analytics tailored for the Life Sciences industry. Since our inception in 2010, we have pioneered technology-driven solutions to revolutionize the commercialization journey, driving sales growth, and enhancing patient healthcare outcomes. Committed to impacting millions of lives positively, our innovative platforms deploy cutting-edge Artificial Intelligence and Machine Learning technologies. With a presence in over 30 countries, Axtria is a key player in delivering commercial solutions to the Life Sciences sector, consistently recognized for our growth and technological advancements.
Job Description:
We are looking for a Project Lead for our Decision Science practice. Success in this position requires managing consulting projects/engagements delivering Brand Analytics, Real World Data (RWD) Analytics, Commercial Analytics, Marketing Analytics, and Market Access Analytics solutions.
Candidates will be expected to have familiarity with:
Patient analytics using Real World Data (RWD) sources such as Claims data, EHR/EMR data, lab/diagnostic testing data, etc.
Predictive modeling using Real World Data
Patient and HCP segmentation
Campaign effectiveness, promotion response modeling, marketing mix optimization
Marketing analytics incl. digital marketing
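As a hedged illustration of the promotion response modeling mentioned above: a common first step is an adstock transform, which models the carry-over effect of promotional spend before it is fed into a regression. The decay value and spend series here are hypothetical:

```python
def geometric_adstock(spend, decay=0.5):
    """Carry-over effect of promotion: a_t = x_t + decay * a_{t-1}."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + decay * carried
        out.append(carried)
    return out

# Hypothetical weekly promotion spend with lingering effect
adstocked = geometric_adstock([100, 0, 0, 50], decay=0.5)
```

In a marketing-mix model the decay rate is usually estimated from data rather than fixed, often jointly with a saturation curve.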
Required skills and experience:
Overall, 4-8 years of relevant work experience and 2+ years of US local experience in pharma analytics
Knowledge of the Biopharmaceutical domain. Prior experience in analytics in therapeutic areas of Oncology, Inflammation, Cardio and Bone will be preferred
Exposure to syndicated data sets including Claims, EMR/EHR data and exposure to/experience working with large data sets.
Strong quantitative and analytical skills, including sound knowledge of statistical concepts and predictive modeling/machine learning.
Demonstrated ability to frame and scope business problems, design solutions, and deliver results.
Excellent spoken and written communication skills, including superior visualization, storyboarding, and presentation skills.
Ability to communicate actionable analytical findings to a technical or non-technical audience in clear and concise language.
Relevant expertise in using analytical tools such as R/Python, Alteryx, Dataiku etc. and ability to quickly master new analytics tools/software as needed.
Ability to lead project teams and own project delivery.
Logistics and Location:
U.S. Citizens and those authorized to work in the U.S. are encouraged to apply.
The position is based out of Thousand Oaks, and the candidate needs to be at the client site 3-5 days per week.
Axtria is an EEO/AA employer M/F/D/V. We offer attractive performance-based compensation packages including salary and bonus. Comprehensive benefits are available including health insurance, flexible spending accounts, and 401k with company match. Immigration sponsorship will be considered.
Pay Transparency Laws
Salary range or hourly pay range for the position
The salary range for this position is $83,200 to $129,738 annually. The actual salary will vary based on applicant's education, experience, skills, and abilities, as well as internal equity and alignment with market data. The salary may also be adjusted based on applicant's geographic location.
The salary range reflected is based on a primary work location of Thousand Oaks, CA. The actual salary may vary for applicants in a different geographic location.
Data Scientist
Data scientist job in Santa Rosa, CA
Key Responsibilities
Design and productionize models for opportunity scanning, anomaly detection, and significant change detection across CRM, streaming, ecommerce, and social data.
Define and tune alerting logic (thresholds, SLOs, precision/recall) to minimize noise while surfacing high-value marketing actions.
Partner with marketing, product, and data engineering to operationalize insights into campaigns, playbooks, and automated workflows, with clear monitoring and experimentation.
Required Qualifications
Strong proficiency in Python (pandas, NumPy, scikit-learn; plus experience with PySpark or similar for large-scale data) and SQL on modern warehouses (e.g., BigQuery, Snowflake, Redshift).
Hands-on experience with time-series modeling and anomaly/changepoint/significant-movement detection (e.g., STL decomposition, EWMA/CUSUM, Bayesian/Prophet-style models, isolation forests, robust statistics).
Experience building and deploying production ML pipelines (batch and/or streaming), including feature engineering, model training, CI/CD, and monitoring for performance and data drift.
Solid background in statistics and experimentation: hypothesis testing, power analysis, A/B testing frameworks, uplift/propensity modeling, and basic causal inference techniques.
Familiarity with cloud platforms (GCP/AWS/Azure), orchestration tools (e.g., Airflow/Prefect), and dashboarding/visualization tools to expose alerts and model outputs to business users.
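By way of illustration only, here is a rolling z-score detector, a simpler cousin of the EWMA/CUSUM methods named in the qualifications. The window, threshold, and data are hypothetical, not this team's actual alerting logic:

```python
import statistics

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag points more than `threshold` sample standard deviations
    away from the mean of the preceding `window` points."""
    flags = [False] * len(series)
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu = statistics.mean(past)
        sd = statistics.stdev(past)
        if sd > 0 and abs(series[i] - mu) > threshold * sd:
            flags[i] = True
    return flags

# Hypothetical daily metric with one spike at index 6
data = [10, 11, 10, 12, 11, 10, 50, 11, 10]
flags = rolling_zscore_anomalies(data)
```

Tuning the threshold against labeled incidents is exactly the precision/recall exercise the responsibilities describe: too low and alerts are noise, too high and real movements slip through.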
Principal Data Scientist
Data scientist job in Alhambra, CA
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms such as Databricks (or similar) to meet the increasing demand for predictive analytics and AI solutions.
The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.
The Principal Data Scientist will possess:
Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.
5+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
3+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
3+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
2+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
2+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
2+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
2+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
1+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
Education:
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis.
At least one of the following industry-recognized certifications in data science or cloud analytics:
Microsoft Azure Data Scientist Associate (DP-100)
Databricks Certified Data Scientist or Machine Learning Professional
AWS Machine Learning Specialty
Google Professional Data Engineer
Or equivalent advanced analytics certifications.
The certification is required and may not be substituted with additional experience.
Data Scientist V
Data scientist job in Menlo Park, CA
Creospan is a growing tech collective of makers, shakers, and problem solvers, offering solutions today that will propel businesses into a better tomorrow. “Tomorrow's ideas, built today!” In addition to being able to work alongside equally brilliant and motivated developers, our consultants appreciate the opportunity to learn and apply new skills and methodologies to different clients and industries.
NO C2C/3RD PARTY. LOOKING FOR W2 CANDIDATES ONLY. Must be able to work in the US without sponsorship now or in the future.
Summary:
The main function of the Data Scientist is to produce innovative solutions driven by exploratory data analysis from complex and high-dimensional datasets.
Job Responsibilities:
• Apply knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to prototype development and product improvement.
• Use a flexible, analytical approach to design, develop, and evaluate predictive models and advanced algorithms that lead to optimal value extraction from the data.
• Generate and test hypotheses and analyze and interpret the results of product experiments.
• Work with product engineers to translate prototypes into new products, services, and features and provide guidelines for large-scale implementation.
• Provide Business Intelligence (BI) and data visualization support, which includes, but is not limited to, support for the online customer service dashboards and other ad-hoc requests requiring data analysis and visual support.
Skills:
• Experienced in either programming languages such as Python and/or R, big data tools such as Hadoop, or data visualization tools such as Tableau.
• The ability to communicate effectively in writing, including conveying complex information and promoting in-depth engagement on technical topics.
• Experience working with large datasets.
Education/Experience:
• Master of Science degree in computer science or in a relevant field.
Senior Data Scientist
Data scientist job in Pleasanton, CA
Net2Source is a Global Workforce Solutions Company headquartered in NJ, USA, with branch offices in the Asia Pacific region. We are one of the fastest-growing IT consulting companies across the USA, and we are hiring a Senior Data Scientist for one of our clients. We offer a wide gamut of consulting solutions customized to our 450+ clients, ranging from Fortune 500/1000 companies to start-ups, across verticals like Technology, Financial Services, Healthcare, Life Sciences, Oil & Gas, Energy, Retail, Telecom, Utilities, Manufacturing, the Internet, and Engineering.
Position: Senior Data Scientist
Location: Pleasanton, CA (Onsite) - Locals Only
Type: Contract
Exp Level - 10+ Years
Required Skills
Design, develop, and deploy advanced marketing models.
Build and productionize NLP solutions.
Partner with Marketing and Business stakeholders to translate business objectives into data science solutions.
Work with large-scale structured and unstructured datasets using SQL, Python, and distributed systems.
Evaluate and implement state-of-the-art ML/NLP techniques to improve model performance and business impact.
Communicate insights, results, and recommendations clearly to both technical and non-technical audiences.
Required Qualifications
5+ years of experience in data science or applied machine learning, with a strong focus on marketing analytics.
Hands-on experience building predictive marketing models (e.g., segmentation, attribution, personalization).
Strong expertise in NLP techniques and libraries (e.g., spaCy, NLTK, Hugging Face, Gensim).
Proficiency in Python, SQL, and common data science libraries (pandas, NumPy, scikit-learn).
Solid understanding of statistics, machine learning algorithms, and model evaluation.
Experience deploying models into production environments.
Strong communication and stakeholder management skills.
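As an illustrative sketch of the NLP fundamentals behind these requirements, here is a toy TF-IDF and cosine-similarity implementation. All data is hypothetical, and real work on this team would use the spaCy or scikit-learn tooling named above rather than hand-rolled code:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF with smoothed IDF (a toy stand-in for
    scikit-learn's TfidfVectorizer)."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: in how many docs does each term appear?
    df = Counter(term for doc in tokenized for term in set(doc))
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term->weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

docs = ["summer sale email campaign",
        "summer sale sms campaign",
        "quarterly earnings report"]
vecs = tfidf_vectors(docs)
```

The two campaign messages score as far more similar to each other than either does to the earnings report, which is the basic mechanic behind clustering and deduplicating marketing content.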
Why Work With Us?
We believe in more than just jobs; we build careers. At Net2Source, we champion leadership at all levels, celebrate diverse perspectives, and empower you to make an impact. Think work-life balance, professional growth, and a collaborative culture where your ideas matter.
Our Commitment to Inclusion & Equity
Net2Source is an equal opportunity employer, dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all, regardless of age, gender, ethnicity, disability, or other protected characteristics.
Awards & Recognition
America's Most Honored Businesses (Top 10%)
Fastest-Growing Staffing Firm by Staffing Industry Analysts
INC 5000 List for Eight Consecutive Years
Top 100 by Dallas Business Journal
Spirit of Alliance Award by Agile1
Maddhuker Singh
Sr Account & Delivery Manager
***********************
Data Engineer
Data scientist job in San Francisco, CA
Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community.
Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.
Core Responsibilities:
Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
Partner with Finance, Product and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments
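As a hedged sketch of the cohort analysis mentioned in the responsibilities: retention is typically computed as the fraction of each signup cohort still active at each month offset. The event schema below is hypothetical; the real pipeline would likely live in SQL or the BI layer:

```python
from collections import defaultdict

def retention_table(events):
    """Monthly cohort retention from (user_id, signup_month, active_month)
    tuples, with months as integer indexes (hypothetical schema)."""
    cohort_users = defaultdict(set)
    active = defaultdict(set)
    for user, signup, month in events:
        cohort_users[signup].add(user)
        # Key activity by (cohort, months since signup)
        active[(signup, month - signup)].add(user)
    return {
        (signup, offset): len(users) / len(cohort_users[signup])
        for (signup, offset), users in active.items()
    }

events = [
    ("a", 0, 0), ("b", 0, 0), ("a", 0, 1),               # cohort 0: half retained
    ("c", 1, 1), ("d", 1, 1), ("c", 1, 2), ("d", 1, 2),  # cohort 1: fully retained
]
table = retention_table(events)
```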
Required Qualifications:
3-5+ years experience in data engineering, analytics engineering, or similar role with demonstrated ability to work with large-scale datasets
Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
Proficiency in at least one programming language (Python, R) for data manipulation and analysis
Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
Ability to communicate technical concepts clearly to non-technical stakeholders
Data Engineer / Analytics Specialist
Data scientist job in Santa Rosa, CA
Citizenship Requirement: U.S. Citizens Only
ITTConnect is seeking a Data Engineer / Analytics Specialist to work for one of our clients, a major technology consulting firm headquartered in Europe. They are experts in tailored technology consulting and services for banks, investment firms, and other financial-vertical clients.
Job location: San Francisco Bay area or NY City.
Work Model: Ability to come into the office as requested
Seniority: 10+ years of total experience
About the role:
The Data Engineer / Analytics Specialist will support analytics, product insights, and AI initiatives. You will build robust data pipelines, integrate data sources, and enhance the organization's analytical foundations.
Responsibilities:
Build and operate Snowflake-based analytics environments.
Develop ETL/ELT pipelines (DBT, Airflow, etc.).
Integrate APIs, external data sources, and streaming inputs.
Perform query optimization, basic data modeling, and analytics support.
Enable downstream GenAI and analytics use cases.
Requirements:
10+ years of overall technology experience
3+ years hands-on AWS experience required
Strong SQL and Snowflake experience.
Hands-on pipeline engineering with DBT, Airflow, or similar.
Experience with API integrations and modern data architectures.
Data Engineer
Data scientist job in Irvine, CA
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it.
If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
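As a hedged aside, the automated data-quality and schema-drift checks listed above can be sketched in a few lines; the column names and types below are illustrative assumptions, not the client's actual schema.

```python
# Minimal sketch of a schema-drift check (illustrative schema, not the client's).
EXPECTED_SCHEMA = {"user_id": "bigint", "event_ts": "timestamp", "amount": "double"}

def schema_drift(observed: dict) -> dict:
    """Compare an observed {column: type} mapping against the expected
    schema and report added, missing, and retyped columns."""
    added = sorted(set(observed) - set(EXPECTED_SCHEMA))
    missing = sorted(set(EXPECTED_SCHEMA) - set(observed))
    retyped = sorted(
        col for col in set(observed) & set(EXPECTED_SCHEMA)
        if observed[col] != EXPECTED_SCHEMA[col]
    )
    return {"added": added, "missing": missing, "retyped": retyped}
```

In practice a report like this would feed the incident-response process mentioned above, for example by raising an alert whenever any of the three lists is non-empty.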
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience with T‑SQL and with Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
Mentor engineers on best practices in containerized data engineering and MLOps.
Sr Data Platform Engineer
Data scientist job in Elk Grove, CA
Hybrid role, three days a week in the office in Elk Grove, CA; no remote capabilities.
This is a direct hire opportunity.
We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
Implement ETL workflows with Spark, Delta Lake, and cloud‑native tools.
Support big data platforms (Databricks, Snowflake, GCP) in production.
Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
Ensure governance, security, and compliance across data systems.
Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
Collaborate cross‑functionally to translate business needs into technical solutions.
Qualifications:
7+ years in data engineering with production pipeline experience.
Expertise in Spark ecosystem, Databricks, Snowflake, GCP.
Strong skills in PySpark, Python, SQL.
Experience with RAG systems, semantic search, and LLM integration.
Familiarity with Kafka, Pub/Sub, vector databases.
Proven ability to optimize ETL jobs and troubleshoot production issues.
Agile team experience and excellent communication skills.
Certifications in Databricks, Snowflake, GCP, or Azure.
Exposure to Airflow, BI tools (Power BI, Looker Studio).
Lead Data Scientist GenAI, Strategic Analytics - Data Science
Data scientist job in Fresno, CA
Deloitte is at the leading edge of GenAI innovation, transforming Strategic Analytics and shaping the future of Finance. We invite applications from highly skilled and experienced Lead Data Scientists ready to drive the development of our next-generation GenAI solutions.
The Team
Strategic Analytics is a dynamic part of our Finance FP&A organization, dedicated to empowering executive leaders across the firm, as well as our partners in financial and operational functions. Our team harnesses the power of cloud computing, data science, AI, and strategic expertise-combined with deep institutional knowledge-to deliver insights that inform our most critical business decisions and fuel the firm's ongoing growth.
GenAI is at the forefront of our innovation agenda and a key strategic priority for our future. We are rapidly developing groundbreaking products and solutions poised to transform both our organization and our clients. As part of our team, the selected candidate will play a pivotal role in driving the success of these high-impact initiatives.
Recruiting for this role ends on December 14, 2025
Work You'll Do
Client Engagement & Solution Scoping
+ Partner with stakeholders to analyze business requirements, pain points, and objectives relevant to GenAI use cases.
+ Facilitate workshops to identify, prioritize, and scope impactful GenAI applications (e.g., text generation, code synthesis, conversational agents).
+ Clearly articulate GenAI's value proposition, including efficiency gains, risk mitigation, and innovation.
Solution Architecture & Design
+ Architect holistic GenAI solutions, selecting and customizing appropriate models (GPT, Llama, Claude, Zora AI, etc.).
+ Design scalable integration strategies for embedding GenAI into existing client systems (ERP, CRM, KM platforms).
+ Define and govern reliable, ethical, and compliant data sourcing and management.
Development & Customization
+ Lead model fine-tuning, prompt engineering, and customization for client-specific needs.
+ Oversee the development of GenAI-powered applications and user-friendly interfaces, ensuring robustness and exceptional user experience.
+ Drive thorough validation, testing, and iteration to ensure quality and accuracy.
Implementation, Deployment & Change Management
+ Manage solution rollout, including cloud setup, configuration, and production deployment.
+ Guide clients through adoption: deliver training, create documentation, and provide enablement resources for users.
Risk, Ethics & Compliance
+ Lead efforts in responsible AI, ensuring safeguards against bias, privacy breaches, and unethical outcomes.
+ Monitor performance, implement KPIs, and manage model retraining and auditing processes.
Stakeholder Communication
+ Prepare executive-level reports, dashboards, and demos to summarize progress and impact.
+ Coordinate across internal teams, tech partners, and clients for effective project delivery.
Continuous Improvement & Thought Leadership
+ Stay current on GenAI trends, best practices, and emerging technologies; share insights across teams.
+ Mentor junior colleagues, promote knowledge transfer, and contribute to reusable methodologies.
Qualifications
Required:
+ Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, or related field.
+ 5+ years of hands-on experience delivering machine learning or AI solutions, preferably including generative AI.
+ Independent thinker who can create the vision and execute on it, transforming data into high-end client products.
+ Demonstrated accomplishments in the following areas:
+ Deep understanding of GenAI models and approaches (LLMs, transformers, prompt engineering).
+ Proficiency in Python (PyTorch, TensorFlow, HuggingFace), Databricks, ML pipelines, and cloud-based deployment (Azure, AWS, GCP).
+ Experience integrating AI into enterprise applications, building APIs, and designing scalable workflows.
+ Knowledge of solution architecture, risk assessment, and mapping technology to business goals.
+ Familiarity with agile methodologies and iterative delivery.
+ Commitment to responsible AI, including data ethics, privacy, and regulatory compliance.
+ Ability to travel 0-10%, on average, based on the work you do and the clients and industries/sectors you serve
+ Limited immigration sponsorship may be available.
Preferred:
+ Relevant Certifications: May include Google Cloud Professional ML Engineer, Microsoft Azure AI Engineer, AWS Certified Machine Learning, or specialized GenAI/LLM credentials.
+ Experience with data visualization tools such as Tableau
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Deloitte, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $102,500 - $188,900.
You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Information for applicants with a need for accommodation
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
Data Scientist Co-op - Summer/Fall 2026 (Newbury Park, CA, US)
Data scientist job in Parksdale, CA
If you are looking for a challenging and exciting career in the world of technology, then look no further. Skyworks is an innovator of high-performance analog semiconductors whose solutions are powering the wireless networking revolution. Through our broad technology expertise and one of the most extensive product portfolios in the industry, we are Connecting Everyone and Everything, All the Time.
At Skyworks, you will find a fast-paced environment with a strong focus on global collaboration, minimal layers of management, and the freedom to make meaningful contributions in a setting that encourages creative thinking. We are excited about the opportunity to work with you and glad you want to be part of a team of talented individuals who together are changing the way the world communicates.
Requisition ID: 76045
Description
We are seeking a highly motivated and detail-oriented Data Scientist Co-op to join our Newbury Park Product Engineering team for Summer/Fall 2026. This role offers a unique opportunity to work closely with product engineers in Newbury Park to develop data-driven tools and AI solutions that enhance our analysis of module ATE and wafer probe data.
Responsibilities
* Collaborate with module product engineers to understand data analysis needs and translate them into scalable solutions.
* Develop and deploy tools for automated analysis of ATE histograms and statistical plots.
* Apply machine learning and statistical techniques to identify patterns, anomalies, and insights in test data.
* Build dashboards and visualizations to support engineering decision-making.
* Assist in automating routine data checks and reporting processes.
* Document methodologies and present findings to cross-functional teams.
Required Experience and Skills
* Currently pursuing a Bachelor's or Master's degree in Mathematics, Statistics, Computer Science, Data Science, or a related field.
* Ability to work onsite up to 6 months (June - December 2026).
* Strong foundation in statistical analysis, data visualization, and machine learning.
* Proficiency in Exensio (preferred), JMP, or other data analysis tools.
* Experience with data visualization libraries and tools.
* Familiarity with semiconductor test data (ATE, wafer probe) is a plus but not required.
* Proficiency in Microsoft Copilot for productivity and automation tasks.
* Experience with Power BI for building interactive dashboards and reports.
* Excellent communication and collaboration skills.
What You'll Gain
* Hands-on experience in applying data science to real-world engineering problems.
* Exposure to semiconductor product development and test engineering workflows.
* Opportunity to contribute to impactful projects that improve product quality and efficiency.
The typical pay range for an Engineering intern across the U.S. is currently USD $26.00 - $47.50 per hour and for a Non-Engineering intern across the U.S. is currently USD $22.50 - $42.00 per hour. Starting pay will depend on level of education, the ultimate job duties and requirements, and work location. Skyworks has different pay ranges for different work locations in the U.S.
Skyworks is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law. Skyworks strives to create an accessible workplace; if you need an accommodation due to a disability, please contact us at accommodations@skyworksinc.com.
Staff Data Scientist
Data scientist job in San Jose, CA
Staff Data Scientist | San Francisco | $250K-$300K + Equity
We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI.
In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.
What you'll do:
Drive deep-dive analyses on user behavior, product performance, and growth drivers
Design and interpret A/B tests to measure product impact at scale
Build scalable data models, pipelines, and dashboards for company-wide use
Partner with Product and Engineering to embed experimentation best practices
Evaluate ML models, ensuring business relevance, performance, and trade-off clarity
What we're looking for:
5+ years in data science or product analytics at scale (consumer or marketplace preferred)
Advanced SQL and Python skills, with strong foundations in statistics and experimental design
Proven record of designing, running, and analyzing large-scale experiments
Ability to analyze and reason about ML models (classification, recommendation, LLMs)
Strong communicator with a track record of influencing cross-functional teams
If you're excited by the sound of this challenge, apply today and we'll be in touch.
Data Scientist
Data scientist job in San Jose, CA
Key Responsibilities
Design and productionize models for opportunity scanning, anomaly detection, and significant change detection across CRM, streaming, ecommerce, and social data.
Define and tune alerting logic (thresholds, SLOs, precision/recall) to minimize noise while surfacing high-value marketing actions.
Partner with marketing, product, and data engineering to operationalize insights into campaigns, playbooks, and automated workflows, with clear monitoring and experimentation.
Required Qualifications
Strong proficiency in Python (pandas, NumPy, scikit-learn; plus experience with PySpark or similar for large-scale data) and SQL on modern warehouses (e.g., BigQuery, Snowflake, Redshift).
Hands-on experience with time-series modeling and anomaly / changepoint / significant-movement detection (e.g., STL decomposition, EWMA/CUSUM, Bayesian/Prophet-style models, isolation forests, robust statistics).
Experience building and deploying production ML pipelines (batch and/or streaming), including feature engineering, model training, CI/CD, and monitoring for performance and data drift.
Solid background in statistics and experimentation: hypothesis testing, power analysis, A/B testing frameworks, uplift/propensity modeling, and basic causal inference techniques.
Familiarity with cloud platforms (GCP/AWS/Azure), orchestration tools (e.g., Airflow/Prefect), and dashboarding/visualization tools to expose alerts and model outputs to business users.
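For context on the EWMA-style significant-movement detection this posting names, here is a minimal sketch; the smoothing factor `alpha` and threshold multiplier `k` are illustrative assumptions, not values from the role.

```python
# Hedged sketch of EWMA-based anomaly flagging: a point is flagged when its
# deviation from the exponentially weighted mean exceeds k times the
# exponentially weighted standard deviation of past residuals.
def ewma_anomalies(values, alpha=0.3, k=3.0):
    mean = values[0]
    var = 0.0
    flags = [False]  # the first point has no history to judge against
    for x in values[1:]:
        resid = x - mean
        std = var ** 0.5
        # judge the point against history BEFORE folding it into the stats,
        # so a large spike cannot inflate the variance it is tested against
        flags.append(std > 0 and abs(resid) > k * std)
        var = (1 - alpha) * var + alpha * resid * resid
        mean = (1 - alpha) * mean + alpha * x
    return flags
```

A production version would tune `alpha` and `k` against labeled incidents to balance the precision/recall trade-off the role describes.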
Senior Data Engineer
Data scientist job in Santa Rosa, CA
We're hiring a Senior/Lead Data Engineer to join a fast-growing AI startup. The team comes from a billion-dollar AI company and has raised a $40M+ seed round.
You'll need to be comfortable transforming and moving data in a new 'group level' data warehouse, from legacy sources. You'll have a strong data modeling background.
Proven proficiency in modern data transformation tools, specifically dbt and/or SQLMesh.
Exceptional ability to apply systems thinking and complex problem-solving to ambiguous challenges. Experience within a high-growth startup environment is highly valued.
Deep, practical knowledge of the entire data lifecycle, from generation and governance through to advanced downstream applications (e.g., fueling AI/ML models, LLM consumption, and core product features).
Outstanding ability to communicate technical complexity clearly, synthesizing information into actionable frameworks for executive and cross-functional teams.
Data Engineer
Data scientist job in Fremont, CA
Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community.
Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.
Core Responsibilities:
Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
Partner with Finance, Product and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments
Required Qualifications:
3-5+ years experience in data engineering, analytics engineering, or similar role with demonstrated ability to work with large-scale datasets
Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
Proficiency in at least one programming language (Python, R) for data manipulation and analysis
Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
Ability to communicate technical concepts clearly to non-technical stakeholders
Life Actuary Consulting Manager
Data scientist job in Fresno, CA
Human Capital
Our Human Capital practice is at the forefront of transforming the nature of work. As converging forces reshape industries, our team uniquely addresses the complexities of work, workforce, and workplace dynamics. We leverage sector-specific insights and cross-domain perspectives to help organizations tackle their most challenging workforce issues and align talent strategies with their strategic visions. Our practice is renowned for making work better for humans and humans better at work. Be part of this exciting era of change and join us on this transformative journey.
The Team
Insights, Innovation, and Operate
Our Insights, Innovation & Operate Offering is designed to enhance key aspects of our clients' businesses by leveraging cutting-edge technology, data, and a blend of deep technical and human expertise. We innovate and deliver creative, industry-specific solutions that streamline operations and accelerate speed-to-value
Recruiting for this role ends on 12/31/25.
Work you'll do
As a Consultative Services Life Actuary Manager in Deloitte's Human Capital group, you will bring a unique actuarial, analytical, and data science perspective in a management consulting environment. Selected job functions include leading medium-sized teams in the following activities:
+ Proactively follow current market trends across life insurance and annuity products, markets and regulations; anticipate future client needs and prepare accordingly
+ Redesign and modernize core business functions for life insurance clients, including new business and underwriting operations, product development, financial reporting, modeling, and related functions
+ Support deployment of modern-day tools, technologies, data sources, & analytics in such modernization initiatives, to achieve objectives such as improved stakeholder experience, reduced costs, and more actionable or insightful results
+ Participate in teams to identify, design, and deploy proprietary models, algorithms, data sets, or other assets or project accelerators
+ Provide support in developing internal and external eminence, research, and solution development
+ Provide subject matter expertise to consulting teams to facilitate the integration of actuarial with data science, technology, underwriting, distribution, finance, and other functions
+ Actively support the sales process with other senior leaders
+ Assist the Deloitte Audit Function, a separate business unit at Deloitte, providing professional actuarial assurance over the material accuracy of selected actuarial balances for selected attest clients
+ Serve as a role model and advocate for recruiting, training, people development, and overall strategic planning for the practice
Qualifications
Required:
+ Bachelor's degree
+ 6+ years of Life Actuary experience
+ Successfully passed 5 actuarial exams
+ Limited immigration sponsorship may be available
+ Ability to travel up to 50%, on average, based on the work you do and the clients and industries/sectors you serve
Preferred:
The ideal candidate will have a meaningful set of knowledge and experience across a subset of the following dimensions for individual life insurance and annuities:
+ 6+ years of experience in product development, including product design, pricing, filing and implementation, ideally across multiple distribution channels
+ 6+ years of experience developing experience studies for core life actuarial assumptions, such as lapse, mortality, expenses, etc.
+ 6+ years of experience in assumption setting for multiple purposes spanning pricing, forecasting, financial reporting, and/or embedded value
+ 6+ years of experience creating actuarial projection models for multiple purposes spanning pricing, financial planning, ALM, and financial reporting
+ 6+ years of experience in financial reporting across a subset of the relevant accounting methodologies (statutory, US GAAP, IFRS, tax) for a subset of the full range of individual life insurance and annuity products
+ 4+ years leading medium to large sized teams
+ 4+ years of working with mergers and acquisitions, including understanding of purchase accounting
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Deloitte, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $137,400 - $253,000.
You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Possible Locations: Atlanta, Austin, Baltimore, Birmingham, Boca Raton, Boise, Boston, Charlotte, Chicago, Cincinnati, Cleveland, Columbus, Costa Mesa, Dallas, Davenport, Dayton, Denver, Des Moines, Detroit, Fort Worth, Fresno, Grand Rapids, Hartford, Hermitage, Houston, Huntsville, Indianapolis, Jacksonville, Jericho, Jersey City, Kansas City, Las Vegas, Los Angeles, Louisville, McLean, Memphis, Miami, Midland, Minneapolis, Morristown, Nashville, New Orleans, New York, Philadelphia, Pittsburgh, Portland, Princeton, Raleigh, Richmond, Rochester, San Antonio, San Diego, San Francisco, San Jose, Seattle, St. Louis, Stamford, Tallahassee, Tampa, Tempe, Tulsa, Washington DC.
Information for applicants with a need for accommodation
For more information about Consultative Services and Human Capital, visit our landing page.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.