We design, build and maintain infrastructure to support agentic workflows for Siri. Our team is in charge of the data generation, introspection, and evaluation frameworks that are key to efficiently developing foundation models and agentic workflows for Siri applications. On this team you will have the opportunity to work at the intersection of cutting-edge foundation models and products.
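The evaluation frameworks this work involves usually boil down to replaying task cases through an agent and scoring its tool calls and answers. Below is a minimal, hypothetical sketch of that pattern; the agent, task schema, and scoring rule are illustrative placeholders, not Apple's internal tooling.

```python
# Minimal sketch of an evaluation harness for agentic workflows.
# The agent function, task schema, and scoring rule are hypothetical
# placeholders, not a description of any internal framework.
from dataclasses import dataclass


@dataclass
class TaskCase:
    prompt: str
    expected_tool: str      # tool the agent is expected to call
    expected_answer: str    # substring expected in the final answer


def toy_agent(prompt: str) -> dict:
    """Stand-in agent; a real system would call a foundation model."""
    if "weather" in prompt.lower():
        return {"tool": "get_weather", "answer": "It is sunny in Cupertino."}
    return {"tool": "web_search", "answer": "Tolkien wrote The Hobbit."}


def evaluate(agent, cases: list[TaskCase]) -> dict:
    tool_hits = answer_hits = 0
    for case in cases:
        result = agent(case.prompt)
        tool_hits += result["tool"] == case.expected_tool
        answer_hits += case.expected_answer.lower() in result["answer"].lower()
    n = len(cases)
    return {"tool_accuracy": tool_hits / n, "answer_accuracy": answer_hits / n}


if __name__ == "__main__":
    cases = [
        TaskCase("What's the weather in Cupertino?", "get_weather", "sunny"),
        TaskCase("Who wrote The Hobbit?", "web_search", "Tolkien"),
    ]
    print(evaluate(toy_agent, cases))
```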
Minimum Qualifications
Strong background in computer science: algorithms, data structures and system design
3+ years of experience in large-scale distributed system design, operation, and optimization
Experience with SQL/NoSQL database technologies, data warehouse frameworks like BigQuery/Snowflake/RedShift/Iceberg and data pipeline frameworks like GCP Dataflow/Apache Beam/Spark/Kafka
Experience processing data for ML applications at scale
Excellent interpersonal skills; able to work independently as well as cross-functionally
Preferred Qualifications
Experience fine-tuning and evaluating Large Language Models
Experience with Vector Databases
Experience deploying and serving LLMs
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses - including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
$147.4k-272.1k yearly 1d ago
Director, Growth Platforms Data Scientist
Ernst & Young Oman 4.7
Data engineer job in San Francisco, CA
A leading global consulting firm seeks a Data Scientist - Director in San Francisco to drive AI solutions and data initiatives. The ideal candidate will lead multi-source data pipelines and architect complex data solutions while collaborating with business leaders. Candidates should have a strong educational background, extensive experience in data engineering, and proficiency with SQL and cloud-native infrastructure. This role offers a competitive salary range of $205,000 to $235,000 and promotes a hybrid working model.
$205k-235k yearly 1d ago
Senior Applications Consultant - Workday Data Consultant
Capgemini 4.5
Data engineer job in San Francisco, CA
Job Description - Senior Applications Consultant - Workday Data Consultant (054374)
Qualifications & Experience:
Certified in Workday HCM
Experience in Workday data conversion
At least one implementation as a data consultant
Ability to work with clients on data conversion requirements and load data into Workday tenants
Flexible to work across delivery landscape including Agile Applications Development, Support, and Deployment
Valid US work authorization (no visa sponsorship required)
6‑8 years overall experience (minimum 2 years relevant), Bachelor's degree
SE Level 1 certification; pursuing Level 2
Experience in package configuration, business analysis, architecture knowledge, technical solution design, vendor management
Responsibilities:
Translate business cases into detailed technical designs
Manage operational and technical issues, translating blueprints into requirements and specifications
Lead integration testing and user acceptance testing
Act as stream lead guiding team members
Participate as an active member within technology communities
Capgemini is an Equal Opportunity Employer encouraging diversity and providing accommodations for disabilities.
All qualified applicants will receive consideration without regard to race, national origin, gender identity or expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law.
Physical, mental, or environmental demands may be referenced. Reasonable accommodations will be considered where possible.
$101k-134k yearly est. 2d ago
Full-Stack Engineer: AI Data Editor
Hex 3.9
Data engineer job in San Francisco, CA
A cutting-edge data analytics firm in San Francisco is seeking a full-stack engineer to enhance user experiences and integrate AI tools within their platform. You will work on innovative projects that shape data interactions, collaborate with teams on product initiatives, and tackle UX challenges. Ideal candidates should possess 3+ years of software engineering experience, proficiency in React and TypeScript, and a strong desire to work in AI development. This position offers a competitive salary and benefits, with a hybrid work model.
$126k-178k yearly est. 2d ago
Data Partnerships Lead - Equity & Growth (SF)
Exa
Data engineer job in San Francisco, CA
A cutting-edge AI search engine company in San Francisco is seeking a Data Partnerships specialist to build their data pipeline. The role involves owning the partnerships cycle, making strategic decisions, negotiating contracts, and potentially building a team. Candidates should have experience in contract negotiation and a Juris Doctor degree. This in-person role offers a competitive salary range of $160,000 - $250,000 with above-market equity.
$160k-250k yearly 2d ago
Senior Energy Data Engineer - API & Spark Pipelines
Medium 4.0
Data engineer job in San Francisco, CA
A technology finance firm in San Francisco is seeking an experienced Data Engineer. The role involves building data pipelines, integrating data across various platforms, and developing scalable web applications. The ideal candidate will have a strong background in data analysis, software development, and experience with AWS. The salary range for this position is between $160,000 and $210,000, with potential bonuses and equity.
$160k-210k yearly 5d ago
Staff Data Scientist - Post Sales
Harnham
Data engineer job in Fremont, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're expanding our data science organization to accelerate customer success after the initial sale, driving onboarding, retention, expansion, and long-term revenue growth.
About the Role
As the senior data scientist supporting post-sales teams, you will use advanced analytics, experimentation, and predictive modeling to guide strategy across Customer Success, Account Management, and Renewals. Your insights will help leadership forecast expansion, reduce churn, and identify the levers that unlock sustainable net revenue retention.
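As a rough illustration of the churn-risk and health-scoring modeling this describes, here is a minimal sketch on synthetic data; the feature names, label construction, and model choice are assumptions for illustration only, not the company's actual approach.

```python
# Minimal churn-risk scoring sketch on synthetic account data.
# Feature names, label construction, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Synthetic post-sales features: seat utilization, weekly active users,
# support tickets in the last 90 days, days since last login.
X = np.column_stack([
    rng.uniform(0, 1, n),        # seat_utilization
    rng.poisson(20, n),          # weekly_active_users
    rng.poisson(3, n),           # support_tickets_90d
    rng.exponential(10, n),      # days_since_last_login
])

# Synthetic label: churn is more likely with low utilization and stale logins.
logit = -2.0 - 3.0 * X[:, 0] + 0.08 * X[:, 3] + rng.normal(0, 0.5, n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# The predicted churn probability doubles as a simple customer health score.
churn_prob = model.predict_proba(X_test)[:, 1]
print("mean predicted churn risk:", round(churn_prob.mean(), 3))
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```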
Key Responsibilities
Forecast & Model Growth: Build predictive models for renewal likelihood, expansion potential, churn risk, and customer health scoring.
Optimize the Customer Journey: Analyze onboarding flows, product adoption patterns, and usage signals to improve activation, engagement, and time-to-value.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of onboarding programs, success initiatives, and pricing changes on retention and expansion.
Revenue Insights: Partner with Customer Success and Sales to identify high-value accounts, cross-sell opportunities, and early warning signs of churn.
Cross-Functional Partnership: Collaborate with Product, RevOps, Finance, and Marketing to align post-sales strategies with company growth goals.
Data Infrastructure Collaboration: Work with Analytics Engineering to define data requirements, maintain data quality, and enable self-serve dashboards for Success and Finance teams.
Executive Storytelling: Present clear, actionable recommendations to senior leadership that translate complex analysis into strategic decisions.
About You
Experience: 6+ years in data science or advanced analytics, with a focus on post-sales, customer success, or retention analytics in a B2B SaaS environment.
Technical Skills: Expert SQL and proficiency in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Deep understanding of SaaS metrics such as net revenue retention (NRR), gross churn, expansion ARR, and customer health scoring.
Analytical Rigor: Strong background in experimentation design, causal inference, and predictive modeling to inform customer-lifecycle strategy.
Communication: Exceptional ability to translate data into compelling narratives for executives and cross-functional stakeholders.
Business Impact: Demonstrated success improving onboarding efficiency, retention rates, or expansion revenue through data-driven initiatives.
$200k-250k yearly 4d ago
Senior Data Engineer: ML Pipelines & Signal Processing
Zendar
Data engineer job in Berkeley, CA
An innovative tech firm in Berkeley seeks a Senior Data Engineer to manage complex data engineering pipelines. You will ensure data quality, support ML engineers across locations, and establish infrastructure standards. The ideal candidate has over 5 years of experience in Data Science or MLOps, strong algorithmic skills, and proficiency in GCP, Python, and SQL. This role offers competitive salary and the chance to impact a growing team in a dynamic field.
$110k-157k yearly est. 5d ago
Data Scientist
Everfit 3.8
Data engineer job in Santa Rosa, CA
Everfit | Hybrid, San Francisco Bay Area
Everfit is a fitness technology company building an AI-powered coaching platform that serves 280,000+ coaches globally. We're transforming how fitness professionals deliver personalized training and nutrition guidance to their clients through intelligent automation and data-driven insights.
About the Role
We're looking for a data scientist who is passionate about fitness and energized by turning data into actionable insights that help coaches and their clients succeed. You'll play a critical role in understanding user behavior, product performance, and business metrics to inform strategic decisions as we scale our platform.
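One concrete flavor of this work is cohort retention analysis. The sketch below builds a simple cohort-by-week retention matrix from synthetic activity data; the column names and weekly granularity are illustrative assumptions, not Everfit's actual schema.

```python
# Minimal cohort retention sketch on synthetic activity data.
# Column names and the weekly granularity are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic users and activity log: one row per (user, active_week).
users = pd.DataFrame({
    "user_id": range(500),
    "signup_week": rng.integers(0, 4, 500),
})
events = []
for _, u in users.iterrows():
    for week in range(u.signup_week, 8):
        # Activity probability decays with account age.
        if rng.uniform() < 0.9 * 0.7 ** (week - u.signup_week):
            events.append({"user_id": u.user_id, "active_week": week})
activity = pd.DataFrame(events).merge(users, on="user_id")

# Weeks since signup, then the classic cohort x week retention matrix.
activity["weeks_since_signup"] = activity.active_week - activity.signup_week
cohort_sizes = users.groupby("signup_week").size()
retention = (
    activity.pivot_table(index="signup_week", columns="weeks_since_signup",
                         values="user_id", aggfunc="nunique")
    .div(cohort_sizes, axis=0)
    .round(2)
)
print(retention)
```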
What You'll Do
Analyze user engagement patterns, retention metrics, and feature adoption to identify opportunities for product improvement
Build dashboards and reports that make complex data accessible to cross-functional teams
Partner with Product, Engineering, and Marketing to design experiments and measure their impact
Dive deep into the coaching journey to understand what drives client outcomes and coach success
Translate data findings into clear narratives and recommendations for leadership
Support the development of our AI/ML capabilities by identifying data patterns and opportunities
What We're Looking For
Non-negotiable:
Genuine passion for fitness, health, or wellness (we build for coaches so you need to understand their world)
Strong proficiency in SQL and experience working with large datasets
Experience with data visualization tools (Looker, Tableau, Mode, or similar)
Ability to translate technical findings into insights for non-technical stakeholders
Strongly preferred:
2-4 years of experience in a data analyst or analytics role, preferably at a growth-stage tech company
Experience with Python or R for statistical analysis
Familiarity with product analytics tools (Amplitude, Mixpanel, or similar)
Understanding of SaaS metrics and cohort analysis
Experience in a PLG (Product-Led Growth) environment
Background in the fitness, health, or coaching industry
You'll thrive here if you:
Are naturally curious and love asking "why" until you find the answer
Can balance rigor with speed, knowing when to dig deeper and when to move forward
Enjoy collaborating with diverse teams and making complex topics understandable
Are comfortable with ambiguity and can structure your own work
Care deeply about the impact your insights have on real coaches and their clients
What We Offer
Competitive compensation with equity
Flexible PTO and remote-first culture
The opportunity to directly impact the success of 280,000+ coaches worldwide
Our Team
We're a distributed team of ~15 in the US, scaling to 2 - 3x that by end of 2026. We value depth over superficial growth, shipping over endless planning, and genuine fitness passion over resume credentials.
$124k-169k yearly est. 1d ago
Assoc Director, Data Scientist
Gilead Sciences, Inc. 4.5
Data engineer job in Foster City, CA
At Gilead, we're creating a healthier world for all people. For more than 35 years, we've tackled diseases such as HIV, viral hepatitis, COVID-19 and cancer - working relentlessly to develop therapies that help improve lives and to ensure access to these therapies across the globe. We continue to fight against the world's biggest health challenges, and our mission requires collaboration, determination and a relentless drive to make a difference.
Every member of Gilead's team plays a critical role in the discovery and development of life-changing scientific innovations. Our employees are our greatest asset as we work to achieve our bold ambitions, and we're looking for the next wave of passionate and ambitious people ready to make a direct impact.
We believe every employee deserves a great leader. People Leaders are the cornerstone to the employee experience at Gilead and Kite. As a people leader now or in the future, you are the key driver in evolving our culture and creating an environment where every employee feels included, developed and empowered to fulfil their aspirations. Join Gilead and help create possible, together.
Gilead's AI Research Center (ARC) is looking for a Principal Data Scientist to spearhead the adoption of AI/ML and transform our clinical development processes. This is a pivotal role where you will provide key thought leadership and drive our strategic vision for advanced analytics, with the goal of optimizing clinical trials, enhancing data-driven decision-making, and providing support for Real-World Evidence (RWE), Clinical Pharmacology, and Biomarkers initiatives.
You will be a thought leader in applying AI/ML to real-world clinical challenges, taking deep involvement in all stages of technical development, from coding and configuring compute environments to model evaluation, review, and architecture design. You'll work closely with a variety of cross-functional teams, including architects, data engineers, and product managers, to scope, develop, and operationalize our AI-driven applications, with a specific focus on leveraging AI/ML to advance insights within RWE, Clinical Pharmacology, and Biomarkers.
Responsibilities:
Innovate and Strategize: Spearhead the strategic vision for leveraging AI/ML within clinical development. You'll partner with cross-functional leaders to identify high-impact opportunities and design innovative solutions that transform how we conduct trials and make data-driven decisions.
Lead with Expertise: Guide the full lifecycle of machine learning models from initial concept to real-world application. This includes architecting scalable solutions, hands-on algorithm development, and ensuring models are rigorously evaluated and operationalized for use in RWE, Clinical Pharmacology, and Biomarkers.
Mentor and Empower: Act as a force multiplier for our data science team. You'll coach and mentor senior and junior data scientists, fostering a culture of technical excellence and continuous learning.
Translate and Execute: Serve as a bridge between technical teams and business stakeholders. You'll translate complex business challenges into precise data science problems and, in a product manager-like role, drive the development of these solutions from proof-of-concept to production.
Drive Breakthroughs: Research and develop cutting-edge algorithms to solve critical challenges. This could involve using NLP for patient insights, computer vision for biomarker analysis, or predictive models to optimize trial logistics. You'll be at the forefront of applying these techniques in a biotech context.
Build the Foundation: Design and implement the technical and process building blocks needed to scale our AI/ML capabilities. This includes working with IT partners to curate and operationalize the datasets essential for fueling our analytical pipelines.
Influence and Advise: Interface directly with internal stakeholders, acting as a trusted advisor to help them understand the potential of advanced analytics and apply data-driven approaches to optimize clinical trial operations.
Stay Ahead: Continuously monitor the landscape of machine learning and biopharmaceutical innovation. You'll ensure our team is leveraging the latest state-of-the-art techniques to maintain a competitive edge.
Technical Skills:
Advanced Model Development & Operationalization: Deep expertise in developing, deploying, and managing complex machine learning and deep learning algorithms at scale. This includes a profound understanding of model evaluation, scoring methodologies, and mitigation of model bias to ensure robust, ethical, and reliable outcomes.
Data & Computational Proficiency: Fluent in Python or R and SQL, with hands-on experience in building and optimizing data pipelines for analytical and model development purposes.
Cloud-Native AI/ML: Demonstrated experience with Cloud DevOps on AWS as it pertains to the entire data science lifecycle, from data ingestion to model serving and monitoring.
Translational Research: Proven ability to translate foundational AI/ML research into functional, production-ready packages and applications that directly support strategic initiatives in areas like RWE, Clinical Pharmacology, and Biomarkers.
Basic Qualifications:
Doctorate and 5+ years of relevant experience OR
Master's and 8+ years of relevant experience OR
Bachelor's and 10+ years of relevant experience
Preferred Qualifications:
Ability to translate stakeholder needs into clear technical requirements, including those related to RWE, Clinical Pharmacology, and Biomarkers.
Skill in scoping project requirements and developing timelines.
Knowledge of product management principles.
Experience with code management using Git.
Strong technical documentation skills.
Join us at the AI Research Center to shape the future of clinical development with groundbreaking AI/ML solutions, and contribute to advancements in RWE, Clinical Pharmacology, and Biomarkers!
The salary range for this position is:
Bay Area: $210,375.00 - $272,250.00. Other US Locations: $191,250.00 - $247,500.00.
Gilead considers a variety of factors when determining base compensation, including experience, qualifications, and geographic location. These considerations mean actual compensation will vary. This position may also be eligible for a discretionary annual bonus, discretionary stock-based long-term incentives (eligibility may vary based on role), paid time off, and a benefits package. Benefits include company-sponsored medical, dental, vision, and life insurance plans*.
For additional benefits information, visit:
******************************************************************
* Eligible employees may participate in benefit plans, subject to the terms and conditions of the applicable plans.
For jobs in the United States:
Gilead Sciences Inc. is committed to providing equal employment opportunities to all employees and applicants for employment, and is dedicated to fostering an inclusive work environment comprised of diverse perspectives, backgrounds, and experiences. Employment decisions regarding recruitment and selection will be made without discrimination based on race, color, religion, national origin, sex, age, sexual orientation, physical or mental disability, genetic information or characteristic, gender identity and expression, veteran status, or other non-job related characteristics or other prohibited grounds specified in applicable federal, state and local laws. In order to ensure reasonable accommodation for individuals protected by Section 503 of the Rehabilitation Act of 1973, the Vietnam Era Veterans' Readjustment Act of 1974, and Title I of the Americans with Disabilities Act of 1990, applicants who require accommodation in the job application process may contact ApplicantAccommodations@gilead.com for assistance.
For more information about equal employment opportunity protections, please view the 'Know Your Rights' poster.
NOTICE: EMPLOYEE POLYGRAPH PROTECTION ACT
YOUR RIGHTS UNDER THE FAMILY AND MEDICAL LEAVE ACT
PAY TRANSPARENCY NONDISCRIMINATION PROVISION
Our environment respects individual differences and recognizes each employee as an integral member of our company. Our workforce reflects these values and celebrates the individuals who make up our growing team.
Gilead provides a work environment free of harassment and prohibited conduct. We promote and support individual differences and diversity of thoughts and opinion.
For Current Gilead Employees and Contractors:
Please apply via the Internal Career Opportunities portal in Workday.
Job Requisition ID R0046852
Full Time/Part Time Full-Time
Job Level Associate Director
$210.4k-272.3k yearly 1d ago
Global Data ML Engineer for Multilingual Speech & AI
Cartesia
Data engineer job in San Francisco, CA
A leading technology company in San Francisco is seeking a Machine Learning Engineer to ensure the quality and coverage of data across diverse languages. You will design large-scale datasets, evaluate models, and implement quality control systems. The ideal candidate has expertise in multilingual datasets and a strong background in applied ML. This full-time role offers competitive benefits, including fully covered insurance and in-office perks, in a supportive team environment.
$110k-157k yearly est. 2d ago
Founding ML Infra Engineer - Audio Data Platform
David Ai
Data engineer job in San Francisco, CA
A pioneering audio tech company based in San Francisco is searching for a Founding Machine Learning Infrastructure Engineer. In this role, you will build and scale the core infrastructure that powers cutting-edge audio ML products. You will lead the development of systems for training and deploying models. Candidates should have over 5 years of backend experience with strong skills in cloud infrastructure and machine learning principles. The company offers benefits like unlimited PTO and comprehensive health coverage.
$110k-157k yearly est. 5d ago
Data/Full Stack Engineer, Data Storage & Ingestion Consultant
Eon Systems PBC
Data engineer job in San Francisco, CA
About us
At Eon, we are at the forefront of large-scale neuroscientific data collection. Our mission is to enable the safe and scalable development of brain emulation technology to empower humanity over the next decade, beginning with the creation of a fully emulated digital twin of a mouse.
Role
We're a San Francisco team collecting very large microscopy datasets and we need an expert to design and implement our end-to-end data pipeline, from high-rate ingest to multi-petabyte storage and downstream processing. You'll own the strategy (on-prem vs. S3 or hybrid), the bill of materials, and the deployment, and you'll be on the floor wiring, racking, tuning, and validating performance.
Our current instruments generate data at ~1+ GB/s sustained (higher during bursts) and the program will accumulate multiple petabytes total over time. You'll help us choose and implement the right architecture, considering reliability and cost controls.
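For a rough sense of the ingest-and-tier pattern this implies, the sketch below writes simulated frames to a staging directory, seals segments at a size threshold, and moves them to a cold tier; local directories stand in for the NVMe buffer and object storage, and all sizes and thresholds are illustrative assumptions.

```python
# Toy sketch of a frame-to-disk ingest path with tiering.
# "hot/" stands in for a local NVMe buffer and "cold/" for object storage
# (a production system would upload sealed segments to S3 or similar).
# Chunk sizes, roll thresholds, and paths are illustrative assumptions.
import os
import shutil
import time
from pathlib import Path

HOT = Path("hot")    # NVMe staging area
COLD = Path("cold")  # long-term tier (S3/object store in a real system)
CHUNK = 8 * 1024 * 1024          # 8 MiB per simulated camera frame
ROLL_AT = 256 * 1024 * 1024      # seal and tier segments at 256 MiB


def ingest(n_chunks: int) -> None:
    HOT.mkdir(exist_ok=True)
    COLD.mkdir(exist_ok=True)
    buf = os.urandom(CHUNK)
    seg, written, start = 0, 0, time.time()
    current = HOT / f"segment-{seg:03d}.raw"
    f = current.open("wb")
    for _ in range(n_chunks):
        f.write(buf)
        written += CHUNK
        if f.tell() >= ROLL_AT:
            f.close()
            # Seal the segment and move it to the cold tier
            # (a production system would upload it to object storage here).
            shutil.move(str(current), COLD / current.name)
            seg += 1
            current = HOT / f"segment-{seg:03d}.raw"
            f = current.open("wb")
    f.close()
    if current.stat().st_size > 0:
        shutil.move(str(current), COLD / current.name)
    elapsed = time.time() - start
    print(f"wrote {written / 1e9:.2f} GB at {written / 1e9 / elapsed:.2f} GB/s sustained")


if __name__ == "__main__":
    ingest(64)  # ~512 MiB of simulated frames
```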
Outcomes (what success looks like)
Within 2 weeks: Implement an immediate data-handling strategy that reliably ingests our initial data streams.
Within 2 weeks: Deliver a documented medium-term data architecture covering storage, networking, ingest, and durability.
Within 1 month: Operationalize the medium-term pipeline in production (ingest → buffer → long-term store → compute access).
Ongoing: Maintain ≥95% uptime for the end-to-end data-handling pipeline after setup.
Responsibilities
Architect ingest & storage: Choose and implement an on-prem hardware and data pipeline design or a cloud/S3 alternative with explicit cost and performance tradeoffs at multi-petabyte scale.
Set up a sustained-write ingest path ≥1 GB/s with adequate burst headroom (camera/frame-to-disk), including networking considerations, cooling, and throttling safeguards.
Optimize footprint & cost: Incorporate on-the-fly compression/downsampling options and quantify CPU budget vs. write-speed tradeoffs; document when/where to compress to control $/PB.
Integrate with acquisition workflows ensuring image data and metadata are compatible with downstream stitching/flat-field correction pipelines.
Enable downstream compute: Expose the data to segmentation/analysis stacks (local GPU nodes or cloud).
Skills
5+ years designing and deploying high-throughput storage or HPC pipelines (≥1 GB/s sustained ingest) in production.
Deep hands-on experience with: NVMe RAID/striping, ZFS/MDRAID/erasure coding, PCIe topology, NUMA pinning, Linux performance tuning, and NIC offload features.
Proven delivery of multi-GB/s ingest systems and petabyte-scale storage in production (life-sciences, vision, HPC, or media).
Experience building tiered storage systems (NVMe → HDD/object) and validating real-world throughput under sustained load.
Practical S3/object-storage know-how (AWS S3 and/or on-prem S3-compatible systems) with lifecycle, versioning, and cost controls.
Data integrity & reliability: snapshots, scrubs, replication, erasure coding, and backup/DR for PB-scale systems.
Networking: 25/40/100 GbE (SFP+/SFP28); RDMA/RoCE/iWARP familiarity; switch config and path tuning.
Ability to spec and rack hardware: selecting chassis/backplanes, RAID/HBA cards, NICs, and cooling strategies to prevent NVMe throttling under sustained writes.
Ideal skills:
Experience with microscopy or scientific imaging ingest at frame-to-disk speeds, including Micro-Manager-based pipelines and raw-to-containerized format conversions.
Experience with life science imaging data a plus.
Engagement details
Contract (1099 or corp-to-corp); contract-to-hire if there's a mutual fit.
On-site requirement: You must be physically present in San Francisco during build-out and initial operations; local field work (e.g., UCSF) as needed.
Compensation: Contract, $100-300/hour
Timeline: Immediate start
$110k-157k yearly est. 1d ago
Machine Learning Data Engineer - Systems & Retrieval
Zyphra Technologies Inc.
Data engineer job in Palo Alto, CA
Zyphra is an artificial intelligence company based in Palo Alto, California.
The Role:
As a Machine Learning Data Engineer - Systems & Retrieval, you will build and optimize the data infrastructure that fuels our machine learning systems. This includes designing high-performance pipelines for collecting, transforming, indexing, and serving massive, heterogeneous datasets, from raw web-scale data to enterprise document corpora. You'll play a central role in architecting retrieval systems for LLMs and enabling scalable training and inference with clean, accessible, and secure data. You'll have an impact across both research and product teams by shaping the foundation upon which intelligent systems are trained, retrieved, and reasoned over.
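To make the retrieval piece concrete, here is a minimal sketch of embedding-based retrieval over a tiny corpus; the hashed character-trigram "embedding" is a toy stand-in for a learned encoder, and a production system would use a real embedding model and a vector index (e.g., FAISS) rather than brute-force cosine similarity. Nothing here reflects Zyphra's actual stack.

```python
# Minimal embedding-based retrieval sketch for a RAG-style pipeline.
# The character-trigram hashing "embedding" is a toy stand-in for a learned
# embedding model; real systems use a trained encoder plus a vector index.
import hashlib
import numpy as np

DIM = 256


def embed(text: str) -> np.ndarray:
    """Toy deterministic embedding: hash character trigrams into a vector."""
    vec = np.zeros(DIM)
    t = text.lower()
    for i in range(len(t) - 2):
        h = int(hashlib.md5(t[i:i + 3].encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


class Index:
    def __init__(self, docs: list[str]):
        self.docs = docs
        self.matrix = np.stack([embed(d) for d in docs])

    def search(self, query: str, k: int = 3) -> list[tuple[float, str]]:
        scores = self.matrix @ embed(query)          # cosine similarity
        top = np.argsort(scores)[::-1][:k]
        return [(float(scores[i]), self.docs[i]) for i in top]


if __name__ == "__main__":
    corpus = [
        "Quarterly revenue grew 12% driven by enterprise contracts.",
        "The ingestion pipeline shards raw web crawl data by domain.",
        "Access controls restrict document corpora to audited service accounts.",
    ]
    for score, doc in Index(corpus).search("how is crawl data partitioned?"):
        print(f"{score:.2f}  {doc}")
```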
You'll work across:
Design and implementation of distributed data ingestion and transformation pipelines
Building retrieval and indexing systems that support RAG and other LLM-based methods
Mining and organizing large unstructured datasets, both in research and production environments
Collaborating with ML engineers, systems engineers, and DevOps to scale pipelines and observability
Ensuring compliance and access control in data handling, with security and auditability in mind
Requirements:
Strong software engineering background with fluency in Python
Experience designing, building, and maintaining data pipelines in production environments
Deep understanding of data structures, storage formats, and distributed data systems
Familiarity with indexing and retrieval techniques for large-scale document corpora
Understanding of database systems (SQL and NoSQL), their internals, and performance characteristics
Strong attention to security, access controls, and compliance best practices (e.g., GDPR, SOC2)
Excellent debugging, observability, and logging practices to support reliability at scale
Strong communication skills and experience collaborating across ML, infra, and product teams
Bonus Skill Set:
Experience building or maintaining LLM-integrated retrieval systems (e.g., RAG pipelines)
Academic or industry background in data mining, search, recommendation systems, or IR literature
Experience with large-scale ETL systems and tools like Apache Beam, Spark, or similar
Familiarity with vector databases (e.g., FAISS, Weaviate, Pinecone) and embedding-based retrieval
Understanding of data validation and quality assurance in machine learning workflows
Experience working on cross-functional infra and MLOps teams
Knowledge of how data infrastructure supports training pipelines, inference serving, and feedback loops
Comfort working across raw, unstructured data, structured databases, and model-ready formats
Why Work at Zyphra:
Our research methodology is to make grounded, methodical steps toward ambitious goals. Both deep research and engineering excellence are equally valued
We strongly value new and crazy ideas and are very willing to bet big on new ideas
We move as quickly as we can; we aim to keep the bar to impact as low as possible
We all enjoy what we do and love discussing AI
Benefits and Perks:
Comprehensive medical, dental, vision, and FSA plans
Competitive compensation and 401(k)
Relocation and immigration support on a case-by-case basis
On-site meals prepared by a dedicated culinary team; Thursday Happy Hours
In-person team in Palo Alto, CA, with a collaborative, high-energy environment
If you're excited by the challenge of high-scale, high-performance data engineering in the context of cutting-edge AI, you'll thrive in this role. Apply Today!
$110k-157k yearly est. 4d ago
Staff Machine Learning Data Engineer
Backflip 3.7
Data engineer job in San Francisco, CA
Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper?
Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.”
Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team.
We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in.
If you're excited to define the next generation of hard tech, come build it with us.
The Role
We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD.
You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data.
This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution.
You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.
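As a hedged, minimal example of one curation step such a pipeline might include, the sketch below filters and deduplicates records before writing them to Parquet; the field names, quality rules, and thresholds are hypothetical, and this is not Backflip's pipeline.

```python
# Minimal curation sketch: filter, deduplicate, and write training records
# to Parquet. Field names, quality rules, and thresholds are hypothetical.
import hashlib

import pyarrow as pa
import pyarrow.parquet as pq

records = [
    {"text": "M6 bolt, 20 mm, ISO 4762 socket head", "n_faces": 420},
    {"text": "M6 bolt, 20 mm, ISO 4762 socket head", "n_faces": 420},  # duplicate
    {"text": "??", "n_faces": 3},                                       # junk
    {"text": "Planetary gear carrier, 5:1 ratio, 6061 aluminum", "n_faces": 8192},
]


def keep(rec: dict) -> bool:
    # Toy quality filters: minimum description length and geometry size.
    return len(rec["text"]) >= 10 and rec["n_faces"] >= 100


seen, curated = set(), []
for rec in records:
    key = hashlib.sha256(rec["text"].encode()).hexdigest()
    if key in seen or not keep(rec):
        continue
    seen.add(key)
    curated.append(rec)

table = pa.Table.from_pylist(curated)
pq.write_table(table, "curated_train.parquet")
print(f"kept {table.num_rows} of {len(records)} records")
```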
What You'll Do
Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation.
Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale.
Design scalable data systems for multimodal training (text, geometry, CAD, and more).
Develop and automate data collection, curation, and validation workflows.
Collaborate with MLEs to design and execute experiments that measure and improve model performance.
Build tools and metrics for dataset analysis, monitoring, and quality assurance.
Contribute to model development through insights grounded in data, shaping what, how, and when we train.
Who You Are
You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world.
You have deep experience with data engineering for ML, including distributed systems, data extraction, transformation, and loading, and large-scale data processing (e.g. PySpark, Beam, Ray, or similar).
You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.).
You've developed data augmentation, sampling, or curation strategies that improved model performance.
You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence.
You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible.
You care deeply about data quality, reproducibility, and scalability.
You're excited to help shape the future of AI for physical design.
Bonus points if:
You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines.
You have an interest in math, geometry, topology, rendering, or computational geometry.
You've worked in 3D printing, CAD, or computer graphics domains.
Why Backflip
This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world.
You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it.
Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future.
Let's build the tools the future will be made in.
$126k-178k yearly est. 5d ago
ML Engineer: Fraud Detection & Big Data at Scale
Datavisor 4.5
Data engineer job in Mountain View, CA
A leading security technology firm in California is seeking a skilled Data Science Engineer. You will harness the power of unsupervised machine learning to detect fraudulent activities across various sectors. Ideal candidates have experience with Java/C++, data structures, and machine learning. The company offers competitive pay, flexible schedules, equity participation, health benefits, a collaborative environment, and unique perks such as catered lunches and game nights.
$125k-177k yearly est. 1d ago
Senior Data Engineer, Card Data Platform
Capital One 4.7
Data engineer job in San Francisco, CA
A financial services company in San Francisco seeks a Distinguished Data Engineer to lead innovation in data architecture and management. The role involves building critical data solutions, mentoring teams, and leveraging cloud technologies like AWS. Ideal candidates will have significant experience in data engineering, a Bachelor's degree, and proficiency in modern data practices to drive customer value through analytics and automation.
$106k-144k yearly est. 2d ago
Foundry Data Engineer: ETL Automation & Dashboards
Data Freelance Hub 4.5
Data engineer job in San Francisco, CA
A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
$114k-160k yearly est. 4d ago
Multi-Channel Demand Gen Leader - Data SaaS
Motherduck Corporation
Data engineer job in San Francisco, CA
A growing technology firm based in San Francisco is seeking a Demand Generation Marketer to drive campaigns that turn prospects into lifelong customers. This role emphasizes creativity in marketing, collaboration with teams, and a strong data-driven mindset. The ideal candidate will have experience in B2B SaaS environments and a passion for engaging technical audiences. Flexible work environment and competitive compensation offered.
$112k-157k yearly est. 1d ago
Data Scientist
Talent Software Services 3.6
Data engineer job in Novato, CA
Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA.
Client's Data Science is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas of Client. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions. The Data Science team develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions across Client.
The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at the data, data structure, and documentation levels, following international standards and GCP. The DMS Center of Risk Based Quality Management (RBQM) sub-function is responsible for the implementation of a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management.
The RBQM Data Scientist supports central monitoring and risk-based quality management (RBQM) for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with a strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies.
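The role calls for SAS, but the underlying KRI logic is straightforward to illustrate. The Python sketch below computes a per-site adverse-event rate and flags outlier sites with a naive threshold rule; the metric, column names, and threshold are hypothetical examples, not the client's actual RBQM rules.

```python
# Illustrative KRI sketch: adverse-event rate per site with a naive flag.
# The production implementation in this role would be SAS macros; the
# metric definition, column names, and threshold rule here are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
sites = [f"SITE-{i:02d}" for i in range(1, 11)]

# Synthetic subject-level data: enrolled subjects and adverse events per site.
subjects = pd.DataFrame({
    "site": rng.choice(sites, 400),
    "has_ae": rng.uniform(size=400) < 0.15,
})

kri = (
    subjects.groupby("site")
    .agg(n_subjects=("has_ae", "size"), ae_rate=("has_ae", "mean"))
    .reset_index()
)

# Naive signal rule: flag sites more than 2 SD above the overall mean rate.
threshold = kri.ae_rate.mean() + 2 * kri.ae_rate.std()
kri["flagged"] = kri.ae_rate > threshold
print(kri.sort_values("ae_rate", ascending=False).to_string(index=False))
```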
Primary Responsibilities/Accountabilities:
The RBQM Data Scientist may perform a range of the following responsibilities, depending upon the study's complexity and the study's development stage:
Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies.
Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets.
Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review.
Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient.
Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes.
Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics.
Qualifications:
PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field.
Relevant clinical development experience (programming, RBM/RBQM, Data Management), for example:
PhD: 3+ years
MS: 5+ years
BA/BS: 8+ years
Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets).
Hands-on experience working with clinical trial data.
Proficiency with Microsoft Word, Excel, and PowerPoint.
Technical - Preferred / Strong Plus
Experience with RAVE EDC.
Awareness or working knowledge of CDISC, CDASH, SDTM standards.
Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms.
Preferred:
Knowledge of GCP, ICH, FDA guidance related to clinical trials and risk-based monitoring.
Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs.
Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams.
Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
How much does a data engineer earn in Concord, CA?
The average data engineer in Concord, CA earns between $93,000 and $183,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Concord, CA
$131,000
What are the biggest employers of Data Engineers in Concord, CA?
The biggest employers of Data Engineers in Concord, CA are: