
Data engineer jobs in San Bruno, CA

- 9,509 jobs
  • Staff Data Scientist

    Quantix Search

    Data engineer job in San Francisco, CA

    Staff Data Scientist | San Francisco | $250K-$300K + Equity

    We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI. In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.

    What you'll do:
    - Drive deep-dive analyses on user behavior, product performance, and growth drivers
    - Design and interpret A/B tests to measure product impact at scale
    - Build scalable data models, pipelines, and dashboards for company-wide use
    - Partner with Product and Engineering to embed experimentation best practices
    - Evaluate ML models, ensuring business relevance, performance, and trade-off clarity

    What we're looking for:
    - 5+ years in data science or product analytics at scale (consumer or marketplace preferred)
    - Advanced SQL and Python skills, with strong foundations in statistics and experimental design
    - Proven record of designing, running, and analyzing large-scale experiments
    - Ability to analyze and reason about ML models (classification, recommendation, LLMs)
    - Strong communicator with a track record of influencing cross-functional teams

    If you're excited by the sound of this challenge, apply today and we'll be in touch.
    $250k-300k yearly 4d ago
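Designing and interpreting A/B tests comes up repeatedly in these postings. As a hedged, minimal sketch of the statistics involved (nothing specific to any employer here, and the conversion counts are invented), a two-proportion z-test in plain Python:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between a control arm (A) and a treatment arm (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 4.0% vs 4.6% conversion on 10k users per arm
z, p = two_proportion_ztest(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice a library routine such as statsmodels' `proportions_ztest` would be used instead; the point is only that the test reduces to a pooled standard error and a normal tail probability.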
  • Data Scientist

    Skale

    Data engineer job in San Francisco, CA

    We're working with a Series A health tech start-up pioneering a revolutionary approach to healthcare AI, developing neurosymbolic systems that combine statistical learning with structured medical knowledge. Their technology is being adopted by leading health systems and insurers to enhance patient outcomes through advanced predictive analytics.

    We're seeking Machine Learning Engineers who excel at the intersection of data science, modeling, and software engineering. You'll design and implement models that extract insights from longitudinal healthcare data, balancing analytical rigor, interpretability, and scalability. This role offers a unique opportunity to tackle foundational modeling challenges in healthcare, where your contributions will directly influence clinical, actuarial, and policy decisions.

    Key Responsibilities:
    - Develop predictive models to forecast disease progression, healthcare utilization, and costs using temporal clinical data (claims, EHR, laboratory results, pharmacy records)
    - Design interpretable and explainable ML solutions that earn the trust of clinicians, actuaries, and healthcare decision-makers
    - Research and prototype innovative approaches leveraging both classical and modern machine learning techniques
    - Build robust, scalable ML pipelines for training, validation, and deployment in distributed computing environments
    - Collaborate cross-functionally with data engineers, clinicians, and product teams to ensure models address real-world healthcare needs
    - Communicate findings and methodologies effectively through visualizations, documentation, and technical presentations

    Required Qualifications:
    - Strong foundation in statistical modeling, machine learning, or data science, with preference for experience in temporal or longitudinal data analysis
    - Proficiency in Python and ML frameworks (PyTorch, JAX, NumPyro, PyMC, etc.)
    - Proven track record of transitioning models from research prototypes to production systems
    - Experience with probabilistic methods, survival analysis, or Bayesian inference (highly valued)

    Bonus Qualifications:
    - Experience working with clinical data and healthcare terminologies (ICD, CPT, SNOMED CT, LOINC)
    - Background in actuarial modeling, claims forecasting, or risk adjustment methodologies
    $123k-171k yearly est. 2d ago
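Survival analysis over longitudinal data, one of the techniques this posting values, can be illustrated with a minimal Kaplan-Meier estimator. The follow-up times and censoring flags below are invented, and production work would use a dedicated library such as lifelines; this is only a sketch of the core idea:

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  : observed follow-up times
    events : 1 if the event occurred at that time, 0 if censored
    Returns a list of (time, survival_probability) pairs at event times.
    """
    deaths = Counter(t for t, e in zip(times, events) if e)
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = deaths.get(t, 0)
        if d:
            surv *= 1 - d / at_risk                    # multiply in the hazard at t
            curve.append((t, surv))
        at_risk -= sum(1 for x in times if x == t)     # drop events and censored
    return curve

# Hypothetical cohort: months to hospital readmission, 0 = censored
times  = [2, 3, 3, 5, 7, 7, 8, 10]
events = [1, 1, 0, 1, 1, 0, 0, 1]
print(kaplan_meier(times, events))
```

Censored subjects leave the risk set without contributing an event, which is exactly what makes the estimator different from a naive fraction-surviving calculation.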
  • Data Scientist V

    Creospan Inc.

    Data engineer job in Mountain View, CA

    Job Title: Data Scientist V - Data Analytics & Engineering
    Location: Onsite preferred (Mountain View, CA); Remote considered for strong candidates (US time zones only)
    Duration: 12 months (possible extension)

    Required Skills:
    - Strong project or product management experience
    - Excellent communication and consulting skills
    - Proficiency in SQL and Python

    Nice to Have:
    - Experience with marketing analytics or campaigns
    - Experience in large tech or fast-paced startup environments
    - Familiarity with AI-driven workflows

    Why Join:
    - High-visibility, cross-functional role
    - Opportunity to work on advanced measurement and automation tools
    - Small, agile team with enterprise-scale impact
    $107k-155k yearly est. 2d ago
  • Lead Data Scientist - Computer Vision

    Straive

    Data engineer job in Santa Clara, CA

    Lead Data Scientist - Computer Vision/Image Processing

    About the Role: We are seeking a Lead Data Scientist to drive the strategy and execution of data science initiatives, with a particular focus on computer vision systems and image processing techniques. The ideal candidate has deep expertise in image processing techniques including filtering, binary morphology, perspective/affine transformation, and edge detection.

    Requirements:
    - Solid knowledge of computer vision programs and image processing techniques: filtering, binary morphology, perspective/affine transformation, edge detection
    - Strong understanding of machine learning: regression, supervised and unsupervised learning
    - Proficiency in Python and libraries such as OpenCV, NumPy, scikit-learn, TensorFlow/PyTorch
    - Familiarity with version control (Git) and collaborative development practices
    $107k-154k yearly est. 4d ago
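Edge detection, one of the techniques this role names, reduces to convolving the image with horizontal and vertical gradient kernels. A minimal Sobel sketch in plain Python, with an invented 5x5 test image; a real pipeline would use OpenCV's `Sobel` on NumPy arrays:

```python
def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels on a 2D grayscale grid.
    Border pixels are left at zero for simplicity."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Hypothetical image with a vertical step edge between columns 2 and 3
img = [[0, 0, 0, 9, 9] for _ in range(5)]
edges = sobel_magnitude(img)
print(edges[2])   # strongest response at the edge columns
```

The vertical kernel sums to zero across identical rows, so only the horizontal gradient fires here, peaking on the two columns straddling the step.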
  • Data Scientist

    Centraprise

    Data engineer job in Pleasanton, CA

    Key Responsibilities:
    - Design and develop marketing-focused machine learning models, including customer segmentation; propensity, churn, and lifetime value (LTV) models; campaign response and uplift models; and attribution and marketing mix models (MMM)
    - Build and deploy NLP solutions for customer sentiment analysis; text classification and topic modeling; and social media, reviews, chat, and voice-of-customer analytics
    - Apply advanced statistical and ML techniques to solve real-world business problems
    - Work with structured and unstructured data from multiple marketing channels (digital, CRM, social, email, web)
    - Translate business objectives into analytical frameworks and actionable insights
    - Partner with stakeholders to define KPIs, success metrics, and experimentation strategies (A/B testing)
    - Optimize and productionize models using MLOps best practices
    - Mentor junior data scientists and provide technical leadership
    - Communicate complex findings clearly to technical and non-technical audiences

    Required Skills & Qualifications:
    - 7+ years of experience in Data Science, with a strong focus on marketing analytics
    - Strong expertise in machine learning (supervised and unsupervised techniques)
    - Hands-on experience with NLP techniques, including text preprocessing and feature extraction, and word embeddings (Word2Vec, GloVe, Transformers); Large Language Model (LLM) experience is a plus
    - Proficiency in Python (NumPy, Pandas, scikit-learn, TensorFlow/PyTorch)
    - Experience with SQL and large-scale data processing
    - Strong understanding of statistics, probability, and experimental design
    - Experience working with cloud platforms (AWS, Azure, or GCP)
    - Ability to translate data insights into business impact

    Nice to Have:
    - Experience with marketing automation or CRM platforms
    - Knowledge of MLOps, model monitoring, and deployment pipelines
    - Familiarity with GenAI/LLM-based NLP use cases for marketing
    - Prior experience in consumer, e-commerce, or digital marketing domains

    EEO: Centraprise is an equal opportunity employer. Your application and candidacy will not be considered based on race, color, sex, religion, creed, sexual orientation, gender identity, national origin, disability, genetic information, pregnancy, veteran status or any other characteristic protected by federal, state or local laws.
    $107k-155k yearly est. 1d ago
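Of the NLP responsibilities above, text classification is the most self-contained to sketch. A toy multinomial naive Bayes sentiment classifier in plain Python follows; the four-document training set is invented, and real marketing analytics would use scikit-learn or a transformer model rather than anything this small:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Toy multinomial naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)   # label -> word frequencies
        self.priors = Counter(labels)        # label -> document count
        self.vocab = set()
        for doc, y in zip(docs, labels):
            words = doc.lower().split()
            self.counts[y].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        n_docs = sum(self.priors.values())
        best, best_lp = None, -math.inf
        for y, prior in self.priors.items():
            total = sum(self.counts[y].values())
            lp = math.log(prior / n_docs)
            for w in words:
                # Laplace smoothing keeps unseen words from zeroing the score
                lp += math.log((self.counts[y][w] + 1) /
                               (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = y, lp
        return best

docs = ["great product love it", "terrible support very slow",
        "love the new feature", "slow and buggy release"]
labels = ["pos", "neg", "pos", "neg"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("love this release"))
```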
  • Data Scientist with Gen Ai and Python experience

    Droisys

    Data engineer job in Palo Alto, CA

    About the Company: Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction. Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.

    Role: Data Scientist with GenAI and Python experience
    Location: Palo Alto, CA (5 days onsite)
    Interview Mode: Phone & F2F

    Job Overview: We are looking for a competent Data Scientist who is independent and results-driven, and who can take business requirements and build out the technology to generate statistically sound analysis and production-grade ML models.
    - Data science skills with GenAI and LLM knowledge
    - Expertise in Python/Spark and their related libraries and frameworks
    - Experience building ML training pipelines and with the work involved in ML model deployment
    - Experience with other ML concepts: real-time distributed model inferencing pipelines, champion/challenger frameworks, A/B testing
    - Familiarity with DS/ML production implementation
    - Excellent problem-solving skills, with attention to detail and a focus on quality and timely delivery of assigned tasks
    - Prior knowledge of Azure cloud and Databricks is a big plus

    Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
    $104k-146k yearly est. 2d ago
  • Staff Data Engineer

    Strativ Group

    Data engineer job in San Jose, CA

    🌎 San Francisco (Hybrid)
    💼 Founding/Staff Data Engineer
    💵 $200-300k base

    Our client is an elite applied AI research and product lab building AI-native systems for finance, pushing frontier models into real production environments. Their work sits at the intersection of data, research, and high-stakes financial decision-making. As the Founding Data Engineer, you will own the data platform that powers everything: models, experiments, and user-facing products relied on by demanding financial customers. You'll make foundational architectural decisions, work directly with researchers and product engineers, and help define how data is built, trusted, and scaled from day one.

    What you'll do:
    - Design and build the core data platform, ingesting, transforming, and serving large-scale financial and alternative datasets
    - Partner closely with researchers and ML engineers to ship production-grade data and feature pipelines that power cutting-edge models
    - Establish data quality, observability, lineage, and reproducibility across both experimentation and production workloads
    - Deploy and operate data services using Docker and Kubernetes in a modern cloud environment (AWS, GCP, or Azure)
    - Make foundational choices on tooling, architecture, and best practices that will define how data works across the company
    - Continuously simplify and evolve systems, rewriting pipelines or infrastructure when it's the right long-term decision

    Ideal candidate:
    - Has owned or built high-performance data systems end-to-end, directly supporting production applications and ML models
    - Is strongest in backend and data infrastructure, with enough frontend literacy to integrate cleanly with web products when needed
    - Can design and evolve backend services and pipelines (Node.js or Python) to support new product features and research workflows
    - Is an expert in at least one statically typed language, with a strong bias toward type safety, correctness, and maintainable systems
    - Has deployed data workloads and services using Docker and Kubernetes on a major cloud provider
    - Is comfortable making hard calls: simplifying, refactoring, or rebuilding legacy pipelines when quality and scalability demand it
    - Uses AI tools to accelerate work, but rigorously reviews and validates AI-generated code, insisting on sound system design
    - Thrives in a high-bar, high-ownership environment with other exceptional engineers
    - Loves deep technical problems in data infrastructure, distributed systems, and performance

    Nice to have:
    - Experience working with financial data (market, risk, portfolio, transactional, or alternative datasets)
    - Familiarity with ML infrastructure, such as feature stores, experiment tracking, or model serving systems
    - Background in a high-growth startup or a foundational infrastructure role

    Compensation & setup:
    - Competitive salary and founder-level equity
    - Hybrid role based in San Francisco, with close collaboration and significant ownership
    - Small, elite team building core infrastructure with outsized impact
    $200k-300k yearly 3d ago
  • Senior Data Engineer

    Sigmaways Inc.

    Data engineer job in Fremont, CA

    If you're hands-on with modern data platforms, cloud tech, and big data tools, and you like building solutions that are secure, repeatable, and fast, this role is for you. As a Senior Data Engineer, you will design, build, and maintain scalable data pipelines that transform raw information into actionable insights. The ideal candidate will have strong experience across modern data platforms, cloud environments, and big data technologies, with a focus on building secure, repeatable, and high-performing solutions.

    Responsibilities:
    - Design, develop, and maintain secure, scalable data pipelines to ingest, transform, and deliver curated data into the Common Data Platform (CDP)
    - Participate in Agile rituals and contribute to delivery within the Scaled Agile Framework (SAFe)
    - Ensure quality and reliability of data products through automation, monitoring, and proactive issue resolution
    - Deploy alerting and auto-remediation for pipelines and data stores to maximize system availability
    - Apply a security-first and automation-driven approach to all data engineering practices
    - Collaborate with cross-functional teams (data scientists, analysts, product managers, and business stakeholders) to align infrastructure with evolving data needs
    - Stay current on industry trends and emerging tools, recommending improvements to strengthen efficiency and scalability

    Qualifications:
    - Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience)
    - At least 3 years of experience with Python and PySpark, including Jupyter notebooks and unit testing
    - At least 2 years of experience with Databricks, Collibra, and Starburst
    - Proven work with relational and NoSQL databases, including star-schema and dimensional modeling approaches
    - Hands-on experience with modern data stacks: object stores (S3), Spark, Airflow, lakehouse architectures, and cloud warehouses (Snowflake, Redshift)
    - Strong background in ETL and big data engineering (on-prem and cloud)
    - Experience working within enterprise cloud platforms (CFS2, Cloud Foundational Services 2/EDS) for governance and compliance
    - Experience building end-to-end pipelines for structured, semi-structured, and unstructured data using Spark
    $110k-156k yearly est. 3d ago
  • Data Engineer

    Midjourney

    Data engineer job in Fremont, CA

    Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community. Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.

    Core Responsibilities:
    - Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
    - Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
    - Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
    - Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
    - Partner with Finance, Product, and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
    - Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
    - Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments

    Required Qualifications:
    - 3-5+ years of experience in data engineering, analytics engineering, or a similar role, with demonstrated ability to work with large-scale datasets
    - Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
    - Proficiency in at least one programming language (Python, R) for data manipulation and analysis
    - Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
    - Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
    - Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
    - Ability to communicate technical concepts clearly to non-technical stakeholders
    $110k-156k yearly est. 1d ago
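Cohort analysis, one of the analytical frameworks this posting mentions, has a simple core: group users by signup period and track what fraction remains active at each month offset. A minimal sketch in plain Python, with invented subscription events:

```python
from collections import defaultdict

def retention_by_cohort(events):
    """events: iterable of (user_id, signup_month, active_month) tuples,
    months encoded as integers. Returns
    {cohort_month: {months_since_signup: retained_fraction}}."""
    cohort_users = defaultdict(set)
    active = defaultdict(set)                       # (cohort, offset) -> users
    for user, signup, month in events:
        cohort_users[signup].add(user)
        active[(signup, month - signup)].add(user)
    return {
        cohort: {offset: len(active[(cohort, offset)]) / len(users)
                 for offset in sorted(o for c, o in active if c == cohort)}
        for cohort, users in cohort_users.items()
    }

# Hypothetical data: months as integers (0 = January)
events = [("a", 0, 0), ("b", 0, 0), ("a", 0, 1),   # cohort 0: 2 users, 1 retained
          ("c", 1, 1), ("c", 1, 2), ("d", 1, 1)]   # cohort 1: 2 users, 1 retained
print(retention_by_cohort(events))
```

The same grouping is usually done in SQL with a self-join on a signup-month CTE; the Python version just makes the denominator (cohort size) and numerator (active users at each offset) explicit.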
  • Imaging Data Engineer/Architect

    Intuitive.Ai

    Data engineer job in San Francisco, CA

    About us: Intuitive is an innovation-led engineering company delivering business outcomes for hundreds of enterprises globally. With a reputation as a tiger team and trusted partner of enterprise technology leaders, we help solve the most complex digital transformation challenges across the following Intuitive superpowers:
    - Modernization & Migration: application & database modernization; platform engineering (IaC/EaC, DevSecOps & SRE); cloud-native engineering, migration to cloud, VMware exit; FinOps
    - Data & AI/ML: data (cloud native / Databricks / Snowflake); machine learning, AI/GenAI
    - Cybersecurity: infrastructure security; application security; data security; AI/model security
    - SDx & Digital Workspace: M365, G-Suite; SDDC, SD-WAN, SDN, NetSec, wireless/mobility; email, collaboration, directory services, shared file services

    Intuitive Services: professional and advisory services; elastic engineering services; managed services; talent acquisition & platform resell services.

    About the job:
    Title: Imaging Data Engineer/Architect
    Start Date: Immediate
    # of Positions: 1
    Position Type: Contract/Full-Time
    Location: San Francisco, CA
    Notes: We need an imaging data engineer/architect who understands radiology and digital pathology and the related clinical data and metadata, with hands-on experience in these technologies and good knowledge of biomedical imaging and data pipelines overall.

    About the Role: We are seeking a highly skilled Imaging Data Engineer/Architect to join our San Francisco team as a Subject Matter Expert (SME) in radiology and digital pathology. This role will design and manage imaging data pipelines, ensuring seamless integration of clinical data and metadata to support advanced diagnostic and research applications. The ideal candidate will have deep expertise in medical imaging standards, cloud-based data architectures, and healthcare interoperability, contributing to innovative solutions that enhance patient outcomes.

    Responsibilities:
    - Design and implement scalable data architectures for radiology and digital pathology imaging data, including DICOM, HL7, and FHIR standards
    - Develop and optimize data pipelines to process and store large-scale imaging datasets (e.g., MRI, CT, histopathology slides) and associated metadata
    - Collaborate with clinical teams to understand radiology and pathology workflows, ensuring data solutions align with clinical needs
    - Ensure data integrity, security, and compliance with healthcare regulations (e.g., HIPAA, GDPR)
    - Integrate imaging data with AI/ML models for diagnostic and predictive analytics, working closely with data scientists
    - Build and maintain metadata schemas to support data discoverability and interoperability across systems
    - Provide technical expertise to cross-functional teams, including product managers and software engineers, to drive imaging data strategy
    - Conduct performance tuning and optimization of imaging data storage and retrieval systems in cloud environments (e.g., AWS, Google Cloud, Azure)
    - Document data architectures and processes, ensuring knowledge transfer to internal teams and external partners
    - Stay updated on emerging imaging technologies and standards, proposing innovative solutions to enhance data workflows

    Qualifications:
    - Education: Bachelor's degree in Computer Science, Biomedical Engineering, or a related field (Master's preferred)
    - Experience: 5+ years in data engineering or architecture, with at least 3 years focused on medical imaging (radiology and/or digital pathology); proven experience with DICOM, HL7, FHIR, and imaging metadata standards (e.g., SNOMED, LOINC); hands-on experience with cloud platforms (AWS, Google Cloud, or Azure) for imaging data storage and processing
    - Technical Skills: proficiency in programming languages (e.g., Python, Java, SQL) for data pipeline development; expertise in ETL processes, data warehousing, and database management (e.g., Snowflake, BigQuery, PostgreSQL); familiarity with AI/ML integration for imaging data analytics; knowledge of containerization (e.g., Docker, Kubernetes) for deploying data solutions
    - Domain Knowledge: deep understanding of radiology and digital pathology workflows, including PACS and LIS systems; familiarity with clinical data integration and healthcare interoperability standards
    - Soft Skills: strong analytical and problem-solving skills to address complex data challenges; excellent communication skills to collaborate with clinical and technical stakeholders; ability to work independently in a fast-paced environment, with a proactive approach to innovation
    - Certifications (preferred): AWS Certified Solutions Architect, Google Cloud Professional Data Engineer, or equivalent; certifications in medical imaging (e.g., CIIP - Certified Imaging Informatics Professional)
    $110k-157k yearly est. 1d ago
  • Data Engineer

    Odiin

    Data engineer job in San Francisco, CA

    You'll work closely with engineering, analytics, and product teams to ensure data is accurate, accessible, and efficiently processed across the organization.

    Key Responsibilities:
    - Design, develop, and maintain scalable data pipelines and architectures
    - Collect, process, and transform data from multiple sources into structured, usable formats
    - Ensure data quality, reliability, and security across all systems
    - Work with data analysts and data scientists to optimize data models for analytics and machine learning
    - Implement ETL (Extract, Transform, Load) processes and automate workflows
    - Monitor and troubleshoot data infrastructure, ensuring minimal downtime and high performance
    - Collaborate with cross-functional teams to define data requirements and integrate new data sources
    - Maintain comprehensive documentation for data systems and processes

    Requirements:
    - Proven experience as a Data Engineer, ETL Developer, or similar role
    - Strong programming skills in Python, SQL, or Scala
    - Experience with data pipeline tools (Airflow, dbt, Luigi, etc.)
    - Familiarity with big data technologies (Spark, Hadoop, Kafka, etc.)
    - Hands-on experience with cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks)
    - Understanding of data modeling, warehousing, and schema design
    - Solid knowledge of database systems (PostgreSQL, MySQL, NoSQL)
    - Strong analytical and problem-solving skills
    $110k-157k yearly est. 4d ago
  • Data Engineer - Scientific Data Ingestion

    Mithrl

    Data engineer job in San Francisco, CA

    We envision a world where novel drugs and therapies reach patients in months, not years, accelerating breakthroughs that save lives. Mithrl is building the world's first commercially available AI Co-Scientist: a discovery engine that empowers life science teams to go from messy biological data to novel insights in minutes. Scientists ask questions in natural language, and Mithrl answers with real analysis, novel targets, and patent-ready reports. No coding. No waiting. No bioinformatics bottlenecks. We are the fastest-growing tech-bio startup in the Bay Area, with over 12x year-over-year revenue growth. Our platform is already being used by teams at some of the largest biotechs and big pharma companies across three continents to accelerate and uncover breakthroughs, from target discovery to mechanism of action.

    What you will do:
    - Build and own an AI-powered ingestion and normalization pipeline to import data from a wide variety of sources: unprocessed Excel/CSV uploads, lab and instrument exports, as well as processed data from internal pipelines
    - Develop robust schema mapping, coercion, and conversion logic (think: units normalization, metadata standardization, variable-name harmonization, vendor-instrument quirks, plate-reader formats, reference-genome or annotation updates, batch-effect correction, etc.)
    - Use LLM-driven and classical data-engineering tools to structure "semi-structured" or messy tabular data: extracting metadata, inferring column roles/types, cleaning free-text headers, fixing inconsistencies, and preparing final clean datasets
    - Ensure all transformations that should only happen once (normalization, coercion, batch correction) execute during ingestion, so downstream analytics and the AI Co-Scientist always work with clean, canonical data
    - Build validation, verification, and quality-control layers to catch ambiguous, inconsistent, or corrupt data before it enters the platform
    - Collaborate with product teams, data science/bioinformatics colleagues, and infrastructure engineers to define and enforce data standards, and ensure pipeline outputs integrate cleanly into downstream analysis and storage systems

    What you bring (must-have):
    - 5+ years of experience in data engineering/data wrangling with real-world tabular or semi-structured data
    - Strong fluency in Python and data processing tools (Pandas, Polars, PyArrow, or similar)
    - Extensive experience dealing with messy Excel/CSV/spreadsheet-style data (inconsistent headers, multiple sheets, mixed formats, free-text fields) and normalizing it into clean structures
    - Comfort designing and maintaining robust ETL/ELT pipelines, ideally for scientific or lab-derived data
    - Ability to combine classical data engineering with LLM-powered data normalization, metadata extraction, and cleaning
    - Strong desire and ability to own the ingestion and normalization layer end-to-end, from raw upload to final clean dataset, with an eye for maintainability, reproducibility, and scalability
    - Good communication skills; able to collaborate across teams (product, bioinformatics, infra) and translate real-world messy data problems into robust engineering solutions

    Nice-to-have:
    - Familiarity with scientific data types and "modalities" (e.g. plate readers, genomics metadata, time series, batch info, instrumentation outputs)
    - Experience with workflow orchestration tools (e.g. Nextflow, Prefect, Airflow, Dagster), or building pipeline abstractions
    - Experience with cloud infrastructure and data storage (AWS S3, data lakes/warehouses, database schemas) to support multi-tenant ingestion
    - Past exposure to LLM-based data transformation or cleansing agents: building or integrating tools that clean or structure messy data automatically
    - Any background in computational biology, lab data, or bioinformatics is a bonus, though not required

    What you will love at Mithrl:
    - Mission-driven impact: you'll be the gatekeeper of data quality, ensuring that all scientific data entering Mithrl becomes clean, consistent, and analysis-ready. You'll have outsized influence over the reliability and trustworthiness of our entire data + AI stack
    - High ownership & autonomy: this role is yours to shape. You decide how ingestion works, define the standards, and build the pipelines. You'll work closely with our product, data science, and infrastructure teams, shaping how data is ingested, stored, and exposed to end users or AI agents
    - Team: join a tight-knit, talent-dense team of engineers, scientists, and builders
    - Culture: we value consistency, clarity, and hard work, and we solve hard problems through focused daily execution
    - Speed: we ship fast (2x/week) and improve continuously based on real user feedback
    - Location: beautiful SF office with a high-energy, in-person culture
    - Benefits: comprehensive PPO health coverage through Anthem (medical, dental, and vision) + 401(k) with top-tier plans
    $110k-157k yearly est. 3d ago
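Two of the ingestion tasks this posting describes (cleaning free-text headers and inferring column types) can be sketched with stdlib-only Python. The header strings and sample rows below are invented, and a real pipeline would layer LLM-assisted mapping and unit handling on top of rules like these:

```python
import re

def normalize_header(raw):
    """Map a free-text column header to snake_case: 'Conc. (uM)' -> 'conc_um'."""
    s = re.sub(r"[^\w]+", "_", raw.strip().lower())  # non-word runs -> underscore
    return s.strip("_")

def _castable(cast, v):
    try:
        cast(v)
        return True
    except ValueError:
        return False

def infer_type(values):
    """Infer a coarse column type from string cell values, ignoring blanks."""
    nonblank = [v for v in values if v != ""]
    if all(_castable(int, v) for v in nonblank):
        return "int"
    if all(_castable(float, v) for v in nonblank):
        return "float"
    return "str"

# Hypothetical messy instrument export
headers = ["Sample ID", "Conc. (uM)", "Read-out  value", "Notes"]
rows = [["s1", "0.5", "1200", "ok"],
        ["s2", "1.0", "1310", ""]]
cols = [normalize_header(h) for h in headers]
types = [infer_type([r[i] for r in rows]) for i in range(len(cols))]
print(cols)
print(types)
```

Running this yields canonical column names plus a coarse type per column, which is the kind of one-time-at-ingestion decision the posting insists should never be repeated downstream.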
  • Senior Data Engineer

    Skale

    Data engineer job in Fremont, CA

    We're hiring a Senior/Lead Data Engineer to join a fast-growing AI startup. The team comes from a billion-dollar AI company and has raised a $40M+ seed round. You'll need to be comfortable transforming and moving data from legacy sources into a new group-level data warehouse, and you'll have a strong data modeling background.

    - Proven proficiency in modern data transformation tools, specifically dbt and/or SQLMesh
    - Exceptional ability to apply systems thinking and complex problem-solving to ambiguous challenges
    - Experience within a high-growth startup environment (highly valued)
    - Deep, practical knowledge of the entire data lifecycle, from generation and governance through to advanced downstream applications (e.g., fueling AI/ML models, LLM consumption, and core product features)
    - Outstanding ability to communicate technical complexity clearly, synthesizing information into actionable frameworks for executive and cross-functional teams
    $126k-177k yearly est. 5d ago
  • Senior ML Data Engineer

    Midjourney

    Data engineer job in Fremont, CA

    We're the data team behind Midjourney's image generation models. We handle the dataset side: processing, filtering, scoring, captioning, and all the distributed compute that makes high-quality training data possible.

    What you'd be working on:
    Large-scale dataset processing and filtering pipelines
    Training classifiers for content moderation and quality assessment
    Models for data quality and aesthetic evaluation
    Data visualization tools for experimenting on dataset samples
    Testing/simulating distributed inference pipelines
    Monitoring dashboards for data quality and pipeline health
    Performance optimization and infrastructure scaling
    Occasionally jumping into inference optimization and other cross-team projects

    Our current stack: PySpark, Slurm, and distributed batch processing across a hybrid cloud setup. We're pragmatic about tools: if there's something better, we'll switch.

    We're looking for someone strong in either data engineering/ML pipelines at scale or cloud/infrastructure with distributed systems experience. You don't need exact tech matches; comfort with adjacent technologies and willingness to learn matters more. We work with our own hardware plus GCP and other providers, so adaptability across different environments is valuable.

    Location: SF office a few times per week (we may make exceptions on location for truly exceptional candidates).

    The role offers variety: our team members often get pulled into different projects across the company, from dataset work to inference optimization. If you're interested in the intersection of large-scale data processing and cutting-edge generative AI, we'd love to hear from you.
    $110k-156k yearly est. 4d ago
  • Senior Data Engineer

    Sigmaways Inc.

    Data engineer job in San Francisco, CA

    If you're hands-on with modern data platforms, cloud tech, and big data tools, and you like building solutions that are secure, repeatable, and fast, this role is for you. As a Senior Data Engineer, you will design, build, and maintain scalable data pipelines that transform raw information into actionable insights. The ideal candidate will have strong experience across modern data platforms, cloud environments, and big data technologies, with a focus on building secure, repeatable, and high-performing solutions.

    Responsibilities:
    Design, develop, and maintain secure, scalable data pipelines to ingest, transform, and deliver curated data into the Common Data Platform (CDP).
    Participate in Agile rituals and contribute to delivery within the Scaled Agile Framework (SAFe).
    Ensure quality and reliability of data products through automation, monitoring, and proactive issue resolution.
    Deploy alerting and auto-remediation for pipelines and data stores to maximize system availability.
    Apply a security-first, automation-driven approach to all data engineering practices.
    Collaborate with cross-functional teams (data scientists, analysts, product managers, and business stakeholders) to align infrastructure with evolving data needs.
    Stay current on industry trends and emerging tools, recommending improvements to strengthen efficiency and scalability.

    Qualifications:
    Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience).
    At least 3 years of experience with Python and PySpark, including Jupyter notebooks and unit testing.
    At least 2 years of experience with Databricks, Collibra, and Starburst.
    Proven work with relational and NoSQL databases, including star-schema and dimensional modeling approaches.
    Hands-on experience with modern data stacks: object stores (S3), Spark, Airflow, lakehouse architectures, and cloud warehouses (Snowflake, Redshift).
    Strong background in ETL and big data engineering (on-prem and cloud).
    Experience working within enterprise cloud platforms (CFS2, Cloud Foundational Services 2/EDS) for governance and compliance.
    Experience building end-to-end pipelines for structured, semi-structured, and unstructured data using Spark.
    $110k-157k yearly est. 3d ago
  • Staff Data Scientist

    Quantix Search

    Data engineer job in San Jose, CA

    Staff Data Scientist | San Francisco | $250K-$300K + Equity

    We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI. In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.

    What you'll do:
    Drive deep-dive analyses on user behavior, product performance, and growth drivers
    Design and interpret A/B tests to measure product impact at scale
    Build scalable data models, pipelines, and dashboards for company-wide use
    Partner with Product and Engineering to embed experimentation best practices
    Evaluate ML models, ensuring business relevance, performance, and trade-off clarity

    What we're looking for:
    5+ years in data science or product analytics at scale (consumer or marketplace preferred)
    Advanced SQL and Python skills, with strong foundations in statistics and experimental design
    Proven record of designing, running, and analyzing large-scale experiments
    Ability to analyze and reason about ML models (classification, recommendation, LLMs)
    Strong communicator with a track record of influencing cross-functional teams

    If you're excited by the sound of this challenge, apply today and we'll be in touch.
    $250k-300k yearly 4d ago
  • Senior Data Engineer

    Skale

    Data engineer job in San Jose, CA

    We're hiring a Senior/Lead Data Engineer to join a fast-growing AI startup. The team comes from a billion-dollar AI company and has raised a $40M+ seed round. You'll be responsible for transforming and moving data from legacy sources into a new 'group level' data warehouse.

    What we're looking for:
    A strong data modeling background
    Proven proficiency in modern data transformation tools, specifically dbt and/or SQLMesh
    Exceptional ability to apply systems thinking and complex problem-solving to ambiguous challenges
    Experience within a high-growth startup environment (highly valued)
    Deep, practical knowledge of the entire data lifecycle, from generation and governance through to advanced downstream applications (e.g., fueling AI/ML models, LLM consumption, and core product features)
    Outstanding ability to communicate technical complexity clearly, synthesizing information into actionable frameworks for executive and cross-functional teams
    $125k-177k yearly est. 5d ago
  • Senior Data Engineer

    Sigmaways Inc.

    Data engineer job in San Jose, CA

    If you're hands-on with modern data platforms, cloud tech, and big data tools, and you like building solutions that are secure, repeatable, and fast, this role is for you. As a Senior Data Engineer, you will design, build, and maintain scalable data pipelines that transform raw information into actionable insights. The ideal candidate will have strong experience across modern data platforms, cloud environments, and big data technologies, with a focus on building secure, repeatable, and high-performing solutions.

    Responsibilities:
    Design, develop, and maintain secure, scalable data pipelines to ingest, transform, and deliver curated data into the Common Data Platform (CDP).
    Participate in Agile rituals and contribute to delivery within the Scaled Agile Framework (SAFe).
    Ensure quality and reliability of data products through automation, monitoring, and proactive issue resolution.
    Deploy alerting and auto-remediation for pipelines and data stores to maximize system availability.
    Apply a security-first, automation-driven approach to all data engineering practices.
    Collaborate with cross-functional teams (data scientists, analysts, product managers, and business stakeholders) to align infrastructure with evolving data needs.
    Stay current on industry trends and emerging tools, recommending improvements to strengthen efficiency and scalability.

    Qualifications:
    Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience).
    At least 3 years of experience with Python and PySpark, including Jupyter notebooks and unit testing.
    At least 2 years of experience with Databricks, Collibra, and Starburst.
    Proven work with relational and NoSQL databases, including star-schema and dimensional modeling approaches.
    Hands-on experience with modern data stacks: object stores (S3), Spark, Airflow, lakehouse architectures, and cloud warehouses (Snowflake, Redshift).
    Strong background in ETL and big data engineering (on-prem and cloud).
    Experience working within enterprise cloud platforms (CFS2, Cloud Foundational Services 2/EDS) for governance and compliance.
    Experience building end-to-end pipelines for structured, semi-structured, and unstructured data using Spark.
    $110k-156k yearly est. 3d ago
  • Data Engineer

    Midjourney

    Data engineer job in San Jose, CA

    Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community. Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.

    Core Responsibilities:
    Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
    Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics, including revenue performance, infrastructure costs, customer retention, and operational efficiency
    Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
    Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
    Partner with Finance, Product, and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
    Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
    Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments

    Required Qualifications:
    3-5+ years of experience in data engineering, analytics engineering, or a similar role, with demonstrated ability to work with large-scale datasets
    Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
    Proficiency in at least one programming language (Python, R) for data manipulation and analysis
    Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
    Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
    Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
    Ability to communicate technical concepts clearly to non-technical stakeholders
    $110k-156k yearly est. 1d ago

Learn more about data engineer jobs

How much does a data engineer earn in San Bruno, CA?

The average data engineer in San Bruno, CA earns between $94,000 and $184,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in San Bruno, CA

$132,000

What are the biggest employers of Data Engineers in San Bruno, CA?

The biggest employers of Data Engineers in San Bruno, CA are:
  1. Genentech
  2. Zoox
  3. Ernst & Young
  4. Redolent
  5. Vertex
  6. Verily
  7. Skydio
  8. M.I.T. International, Inc.
  9. Visa
  10. Roblox