
Data engineer jobs in Stockton, CA

- 772 jobs
  • Staff Data Scientist - Post Sales

    Harnham

    Data engineer job in Fremont, CA

    Salary: $200-250k base + RSUs

    This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're expanding our data science organization to accelerate customer success after the initial sale, driving onboarding, retention, expansion, and long-term revenue growth.

    About the Role
    As the senior data scientist supporting post-sales teams, you will use advanced analytics, experimentation, and predictive modeling to guide strategy across Customer Success, Account Management, and Renewals. Your insights will help leadership forecast expansion, reduce churn, and identify the levers that unlock sustainable net revenue retention.

    Key Responsibilities
    • Forecast & Model Growth: Build predictive models for renewal likelihood, expansion potential, churn risk, and customer health scoring.
    • Optimize the Customer Journey: Analyze onboarding flows, product adoption patterns, and usage signals to improve activation, engagement, and time-to-value.
    • Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of onboarding programs, success initiatives, and pricing changes on retention and expansion.
    • Revenue Insights: Partner with Customer Success and Sales to identify high-value accounts, cross-sell opportunities, and early warning signs of churn.
    • Cross-Functional Partnership: Collaborate with Product, RevOps, Finance, and Marketing to align post-sales strategies with company growth goals.
    • Data Infrastructure Collaboration: Work with Analytics Engineering to define data requirements, maintain data quality, and enable self-serve dashboards for Success and Finance teams.
    • Executive Storytelling: Present clear, actionable recommendations to senior leadership that translate complex analysis into strategic decisions.

    About You
    • Experience: 6+ years in data science or advanced analytics, with a focus on post-sales, customer success, or retention analytics in a B2B SaaS environment.
    • Technical Skills: Expert SQL and proficiency in Python or R for statistical modeling, forecasting, and machine learning.
    • Domain Knowledge: Deep understanding of SaaS metrics such as net revenue retention (NRR), gross churn, expansion ARR, and customer health scoring.
    • Analytical Rigor: Strong background in experimentation design, causal inference, and predictive modeling to inform customer-lifecycle strategy.
    • Communication: Exceptional ability to translate data into compelling narratives for executives and cross-functional stakeholders.
    • Business Impact: Demonstrated success improving onboarding efficiency, retention rates, or expansion revenue through data-driven initiatives.
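    For context on the SaaS metrics this posting names (NRR, gross churn): they are conventionally computed from cohort ARR movements. A minimal sketch, with made-up numbers that are not from the posting:

```python
def nrr(starting_arr, expansion, contraction, churned):
    """Net revenue retention over a period, as a fraction of starting ARR."""
    return (starting_arr + expansion - contraction - churned) / starting_arr

def gross_churn(starting_arr, churned):
    """Gross churn rate: ARR lost to cancellations, ignoring expansion."""
    return churned / starting_arr

# Illustrative cohort: $1.0M starting ARR, $200k expansion,
# $50k downgrades, $100k churned
print(round(nrr(1_000_000, 200_000, 50_000, 100_000), 3))  # 1.05
print(round(gross_churn(1_000_000, 100_000), 3))           # 0.1
```

    An NRR above 1.0 means expansion outpaced churn and contraction, which is the "sustainable net revenue retention" the role is asked to drive.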
    $200k-250k yearly 5d ago
  • Staff Data Scientist

    Quantix Search

    Data engineer job in Fremont, CA

    Staff Data Scientist | San Francisco | $250K-$300K + Equity

    We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI. In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.

    What you'll do:
    • Drive deep-dive analyses on user behavior, product performance, and growth drivers
    • Design and interpret A/B tests to measure product impact at scale
    • Build scalable data models, pipelines, and dashboards for company-wide use
    • Partner with Product and Engineering to embed experimentation best practices
    • Evaluate ML models, ensuring business relevance, performance, and trade-off clarity

    What we're looking for:
    • 5+ years in data science or product analytics at scale (consumer or marketplace preferred)
    • Advanced SQL and Python skills, with strong foundations in statistics and experimental design
    • Proven record of designing, running, and analyzing large-scale experiments
    • Ability to analyze and reason about ML models (classification, recommendation, LLMs)
    • Strong communicator with a track record of influencing cross-functional teams

    If you're excited by the sound of this challenge, apply today and we'll be in touch.
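    As background on the "design and interpret A/B tests" responsibility: a two-proportion z-test is one standard way to read an A/B result. A stdlib-only sketch (the conversion counts are invented for illustration):

```python
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Control: 200/4000 converted (5.0%); variant: 260/4000 (6.5%)
z, p = ab_z_test(200, 4000, 260, 4000)
print(round(z, 2), p < 0.05)  # z ≈ 2.9, significant at the 5% level
```

    The same machinery underlies most experimentation platforms; at scale, corrections for multiple comparisons and sequential peeking matter too.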
    $250k-300k yearly 1d ago
  • Guidewire DataHub/InfoCenter Engineer

    Aspire Systems (4.4 company rating)

    Data engineer job in Stockton, CA

    Hands-on experience with the DataHub and InfoCenter platform. Experience in production support, BAU, enhancement, and development. Works with businesses to identify detailed analytical and operational reporting/extract requirements. Collaborates with data analysts, architects, engineers, and business stakeholders to understand data requirements. Able to create complex Microsoft SQL / ETL / SSIS queries. Handles end-to-end loads.

    Qualifications
    • Experience with Snowflake and dbt (Data Build Tool).
    • 6-9 years of experience in P&C Insurance on the Guidewire DataHub/InfoCenter platform.
    • Must have at least one DHIC on-premises or cloud implementation.
    • Well versed with AWS services: working with S3 storage and the Aurora database.
    • Experience with SQL Server and Oracle databases; able to create PL/SQL stored procedures.
    • Hands-on experience with Guidewire ClaimCenter/PolicyCenter/BillingCenter data models.
    • SAP BODS ETL design & administration experience.
    • Data warehousing experience that includes analysis and development of dataflows and mappings using the needed BODS transformations.
    • Hands-on experience with data specifications.
    • Experience with DataHub and InfoCenter initial loads and delta loads.
    • Experience with the DataHub and InfoCenter Guidewire Commit and Rollback utility.
    • Experience extending entities & attributes in DataHub and InfoCenter.
    • Experienced in the Property & Casualty insurance industry.

    About Aspire Systems
    Aspire Systems is a $180+ million global technology services firm with over 4,500 employees worldwide, partnering with 275+ active customers. Founded in 1996, Aspire has grown steadily at a 19% CAGR since 2020. Headquartered in Singapore, we operate across the US, UK, LATAM, Europe, the Middle East, India, and Asia Pacific regions, with strong nearshore delivery centers in Poland and Mexico. Aspire has been consistently recognized among India's 100 Best Companies to Work For for 12 consecutive years by the Great Place to Work Institute.

    Who We Are
    Aspire is built on deep expertise in Software Engineering, Digital Services, Testing, and Infrastructure & Application Support. We serve diverse industries including Independent Software Vendors, Retail, Banking & Financial Services, and Insurance. Our proven frameworks and accelerators enable us to create future-ready, scalable, and business-focused systems, helping customers across the globe embrace digital transformation at speed and scale.

    What We Believe
    At the heart of Aspire is our philosophy of "Attention. Always.": a commitment to investing care and focus in our customers, employees, and society.

    Our Commitment to Diversity & Inclusion
    At Aspire Systems, we foster a work culture that appreciates diversity and inclusiveness. We understand that our multigenerational workforce represents different regions, cultures, economic backgrounds, races, genders, ethnicities, education levels, personalities, and religions. We believe these differences make us stronger and are committed to building an inclusive workplace where everyone feels respected and valued.

    Privacy Notice
    Aspire Systems values your privacy. Candidate information collected through this recruitment process will be used solely for hiring purposes, handled securely, and retained only as long as necessary in compliance with applicable privacy laws.

    Disclaimer
    The above statements are not intended to be a complete statement of job content, but rather to serve as a guide to the essential functions performed by the employee in this role. The organization retains the discretion to add or change the duties of the position at any time.
    $127k-167k yearly est. 1d ago
  • Staff Data Engineer

    Strativ Group

    Data engineer job in Fremont, CA

    🌎 San Francisco (Hybrid) 💼 Founding/Staff Data Engineer 💵 $200-300k base

    Our client is an elite applied AI research and product lab building AI-native systems for finance, pushing frontier models into real production environments. Their work sits at the intersection of data, research, and high-stakes financial decision-making. As the Founding Data Engineer, you will own the data platform that powers everything: models, experiments, and user-facing products relied on by demanding financial customers. You'll make foundational architectural decisions, work directly with researchers and product engineers, and help define how data is built, trusted, and scaled from day one.

    What you'll do:
    • Design and build the core data platform, ingesting, transforming, and serving large-scale financial and alternative datasets.
    • Partner closely with researchers and ML engineers to ship production-grade data and feature pipelines that power cutting-edge models.
    • Establish data quality, observability, lineage, and reproducibility across both experimentation and production workloads.
    • Deploy and operate data services using Docker and Kubernetes in a modern cloud environment (AWS, GCP, or Azure).
    • Make foundational choices on tooling, architecture, and best practices that will define how data works across the company.
    • Continuously simplify and evolve systems, rewriting pipelines or infrastructure when it's the right long-term decision.

    Ideal candidate:
    • Has owned or built high-performance data systems end-to-end, directly supporting production applications and ML models.
    • Is strongest in backend and data infrastructure, with enough frontend literacy to integrate cleanly with web products when needed.
    • Can design and evolve backend services and pipelines (Node.js or Python) to support new product features and research workflows.
    • Is an expert in at least one statically typed language, with a strong bias toward type safety, correctness, and maintainable systems.
    • Has deployed data workloads and services using Docker and Kubernetes on a major cloud provider.
    • Is comfortable making hard calls: simplifying, refactoring, or rebuilding legacy pipelines when quality and scalability demand it.
    • Uses AI tools to accelerate work, but rigorously reviews and validates AI-generated code, insisting on sound system design.
    • Thrives in a high-bar, high-ownership environment with other exceptional engineers.
    • Loves deep technical problems in data infrastructure, distributed systems, and performance.

    Nice to have:
    • Experience working with financial data (market, risk, portfolio, transactional, or alternative datasets).
    • Familiarity with ML infrastructure, such as feature stores, experiment tracking, or model serving systems.
    • Background in a high-growth startup or a foundational infrastructure role.

    Compensation & setup:
    • Competitive salary and founder-level equity
    • Hybrid role based in San Francisco, with close collaboration and significant ownership
    • Small, elite team building core infrastructure with outsized impact
    $200k-300k yearly 5d ago
  • Data Scientist

    Centraprise

    Data engineer job in Pleasanton, CA

    Key Responsibilities
    • Design and develop marketing-focused machine learning models, including: customer segmentation; propensity, churn, and lifetime value (LTV) models; campaign response and uplift models; attribution and marketing mix models (MMM).
    • Build and deploy NLP solutions for: customer sentiment analysis; text classification and topic modeling; social media, reviews, chat, and voice-of-customer analytics.
    • Apply advanced statistical and ML techniques to solve real-world business problems.
    • Work with structured and unstructured data from multiple marketing channels (digital, CRM, social, email, web).
    • Translate business objectives into analytical frameworks and actionable insights.
    • Partner with stakeholders to define KPIs, success metrics, and experimentation strategies (A/B testing).
    • Optimize and productionize models using MLOps best practices.
    • Mentor junior data scientists and provide technical leadership.
    • Communicate complex findings clearly to technical and non-technical audiences.

    Required Skills & Qualifications
    • 7+ years of experience in Data Science, with a strong focus on marketing analytics.
    • Strong expertise in Machine Learning (supervised & unsupervised techniques).
    • Hands-on experience with NLP techniques, including text preprocessing and feature extraction, and word embeddings (Word2Vec, GloVe, Transformers); Large Language Models (LLMs) a plus.
    • Proficiency in Python (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch).
    • Experience with SQL and large-scale data processing.
    • Strong understanding of statistics, probability, and experimental design.
    • Experience working with cloud platforms (AWS, Azure, or GCP).
    • Ability to translate data insights into business impact.

    Nice to Have
    • Experience with marketing automation or CRM platforms.
    • Knowledge of MLOps, model monitoring, and deployment pipelines.
    • Familiarity with GenAI/LLM-based NLP use cases for marketing.
    • Prior experience in consumer, e-commerce, or digital marketing domains.
EEO Centraprise is an equal opportunity employer. Your application and candidacy will not be considered based on race, color, sex, religion, creed, sexual orientation, gender identity, national origin, disability, genetic information, pregnancy, veteran status or any other characteristic protected by federal, state or local laws.
    $107k-155k yearly est. 3d ago
  • AI Data Engineer

    Hartleyco

    Data engineer job in Fremont, CA

    Member of Technical Staff - AI Data Engineer
    San Francisco (In-Office) | $150K to $225K + Equity

    A high-growth, AI-native startup coming out of stealth is hiring AI Data Engineers to build the systems that power production-grade AI. The company has recently signed a Series A term sheet and is scaling rapidly. This role is central to unblocking current bottlenecks across data engineering, context modeling, and agent performance.

    Responsibilities:
    • Build distributed, reliable data pipelines using Airflow, Temporal, and n8n
    • Model SQL, vector, and NoSQL databases (Postgres, Qdrant, etc.)
    • Build API and function-based services in Python
    • Develop custom automations (Playwright, Stagehand, Zapier)
    • Work with AI researchers to define and expose context as services
    • Identify gaps in data quality and drive changes to upstream processes
    • Ship fast, iterate, and own outcomes end-to-end

    Required Experience:
    • Strong background in data engineering
    • Hands-on experience working with LLMs or LLM-powered applications
    • Data modeling skills across SQL and vector databases
    • Experience building distributed systems
    • Experience with Airflow, Temporal, n8n, or similar workflow engines
    • Python experience (API/services)
    • Startup mindset and bias toward rapid execution

    Nice To Have:
    • Experience with stream processing (Flink)
    • dbt or ClickHouse experience
    • CDC pipelines
    • Experience with context construction, RAG, or agent workflows
    • Analytical tooling (PostHog)

    What You Can Expect:
    • High-intensity, in-office environment
    • Fast decision-making and rapid shipping cycles
    • Real ownership over architecture and outcomes
    • Opportunity to work on AI systems operating at meaningful scale
    • Competitive compensation package
    • Meals provided plus full medical, dental, and vision benefits

    If this sounds like you, please apply now.
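    For readers unfamiliar with the vector-database and RAG work this posting references: retrieval boils down to ranking stored embeddings by similarity to a query embedding. A minimal pure-Python sketch (the toy 3-d vectors and document names are invented; real systems use model-generated embeddings and an index like Qdrant):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def top_k(query, index, k=2):
    """Rank stored (doc_id, embedding) pairs by similarity to the query."""
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-d "embeddings" standing in for real model output
index = [("pricing_faq", [0.9, 0.1, 0.0]),
         ("api_docs",    [0.1, 0.9, 0.2]),
         ("changelog",   [0.0, 0.2, 0.9])]
print(top_k([1.0, 0.2, 0.0], index, k=2))  # ['pricing_faq', 'api_docs']
```

    "Context as a service" in the posting's sense wraps this kind of retrieval behind an API that agents and LLM prompts can call.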
    $150k-225k yearly 2d ago
  • Senior Data Engineer

    X4 Engineering

    Data engineer job in Fremont, CA

    The Company:
    A data services company based in the heart of San Francisco is looking for a Senior Data Engineer. They are a team of passionate engineers and data experts working on a variety of projects, primarily in the financial services sector, helping organizations build scalable, modern data platforms. This is a hands-on, full-time role with close collaboration alongside the CTO and senior engineers, offering strong influence over technical direction and delivery.

    The Role:
    This is an on-site position in downtown San Francisco where you will work as part of a close-knit team, collaborating on projects in their brand-new office. You will work across end-to-end data projects, including:
    • Building and maintaining data pipelines and ETL processes.
    • Sourcing and integrating third-party APIs and datasets.
    • Batch and near-real-time processing (cloud agnostic).
    • Downstream analytics and reporting using tools like Sigma Computing and Omnium Analytics.
    • Collaborating with the CTO and engineering team to deliver client solutions.

    Key Skills:
    • 5+ years' data engineering experience
    • Strong Python, BigQuery, and cloud (GCP or similar)
    • Solid ETL and pipeline background
    • Comfortable with large-scale data

    Nice to Have:
    • Beam, Dataflow, Spark, or Hadoop
    • Tableau or Looker
    • ML/AI exposure
    • Kafka or Pub/Sub

    Given the varied nature of the work, a broad range of technology experience is valued. You don't need experience with every tool listed above to be considered, so we encourage you to apply. This role is five days a week on-site in downtown San Francisco. Looking to pay between $170,000-$220,000 with a bonus between 15-20%.

    Benefits: Health, dental & vision covered; unlimited PTO; 401(k) with employer contribution; commuter benefits.
    $110k-156k yearly est. 1d ago
  • Data Engineer

    Infovision Inc. (4.4 company rating)

    Data engineer job in Pleasanton, CA

    Job Title: Data Engineer
    The hiring manager prefers the candidate to be on-site in Pleasanton.

    Proficiency in Spark, Python, and SQL is essential for this role. 10+ years of experience with relational databases such as Oracle, NoSQL databases including MongoDB and Cassandra, and big data technologies, particularly Databricks, is required. Strong knowledge of data modeling techniques is necessary for designing efficient and scalable data structures. Familiarity with APIs and web services, including REST and SOAP, is important for integrating various data sources and ensuring seamless data flow. This role involves leveraging these technical skills to build and maintain robust data pipelines and support advanced data analytics.

    SKILLS:
    • Spark / Python / SQL
    • Relational databases (Oracle) / NoSQL databases (MongoDB, Cassandra) / Databricks
    • Big data technologies (Databricks preferred)
    • Data modeling techniques
    • APIs and web services (REST / SOAP)

    If interested, please share the details below with an updated resume: Full Name, Phone, E-mail, Rate, Location, Visa Status, Availability, SSN (last 4 digits), Date of Birth, LinkedIn Profile, Availability for the interview, Availability for the project.
    $109k-150k yearly est. 5d ago
  • Data Modeler

    Estaff LLC

    Data engineer job in Sacramento, CA

    We are seeking a senior, hands-on Data Analyst / Data Modeler with strong communication skills for a long-term hybrid assignment with our Sacramento, California-based client. This position requires you to be able to work on-site in Sacramento, California, on Mondays and Wednesdays each week. Candidates must currently live within 60 miles of Sacramento, CA.

    Requirements:
    • Senior, hands-on data modeler with strong communication skills.
    • Expert-level command of the ER/Studio Data Architect modeling application.
    • Strong ability to articulate data modeling principles and gather requirements from non-technical business stakeholders.
    • Excellent presentation skills for different (business and technical) audiences, ranging from senior leadership to operational staff, with no supervision.
    • Ability to translate business and functional requirements into technical requirements for technical team members.
    • Candidates need to be able to demonstrate direct, recent, hands-on practical experience in the areas identified, with specific examples.

    Qualifications:
    Mandatory:
    • Minimum of ten (10) years demonstrable experience in the data management space, with at least 5 years specializing in database design and at least 5 years in data modeling.
    • Minimum of five (5) years of experience as a data analyst or in other quantitative analysis or related disciplines, such as a researcher or data engineer, supportive of the key duties/responsibilities identified above.
    • Minimum of five (5) years of demonstrated experience with the ER/Studio data modeling application.
    • Minimum of five (5) years of relevant experience in relational data modeling and dimensional data modeling, statistical analysis, and machine learning, supportive of the key duties/responsibilities identified above.
    • Excellent communication and collaboration skills to work effectively with stakeholders and team members.
    • At least 2 years of experience working on Star, Snowflake, and/or Hybrid schemas.
    • Oracle/ODI/OCI/ADW experience required.

    Desired:
    • At least 2 years of experience working on Oracle Autonomous Data Warehouse (ADW), specifically installed in an OCI environment.
    • Expert-level Kimball dimensional data modeling experience.
    • Expert-level experience developing in Oracle SQL Developer or ER/Studio Data Architect for Oracle.
    • Ability to develop and perform Extract, Transform, and Load (ETL) activities using Oracle tools and PL/SQL, with at least 2 years of experience.
    • Ability to provide technical leadership of an Oracle data warehouse team, including but not limited to ETL, requirements solicitation, DBA, data warehouse administration, and data analysis, on a hands-on basis.
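    For readers unfamiliar with the Star schemas and Kimball dimensional modeling this posting requires: a star schema is one central fact table with foreign keys into denormalized dimension tables. A minimal runnable sketch using SQLite (table and column names are illustrative, not from the posting):

```python
import sqlite3

# Minimal Kimball-style star schema: one fact table referencing two dimensions
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, cal_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
INSERT INTO dim_date VALUES (1, '2024-01-05', '2024-01'), (2, '2024-02-10', '2024-02');
INSERT INTO dim_product VALUES (10, 'Widget', 'Hardware'), (11, 'Gadget', 'Hardware');
INSERT INTO fact_sales VALUES (1, 10, 100.0), (1, 11, 50.0), (2, 10, 75.0);
""")

# The classic star join: facts grouped by a dimension attribute
rows = con.execute("""
    SELECT d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d USING (date_key)
    GROUP BY d.month ORDER BY d.month
""").fetchall()
print(rows)  # [('2024-01', 150.0), ('2024-02', 75.0)]
```

    A snowflake schema further normalizes the dimensions (e.g., splitting category out of dim_product into its own table); a hybrid schema mixes the two.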
    $109k-155k yearly est. 4d ago
  • Senior Data Engineer

    Skale (3.7 company rating)

    Data engineer job in Fremont, CA

    We're hiring a Senior/Lead Data Engineer to join a fast-growing AI startup. The team comes from a billion-dollar AI company and has raised a $40M+ seed round. You'll need to be comfortable transforming and moving data from legacy sources into a new, group-level data warehouse.

    What we're looking for:
    • A strong data modeling background.
    • Proven proficiency in modern data transformation tools, specifically dbt and/or SQLMesh.
    • Exceptional ability to apply systems thinking and complex problem-solving to ambiguous challenges.
    • Experience within a high-growth startup environment (highly valued).
    • Deep, practical knowledge of the entire data lifecycle, from generation and governance through to advanced downstream applications (e.g., fueling AI/ML models, LLM consumption, and core product features).
    • Outstanding ability to communicate technical complexity clearly, synthesizing information into actionable frameworks for executive and cross-functional teams.
    $126k-177k yearly est. 2d ago
  • Senior Data Engineer - Spark, Airflow

    Sigmaways Inc.

    Data engineer job in Fremont, CA

    We are seeking an experienced Data Engineer to design and optimize scalable data pipelines that drive our global data and analytics initiatives. In this role, you will leverage technologies such as Apache Spark, Airflow, and Python to build high-performance data processing systems and ensure data quality, reliability, and lineage across Mastercard's data ecosystem. The ideal candidate combines strong technical expertise with hands-on experience in distributed data systems, workflow automation, and performance tuning to deliver impactful, data-driven solutions at enterprise scale.

    Responsibilities:
    • Design and optimize Spark-based ETL pipelines for large-scale data processing.
    • Build and manage Airflow DAGs for scheduling, orchestration, and checkpointing.
    • Implement partitioning and shuffling strategies to improve Spark performance.
    • Ensure data lineage, quality, and traceability across systems.
    • Develop Python scripts for data transformation, aggregation, and validation.
    • Execute and tune Spark jobs using spark-submit.
    • Perform DataFrame joins and aggregations for analytical insights.
    • Automate multi-step processes through shell scripting and variable management.
    • Collaborate with data, DevOps, and analytics teams to deliver scalable data solutions.

    Qualifications:
    • Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
    • At least 7 years of experience in data engineering or big data development.
    • Strong expertise in Apache Spark architecture, optimization, and job configuration.
    • Proven experience with Airflow DAGs, including authoring, scheduling, checkpointing, and monitoring.
    • Skilled in data shuffling, partitioning strategies, and performance tuning in distributed systems.
    • Expertise in Python programming, including data structures and algorithmic problem-solving.
    • Hands-on with Spark DataFrames and PySpark transformations using joins, aggregations, and filters.
    • Proficient in shell scripting, including managing and passing variables between scripts.
    • Experienced with spark-submit for deployment and tuning.
    • Solid understanding of ETL design, workflow automation, and distributed data systems.
    • Excellent debugging and problem-solving skills in large-scale environments.
    • Experience with AWS Glue, EMR, Databricks, or similar Spark platforms.
    • Knowledge of data lineage and data quality frameworks like Apache Atlas.
    • Familiarity with CI/CD pipelines, Docker/Kubernetes, and data governance tools.
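    For context on the Airflow DAG responsibilities above: a DAG is just a dependency graph of tasks, and scheduling reduces to executing them in topological order. A stdlib-only sketch of that ordering idea (this is not Airflow's API; the task names are invented):

```python
from graphlib import TopologicalSorter

# Toy pipeline dependencies: task -> set of upstream tasks it waits on
dag = {
    "extract":   set(),
    "transform": {"extract"},
    "validate":  {"transform"},
    "load":      {"validate"},
    "report":    {"load"},
}

# A scheduler runs tasks only after all their upstreams have finished
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'validate', 'load', 'report']
```

    In real Airflow the same dependencies would be declared with operators and `>>` chaining, and the scheduler adds retries, checkpointing, and parallel execution of independent branches.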
    $110k-156k yearly est. 4d ago
  • Data Engineer / Analytics Specialist

    Ittconnect

    Data engineer job in Fremont, CA

    Citizenship Requirement: U.S. citizens only.

    ITTConnect is seeking a Data Engineer / Analytics Specialist to work for one of our clients, a major technology consulting firm headquartered in Europe. They are experts in tailored technology consulting and services for banks, investment firms, and other financial-vertical clients.

    Job location: San Francisco Bay Area or New York City.
    Work model: Ability to come into the office as requested.
    Seniority: 10+ years of total experience.

    About the role:
    The Data Engineer / Analytics Specialist will support analytics, product insights, and AI initiatives. You will build robust data pipelines, integrate data sources, and enhance the organization's analytical foundations.

    Responsibilities:
    • Build and operate Snowflake-based analytics environments.
    • Develop ETL/ELT pipelines (dbt, Airflow, etc.).
    • Integrate APIs, external data sources, and streaming inputs.
    • Perform query optimization, basic data modeling, and analytics support.
    • Enable downstream GenAI and analytics use cases.

    Requirements:
    • 10+ years of overall technology experience.
    • 3+ years of hands-on AWS experience required.
    • Strong SQL and Snowflake experience.
    • Hands-on pipeline engineering with dbt, Airflow, or similar.
    • Experience with API integrations and modern data architectures.
    $110k-156k yearly est. 1d ago
  • Data Engineer

    Aaratech

    Data engineer job in Fremont, CA

    Job Title: Data Engineer - Retail / E-Commerce
    🏢 Company: Aaratech Inc
    🛑 Eligibility: Only U.S. citizens and Green Card holders are eligible. Please note that we do not offer visa sponsorship.

    Aaratech Inc. is seeking a results-driven Data Engineer - Retail / E-Commerce to support customer, sales, and product data platforms. The role focuses on building scalable pipelines that enable real-time and batch analytics for business growth.

    Key Responsibilities:
    🔹 Data Pipeline Development: Develop and maintain data pipelines for sales, customer, and product data. Integrate data from e-commerce platforms and marketing systems.
    🔹 Data Modeling: Design data models to support analytics and BI reporting. Optimize performance and scalability.
    🔹 Data Quality: Ensure data accuracy, completeness, and consistency. Implement monitoring and error-handling processes.
    🔹 Collaboration: Work closely with data analysts, product, and marketing teams.
    🔹 Tools & Technologies: Use SQL, Python, ETL tools, and cloud data platforms.

    Qualifications:
    ✅ Bachelor's degree in Computer Science, Engineering, or a related field
    ✅ Minimum 2 years of data engineering experience
    ✅ Strong SQL and Python skills
    ✅ Experience with data pipelines and analytics platforms
    ✅ Retail or e-commerce data experience preferred
    ✅ Strong problem-solving skills
    $110k-156k yearly est. 2d ago
  • Data Engineer

    Midjourney

    Data engineer job in Fremont, CA

    Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community. Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.

    Core Responsibilities:
    • Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
    • Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
    • Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
    • Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
    • Partner with Finance, Product, and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
    • Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
    • Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments

    Required Qualifications:
    • 3-5+ years of experience in data engineering, analytics engineering, or a similar role, with demonstrated ability to work with large-scale datasets
    • Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
    • Proficiency in at least one programming language (Python, R) for data manipulation and analysis
    • Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
    • Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
    • Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
    • Ability to communicate technical concepts clearly to non-technical stakeholders
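    For context on the cohort analysis mentioned in the responsibilities: retention is usually tabulated as the fraction of each signup cohort still active N periods later. A minimal sketch with invented event data (real pipelines would do this in SQL over warehouse tables):

```python
from collections import defaultdict

def cohort_retention(events):
    """events: (user_id, signup_month, active_month) tuples.
    Returns {signup_month: {months_since_signup: retained_fraction}}."""
    cohorts = defaultdict(set)   # signup_month -> users in that cohort
    active = defaultdict(set)    # (signup_month, offset) -> users active then
    for user, signup, month in events:
        cohorts[signup].add(user)
        active[(signup, month - signup)].add(user)
    return {signup: {off: len(active[(s, off)]) / len(users)
                     for (s, off) in list(active) if s == signup}
            for signup, users in cohorts.items()}

events = [
    ("u1", 0, 0), ("u2", 0, 0), ("u1", 0, 1),  # cohort 0: 2 users, 1 active at month 1
    ("u3", 1, 1),                               # cohort 1: 1 user
]
print(cohort_retention(events))  # {0: {0: 1.0, 1: 0.5}, 1: {0: 1.0}}
```

    The same triangle of cohort-by-offset fractions is what retention heatmaps in BI tools visualize.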
    $110k-156k yearly est. 3d ago
  • Sr Data Platform Engineer

    The Judge Group (4.7 company rating)

    Data engineer job in Elk Grove, CA

    Hybrid role, three days a week in the office in Elk Grove, CA; no remote capabilities. This is a direct-hire opportunity.

    We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full-time role is hands-on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high-performing.

    Responsibilities:
    • Build and maintain robust data pipelines (structured, semi-structured, unstructured).
    • Implement ETL workflows with Spark, Delta Lake, and cloud-native tools.
    • Support big data platforms (Databricks, Snowflake, GCP) in production.
    • Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
    • Ensure governance, security, and compliance across data systems.
    • Integrate workflows into CI/CD pipelines with Git, Jenkins, and Terraform.
    • Collaborate cross-functionally to translate business needs into technical solutions.

    Qualifications:
    • 7+ years in data engineering with production pipeline experience.
    • Expertise in the Spark ecosystem, Databricks, Snowflake, and GCP.
    • Strong skills in PySpark, Python, and SQL.
    • Experience with RAG systems, semantic search, and LLM integration.
    • Familiarity with Kafka, Pub/Sub, and vector databases.
    • Proven ability to optimize ETL jobs and troubleshoot production issues.
    • Agile team experience and excellent communication skills.
    • Certifications in Databricks, Snowflake, GCP, or Azure.
    • Exposure to Airflow and BI tools (Power BI, Looker Studio).
    $108k-153k yearly est. 4d ago
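The "troubleshoot and optimize SQL queries" work in the listing above can be illustrated with a small, self-contained sketch. This uses sqlite3 rather than Spark or Snowflake, purely for illustration: adding an index on the filtered column turns a full table scan into an index search, the same basic diagnosis-and-fix loop a platform engineer runs against warehouse query plans.

```python
import sqlite3

# In-memory database standing in for a warehouse table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_type TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, "click", f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Without an index, the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

# Adding an index on the filter column lets the planner use an index search.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

print(plan_before)  # e.g. "SCAN events"
print(plan_after)   # e.g. "SEARCH events USING COVERING INDEX idx_events_user (user_id=?)"
```

The exact plan strings vary by SQLite version, but the SCAN-to-SEARCH transition is the signal to look for; in Spark the analogous step is inspecting the physical plan with `EXPLAIN` and adding partitioning or caching.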
  • AWS Data Architect

Fractal (4.2 company rating)

    Data engineer job in Fremont, CA

Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal. Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is scalable and performant, and create automated data pipelines. Responsibilities: Design & Architecture of Scalable Data Platforms: Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs. Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management). Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility. Client & Business Stakeholder Engagement: Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases. Data Pipeline Development & Collaboration: Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL. Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets. Performance, Scalability, and Reliability: Optimize Spark jobs for performance, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques. Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools. Security, Compliance & Governance: Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies. Establish data governance practices including a Data Fitness Index, quality scores, SLA monitoring, and metadata cataloging. Adoption of AI Copilots & Agentic Development: Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks. Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation. Innovation and Continuous Learning: Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling. Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements. Requirements: Bachelor's or master's degree in computer science, information technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark. Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL. Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc. Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage. Experience designing Lakehouse architectures with bronze, silver, and gold layering. Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing. Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.). Experience building CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions. Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources. In-depth experience with AWS cloud services such as Glue, S3, Redshift, etc. Strong understanding of data privacy, access controls, and governance best practices. Experience working with RBAC, tokenization, and data classification frameworks. Excellent communication skills for stakeholder interaction, solution presentations, and team coordination. Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements. Ability to work independently in agile or hybrid delivery models while guiding junior engineers and ensuring solution quality. Must be able to work in the PST time zone. Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs.
The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period. Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation. Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 4d ago
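The Bronze (raw), Silver (cleansed), Gold (curated) layering described in the listing above is a general lakehouse pattern. A toy, dependency-free sketch follows, with plain Python lists standing in for Delta tables; in a real build these stages would be PySpark jobs writing Delta tables on Databricks, and failed rows would be quarantined rather than dropped.

```python
# Bronze: raw records exactly as ingested (duplicates and bad values included).
bronze = [
    {"order_id": "1", "amount": "19.99", "region": "west"},
    {"order_id": "1", "amount": "19.99", "region": "west"},   # duplicate
    {"order_id": "2", "amount": "bad",   "region": "east"},   # unparseable
    {"order_id": "3", "amount": "5.00",  "region": "east"},
]

def to_silver(rows):
    """Silver: deduplicate on the key and enforce types, dropping rows that fail."""
    seen, silver = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue  # skip duplicates of an already-accepted order
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine this row instead
        seen.add(r["order_id"])
        silver.append({"order_id": r["order_id"], "amount": amount, "region": r["region"]})
    return silver

def to_gold(rows):
    """Gold: business-level aggregate (revenue per region)."""
    gold = {}
    for r in rows:
        gold[r["region"]] = gold.get(r["region"], 0.0) + r["amount"]
    return gold

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'west': 19.99, 'east': 5.0}
```

The design point the layering buys you: Bronze preserves the raw ingest for replay, Silver is the trustworthy typed layer, and Gold is shaped for a specific consumer (a BI dashboard or data mart).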
  • Founding Software Engineer / Protocol Engineer

The Crypto Recruiters (3.3 company rating)

    Data engineer job in Fremont, CA

We are actively searching for a Founding Protocol Engineer to join our team on a permanent basis. If you are someone who is impressed by what Hyperliquid has accomplished, then this role is for you. We are on a mission to build next-generation lending and debt protocols. We are open to both Senior-level and Architect-level candidates for this role. Your Rhythm: Drive the architecture, technical design, and implementation of our lending protocol. Collaborate closely with researchers to validate and test designs. Collaborate with auditors and security engineers to ensure the safety of the protocol. Participate in code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices. Your Vibe: 5+ years of professional software engineering experience. 3+ years of experience working with Solidity on the EVM in production environments, specifically focused on DeFi products. 2+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures. Experience building lending protocols in a smart contract language. Open to collaborating onsite a few days a week at our downtown SF office. Our Vibe: Relaxed work environment. 100% paid, top-of-the-line health care benefits. Full ownership, no micromanagement. Strong equity package. 401K. Unlimited vacation. An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
    $122k-169k yearly est. 1d ago
  • Robotic Software Engineer

    Insight Recruitment

    Data engineer job in Fremont, CA

Robotics Software Engineer (Generalist/Full-Stack) - Humanoid Robotics. Palo Alto, SF Bay Area (Full-time | Onsite). $180k-$200k + equity (flexible for exceptional candidates). We are recruiting for a company building next-generation humanoid robotic systems that combine advanced AI with cutting-edge hardware. Our team moves fast, prototypes aggressively, and puts real robots into the world. We're now hiring a Robotic Software Engineer to help shape our core software stack and accelerate the development of our embodied AI systems. What You'll Work On: As part of a small, high-impact engineering team, you will: Build and optimise robotics software in C++ and ROS2. Integrate perception, control, planning, and learning modules. Work hands-on with robots to bring up new hardware and run real-world experiments. Deploy reinforcement learning / imitation learning policies onto physical robots. Develop middleware, interfaces, and tooling that connect AI → hardware. Prototype behaviours across diverse robot types (arms, humanoids, mobile platforms, drones). This role directly supports both our AI and hardware teams and has significant ownership from day one.
Must-haves: Strong C++ development skills (multi-threading, performance, systems-level). Professional experience with ROS2. Hands-on robotics experience, ideally robot learning on physical hardware. Ability to work on real robots (debugging, integration, testing). Generalist mindset and comfort in a fast-paced startup environment. Nice-to-haves: Manipulation or kinematics (humanoids, arms, quadrupeds). Controls for mobile robots or drones. Sensor/actuator integration, drivers, or middleware experience. VR prototyping (Meta Quest or similar). Experience across different robot embodiments. Why Join Us: Build software that runs on real humanoid robots immediately. High ownership within a small, world-class engineering team. Competitive compensation + meaningful equity. Opportunity to influence architecture, roadmap, and product direction. Work at one of the most exciting intersections in tech: AI × robotics.
    $180k-200k yearly 1d ago
  • Full Stack Software Engineer (Python / React)

    Arrayo

    Data engineer job in Fremont, CA

We're seeking a Full Stack Software Engineer with strong backend development skills in Python and frontend expertise in React.js. You'll help design, implement, and scale full stack web applications that are secure, performant, and user-centric. Responsibilities: Architect, build, and maintain backend services using Python (FastAPI, Flask, Django). Design and implement dynamic and responsive frontends using React.js and/or Vue.js. Create and consume RESTful and GraphQL APIs. Build reusable components and libraries for frontend use. Collaborate across teams to gather requirements, define solutions, and ensure quality. Optimize performance and scalability of applications. Write unit, integration, and end-to-end tests across the stack. Participate in peer code reviews and provide mentorship where appropriate. Required Qualifications: 5+ years of experience in full stack development. M.S. degree in a relevant domain required. Proficiency with Python and one or more major web frameworks (e.g., FastAPI, Django). Advanced skills in React.js, including Hooks, Context, and state management libraries (e.g., Redux, Zustand). Experience with Vue.js or interest in working across multiple frontend frameworks. Familiarity with modern frontend tooling: Webpack, Vite, Babel, ESLint. Solid experience with HTML5, CSS3, SASS/SCSS, and responsive UI design. Strong understanding of RESTful services, API security, and performance optimization. Knowledge of relational databases (PostgreSQL, MySQL) and NoSQL options (MongoDB, Redis). Git and CI/CD best practices (GitHub Actions, CircleCI, GitLab CI). Strong communication skills and a collaborative approach to engineering. Preferred Qualifications: Familiarity with TypeScript. Experience with cloud platforms (AWS, GCP, or Azure). Experience with Docker, Kubernetes, or container orchestration. GraphQL and Apollo Client experience. Familiarity with microservice architecture. Experience working with real-time data (WebSockets, MQTT).
    $106k-150k yearly est. 5d ago
  • Java Software Engineer

    Signature It World Inc.

    Data engineer job in Fremont, CA

    We are seeking a skilled and motivated Java Developer to join our software engineering team. The ideal candidate will be instrumental in designing, developing, and maintaining high-performance, scalable Java applications that require complex data management across both relational (SQL) and non-relational (NoSQL) databases. Proven experience as a Java Developer, with 3-8 years of hands-on experience in software development. Strong proficiency in Core Java and object-oriented programming (OOP) principles. Extensive experience with Java frameworks, particularly the Spring ecosystem (Spring Boot, Spring MVC, Spring Data JPA). Proficiency in database technologies, including writing complex queries for SQL (e.g., MySQL, PostgreSQL) and working knowledge of NoSQL databases (e.g., MongoDB). Experience with building RESTful APIs and understanding microservices architecture. Familiarity with version control systems (Git) and build tools (Maven/Gradle). Knowledge of front-end technologies (HTML, CSS, JavaScript) is a plus. Familiarity with cloud platforms (AWS, Azure, or GCP) and containerization (Docker, Kubernetes) is preferred.
    $106k-150k yearly est. 4d ago

Learn more about data engineer jobs

How much does a data engineer earn in Stockton, CA?

The average data engineer in Stockton, CA earns between $93,000 and $182,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Stockton, CA

$130,000
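The $130,000 figure is consistent with the geometric mean of the quoted range endpoints. That this is the site's convention is our assumption, not something the page states, but the arithmetic checks out:

```python
import math

low, high = 93_000, 182_000        # quoted Stockton, CA range
# Geometric mean of the endpoints (assumed convention, not stated by the page).
estimate = math.sqrt(low * high)

print(round(estimate))             # 130100
print(round(estimate, -3))         # 130000.0 -- matches the quoted $130,000 average
```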

What are the biggest employers of Data Engineers in Stockton, CA?

The biggest employers of Data Engineers in Stockton, CA are:
  1. Aspire Chicago