Staff Data Scientist - Sales Analytics
Data engineer job in Fremont, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're looking for a Staff Data Scientist to drive Sales and Go-to-Market (GTM) analytics, applying advanced modeling and experimentation to accelerate revenue growth and optimize the full sales funnel.
About the Role
As the senior data scientist supporting Sales and GTM, you will combine statistical modeling, experimentation, and advanced analytics to inform strategy and guide decision-making across our revenue organization. Your work will help leadership understand pipeline health, predict outcomes, and identify the levers that unlock sustainable growth.
Key Responsibilities
Model the Business: Build forecasting and propensity models for pipeline generation, conversion rates, and revenue projections.
Optimize the Sales Funnel: Analyze lead scoring, opportunity progression, and deal velocity to recommend improvements in acquisition, qualification, and close rates.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of pricing, incentives, and campaign initiatives; a brief sketch follows this list.
Advanced Analytics for GTM: Apply machine learning and statistical techniques to segment accounts, predict churn/expansion, and identify high-value prospects.
Cross-Functional Partnership: Work closely with Sales, Marketing, RevOps, and Product to influence GTM strategy and ensure data-driven decisions.
Data Infrastructure Collaboration: Partner with Analytics Engineering to define data requirements, ensure data quality, and enable self-serve reporting.
Strategic Insights: Present findings to executive leadership, translating complex analyses into actionable recommendations.
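For context on the experimentation work referenced above, here is a minimal sketch of a two-sample conversion-rate readout using statsmodels; the counts and metric names are hypothetical placeholders, not data from this company.

# Minimal A/B readout for a sales-funnel experiment (illustrative only;
# the counts below are hypothetical placeholders).
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

conversions = [184, 233]    # closed-won deals: control, treatment
exposures = [2100, 2080]    # qualified opportunities per arm

stat, p_value = proportions_ztest(conversions, exposures)
ci_low, ci_high = proportion_confint(conversions[1], exposures[1])

print(f"z = {stat:.2f}, p = {p_value:.4f}")
print(f"treatment conversion 95% CI: [{ci_low:.3f}, {ci_high:.3f}]")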
About You
Experience: 6+ years in data science or advanced analytics roles, with significant time spent in B2B SaaS or developer tools environments.
Technical Depth: Expert in SQL and proficient in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Strong understanding of sales analytics, revenue operations, and product-led growth (PLG) motions.
Analytical Rigor: Skilled in experimentation design, causal inference, and building predictive models that influence GTM strategy.
Communication: Exceptional ability to tell a clear story with data and influence senior stakeholders across technical and business teams.
Business Impact: Proven record of driving measurable improvements in pipeline efficiency, conversion rates, or revenue outcomes.
Staff Data Scientist
Data engineer job in Fremont, CA
Staff Data Scientist | San Francisco | $250K-$300K + Equity
We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI.
In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.
What you'll do:
Drive deep-dive analyses on user behavior, product performance, and growth drivers
Design and interpret A/B tests to measure product impact at scale
Build scalable data models, pipelines, and dashboards for company-wide use
Partner with Product and Engineering to embed experimentation best practices
Evaluate ML models, ensuring business relevance, performance, and trade-off clarity
What we're looking for:
5+ years in data science or product analytics at scale (consumer or marketplace preferred)
Advanced SQL and Python skills, with strong foundations in statistics and experimental design
Proven record of designing, running, and analyzing large-scale experiments
Ability to analyze and reason about ML models (classification, recommendation, LLMs)
Strong communicator with a track record of influencing cross-functional teams
If you're excited by the sound of this challenge, apply today and we'll be in touch.
Guidewire DataHub/InfoCenter Engineer
Data engineer job in Stockton, CA
Hands-on experience with the Guidewire DataHub and InfoCenter platform.
Experience in production support, BAU, enhancements, and development.
Works with business teams to identify detailed analytical and operational reporting/extract requirements.
Collaborates with data analysts, architects, engineers, and business stakeholders to understand data requirements.
Able to create complex Microsoft SQL queries and ETL/SSIS packages.
Handles end-to-end data loads.
Qualifications
Experience with Snowflake and dbt (data build tool).
6-9 years of experience in P&C insurance on the Guidewire DataHub/InfoCenter platform.
Must have at least one DataHub/InfoCenter (DHIC) on-premises or cloud implementation.
Well versed in AWS services, including S3 storage and the Aurora database.
Experience with SQL Server and Oracle databases.
Able to create PL/SQL stored procedures.
Hands-on experience with Guidewire ClaimCenter/PolicyCenter/BillingCenter data models.
SAP BODS ETL design and administration experience.
Data warehousing experience, including analysis and development of dataflows and mappings using the required BODS transformations.
Hands-on experience writing data specifications.
Experience with DataHub and InfoCenter initial and delta loads.
Experience with the Guidewire DataHub and InfoCenter Commit and Rollback utility.
Experience extending entities and attributes in DataHub and InfoCenter.
Experienced in the property & casualty insurance industry.
About Aspire Systems
Aspire Systems is a $180+ million global technology services firm with over 4,500 employees worldwide, partnering with 275+ active customers. Founded in 1996, Aspire has grown steadily at a 19% CAGR since 2020.
Headquartered in Singapore, we operate across the US, UK, LATAM, Europe, the Middle East, India, and Asia Pacific regions, with strong nearshore delivery centers in Poland and Mexico. Aspire has been consistently recognized among India's 100 Best Companies to Work For for 12 consecutive years by the Great Place to Work Institute.
Who We Are
Aspire is built on deep expertise in Software Engineering, Digital Services, Testing, and Infrastructure & Application Support. We serve diverse industries including Independent Software Vendors, Retail, Banking & Financial Services, and Insurance. Our proven frameworks and accelerators enable us to create future-ready, scalable, and business-focused systems, helping customers across the globe embrace digital transformation at speed and scale.
What We Believe
At the heart of Aspire is our philosophy of "Attention. Always.": a commitment to investing care and focus in our customers, employees, and society.
Our Commitment to Diversity & Inclusion
At Aspire Systems, we foster a work culture that appreciates diversity and inclusiveness. We understand that our multigenerational workforce represents different regions, cultures, economic backgrounds, races, genders, ethnicities, education levels, personalities, and religions. We believe these differences make us stronger and are committed to building an inclusive workplace where everyone feels respected and valued.
Privacy Notice
Aspire Systems values your privacy. Candidate information collected through this recruitment process will be used solely for hiring purposes, handled securely, and retained only as long as necessary in compliance with applicable privacy laws.
Disclaimer
The above statements are not intended to be a complete statement of job content; rather, they serve as a guide to the essential functions performed by the employee in this role. The organization retains the discretion to add to or change the duties of the position at any time.
Staff Data Engineer
Data engineer job in Fremont, CA
🌎 San Francisco (Hybrid)
💼 Founding/Staff Data Engineer
💵 $200-300k base
Our client is an elite applied AI research and product lab building AI-native systems for finance, pushing frontier models into real production environments. Their work sits at the intersection of data, research, and high-stakes financial decision-making.
As the Founding Data Engineer, you will own the data platform that powers everything: models, experiments, and user-facing products relied on by demanding financial customers. You'll make foundational architectural decisions, work directly with researchers and product engineers, and help define how data is built, trusted, and scaled from day one.
What you'll do:
Design and build the core data platform, ingesting, transforming, and serving large-scale financial and alternative datasets.
Partner closely with researchers and ML engineers to ship production-grade data and feature pipelines that power cutting-edge models.
Establish data quality, observability, lineage, and reproducibility across both experimentation and production workloads.
Deploy and operate data services using Docker and Kubernetes in a modern cloud environment (AWS, GCP, or Azure).
Make foundational choices on tooling, architecture, and best practices that will define how data works across the company.
Continuously simplify and evolve systems, rewriting pipelines or infrastructure when it's the right long-term decision.
Ideal candidate:
Have owned or built high-performance data systems end-to-end, directly supporting production applications and ML models.
Are strongest in backend and data infrastructure, with enough frontend literacy to integrate cleanly with web products when needed.
Can design and evolve backend services and pipelines (Node.js or Python) to support new product features and research workflows.
Are an expert in at least one statically typed language, with a strong bias toward type safety, correctness, and maintainable systems.
Have deployed data workloads and services using Docker and Kubernetes on a major cloud provider.
Are comfortable making hard calls: simplifying, refactoring, or rebuilding legacy pipelines when quality and scalability demand it.
Use AI tools to accelerate your work, but rigorously review and validate AI-generated code, insisting on sound system design.
Thrive in a high-bar, high-ownership environment with other exceptional engineers.
Love deep technical problems in data infrastructure, distributed systems, and performance.
Nice to have:
Experience working with financial data (market, risk, portfolio, transactional, or alternative datasets).
Familiarity with ML infrastructure, such as feature stores, experiment tracking, or model serving systems.
Background in a high-growth startup or a foundational infrastructure role.
Compensation & setup:
Competitive salary and founder-level equity
Hybrid role based in San Francisco, with close collaboration and significant ownership
Small, elite team building core infrastructure with outsized impact
Data Scientist
Data engineer job in Pleasanton, CA
Key Responsibilities
Design and develop marketing-focused machine learning models, including:
Customer segmentation
Propensity, churn, and lifetime value (LTV) models (a brief sketch follows this responsibilities list)
Campaign response and uplift models
Attribution and marketing mix models (MMM)
Build and deploy NLP solutions for:
Customer sentiment analysis
Text classification and topic modeling
Social media, reviews, chat, and voice-of-customer analytics
Apply advanced statistical and ML techniques to solve real-world business problems.
Work with structured and unstructured data from multiple marketing channels (digital, CRM, social, email, web).
Translate business objectives into analytical frameworks and actionable insights.
Partner with stakeholders to define KPIs, success metrics, and experimentation strategies (A/B testing).
Optimize and productionize models using MLOps best practices.
Mentor junior data scientists and provide technical leadership.
Communicate complex findings clearly to technical and non-technical audiences.
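As a rough illustration of the propensity/churn modeling listed above, here is a hedged scikit-learn sketch; the file, feature names, and label are hypothetical stand-ins for real CRM and campaign attributes.

# Illustrative churn-propensity sketch (hypothetical columns and file).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")  # hypothetical extract from the warehouse
features = ["tenure_months", "email_opens_90d", "orders_90d", "support_tickets"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42, stratify=df["churned"]
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))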
Required Skills & Qualifications
7+ years of experience in Data Science, with a strong focus on marketing analytics.
Strong expertise in Machine Learning (supervised & unsupervised techniques).
Hands-on experience with NLP techniques, including:
Text preprocessing and feature extraction
Word embeddings (Word2Vec, GloVe, Transformers)
Large Language Models (LLMs) is a plus
Proficiency in Python (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch).
Experience with SQL and large-scale data processing.
Strong understanding of statistics, probability, and experimental design.
Experience working with cloud platforms (AWS, Azure, or GCP).
Ability to translate data insights into business impact.
Nice to Have
Experience with marketing automation or CRM platforms.
Knowledge of MLOps, model monitoring, and deployment pipelines.
Familiarity with GenAI/LLM-based NLP use cases for marketing.
Prior experience in consumer, e-commerce, or digital marketing domains.
EEO
Centraprise is an equal opportunity employer. Your application and candidacy will be considered without regard to race, color, sex, religion, creed, sexual orientation, gender identity, national origin, disability, genetic information, pregnancy, veteran status, or any other characteristic protected by federal, state, or local laws.
AI Data Engineer
Data engineer job in Fremont, CA
Member of Technical Staff - AI Data Engineer
San Francisco (In-Office)
$150K to $225K + Equity
A high-growth, AI-native startup coming out of stealth is hiring AI Data Engineers to build the systems that power production-grade AI. The company has recently signed a Series A term sheet and is scaling rapidly. This role is central to unblocking current bottlenecks across data engineering, context modeling, and agent performance.
Responsibilities:
• Build distributed, reliable data pipelines using Airflow, Temporal, and n8n
• Model SQL, vector, and NoSQL databases (Postgres, Qdrant, etc.); a brief Qdrant sketch follows this list
• Build API and function-based services in Python
• Develop custom automations (Playwright, Stagehand, Zapier)
• Work with AI researchers to define and expose context as services
• Identify gaps in data quality and drive changes to upstream processes
• Ship fast, iterate, and own outcomes end-to-end
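For a feel of the vector-database modeling mentioned above, here is a minimal sketch with the qdrant-client library; the collection name, vector size, and payload are assumptions, not this company's schema.

# Hypothetical sketch: loading and querying document vectors in Qdrant.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")  # assumed local instance

client.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=[0.1] * 384, payload={"source": "crm"})],
)
hits = client.search(collection_name="docs", query_vector=[0.1] * 384, limit=3)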
Required Experience:
• Strong background in data engineering
• Hands-on experience working with LLMs or LLM-powered applications
• Data modeling skills across SQL and vector databases
• Experience building distributed systems
• Experience with Airflow, Temporal, n8n, or similar workflow engines
• Python experience (API/services)
• Startup mindset and bias toward rapid execution
Nice To Have:
• Experience with stream processing (Flink)
• dbt or Clickhouse experience
• CDC pipelines
• Experience with context construction, RAG, or agent workflows
• Analytical tooling (Posthog)
What You Can Expect:
• High-intensity, in-office environment
• Fast decision-making and rapid shipping cycles
• Real ownership over architecture and outcomes
• Opportunity to work on AI systems operating at meaningful scale
• Competitive compensation package
• Meals provided plus full medical, dental, and vision benefits
If this sounds like you, please apply now.
Senior Data Engineer
Data engineer job in Fremont, CA
The Company:
A data services company based in the heart of San Francisco is looking for a Senior Data Engineer. They are a team of passionate engineers and data experts working on a variety of projects, primarily in the financial services sector, helping organizations build scalable, modern data platforms. This is a hands-on, full-time role with close collaboration alongside the CTO and senior engineers, offering strong influence over technical direction and delivery.
The Role:
This is an on-site position in downtown San Francisco, where you will work as part of a close-knit team, collaborating on projects in their brand-new office. You will work across end-to-end data projects, including:
Building and maintaining data pipelines and ETL processes.
Sourcing and integrating third-party APIs and datasets.
Batch and near-real-time processing (cloud agnostic).
Downstream analytics and reporting using tools like Sigma Computing and Omnium Analytics.
Collaborating with the CTO and engineering team to deliver client solutions.
Key Skills:
5+ years' data engineering experience
Strong Python, BigQuery, and cloud (GCP or similar)
Solid ETL and pipeline background
Comfortable with large-scale data
Nice to Have
Beam, Dataflow, Spark, or Hadoop
Tableau or Looker
ML/AI exposure
Kafka or Pub/Sub
Given the varied nature of the work, a broad range of technology experience is valued. You don't need to have experience with every tool listed above to be considered, so we encourage you to apply.
This role is 5 days a week on-site in downtown San Francisco. Looking to pay between $170,000-$220,000, with a bonus of 15-20%.
Benefits
Health, Dental & Vision covered
Unlimited PTO
401(k) with employer contribution
Commuter benefits.
Data Engineer
Data engineer job in Pleasanton, CA
Job Title: Data Engineer
The hiring manager prefers the candidate to be on-site in Pleasanton.
Proficiency in Spark, Python, and SQL is essential for this role. 10+ years of experience with relational databases such as Oracle, NoSQL databases including MongoDB and Cassandra, and big data technologies, particularly Databricks, is required. Strong knowledge of data modeling techniques is necessary for designing efficient and scalable data structures. Familiarity with APIs and web services, including REST and SOAP, is important for integrating various data sources and ensuring seamless data flow. This role involves leveraging these technical skills to build and maintain robust data pipelines and support advanced data analytics.
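As a hedged sketch of the stack described above, here is a PySpark job reading an Oracle table over JDBC and producing a daily aggregate; the connection details, table, and column names are placeholders.

# Illustrative PySpark sketch; URL, credentials, and schema are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("oracle-ingest").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCL")
    .option("dbtable", "SALES.ORDERS")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

daily = orders.groupBy(F.to_date("ORDER_TS").alias("order_date")).agg(
    F.count("*").alias("orders"), F.sum("AMOUNT").alias("revenue")
)
daily.write.mode("overwrite").parquet("/data/curated/daily_orders")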
SKILLS:
- Spark/Python/SQL
- Relational Database (Oracle) / NoSQL Database (MongoDB/ Cassandra) / Databricks
- Big Data technologies - Databricks preferred
- Data modelling techniques
- APIs and web services (REST/ SOAP)
If interested, please share the details below along with an updated resume:
Full Name:
Phone:
E-mail:
Rate:
Location:
Visa Status:
Availability:
SSN (Last 4 digits):
Date of Birth:
LinkedIn Profile:
Availability for the interview:
Availability for the project:
Data Modeler
Data engineer job in Sacramento, CA
We are seeking a senior, hands-on Data Analyst / Data Modeler with strong communication skills for a long-term hybrid assignment with our Sacramento, California-based client. This position requires you to be able to work on-site in Sacramento, California, on Mondays and Wednesdays each week. Candidates must currently live within 60 miles of Sacramento, CA.
Requirements:
Senior, hands-on data modeler with strong communication skills.
Expert-level command of the ER/Studio Data Architect modeling application
Strong ability to articulate data modeling principles and gather requirements from non-technical business stakeholders
Excellent presentation skills for different (business and technical) audiences, ranging from senior leadership to operational staff, with no supervision
Ability to translate business and functional requirements into technical requirements for technical team members.
Candidates must be able to demonstrate recent, direct, hands-on practical experience in the areas identified, with specific examples.
Qualifications:
Mandatory:
Minimum of ten (10) years demonstrable experience in the data management space, with at least 5 years specializing in database design and at least 5 years in data modeling.
Minimum of five (5) years of experience as a data analyst or in other quantitative analysis or related disciplines, such as a researcher or data engineer, supportive of key duties/responsibilities identified above.
Minimum of five (5) years of demonstrated experience with ER/Studio data modeling application
Minimum of five (5) years of relevant experience in relational data modeling and dimensional data modeling, statistical analysis, and machine learning, supportive of key duties/responsibilities identified above.
Excellent communication and collaboration skills to work effectively with stakeholders and team members.
At least 2 years of experience working on Star, Snowflake, and/or Hybrid schemas
Oracle/ODI/OCI/ADW experience required
Desired:
At least 2 years of experience working on Oracle Autonomous Data Warehouse (ADW), specifically installed in an OCI environment.
Expert-level Kimball Dimensional Data Modeling experience
Expert-level experience developing in Oracle SQL Developer or ER/Studio Data Architect for Oracle.
Ability to develop and perform Extract, Transform, and Load (ETL) activities using Oracle tools and PL/SQL, with at least 2 years of experience. Ability to provide technical leadership of an Oracle data warehouse team, including but not limited to ETL, requirements solicitation, DBA, data warehouse administration, and data analysis on a hands-on basis.
Senior Data Engineer
Data engineer job in Fremont, CA
We're hiring a Senior/Lead Data Engineer to join a fast-growing AI startup. The team comes from a billion-dollar AI company and has raised a $40M+ seed round.
You'll need to be comfortable transforming and moving data from legacy sources into a new 'group-level' data warehouse. You'll have a strong data modeling background.
Proven proficiency in modern data transformation tools, specifically dbt and/or SQLMesh.
Exceptional ability to apply systems thinking and complex problem-solving to ambiguous challenges. Experience within a high-growth startup environment is highly valued.
Deep, practical knowledge of the entire data lifecycle, from generation and governance through to advanced downstream applications (e.g., fueling AI/ML models, LLM consumption, and core product features).
Outstanding ability to communicate technical complexity clearly, synthesizing information into actionable frameworks for executive and cross-functional teams.
Data Engineer / Analytics Specialist
Data engineer job in Fremont, CA
Citizenship Requirement: U.S. Citizens Only
ITTConnect is seeking a Data Engineer / Analytics Specialist to work for one of our clients, a major technology consulting firm headquartered in Europe. They are experts in tailored technology consulting and services for banks, investment firms, and other financial-vertical clients.
Job location: San Francisco Bay area or NY City.
Work Model: Ability to come into the office as requested
Seniority: 10+ years of total experience
About the role:
The Data Engineer / Analytics Specialist will support analytics, product insights, and AI initiatives. You will build robust data pipelines, integrate data sources, and enhance the organization's analytical foundations.
Responsibilities:
Build and operate Snowflake-based analytics environments.
Develop ETL/ELT pipelines (DBT, Airflow, etc.).
Integrate APIs, external data sources, and streaming inputs.
Perform query optimization, basic data modeling, and analytics support.
Enable downstream GenAI and analytics use cases.
Requirements:
10+ years of overall technology experience
3+ years hands-on AWS experience required
Strong SQL and Snowflake experience.
Hands-on pipeline engineering with DBT, Airflow, or similar.
Experience with API integrations and modern data architectures.
Senior Data Engineer - Spark, Airflow
Data engineer job in Fremont, CA
We are seeking an experienced Data Engineer to design and optimize scalable data pipelines that drive our global data and analytics initiatives.
In this role, you will leverage technologies such as Apache Spark, Airflow, and Python to build high-performance data processing systems and ensure data quality, reliability, and lineage across Mastercard's data ecosystem.
The ideal candidate combines strong technical expertise with hands-on experience in distributed data systems, workflow automation, and performance tuning to deliver impactful, data-driven solutions at enterprise scale.
Responsibilities:
Design and optimize Spark-based ETL pipelines for large-scale data processing.
Build and manage Airflow DAGs for scheduling, orchestration, and checkpointing (a sketch follows this list).
Implement partitioning and shuffling strategies to improve Spark performance.
Ensure data lineage, quality, and traceability across systems.
Develop Python scripts for data transformation, aggregation, and validation.
Execute and tune Spark jobs using spark-submit.
Perform DataFrame joins and aggregations for analytical insights.
Automate multi-step processes through shell scripting and variable management.
Collaborate with data, DevOps, and analytics teams to deliver scalable data solutions.
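To make the DAG-plus-spark-submit workflow concrete, here is a hedged sketch using the Airflow Spark provider; the DAG id, schedule, application path, connection, and shuffle setting are all assumptions.

# Hedged Airflow sketch; paths, ids, and config values are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_txn_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="transform_transactions",
        application="/jobs/transform_transactions.py",
        conn_id="spark_default",
        conf={"spark.sql.shuffle.partitions": "400"},  # example partition tuning
    )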
Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or related field (or equivalent experience).
At least 7 years of experience in data engineering or big data development.
Strong expertise in Apache Spark architecture, optimization, and job configuration.
Proven experience authoring, scheduling, checkpointing, and monitoring Airflow DAGs.
Skilled in data shuffling, partitioning strategies, and performance tuning in distributed systems.
Expertise in Python programming including data structures and algorithmic problem-solving.
Hands-on with Spark DataFrames and PySpark transformations (joins, aggregations, filters).
Proficient in shell scripting, including managing and passing variables between scripts.
Experienced with spark-submit for deployment and tuning.
Solid understanding of ETL design, workflow automation, and distributed data systems.
Excellent debugging and problem-solving skills in large-scale environments.
Experience with AWS Glue, EMR, Databricks, or similar Spark platforms.
Knowledge of data lineage and data quality frameworks like Apache Atlas.
Familiarity with CI/CD pipelines, Docker/Kubernetes, and data governance tools.
Data Engineer
Data engineer job in Fremont, CA
Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community.
Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.
Core Responsibilities:
Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment (a short extraction sketch follows this list)
Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
Partner with Finance, Product and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments
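Since the role touches Stripe Billing data, here is a hedged sketch of a subscription extraction with the stripe Python library; the API key is a placeholder, and the fields assume simple licensed (non-metered) pricing with one item per subscription.

# Hypothetical Stripe extraction sketch for the analytics warehouse.
import stripe

stripe.api_key = "sk_test_..."  # placeholder credential

rows = []
for sub in stripe.Subscription.list(status="active", limit=100).auto_paging_iter():
    item = sub["items"]["data"][0]  # assumes one item per subscription
    rows.append({
        "subscription_id": sub["id"],
        "customer_id": sub["customer"],
        "price_id": item["price"]["id"],
        "mrr_cents": item["price"]["unit_amount"] * item["quantity"],  # unit_amount is None for metered prices
        "current_period_end": sub["current_period_end"],
    })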
Required Qualifications:
3-5+ years experience in data engineering, analytics engineering, or similar role with demonstrated ability to work with large-scale datasets
Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
Proficiency in at least one programming language (Python, R) for data manipulation and analysis
Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
Ability to communicate technical concepts clearly to non-technical stakeholders
Sr Data Platform Engineer
Data engineer job in Elk Grove, CA
Hybrid role, 3x a week in office in Elk Grove, CA; no remote option
This is a direct hire opportunity.
We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
Implement ETL workflows with Spark, Delta Lake, and cloud-native tools (see the sketch after this list).
Support big data platforms (Databricks, Snowflake, GCP) in production.
Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
Ensure governance, security, and compliance across data systems.
Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
Collaborate cross‑functionally to translate business needs into technical solutions.
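As a rough illustration of the Spark-plus-Delta work referenced above, here is a hedged sketch; the paths and columns are assumptions, and writing Delta requires a Delta-enabled runtime such as Databricks.

# Illustrative Spark/Delta ETL step; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("/landing/events/")  # semi-structured input
clean = (
    raw.dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_id").isNotNull())  # basic quality rule
)

(
    clean.write.format("delta")  # requires a Delta-enabled runtime (e.g., Databricks)
    .mode("append")
    .partitionBy("event_date")
    .save("/lake/silver/events")
)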
Qualifications:
7+ years in data engineering with production pipeline experience.
Expertise in Spark ecosystem, Databricks, Snowflake, GCP.
Strong skills in PySpark, Python, SQL.
Experience with RAG systems, semantic search, and LLM integration.
Familiarity with Kafka, Pub/Sub, vector databases.
Proven ability to optimize ETL jobs and troubleshoot production issues.
Agile team experience and excellent communication skills.
Certifications in Databricks, Snowflake, GCP, or Azure.
Exposure to Airflow, BI tools (Power BI, Looker Studio).
AWS Data Architect
Data engineer job in Fremont, CA
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management); a brief sketch follows below.
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
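Here is a hedged sketch of the bronze/silver/gold layering described above; it assumes a Databricks notebook (where spark is predefined) and Unity Catalog-style table names, all of which are illustrative.

# Illustrative medallion flow; catalog, schema, and columns are assumptions.
from pyspark.sql import functions as F

bronze = spark.read.table("main.bronze.pos_sales")  # raw ingested data

silver = (
    bronze.dropDuplicates(["txn_id"])
    .withColumn("sale_date", F.to_date("sale_ts"))
    .filter("amount >= 0")  # basic quality rule for the cleansed layer
)
silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.pos_sales")

gold = silver.groupBy("store_id", "sale_date").agg(F.sum("amount").alias("net_sales"))
gold.write.format("delta").mode("overwrite").saveAsTable("main.gold.daily_store_sales")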
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources
In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Founding Software Engineer / Protocol Engineer
Data engineer job in Fremont, CA
We are actively searching for a Founding Protocol Engineer to join our team on a permanent basis. If you are impressed by what Hyperliquid has accomplished, then this role is for you. We are on a mission to build next-generation lending and debt protocols. We are open to both senior-level and architect-level candidates for this role.
Your Rhythm:
Drive the architecture, technical design, and implementation of our lending protocol.
Collaborate closely with researchers to validate and test designs
Collaborate with auditors and security engineers to ensure safety of the protocol
Participate in code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices
Your Vibe:
5+ years of professional software engineering experience
3+ years of experience working with Solidity on the EVM in production environments, specifically focused on DeFi products
2+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures
Experience building lending protocols in a smart contract language
Open to collaborating onsite a few days a week at our downtown SF office
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line healthcare benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Software Engineer
Data engineer job in Fremont, CA
About Us
Every patient deserves fast access to life-changing medications. We're building the AI operating system that makes it happen, automating the phone calls, texts, data entry, and complex workflows that slow pharmacies down.
We're backed by Gradient Ventures (Google's AI fund) with more demand than we can handle.
Why we need you
We're building AI-powered systems that handle real patient communication and pharmacy workflows. As an engineer, you'll work directly with our founding team to ship features end-to-end, from LLM-powered voice and text systems to integrations with pharmacy platforms. This is a hands-on, full-stack role where you'll own technical decisions that shape how our product scales.
What you'll do
Build and ship AI-powered voice, text, and workflow automation systems
Own features end-to-end: design, implementation, deployment
Create integrations with pharmacy and patient management systems
Solve hard problems: LLM optimization, reliable systems on unreliable APIs, abstractions that speed up customer implementations
Establish engineering patterns as we scale
Must-have experience
4+ years of software engineering experience
Solid programming fundamentals; Python experience a plus
Hands-on experience with AI tools, APIs, or prompt engineering
Track record of owning projects, not just implementing specs
Nice-to-haves
Experience at early-stage startups
Background building LLM-powered applications
Familiarity with voice/telephony systems
Event-driven architectures
Location: Hybrid in Palo Alto (Mon/Wed/Thurs)
Compensation: $175k - $250K + generous equity, full benefits, flexible PTO
Python Backend Engineer - 3D / Visualization / API / Software (On-site)
Data engineer job in Fremont, CA
A pioneering and well-funded AI company is seeking a talented Python Backend Engineer to build the core infrastructure for its revolutionary autonomous systems. This is a unique opportunity to join an innovative team at the forefront of engineering and artificial intelligence, creating a new category of software that will redefine how complex products in sectors like aerospace, automotive, and advanced manufacturing are designed and developed.
Why Join?
Build the Future of Engineering: This isn't just another backend role. Your work will directly shape how next-generation rockets, cars, and aircraft are designed, fundamentally changing the engineering landscape.
Solve Unprecedented Technical Puzzles: Tackle unique challenges in building the infrastructure for autonomous AI agents, including simulation orchestration, multi-agent coordination, and scalable model serving.
Shape a Foundational Platform: As a critical member of a pioneering team, you will have a significant impact on the technical direction and core architecture of an entirely new category of software.
Join a High-Impact Team: Work in a collaborative, fast-paced environment where your expertise is valued, and you have end-to-end ownership of critical systems.
Compensation & Location: Base salary of up to $210,000 + equity + benefits, while working on-site with the team in a modern office in downtown San Francisco.
The Role
As a Python Backend Engineer, you will be instrumental in constructing the infrastructure that underpins these autonomous engineering agents. Your responsibilities will span model serving, simulation orchestration, multi-agent coordination, and the development of robust, developer-facing APIs. This position is critical for delivering the fast, reliable, and scalable systems that professional engineers will trust and depend on in high-stakes production environments.
You will:
Own and build the core backend infrastructure for the autonomous AI agents, focusing on scalability, model serving, and multi-agent orchestration.
Design and maintain robust APIs while integrating essential third-party tools like CAD software and simulation backends into the core platform.
Develop backend services to process and serve complex 3D visualizations from simulation and geometric data (a short sketch follows this list).
Collaborate across ML, frontend, and simulation teams to shape the product and engage directly with early customers to drive infrastructure needs.
Make foundational architectural decisions that will define the technical future and scalability of the entire platform.
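For a sense of the 3D-serving work referenced above, here is a hedged PyVista sketch converting simulation output to a web-friendly glTF asset; the input file and field names are placeholders, not this company's data.

# Hypothetical PyVista sketch; file and array names are placeholders.
import pyvista as pv

mesh = pv.read("stress_field.vtk")            # simulation result
warped = mesh.warp_by_scalar("displacement")  # assumes this point array exists

plotter = pv.Plotter(off_screen=True)
plotter.add_mesh(warped, scalars="von_mises", cmap="viridis")
plotter.export_gltf("stress_field.gltf")      # asset a web viewer can load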
The Essential Requirements
Strong backend software engineering experience, with a primary focus on Python.
Proven experience in designing, building, and maintaining production-level APIs (FastAPI preferred but Flask and Django also considered).
Experience with 3D visualization libraries or tools such as PyVista, ParaView, or VTK.
Excellent systems-thinking skills and the ability to reason about the interactions between compute, data, and models.
Experience working in fast-paced environments where end-to-end ownership and proactivity are essential.
Exceptional communication and collaboration abilities.
What Will Make You Stand Out
Experience integrating with scientific or engineering software (such as CAD, FEA, or CFD tools).
Exposure to agent frameworks, workflow orchestration engines, or distributed systems.
Familiarity with model serving frameworks (e.g., TorchServe, Triton) or simulation backends.
Previous experience building developer-focused tools or working in high-trust, customer-facing technical roles.
If you are interested in this role, please apply with your resume through this site.
Disclaimer
Attis Global Ltd is an equal opportunities employer. No terminology in this advert is intended to discriminate on any of the grounds protected by law, and all qualified applicants will receive consideration for employment without regard to age, sex, race, national origin, religion or belief, disability, pregnancy and maternity, marital status, political affiliation, socio-economic status, sexual orientation, gender, gender identity and expression, and/or gender reassignment. M/F/D/V. We operate as a staffing agency and employment business. More information can be found at attisglobal.com.
Full Stack Software Engineer (Python / React)
Data engineer job in Fremont, CA
We're seeking a Full Stack Software Engineer with strong backend development skills in Python and frontend expertise in React.js. You'll help design, implement, and scale full stack web applications that are secure, performant, and user-centric.
Responsibilities
Architect, build, and maintain backend services using Python (FastAPI, Flask, Django); a minimal sketch follows this list
Design and implement dynamic and responsive frontends using React.js and/or Vue.js
Create and consume RESTful and GraphQL APIs
Build reusable components and libraries for frontend use
Collaborate across teams to gather requirements, define solutions, and ensure quality
Optimize performance and scalability of applications
Write unit, integration, and end-to-end tests across the stack
Participate in peer code reviews and provide mentorship where appropriate
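Following the FastAPI option named in the first responsibility, here is a minimal hedged sketch; the route and resource model are hypothetical, not an actual product endpoint.

# Minimal FastAPI sketch; the route and model are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    quantity: int = 1

@app.post("/items")
async def create_item(item: Item) -> dict:
    # Persistence is omitted; a real service would write to a database here.
    return {"status": "created", "item": item.model_dump()}  # pydantic v2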
Required Qualifications
5+ years of experience in full stack development
M.S. degree in a relevant domain required
Proficiency with Python and one or more major web frameworks (e.g., FastAPI, Django)
Advanced skills in React.js, including Hooks, Context, and state management libraries (e.g., Redux, Zustand)
Experience with Vue.js or interest in working across multiple frontend frameworks
Familiarity with modern frontend tooling: Webpack, Vite, Babel, ESLint
Solid experience with HTML5, CSS3, SASS/SCSS, and responsive UI design
Strong understanding of RESTful services, API security, and performance optimization
Knowledge of relational databases (PostgreSQL, MySQL) and NoSQL options (MongoDB, Redis)
Git and CI/CD best practices (GitHub Actions, CircleCI, GitLab CI)
Strong communication skills and a collaborative approach to engineering
Preferred Qualifications
Familiarity with TypeScript
Experience with cloud platforms (AWS, GCP, or Azure)
Experience with Docker, Kubernetes, or container orchestration
GraphQL and Apollo Client experience
Familiarity with microservice architecture
Experience working with real-time data (WebSockets, MQTT)
Software Engineer
Data engineer job in Hayward, CA
Mission and Impact:
VIVIO Health, a Public Benefit Corporation, is revolutionizing pharmacy benefits management through data and technology. Our foundational principle - "The Right Drug for the Right Person at the Right Price" - drives everything we do. Since 2016, our evidence-based approach has delivered superior health outcomes while reducing costs for self-insured employers and health plans. By ensuring each patient receives the most appropriate medication for their specific condition at a fair market price, we're replacing the obsolete PBM Model with innovative solutions that work better for everyone.
Why Join VIVIO?
Innovation: Challenge the status quo and shape healthcare's future
Impact: Directly influence patient care and help change healthcare delivery
Collaboration: Work with passionate teammates dedicated to making a difference
Culture: Enjoy autonomy and reliability in a micromanagement-free environment
Growth: Expand your opportunities as we expand our business
Job Description
Position Overview
We are seeking an exceptional developer with robust Python skills to join our team. You will play a crucial role in building complex business operations logic. You should have a proven track record of building high-quality software, solving complex problems, and thriving in collaborative environments. Experience in regulated cloud environments like HIPAA or PCI is a plus. We expect a self-motivated individual who thrives in a collaborative environment and shares our commitment to enhancing the cost and quality of healthcare. If you're ready to make an impact, we want to hear from you!
Location: Hayward, CA. This is a hybrid role with a minimum of 3 in-office days per week.
Technical Stack:
Languages: Python, PHP
Databases: MySQL
Infrastructure: AWS or other cloud experience; CI/CD
Core Responsibilities:
Design and develop scalable services and core libraries.
Develop batch processing jobs for data imports, reporting, and external integrations.
Build and maintain transaction processing systems with complex business rules.
Integrate third-party APIs and normalize data across multiple healthcare providers.
Implement HIPAA-compliant data handling, logging, and audit systems
Write comprehensive tests with proper mocking and maintain CI/CD pipelines.
Foster best practices in a lean startup setting through code reviews.
Promote knowledge sharing to build a collaborative culture.
Optimize architectures and designs through deep understanding of business processes
Ensure operational excellence through monitoring, documentation, and deployment automation.
Qualifications
Required Qualifications:
5+ years of development experience with production systems
BS or advanced degree in an engineering discipline or equivalent experience
SQL database design and optimization
Test-driven development and mocking strategies
Experience with data processing
Preferred Qualifications:
REST API design and integration experience
FastAPI or similar framework experience
CRM customization experience
ETL pipelines and Batch processing systems experience
Job orchestration frameworks experience
File-based and distributed storage systems
Healthcare/pharmacy technology background
Strong understanding of building software in regulated environments & security standards such as PCI DSS, ISO 27001, HIPAA, and NIST.
Other expectations: Hybrid work arrangement with work from office 3 days a week.
Additional Information
Compensation and Benefits:
Base Salary: $120K-$140K/year
Bonus Eligible
Health benefits, including Medical, Pharmacy, Dental, Vision, and Life insurance
Stock Options
401K and company match
PTO
Opportunity to work for a growing and innovative company.
Dynamic and collaborative work environment.
The chance to make a real impact with a Public Benefit Corporation.
VIVIO Health is an Equal Opportunity Employer. All information will be kept confidential according to EEO guidelines.
Please be advised that job opportunities will only be extended after a candidate submits a completed job application and goes through our interview process, including 1:1 and/or group interviews via phone, video conferencing, and/or in-person. All legitimate correspondence from a VIVIO employee will come from our Smart Recruiter Applicant Tracking System "@smartrecruiter.com" or "@viviohealth.com" email accounts.