Staff Data Scientist
Data engineer job in Santa Rosa, CA
Staff Data Scientist | San Francisco | $250K-$300K + Equity
We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI.
In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.
What you'll do:
Drive deep-dive analyses on user behavior, product performance, and growth drivers
Design and interpret A/B tests to measure product impact at scale
Build scalable data models, pipelines, and dashboards for company-wide use
Partner with Product and Engineering to embed experimentation best practices
Evaluate ML models, ensuring business relevance, performance, and trade-off clarity
What we're looking for:
5+ years in data science or product analytics at scale (consumer or marketplace preferred)
Advanced SQL and Python skills, with strong foundations in statistics and experimental design
Proven record of designing, running, and analyzing large-scale experiments
Ability to analyze and reason about ML models (classification, recommendation, LLMs)
Strong communicator with a track record of influencing cross-functional teams
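As a quick illustration of the experimental-design foundations this role asks for, here is a minimal per-arm sample-size calculation for a two-proportion A/B test using only the Python standard library (the baseline rate and effect size below are made-up numbers, and real experimentation platforms would also handle multiple testing and sequential peeking):

```python
from statistics import NormalDist

def sample_size_per_arm(p_base, mde, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-proportion z-test.

    p_base: baseline conversion rate
    mde:    absolute minimum detectable effect (e.g., 0.01 for +1 point)
    Uses the standard normal-approximation formula.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    # Sum of Bernoulli variances under baseline and treated rates
    variance = p_base * (1 - p_base) + (p_base + mde) * (1 - p_base - mde)
    return int((z_alpha + z_beta) ** 2 * variance / mde ** 2) + 1

# Detecting a lift from 10% to 11% at 80% power needs roughly 15k users per arm
n = sample_size_per_arm(0.10, 0.01)
```

Larger minimum detectable effects need far fewer samples, which is why experiment sizing is usually negotiated with product stakeholders up front.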
If you're excited by the sound of this challenge, apply today and we'll be in touch.
Data Scientist
Key Responsibilities
Design and productionize models for opportunity scanning, anomaly detection, and significant change detection across CRM, streaming, ecommerce, and social data.
Define and tune alerting logic (thresholds, SLOs, precision/recall) to minimize noise while surfacing high-value marketing actions.
Partner with marketing, product, and data engineering to operationalize insights into campaigns, playbooks, and automated workflows, with clear monitoring and experimentation.
Required Qualifications
Strong proficiency in Python (pandas, NumPy, scikit-learn; plus experience with PySpark or similar for large-scale data) and SQL on modern warehouses (e.g., BigQuery, Snowflake, Redshift).
Hands-on experience with time-series modeling and anomaly / changepoint / significant-movement detection (e.g., STL decomposition, EWMA/CUSUM, Bayesian/Prophet-style models, isolation forests, robust statistics).
Experience building and deploying production ML pipelines (batch and/or streaming), including feature engineering, model training, CI/CD, and monitoring for performance and data drift.
Solid background in statistics and experimentation: hypothesis testing, power analysis, A/B testing frameworks, uplift/propensity modeling, and basic causal inference techniques.
Familiarity with cloud platforms (GCP/AWS/Azure), orchestration tools (e.g., Airflow/Prefect), and dashboarding/visualization tools to expose alerts and model outputs to business users.
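As a minimal sketch of the EWMA-style detection named in the qualifications (the smoothing factor and sigma threshold here are illustrative, not tuned against any precision/recall target as the posting describes):

```python
import numpy as np

def ewma_anomalies(series, alpha=0.3, k=3.0):
    """Flag points whose deviation from an EWMA exceeds k estimated sigmas.

    Tracks exponentially weighted estimates of both the mean and the
    residual variance, then flags points outside the k-sigma band.
    """
    series = np.asarray(series, dtype=float)
    mean, var = series[0], 0.0
    flags = []
    for x in series:
        resid = x - mean
        sigma = np.sqrt(var)
        flags.append(sigma > 0 and abs(resid) > k * sigma)
        # Update the EWMA of the mean and of the squared residual
        mean = alpha * x + (1 - alpha) * mean
        var = alpha * resid ** 2 + (1 - alpha) * var
    return np.array(flags)

# A steady oscillating signal followed by a spike: only the spike is flagged
flags = ewma_anomalies([10, 11] * 10 + [30])
```

In production, alpha and k would be fit per metric against labeled incidents to hit the noise/recall trade-off the alerting logic requires.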
AI Data Engineer
Member of Technical Staff - AI Data Engineer
San Francisco (In-Office)
$150K to $225K + Equity
A high-growth, AI-native startup coming out of stealth is hiring AI Data Engineers to build the systems that power production-grade AI. The company has recently signed a Series A term sheet and is scaling rapidly. This role is central to unblocking current bottlenecks across data engineering, context modeling, and agent performance.
Responsibilities:
• Build distributed, reliable data pipelines using Airflow, Temporal, and n8n
• Model SQL, vector, and NoSQL databases (Postgres, Qdrant, etc.)
• Build API and function-based services in Python
• Develop custom automations (Playwright, Stagehand, Zapier)
• Work with AI researchers to define and expose context as services
• Identify gaps in data quality and drive changes to upstream processes
• Ship fast, iterate, and own outcomes end-to-end
Required Experience:
• Strong background in data engineering
• Hands-on experience working with LLMs or LLM-powered applications
• Data modeling skills across SQL and vector databases
• Experience building distributed systems
• Experience with Airflow, Temporal, n8n, or similar workflow engines
• Python experience (API/services)
• Startup mindset and bias toward rapid execution
Nice To Have:
• Experience with stream processing (Flink)
• dbt or Clickhouse experience
• CDC pipelines
• Experience with context construction, RAG, or agent workflows
• Analytical tooling (Posthog)
What You Can Expect:
• High-intensity, in-office environment
• Fast decision-making and rapid shipping cycles
• Real ownership over architecture and outcomes
• Opportunity to work on AI systems operating at meaningful scale
• Competitive compensation package
• Meals provided plus full medical, dental, and vision benefits
If this sounds like you, please apply now.
Senior Data Engineer
We're hiring a Senior/Lead Data Engineer to join a fast-growing AI startup. The team comes from a billion-dollar AI company and has raised a $40M+ seed round.
You'll need to be comfortable transforming and moving data from legacy sources into a new 'group-level' data warehouse, and you'll have a strong data modeling background.
Proven proficiency in modern data transformation tools, specifically dbt and/or SQLMesh.
Exceptional ability to apply systems thinking and complex problem-solving to ambiguous challenges. Experience within a high-growth startup environment is highly valued.
Deep, practical knowledge of the entire data lifecycle, from generation and governance through to advanced downstream applications (e.g., fueling AI/ML models, LLM consumption, and core product features).
Outstanding ability to communicate technical complexity clearly, synthesizing information into actionable frameworks for executive and cross-functional teams.
Data Engineer
Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community.
Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.
Core Responsibilities:
Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
Partner with Finance, Product and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments
Required Qualifications:
3-5+ years experience in data engineering, analytics engineering, or similar role with demonstrated ability to work with large-scale datasets
Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
Proficiency in at least one programming language (Python, R) for data manipulation and analysis
Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
Ability to communicate technical concepts clearly to non-technical stakeholders
Data Engineer / Analytics Specialist
Citizenship Requirement: U.S. Citizens Only
ITTConnect is seeking a Data Engineer / Analytics Specialist to work for one of our clients, a major technology consulting firm headquartered in Europe. They are experts in tailored technology consulting and services for banks, investment firms, and other clients in the financial vertical.
Job location: San Francisco Bay area or NY City.
Work Model: Ability to come into the office as requested
Seniority: 10+ years of total experience
About the role:
The Data Engineer / Analytics Specialist will support analytics, product insights, and AI initiatives. You will build robust data pipelines, integrate data sources, and enhance the organization's analytical foundations.
Responsibilities:
Build and operate Snowflake-based analytics environments.
Develop ETL/ELT pipelines (dbt, Airflow, etc.).
Integrate APIs, external data sources, and streaming inputs.
Perform query optimization, basic data modeling, and analytics support.
Enable downstream GenAI and analytics use cases.
Requirements:
10+ years of overall technology experience
3+ years hands-on AWS experience required
Strong SQL and Snowflake experience.
Hands-on pipeline engineering with dbt, Airflow, or similar.
Experience with API integrations and modern data architectures.
Senior Data Engineer
If you're hands on with modern data platforms, cloud tech, and big data tools and you like building solutions that are secure, repeatable, and fast, this role is for you.
As a Senior Data Engineer, you will design, build, and maintain scalable data pipelines that transform raw information into actionable insights. The ideal candidate will have strong experience across modern data platforms, cloud environments, and big data technologies, with a focus on building secure, repeatable, and high-performing solutions.
Responsibilities:
Design, develop, and maintain secure, scalable data pipelines to ingest, transform, and deliver curated data into the Common Data Platform (CDP).
Participate in Agile rituals and contribute to delivery within the Scaled Agile Framework (SAFe).
Ensure quality and reliability of data products through automation, monitoring, and proactive issue resolution.
Deploy alerting and auto-remediation for pipelines and data stores to maximize system availability.
Apply a security-first and automation-driven approach to all data engineering practices.
Collaborate with cross-functional teams (data scientists, analysts, product managers, and business stakeholders) to align infrastructure with evolving data needs.
Stay current on industry trends and emerging tools, recommending improvements to strengthen efficiency and scalability.
Qualifications:
Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience).
At least 3 years of experience with Python and PySpark, including Jupyter notebooks and unit testing.
At least 2 years of experience with Databricks, Collibra, and Starburst.
Proven work with relational and NoSQL databases, including STAR and dimensional modeling approaches.
Hands-on experience with modern data stacks: object stores (S3), Spark, Airflow, lakehouse architectures, and cloud warehouses (Snowflake, Redshift).
Strong background in ETL and big data engineering (on-prem and cloud).
Experience working within enterprise cloud platforms (CFS2, Cloud Foundational Services 2/EDS) for governance and compliance.
Experience building end-to-end pipelines for structured, semi-structured, and unstructured data using Spark.
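The STAR/dimensional-modeling pattern in the qualifications reduces to a fact table joined to conformed dimensions. A toy in-memory sqlite3 example (schema, tables, and values invented purely for illustration):

```python
import sqlite3

# Toy STAR schema: one fact table keyed to two dimension tables
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_store (store_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE dim_date  (date_id INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_sales (store_id INTEGER, date_id INTEGER, amount REAL);
INSERT INTO dim_store VALUES (1, 'West'), (2, 'East');
INSERT INTO dim_date  VALUES (10, '2024-01'), (11, '2024-02');
INSERT INTO fact_sales VALUES (1, 10, 100.0), (1, 11, 50.0), (2, 10, 75.0);
""")

# Typical star-schema query: aggregate facts by dimension attributes
rows = con.execute("""
    SELECT s.region, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_store s ON f.store_id = s.store_id
    JOIN dim_date  d ON f.date_id  = d.date_id
    GROUP BY s.region, d.month
    ORDER BY s.region, d.month
""").fetchall()
```

The same shape scales from sqlite to Snowflake or Redshift; the modeling discipline (surrogate keys, conformed dimensions) is what the role is really asking for.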
AWS Data Architect
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is both scalable and performant, and create automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
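A toy pandas version of the Bronze (raw) → Silver (cleansed) → Gold (curated) flow described above; in the posting's stack these would be Delta tables on Databricks, and the columns and quality rules here are invented for illustration:

```python
import pandas as pd

# Bronze: raw ingested records, kept as-is (duplicates, nulls and all)
bronze = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "store": ["SF", "SF", "NY", None],
    "amount": [10.0, 10.0, 25.0, 5.0],
})

# Silver: cleansed layer - deduplicate on the key, enforce not-null store
silver = bronze.drop_duplicates("order_id").dropna(subset=["store"])

# Gold: curated business-level aggregate for downstream consumption
gold = silver.groupby("store", as_index=False)["amount"].sum()
```

Each layer is persisted separately so that quality rules can be re-run against raw history and Gold consumers never see uncleansed data.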
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, and cloud-native tools.
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for:
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for:
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience building CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources
In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Founding Software Engineer - AI/LLM/Multimodal
Our client is building the world's fastest-growing marketplace for elite creative talent. We connect award-winning artists, directors, musicians, animators, and designers with frontier AI labs developing the next generation of multimodal systems, including models like Sora, Suno, Runway, and Grok Imagine.
This role pays up to $300k base salary with 2% equity on top.
Must have AI/LLM/Multimodal experience.
Fully onsite role.
We are a small, high-talent-density team from Stanford and Mercor, backed by:
The Co-Founder of xAI
The Founder of Periodic Labs
Researchers from OpenAI's Sora team
DeepMind alignment & evaluation leads
Senior researchers from Salesforce AI, Meta, Google, and Anthropic
Role Overview
As a Founding Engineer, you will own core product surfaces, architecture, and infrastructure for the platform. You'll help shape engineering culture, drive rapid product iteration, and work directly with founders to build the marketplace and data systems powering the next wave of multimodal AI innovation.
This role is ideal for someone who thrives in high-agency environments, ships quickly, and enjoys solving end-to-end problems across the stack.
What You Will Build
Marketplace & Talent Systems
Onboarding flows for elite creative talent
AI-assisted interviews and evaluations
Self-serve matching tools between talent and AI labs
Payment, billing, and workflow automation
Data & Infrastructure Products
Tools for frontier AI labs to support training and evaluation
Internal platforms for data quality, annotation, and performance insights
Product & Engineering Leadership
Shape engineering culture, processes, and best practices
Ship fast across greenfield projects with high ownership
Influence product direction through tight feedback cycles with founders and users
What We're Looking For
Strong proficiency with Python, React, and TypeScript
Experience with AWS, Docker, CI/CD, and modern infrastructure tooling
Ability to operate independently and handle ambiguity in a fast-moving environment
Interest in gamified platforms, creative tools, or marketplace dynamics
Bias for action, strong communication skills, and a builder mentality
Software Engineer
Software Engineer - AI Infra Startup - San Francisco, CA
A small, deeply technical AI infra startup in San Francisco is building the platform that next-gen AI apps will run on - think custom compilers, CUDA kernels, and distributed orchestration - and is looking for a Software Engineer to join the team.
What will I be doing?
As a Software Engineer you'll help architect and build the infrastructure powering the next generation of AI systems. You'll work across the stack on:
High-performance, distributed systems that support real-time AI workloads
Kubernetes orchestration, infra tooling, and automation pipelines
Low-level runtime components like custom compilers and CUDA kernels
Scalable, reliable backend services
Collaborating directly with the founding team on technical strategy and core architecture
What are we looking for?
Strong background in systems programming, backend, or infra engineering
Experience with distributed systems, container orchestration, and Linux internals
Comfortable working across multiple layers of the stack, from low-level code to infra ops
Fast learner who thrives in ambiguous, high-agency environments
Bonus: Experience with compilers, CUDA, or ML infrastructure
What's in it for me:
Up to $300k base, dependent on experience, + 0.2-0.5% equity
Work directly with deeply technical founders (ex-Stanford, Google, and more) on some of the hardest problems in AI infra
On-site in San Francisco
Stealth startup with serious backing, operating lean and hiring intentionally
Apply now for immediate consideration!
Founding Software Engineer / Protocol Engineer
We are actively searching for a Founding Protocol Engineer to join our team on a permanent basis. In this position, you will help build next-generation lending and debt protocols. If you are someone who is impressed with what Hyperliquid has accomplished, then this role is for you. We are open to both Senior-level and Architect-level candidates.
Your Rhythm:
Drive the architecture, technical design, and implementation of our lending protocol.
Collaborate closely with researchers to validate and test designs
Collaborate with auditors and security engineers to ensure safety of the protocol
Participate in code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices
Your Vibe:
5+ years of professional software engineering experience
3+ years of experience working with Solidity on the EVM in production environments, specifically focused on DeFi products
2+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures
Experience building lending protocols in a smart contract language
Open to collaborating onsite a few days a week at our downtown SF office
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Staff Software Engineer - High-Growth AI/FinTech
Staff Software Engineer (IC) - High-Growth AI/FinTech Startup
Full-time · Hybrid (San Francisco)
$220k-$300k + equity
A well-funded, rapidly scaling startup in the AI-driven fintech space is looking for an experienced Staff Engineer to take ownership of reshaping the foundations of their core platform. After two years of fast iteration and customer growth, the product has evolved into a set of independently built services. They now need a senior IC who can bring coherence, scalability, and long-term architectural stability as the engineering team expands.
This is a high-impact individual contributor role working directly with the CTO. You'll set technical direction, oversee major system redesigns, and help prepare the platform to support significantly larger usage, customer demands, and a future 20-40+ engineer organisation.
What You'll Be Doing
Lead architectural transformation
Redesign major components into a unified, maintainable, scalable system.
Remove legacy code, reduce fragmentation, and introduce sound architectural patterns.
Define technical standards and guide the broader engineering team towards consistent, high-quality design.
Drive high-leverage engineering work
Partner closely with the CTO on long-term technical strategy.
Lead development of workflow systems for real-time identity, income, and document verification.
Strengthen the infrastructure that powers the company's automated decisioning engine (currently >70% auto-approval/denial rate).
Support integrations with internal ML models that perform fraud detection and financial document understanding.
Influence and elevate the engineering culture
Collaborate with senior and junior engineers across backend, full-stack and infra.
Improve developer velocity and support onboarding of larger enterprise customers.
Help the company scale from an early-stage engineering organisation to a mature, high-performance team.
What They're Looking For
7-8+ years' experience as a strong backend or full-stack IC.
Proven ability to re-architect complex systems and scale codebases beyond the “early startup” phase.
Experience in a fast-growing startup (Seed → A → B or similar) where the engineering org expanded meaningfully.
Depth in modern backend or full-stack development (ideal: TypeScript, React, Node.js, Python).
Someone who thrives in ambiguity, makes pragmatic technical decisions, and moves quickly.
A high engineering bar and the ability to raise the standards of those around you.
Tech Environment
Frontend: TypeScript, React
Backend: Node.js, Python
Data: Postgres, BigQuery, Redis
Cloud: GCP
Hybrid working model; candidates must be based in or willing to relocate to the San Francisco Bay Area. (Hybrid flexibility available for senior candidates.)
Why This Role Is Exciting
Join a business with strong revenue, real customers, and top-tier backers.
Have ownership of mission-critical architecture, not just feature work.
Work alongside a highly capable CTO and shape the company's technical trajectory for years to come.
Build systems that support real-world decisions for millions of end-users.
Competitive salary, meaningful equity, and the chance to make a long-term technical mark.
Python Backend Engineer - 3D / Visualization / API / Software (On-site)
A pioneering and well-funded AI company is seeking a talented Python Backend Engineer to build the core infrastructure for its revolutionary autonomous systems. This is a unique opportunity to join an innovative team at the forefront of engineering and artificial intelligence, creating a new category of software that will redefine how complex products in sectors like aerospace, automotive, and advanced manufacturing are designed and developed.
Why Join?
Build the Future of Engineering: This isn't just another backend role. Your work will directly shape how next-generation rockets, cars, and aircraft are designed, fundamentally changing the engineering landscape.
Solve Unprecedented Technical Puzzles: Tackle unique challenges in building the infrastructure for autonomous AI agents, including simulation orchestration, multi-agent coordination, and scalable model serving.
Shape a Foundational Platform: As a critical member of a pioneering team, you will have a significant impact on the technical direction and core architecture of an entirely new category of software.
Join a High-Impact Team: Work in a collaborative, fast-paced environment where your expertise is valued, and you have end-to-end ownership of critical systems.
Compensation & Location: Base salary of up to $210,000 + equity + benefits, while working on-site with the team in a modern office in downtown San Francisco.
The Role
As a Python Backend Engineer, you will be instrumental in constructing the infrastructure that underpins these autonomous engineering agents. Your responsibilities will span model serving, simulation orchestration, multi-agent coordination, and the development of robust, developer-facing APIs. This position is critical for delivering the fast, reliable, and scalable systems that professional engineers will trust and depend on in high-stakes production environments.
You will:
Own and build the core backend infrastructure for the autonomous AI agents, focusing on scalability, model serving, and multi-agent orchestration.
Design and maintain robust APIs while integrating essential third-party tools like CAD software and simulation backends into the core platform.
Develop backend services to process and serve complex 3D visualizations from simulation and geometric data.
Collaborate across ML, frontend, and simulation teams to shape the product and engage directly with early customers to drive infrastructure needs.
Make foundational architectural decisions that will define the technical future and scalability of the entire platform.
The Essential Requirements
Strong backend software engineering experience, with a primary focus on Python.
Proven experience in designing, building, and maintaining production-level APIs (FastAPI preferred but Flask and Django also considered).
Experience with 3D visualization libraries or tools such as PyVista, ParaView, or VTK.
Excellent systems-thinking skills and the ability to reason about the interactions between compute, data, and models.
Experience working in fast-paced environments where end-to-end ownership and proactivity are essential.
Exceptional communication and collaboration abilities.
What Will Make You Stand Out
Experience integrating with scientific or engineering software (such as CAD, FEA, or CFD tools).
Exposure to agent frameworks, workflow orchestration engines, or distributed systems.
Familiarity with model serving frameworks (e.g., TorchServe, Triton) or simulation backends.
Previous experience building developer-focused tools or working in high-trust, customer-facing technical roles.
If you are interested in this role, please apply with your resume through this site.
Disclaimer
Attis Global Ltd is an equal opportunities employer. No terminology in this advert is intended to discriminate on any of the grounds protected by law, and all qualified applicants will receive consideration for employment without regard to age, sex, race, national origin, religion or belief, disability, pregnancy and maternity, marital status, political affiliation, socio-economic status, sexual orientation, gender, gender identity and expression, and/or gender reassignment. M/F/D/V. We operate as a staffing agency and employment business. More information can be found at attisglobal.com.
Robotic Software Engineer
Robotics Software Engineer (Generalist/Full-Stack)
Robotic Software Engineer - Humanoid Robotics
Palo Alto, SF Bay Area (Full-time | Onsite)
$180k-$200k + equity (flexible for exceptional candidates)
We are recruiting for a company building next-generation humanoid robotic systems that combine advanced AI with cutting-edge hardware. Their team moves fast, prototypes aggressively, and puts real robots into the world. They're now hiring a Robotic Software Engineer to help shape their core software stack and accelerate the development of their embodied AI systems.
What You'll Work On
As part of a small, high-impact engineering team, you will:
Build and optimise robotics software in C++ and ROS2
Integrate perception, control, planning, and learning modules
Work hands-on with robots to bring up new hardware and run real-world experiments
Deploy reinforcement learning / imitation learning policies onto physical robots
Develop middleware, interfaces, and tooling that connect AI → hardware
Prototype behaviours across diverse robot types (arms, humanoids, mobile platforms, drones)
This role directly supports both our AI and hardware teams and has significant ownership from day one.
Must-haves:
Strong C++ development skills (multi-threading, performance, systems-level)
Professional experience with ROS2
Hands-on robotics experience - ideally robot learning on physical hardware
Ability to work on real robots (debugging, integration, testing)
Generalist mindset and comfort in a fast-paced startup environment
Nice-to-haves:
Manipulation or kinematics (humanoids, arms, quadrupeds)
Controls for mobile robots or drones
Sensor/actuator integration, drivers, or middleware experience
VR prototyping (Meta Quest or similar)
Experience across different robot embodiments
Why Join Us
Build software that runs on real humanoid robots immediately
High ownership within a small, world-class engineering team
Competitive compensation + meaningful equity
Opportunity to influence architecture, roadmap, and product direction
Work at one of the most exciting intersections in tech: AI × robotics
Software Engineer
Founding Engineer
$140K - $200K + equity
San Francisco (Onsite Role)
Direct Hire
A fast-growing early-stage startup that recently secured a significant Seed round is actively hiring 3x software engineers to join their founding team. They're looking for people who are scrappy, move fast, challenge assumptions, and are driven to win. They build quickly and expect teammates to push boundaries.
Who You Are
Make quick, reversible (“two-way door”) decisions
Proactively fix problems before being asked
Comfortable working across a modern engineering stack (e.g., TypeScript, Python, containerisation, ML/LLM tooling, databases, cloud environments, mobile frameworks)
Have built real, shipped products
Thrive in ambiguity and fast-moving environments
What You'll Do
Talk directly with users to understand their workflows, pain points, and needs
Architect systems that support large enterprise usage
Build automated pipelines and intelligent agents that process and verify large volumes of data
Maintain scalable, robust infrastructure
Ship quickly - progress over perfection
The Reality
You'll work closely with the founding team and directly with customers
User value beats hype, trends, and “cool tech”
Expect a demanding, high-output culture
If you're a Software Engineer with 2+ years' experience and want to work in a growing start-up, please apply now for immediate consideration.
Staff Software Engineer
The Role
Our client is seeking a Staff Software Engineer to join a small, senior team as a highly skilled individual contributor. In this hybrid role, you'll work across the stack to build new user-facing features and develop integrations with CAD and third-party applications. You'll partner closely with product managers, AI researchers, and other engineers to turn new ideas into production-ready systems at scale.
What You'll Do
Design and build scalable, reliable full-stack systems using React, Node.js, and Python.
Deploy ML models to production: you've done it before, and you'll do it again, building robust products that users love.
Collaborate closely with ML and data teams to integrate models and pipelines into real-world products.
Architect backend systems around AWS services, databases, and modern data infrastructure.
Own performance and scale: build APIs, indexes, and search systems that make high-dimensional data feel instant.
Contribute to product direction: work with design, AI, and leadership to turn technical capabilities into delightful user experiences.
(Optional but exciting): advance 3D visualization, geometry, or rendering engines that make engineering feel magical.
What We're Looking For
You're a strong generalist who can build, ship, and scale complex full-stack systems.
You're fluent in React, Node.js, and Python, and comfortable designing APIs, services, and data flows end-to-end.
You've shipped large production systems, ideally ones that touch ML, data, or search.
You have experience with AWS and databases, and you enjoy thinking about indexing, search, and vector data systems.
You're pragmatic, product-minded, and enjoy owning features from concept to deployment.
You collaborate naturally with AI, design, and data teams, and love turning complexity into clarity.
Bonus points if:
You've worked with large-scale data processing pipelines.
You have an interest in math, geometry, topology, rendering, or computational geometry.
You've built software in 3D printing, CAD, or computer graphics domains.
This is a rare opportunity to create the interfaces, infrastructure, and experiences that bring a new kind of intelligence to the physical world, and help define how AI becomes a tool for the imagination.
You love building systems that are elegant, fast, and deeply technical, and want to see them shape the real world.
Let's build the tools the future will be made in.
Compensation
The base salary range for this role is $175,000 - $240,000, plus equity. Flexible PTO and competitive compensation. Final offers will be based on experience, interview performance, and alignment with role requirements.
Senior DevOps Engineer (Ref: 194285)
About Us
We are collaborating with our client, an innovative fintech startup headquartered in the Bay Area that delivers real-time card data, primarily to B2B software firms. Established as a spin-out in early 2024, with a product launched by that summer, the organization is at a pivotal growth juncture, with an expanding customer base and ambitious future initiatives.
With the support of notable investors including Nica Partners, QED Investors, RBC, and Visa, this organization is championing advancements in card issuance technology. Their team of 20 passionate professionals is seeking exceptional engineers to aid in scaling their operations.
Job Description
We are assisting this organization in their search for a Senior DevOps Engineer (L1+), tasked with steering the evolution of their infrastructure and serverless architecture. Operating primarily within an AWS framework and extensively utilizing Lambda, the ideal candidate will spearhead serverless migrations while upholding the stringent security protocols essential for the fintech and payments domain, particularly regarding PCI compliance.
Key Responsibilities
Design and implement serverless infrastructure using AWS Lambda and associated services.
Lead the serverless migration process and help define the infrastructure roadmap.
Ensure compliance with PCI standards and establish security best practices throughout the infrastructure.
Manage and optimize SQL-based databases hosted on AWS.
Collaborate with teams to advance the organization's Kubernetes capabilities as they grow their microservices architecture.
Work collaboratively with engineering teams to enhance deployment pipelines and improve the developer experience.
Engage in a hybrid work model, requiring in-office presence on Tuesdays, Wednesdays, and Thursdays.
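To make the serverless pattern above concrete, here is a minimal sketch of an AWS Lambda handler in Python. The payload shape (`amount_cents`) and the flag threshold are hypothetical illustrations, not the client's actual logic:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler sketch for a card-transaction webhook.

    The field names and the review threshold below are invented for
    illustration only.
    """
    body = json.loads(event.get("body") or "{}")
    amount = body.get("amount_cents", 0)
    flagged = amount > 500_000  # illustrative review threshold
    return {
        "statusCode": 200,
        "body": json.dumps({"flagged": flagged}),
    }
```

In a PCI-scoped environment, a handler like this would sit behind API Gateway with secrets in AWS Secrets Manager and card data kept out of logs.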
Requirements
Required:
A minimum of senior-level (L1+) experience in DevOps or Infrastructure engineering.
In-depth knowledge of AWS, particularly Lambda and serverless architectures.
A strong background in security practices, especially regarding PCI compliance for infrastructure.
Experience with SQL-based database management.
Familiarity with the fintech or payments industry is essential.
Knowledge of the card issuance sector and experience with payment networks, ideally including Mastercard.
Startup experience with a proven ability to deliver quick and effective solutions.
A willingness to relocate to or already residing in the Bay Area.
Preferred:
Experience with Kubernetes.
Expertise in microservices architecture.
Prior experience at organizations such as Marqeta, Plaid, Ramp, or Brex.
Experience working with payment networks.
Benefits
A competitive compensation package comprising both salary and equity, benchmarked to attract the best talent available.
A 4% matching contribution for the 401(k) plan.
Comprehensive medical and dental insurance managed through Rippling.
Regular off-site events throughout the year in diverse locations across the US and Europe, including frequent trips to California and New York.
Exclusive company merchandise.
A hybrid work structure requiring three days of on-site work.
Software Engineer
As a software engineer at General Medicine, you'll help build and scale a healthcare store that makes it delightfully simple to shop for any type of care. We provide upfront cash and insurance prices for virtual and in-person visits, prescriptions, labs, imaging, and more.
What we're looking for
We're looking for strong engineers to help us build a seamless and beautiful consumer healthcare product. We're looking for folks who will obsess over every detail of our patient experience, and also tackle the complex operational challenges of delivering care at scale. We are looking for engineers who care deeply about technical excellence but are also comfortable moving quickly - we are constantly navigating tradeoffs between engineering velocity and quality.
Our ideal candidate is hungry, high-agency, and aspires to be a generalist. Our engineers frequently write product requirements documents, write SQL to understand how features are performing, and own QA - no task is beneath us or outside of the scope of the role if it helps us to deliver a great product. We're looking for someone who can operate in an environment of significant ambiguity, and who is comfortable working closely with design, operations, and clinical stakeholders.
We don't expect you to have a healthcare background (though it's great if you do!). However, you should be excited by the prospect of digging into the messy complexities of the American healthcare system (integrating with EHRs, revenue cycle management, etc).
Qualifications
2+ years of experience building web apps as a full-stack engineer
Experience with modern infra tooling and programming languages. We currently use AWS, Ruby on Rails, and Next.js, and would expect you to have proficiency in a modern tech stack even if it isn't the one we are using.
Please note that this role is based in either our SF office (near Market and Spear St) or our Boston office (Central Square, Cambridge). We expect our team to work from the office at least 3 days per week.
Why join us
We're an experienced team that has built a company in this space before, our product has clear product-market fit, and we've raised money from top investors.
We have an ambitious and distinctive vision for what can be built in consumer healthcare. We believe LLMs and price transparency legislation have opened up several massive opportunities.
If you're an ambitious and entrepreneurial software engineer and this resonates, please apply.
Senior ML Data Engineer
We're the data team behind Midjourney's image generation models. We handle the dataset side: processing, filtering, scoring, captioning, and all the distributed compute that makes high-quality training data possible.
What you'd be working on:
Large-scale dataset processing and filtering pipelines
Training classifiers for content moderation and quality assessment
Models for data quality and aesthetic evaluation
Data visualization tools for experimenting on dataset samples
Testing/simulating distributed inference pipelines
Monitoring dashboards for data quality and pipeline health
Performance optimization and infrastructure scaling
Occasionally jumping into inference optimization and other cross-team projects
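As a toy illustration of the filtering side of such pipelines (the `Record` fields and thresholds are invented; in practice this logic would run inside a distributed PySpark job rather than over a Python list):

```python
from dataclasses import dataclass

@dataclass
class Record:
    url: str
    caption: str
    aesthetic_score: float

def keep(rec: Record, min_score: float = 0.5, min_caption_len: int = 8) -> bool:
    # Drop low-scoring or under-captioned samples (illustrative thresholds).
    return rec.aesthetic_score >= min_score and len(rec.caption) >= min_caption_len

batch = [
    Record("a.jpg", "a red bicycle leaning on a wall", 0.82),
    Record("b.jpg", "img", 0.91),
    Record("c.jpg", "sunset over mountains, golden hour", 0.31),
]
kept = [r.url for r in batch if keep(r)]
```

Real pipelines layer many such predicates (dedup, moderation classifiers, aesthetic models) and log per-filter drop rates for monitoring.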
Our current stack: PySpark, Slurm, and distributed batch processing across a hybrid cloud setup. We're pragmatic about tools - if there's something better, we'll switch.
We're looking for someone strong in either:
Data engineering/ML pipelines at scale, or
Cloud/infrastructure with distributed systems experience
You don't need exact tech matches - comfort with adjacent technologies and a willingness to learn matter more. We work with our own hardware plus GCP and other providers, so adaptability across different environments is valuable.
Location: SF office a few times per week (we may make exceptions on location for truly exceptional candidates)
The role offers variety: our team members often get pulled into different projects across the company, from dataset work to inference optimization. If you're interested in the intersection of large-scale data processing and cutting-edge generative AI, we'd love to hear from you.
Senior Data Engineer - Spark, Airflow
We are seeking an experienced Data Engineer to design and optimize scalable data pipelines that drive our global data and analytics initiatives.
In this role, you will leverage technologies such as Apache Spark, Airflow, and Python to build high-performance data processing systems and ensure data quality, reliability, and lineage across Mastercard's data ecosystem.
The ideal candidate combines strong technical expertise with hands-on experience in distributed data systems, workflow automation, and performance tuning to deliver impactful, data-driven solutions at enterprise scale.
Responsibilities:
Design and optimize Spark-based ETL pipelines for large-scale data processing.
Build and manage Airflow DAGs for scheduling, orchestration, and checkpointing.
Implement partitioning and shuffling strategies to improve Spark performance.
Ensure data lineage, quality, and traceability across systems.
Develop Python scripts for data transformation, aggregation, and validation.
Execute and tune Spark jobs using spark-submit.
Perform DataFrame joins and aggregations for analytical insights.
Automate multi-step processes through shell scripting and variable management.
Collaborate with data, DevOps, and analytics teams to deliver scalable data solutions.
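As a rough illustration of the spark-submit execution and tuning step above (the resource and shuffle-partition values are placeholders, not recommendations, and `etl_job.py` is a hypothetical job script):

```shell
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 20 \
  --executor-memory 8g \
  --executor-cores 4 \
  --conf spark.sql.shuffle.partitions=400 \
  etl_job.py --run-date 2024-01-01
```

Tuning typically iterates on executor sizing and `spark.sql.shuffle.partitions` against observed stage skew and shuffle spill in the Spark UI.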
Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or related field (or equivalent experience).
At least 7 years of experience in data engineering or big data development.
Strong expertise in Apache Spark architecture, optimization, and job configuration.
Proven experience authoring, scheduling, checkpointing, and monitoring Airflow DAGs.
Skilled in data shuffling, partitioning strategies, and performance tuning in distributed systems.
Expertise in Python programming including data structures and algorithmic problem-solving.
Hands-on with Spark DataFrames and PySpark transformations, including joins, aggregations, and filters.
Proficient in shell scripting, including managing and passing variables between scripts.
Experienced with spark-submit for deployment and tuning.
Solid understanding of ETL design, workflow automation, and distributed data systems.
Excellent debugging and problem-solving skills in large-scale environments.
Experience with AWS Glue, EMR, Databricks, or similar Spark platforms.
Knowledge of data lineage and data quality frameworks like Apache Atlas.
Familiarity with CI/CD pipelines, Docker/Kubernetes, and data governance tools.