Software Development Engineer, AI/ML, AWS Neuron, Model Inference
Data engineer job in Cupertino, CA
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. This comprehensive toolkit includes an ML compiler, runtime, and application framework that integrates seamlessly with popular ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance.
The Inference Enablement and Acceleration team is at the forefront of running a wide range of models and supporting novel architectures while maximizing their performance on AWS's custom ML accelerators. Working across the stack, from PyTorch down to the hardware-software boundary, our engineers build systematic infrastructure, innovate new methods, and create high-performance kernels for ML functions, ensuring every compute unit is fine-tuned for optimal performance on our customers' demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what's possible in AI acceleration.
As part of the broader Neuron organization, our team works across multiple technology layers, from frameworks and kernels to the compiler, runtime, and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance. This role offers a unique opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures, where you'll help shape the future of AI acceleration technology.
You will architect and implement business-critical features and mentor a brilliant team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We're inventing. We're experimenting. It is a unique learning culture. The team works closely with customers on their model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators. The team collaborates with open source ecosystems to provide seamless integration and bring peak performance at scale for customers and developers.
This role is responsible for the development, enablement, and performance tuning of a wide variety of LLM model families, including massive-scale large language models like the Llama family, DeepSeek, and beyond. The Inference Enablement and Acceleration team works side by side with compiler and runtime engineers to create, build, and tune distributed inference solutions on Trainium and Inferentia. Experience optimizing inference performance for both latency and throughput on such large models, across the stack from system-level optimizations through PyTorch or JAX, is a must-have.
You can learn more about Neuron in the AWS Neuron documentation.
Key job responsibilities
This role will help lead the effort to build distributed inference support for PyTorch in the Neuron SDK, tuning models to ensure the highest performance and maximize their efficiency on AWS Trainium and Inferentia silicon and servers. Strong software development in Python, systems-level programming, and ML knowledge are all critical to this role. Our engineers collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. Working at the intersection of software, hardware, and machine learning systems, you'll bring expertise in low-level optimization, system architecture, and ML model acceleration. In this role, you will:
* Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
* Participate in all stages of the ML system development lifecycle, including distributed-computing-based architecture design, implementation, performance profiling, hardware-specific optimizations, testing, and production deployment.
* Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
* Design and implement high-performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
* Analyze and optimize system-level performance across multiple generations of Neuron hardware.
* Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
* Implement optimizations such as fusion, sharding, tiling, and scheduling.
* Conduct comprehensive testing, including unit and end-to-end model testing, with continuous deployment and releases through pipelines.
* Work directly with customers to enable and optimize their ML models on AWS accelerators.
* Collaborate across teams to develop innovative optimization techniques.
A day in the life
You will collaborate with a cross-functional team of applied scientists, system engineers, and product managers to deliver state-of-the-art inference capabilities for Generative AI applications. Your work will involve debugging performance issues, optimizing memory usage, and shaping the future of Neuron's inference stack across Amazon and the Open Source Community. As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects.
You will also build high-impact solutions to deliver to our large customer base and participate in design discussions, code review, and communicate with internal and external stakeholders. You will work cross-functionally to help drive business decisions with your technical input. You will work in a startup-like development environment, where you're always working on the most important initiative.
About the team
The Inference Enablement and Acceleration team fosters a builder's culture where experimentation is encouraged and impact is measurable. We emphasize collaboration, technical ownership, and continuous learning. Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise so you feel empowered to take on more complex tasks in the future. Join us to solve some of the most interesting and impactful infrastructure challenges in AI/ML today.
BASIC QUALIFICATIONS
- Bachelor's degree in computer science or equivalent
- 5+ years of non-internship professional software development experience
- 5+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Fundamentals of machine learning and LLMs, including their architectures and training and inference lifecycles, along with work experience in optimizations that improve model execution.
- Software development experience in C++ or Python (experience in at least one language is required).
- Strong understanding of system performance, memory management, and parallel computing principles.
- Proficiency in debugging, profiling, and implementing best software engineering practices in large-scale systems.
PREFERRED QUALIFICATIONS
- Familiarity with PyTorch, JIT compilation, and AOT tracing.
- Familiarity with CUDA kernels or equivalent low-level ML kernels.
- Experience developing performant kernels with libraries such as CUTLASS or FlashInfer.
- Familiarity with tile-level programming syntax and semantics similar to Triton's.
- Experience with online/offline inference serving with vLLM, SGLang, TensorRT or similar platforms in production environments.
- Deep understanding of computer architecture and operating-system-level software, and working knowledge of parallel computing.
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company's reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $129,300/year in our lowest geographic market up to $223,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
Imaging Data Engineer/Architect
Data engineer job in San Francisco, CA
About us:
Intuitive is an innovation-led engineering company delivering business outcomes for hundreds of enterprises globally. With a reputation as a Tiger Team and a Trusted Partner of enterprise technology leaders, we help solve the most complex Digital Transformation challenges across the following Intuitive Superpowers:
Modernization & Migration
Application & Database Modernization
Platform Engineering (IaC/EaC, DevSecOps & SRE)
Cloud Native Engineering, Migration to Cloud, VMware Exit
FinOps
Data & AI/ML
Data (Cloud Native / DataBricks / Snowflake)
Machine Learning, AI/GenAI
Cybersecurity
Infrastructure Security
Application Security
Data Security
AI/Model Security
SDx & Digital Workspace (M365, G-suite)
SDDC, SD-WAN, SDN, NetSec, Wireless/Mobility
Email, Collaboration, Directory Services, Shared Files Services
Intuitive Services:
Professional and Advisory Services
Elastic Engineering Services
Managed Services
Talent Acquisition & Platform Resell Services
About the job:
Title: Imaging Data Engineer/Architect
Start Date: Immediate
# of Positions: 1
Position Type: Contract / Full-Time
Location: San Francisco, CA
Notes:
An Imaging Data Engineer/Architect who understands radiology and digital pathology, along with related clinical data and metadata.
Hands-on experience with the above technologies, and good knowledge of biomedical imaging and data pipelines overall.
About the Role
We are seeking a highly skilled Imaging Data Engineer/Architect to join our San Francisco team as a Subject Matter Expert (SME) in radiology and digital pathology. This role will design and manage imaging data pipelines, ensuring seamless integration of clinical data and metadata to support advanced diagnostic and research applications. The ideal candidate will have deep expertise in medical imaging standards, cloud-based data architectures, and healthcare interoperability, contributing to innovative solutions that enhance patient outcomes.
Responsibilities
Design and implement scalable data architectures for radiology and digital pathology imaging data, including DICOM, HL7, and FHIR standards.
Develop and optimize data pipelines to process and store large-scale imaging datasets (e.g., MRI, CT, histopathology slides) and associated metadata.
Collaborate with clinical teams to understand radiology and pathology workflows, ensuring data solutions align with clinical needs.
Ensure data integrity, security, and compliance with healthcare regulations (e.g., HIPAA, GDPR).
Integrate imaging data with AI/ML models for diagnostic and predictive analytics, working closely with data scientists.
Build and maintain metadata schemas to support data discoverability and interoperability across systems.
Provide technical expertise to cross-functional teams, including product managers and software engineers, to drive imaging data strategy.
Conduct performance tuning and optimization of imaging data storage and retrieval systems in cloud environments (e.g., AWS, Google Cloud, Azure).
Document data architectures and processes, ensuring knowledge transfer to internal teams and external partners.
Stay updated on emerging imaging technologies and standards, proposing innovative solutions to enhance data workflows.
Qualifications
Education: Bachelor's degree in Computer Science, Biomedical Engineering, or a related field (Master's preferred).
Experience:
5+ years in data engineering or architecture, with at least 3 years focused on medical imaging (radiology and/or digital pathology).
Proven experience with DICOM, HL7, FHIR, and imaging metadata standards (e.g., SNOMED, LOINC).
Hands-on experience with cloud platforms (AWS, Google Cloud, or Azure) for imaging data storage and processing.
Technical Skills:
Proficiency in programming languages (e.g., Python, Java, SQL) for data pipeline development.
Expertise in ETL processes, data warehousing, and database management (e.g., Snowflake, BigQuery, PostgreSQL).
Familiarity with AI/ML integration for imaging data analytics.
Knowledge of containerization (e.g., Docker, Kubernetes) for deploying data solutions.
Domain Knowledge:
Deep understanding of radiology and digital pathology workflows, including PACS and LIS systems.
Familiarity with clinical data integration and healthcare interoperability standards.
Soft Skills:
Strong analytical and problem-solving skills to address complex data challenges.
Excellent communication skills to collaborate with clinical and technical stakeholders.
Ability to work independently in a fast-paced environment, with a proactive approach to innovation.
Certifications (preferred):
AWS Certified Solutions Architect, Google Cloud Professional Data Engineer, or equivalent.
Certifications in medical imaging (e.g., CIIP - Certified Imaging Informatics Professional).
Data Engineer
Data engineer job in San Francisco, CA
Mercor is hiring a Data Engineer on behalf of a leading AI lab. In this role, you'll design resilient ETL/ELT pipelines and data contracts to ensure datasets are analytics- and ML-ready. You'll validate, enrich, and serve data with strong schema and versioning discipline, building the backbone that powers AI research and production systems. This position is ideal for candidates who love working with data pipelines, distributed processing, and ensuring data quality at scale.
You're a great fit if you:
- Have a background in computer science, data engineering, or information systems.
- Are proficient in Python, pandas, and SQL.
- Have hands-on experience with databases like PostgreSQL or SQLite.
- Understand distributed data processing with Spark or DuckDB.
- Are experienced in orchestrating workflows with Airflow or similar tools.
- Work comfortably with common formats like JSON, CSV, and Parquet.
- Care about schema design, data contracts, and version control with Git.
- Are passionate about building pipelines that enable reliable analytics and ML workflows.
Primary Goal of This Role
To design, validate, and maintain scalable ETL/ELT pipelines and data contracts that produce clean, reliable, and reproducible datasets for analytics and machine learning systems.
What You'll Do
- Build and maintain ETL/ELT pipelines with a focus on scalability and resilience.
- Validate and enrich datasets to ensure they're analytics- and ML-ready.
- Manage schemas, versioning, and data contracts to maintain consistency.
- Work with PostgreSQL/SQLite, Spark/DuckDB, and Airflow to manage workflows.
- Optimize pipelines for performance and reliability using Python and pandas.
- Collaborate with researchers and engineers to ensure data pipelines align with product and research needs.
Why This Role Is Exciting
- You'll create the data backbone that powers cutting-edge AI research and applications.
- You'll work with modern data infrastructure and orchestration tools.
- You'll ensure reproducibility and reliability in high-stakes data workflows.
- You'll operate at the intersection of data engineering, AI, and scalable systems.
Pay & Work Structure
- You'll be classified as an hourly contractor to Mercor.
- Paid weekly via Stripe Connect, based on hours logged.
- Part-time (20-30 hrs/week) with flexible hours; work from anywhere, on your schedule.
- Weekly bonus of $500-$1000 USD per 5 tasks.
- Remote and flexible working style.
Staff Data Scientist - Post Sales
Data engineer job in San Francisco, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're expanding our data science organization to accelerate customer success after the initial sale, driving onboarding, retention, expansion, and long-term revenue growth.
About the Role
As the senior data scientist supporting post-sales teams, you will use advanced analytics, experimentation, and predictive modeling to guide strategy across Customer Success, Account Management, and Renewals. Your insights will help leadership forecast expansion, reduce churn, and identify the levers that unlock sustainable net revenue retention.
Key Responsibilities
Forecast & Model Growth: Build predictive models for renewal likelihood, expansion potential, churn risk, and customer health scoring.
Optimize the Customer Journey: Analyze onboarding flows, product adoption patterns, and usage signals to improve activation, engagement, and time-to-value.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of onboarding programs, success initiatives, and pricing changes on retention and expansion.
Revenue Insights: Partner with Customer Success and Sales to identify high-value accounts, cross-sell opportunities, and early warning signs of churn.
Cross-Functional Partnership: Collaborate with Product, RevOps, Finance, and Marketing to align post-sales strategies with company growth goals.
Data Infrastructure Collaboration: Work with Analytics Engineering to define data requirements, maintain data quality, and enable self-serve dashboards for Success and Finance teams.
Executive Storytelling: Present clear, actionable recommendations to senior leadership that translate complex analysis into strategic decisions.
About You
Experience: 6+ years in data science or advanced analytics, with a focus on post-sales, customer success, or retention analytics in a B2B SaaS environment.
Technical Skills: Expert SQL and proficiency in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Deep understanding of SaaS metrics such as net revenue retention (NRR), gross churn, expansion ARR, and customer health scoring.
Analytical Rigor: Strong background in experimentation design, causal inference, and predictive modeling to inform customer-lifecycle strategy.
Communication: Exceptional ability to translate data into compelling narratives for executives and cross-functional stakeholders.
Business Impact: Demonstrated success improving onboarding efficiency, retention rates, or expansion revenue through data-driven initiatives.
Senior Data Warehouse & BI Developer
Data engineer job in San Leandro, CA
About the Role
We're looking for a Senior Data Warehouse & BI Developer to join our Data & Analytics team and help shape the future of Ariat's enterprise data ecosystem. You'll design and build data solutions that power decision-making across the company, from eCommerce to finance and operations.
In this role, you'll take ownership of data modeling and BI reporting using Cognos and Tableau, and contribute to the development of SAP HANA Calculation Views. If you're passionate about data architecture, visualization, and collaboration, and love learning new tools, this role is for you.
You'll Make a Difference By
Designing and maintaining Ariat's enterprise data warehouse and reporting architecture.
Developing and optimizing Cognos reports for business users.
Collaborating with the SAP HANA team to develop and enhance Calculation Views.
Translating business needs into technical data models and actionable insights.
Ensuring data quality through validation, testing, and governance practices.
Partnering with teams across the business to improve data literacy and reporting capabilities.
Staying current with modern BI and data technologies to continuously evolve Ariat's analytics stack.
About You
7+ years of hands-on experience in BI and Data Warehouse development.
Advanced skills in Cognos (Framework Manager, Report Studio).
Strong SQL skills and experience with data modeling (star schemas, dimensional modeling).
Experience building and maintaining ETL processes.
Excellent analytical and communication skills.
A collaborative, learning-oriented mindset.
Experience developing SAP HANA Calculation Views preferred
Experience with Tableau (Desktop, Server) preferred
Knowledge of cloud data warehouses (Snowflake, BigQuery, etc.).
Background in retail or eCommerce analytics.
Familiarity with Agile/Scrum methodologies.
About Ariat
Ariat is an innovative, outdoor global brand with roots in equestrian performance. We develop high-quality footwear and apparel for people who ride, work, and play outdoors, and care about performance, quality, comfort, and style.
The salary range for this position is $120,000 - $150,000 per year.
The salary is determined by the education, experience, knowledge, skills, and abilities of the applicant, internal equity, and alignment with market data for geographic locations. Ariat in good faith believes that this posted compensation range is accurate for this role at this location at the time of this posting. This range may be modified in the future.
Ariat's holistic benefits package for full-time team members includes (but is not limited to):
Medical, dental, vision, and life insurance options
Expanded wellness and mental health benefits
Paid time off (PTO), paid holidays, and paid volunteer days
401(k) with company match
Bonus incentive plans
Team member discount on Ariat merchandise
Note: Availability of benefits may be subject to location & employment type and may have certain eligibility requirements. Ariat reserves the right to alter these benefits in whole or in part at any time without advance notice.
Ariat will consider qualified applicants, including those with criminal histories, in a manner consistent with state and local laws. Ariat is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis protected under federal, state, or local law. Ariat is committed to providing reasonable accommodations to candidates with disabilities. If you need an accommodation during the application process, email *************************.
Please see our Employment Candidate Privacy Policy at ********************* to learn more about how we collect, use, retain and disclose Personal Information.
Please note that Ariat does not accept unsolicited resumes from recruiters or employment agencies. In the absence of a signed Agreement, Ariat will not consider or agree to payment of any referral compensation or recruiter/agency placement fee. In the event a recruiter or agency submits a resume or candidate without a previously signed Agreement, Ariat explicitly reserves the right to pursue and hire those candidate(s) without any financial obligation to the recruiter or agency. Any unsolicited resumes, including those submitted directly to hiring managers, are deemed to be the property of Ariat.
Founding Data Scientist
Data engineer job in San Francisco, CA
Fast-growing AI healthcare startup is hiring a Founding Data Scientist.
Seeking a true “player-coach”, offering the opportunity to shape the data science function from the ground up while remaining hands-on in analytics, modeling, and product collaboration.
What You'll Do
• Establish the data function: build systems, tools, best practices, and a long-term data strategy
• Hands-on analytics and modeling: develop predictive and optimization models, perform ad hoc analyses, and prototype ML applications
• Collaborate across teams: work closely with product, engineering, and clinical leaders to uncover trends, inform features, and drive measurable outcomes
• Client-facing insights: analyze healthcare data, create dashboards, and translate findings into actionable recommendations
• Leadership growth: act as a “player-coach,” mentoring future data hires as the team expands
What We're Looking For
• 8+ years in data science or advanced analytics
• Advanced proficiency in Python and SQL
• Experience with healthcare datasets (claims, EMR/EHR)
• Strong statistical modeling and ML knowledge (scikit-learn, StatsModels, PyTorch)
• Comfortable with ambiguity, able to own end-to-end data initiatives, and influence product/clinical decisions
• Strong communication skills for cross-functional and external stakeholder engagement
Tech Stack: Python, R, scikit-learn, StatsModels, PyTorch, SQL, healthcare claims + EMR/EHR data, ICD/CPT/HCPCS, BI tools
Glazing Engineer (Construction)
Data engineer job in San Francisco, CA
The Glazing Engineer is responsible for the completion of high-quality facade, curtain wall, and glazing system projects on time, within budget, and within scope. This role will oversee all aspects of a project from start to finish by coordinating efforts among the internal team, subcontractors, vendors, and owners/developers. The ideal candidate will have outstanding interpersonal skills, adjust to changing priorities from various directives, and communicate effectively.
RESPONSIBILITIES
Oversees the design development and coordination of custom curtain wall and glazing systems, translating architectural intent into engineered, buildable, and fully coordinated facade solutions. Capable of leading multiple projects and supervising engineers.
Know and comply with all federal, state, and local building codes, ordinances, and regulations, maintaining the highest standards for safety and quality.
Manage relationships with all internal and external parties to determine project specifications, resolve conflicts, and support success.
Establish project schedule and delegate project tasks based on staff strengths, skills, and experience.
Secure and allocate all resources needed for the completion of the project, including building permits, licenses, materials, and equipment.
Negotiate, manage and communicate changes to contract scope, schedule and costs.
Plan and execute inspections, assess design compliance and quality, minimize risk.
Create and maintain comprehensive project documentation.
Regularly confer with supervisors to monitor and report on compliance, quality and productivity.
Be a strong team leader, build synergy within and across the team, and develop individuals.
REQUIREMENTS
Bachelor's degree in Construction Management, Civil Engineering, Mechanical Engineering, Architecture or related field.
3+ years of experience in facade, curtain wall, or glazing system engineering within a design-build or design-assist environment.
Strong understanding of building envelope design, structural behavior, waterproofing, and thermal performance.
Proficient in AutoCAD, Revit, and 3D modeling tools; familiarity with facade testing standards (ASTM, AAMA, NFRC).
Experience coordinating with architects, structural engineers, and fabricators through design, procurement, and installation.
Skilled in technical documentation, submittal review, and field problem-solving.
Excellent communication and collaboration skills within multidisciplinary project teams.
A valid driver's license.
This position description is a summary and not a complete representation of the position; the essential functions of the position may change as duties are assigned.
Information for Recruiters and Agencies/Staffing Firms: Build Group does not accept unsolicited agency resumes. Please do not forward unsolicited agency resumes to our website or to any Build Group employee. Build Group will not pay fees to any third-party agency or firm and will not be responsible for any agency fees associated with unsolicited resumes. Unsolicited resumes received will be considered the property of Build Group.
Notice to California Residents/Applicants: In connection with your application, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect include your name, government-issued identification number(s), email address, mailing address, other contact information, employment history, educational history, and demographic information. We collect and use those categories of Personal Information about you for human resources and other business management purposes, including identifying and evaluating you as a candidate for potential or future employment or future contract positions, recordkeeping in relation to recruiting and hiring, conducting criminal background checks as permitted by law, conducting analytics, and ensuring compliance with applicable legal requirements and Company policies.
Equal Opportunity Employment: Build Group provides equal employment opportunity to all employees and applicants for employment, free from unlawful discrimination based on race, color, religion, gender, age, national origin, disability, veteran status, marital status, sexual orientation, gender identity, genetic information or any other status or condition protected by local, state or federal law. This policy applies to all terms and conditions of employment, including hiring, training, orientation, placement, discipline, promotion, transfer, position elimination, rehire, benefits, compensation, retirement and termination. As an equal opportunity employer, Build Group seeks to hire employees based solely on their qualifications and abilities.
Staff Software Engineer
Data engineer job in San Francisco, CA
What we do
Idler builds reinforcement learning environments that teach AI models to code like 0.01% engineers. Our training environments are based on real-world coding scenarios that frontier models will actually encounter.
We've closed a multimillion-dollar contract with a leading foundation lab (the largest they've issued to date).
Demand is outpacing our capacity to deliver, so we're scaling the team fast.
What you'll do
Build agentic systems that create and QA coding environments at scale.
Most of your day will be spent designing these systems to be extremely sound.
A big part of our work is thinking critically about what makes a coding environment and task "good" and "fair".
This requires high agency and philosophical thinking alongside technical execution.
Concretely, you'll:
Design and build scalable systems that generate RL environments.
Create automated QA systems to validate environment quality and fairness.
Work directly with AI researchers at leading labs to understand what makes training data effective.
Support new product lines as we expand beyond coding environments.
Staff Engineer Responsibilities & Requirements
Lead the process of identifying, specifying, and implementing core technology primitives that maximize the leverage of the rest of the team.
Understand and own the technology stack end-to-end.
8+ years of professional software engineering experience.
Lead and mentor more junior members of the team.
You'll work with
The founding team, a founding engineer, and a small group of engineers (we're hiring quickly).
You'll have direct access to AI researchers at frontier labs.
Tech stack
TypeScript, React, NodeJS, Postgres, Redis, Vercel, Cursor
Benefits
Healthcare coverage, 401(k), and 15 days PTO.
Meals, coffee, and snacks (that you will actually enjoy) covered during working days.
Latest MacBook Pro and equipment.
Relocation assistance available.
Team offsites and events (we love hanging out).
This is an in-person role in San Francisco.
We're a tight-knit founding team and we play to win.
Join us if you like to win too.
Software Engineer - AI Agent Infrastructure (Healthcare)
Data engineer job in San Francisco, CA
Honey Health is the all-in-one AI back office for primary and specialty care. Our AI agents autonomously handle core back-office jobs, such as aggregating patient data, processing orders and prescriptions, automating prior authorizations, triaging faxes and referrals, and managing RCM (revenue cycle management). Organizations using Honey frequently cut administrative costs in half while improving staff/patient satisfaction and increasing revenue. Built with enterprise-grade security and privacy, our platform delivers real operational transformation.
About the Role
Honey Health is seeking a Software Engineer to help build leading AI Agent systems that transform healthcare operations. In this role, you will contribute to designing and implementing the infrastructure for training and deploying highly useful AI Agents in healthcare. Our team's mission is to create seamless, robust platforms for AI Agents - enabling them to operate at scale and perform complex tasks safely and autonomously. You'll work closely with AI researchers, product teams, and operations teams to help translate cutting-edge technical research into impactful healthcare applications, automating back-office work and improving patient care. The ideal candidate is passionate about building AI Agents (especially in healthcare) and is motivated to learn, deliver high-quality work, and contribute to safe and beneficial AI systems. This is a full-time role based in the U.S., offering the opportunity to contribute to innovation at the intersection of AI and healthcare.
Is This You?
You're fired up about Agentic AI and ready to help shape the future of healthcare. You're joining at the perfect moment to build transformative AI agents, and you're here to learn fast and contribute boldly.
You're deeply driven to make a meaningful impact - contributing to team culture at Honey, redefining value for healthcare providers and patients, and pushing the boundaries of innovation in one of the most impactful industries.
You don't just solve problems - you take on challenging ones with ambition and drive. You bring energy, even in the face of complexity, aiming for excellence when it matters most.
You bring initiative - sparking ideas, asking good questions, and supporting the team to explore ambitious paths in a fast-moving, open and exploratory environment.
If these describe you, we should definitely talk.
In this role, you will:
Contribute to building and improving next-generation AI agent infrastructure to train and deploy healthcare AI agents, helping ensure the platform is efficient, reliable, and scalable for production environments.
Assist with integrating the latest LLM advancements and in-house research into the agent platform, leveraging generative AI and (where applicable) reinforcement learning to enhance agent capabilities.
Prototype features and integrations for AI agents with real healthcare data and services - supporting reliable, safe automation in complex workflows (e.g., automating administrative tasks) - and help implement secure, sandboxed execution to support robust operations.
Collaborate with healthcare experts and cross-functional partners to turn novel AI research into practical features, and work with pilot customers and clinicians to validate and refine value in healthcare.
Write clean, tested code and contribute to code quality, reliability, and monitoring practices to deliver an excellent experience for healthcare users.
You might thrive in this role if you:
Have 3+ years of industry-related experience.
Are motivated by using AI as a force for innovation, care improvement, and system-wide change in healthcare.
Have exposure to or interest in building AI and agentic systems (e.g., tool-calling stacks, orchestration frameworks), with familiarity or willingness to learn tools like LangChain, context engineering techniques, and RL-enhanced agents.
Enjoy building new things quickly and iterating with the team; excited to learn how to scale systems as products grow.
Bring a product mindset with a focus on quality and user impact; you care about technically sound solutions that improve end-user workflows and you value iterative improvement, testing, and delivering useful features.
Are committed to continuous learning and improvement, with attention to detail and a growth mindset.
Join us at Honey Health and apply your skills and curiosity in AI agents to solve real-world healthcare challenges. You will contribute to a new era where Agentic AI systems meaningfully improve healthcare - from reducing administrative burden to enabling better patient care - all while working with a team that values innovation, safety, and impact. We look forward to your curiosity, ownership, and drive in pushing the boundaries of what AI agents can do in healthcare. Apply now to help shape the future of health with us.
Software Engineer
Data engineer job in San Francisco, CA
🚀 Software Engineer (Founding Team)
About Us
We help brands unlock hidden value in inventory by turning “unsellable” products into profitable, sustainable channels. Our platform powers data-driven refurbishment, resale, and marketplace automation - giving every garment a chance at a second life.
We're an early growth-stage startup (recent Series A) on a fast trajectory with a clear mission: build the platform layer for reverse logistics and recommerce - the systems that connect factories, refurbishers, 3PLs, and marketplace channels.
The Role: Build from 0 → 1
We're hiring a Software Engineer with 2-4 years of experience who wants to join our founding engineering team. This is a high-learning, high-impact role where you'll ship real product features quickly and get deep mentorship from senior engineers.
You'll:
Build core systems and MVPs from scratch and iterate rapidly.
Own end-to-end features (APIs, automation, dashboards) that move inventory and unlock revenue.
Collaborate tightly with product, ops, and data teams to solve ambiguous, high-leverage problems.
Learn architecture, testing, and deployment practices under strong engineering coaching.
Help shape engineering culture, code standards, and system ownership as we scale.
What You'll Be Doing
Contribute to backend and platform work using modern web frameworks and cloud infrastructure.
Build API integrations that link operational partners and internal workflows.
Ship features from prototype → production, iterate based on real usage.
Balance rapid MVP shipping with durable, scalable design.
Participate in code reviews, design discussions, and team mentoring.
What We're Looking For
2-4 years software engineering experience (startup or product company preferred).
Strong backend fundamentals (Rails, Python, Node, or similar).
Comfortable working across the stack and learning new tech quickly.
Excited by messy, real-world problems and shipping fast.
Curious, coachable, and eager to grow into a technical leader.
Interest in sustainability, commerce, or AI-driven productivity is a plus.
Why Join
Ship production code from day one and see direct impact.
Hands-on mentorship from experienced engineering leaders.
Rapid career growth - early ICs will scale into lead roles as we grow.
Mission-driven product with real environmental and economic impact.
Competitive compensation + meaningful early equity; hybrid Bay Area setup.
BIOPHARMACEUTICAL - C&Q ENGINEER
Data engineer job in San Francisco, CA
Previous Pharmaceutical/Biotech experience is mandatory for this role.
MMR Consulting is an engineering and consulting firm specializing in the pharmaceutical and biotechnology industries. Its services include Engineering, Project Management, and other Consulting services.
MMR Consulting has offices in Canada, USA, and Australia.
This is an outstanding opportunity to join our growing team, where the successful candidate will work with a group of engineers and specialists involved in project management, commissioning and qualification, of equipment, systems and facilities. The work will require working out of the client's facilities in San Francisco Bay Area, California.
This is a Bioprocess C&Q Engineer role working on the commissioning, qualification, and startup of upstream and downstream bioprocess systems/equipment in the biopharmaceutical industry, as well as process equipment in the pharma/biotech industries.
Responsibilities
Provide technical guidance on the commissioning, qualification, and start-up of various equipment and facilities used in life science manufacturing, such as bioreactors, tanks, CIP, buffers, media, chromatography, TFF, washers & autoclaves, etc.
Lead the development of key qualification deliverables during the project lifecycle to ensure the project is well defined and the plan to test each system is applicable and relevant.
Lead qualification processes throughout the project lifecycle such as VPP, Risk Assessments, RTM, DQ, FAT, SAT, IQ, OQ and PQ as appropriate to ensure timely completion and to ensure all quality and engineering specifications are met.
Prepare protocols, execute protocols, summarize data, resolve deviations, prepare final reports.
Experience with C&Q of process equipment, utilities, facilities is an asset. Thermal Validation experience is an asset.
Coordinate meetings with cross-functional departments, to drive project progress, facilitate decisions, provide updates.
Engage other departments, as required, for design reviews and decisions.
Travel may be occasionally required for meetings with clients, equipment fabrication vendors or Factory Acceptance Testing (FATs).
Work may require occasional support over shutdowns or extended hours, specifically during installation and commissioning / validation phases.
Client management (maintaining key client relationships in support of business development and pursuit of new work), project scheduling/budgeting, coordination of client and MMR resources for effective project delivery, supporting business development (providing technical support to sales as required for proposals/opportunities), and presenting at industry conferences/publishing papers.
Visit construction and installation sites following all site safety requirements.
Other duties as assigned by client, and/or MMR, based on workload and project requirements.
Qualifications
6+ years of experience in commissioning, qualification, or validation of various systems within the pharmaceutical/biotech industry.
Engineering or Science degree, preferably in Mechanical, Electrical, Chemical, Biochemical, Electromechanical or a related discipline.
Excellent written and spoken English is required, including the preparation of technical documents in English.
Knowledge of requirements for cGMP operations, including SOPs, Change Controls, and Validation.
Experience with developing and executing validation projects. Familiarity with risk-based Commissioning & Qualification approaches, such as ASTM E2500 or ISPE ICQ, is considered an asset, but not required.
Experience with commissioning and qualification of biotech process equipment (upstream, downstream, or both), such as fermentation, bioreactors, or downstream purification processes (chromatography, TFF, UF), is required.
Experience with commissioning & qualification of process control systems (e.g., PCS, SCADA, historians) and building automation systems (e.g., Siemens Insight/Desigo, JCI Metasys) is considered an asset.
Experience with Qualification or Validation of clean utilities, ISO clean rooms, and Thermal Validation is considered an asset.
Experience with the preparation and execution of URSs, DQs, RTMs, Risk Assessments, CPPs, VPPs, FATs, SATs, IOQs, NCRs, and Final Reports.
Ability to lift 50 lbs.
Ability to handle multiple projects and work in a fast-paced environment.
Strong multi-tasking skills
Salary range: $80,000-$120,000, based on experience.
Equal Employment Opportunity and Reasonable Accommodations
MMR Consulting is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Our hiring decisions are based on merit, qualifications, and business needs. We are committed to working with and providing reasonable accommodations to individuals with disabilities globally. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the application or interview process, please let us know the nature of your request.
Software Engineer
Data engineer job in San Francisco, CA
As a software engineer at General Medicine, you'll help build and scale a healthcare store that makes it delightfully simple to shop for any type of care. We provide upfront cash and insurance prices for virtual and in-person visits, prescriptions, labs, imaging, and more.
What we're looking for
We're looking for strong engineers to help us build a seamless and beautiful consumer healthcare product. We're looking for folks who will obsess over every detail of our patient experience, and also tackle the complex operational challenges of delivering care at scale. We are looking for engineers who care deeply about technical excellence but are also comfortable moving quickly - we are constantly navigating tradeoffs between engineering velocity and quality.
Our ideal candidate is hungry, high-agency, and aspires to be a generalist. Our engineers frequently write product requirements documents, write SQL to understand how features are performing, and own QA - no task is beneath us or outside of the scope of the role if it helps us to deliver a great product. We're looking for someone who can operate in an environment of significant ambiguity, and who is comfortable working closely with design, operations, and clinical stakeholders.
We don't expect you to have a healthcare background (though it's great if you do!). However, you should be excited by the prospect of digging into the messy complexities of the American healthcare system (integrating with EHRs, revenue cycle management, etc).
Qualifications
2+ years of experience building web apps as a full-stack engineer
Experience with modern infra tooling and programming languages. We currently use AWS, Ruby on Rails, and NextJS, and would expect you to have proficiency in a modern tech stack even if it isn't the one we are using.
Please note that this role is based in either our SF office (near Market and Spear St) or our Boston office (Central Square, Cambridge). We expect our team to work from the office at least 3 days per week.
Why join us
We're an experienced team that has built a company in this space before, our product has clear product-market fit, and we've raised money from top investors.
We have an ambitious and distinctive vision for what can be built in consumer healthcare. We believe LLMs and price transparency legislation have opened up several massive opportunities.
If you're an ambitious and entrepreneurial software engineer and this resonates, please apply.
Lead AI Engineer / Head of R&D
Data engineer job in San Francisco, CA
Mission
To engineer the next generation of AI-Assisted Human Annotation Systems. Our goal is to scale the production of high-quality, personalized, and safety-aligned datasets.
Key Responsibilities
Participate in the customer solution-making process and provide guidance on data services;
Design and develop Agent-Assisted Annotation Workflows;
Build Automated Quality Evaluation Frameworks;
Build the Synthetic Data Generation (SDG) pipeline;
Integrate RAG and fact-checking.
Requirements
Tech Stack: Mastery of Python; deep experience with LangChain, LlamaIndex, or custom Agent frameworks.
LLM Engineering: Proven experience manipulating LLM APIs for complex tasks (chain-of-thought construction, few-shot prompting).
Data Operations: Familiarity with RLHF data formats (SFT/DPO/PPO) and data versioning tools.
Mindset: A "Data-Centric AI" philosophy; you understand that code is static, but data is dynamic, and you build tools to manage that complexity.
Software Engineer
Data engineer job in Hayward, CA
Mission and Impact:
VIVIO Health, a Public Benefit Corporation, is revolutionizing pharmacy benefits management through data and technology. Our foundational principle - "The Right Drug for the Right Person at the Right Price" - drives everything we do. Since 2016, our evidence-based approach has delivered superior health outcomes while reducing costs for self-insured employers and health plans. By ensuring each patient receives the most appropriate medication for their specific condition at a fair market price, we're replacing the obsolete PBM Model with innovative solutions that work better for everyone.
Why Join VIVIO?
Innovation: Challenge the status quo and shape healthcare's future
Impact: Directly influence patient care and help change healthcare delivery
Collaboration: Work with passionate teammates dedicated to making a difference
Culture: Enjoy autonomy and reliability in a micromanagement-free environment
Growth: Expand your opportunities as we expand our business
Job Description
Position Overview
We are seeking an exceptional developer with robust Python skills to join our team. You will play a crucial role in building complex business operations logic. You should have a proven track record of building high-quality software, solving complex problems, and thriving in collaborative environments. Experience in regulated cloud environments like HIPAA or PCI is a plus. We expect a self-motivated individual who thrives in a collaborative environment and shares our commitment to enhancing the cost and quality of healthcare. If you're ready to make an impact, we want to hear from you!
Location: Hayward, CA. This is a hybrid role with a minimum of 3 in-office days.
Technical Stack:
Languages: Python, PHP
Databases: MySQL
Infrastructure: AWS or other Cloud experience, CICD
Core Responsibilities:
Design and develop scalable services and core libraries.
Develop batch processing jobs for data imports, reporting, and external integrations.
Build and maintain transaction processing systems with complex business rules.
Integrate third-party APIs and normalize data across multiple healthcare providers.
Implement HIPAA-compliant data handling, logging, and audit systems
Write comprehensive tests with proper mocking and maintain CI/CD pipelines.
Foster best practices in a lean startup setting through code reviews.
Promote knowledge sharing to build a collaborative culture.
Optimize architectures and designs through deep understanding of business processes
Ensure operational excellence through monitoring, documentation, and deployment automation.
Qualifications
Required Qualifications:
5+ years of development experience with production systems
BS or advanced degree in an engineering discipline or equivalent experience
SQL database design and optimization
Test-driven development and mocking strategies
Experience with data processing
Preferred Qualifications:
REST API design and integration experience
FastAPI or similar framework experience
CRM customization experience
ETL pipelines and Batch processing systems experience
Job orchestration frameworks experience
File-based and distributed storage systems
Healthcare/pharmacy technology background
Strong understanding of building software in regulated environments & security standards such as PCI DSS, ISO 27001, HIPAA, and NIST.
Other expectations: Hybrid work arrangement with work from office 3 days a week.
Additional Information
Compensation and Benefits:
Base Salary: $120K-$140K/year
Bonus Eligible
Health benefits, including Medical, Pharmacy, Dental, Vision, and Life insurance
Stock Options
401K and company match
PTO
Opportunity to work for a growing and innovative company.
Dynamic and collaborative work environment.
The chance to make a real impact with a Public Benefit Corporation.
VIVIO Health is an Equal Opportunity Employer. All information will be kept confidential according to EEO guidelines.
Please be advised that job opportunities will only be extended after a candidate submits a completed job application and goes through our interview process, including 1:1 and/or group interviews via phone, video conferencing, and/or in-person. All legitimate correspondence from a VIVIO employee will come from our Smart Recruiter Applicant Tracking System "@smartrecruiter.com" or "@viviohealth.com" email accounts.
Full Stack Software Engineer (Python / React)
Data engineer job in San Francisco, CA
We're seeking a Full Stack Software Engineer with strong backend development skills in Python and frontend expertise in React.js. You'll help design, implement, and scale full stack web applications that are secure, performant, and user-centric.
Responsibilities
Architect, build, and maintain backend services using Python (FastAPI, Flask, Django)
Design and implement dynamic and responsive frontends using React.js and/or Vue.js
Create and consume RESTful and GraphQL APIs
Build reusable components and libraries for frontend use
Collaborate across teams to gather requirements, define solutions, and ensure quality
Optimize performance and scalability of applications
Write unit, integration, and end-to-end tests across the stack
Participate in peer code reviews and provide mentorship where appropriate
Required Qualifications
5+ years of experience in full stack development
M.S. degree in a relevant domain required
Proficiency with Python and one or more major web frameworks (e.g., FastAPI, Django)
Advanced skills in React.js, including Hooks, Context, and state management libraries (e.g., Redux, Zustand)
Experience with Vue.js or interest in working across multiple frontend frameworks
Familiarity with modern frontend tooling: Webpack, Vite, Babel, ESLint
Solid experience with HTML5, CSS3, SASS/SCSS, and responsive UI design
Strong understanding of RESTful services, API security, and performance optimization
Knowledge of relational databases (PostgreSQL, MySQL) and NoSQL options (MongoDB, Redis)
Git and CI/CD best practices (GitHub Actions, CircleCI, GitLab CI)
Strong communication skills and a collaborative approach to engineering
Preferred Qualifications
Familiarity with TypeScript
Experience with cloud platforms (AWS, GCP, or Azure)
Experience with Docker, Kubernetes, or container orchestration
GraphQL and Apollo Client experience
Familiarity with microservice architecture
Experience working with real-time data (WebSockets, MQTT)
Moveworks Conversational AI Engineer
Data engineer job in Santa Rosa, CA
Type: Contract
We are hiring a Moveworks Developer/Engineer for one of our premier consulting clients. This role focuses on building and optimizing AI-powered conversational agents that automate employee support across IT, HR, and business functions.
Responsibilities
• Design, develop, and enhance AI-driven bots using the Moveworks platform
• Configure intents, entities, and NLU models for high accuracy
• Build conversational flows to automate IT/HR/Finance use cases
• Integrate Moveworks with platforms like ServiceNow, Workday, SAP, and Jira
• Develop backend services using Python or Node.js
• Monitor bot performance and implement continuous improvements
• Collaborate with IT and business stakeholders on requirements
• Troubleshoot bot logic, integrations, and platform issues
Qualifications
• 1-3+ years of hands-on experience with Moveworks
• Strong expertise in NLP/NLU configuration
• Programming experience (Python/Node.js/TypeScript)
• REST API development & integration experience
• Exposure to ServiceNow, Workday, Jira, or SAP
• Familiarity with SQL/NoSQL databases
• Strong problem-solving and communication skills
• Experience in Agile/Scrum environments
ETL Architect + Talend
Data engineer job in San Francisco, CA
ETL Architect with Talend experience.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Staff Data Scientist - Sales Analytics
Data engineer job in San Francisco, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're looking for a Staff Data Scientist to drive Sales and Go-to-Market (GTM) analytics, applying advanced modeling and experimentation to accelerate revenue growth and optimize the full sales funnel.
About the Role
As the senior data scientist supporting Sales and GTM, you will combine statistical modeling, experimentation, and advanced analytics to inform strategy and guide decision-making across our revenue organization. Your work will help leadership understand pipeline health, predict outcomes, and identify the levers that unlock sustainable growth.
Key Responsibilities
Model the Business: Build forecasting and propensity models for pipeline generation, conversion rates, and revenue projections.
Optimize the Sales Funnel: Analyze lead scoring, opportunity progression, and deal velocity to recommend improvements in acquisition, qualification, and close rates.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of pricing, incentives, and campaign initiatives.
Advanced Analytics for GTM: Apply machine learning and statistical techniques to segment accounts, predict churn/expansion, and identify high-value prospects.
Cross-Functional Partnership: Work closely with Sales, Marketing, RevOps, and Product to influence GTM strategy and ensure data-driven decisions.
Data Infrastructure Collaboration: Partner with Analytics Engineering to define data requirements, ensure data quality, and enable self-serve reporting.
Strategic Insights: Present findings to executive leadership, translating complex analyses into actionable recommendations.
About You
Experience: 6+ years in data science or advanced analytics roles, with significant time spent in B2B SaaS or developer tools environments.
Technical Depth: Expert in SQL and proficient in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Strong understanding of sales analytics, revenue operations, and product-led growth (PLG) motions.
Analytical Rigor: Skilled in experimentation design, causal inference, and building predictive models that influence GTM strategy.
Communication: Exceptional ability to tell a clear story with data and influence senior stakeholders across technical and business teams.
Business Impact: Proven record of driving measurable improvements in pipeline efficiency, conversion rates, or revenue outcomes.
Senior Software Engineer
Data engineer job in San Francisco, CA
What we do
Idler builds reinforcement learning environments that teach AI models to code like 0.01% engineers. Our training environments are based on real-world coding scenarios that frontier models will actually encounter.
We've closed a multimillion-dollar contract with a leading foundation lab (the largest they've issued to date).
Demand is outpacing our capacity to deliver, so we're scaling the team fast.
What you'll do
Build agentic systems that create and QA coding environments at scale.
Most of your day will be spent designing these systems to be extremely sound.
A big part of our work is thinking critically about what makes a coding environment and task "good" and "fair".
This requires high agency and philosophical thinking alongside technical execution.
Concretely, you'll:
Design and build scalable systems that generate RL environments.
Create automated QA systems to validate environment quality and fairness.
Work directly with AI researchers at leading labs to understand what makes training data effective.
Support new product lines as we expand beyond coding environments.
You'll work with
The founding team, a founding engineer, and a small group of engineers (we're hiring quickly).
You'll have direct access to AI researchers at frontier labs.
Tech stack
TypeScript, React, NodeJS, Postgres, Redis, Vercel, Cursor
Benefits
Healthcare coverage, 401(k), and 15 days PTO.
Meals, coffee, and snacks (that you will actually enjoy) covered during working days.
Latest MacBook Pro and equipment.
Relocation assistance available.
Team offsites and events (we love hanging out).
This is an in-person role in San Francisco.
We're a tight-knit founding team and we play to win.
Join us if you like to win too.
Software Engineer
Data engineer job in Santa Rosa, CA
🚀 Software Engineer (Founding Team)
About Us
We help brands unlock hidden value in inventory by turning “unsellable” products into profitable, sustainable channels. Our platform powers data-driven refurbishment, resale, and marketplace automation - giving every garment a chance at a second life.
We're an early growth-stage startup (recent Series A) on a fast trajectory with a clear mission: build the platform layer for reverse logistics and recommerce - the systems that connect factories, refurbishers, 3PLs, and marketplace channels.
The Role: Build from 0 → 1
We're hiring a Software Engineer with 2-4 years of experience who wants to join our founding engineering team. This is a high-learning, high-impact role where you'll ship real product features quickly and get deep mentorship from senior engineers.
You'll:
Build core systems and MVPs from scratch and iterate rapidly.
Own end-to-end features (APIs, automation, dashboards) that move inventory and unlock revenue.
Collaborate tightly with product, ops, and data teams to solve ambiguous, high-leverage problems.
Learn architecture, testing, and deployment practices under strong engineering coaching.
Help shape engineering culture, code standards, and system ownership as we scale.
What You'll Be Doing
Contribute to backend and platform work using modern web frameworks and cloud infrastructure.
Build API integrations that link operational partners and internal workflows.
Ship features from prototype → production, iterate based on real usage.
Balance rapid MVP shipping with durable, scalable design.
Participate in code reviews, design discussions, and team mentoring.
What We're Looking For
2-4 years software engineering experience (startup or product company preferred).
Strong backend fundamentals (Rails, Python, Node, or similar).
Comfortable working across the stack and learning new tech quickly.
Excited by messy, real-world problems and shipping fast.
Curious, coachable, and eager to grow into a technical leader.
Interest in sustainability, commerce, or AI-driven productivity is a plus.
Why Join
Ship production code from day one and see direct impact.
Hands-on mentorship from experienced engineering leaders.
Rapid career growth - early ICs will scale into lead roles as we grow.
Mission-driven product with real environmental and economic impact.
Competitive compensation + meaningful early equity; hybrid Bay Area setup.