Quality assurance tester jobs in San Mateo, CA - 1,377 jobs
Quality Assurance Tester
Quality Assurance Engineer
Performance Test Lead
Quality Assurance Team Leader
Test Engineer
Senior Performance Engineer
Fiberglass Product Tester
Software Quality Engineer
Functional Tester
Sr. ML Kernel Performance Engineer, AWS Neuron, Annapurna Labs
Annapurna Labs (U.S.) Inc. 4.6
Quality assurance tester job in Cupertino, CA
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The Acceleration Kernel Library team is at the forefront of maximizing performance for AWS's custom ML accelerators. Working at the hardware-software boundary, our engineers craft high-performance kernels for ML functions, ensuring every FLOP counts in delivering optimal performance for our customers' demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what's possible in AI acceleration.
The AWS Neuron SDK, developed by the Annapurna Labs team at AWS, is the backbone for accelerating deep learning and GenAI workloads on Amazon's Inferentia and Trainium ML accelerators. This comprehensive toolkit includes an ML compiler, runtime, and application framework that seamlessly integrates with popular ML frameworks like PyTorch, enabling unparalleled ML inference and training performance.
As part of the broader Neuron Compiler organization, our team works across multiple technology layers - from frameworks and compilers to runtime and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance. This role offers a unique opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures, where you'll help shape the future of AI acceleration technology.
This is an opportunity to work on cutting-edge products at the intersection of machine learning, high-performance computing, and distributed architectures. You will architect and implement business-critical features, publish cutting-edge research, and mentor a brilliant team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We're inventing. We're experimenting. It is a unique learning culture. The team works closely with customers on their model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.
Explore the product and our history!
Key job responsibilities
Our kernel engineers collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. Working at the intersection of software, hardware, and machine learning systems, you'll bring expertise in low-level optimization, system architecture, and ML model acceleration. In this role, you will:
* Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models
* Analyze and optimize kernel-level performance across multiple generations of Neuron hardware
* Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks
* Implement compiler optimizations such as fusion, sharding, tiling, and scheduling
* Work directly with customers to enable and optimize their ML models on AWS accelerators
* Collaborate across teams to develop innovative kernel optimization techniques
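The tiling optimization named above can be illustrated with a generic sketch. This is not Neuron- or NKI-specific code, just a plain NumPy example of the idea: splitting a matrix multiplication into tiles so that each working set fits in fast on-chip memory before results are accumulated.

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Compute A @ B one tile at a time.

    Each (tile x tile) block of C is accumulated from small slices of A and B,
    keeping the working set small -- the same idea kernel engineers use to fit
    operands into on-chip buffers. Slicing clips at the edges, so dimensions
    need not be multiples of the tile size.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):
                C[i:i + tile, j:j + tile] += A[i:i + tile, k:k + tile] @ B[k:k + tile, j:j + tile]
    return C
```

On real accelerators the tile size is chosen to match the hardware's buffer capacity, and the loop order determines how often operands are re-fetched; the arithmetic result is identical to the untiled product.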
About the team
#1. Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
#2. Why AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
#3. Inclusive Team Culture
Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.
#4. Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
#5. Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
BASIC QUALIFICATIONS
- 5+ years of non-internship professional software development experience
- 5+ years of experience programming with at least one software programming language
- 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Experience as a mentor, tech lead, or leading an engineering team
PREFERRED QUALIFICATIONS
- Bachelor's degree in computer science or equivalent
- 6+ years of full software development experience
- Expertise in accelerator architectures for ML or HPC such as GPUs, CPUs, FPGAs, or custom architectures
- Experience with GPU kernel optimization and GPGPU computing such as CUDA, NKI, Triton, OpenCL, SYCL, or ROCm
- Demonstrated experience with NVIDIA PTX and/or AMD GPU ISA
- Experience developing high performance libraries for HPC applications
- Proficiency in low-level performance optimization for GPUs
- Experience with LLVM/MLIR backend development for GPUs
- Knowledge of ML frameworks (PyTorch, TensorFlow) and their GPU backends
- Experience with parallel programming and optimization techniques
- Understanding of GPU memory hierarchies and optimization strategies
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company's reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $151,300/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
$151.3k-261.5k yearly 5d ago
Siri Speech QA Engineer: Automation & Quality
Apple Inc. 4.8
Quality assurance tester job in San Francisco, CA
A leading tech company is seeking a Tools and Automation Engineer for the Siri Speech Quality Engineering team in San Francisco. The role involves developing test plans, identifying critical testing areas, and ensuring user-facing features meet high-quality standards. Candidates should have strong coding skills in C/C++, Objective C/Swift, and Python, along with experience in audio testing. The position offers competitive compensation and benefits that include comprehensive medical coverage and stock options.
$134k-172k yearly est. 1d ago
Quality Assurance Engineer
Alibaba Cloud
Quality assurance tester job in Sunnyvale, CA
We, the Alibaba Overseas Engineering & TPM team, are seeking a highly skilled and experienced Construction Quality Assurance Expert/On-site Testing & Commissioning Supervisor to join our dynamic and innovative team.
Our team is dedicated to the design, construction, testing & commissioning, and optimization of public cloud infrastructure and facilities. This multidisciplinary group combines expertise in electrical, mechanical, and civil engineering, construction progress management, and construction quality management to ensure delivery of high-performance environments that support critical IT equipment needs.
In this role, you will be responsible for ensuring the successful testing and commissioning of our electrical and mechanical facilities, with a focus on spending at least 30% of your working time on construction sites. You will be accountable for the following key responsibilities:
1. Site Supervision and Coordination
2. Facility Testing and Commissioning
3. Documentation and Reporting
4. Compliance and Quality Assurance
5. Escalation and Stakeholder Engagement
Minimum qualifications:
- A minimum of 5 years of proven experience in facility testing and commissioning, with a strong track record of successful construction project delivery.
- Excellent communication and stakeholder management skills, with the ability to present technical information to both technical and non-technical audiences.
- Proficiency in developing and executing comprehensive testing and commissioning plans, as well as interpreting and documenting test results.
- Bachelor's degree in Engineering (Electrical, Mechanical or a related field)
Preferred qualifications:
- Extensive knowledge of electrical and mechanical infrastructure, including but not limited to power, cooling, ventilation, fire-fighting, plumbing, drainage and monitoring.
- Excellent problem-solving and analytical skills, with the ability to identify and resolve complex technical issues.
- Strong project management and coordination skills, with the ability to work effectively with cross-functional teams.
- Master's degree in Engineering (Electrical, Mechanical or a related field)
- A Professional Engineer (PE) license is preferred.
The pay range for this position at commencement of employment is expected to be between $133,200 and $219,600/year. However, base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience.
If hired, employee will be in an “at-will position” and the Company reserves the right to modify base salary (as well as any other discretionary payment or compensation program) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors.
$133.2k-219.6k yearly 2d ago
Deliverability Strategy Lead - People & Performance
Klaviyo Inc. 4.2
Quality assurance tester job in San Francisco, CA
A leading technology company seeks a Manager, Deliverability Strategy to ensure operational excellence and individual contributor development. The role focuses on people leadership, performance management, and effective service delivery. Candidates should have strong email deliverability knowledge, customer service orientation, and proven experience in developing teams. The position offers a competitive salary between $144,000 and $216,000 in the United States, emphasizing a collaborative work environment and employee development.
$144k-216k yearly 4d ago
Salesforce QA Lead - Testing & Automation Leader
N28Tech
Quality assurance tester job in San Francisco, CA
A leading Salesforce Implementation Partner in San Francisco seeks a Salesforce QA Lead to execute functional and non-functional testing on the Salesforce platform. Candidates should have at least 4 years of QA experience, preferably with Sales Cloud, Service Cloud, and CPQ. This role involves writing test cases, performing various tests, and managing a small team. An ideal candidate is a self-starter with strong communication skills and eager to grow within the organization.
$110k-154k yearly est. 4d ago
AI Product QA Lead - Impactful Testing & Equity
Pantera Capital
Quality assurance tester job in San Francisco, CA
A forward-thinking technology firm is seeking a Quality Assurance Tester to revolutionize product quality. The ideal candidate will have experience in bug detection and feature testing, with strong attention to detail and communication skills. This role offers a competitive salary ranging from $90,000 to $130,000 and includes equity as part of the compensation package, alongside comprehensive health benefits.
$90k-130k yearly 3d ago
QA Engineer: Elevate Web & Mobile Quality
Air Apps, Inc.
Quality assurance tester job in San Francisco, CA
A tech-driven company in San Francisco is looking for a QA Engineer to ensure the quality and usability of applications. This role requires 3+ years in manual and functional testing with strong collaboration skills. You will conduct various testing methods and document findings to improve product functionality. The position offers competitive benefits including medical insurance and an annual bonus.
$89k-125k yearly est. 3d ago
Robotics QA & Deployment Engineer
Menlo Ventures
Quality assurance tester job in San Francisco, CA
A leading robotics company is seeking a Robotics Engineer specializing in Deployment & Testing. The role involves creating automated software tests and ensuring the quality of high-level software and products. Candidates should have a degree in Computer Science or related fields, proficiency in low-level systems languages, and experience in developing and deploying software on robots. The position offers a competitive salary between $100,000 and $300,000 USD.
$89k-125k yearly est. 5d ago
QA Engineer, Scientific Workflows
Mithrl Inc.
Quality assurance tester job in San Francisco, CA
ABOUT MITHRL
We imagine a world where new medicines reach patients in months, not years, and where scientific breakthroughs happen at the speed of thought.
Mithrl is building the world's first commercially available AI Co-Scientist. It is a discovery engine that transforms messy biological data into insights in minutes. Scientists ask questions in natural language, and Mithrl responds with real analysis, novel targets, hypotheses, and patent-ready reports.
Our traction speaks for itself:
12X year-over-year revenue growth
Trusted by leading biotechs and big pharma across three continents
Driving real breakthroughs from target discovery to patient outcomes.
ABOUT THE ROLE
We are hiring a QA Engineer, Scientific Software to build the test, validation, and monitoring infrastructure that guarantees the correctness and reliability of the Mithrl AI Co-Scientist. This role requires a PhD-level scientist or computational biologist who understands the drug development lifecycle and who has hands‑on experience with omics data. Without this scientific foundation, it is not possible to evaluate whether Mithrl's outputs are biologically meaningful.
You will create automated tests for analysis workflows, ingestion pipelines, and discovery applications. You will build CI systems that catch regressions early, set up monitoring and alerting for system behavior, and ensure that every module in Mithrl produces scientifically valid and reproducible outputs. This role bridges scientific understanding with software quality engineering and is critical for maintaining trust in Mithrl's analysis engine.
If you are a scientist with a passion for product reliability, reproducibility, and validation of ML powered scientific tools, this is a uniquely impactful position.
WHAT YOU WILL DO
Build automated test infrastructure for data ingestion, analysis modules, discovery applications, and new product features
Develop scientific validation frameworks that check correctness and reproducibility of ML driven biological analyses
Establish CI workflows that run end to end tests on every commit and catch scientific and computational regressions
Build monitoring and alerting systems that track the health, performance, and scientific integrity of product modules
Design automated checks for omics workflows
Validate that responses generated by Mithrl align with biological logic and expectations from discovery and preclinical development
Work closely with ML engineers, data engineers, and application scientists to ensure scientific accuracy across releases
Maintain documentation, data fixtures, and gold standard datasets for regression testing
Support the development of QA processes that scale with rapid product growth and new analysis capabilities
Build a culture of scientific correctness and software reliability throughout the engineering and product teams
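The gold-standard regression testing described above can be sketched in a few lines. This is an illustrative example, not Mithrl's actual framework: the gene names and the idea of comparing per-gene log2 fold changes against a frozen "golden" run are assumptions chosen to show the pattern.

```python
import math

def validate_against_golden(results, golden, rel_tol=1e-6):
    """Compare per-gene statistics (e.g. log2 fold changes) against a frozen golden run.

    Returns which genes are missing from the new results, which have drifted
    beyond tolerance, and an overall pass flag -- the kind of check a CI job
    would run after every commit to catch scientific regressions.
    """
    missing = sorted(set(golden) - set(results))
    drifted = sorted(
        gene for gene in golden
        if gene in results
        and not math.isclose(results[gene], golden[gene], rel_tol=rel_tol, abs_tol=1e-9)
    )
    return {"missing": missing, "drifted": drifted, "passed": not missing and not drifted}
```

In practice the golden dataset would be versioned alongside the pipeline, and the tolerance chosen per analysis type, since some stochastic methods need looser bounds than deterministic ones.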
WHAT YOU BRING
Required Qualifications
PhD in biology, computational biology, bioinformatics, systems biology, or a related discovery field
Deep understanding of the drug discovery and preclinical development lifecycle
Hands‑on experience working with omics data such as transcriptomics, RNA‑seq, proteomics, ATAC‑seq, single cell datasets, or imaging‑derived features
Ability to evaluate whether a result, analysis, or insight is scientifically correct based on domain knowledge
Familiarity with common discovery analyses such as differential expression, enrichment, pathway reasoning, target scoring, and feature importance
Experience with Python or similar languages and comfort with scientific computing workflows
Strong interest in software quality, reproducibility, and validation of ML driven scientific systems
Excellent communication skills and ability to partner with engineers and scientists
Nice to Have
Experience building automated tests or QA frameworks for scientific or ML systems
Experience with workflow engines, scientific pipelines, or reproducibility tools
Familiarity with CI tools and modern software development practices
Experience validating outputs of AI powered analysis tools
Previous work in a tech bio company or computational platform environment
WHAT YOU WILL LOVE AT MITHRL
High ownership: You will be the guardian of scientific correctness and reliability inside the AI Co-Scientist
Impact: You will work with cutting-edge ML, multimodal data, and real discovery workflows
Team: Join a tight-knit, talent-dense team of engineers, scientists, and builders
Culture: We value consistency, clarity, and hard work. We solve hard problems through focused daily execution
Speed: We ship fast (2x/week) and improve continuously based on real user feedback
Location: Beautiful SF office with a high-energy, in-person culture
Benefits: Comprehensive PPO health coverage through Anthem (medical, dental, and vision) + 401(k) with top-tier plans
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
$89k-125k yearly est. 5d ago
Staff QA Engineer
Hamilton Ai Inc.
Quality assurance tester job in San Francisco, CA
Hamilton AI is creating the operating system for business aviation. Unlike commercial aviation, business aviation lives in the dark ages: think emails, phone calls, and spreadsheets running most of the operations of operators with anywhere from 2 to 1,000 aircraft.
Hamilton is here to change that. We offer a fully vertically integrated platform that runs business operations for our customers. Our AI agents are the first point of contact for quote requests, and we run these quote requests from A to Z: quoting, flight scheduling and optimization, route planning, aircraft tracking, payment processing, and back-office invoicing and payroll.
Team
Hamilton AI was founded by Wouter Witvoet, who previously founded Secfi.com ($1.5bn AUM) and went on to take his second business, DeFi Technologies, public on the NASDAQ under DEFT with a market cap of $1.2bn. Other team members are ex-Dropbox, Square, Airlabs, PagerDuty, Playbook, Walt Disney, Getaround, and NASA JPL.
Where we're at
From January to March 2025, we took part in the HF0 Residency, one of the world's most exclusive programs for repeat founders, backed by Google, Andreessen Horowitz, and visionary investors like Naval Ravikant. In just 90 days, we made a year's worth of progress, both in product development and real-world results.
Concretely:
We went from $150K to $1.5MM in ARR with both small and large enterprises.
Raised $7.2M across two rounds, most recently in February 2025
Funded by leading VCs, including Bling Capital, TTV Capital, Cambrian VC, FJ Labs, Correlation VC, HF0, Mintaka Ventures, and Weekend Fund
Where will we be in three months? That's up to us. At Hamilton, you won't just watch the future unfold; you'll shape it.
Role
Hamilton is building the operating system for charter aviation - software that manages quoting, trip planning, live operations, payments, safety-critical data, and sensitive customer information. Reliability is foundational. As our first QA hire, you will own the quality platform that ensures Hamilton can scale without compromising trust.
This is an engineering role, not a manual QA position. You will design and implement the automation frameworks, validation systems, and runtime quality signals that become core infrastructure for our entire engineering organization. Your work defines how code is validated, how regressions are detected, and how confidently we ship into production.
You will shape Hamilton's CI/CD pipeline, build the guardrails other engineers rely on, and architect systems that continuously assess product reliability. In an environment driven by AI-accelerated development, your platform becomes the backbone for safe, fast iteration.
What You'll Do
Architect Hamilton's quality platform - automation frameworks, service-level validation systems, testing infrastructure, and developer tooling.
Build end-to-end automated testing systems (unit, integration, API, contract, and E2E) designed for scale and engineer adoption.
Integrate deep quality gates into CI/CD: runtime checks, automated regression detection, stability thresholds, and release blocking criteria.
Develop service-level quality signals - data integrity checks, operational safety validations, and real-time anomaly detection.
Create standardized patterns and libraries that make writing high-quality, testable code the default for every engineer.
Build observability into quality: dashboards, failure signatures, flakiness analysis, historical reliability metrics.
Partner with infra/platform engineers to ensure test environments are hermetic, reproducible, and production-like.
Perform targeted exploratory/manual validation where automation can't yet reach, feeding insights back into the automation platform.
Lead root-cause analysis on critical failures and implement systemic fixes that eliminate entire classes of bugs.
Establish Hamilton's long-term quality strategy - principles, tooling roadmap, and engineering adoption guidelines.
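The flakiness analysis and release-blocking criteria in the list above can be sketched simply. This is a hypothetical illustration, not Hamilton's tooling: test names and the single-threshold gate are assumptions; a real pipeline would weight recency, quarantine known-flaky tests, and feed this into CI dashboards.

```python
from collections import defaultdict

def flaky_tests(history):
    """A test is flaky if it both passed and failed across the recent run history.

    `history` is a list of runs, each a dict mapping test name -> pass/fail.
    """
    outcomes = defaultdict(set)
    for run in history:
        for name, passed in run.items():
            outcomes[name].add(passed)
    return sorted(name for name, seen in outcomes.items() if seen == {True, False})

def release_blocked(history, max_flaky=0):
    """Block the release if the latest run has any failure, or flakiness exceeds the budget."""
    latest = history[-1]
    return not all(latest.values()) or len(flaky_tests(history)) > max_flaky
```

A gate like this makes "can we ship?" a mechanical question answered on every commit, which is the point of wiring regression detection and stability thresholds directly into CI/CD.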
What You Bring
7-12+ years of QA engineering experience with a strong focus on platform-level automation, tooling, and infrastructure.
Expertise in modern automation frameworks (Playwright, Cypress, Jest, contract testing frameworks, API testing tools).
Strong foundation in backend/API testing, distributed system validation, and data consistency testing.
Deep experience integrating automated testing into CI/CD pipelines with clear release-blocking rules.
Ability to design scalable frameworks engineers want to adopt - intuitive interfaces, fast feedback cycles, minimal friction.
Strong problem-solving skills with a focus on systemic, architecture-level solutions rather than patchwork fixes.
High ownership mindset; comfortable defining and driving a company-wide quality strategy.
Strong communication and collaboration skills across engineering, infra, product, and leadership.
Experience with aviation, payments, logistics, security/PII compliance, or AI-assisted testing platforms.
Why Hamilton AI?
You own the quality platform - the systems you build determine how Hamilton ships, scales, and maintains trust.
Your infrastructure protects operators working with safety-critical data, payment flows, and sensitive PII.
You will define the quality culture for the entire engineering org - from CI/CD strategy to runtime validation.
Our AI-accelerated development loop gives you leverage to automate aggressively and push quality upstream.
You're joining a small, high-performing team where your architectural decisions become foundational.
Ready to quote smarter and boost revenues?
The world's most intelligent private aviation platform
$89k-125k yearly est. 5d ago
Web3 Manual QA Engineer - Wallets & Fintech Apps
Dynamic.Xyz
Quality assurance tester job in San Francisco, CA
A leading tech company is seeking a Web3 Manual QA Engineer based in New York, the Bay Area, or Miami. In this remote-first role, you will ensure the quality of web and mobile applications, collaborating with engineers and product managers. The ideal candidate has over three years in manual QA and experience testing mobile applications and Web3 platforms. Excellent communication and a detail-oriented mindset are crucial for success.
$89k-125k yearly est. 1d ago
Marketplace Growth & Performance Lead
Scoutbee
Quality assurance tester job in San Francisco, CA
A leading procurement platform is seeking a Head/Director of Marketplace Performance to drive engagement within the B2B marketplace. This highly strategic role involves developing frameworks to enhance buyer-supplier interactions and improve marketplace health. The ideal candidate has over 8 years in relevant fields with strong analytical and leadership skills. This position offers competitive compensation, a flexible remote culture, and an opportunity to shape the future of procurement.
$104k-151k yearly est. 5d ago
Marketplace Growth & Performance Lead
Scoutbee GmbH
Quality assurance tester job in San Francisco, CA
A leading procurement technology company in San Francisco is seeking a Head/Director of Marketplace Performance to enhance engagement in their marketplace. This role involves cross-functional leadership, KPI management, and strategic development to improve buyer and supplier experiences. Ideal candidates will have over 8 years of relevant experience and strong analytical skills. Competitive compensation and a remote-flexible culture are offered.
$104k-151k yearly est. 3d ago
Simulation and Test Engineer (Conversational AI) - US Based
Andromeda Robotics
Quality assurance tester job in San Francisco, CA
About Us
Andromeda Robotics is an ambitious social robotics company with offices in Melbourne and San Francisco, dedicated to creating robots that seamlessly and intelligently interact with the human world. Our first robot, Abi, is a testament to this vision: a custom-built platform designed from the ground up to be a helpful aid and intuitive partner in aged care homes. We are a passionate, collaborative team of engineers who solve some of the most challenging problems in AI and robotics. To accelerate our development and ensure Abi's reliability, we are seeking a foundational member to build out our capabilities to train and test our robot in simulation.
Our Values
Deeply empathetic - Kindness and compassion are at the heart of everything we do.
Purposely playful - Play sharpens focus. It keeps us curious, fast and obsessed with the craft.
Relentlessly striving - With relentless ambition, an action bias and constant curiosity, we don't settle.
Strong when it counts - Tenacious under pressure, we expect problems and stay in motion to adapt and progress.
United in action - Different minds. Shared mission. No passengers.
The Role
We are looking for a creative and driven Simulation and Test Engineer to build Andromeda's testing infrastructure for our conversational AI systems and embodied character behaviours. Your immediate focus will be creating robust test systems for Abi's voice-to-voice chatbot, social awareness perception, and gesture motor control. As this infrastructure matures, you'll extend it into simulation environments for generating synthetic training data for character animation and gesture models.
The Team
You'll work at the intersection of our character software, robotics, perception, conversational AI, controls, and audio engineering teams. We bring deep expertise from autonomous vehicles and robotics, including simulation backgrounds. You'll collaborate with product owners and technical specialists to define requirements, integrate systems, and ensure quality across our AI/ML stack.
Phase 1: Build The Test Foundation
Define and stand up synthetic test environments for our AI/ML conversational stack
Conversational AI testing: voice-to-voice chat quality, response appropriateness, tool calling accuracy
Memory system testing: context retention, recall accuracy, conversation coherence
Audio modelling and testing: multi-speaker scenarios, room acoustics, voice activity detection
Perception system testing: social awareness (face detection, gaze tracking, person tracking)
Gesture appropriateness testing: working with our Controls/ML team, create test infrastructure to validate the appropriateness of Abi's body gestures
CI/CD and automated regression testing for all AI/ML subsystems
Custodian of quality metrics: if they don't exist, work with stakeholders to elicit use cases, derive requirements, and establish measurable quality metrics
Requirements formalisation: you're skilled at gathering, documenting, and tracing requirements back to test cases
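The automated regression testing described in the list above might look like the following minimal pytest-style sketch. `ChatPipeline` and its canned replies are hypothetical stand-ins for illustration, not Abi's actual stack:

```python
# Minimal sketch of an automated regression test for a conversational stack.
# ChatPipeline is a hypothetical stand-in, NOT Abi's real API; in practice it
# would wrap the system under test. Runs under pytest via test-function discovery.

class ChatPipeline:
    """Illustrative placeholder for the voice-to-voice system under test."""
    def respond(self, utterance: str) -> str:
        canned = {"hello": "Hi there! How can I help you today?"}
        return canned.get(utterance.strip().lower(), "Could you say that again?")

def test_greeting_gets_a_helpful_reply():
    reply = ChatPipeline().respond("Hello")
    assert reply, "the system must never go silent"
    assert "help" in reply.lower()

def test_unrecognised_input_degrades_gracefully():
    # Regression guard: nonsense input should yield the fallback, not a crash.
    assert ChatPipeline().respond("xyzzy") == "Could you say that again?"
```

In CI, a job would run `pytest` on every commit and fail the build on any regression.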
Phase 2: Scale To ML Training Infrastructure
Our approach to gesture generation requires high-fidelity synthetic interaction data at scale. You'll investigate and build the infrastructure to generate this data, working closely with our character software team to define requirements and validate approaches.
Extend test environments into training data generation pipelines
Investigate and stand up simulation tools (e.g. Unity, Unreal Engine, Isaac Sim) to support our machine learning pipeline with synthetic data and validation infrastructure
Build infrastructure for fine-tuning character animation models on simulated multi-actor scenarios
Enable ML-generated gesture development to augment hand-crafted animation workflows
Create virtual environments with diverse social interaction scenarios for training and evaluation
Success In This Role Looks Like
In months 1-3, stabilise our conversational system with automated regression tests and measurable quality benchmarks. By month 6, deliver an integrated simulation environment enabling rapid testing and iteration across our AI/ML stack.
You'll design tests that push our systems beyond their limits and find what's brittle. Through trade studies and make-vs-buy decisions, you'll establish the infrastructure, set up automatic regression tests, and trace test cases back to high-level requirements. You'll be the final guardian, verifying that our AI and machine learning systems work as intended before integration with Abi's physical platform. Your work will directly impact the speed and quality of our development, ensuring that every software build is robust, reliable, and safe.
Key Responsibilities
Architect and Build: Design, develop, and maintain scalable test infrastructure for conversational AI, perception, and gesture control systems
Own Testing Pipeline: Develop a robust CI/CD pipeline for automated regression testing, enabling rapid iteration and guaranteeing quality before deployment
Test Scenarios: Create diverse audio environments, multi-actor social scenarios, and edge cases to rigorously test Abi's conversational and social capabilities
Model with Fidelity: Implement accurate models of Abi's hardware stack (cameras, microphone array, upper body motion) as needed for test and simulation scenarios
Enable Future ML Training: Design test infrastructure with an eye towards evolution into a simulation platform for generating synthetic training data for character animation and gesture models
Integrate and Collaborate: Work closely with the robotics, AI, and software teams to seamlessly integrate their stacks into the test infrastructure and define testing requirements
Analyse and Improve: Develop metrics, tools, and dashboards to analyse test data, identify bugs, track performance, and provide actionable feedback to the engineering teams
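As one illustration of the "Analyse and Improve" responsibility, per-subsystem pass rates are a simple quality metric a dashboard might surface. The result format and subsystem names below are invented for the sketch:

```python
# Aggregate regression-test results into per-subsystem pass rates.
# The (subsystem, passed) result format and subsystem names are illustrative.
from collections import defaultdict

def pass_rates(results):
    """results: iterable of (subsystem, passed) pairs -> {subsystem: pass rate}."""
    tally = defaultdict(lambda: [0, 0])  # subsystem -> [passed, total]
    for subsystem, passed in results:
        tally[subsystem][1] += 1
        tally[subsystem][0] += int(passed)
    return {s: p / n for s, (p, n) in tally.items()}

runs = [("perception", True), ("perception", True), ("perception", False),
        ("chat", True), ("chat", True)]
print(pass_rates(runs))  # perception passes 2 of 3 runs, chat 2 of 2
```

Tracking such rates per build makes regressions visible as a drop in the curve rather than an anecdote.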
Ideally You Have
Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
5+ years of professional experience testing complex AI/ML systems (conversational AI, perception systems, or embodied AI)
Strong programming proficiency in Python (essential); C++ experience valuable
Hands‑on experience with LLM testing, voice AI systems, or chatbot evaluation frameworks
Understanding of audio processing, speech recognition, and/or computer vision fundamentals
Experience with testing frameworks and CI/CD tools (pytest, Jenkins, GitHub Actions, etc.)
Familiarity with ML evaluation metrics and experimental design
A proactive, first‑principles thinker who is excited by the prospect of owning a critical system at an early‑stage startup
Bonus Points
Experience with simulation platforms (e.g. Unity, Unreal Engine, NVIDIA Isaac Sim, Gazebo) and physics engines
Experience with character animation systems, motion capture data, or gesture generation
Knowledge of reinforcement learning, imitation learning, or synthetic data generation for training ML models
Experience with 3D modelling tools and game engine content creation
Understanding of ROS2 for robotics integration
Knowledge of sensor modelling techniques for cameras and audio
Experience building and managing large-scale, cloud-based simulation infrastructure
PhD in a relevant field
The expected base salary range for this role, when performed in our San Francisco office, is $150,000 - $250,000 USD, depending on factors such as job-related knowledge, skills, and experience. The total compensation package may also include additional benefits or components based on the specific role. Details will be provided if an employment offer is made.
If you're excited about this role but don't meet every requirement, that's okay; we encourage you to apply. At Andromeda Robotics, we celebrate diversity and are committed to creating an inclusive environment for all employees. Let's build the future together.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
$150k-250k yearly 1d ago
Simulation & Test Engineer for Conversational AI & Robotics
Dromeda
Quality assurance tester job in San Francisco, CA
A forward-thinking robotics company based in San Francisco is seeking a Simulation and Test Engineer to develop and maintain scalable testing infrastructure for their conversational AI systems. The ideal candidate will have a strong background in AI/ML testing, proficiency in Python, and experience with simulation platforms. The role offers a competitive salary range of $150,000 - $250,000 USD based on skills and experience, in a dynamic environment focused on innovation and diversity.
$150k-250k yearly 5d ago
Software Engineer, Search Quality and Ranking
Monograph
Quality assurance tester job in San Francisco, CA
Employment Type
Full time
Department
Engineering
About Us:
We're on a mission to make it possible for every person, team, and company to be able to tailor their software to solve any problem and take on any challenge. Computers may be our most powerful tools, but most of us can't build or modify the software we use on them every day. At Notion, we want to change this with focus, design, and craft.
We've been working on this together since 2016, and have customers like OpenAI, Toyota, Figma, Ramp, and thousands more on this journey with us. Today, we're growing fast and excited for new teammates to join us who are the best at what they do. We're passionate about building a company as diverse and creative as the millions of people Notion reaches worldwide.
Notion is an in-person company, and currently requires its employees to come to the office for two Anchor Days (Mondays & Thursdays) and requests that employees spend the majority of their time in the office (including a third day).
About The Role:
We are looking for an ML Engineer to join our small but nimble AI team whose mission is to make Notion an ML-powered product. As an ML Engineer, you will work on incorporating large language models (LLMs), embeddings, and other AI technologies into Notion's product in a high quality way. You'll be exploring the boundaries of what's possible with ML technology and finding innovative ways to apply new industry learnings to Notion's offering.
What You'll Achieve:
Work with the team to prototype and experiment with AI model quality improvements, either by fine tuning, prompt engineering, or building new models when needed
Productionize and launch new AI technology integrations into Notion's core product
Collaborate with cross-functional teams to deliver product features on time
Stay up-to-date with the latest AI technologies and trends
Skills You'll Need to Bring:
Domain Expert, Teacher and Learner: You have experience building AI products using LLMs, embeddings or other ML natural language technologies. 3+ years of experience in one or more of the following areas: machine learning, recommendation or ranking systems, natural language understanding/generation or artificial intelligence.
Holistic Problem Solver: You approach problems holistically, starting with a clear and accurate understanding of the context. You think critically about the implications of what you're building and how it will impact real people's lives. You can navigate ambiguity flawlessly, decompose complex problems into clean solutions.
Communicate with Care: You communicate nuanced ideas clearly, whether you're explaining technical decisions in writing or brainstorming in real time. In disagreements, you engage thoughtfully with other perspectives and compromise when needed. You are a lifelong learner and invest in both your own growth and the growth, learning, and development of your teammates.
Impact Driven: You care about business impact and prioritize projects accordingly. You understand the balance between craft, speed, and the bottom line. You understand that reach comes with responsibility for our impact-good and bad. Work isn't a solo endeavor for you, and you enjoy collaborating cross-functionally to accomplish shared goals.
Nice to Haves:
You understand how parts of a system fit together, from the user interface to the data model. You are familiar with relational database systems like Postgres or MySQL, and have experience building products from ground up.
You're proficient with data pipeline technologies: Spark, DBT, etc.
You're proficient with any part of our technology stack: React, TypeScript, Node.js, and Postgres.
You have experience driving teams toward shared goals and can balance business priorities with individuals' strengths, areas of interest, and career development goals.
Experience in ranking algorithms, search quality, or related domains is a plus.
You've heard of computing pioneers like Ada Lovelace, Douglas Engelbart, Alan Kay, and others, and understand why we're big fans of their work.
You have interests outside of technology, such as in art, history, or literature.
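For a flavour of the embeddings and ranking work this role touches, here is a toy sketch of cosine-similarity ranking. The document names and vectors are made up for illustration; a real system would use model-generated embeddings:

```python
# Toy embedding-based ranking: order documents by cosine similarity to a query.
# Vectors are hand-written stand-ins for real model embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def rank(query_vec, docs):
    """docs: {doc_id: embedding} -> doc ids sorted best-match first."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

docs = {"roadmap": [0.9, 0.1], "meeting-notes": [0.2, 0.8], "okrs": [0.7, 0.3]}
print(rank([1.0, 0.0], docs))  # "roadmap" ranks first for this query
```

Production search quality work layers learned relevance signals on top of this kind of similarity score, but the scoring-and-sorting skeleton is the same.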
We hire talented and passionate people from a variety of backgrounds because we want our global employee base to represent the wide diversity of our customers. If you're excited about a role but your past experience doesn't align perfectly with every bullet point listed in the job description, we still encourage you to apply. If you're a builder at heart, share our company values, and enthusiastic about making software toolmaking ubiquitous, we want to hear from you.
Notion is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristic. Notion considers qualified applicants with criminal histories, consistent with applicable federal, state and local law. Notion is also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation made due to a disability, please let your recruiter know.
Notion is committed to providing highly competitive cash compensation, equity, and benefits. The compensation offered for this role will be based on multiple factors such as location, the role's scope and complexity, and the candidate's experience and expertise, and may vary from the range provided below. For roles based in San Francisco and New York, the estimated base salary range for this role is $160,000-$220,000 per year.
By clicking “Submit Application”, I understand and agree that Notion and its affiliates and subsidiaries will collect and process my information in accordance with Notion's Global Recruiting Privacy Policy.
$160k-220k yearly 1d ago
Online Product Tester
Online Consumer Panels America
Quality assurance tester job in Daly City, CA
Product Testers are wanted to work from home nationwide in the US to fulfill upcoming contracts with national and international companies. We guarantee 15-25 hours per week, with hourly pay between $25 and $45 depending on the In-Home Usage Test project. No experience required.
No payment is required to apply or to work as an In-Home Usage Tester. You don't have to buy products or pay for shipping; everything is paid by our company. In-Home Usage Testers are considered independent contractors; we pay weekly every Wednesday by direct deposit or by cheque.
Online Consumer Panels America is a consulting firm that specializes in product testing and product development work. We design and conduct In-Home Usage Testing (IHUT) locally and nationally to provide actual user feedback in real-time to companies and market research firms to evaluate products to ensure proper product certification and greater market access.
It is important to note that during your application process, reputable market research companies will determine your demographics and consumer profile to establish what products would be suitable for you to test. Market research companies that partner with us will use questionnaires to identify and target certain types of consumers, to ensure that the right participants are engaged and to achieve the representative sample needed.
Participation in these product testing and consumer panels is always free, secure and private. In-Home Usage Testing is a quick, easy and fun way to make extra cash by telling big brands what you think about their upcoming products and services in the American market.
Main Duties:
Properly document In-Home Usage Tests as instructed in the In-Home Usage Test Daily Schedule (screenshots, audio recordings, videos, product journal entries, etc.)
Take care of the product being tested and use it responsibly
Read and strictly follow the In-Home Usage Test Daily Schedule provided with each product testing project (may include tasks such as unpacking, reading instructions, journal entries, online or mobile feedback, usage of product for a certain amount of time, writing reviews, taking pictures, etc.)
Some In-Home Usage Tests projects may require participants to use MFour's Mobile In-Home Use Test Technology (cutting-edge smartphone technology to capture Point-of-Emotion insights to gain unparalleled depth of responses)
There are times when the product being tested may be discussed in a private chat room that is opened by a market research firm
Write reviews as requested in the In-Home Usage Test Daily Schedule for each project
Requirements:
Ability to follow specific instructions
Excellent attention to detail and curious spirit
Be able to work 15-25 hours per week and commit to a certain routine
Have access to a computer and a reliable internet connection
Have access to a digital camera or cell phone that takes pictures
Be honest and reliable
Good communication skills are an asset
18 years or older
A paid Product Tester position is perfect for those looking for an entry-level opportunity, flexible or seasonal work, temporary work or part-time work. The hours are completely flexible and no previous experience is necessary.
Benefits:
Very competitive pay rate
Weekly pay
Work around your own schedule
Learn about an exciting industry
Telecommute (you can work from home, work or school)
Most of the time you can keep the product you tested
$25 hourly 60d+ ago
Web3 Manual QA Engineer
Dynamic.Xyz
Quality assurance tester job in San Francisco, CA
We're seeking a Web3 Manual QA Engineer
Dynamic started with a simple vision: every app and website will have a wallet. Three years in, that vision is no longer just an idea. It's happening now. Wallets are no longer just for crypto apps. They're becoming the backbone of fintech, payroll, and global remittances. They power faster, cheaper, and more accessible transactions. The best crypto apps, like Ondo Finance, Story, and Magic Eden, already run on Dynamic. Now, the world's top fintech and HR platforms are integrating wallets and payments through Dynamic, tapping into crypto rails. We are at a pivotal moment as we scale from supporting leading crypto apps to becoming the wallet infrastructure of the internet.
Why join Dynamic now?
Own the next wave of apps and fintechs: Your work will directly impact how the world's biggest fintech players adopt wallets and stablecoin payments.
Join at the perfect moment: We're scaling fast, but still early enough that your contributions will define our trajectory.
Build the foundation of modern money: Backed by a16z crypto, Founders Fund, and other top investors, we're making money more connected across chains and ecosystems.
Our product:
Check out a product demo here
What we're looking for:
As a Web3 Manual QA Engineer at Dynamic, you'll play a key role in ensuring the quality and reliability of our core products across web and mobile platforms. You'll work closely with our Engineering and Product teams to identify and reproduce bugs, validate features, and help maintain a world-class user experience across a wide variety of devices and environments.
In this role, you'll also interface directly with customers to understand and triage reported issues, helping to ensure those are accurately documented and prioritized for the engineering team. You'll contribute to our manual QA suite and help continuously improve our QA processes as we scale.
You'll be working across Dynamic's diverse and fast-moving customer base, including some of the most exciting projects in Web3. These span DeFi, NFTs, gaming, and blockchain infrastructure. The position requires a sharp eye for detail, excellent communication skills, and a passion for delivering high-quality software in a fast-paced environment.
Location: We're remote-first, but ideally you're based in New York, the Bay Area, or Miami. We'd love to have more of the team near our core hubs.
You will be a fantastic fit for this role if:
As a Web3 Manual QA Engineer at Dynamic, you bring over three years of hands-on experience in manual QA testing, with a strong foundation in testing methodologies and best practices. You've worked extensively on mobile applications across both iOS and Android, ensuring quality across a range of devices and OS versions. You are proficient in writing and executing test cases, logging detailed bug reports, and working closely with developers to drive issues to resolution. Your experience includes using browser developer tools to debug UI and UX problems, as well as interacting with end users to triage issues, update test suites, and create engineering tickets. You are highly detail-oriented, organized, and thrive in fast-paced startup environments. You have excellent written and verbal communication skills and are comfortable owning the QA process from start to finish. Bonus points if you have experience testing Web3 wallets, blockchain/on-chain apps, or smart contract interactions, all of which are highly relevant to our platform.
You will:
Conduct manual testing of Dynamic's web and mobile applications, ensuring a seamless user experience.
Identify, document, and track software defects using issue-tracking tools such as Jira and Linear.
Collaborate closely with engineers and product managers to refine product quality and user experience.
Develop and execute test cases based on product requirements and user scenarios.
Validate new features and bug fixes before release, ensuring product stability.
Provide detailed feedback on usability, functionality, and performance issues.
Assist in improving testing processes and documentation.
A cutting-edge aviation technology firm in San Francisco is looking for a QA Engineer to establish their quality platform. The successful candidate will drive automation, design testing frameworks, and integrate quality checks into CI/CD. With over 7 years of experience in QA engineering, you will play a pivotal role in ensuring the reliability of software operations. This role is vital in shaping the quality strategy and culture as the company scales rapidly. Join a small high-performing team dedicated to transforming business aviation operations.
$89k-125k yearly est. 5d ago
QA Engineer
Air Apps, Inc.
Quality assurance tester job in San Francisco, CA
About Air Apps
At Air Apps, we believe in thinking bigger and moving faster. We're a family-founded company on a mission to create the world's first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we need your passion and ambition to help us change how people plan, work, and live. Born in Lisbon, Portugal, in 2018, and now with offices in both Lisbon and San Francisco, we've remained self-funded while reaching over 100 million downloads worldwide.
Our long-term focus drives us to challenge the status quo every day, pushing the boundaries of AI-driven solutions that truly make a difference. Here, you'll be a creative force, shaping products that empower people across the globe.
Join us on this journey to redefine resource management and change lives along the way.
The Role
As a QA Engineer at Air Apps, you will be responsible for ensuring the quality, functionality, and usability of our applications through manual and functional testing. You will work closely with developers, product managers, and designers to identify issues early, improve test coverage, and deliver a seamless user experience.
Your role will be essential in detecting bugs, verifying feature implementations, and validating software stability before deployment.
Responsibilities
Perform manual testing on web and mobile applications to ensure a seamless user experience.
Conduct functional, regression, usability, and exploratory testing across multiple platforms.
Develop and execute detailed test plans, test cases, and test scripts.
Document and report bugs, working closely with developers to resolve issues.
Ensure feature compliance by validating product requirements against test results.
Identify edge cases, inconsistencies, and performance issues.
Work with cross-functional teams to improve software quality throughout the development lifecycle.
Provide clear and structured feedback to enhance product functionality and user experience.
Maintain test documentation and contribute to quality assurance best practices.
Requirements
3+ years of experience in manual and functional testing.
Strong understanding of QA methodologies, testing types, and best practices.
Experience with test case management tools (e.g., TestRail, Zephyr, Xray).
Familiarity with bug tracking systems (e.g., Jira, Trello, Bugzilla).
Experience testing web and mobile applications (iOS & Android).
Strong attention to detail and ability to identify complex usability issues.
Knowledge of API testing (e.g., Postman) is a plus.
Ability to work in an agile development environment and adapt to changing priorities.
Strong communication and collaboration skills.
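The API testing mentioned in the requirements boils down to asserting on status codes and payload shape. A Postman test does this in JavaScript; the Python sketch below makes the same kind of check against a canned response. The endpoint semantics and field names are hypothetical:

```python
# Validate an API response the way a Postman or pytest check would:
# status code plus required payload fields. All names here are hypothetical.
import json

def validate_user_response(status_code, body, required=("id", "email", "plan")):
    """Return a list of validation failures (an empty list means the check passed)."""
    failures = []
    if status_code != 200:
        failures.append(f"expected status 200, got {status_code}")
    payload = json.loads(body)
    for field in required:
        if field not in payload:
            failures.append(f"missing field: {field}")
    return failures

canned = json.dumps({"id": 42, "email": "qa@example.com"})
print(validate_user_response(200, canned))  # reports the missing "plan" field
```

Returning a list of failures rather than raising on the first one lets a single test run report every problem with a response at once.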
What benefits are we offering?
Apple hardware ecosystem for work.
Annual Bonus.
Medical Insurance (including vision & dental).
Disability insurance - short and long-term.
401k up to 4% contribution.
Air Conference - an opportunity to meet the team, collaborate, and grow together.
Transportation budget
Free meals at the hub
Gym membership
Diversity & Inclusion
At Air Apps, we are committed to fostering a diverse, inclusive, and equitable workplace. We enthusiastically welcome applicants from all backgrounds, experiences, and perspectives. We celebrate diversity in all its forms and believe that varied voices and experiences make us stronger.
Application Disclaimer
At Air Apps, we value transparency and integrity in our hiring process. Applicants must submit their own work without any AI-generated assistance. Any use of AI in application materials, assessments, or interviews will result in disqualification.
How much does a quality assurance tester earn in San Mateo, CA?
The average quality assurance tester in San Mateo, CA earns between $59,000 and $123,000 annually. This compares to the national average quality assurance tester range of $55,000 to $99,000.
Average quality assurance tester salary in San Mateo, CA
$85,000
What are the biggest employers of Quality Assurance Testers in San Mateo, CA?
The biggest employers of Quality Assurance Testers in San Mateo, CA are: