Quality assurance analyst jobs in Chico, CA - 3,921 jobs

  • Siri Speech QA Engineer: Automation & Quality

    Apple Inc. (4.8 company rating)

    Quality assurance analyst job in San Francisco, CA

    A leading tech company is seeking a Tools and Automation Engineer for the Siri Speech Quality Engineering team in San Francisco. The role involves developing test plans, identifying critical testing areas, and ensuring user-facing features meet high-quality standards. Candidates should have strong coding skills in C/C++, Objective-C/Swift, and Python, along with experience in audio testing. The position offers competitive compensation and benefits that include comprehensive medical coverage and stock options.
    $134k-172k yearly est. 1d ago
  • Salesforce QA Lead - Testing & Automation Leader

    N28Tech

    Quality assurance analyst job in San Francisco, CA

    A leading Salesforce Implementation Partner in San Francisco seeks a Salesforce QA Lead to execute functional and non-functional testing on the Salesforce platform. Candidates should have at least 4 years of QA experience, preferably with Sales Cloud, Service Cloud, and CPQ. This role involves writing test cases, performing various tests, and managing a small team. The ideal candidate is a self-starter with strong communication skills who is eager to grow within the organization.
    $110k-154k yearly est. 4d ago
  • ERP Test Automation Consultant

    Menlo Ventures

    Quality assurance analyst job in San Francisco, CA

    A leading technology firm in San Francisco is seeking a Technical Consultant focused on building automated test frameworks. The ideal candidate will have 3 to 5 years of experience in test automation, proficiency in C# and SQL, and excellent communication skills. This role involves designing test suites, translating requirements into strategies, and contributing to the quality of software implementations. The company offers a comprehensive benefits package, including unlimited paid time off and a wellness benefit.
    $78k-103k yearly est. 1d ago
  • QA Analyst: Elevate Software Quality & Delivery

    Vkare Corp

    Quality assurance analyst job in San Francisco, CA

    A technology solutions provider is seeking a detail-oriented Quality Assurance (QA) Analyst to join the QA team. The ideal candidate will review and test software, create test plans, and collaborate with developers to ensure high quality and timely delivery. Important skills include experience with both manual and automated testing, strong analytical abilities, and effective communication in Agile environments. This role supports operational excellence and process improvement in software development.
    $77k-103k yearly est. 3d ago
  • Senior Biotech QA Consultant - GMP/GCP Expert

    Bull City Talent Group

    Quality assurance analyst job in San Francisco, CA

    A leading consulting firm in San Francisco is seeking an experienced QA professional specializing in the biotech sector. Candidates should have over 10 years of relevant experience and strong expertise in GMP and GCP regulations. The role involves collaboration with CMC teams and providing QA oversight for clinical trials. Exceptional communication skills are essential. This position offers an opportunity to work in a dynamic and fast-paced environment.
    $86k-118k yearly est. 1d ago
  • Quality Assurance Engineer

    Alibaba Cloud

    Quality assurance analyst job in Sunnyvale, CA

    We, the Alibaba Overseas Engineering & TPM team, are seeking a highly skilled and experienced Construction Quality Assurance Expert / On-site Testing & Commissioning Supervisor to join our dynamic and innovative team. Our team is dedicated to the design, construction, testing & commissioning, and optimization of public cloud infrastructure and facilities. This multidisciplinary group combines expertise in electrical, mechanical, and civil engineering, construction progress management, and construction quality management to ensure delivery of high-performance environments that support critical IT equipment needs. In this role, you will be responsible for ensuring the successful testing and commissioning of our electrical and mechanical facilities, with a focus on spending at least 30% of your working time on construction sites. You will be accountable for the following key responsibilities:

    1. Site Supervision and Coordination
    2. Facility Testing and Commissioning
    3. Documentation and Reporting
    4. Compliance and Quality Assurance
    5. Escalation and Stakeholder Engagement

    Minimum qualifications:

    - A minimum of 5 years of proven experience in facility testing and commissioning, with a strong track record of successful construction project delivery.
    - Excellent communication and stakeholder management skills, with the ability to present technical information to both technical and non-technical audiences.
    - Proficiency in developing and executing comprehensive testing and commissioning plans, as well as interpreting and documenting test results.
    - Bachelor's degree in Engineering (Electrical, Mechanical, or a related field).

    Preferred qualifications:

    - Extensive knowledge of electrical and mechanical infrastructure, including but not limited to power, cooling, ventilation, fire-fighting, plumbing, drainage, and monitoring.
    - Excellent problem-solving and analytical skills, with the ability to identify and resolve complex technical issues.
    - Strong project management and coordination skills, with the ability to work effectively with cross-functional teams.
    - Master's degree in Engineering (Electrical, Mechanical, or a related field).
    - Professional Engineer (PE) licensure is preferred.

    The pay range for this position at commencement of employment is expected to be between $133,200 and $219,600/year. However, base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If hired, the employee will be in an "at-will position" and the Company reserves the right to modify base salary (as well as any other discretionary payment or compensation program) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors.
    $133.2k-219.6k yearly 2d ago
  • QA Engineer, Scientific Workflows

    Mithrl Inc.

    Quality assurance analyst job in San Francisco, CA

    ABOUT MITHRL

    We imagine a world where new medicines reach patients in months, not years, and where scientific breakthroughs happen at the speed of thought. Mithrl is building the world's first commercially available AI Co-Scientist. It is a discovery engine that transforms messy biological data into insights in minutes. Scientists ask questions in natural language, and Mithrl responds with real analysis, novel targets, hypotheses, and patent-ready reports. Our traction speaks for itself:

    - 12X year-over-year revenue growth
    - Trusted by leading biotechs and big pharma across three continents
    - Driving real breakthroughs from target discovery to patient outcomes

    ABOUT THE ROLE

    We are hiring a QA Engineer, Scientific Software to build the test, validation, and monitoring infrastructure that guarantees the correctness and reliability of the Mithrl AI Co-Scientist. This role requires a PhD-level scientist or computational biologist who understands the drug development lifecycle and who has hands-on experience with omics data. Without this scientific foundation, it is not possible to evaluate whether Mithrl's outputs are biologically meaningful. You will create automated tests for analysis workflows, ingestion pipelines, and discovery applications. You will build CI systems that catch regressions early, set up monitoring and alerting for system behavior, and ensure that every module in Mithrl produces scientifically valid and reproducible outputs. This role bridges scientific understanding with software quality engineering and is critical for maintaining trust in Mithrl's analysis engine. If you are a scientist with a passion for product reliability, reproducibility, and validation of ML-powered scientific tools, this is a uniquely impactful position.

    WHAT YOU WILL DO

    - Build automated test infrastructure for data ingestion, analysis modules, discovery applications, and new product features
    - Develop scientific validation frameworks that check correctness and reproducibility of ML-driven biological analyses
    - Establish CI workflows that run end-to-end tests on every commit and catch scientific and computational regressions
    - Build monitoring and alerting systems that track the health, performance, and scientific integrity of product modules
    - Design automated checks for omics workflows
    - Validate that responses generated by Mithrl align with biological logic and expectations from discovery and preclinical development
    - Work closely with ML engineers, data engineers, and application scientists to ensure scientific accuracy across releases
    - Maintain documentation, data fixtures, and gold-standard datasets for regression testing
    - Support the development of QA processes that scale with rapid product growth and new analysis capabilities
    - Build a culture of scientific correctness and software reliability throughout the engineering and product teams

    WHAT YOU BRING

    Required Qualifications

    - PhD in biology, computational biology, bioinformatics, systems biology, or a related discovery field
    - Deep understanding of the drug discovery and preclinical development lifecycle
    - Hands-on experience working with omics data such as transcriptomics, RNA-seq, proteomics, ATAC-seq, single-cell datasets, or imaging-derived features
    - Ability to evaluate whether a result, analysis, or insight is scientifically correct based on domain knowledge
    - Familiarity with common discovery analyses such as differential expression, enrichment, pathway reasoning, target scoring, and feature importance
    - Experience with Python or similar languages and comfort with scientific computing workflows
    - Strong interest in software quality, reproducibility, and validation of ML-driven scientific systems
    - Excellent communication skills and ability to partner with engineers and scientists

    Nice to Have

    - Experience building automated tests or QA frameworks for scientific or ML systems
    - Experience with workflow engines, scientific pipelines, or reproducibility tools
    - Familiarity with CI tools and modern software development practices
    - Experience validating outputs of AI-powered analysis tools
    - Previous work in a techbio company or computational platform environment

    WHAT YOU WILL LOVE AT MITHRL

    - High ownership: You will be the guardian of scientific correctness and reliability inside the AI Co-Scientist
    - Impact: You will work with cutting-edge ML, multi-modal data, and real discovery workflows
    - Team: Join a tight-knit, talent-dense team of engineers, scientists, and builders
    - Culture: We value consistency, clarity, and hard work. We solve hard problems through focused daily execution
    - Speed: We ship fast (2x/week) and improve continuously based on real user feedback
    - Location: Beautiful SF office with a high-energy, in-person culture
    - Benefits: Comprehensive PPO health coverage through Anthem (medical, dental, and vision) + 401(k) with top-tier plans

    We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
    $89k-125k yearly est. 5d ago
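The Mithrl listing above centers on reproducibility and gold-standard checks for scientific pipelines. As a minimal, hypothetical sketch of that pattern (the toy pipeline, seed, and tolerance below are invented for illustration and are not Mithrl code), a reproducibility check might look like:

```python
# Sketch of a scientific-validation check of the kind the role describes:
# re-run an analysis with a fixed seed and require the results to agree.
# All names and numbers here are illustrative.

import random
import statistics


def fold_change(treated: list[float], control: list[float]) -> float:
    """Toy 'analysis': ratio of mean expression, treated vs control."""
    return statistics.mean(treated) / statistics.mean(control)


def run_pipeline(seed: int) -> float:
    """Simulate an omics analysis run with a fixed random seed."""
    rng = random.Random(seed)  # fixed seed => reproducible output
    control = [rng.gauss(10.0, 1.0) for _ in range(100)]
    treated = [rng.gauss(20.0, 1.0) for _ in range(100)]
    return fold_change(treated, control)


def check_reproducibility(seed: int = 42, tol: float = 1e-12) -> bool:
    """Two runs with the same seed must agree within tolerance."""
    return abs(run_pipeline(seed) - run_pipeline(seed)) <= tol
```

In practice such a check would compare a full analysis output against a version-controlled gold dataset, but the core pattern is the same: fix the seed, re-run, and fail loudly on any drift.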
  • QA Automation Engineer

    Air Apps, Inc.

    Quality assurance analyst job in San Francisco, CA

    About Air Apps

    At Air Apps, we believe in thinking bigger and moving faster. We're a family-founded company on a mission to create the world's first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we need your passion and ambition to help us change how people plan, work, and live. Born in Lisbon, Portugal, in 2018, and now with offices in both Lisbon and San Francisco, we've remained self-funded while reaching over 100 million downloads worldwide. Our long-term focus drives us to challenge the status quo every day, pushing the boundaries of AI-driven solutions that truly make a difference. Here, you'll be a creative force, shaping products that empower people across the globe. Join us on this journey to redefine resource management and change lives along the way.

    The Role

    As a QA Automation Engineer at Air Apps, you will play a crucial role in building and maintaining automated test frameworks and regression test suites to ensure our applications meet the highest quality standards. You will work closely with developers, product managers, and QA teams to implement automated testing strategies, increase test coverage, and optimize testing efficiency. Your contributions will directly impact the stability, performance, and reliability of our applications across web and mobile platforms.

    Responsibilities

    - Design, develop, and maintain automated test frameworks for web and mobile applications.
    - Create and execute automated regression, functional, performance, and API tests.
    - Integrate automated tests into CI/CD pipelines for continuous testing.
    - Collaborate with development teams to ensure testability of features and early defect detection.
    - Analyze test results, troubleshoot failures, and report issues to development teams.
    - Enhance test coverage by identifying critical scenarios and edge cases.
    - Work with cross-functional teams to define quality standards and best practices.
    - Stay up to date with emerging automation tools, frameworks, and testing methodologies.

    Requirements

    - Around 3+ years of experience in test automation development.
    - Proficiency in test automation frameworks (e.g., Selenium, Cypress, Appium, Playwright).
    - Strong experience in programming languages such as Python, Java, JavaScript, or TypeScript.
    - Hands-on experience with API testing (Postman, REST Assured, or similar).
    - Familiarity with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, CircleCI).
    - Experience with version control systems (Git) and test management tools (TestRail, Zephyr).
    - Strong understanding of software testing principles, SDLC, and agile methodologies.
    - Knowledge of performance and load testing tools (JMeter, Gatling) is a plus.
    - Experience testing mobile applications (iOS & Android) is a plus.
    - Strong analytical and problem-solving skills with a keen attention to detail.

    What benefits are we offering?

    - Apple hardware ecosystem for work.
    - Annual bonus.
    - Medical insurance (including vision & dental).
    - Disability insurance, short and long-term.
    - 401(k) up to 4% contribution.
    - Air Conference: an opportunity to meet the team, collaborate, and grow together.
    - Transportation budget.
    - Free meals at the hub.
    - Gym membership.

    Diversity & Inclusion

    At Air Apps, we are committed to fostering a diverse, inclusive, and equitable workplace. We enthusiastically welcome applicants from all backgrounds, experiences, and perspectives. We celebrate diversity in all its forms and believe that varied voices and experiences make us stronger.

    Application Disclaimer

    At Air Apps, we value transparency and integrity in our hiring process. Applicants must submit their own work without any AI-generated assistance. Any use of AI in application materials, assessments, or interviews will result in disqualification.
    $89k-125k yearly est. 3d ago
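Several listings on this page, including the one above, ask for automated regression suites wired into CI. A stripped-down illustration of the gold-fixture pattern they describe (the function under test and the fixtures are invented for this sketch, not Air Apps code):

```python
# Minimal gold-fixture regression harness (illustrative sketch only).

def normalize_price(raw: str) -> float:
    """Function under test: parse a salary string like '$134k' into dollars."""
    raw = raw.strip().lstrip("$")
    if raw.endswith("k"):
        return float(raw[:-1]) * 1_000
    return float(raw)


# "Gold standard" expected outputs; in a real suite these would live in
# version-controlled fixture files and run on every commit via CI.
GOLD = {
    "$134k": 134_000.0,
    "$78k": 78_000.0,
    "250000": 250_000.0,
}


def run_regression() -> list[str]:
    """Return a list of failure messages; an empty list means no regressions."""
    failures = []
    for raw, expected in GOLD.items():
        got = normalize_price(raw)
        if got != expected:
            failures.append(f"{raw!r}: expected {expected}, got {got}")
    return failures
```

In a CI pipeline (for example, a GitHub Actions job running pytest), a check like `run_regression()` would execute on every commit and fail the build if it returned any entries.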
  • Simulation and Test Engineer (Conversational AI) - US Based

    Andromeda Robotics

    Quality assurance analyst job in San Francisco, CA

    The Bigger Picture

    At Andromeda Robotics, we're not just imagining the future of human-robot relationships; we're building it. Abi is the first emotionally intelligent humanoid companion robot, designed to bring care, conversation, and joy to the people who need it most. Backed by tier-1 investors and with customers already deploying Abi across aged care and healthcare, we're scaling fast, and we're doing it with an engineering-first culture that's obsessed with pushing the limits of what's possible. This is a rare moment to join: we're post-technical validation, pre-ubiquity, and building out the team that will take Abi from early access to global scale.

    Our Values

    - Empathy: Kindness and compassion are at the heart of everything we do.
    - Play: Play sharpens focus. It keeps us curious, fast, and obsessed with the craft.
    - Never Settle: A relentless ambition, bias toward action, and uncomfortable levels of curiosity.
    - Tenacity: Tenacious under pressure, we assume chaos and stay in motion to adapt and progress.
    - Unity: Different minds. Shared mission. No passengers.

    The Role

    We are looking for a creative and driven Simulation and Test Engineer to build Andromeda's testing infrastructure for our conversational AI systems and embodied character behaviours. Your immediate focus will be creating robust test systems for Abi's voice-to-voice chatbot, social awareness perception, and gesture motor control. As this infrastructure matures, you'll extend it into simulation environments for generating synthetic training data for character animation and gesture models.

    The Team

    You'll work at the intersection of our character software, robotics, perception, conversational AI, controls, and audio engineering teams. We bring deep expertise from autonomous vehicles and robotics, including simulation backgrounds. You'll collaborate with product owners and technical specialists to define requirements, integrate systems, and ensure quality across our AI/ML stack.

    Phase 1: Build The Test Foundation

    - Define and stand up synthetic test environments for our AI/ML conversational stack
    - Conversational AI testing: voice-to-voice chat quality, response appropriateness, tool calling accuracy
    - Memory system testing: context retention, recall accuracy, conversation coherence
    - Audio modelling and testing: multi-speaker scenarios, room acoustics, voice activity detection
    - Perception system testing: social awareness (face detection, gaze tracking, person tracking)
    - Gesture appropriateness testing: working with our Controls/ML team, create test infrastructure to validate that Abi's body gestures are appropriate
    - CI/CD and automated regression testing for all AI/ML subsystems
    - Custodian of quality metrics: if they don't exist, work with stakeholders to elicit use cases, derive requirements, and establish measurable quality metrics
    - Requirements formalisation: you're skilled at gathering, documenting, and tracing requirements back to test cases

    Phase 2: Scale To ML Training Infrastructure

    Our approach to gesture generation requires high-fidelity synthetic interaction data at scale. You'll investigate and build the infrastructure to generate this data, working closely with our character software team to define requirements and validate approaches.

    - Extend test environments into training data generation pipelines
    - Investigate and stand up simulation tools (e.g. Unity, Unreal Engine, Isaac Sim) to support our machine learning pipeline with synthetic data and validation infrastructure
    - Build infrastructure for fine-tuning character animation models on simulated multi-actor scenarios
    - Enable ML-generated gesture development to augment hand-crafted animation workflows
    - Create virtual environments with diverse social interaction scenarios for training and evaluation

    Success In This Role Looks Like

    In months 1-3, stabilise our conversational system with automated regression tests and measurable quality benchmarks. By month 6, deliver an integrated simulation environment enabling rapid testing and iteration across our AI/ML stack. You'll design tests that push our systems beyond their limits and find what's brittle. Through trade studies and make-vs-buy decisions, you'll establish the infrastructure, set up automatic regression tests, and trace test cases back to high-level requirements. You'll be the final guardian, verifying that our AI and machine learning systems work as intended before integration with Abi's physical platform. Your work will directly impact the speed and quality of our development, ensuring that every software build is robust, reliable, and safe.

    Key Responsibilities

    - Architect and Build: Design, develop, and maintain scalable test infrastructure for conversational AI, perception, and gesture control systems
    - Own Testing Pipeline: Develop a robust CI/CD pipeline for automated regression testing, enabling rapid iteration and guaranteeing quality before deployment
    - Develop Test Scenarios: Create diverse audio environments, multi-actor social scenarios, and edge cases to rigorously test Abi's conversational and social capabilities
    - Model with Fidelity: Implement accurate models of Abi's hardware stack (cameras, microphone array, upper body motion) as needed for test and simulation scenarios
    - Enable Future ML Training: Design test infrastructure with an eye towards evolution into a simulation platform for generating synthetic data for character animation and gesture models
    - Integrate and Collaborate: Work closely with the robotics, AI, and software teams to seamlessly integrate their stacks into the test infrastructure and define testing requirements
    - Analyse and Improve: Develop metrics, tools, and dashboards to analyse test data, identify bugs, track performance, and provide actionable feedback to the engineering teams

    Ideally You Have

    - Bachelor's or Master's in Computer Science, Robotics, Engineering, or a related field
    - 5+ years of professional experience testing complex AI/ML systems (conversational AI, perception systems, or embodied AI)
    - Strong programming proficiency in Python (essential); C++ experience valuable
    - Hands-on experience with LLM testing, voice AI systems, or chatbot evaluation frameworks
    - Understanding of audio processing, speech recognition, and/or computer vision fundamentals
    - Experience with testing frameworks and CI/CD tools (pytest, Jenkins, GitHub Actions, etc.)
    - Familiarity with ML evaluation metrics and experimental design
    - A proactive, first-principles thinker who is excited by the prospect of owning a critical system at an early-stage startup

    Bonus Points

    - Experience with simulation platforms (e.g. Unity, Unreal Engine, NVIDIA Isaac Sim, Gazebo) and physics engines
    - Experience with character animation systems, motion capture data, or gesture generation
    - Knowledge of reinforcement learning, imitation learning, or synthetic data generation for training ML models
    - Experience with 3D modelling tools and game engine content creation
    - Understanding of ROS2 for robotics integration
    - Knowledge of sensor modelling techniques for cameras and audio
    - Experience building and managing large-scale, cloud-based simulation infrastructure
    - PhD in a relevant field

    The expected base salary range for this role, when performed in our San Francisco office, is $150,000 - $250,000 USD, depending on skills and experience. The salary for this position may vary depending on factors such as job-related knowledge, skills, and experience. The total compensation package may also include additional benefits or components based on the specific role. Details will be provided if an employment offer is made.

    If you're excited about this role but don't meet every requirement, that's okay; we encourage you to apply. At Andromeda Robotics, we celebrate diversity and are committed to creating an inclusive environment for all employees. Let's build the future together. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Please note: At this time, we are generally not offering visa sponsorship for this role.
    $150k-250k yearly 5d ago
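The listing above calls for testing "voice-to-voice chat quality" and "response appropriateness". Real conversational-AI evaluation uses far richer metrics, but a deliberately simple scorer shows the shape of such a check (every name below is hypothetical, not Andromeda's actual stack):

```python
# Toy "response appropriateness" scorer for a conversational system.
# Real evaluation would use semantic similarity, LLM judges, or human
# ratings; this keyword check only illustrates the harness shape.

def score_response(response: str, required: set[str], banned: set[str]) -> float:
    """Fraction of required phrases present, zeroed if any banned phrase appears."""
    text = response.lower()
    if any(phrase in text for phrase in banned):
        return 0.0  # a banned phrase fails the response outright
    if not required:
        return 1.0
    hits = sum(1 for phrase in required if phrase in text)
    return hits / len(required)
```

A regression suite built on this would run a fixed set of prompts through the chatbot and assert that each response's score stays above a chosen threshold across releases.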
  • Senior GPU Training Performance Engineer

    Advanced Micro Devices 4.9company rating

    Quality assurance analyst job in San Jose, CA

    A leading technology company is seeking a Principal / Senior GPU Software Performance Engineer in San Jose, CA. This role involves optimizing GPU workloads for training performance, enhancing throughput, and resolving bottlenecks in distributed systems. The ideal candidate will have strong skills in GPU performance engineering and experience with deep learning frameworks, particularly PyTorch. This position offers a chance to work in a collaborative environment focused on innovation and inclusivity. #J-18808-Ljbffr
    $137k-178k yearly est. 4d ago
  • Marketplace Growth & Performance Lead

    Scoutbee GmbH

    Quality assurance analyst job in San Francisco, CA

    A leading procurement technology company in San Francisco is seeking a Head/Director of Marketplace Performance to enhance engagement in their marketplace. This role involves cross-functional leadership, KPI management, and strategic development to improve buyer and supplier experiences. Ideal candidates will have over 8 years of relevant experience and strong analytical skills. Competitive compensation and a remote-flexible culture are offered. #J-18808-Ljbffr
    $104k-151k yearly est. 3d ago
  • Test Engineer

    Mvp VC

    Quality assurance analyst job in San Francisco, CA

Wanna join the adventure? As a Test Engineer at Loft Orbital, you will contribute to the design, execution, and improvement of test processes that ensure the reliability and performance of Loft's space hardware. You'll work hands-on with a range of systems, from subsystems to full spacecraft, supporting environmental, functional, and automated test campaigns. You will collaborate closely with design, manufacturing, and software teams to develop test plans, execute procedures, and analyze results to ensure our satellites and components meet mission requirements. This role offers the opportunity to deepen your technical expertise in testing complex aerospace hardware while working in a fast-paced and collaborative environment.

About the Role
Support the design, development, and execution of test procedures, setups, and instrumentation for spacecraft- and subsystem-level testing.
Participate in the development and implementation of automated and manual test equipment, fixtures, and/or ground support equipment (GSE).
Execute and document environmental and functional tests, including vibration, thermal, thermal vacuum (TVAC), and EMI.
Collect, analyze, and interpret test data; generate reports and document results, flags, and non-conformances.
Collaborate cross-functionally to ensure tests align with mission requirements, schedules, and quality standards.
Contribute to process improvement initiatives for test documentation, configuration control, and data management.

Must Haves
Bachelor's degree in Electrical, Mechanical, Aerospace, or Systems Engineering, or equivalent experience.
3-7 years of experience in aerospace or related hardware testing.
Working knowledge of test methods and environmental test standards (e.g., GEVS, MIL-STD).
Ability to interpret engineering drawings, schematics, and test specifications.
Familiarity with data acquisition systems, instrumentation, and control systems.
Experience with scripting or automation tools (e.g., Python, MATLAB, LabVIEW).
Strong problem-solving skills and attention to detail.
Comfortable working hands-on with hardware and in test lab environments.

Nice to Haves
Experience with Electrical Ground Support Equipment (EGSE).
Exposure to requirements verification and validation processes.
Experience supporting satellite or aerospace test campaigns, including environmental testing.
Familiarity with data management tools or configuration control systems.
Willingness to support off-hours or weekend testing when required.

Benefits
100% company-paid medical, dental, and vision insurance option for employees and dependents.
Flexible Spending (FSA) and Health Savings (HSA) Accounts offered, with an employer contribution to the HSA.
100% employer-paid Life, AD&D, Short-Term, and Long-Term Disability insurance.
Flexible Time Off policy for vacation and sick leave, and 12 paid holidays.
401(k) plan and equity options.
Daily catered lunches and snacks in office.
International exposure to our team in France.
Fully paid parental leave: 14 weeks for the birthing parent and 10 weeks for the non-birthing parent.
Carrot Fertility provides comprehensive, inclusive fertility healthcare and family-forming benefits with financial support.
Off-sites and many social events and celebrations.
Relocation assistance when applicable.

Salary: $100,000 - $135,000 a year.

Research shows that while men apply to jobs where they meet an average of 60% of the criteria, women and other underrepresented people tend to apply only when they meet 100% of the qualifications. At Loft, we value respectful debate and people who aren't afraid to challenge assumptions. We strongly encourage you to apply, even if you don't check all the boxes.

Who We Are
Loft: Space Made Simple. Founded in 2017, Loft provides governments, companies, and research institutions with a fast, reliable, and flexible way to deploy missions in orbit. We integrate, launch, and operate spacecraft, offering end-to-end missions as a service across Earth observation, IoT connectivity, in-orbit demonstrations, national security missions, and more. Leveraging our existing space infrastructure and an extensive inventory of satellite buses, Loft is reducing years-long integration and launch timelines to months. With more than 25 missions flown, Loft's flight heritage and proven technologies enable customers to focus on their mission objectives.

At Loft, you'll be given the autonomy and ownership to solve significant challenges, with a close-knit and supportive team at your back. We believe that diversity and community are the foundation of an open culture. We are committed to hiring the best people regardless of background and to making their time at Loft the most fulfilling period of their career. We value kind, supportive, and team-oriented collaborators. It is also crucial for us that you are a problem solver and a great communicator. As our team is international, you will need strong English skills to collaborate effectively, easily communicate complex ideas, and convey important messages. With 4 satellites on orbit and a wave of exciting missions launching soon, we are scaling up quickly across our offices in San Francisco, CA; Golden, CO; and Toulouse, France. As an international company, your resume will be reviewed by people across our offices, so please attach a copy in English. #J-18808-Ljbffr
    $100k-135k yearly 3d ago
  • Senior QA & Test Automation Engineer

    Williams-Sonoma, Inc. 4.4company rating

    Quality assurance analyst job in San Francisco, CA

    A leading specialty retailer in home products is seeking a Sr. QA Engineer to ensure the quality and reliability of its digital commerce platforms. This role involves driving QA strategies, leading testing efforts, and collaborating with multiple teams to deliver exceptional customer experiences. Candidates should have 7-9 years of experience in Quality Engineering or Software Testing and a strong understanding of e-commerce. The position is located in San Francisco, California. #J-18808-Ljbffr
    $122k-150k yearly est. 4d ago
  • Senior Performance Test Engineer - Scale & Optimize APIs

    Symphony Industrial Ai, Inc.

    Quality assurance analyst job in Palo Alto, CA

    A leading AI-powered firm in Palo Alto, California, seeks a Senior Performance Test Engineer to lead diverse testing strategies for web applications and data platforms. With 5-8 years of performance testing experience, the ideal candidate will design and analyze tests, collaborate with teams, and provide insights. Strong proficiency in tools like JMeter and scripting in languages such as Java or Python is essential. The company offers an innovative environment and values performance excellence. #J-18808-Ljbffr
    $125k-180k yearly est. 5d ago
  • Senior Performance Modelling Engineer - AI/Hardware Simulator

    Pagebolt Wordpress

    Quality assurance analyst job in San Francisco, CA

    A leading technology company is seeking a Staff Performance Modelling Engineer in San Francisco to create and own analytical models influencing software and hardware evolution. The role involves significant collaboration, performance analysis, and requires expertise in performance modelling with C++ and Python. Strong experience and a Bachelor's degree in a related field are preferred. This position offers excellent benefits, including competitive salary and full relocation support. #J-18808-Ljbffr
    $125k-180k yearly est. 1d ago
  • Quality Assurance Specialist

    Rush Personnel Services, Inc.

    Quality assurance analyst job in Yuba City, CA

    Thriving Yuba City business seeks a motivated Quality Assurance Specialist! Hiring NOW for this fantastic full-time opportunity! Assist with document control, production and quality record review, record keeping, internal audits, and system documents.

    Requirements:
    Must have 1-2 years of recent experience.
    HACCP certified.
    Able to achieve internal Audit, BRC, and PCQI certification within 6 months.

    Responsibilities and skills:
    Back up the QC Line Technician in conducting QC line checks.
    Monitor CCPs, GMP inspections, pre-operational inspections, environmental swabbing, etc.
    Must have excellent communication and organizational skills to back up the front office receptionist.
    Assist with maintaining the Food Safety and Quality Systems.
    Assist with regulatory and third-party audits.
    Maintain the company's document control system and document verification.
    Conduct daily, weekly, and monthly GMP inspections.
    An outgoing person to be the connection point for customers: sizing, quoting, and support!
    Assist customers with fulfillment, technical support, and troubleshooting.

    Schedule: Monday through Friday, 7am to 4pm.

    Apply Now: RUSH Personnel Services Inc., 650 N. Walton St., Yuba City, CA 95993. Call 530-770-3790 for more information!
    $63k-103k yearly est. 15d ago
  • Quality Assurance Coordinator

    Turning Point Community Programs 4.2company rating

    Quality assurance analyst job in Chico, CA

    Turning Point Community Programs is seeking a Quality Assurance Coordinator for our Transition Support Services (TSS) North program in Chico. Turning Point Community Programs (TPCP) provides integrated, cost-effective mental health services, employment, and housing for adults, children, and their families that promote recovery, independence, and self-sufficiency. We are committed to innovative and high-quality services that assist adults and children with psychiatric, emotional, and/or developmental disabilities in achieving their goals. TPCP has offered a path to mental health and recovery since 1976. We help people in our community every single day, creating a better space for all types of people in need. Join our mission of offering hope, respect, and support to our clients on their journey to mental health and wellness.

    GENERAL PURPOSE
    Under the administrative supervision of the Clinical Director, this position is responsible for ensuring that the program remains in compliance with Regional Center guidelines. Assists the Clinical Director in the quality management functioning of the program.

    DISTINGUISHING CHARACTERISTICS
    This is an at-will administrative position within a program. Additionally, this position is responsible for the day-to-day completion of critical paperwork and assisting the Clinical Director.

    ESSENTIAL DUTIES AND RESPONSIBILITIES (ILLUSTRATIVE ONLY)
    The duties listed below are intended only as illustrations of the various types of work that could be performed. The omission of specific statements of duties does not exclude them from the position if the work is similar, related, or a logical assignment to this class.
    Completes diagnosis updates as assigned.
    Completes MORS/8 Determinants assessments as assigned.
    Tracks progress notes and provides feedback directly to the management team.
    Tracks assessments due and their completion.
    Works in coordination with the Team Leaders and Clinical Director to ensure that all assessments and client plans are completed in a timely manner.
    Attends/conducts Utilization Review meetings when the Clinical Director is not available.
    Coordinates with the Clinical Director to implement recommendations.
    Responsible for tracking and reviewing results of internal utilization review.
    Reviews charts to ensure that they meet state and legal/Regional Center requirements.
    Assists the Clinical Director and Program Director with developing the Quality Improvement plan and implementing changes.
    Assists the Clinical Director with filing, organizing, and maintaining a record of KETs (Key Event Tracking) and inputting data into the charting system.
    Assists the Clinical Director with filing, organizing, and maintaining the Risk Management Binder (SIRs).
    In coordination with the Clinical Director and Program Director, reviews and evaluates customer satisfaction/performance outcome data.
    Ensures the safety, health, and well-being of the members.
    Completes paperwork as assigned in a timely manner.
    Meets the standards set for performance in all aspects of job duties.
    Provides support to other staff members as needed.
    Adheres to and upholds the policies and procedures of Turning Point Community Programs.
    Attends staff meetings unless approval for non-attendance is secured from the Clinical Director or Program Director.

    Schedule: Monday - Friday, 8:30 am - 5:00 pm
    Compensation: $26.00 - $27.59 per hour

    Interested? Join us at our open interviews on Wednesdays from 2-4 PM, located at 10850 Gold Center Drive, Suite 325, Rancho Cordova, CA 95670, or apply online now!
    $26-27.6 hourly 60d+ ago

Learn more about quality assurance analyst jobs

How much does a quality assurance analyst earn in Chico, CA?

The average quality assurance analyst in Chico, CA earns between $66,000 and $115,000 annually. This compares to the national average quality assurance analyst range of $57,000 to $93,000.

Average quality assurance analyst salary in Chico, CA

$87,000
Job type you want
Full Time
Part Time
Internship
Temporary