Research and development engineer jobs in San Rafael, CA - 7,218 jobs
All
Research And Development Engineer
Research Engineer
Optical Engineer
Senior Product Design Engineer
Device Engineer
Product Development Engineer
Clinical Engineer
On-Device CVML Engineer for Spatial AI
Apple Inc. 4.8
Research and development engineer job in Sunnyvale, CA
A leading technology company in Sunnyvale is seeking a dedicated engineer for machine learning systems, focusing on AI and spatial computing. The role requires expertise in deep learning frameworks and strong programming skills in C++, Swift, and Python. Responsibilities include developing advanced AI systems, collaborating with cross-functional teams, and pushing the boundaries of innovation with real-world applications. This position offers competitive compensation with a base pay range of $181,100 to $318,400, alongside comprehensive benefits.
$181.1k-318.4k yearly 3d ago
Optical Engineer
Meta 4.8
Research and development engineer job in Menlo Park, CA
Meta Platforms, Inc. (Meta), formerly known as Facebook Inc., builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps and services like Messenger, Instagram, and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. To apply, click “Apply to Job” online on this web page.
Optical Engineer Responsibilities:
Collaborate with Meta engineers on the next generation optical communication technology with a particular focus on the testing of optical modules and subsystems for Data Center applications.
Work in a collaborative environment across Meta teams including SW, Thermal, Mechanical, and EE.
Develop test and simulation methodologies relevant for next-generation, high-speed optical technologies.
Develop automation practices and solutions for lab testing.
Lead new optical communication NPI programs, working closely with optics vendors and ODMs.
Interface with the industry and contribute to the RFI/RFP process for optical solutions.
Communicate complex optical hardware requirements to non-experts both internally and to industry-facing stakeholders.
Minimum Qualifications:
Master's degree (or foreign degree equivalent) in Electrical Engineering, Computer Engineering, Optical Engineering, Physics, or a related field and 3 years of experience in the job offered or in a computer-related occupation
Requires 3 years of experience in the following:
Imaging optics for both photographic and computer vision applications
Both sequential and non-sequential optical design software (e.g. Zemax, Code V, LightTools, FRED and/or ASAP)
Image quality evaluation, imaging sensor operation and characterization, stray light modeling and analysis, camera module or lens testing and characterization, imaging and illumination optics design and manufacturing
Theoretical analysis using MATLAB, Python or similar programs
Statistical analysis using MATLAB
Public Compensation:
$181,390/year to $189,200/year + bonus + equity + benefits
Industry:
Internet
Equal Opportunity:
Meta is proud to be an Equal Employment Opportunity and affirmative action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
$181.4k-189.2k yearly 3d ago
Systems R&D Engineer, Industrial Compute
OpenAI 4.2
Research and development engineer job in San Francisco, CA
About the Team
The Scaling team is responsible for the architectural and engineering backbone of OpenAI's infrastructure. We design and deliver advanced systems that support the deployment and operation of cutting‑edge AI models. Our work spans system software, networking, platform architecture, fleet‑level monitoring, and performance optimization.
As part of the Stargate initiative, the Industrial Compute team's mission is to build and operate the foundation that powers our research and products. To deliver on this mission, there is a critical need for advanced lab infrastructure and capabilities to perform rapid prototyping, targeted and end‑to‑end testing, and system qualification across the full software and hardware stack to ensure readiness and stability for fast and smooth deployment of our large‑scale clusters.
About the Role
We are seeking an experienced engineer to lead advanced R&D, rapid prototyping, and build‑out of in‑house lab infrastructure to support system bring‑up, testing, and qualification for next‑generation systems.
In this role, you will:
Work closely with the research, fleet, silicon, and hardware health teams, as well as external vendors and partners, to accelerate system qualification, increase deployment velocity, and improve platform and fleet robustness at scale. You will focus on detailed evaluation and systematic testing of early hardware, including servers, racks, NICs, switches, chips, optics, transceivers, and other lab prototype equipment.
Your engineering work will span a broad range of software (systems, firmware, RDMA, collectives, cluster provisioning, orchestration, etc.) and/or hardware solutions (e.g., FPGA development) to manage lab infrastructure, support bring‑up activities and perform extensive characterizations of new hardware through microbenchmarking, fault injection, impairment modelling, localized stress testing, and end‑to‑end evaluations.
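As a rough illustration of the microbenchmarking mentioned above (not OpenAI's actual tooling), the sketch below times repeated host-memory copies to estimate copy throughput; real hardware characterization would extend the same idea to NICs, accelerators, collectives, and fault injection.

import time
import numpy as np

# Illustrative host-memory copy microbenchmark; generic example code only.
def copy_throughput_gbps(size_mb=512, repeats=10):
    """Return best-case copy throughput (GiB copied per second) over several timed copies."""
    buf = np.frombuffer(np.random.bytes(size_mb * 1024 * 1024), dtype=np.uint8)
    dst = np.empty_like(buf)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        np.copyto(dst, buf)
        best = min(best, time.perf_counter() - start)
    return (size_mb / 1024) / best

if __name__ == "__main__":
    print(f"host memcpy throughput ~ {copy_throughput_gbps():.1f} GiB/s")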
This role is based in San Francisco, CA or Seattle, WA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
You might thrive in this role if you have:
A Bachelor's degree in Computer Science or a related field
Strong background in system architecture, large‑scale distributed systems and AI/HPC networking expertise (high performance multipath transport protocols, RDMA, RoCE, Infiniband)
5+ years of software development experience in C/C++ and Python
Experience developing and running large‑scale distributed AI workloads
Hands‑on lab experience with datacenter hardware (racks, servers, switches, NICs, accelerators, etc.)
Experience with low‑level software and/or FPGA hardware development (e.g., firmware or RTL)
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's affirmative action and equal employment opportunity policy statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US‑based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non‑public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non‑compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
$118k-163k yearly est. 17h ago
Robotics ML Research Engineer
Scale AI, Inc. 4.1
Research and development engineer job in San Francisco, CA
A leading AI company in San Francisco is seeking a Robotics Researcher to contribute to applied research and ML pipeline development. The ideal candidate has a PhD in Machine Learning or Robotics and 3+ years of relevant experience, excelling in collaboration and communication. This full-time role offers a competitive salary range of $218,400 - $273,000 USD and equity options, along with comprehensive benefits.
$218.4k-273k yearly 3d ago
Product Engineer: Build Developer-First AI Tools
Cognition 4.2
Research and development engineer job in San Francisco, CA
A cutting-edge AI lab in San Francisco is seeking experienced end-to-end engineers to join their small, talent-dense team. The role involves building innovative software products, enhancing user experiences, and collaborating closely with product teams. Ideal candidates should have experience with Python, Typescript, and React. Strong achievers who thrive in fast-moving environments are encouraged to apply.
$90k-126k yearly est. 1d ago
Foundation AI Research Engineer for Biology & Multi-omics
Prima Mente
Research and development engineer job in San Francisco, CA
A pioneering AI research firm in San Francisco seeks a skilled professional to design and implement foundational AI models for multi-omics at a large scale. The role involves developing ML algorithms, optimizing performance, and creating data processing workflows, thus contributing to transformative applications in medicine and biology. Ideal candidates will have strong backgrounds in machine learning, distributed computing, and teamwork, thriving in an innovative environment focused on making a significant impact in health and research.
$108k-163k yearly est. 1d ago
ML Research Engineer
Specter
Research and development engineer job in San Francisco, CA
Company Background:
Specter is creating a software-defined “control plane” for the physical world. We are starting with protecting American businesses by granting them ubiquitous perception over their physical assets.
To do so, we are creating a connected hardware-software ecosystem on top of multi-modal wireless mesh sensing technology. This allows us to drive down the cost and time of deploying sensors by 10x. Our platform will ultimately become the perception engine for a company's physical footprint, enabling real-time perimeter visibility, autonomous operations management, and “digital twinning” of physical processes.
Our co-founders Xerxes and Philip are passionate about empowering our partners in the fast-approaching world of physical AI and robotics. We are a small, fast-growing team who hail from Anduril, Tesla, Uber, and the U.S. Special Forces.
Role + Responsibilities:
Specter is hiring a perception AI engineer responsible for turning sensor data pipelines into actionable insights for our customers.
Responsibilities include:
Implementing and deploying a variety of deep-learning based vision, vision-language, and large language models to our world-class distributed perception system
Building and scaling a production-grade data-collection, labelling, and model re-training platform
Driving the design behind a multimodal software user interface
Qualifications:
5+ years of experience training, implementing, and deploying deep-learning based computer vision models in tasks such as object detection, semantic segmentation, object tracking, etc. (both single and multi-frame) in frameworks such as PyTorch, TensorRT, and ONNX
Experience fine-tuning, implementing, and deploying vision-language models and large language models in frameworks such as PyTorch, TensorRT-LLM, and ONNX
Experience optimizing model runtimes utilizing techniques such as quantization, pruning, low-rank adaptation, etc. where appropriate (a minimal quantization sketch follows this list)
Experience building production-grade RAG pipelines, and scaling vector databases in production
Strong experience in C++/Rust development in embedded systems and knowledge of Linux fundamentals
Strong knowledge of CUDA fundamentals
Experience with image/video processing, filtering, and enhancement. Knowledge of various video codecs desirable.
Experience with a variety of sensor types such as EO and IR cameras
Familiarity with Rust (or ability to come up the curve quickly!)
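To make the runtime-optimization bullet above concrete, here is a minimal, generic PyTorch dynamic-quantization sketch. It is illustrative only, not Specter's stack; a real perception model would be a detector or segmenter rather than this stand-in MLP.

import torch
import torch.nn as nn

# Stand-in model for illustration; not a real perception network.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Post-training dynamic quantization converts Linear weights to int8,
# typically shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = quantized(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 10])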
$108k-163k yearly est. 4d ago
Molecule ML Research Engineer - Graph Transformers & HPC
Achira
Research and development engineer job in San Francisco, CA
A pioneering technology firm is looking for a high-performing individual to advance molecular machine learning using deep learning architectures. You will architect and integrate state-of-the-art models, optimize performance from code to hardware, and collaborate with scientists to develop innovations for drug discovery and material sciences. The role requires expertise in frameworks like PyTorch and JAX, deep understanding of GPU optimizations, and a passion for tackling challenging technical problems.
$108k-163k yearly est. 1d ago
Audio AI Research Engineer
David AI
Research and development engineer job in San Francisco, CA
David AI is the first audio data research company. We bring an R&D approach to data, developing datasets with the same rigor AI labs bring to models. Our mission is to bring AI into the real world, and we believe audio is the gateway. Speech is versatile, accessible, and human; it fits naturally into everyday life. As audio AI advances and new use cases emerge, high-quality training data is the bottleneck. This is where David AI comes in.
David AI was founded in 2024 by a team of former Scale AI engineers and operators. In less than a year, we've brought on most FAANG companies and AI labs as customers. We recently raised a $50M Series B from Meritech, NVIDIA, Jack Altman (Alt Capital), Amplify Partners, First Round Capital and other Tier 1 investors.
Our team is sharp, humble, ambitious, and tight-knit. We're looking for the best research, engineering, product, and operations minds to join us on our mission to push the frontier of audio AI.
About our Research team
As an audio data research company, we believe model capabilities come from high quality, differentiated data. David AI's research team performs ambitious, long-term research into audio capabilities, while working with internal and external stakeholders to productionalize the latest research insights.
About this role
As a Founding Audio AI Research Engineer, you'll define the research agenda that shapes how the world's best AI labs train their audio models. You'll have a world-class workforce of human AI trainers, compute resources, and the independence to drive your roadmap.
In this role, you will
Define and build comprehensive evaluation frameworks for audio AI capabilities across speech, emotion, conversation dynamics, and acoustic patterns.
Research and prototype novel approaches to audio quality assessment, automated labeling, and data collection optimization.
Design targeted data collection pipelines for novel, high-value audio capabilities
Architect automated systems for continuous classifier improvement and prompt engineering evaluation.
Evaluate frontier models and create actionable research directions.
Publish findings and papers in top conferences.
Your background looks like
3+ years of experience in AI/ML research or engineering, preferably with multimodal models
Proven ability to translate research insights into production systems and scalable solutions.
Experience with Python and PyTorch.
Experience designing and conducting rigorous ML experiments and evaluations.
Bonus points if you have
PhD or Master's in Computer Science or a related field with a focus on ML or audio/speech.
Published research work in top‑tier CS, AI and/or audio‑related conferences.
Strong record of publication in evaluations or data, particularly for audio/speech.
Experience in large scale data processing or high performance computing.
Benefits
Unlimited PTO.
Top-notch health, dental, and vision insurance, with 100% coverage for most plans.
FSA & HSA access.
401k access.
Meals 2x daily through DoorDash + snacks and beverages available at the office.
Unlimited company‑sponsored Barry's classes.
$108k-163k yearly est. 2d ago
Research Engineer: AI Systems & LLM Evaluation + Equity
Mercor, Inc.
Research and development engineer job in San Francisco, CA
A cutting-edge technology company in San Francisco is seeking a Research Engineer. The role involves working on post-training and RLVR, designing experiments, and improving large language models. Ideal candidates will have a strong applied research background, coding proficiency, and an understanding of data structures and algorithms. The position requires in-person attendance five days a week and offers unique benefits including equity grants, relocation bonuses, and meal stipends.
$108k-163k yearly est. 2d ago
Founding RL Research Engineer - Open-Source Data & Infra
The LLM Data Company
Research and development engineer job in San Francisco, CA
A pioneering AI firm in San Francisco is seeking an experienced professional to drive scalable RL research and build innovative data generation pipelines. The role offers significant autonomy and the opportunity to work on cutting-edge projects that are validated in production. Candidates should have a Master's or PhD in Computer Science, expertise in core tooling like PyTorch, and familiarity with modern post-training techniques. Join as an early team member to enjoy a significant equity upside.
$108k-163k yearly est. 4d ago
Large-Scale ML Research Engineer for Biology & AI
Jack & Jill/External ATS
Research and development engineer job in San Francisco, CA
A leading AI research organization is seeking a Machine Learning Research Engineer in San Francisco, California. This role focuses on designing and optimizing AI models for large-scale multi-omics applications, driving critical advances in scientific understanding. The ideal candidate will have deep expertise in ML frameworks and experience deploying high-performance algorithms. Join a multidisciplinary team that is working on groundbreaking technology to protect the human brain.
$108k-163k yearly est. 17h ago
Research Engineer / Scientist, Alignment Science
Menlo Ventures
Research and development engineer job in San Francisco, CA
About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.
Our blog provides an overview of topics that the Alignment Science team is either currently exploring or has previously explored. Our current topics of focus include...
Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
Alignment Stress-testing: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
Automated Alignment Research: Building and aligning a system that can speed up & improve alignment research.
Representative projects:
Testing the robustness of our safety techniques by training language models to subvert them, and seeing how effective the models are at subverting our interventions.
Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
Write scripts and prompts to efficiently produce evaluation questions to test models' reasoning abilities in safety-relevant contexts.
Contribute ideas, figures, and writing to research papers, blog posts, and talks.
Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.
You may be a good fit if you:
Have significant software, ML, or research engineering experience
Have some experience contributing to empirical AI research projects
Have some familiarity with technical AI safety research
Prefer fast-moving collaborative projects to extensive solo efforts
Pick up slack, even if it goes outside your job description
Care about the impacts of AI
Strong candidates may also:
Have experience authoring research papers in machine learning, NLP, or AI safety
Have experience with LLMs
Have experience with reinforcement learning
Have experience with Kubernetes clusters and complex shared codebases
Candidates need not have:
100% of the skills needed to perform the job
Formal certifications or education credentials
The expected salary range for this position is:
$280,000 - $690,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Apply for this job
* indicates a required field
First Name *
Last Name *
Email *
Phone
Resume/CV
Accepted file types: pdf, doc, docx, txt, rtf
(Optional) Personal Preferences *
How do you pronounce your name?
Website
Publications (e.g. Google Scholar) URL
When is the earliest you would want to start working with us?
Do you have any deadlines or timeline considerations we should be aware of?
AI Policy for Application *
While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.
In a paragraph or two, why do you want to work on AI safety at Anthropic? *
Given your understanding of our team's priorities, what are three projects you'd be excited about working on at Anthropic that are aligned with those priorities? (1-2 sentences each) *
You can learn about the team's work here: *******************************
Share a link to the piece of work you've done that is most relevant to the Alignment Science team, along with a brief description of the work and its relevance. *
What's your ideal breakdown of your time in a working week, in terms of hours or % per week spent on meetings, coding, reading papers, etc.? *
In one paragraph, provide an example of something meaningful that you have done in line with your values. Examples could include past work, volunteering, civic engagement, community organizing, donations, family support, etc. *
Will you now or in the future require employment visa sponsorship to work in the country in which the job you're applying for is located? *
Additional Information *
Add a cover letter or anything else you want to share.
LinkedIn Profile
Please provide either your LinkedIn profile or resume; we require at least one of the two.
Are you open to working in-person in one of our offices 25% of the time? *
Are you open to relocation for this role? *
What is the address from which you plan on working? If you would need to relocate, please type "relocating".
Team Matching *
Pre-training - The Pre-training team trains large language models that are used by our product, alignment, and interpretability teams. Some projects include figuring out the optimal dataset, architecture, hyper-parameters, and scaling and managing large training runs on our cluster.
AI Alignment Research - the Alignment team works to train more aligned (helpful, honest, and harmless) models and does “alignment science” to understand how alignment techniques work and try to extrapolate to uncover and address new failure modes.
Reinforcement Learning - Reinforcement Learning is used by a variety of different teams, both for alignment and to teach models to be more capable at specific tasks.
Platform - The Platform team builds shared infrastructure used by Anthropic's research and product teams. Areas of ownership include: the inference service that generates predictions from language models; extensive continuous integration and testing infrastructure; several very large supercomputing clusters and the associated tooling.
Interpretability - The Interpretability team investigates what's going on inside large language models - in a sense, they are trying to reverse engineer the concepts and mechanics from the inscrutable learned weights of these systems. Their goal is to ensure that AI systems are safe by being able to assess whether they're doing what we actually want, all the way down to the individual neurons.
Societal Impacts - Our Societal Impacts team designs and executes experiments that evaluate the capabilities and harms of the technologies we build. They also support the policy team with empirical evidence.
Product - The Product research team trains, evaluates, and improves upon Claude, integrating all of our research techniques to make our AI systems as safe and helpful as possible.
Which teams or projects are you most interested in? (Note: if none of the teams you select are hiring, we won't proceed with your application at this time, although we may reach out if those teams open roles in the future.)
Have you ever interviewed at Anthropic before? *
Do you require visa sponsorship? *
Voluntary Self-Identification
For government reporting purposes, we ask candidates to respond to the below self-identification survey. Completion of the form is entirely voluntary. Whatever your decision, it will not be considered in the hiring process or thereafter. Any information that you do provide will be recorded and maintained in a confidential file.
As set forth in Anthropic's Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
If you believe you belong to any of the categories of protected veterans listed below, please indicate by making the appropriate selection. As a government contractor subject to the Vietnam Era Veterans Readjustment Assistance Act (VEVRAA), we request this information in order to measure the effectiveness of the outreach and positive recruitment efforts we undertake pursuant to VEVRAA. Classification of protected categories is as follows:
A "disabled veteran" is one of the following: a veteran of the U.S. military, ground, naval or air service who is entitled to compensation (or who but for the receipt of military retired pay would be entitled to compensation) under laws administered by the Secretary of Veterans Affairs; or a person who was discharged or released from active duty because of a service-connected disability.
A "recently separated veteran" means any veteran during the three-year period beginning on the date of such veteran's discharge or release from active duty in the U.S. military, ground, naval, or air service.
An "active duty wartime or campaign badge veteran" means a veteran who served on active duty in the U.S. military, ground, naval or air service during a war, or in a campaign or expedition for which a campaign badge has been authorized under the laws administered by the Department of Defense.
An "Armed forces service medal veteran" means a veteran who, while serving on active duty in the U.S. military, ground, naval or air service, participated in a United States military operation for which an Armed Forces service medal was awarded pursuant to Executive Order 12985.
Voluntary Self-Identification of Disability
Form CC-305
Page 1 of 1
OMB Control Number 1250-0005
Expires 04/30/2026
Why are you being asked to complete this form?
We are a federal contractor or subcontractor. The law requires us to provide equal employment opportunity to qualified people with disabilities. We have a goal of having at least 7% of our workers as people with disabilities. The law says we must measure our progress towards this goal. To do this, we must ask applicants and employees if they have a disability or have ever had one. People can become disabled, so we need to ask this question at least every five years.
Completing this form is voluntary, and we hope that you will choose to do so. Your answer is confidential. No one who makes hiring decisions will see it. Your decision to complete the form and your answer will not harm you in any way. If you want to learn more about the law or this form, visit the U.S. Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) website at ***************** .
How do you know if you have a disability?
A disability is a condition that substantially limits one or more of your “major life activities.” If you have or have ever had such a condition, you are a person with a disability. Disabilities include, but are not limited to:
Alcohol or other substance use disorder (not currently using drugs illegally)
Autoimmune disorder, for example, lupus, fibromyalgia, rheumatoid arthritis, HIV/AIDS
Blind or low vision
Cancer (past or present)
Cardiovascular or heart disease
Celiac disease
Cerebral palsy
Deaf or serious difficulty hearing
Diabetes
Disfigurement, for example, disfigurement caused by burns, wounds, accidents, or congenital disorders
Epilepsy or other seizure disorder
Gastrointestinal disorders, for example, Crohn's Disease, irritable bowel syndrome
Intellectual or developmental disability
Mental health conditions, for example, depression, bipolar disorder, anxiety disorder, schizophrenia, PTSD
Missing limbs or partially missing limbs
Mobility impairment, benefiting from the use of a wheelchair, scooter, walker, leg brace(s) and/or other supports
Nervous system condition, for example, migraine headaches, Parkinson's disease, multiple sclerosis (MS)
Neurodivergence, for example, attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder, dyslexia, dyspraxia, other learning disabilities
Partial or complete paralysis (any cause)
Pulmonary or respiratory conditions, for example, tuberculosis, asthma, emphysema
Short stature (dwarfism)
Traumatic brain injury
Disability Status
PUBLIC BURDEN STATEMENT: According to the Paperwork Reduction Act of 1995 no persons are required to respond to a collection of information unless such collection displays a valid OMB control number. This survey should take about 5 minutes to complete.
$108k-163k yearly est. 1d ago
Personal AI Research Engineer - Privacy-First
Workshop Labs
Research and development engineer job in San Francisco, CA
An innovative AI startup in San Francisco seeks a talented individual to join their team, focusing on developing personal AI models tailored to users' preferences. The ideal candidate will have a strong background in machine learning and experience with fine-tuning models. This role offers generous compensation, equity, and the opportunity to work at the forefront of AI development to ensure technology serves humanity's interests.
$108k-163k yearly est. 1d ago
Research Engineer
Applied Compute
Research and development engineer job in San Francisco, CA
Applied Compute builds Specific Intelligence for enterprises, unlocking the knowledge inside a company to train custom models and deploy an in-house agent workforce.
Today's state-of-the-art AI isn't one-size-fits-all; it's a tailored system that continuously learns from a company's processes, data, expertise, and goals. The same way companies compete today by having the best human workforce, the companies building for the future will compete by having the best agent workforce supporting their human bosses. We call this Specific Intelligence, and we're already building it today.
We are a small, talent-dense team of engineers, researchers, and operators who have built some of the most influential AI systems in the world, including reinforcement learning infrastructure at OpenAI and data foundations at Scale AI, with additional experience from Together, Two Sigma, and Watershed.
We're backed with $80M from Benchmark, Sequoia, Lux, Hanabi, Neo, Elad Gil, Victor Lazarte, Omri Casspi, and others. We work in-person in San Francisco.
The Role
As a founding Research Engineer, you'll train frontier-scale models and adapt them into specialized experts for enterprises. You will design and run experiments at scale, developing novel methods for agentic training.
You'll work closely with researchers to experiment with and invent new algorithms, and you'll collaborate with infrastructure engineers to post-train LLMs on thousands of GPUs. We believe that research velocity is tied to having world-class tooling; you'll build tools and observability for yourself and others, enabling deeper investigation into how models specialize during training. If you get excited by challenging systems and ML problems at scale, this role is for you.
What You'll Do
Post-train frontier-scale large language models on enterprise tasks and environments
Explore cutting-edge RL techniques, co-designing algorithms and systems
Partner with infrastructure engineers to scale training and inference efficiently across thousands of GPUs
Build high-performance internal tools for probing, debugging, and analyzing training runs
What We're Looking For
Experience training or serving LLMs
Experience building RL environments and evals for language models
Proficiency in PyTorch, JAX, or similar ML frameworks, and experience with distributed training
Strong experimental design skills
Strong Candidates May Also Have
Background in pre- or post-training
Previous experience in high-performance computing environments or working with large-scale clusters
Contributions to open-source ML research or infrastructure
Demonstrated technical creativity through published research, OSS contributions, or side projects
Logistics
Location: This role is based in San Francisco, California.
Benefits: Applied Compute offers generous health benefits, unlimited PTO, paid parental leave, lunches and dinners at the office, and relocation support as needed. We work in-person at a beautiful office in San Francisco's Design District.
Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process with you.
We encourage you to apply even if you do not believe you meet every single qualification. As set forth in Applied Compute's Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
$108k-163k yearly est. 17h ago
ML/AI Research Engineer - Agentic AI Lab (Founding Team)
Fabrion
Research and development engineer job in San Francisco, CA
Type: Full-Time
Compensation: Competitive salary + meaningful equity (founding tier)
Backed by 8VC, we're building a world‑class team to tackle one of the industry's most critical infrastructure problems.
About the Role
We're designing the future of enterprise AI infrastructure - grounded in agents, retrieval‑augmented generation (RAG), knowledge graphs, and multi‑tenant governance.
We're looking for an ML/AI Research Engineer to join our AI Lab and lead the design, training, evaluation, and optimization of agent-native AI models. You'll work at the intersection of LLMs, vector search, graph reasoning, and reinforcement learning - building the intelligence layer that sits on top of our enterprise data fabric.
This isn't a prompt engineer role. It's full‑cycle ML: from data curation and fine‑tuning to evaluation, interpretability, and deployment - with cost‑awareness, alignment, and agent coordination all in scope.
Core Responsibilities
Fine‑tune and evaluate open‑source LLMs (e.g. LLaMA 3, Mistral, Falcon, Mixtral) for enterprise use cases with both structured and unstructured data
Build and optimize RAG pipelines using LangChain, LangGraph, LlamaIndex, or Dust - integrated with our vector DBs and internal knowledge graph (a minimal retrieval sketch follows this list)
Train agent architectures (ReAct, AutoGPT, BabyAGI, OpenAgents) using enterprise task data
Develop embedding‑based memory and retrieval chains with token‑efficient chunking strategies
Create reinforcement learning pipelines to optimize agent behaviors (e.g. RLHF, DPO, PPO)
Establish scalable evaluation harnesses for LLM and agent performance, including synthetic evals, trace capture, and explainability tools
Contribute to model observability, drift detection, error classification, and alignment
Optimize inference latency and GPU resource utilization across cloud and on‑prem environments
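As a minimal sketch of the retrieval step behind the RAG bullet above: this is illustrative only, not the production pipeline described in this posting (which would use LangChain/LangGraph or LlamaIndex, a managed vector DB, chunking, and reranking); the embedding model and documents here are arbitrary placeholders.

import faiss
from sentence_transformers import SentenceTransformer

# Placeholder enterprise documents; a real pipeline would chunk source data first.
docs = [
    "Invoices over $10k require VP approval.",
    "Quarterly access reviews are due the first week of each quarter.",
    "Contract renewals are owned by the vendor-management team.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary small embedding model
doc_emb = encoder.encode(docs, normalize_embeddings=True)

# Inner product over normalized embeddings equals cosine similarity.
index = faiss.IndexFlatIP(doc_emb.shape[1])
index.add(doc_emb)

query = "Who approves a $25,000 invoice?"
q_emb = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(q_emb, 2)  # retrieve top-2 chunks

context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# The prompt would then go to the chosen LLM; that call is omitted here.
print(prompt)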
Desired Experience
Model Training
Deep experience fine‑tuning open‑source LLMs using HuggingFace Transformers, DeepSpeed, vLLM, FSDP, LoRA/QLoRA
Worked with both base and instruction‑tuned models; familiar with SFT, RLHF, DPO pipelines
Comfortable building and maintaining custom training datasets, filters, and eval splits
Understand trade‑offs in batch size, token window, optimizer, precision (FP16, bfloat16), and quantization
RAG + Knowledge Graphs
Experience building enterprise‑grade RAG pipelines integrated with real‑time or contextual data
Familiar with LangChain, LangGraph, LlamaIndex, and open‑source vector DBs (Weaviate, Qdrant, FAISS)
Experience grounding models with structured data (SQL, graph, metadata) + unstructured sources
Bonus: Worked with Neo4j, Puppygraph, RDF, OWL, or other semantic modeling systems
Agent Intelligence
Experience training or customizing agent frameworks with multi‑step reasoning and memory
Understand common agent loop patterns (e.g. Plan→Act→Reflect), memory recall, and tools
Familiar with self‑correction, multi‑agent communication, and agent ops logging
Optimization
Strong background in token cost optimization, chunking strategies, reranking (e.g. Cohere, Jina), compression, and retrieval latency tuning
Experience running models under quantized (int4/int8) or multi‑GPU settings with inference tuning (vLLM, TGI)
Preferred Tech Stack
LLM Training & Inference: HuggingFace Transformers, DeepSpeed, vLLM, FlashAttention, FSDP, LoRA
Agent Orchestration: LangChain, LangGraph, ReAct, OpenAgents, LlamaIndex
Vector DBs: Weaviate, Qdrant, FAISS, Pinecone, Chroma
Graph Knowledge Systems: Neo4j, Puppygraph, RDF, Gremlin, JSON-LD
Storage & Access: Iceberg, DuckDB, Postgres, Parquet, Delta Lake
Evaluation: OpenLLM Evals, TruLens, Ragas, LangSmith, Weights & Biases
Compute: Ray, Kubernetes, TGI, Sagemaker, LambdaLabs, Modal
Languages: Python (core), optionally Rust (for inference layers) or JS (for UX experimentation)
Soft Skills & Mindset
Startup DNA: resourceful, fast‑moving, and capable of working in ambiguity
Deep curiosity about agent‑based architectures and real‑world enterprise complexity
Comfortable owning model performance end‑to‑end: from dataset to deployment
Strong instincts around explainability, safety, and continuous improvement
Enjoy pair‑designing with product and UX to shape capabilities, not just APIs
Why This Role Matters
This role is foundational to our thesis: that agents + enterprise data + knowledge modeling can create intelligent infrastructure for real-world, multi-billion-dollar workflows. Your work won't be buried in research reports; it will be productionized and activated by hundreds of users and hundreds of thousands of decisions. If this is your dream role, we would love to hear from you.
$108k-163k yearly est. 3d ago
Real-Time Data Adaptation Research Engineer
Adaption Labs
Research and development engineer job in San Francisco, CA
A pioneering AI company is seeking talented individuals to lead innovative data adaptation efforts. The ideal candidate will possess deep expertise in model efficiency and algorithmic optimization, with strong programming skills in Python. Responsibilities include developing algorithmic recipes and exploring new data strategies. The position offers a flexible work environment with in-person collaboration in the Bay Area, quarterly offsites, and comprehensive benefits including a lunch stipend and travel budget.
$108k-163k yearly est. 4d ago
aMACI - Cryptographic Research Engineer
Dora Factory
Research and development engineer job in San Francisco, CA
Dora Factory is building the next generation of governance infrastructure for decentralized and real-world communities. With advanced cryptographic stacks such as MACI (Minimal Anti-Collusion Infrastructure) and anonymous MACI (aMACI), Dora Factory is a leader in privacy-preserving, tamper-resistant voting technology. By pushing the boundaries of trustless, anonymous, and autonomous governance, we're not only creating infrastructure for communities of the future, but also building new platforms for world consciousness.
About the Role
We are looking for an aMACI Cryptographic Research Engineer to join our research engineering team focused on advancing MACI, anonymous MACI, and other advanced governance protocols. You will be designing and engineering novel cryptographic mechanisms that are core to collusion resistance, anonymity, and censorship-resistance in decentralized governance systems and autonomous organizations. Your work will directly contribute to the evolution of autonomous, privacy-preserving organizations.
Key Responsibilities
Research and develop enhancements to the MACI and aMACI protocols, with a focus on scalability, anonymity, and verifiability.
Implement and test advanced cryptographic primitives (e.g., ZK-SNARKs, FHE, threshold cryptography) in autonomous systems (a toy commitment sketch follows this list).
Collaborate with protocol engineers and product teams to integrate cryptographic innovations into real-world deployments.
Conduct threat modeling and security analysis to ensure robustness against collusion, bribery, and deanonymization.
Optimize cryptographic circuits and improve gas efficiency for on-chain operations.
Publish findings, write documentation, and contribute to open-source codebases supporting MACI/aMACI.
Stay up to date on cutting-edge cryptographic research and bring best practices to the team.
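For readers unfamiliar with the primitives referenced above, the toy commit-reveal sketch below shows the basic commitment pattern that collusion-resistant voting builds on. It is deliberately far simpler than the ZK-SNARK, FHE, and threshold schemes MACI/aMACI actually use, and is generic example code, not Dora Factory's implementation.

import hashlib
import secrets

def commit(vote: str) -> tuple[str, bytes]:
    """Return (commitment, nonce); publish the commitment, keep the nonce secret."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + vote.encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, vote: str, nonce: bytes) -> bool:
    """Check that a revealed (vote, nonce) pair matches the published commitment."""
    return hashlib.sha256(nonce + vote.encode()).hexdigest() == commitment

c, n = commit("option-A")
assert verify(c, "option-A", n)
assert not verify(c, "option-B", n)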
Required Qualifications
Bachelor's, Master's, or Ph.D. in Cryptography, foundational Computer Science, Applied Mathematics, or related fields.
Experience working with libraries like Circom, Halo2, Arkworks, or gnark.
Solid understanding of cryptographic protocols such as zero-knowledge proofs (e.g., SNARKs, STARKs), secure multi-party computation, and anonymous credentials.
Experience implementing cryptographic systems in Rust, TypeScript, or Solidity.
A strong understanding of privacy, identity, and governance challenges in decentralized systems.
Preferred Qualifications
Contributions to open-source cryptography repos or zero-knowledge proof systems.
Prior experience with MACI or anonymous voting systems.
Familiarity with Ethereum and layer-2 ecosystems, especially ZK-rollups.
Familiarity with the Cosmos SDK and CosmWasm.
Publications or active participation in cryptography or blockchain research communities (e.g., IACR, ZK Summit, EthResearch).
Familiarity with governance design, decentralized voting mechanisms, and DAOs.
$108k-163k yearly est. 17h ago
AI research engineer
Monograph
Research and development engineer job in San Francisco, CA
Employment Type
Full time
Department
AI
Note: Below are the outcomes and competencies for the team. If you bring standout strengths in some areas but not all, you are still encouraged to apply.
Mission
Design, train, ship, iterate on, and innovate on the AI brains behind Pathos' AI Therapist. Combine research, data science, and engineering to create models, orchestration, and evaluation systems that make therapy conversations deeply effective, clinically grounded, and safe.
Outcomes
Improve quality of AI Therapy: Deliver measurable improvements in conversation quality, therapeutic alliance, and user outcomes through fine-tuning strategies, training data curation, building RL environments, new model architectures and other AI innovations.
Improve evaluation of AI quality: Improve on and maintain a robust eval stack that includes scripted tests, LLM-as-judge evaluations, human ratings, and safety checks (see the sketch after this list). Improve automated regression testing, detection of defects, and observability (e.g., dashboards).
Own the AI system. Build, maintain, and iterate on the production codebase that delivers AI therapy and supports the evaluation and iteration of our AI.
Productionize Models and Pipelines. Own the path from notebook to production: training jobs, model packaging, deployment, monitoring, and rollback strategies. Keep latency, reliability, and cost within agreed budgets while enabling rapid iteration on new ideas.
Improve Safety, Alignment, and Clinical Guardrails. Work with clinicians and internal experts to encode clinical guidelines into prompts, reward functions, tools, and filters. Proactively identify and reduce harmful or low-quality behaviors through targeted experiments, red teaming, and mitigations.
Own Research Roadmap and Experiment Velocity. Run high-quality experiments from hypothesis to analysis to improve our understanding of what matters and what works. Shape and execute a focused R&D roadmap.
Collaboration with Clinicians, Product, and Engineering. Translate product and clinical requirements into concrete model and system changes. Partner with full-stack product engineers so that new AI capabilities are easy to integrate and maintain in the product.
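A minimal sketch of the LLM-as-judge piece of the eval stack mentioned above. The call_llm function is a hypothetical placeholder for whatever judge-model client the team actually uses, and the rubric fields are illustrative, not the clinical rubric itself.

import json

RUBRIC = (
    "Rate the assistant's reply from 1-5 on empathy and on clinical grounding. "
    'Respond with JSON: {"empathy": int, "grounding": int, "rationale": str}'
)

def call_llm(prompt: str) -> str:
    """Hypothetical judge-model call; replace with a real client."""
    raise NotImplementedError

def judge_turn(user_msg: str, assistant_msg: str) -> dict:
    # Ask the judge model to score one conversation turn against the rubric.
    prompt = f"{RUBRIC}\n\nUser: {user_msg}\nAssistant: {assistant_msg}"
    return json.loads(call_llm(prompt))

def run_eval(transcripts: list[tuple[str, str]]) -> dict:
    """Average judge scores across a set of (user, assistant) turns."""
    scores = [judge_turn(u, a) for u, a in transcripts]
    n = len(scores)
    return {
        "empathy": sum(s["empathy"] for s in scores) / n,
        "grounding": sum(s["grounding"] for s in scores) / n,
    }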
Competencies
LLM and Applied ML Depth. Demonstrates strong experience with large language models, including fine-tuning, training data design, and model selection. Knows how to move core metrics on conversation quality and user outcomes, rather than chasing generic benchmarks. Can look at evals, transcripts, and metrics and quickly form grounded hypotheses for improvement.
Ships clean, maintainable, quality code. Not only do you know how transformers work, but you are also an engineer who has experience shipping production-level code and/or maintaining an AI system in production.
Data Engineering Skills. Can set up production-level data pipelines for training new models, evals, analysis, etc.
Scientific Mindset. You formulate hypotheses, and you are good at evaluating them (e.g., through experiments, data analysis, etc.). You are consistently learning at the cutting edge, and you're able to leverage and communicate those learnings to make the entire company more successful.
Ruthless Prioritizer. You are keenly aware of how to provide company value and to prioritize projects accordingly. Resistant to nerd-sniping.
Quality Obsessive: Refuses to ship subpar work, continuously improving the codebase.
Fast: Prioritizes speed by leveraging AI, breaking down complex tasks, shipping early, optimizing for learnings, iterating quickly, and avoiding over-engineering.
Strong communicator. You can work collaboratively in a positive way. Sees others' perspectives. Strong opinions, loosely held. Focused on user/business value, not ego.
Great to have
Personal or other experience with therapy or coaching
Domain knowledge of psychology, neuroscience, therapy, or coaching.
$108k-163k yearly est. 17h ago
AI Research Engineer - Virtual Collaboration Specialist
Victrays
Research and development engineer job in San Francisco, CA
A leading AI research firm in San Francisco seeks a Research Engineer to develop reinforcement learning systems for virtual collaboration. The role involves designing training environments and collaborating with product teams. Candidates should have strong Python programming skills and experience in machine learning. This position offers competitive compensation, flexible hours, and a hybrid work model, emphasizing the importance of diverse perspectives in AI research.
$108k-163k yearly est. 17h ago
Learn more about research and development engineer jobs
How much does a research and development engineer earn in San Rafael, CA?
The average research and development engineer in San Rafael, CA earns between $96,000 and $189,000 annually. This compares to the national average research and development engineer range of $74,000 to $135,000.
Average research and development engineer salary in San Rafael, CA
$135,000
What are the biggest employers of Research And Development Engineers in San Rafael, CA?
The biggest employers of Research And Development Engineers in San Rafael, CA are:
8427-Janssen Cilag Manufacturing Legal Entity
Job type you want
Full Time
Part Time
Internship
Temporary
Research And Development Engineer jobs by location