Research internship jobs in Walnut Creek, CA - 1,656 jobs
Audio AI Researcher: Build Steerable Speech Systems
Menlo Ventures
Research internship job in San Francisco, CA
A leading AI research company in San Francisco is hiring for a position on their Audio team. The role involves developing and training advanced audio models, optimizing performance, and working collaboratively across teams. Ideal candidates will have strong expertise in JAX or PyTorch, a background in audio machine learning, and a passion for creating safe, reliable AI systems. The compensation ranges from $350,000 to $500,000, and the position offers flexible working hours and a supportive team environment.
$66k-129k yearly est. 3d ago
Hedge Fund Research Analyst - Quant & Portfolio Monitoring
Callan 4.3
Research internship job in San Francisco, CA
A leading investment consulting firm in San Francisco seeks a hedge fund investment analyst to conduct research and monitor hedge fund performance. The candidate will collaborate with a team to provide insights into hedge fund strategies and assist in presentations to clients. An ideal candidate will possess a bachelor's degree in finance or a related field, along with two years of related experience. A commitment to strong communication and client relationships is essential.
$110k-169k yearly est. 1d ago
Fund & Co-Investment Research Associate
Allocate Holdings Inc.
Research internship job in Palo Alto, CA
Fund & Co-Investment Research Associate
About Allocate
Allocate is transforming private market investing by enabling wealth advisory firms to seamlessly build and manage high-quality private market programs.
About the Role
We're seeking a Fund & Co-Investment Research Associate to join our Private Investments team. This isn't a traditional allocator role: you'll be a critical connector between world-class fund managers and wealth advisory CIOs, and you'll help build and manage our technology platform.
You'll evaluate venture capital and private equity opportunities, conduct manager diligence, and spend significant time with wealth advisor investment teams discussing curated deals. You need to deeply understand opportunities, answer sophisticated questions, and provide clear, accurate analysis that helps clients make confident investment decisions.
Equally important, you'll own the completeness and accuracy of our platform content, ensuring all investment materials, copy, and compliance documentation stay current.
Key Responsibilities
Investment Research & Diligence: Conduct quantitative and qualitative research on private market managers and co-investment opportunities across venture capital, private equity, and adjacent asset classes. Prepare balanced investment analysis and recommendations.
Client Engagement: Spend substantial time with wealth advisor CIOs and their investment teams discussing curated opportunities. Field detailed questions, articulate investment theses, and provide real-time analysis that demonstrates deep command of each deal.
GP Relationship Development: Build and maintain relationships with fund managers, including structuring conversations, access discussions, and ongoing partnership development.
Platform Completeness: Own platform content integrity: ensure investment materials are updated, copy is accurate and compliant, and all client-facing documentation meets our quality standards.
Sourcing & Pipeline Development: Proactively source differentiated fund and co-investment opportunities through targeted outreach, industry relationships, and market intelligence.
Cross-Functional Collaboration: Partner with product, technology, and operations teams to refine platform capabilities and enhance the client investment experience.
Portfolio Monitoring: Support post-investment updates, quarterly reporting, and ongoing portfolio analytics for client transparency.
Market Intelligence: Develop insights and content that position Allocate as a leading voice in private markets.
Qualifications
Experience & Knowledge:
3+ years of experience in venture capital, private equity, wealth management, investment banking, or related fields.
Deep industry knowledge of the VC/PE ecosystem, fund structures, and how institutional platforms operate.
Strong understanding of private markets and alternative investments.
Skills & Attributes:
Hustler mentality: Proactive, resourceful, and self-directed; you don't wait to be told what to do.
Exceptional communication: You can explain complex investment concepts clearly and handle sophisticated Q&A with institutional allocators.
Highly personable: You build authentic relationships with both GPs and wealth advisory teams.
Commercial mindset: You think about client needs, platform scalability, and growth.
Startup DNA: You thrive in fast-paced environments, embrace ambiguity, and move quickly.
Meticulous attention to detail: You ensure accuracy and compliance across all materials.
Analytical excellence: You combine quantitative rigor with qualitative judgment.
Not a Fit If You're:
A pure allocator looking for a traditional fund-of-funds seat.
More comfortable with an analysis-only role than with relationship-building and client engagement.
Seeking a slow-paced, process-heavy environment.
Unfamiliar with how technology platforms operate.
Why Allocate?
Shape the infrastructure layer for an entire industry.
Ability to immediately work with top VCs and PE firms, along with CIOs in the wealth advisory world.
Allocate is one of the fastest-growing platforms in private markets.
Palo Alto office in the heart of the venture ecosystem.
High impact, high visibility work with real ownership and autonomy.
Additional Information
Location: Palo Alto, CA. Must be able to work in-office 4 days a week.
Compensation: $160K-$200K base + bonus + stock equity.
Benefits: Medical, dental, vision, 401(k), responsible time off.
Employment: Full-time.
Compliance: This role is subject to Allocate's Code of Ethics and all related compliance obligations.
In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire. Must have legal authorization to work in the U.S. now and in the future without visa sponsorship.
$49k-80k yearly est. 1d ago
AI Research Engineer US - San Francisco
Near Inc. 4.6
Research internship job in San Francisco, CA
The NEAR AI team is building decentralized and confidential machine learning infrastructure to enable user-owned AI. Our mission is to make open-source AI truly accessible by bringing together researchers worldwide to advance model research, fine-tuning, and large-scale training. We are seeking a Research Engineer with a strong background in large language models (LLMs) and reasoning systems to help push the boundaries of AI innovation.
What You'll Be Doing
Designing and executing fine-tuning pipelines for open-weight LLMs across diverse use cases, from concept to deployment.
Conducting research on novel reasoning architectures and training methods to improve model performance.
Collaborating with researchers and engineers across institutions to accelerate progress in AI research.
Contributing to open-source projects and advancing best practices in decentralized AI development.
What We're Looking For
Strong hands-on experience in LLM training, fine-tuning, and evaluation.
Deep understanding of reinforcement learning (RL), particularly in the context of reasoning.
Strong problem-solving skills and the ability to communicate technical ideas clearly to diverse collaborators.
A passion for advancing open and decentralized AI ecosystems.
We'd Love If You Have
Experience with formal verification
Experience with trusted execution environment (TEE)
Contributions to open-source machine learning libraries
Please let us know if you require any accommodations for your interview, and we'll do our best to provide them.
Locations: San Francisco.
$123k-173k yearly est. 2d ago
AI research engineer
Monograph
Research internship job in San Francisco, CA
Employment Type
Full time
Department
AI
Note: The sections below list the outcomes and competencies for this role. If you bring standout strengths in some areas but not all, you are still encouraged to apply.
Mission
Design, train, ship, iterate on, and innovate on the AI brains behind Pathos' AI Therapist. Combine research, data science, and engineering to create models, orchestration, and evaluation systems that make therapy conversations deeply effective, clinically grounded, and safe.
Outcomes
Improve quality of AI Therapy: Deliver measurable improvements in conversation quality, therapeutic alliance, and user outcomes through fine-tuning strategies, training data curation, building RL environments, new model architectures, and other AI innovations.
Improve evaluation of AI quality: Improve on and maintain a robust eval stack that includes scripted tests, LLM-as-judge evaluations, human ratings, and safety checks. Improve automated regression testing, detection of defects, and observability (e.g., dashboards).
Own the AI system: Build, maintain, and iterate on the production codebase that delivers AI therapy and supports the evaluation and iteration of our AI.
Productionize models and pipelines: Own the path from notebook to production: training jobs, model packaging, deployment, monitoring, and rollback strategies. Keep latency, reliability, and cost within agreed budgets while enabling rapid iteration on new ideas.
Improve safety, alignment, and clinical guardrails: Work with clinicians and internal experts to encode clinical guidelines into prompts, reward functions, tools, and filters. Proactively identify and reduce harmful or low-quality behaviors through targeted experiments, red teaming, and mitigations.
Own the research roadmap and experiment velocity: Run high-quality experiments from hypothesis to analysis to improve our understanding of what matters and what works. Shape and execute a focused R&D roadmap.
Collaborate with clinicians, product, and engineering: Translate product and clinical requirements into concrete model and system changes. Partner with full-stack product engineers so that new AI capabilities are easy to integrate and maintain in the product.
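The scripted-test layer of the eval stack described above can be sketched minimally. Everything here is a generic illustration under assumed names (the `model` callable and the test cases are placeholders), not the listing company's actual harness:

```python
def run_regression_evals(model, cases):
    """Fraction of scripted cases where the model's reply contains the
    expected substring -- the simplest layer of an eval stack, below
    LLM-as-judge and human ratings."""
    passed = sum(
        expected.lower() in model(prompt).lower()
        for prompt, expected in cases
    )
    return passed / len(cases)


# Illustrative stand-in for a deployed model and two scripted cases.
fake_model = lambda prompt: "I hear you; that sounds really hard."
cases = [
    ("I feel sad today", "hear you"),   # empathy reflection expected
    ("Say hello to me", "hello"),       # this one will fail
]
score = run_regression_evals(fake_model, cases)
```

In a real stack this pass rate would be tracked per release to catch regressions before deployment.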
Competencies
LLM and Applied ML Depth. Demonstrates strong experience with large language models, including fine-tuning, training data design, and model selection. Knows how to move core metrics on conversation quality and user outcomes, rather than chasing generic benchmarks. Can look at evals, transcripts, and metrics and quickly form grounded hypotheses for improvement.
Ships clean, maintainable, quality code. Not only do you know how transformers work, but you are also an engineer with experience shipping production-level code and/or maintaining an AI system in production.
Data Engineering Skills. Can set up production-level data pipelines for training new models, evals, analysis, etc.
Scientific Mindset. You formulate hypotheses and are good at evaluating them (e.g., through experiments and data analysis). You are consistently learning at the cutting edge, and you're able to leverage and communicate those learnings to make the entire company more successful.
Ruthless Prioritizer. You are keenly aware of how to provide company value and to prioritize projects accordingly. Resistant to nerd-sniping.
Quality Obsessive: Refuses to ship subpar work, continuously improving the codebase.
Fast: Prioritizes speed by leveraging AI, breaking down complex tasks, shipping early, optimizing for learnings, iterating quickly, and avoiding over-engineering.
Strong communicator. You work collaboratively and positively, see others' perspectives, and hold strong opinions loosely. Focused on user/business value, not ego.
Great to have
Personal or other experience with therapy or coaching
Domain knowledge of psychology, neuroscience, therapy, or coaching.
$108k-163k yearly est. 3d ago
Post-Training AI Research Engineer - In-Person (SF)
Letta Inc.
Research internship job in San Francisco, CA
A forward-looking AI company based in San Francisco is seeking talented researchers and engineers to pioneer enhancements in self-improving artificial intelligence. This role requires proficiency in Python and deep learning frameworks, along with expertise in post-training techniques. The ideal candidate thrives in an agile environment. This position is in-person, five days a week, allowing for direct collaboration with a world-class team.
$108k-163k yearly est. 1d ago
Founding AI CAD Research Engineer
Camfer
Research internship job in San Francisco, CA
A pioneering engineering company in San Francisco is seeking talented research engineers to develop innovative parametric CAD tools. You will work on tasks such as generating CAD models from text and creating vector representations of 3D models. Ideal candidates possess strong backgrounds in algorithms and deep learning, and the ability to innovate. This role offers autonomy and the chance to be part of a passionate and creative team.
$108k-163k yearly est. 2d ago
Staff Research Engineer - Generative Video Systems
Black.Ai
Research internship job in San Francisco, CA
A tech company in San Francisco is seeking a Staff Research Engineer to develop AI-powered generative video systems. You'll work closely with research and engineering teams to create scalable solutions and improve operations. Ideal candidates have significant experience in generative AI systems and strong collaboration skills. The role supports flexible working arrangements and offers a competitive salary package, alongside a range of benefits to promote well-being and family care.
$108k-163k yearly est. 2d ago
Founding RL Research Engineer - Open-Source Data & Infra
The LLM Data Company
Research internship job in San Francisco, CA
A pioneering AI firm in San Francisco is seeking an experienced professional to drive scalable RL research and build innovative data generation pipelines. The role offers significant autonomy and the opportunity to work on cutting-edge projects that are validated in production. Candidates should have a Master's or PhD in Computer Science, expertise in core tooling like PyTorch, and familiarity with modern post-training techniques. Join as an early team member to enjoy a significant equity upside.
$108k-163k yearly est. 2d ago
Research Engineer
Appliedcompute
Research internship job in San Francisco, CA
Applied Compute builds Specific Intelligence for enterprises, unlocking the knowledge inside a company to train custom models and deploy an in-house agent workforce.
Today's state-of-the-art AI isn't one-size-fits-all; it's a tailored system that continuously learns from a company's processes, data, expertise, and goals. In the same way companies compete today by having the best human workforce, the companies building for the future will compete by having the best agent workforce supporting their human bosses. We call this Specific Intelligence, and we're already building it today.
We are a small, talent-dense team of engineers, researchers, and operators who have built some of the most influential AI systems in the world, including reinforcement learning infrastructure at OpenAI and data foundations at Scale AI, with additional experience from Together, Two Sigma, and Watershed.
We're backed with $80M from Benchmark, Sequoia, Lux, Hanabi, Neo, Elad Gil, Victor Lazarte, Omri Casspi, and others. We work in-person in San Francisco.
The Role
As a founding Research Engineer, you'll train frontier-scale models and adapt them into specialized experts for enterprises. You will design and run experiments at scale, developing novel methods for agentic training.
You'll work closely with researchers to experiment with and invent new algorithms, and you'll collaborate with infrastructure engineers to post-train LLMs on thousands of GPUs. We believe that research velocity is tied to having world-class tooling; you'll build tools and observability for yourself and others, enabling deeper investigation into how models specialize during training. If you get excited by challenging systems and ML problems at scale, this role is for you.
What You'll Do
Post-train frontier-scale large language models on enterprise tasks and environments
Explore cutting-edge RL techniques, co-designing algorithms and systems
Partner with infrastructure engineers to scale training and inference efficiently across thousands of GPUs
Build high-performance internal tools for probing, debugging, and analyzing training runs
What We're Looking For
Experience training or serving LLMs
Experience building RL environments and evals for language models
Proficiency in PyTorch, JAX, or similar ML frameworks, and experience with distributed training
Strong experimental design skills
Strong Candidates May Also Have
Background in pre- or post-training
Previous experience in high-performance computing environments or working with large-scale clusters
Contributions to open-source ML research or infrastructure
Demonstrated technical creativity through published research, OSS contributions, or side projects
Logistics
Location: This role is based in San Francisco, California.
Benefits: Applied Compute offers generous health benefits, unlimited PTO, paid parental leave, lunches and dinners at the office, and relocation support as needed. We work in-person at a beautiful office in San Francisco's Design District.
Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process with you.
We encourage you to apply even if you do not believe you meet every single qualification. As set forth in Applied Compute's Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
$108k-163k yearly est. 3d ago
Machine Learning Research Engineer
Decagon Ai, Inc.
Research internship job in San Francisco, CA
About Decagon
Decagon is the leading conversational AI platform empowering every brand to deliver concierge customer experience. Our AI agents provide intelligent, human-like responses across chat, email, and voice, resolving millions of customer inquiries across every language and at any time.
Since coming out of stealth, Decagon has experienced rapid growth. We partner with industry leaders like Hertz, Eventbrite, Duolingo, Oura, Bilt, Curology, and Samsara to redefine customer experience at scale. We've raised over $200M from Bain Capital Ventures, Accel, a16z, BOND Capital, A*, Elad Gil, and notable angels such as the founders of Box, Airtable, Rippling, Okta, Lattice, and Klaviyo.
We're an in-office company, driven by a shared commitment to excellence and velocity. Our values shape how we work and grow as a team: customers are everything, relentless momentum, winner's mindset, and stronger together.
About the Team
The Research team at Decagon builds the most advanced conversational AI agents for enterprise customers. Decagon's AI agents understand context, respond with genuine empathy, and solve complex problems with surgical precision.
Our mission is to deliver magical support experiences - AI agents working alongside human agents to help users resolve their issues.
About the Role
On the Research team, you'll be responsible for building AI systems that can perform previously impossible tasks or achieve unprecedented levels of performance. You will design and implement state-of-the-art methods for instruction tuning and information retrieval. We're looking for people with strong engineering skills who write bug-free machine learning code and build the science behind the algorithms that power our AI agents.
Engineers here own their work end-to-end and are trusted to make a real impact. This role is for someone who dives deep into complex system challenges, builds elegant solutions that scale to millions of users, and creates automation that prevents problems before they happen.
In this role, you will
Develop models for customer support tasks that exceed the performance of closed source models
Experiment with small open-source models to drive order-of-magnitude reductions in latency across channels
Break down ambiguous research ideas into clear, iterative milestones and roadmaps.
Your background looks something like this
3+ years of experience in AI/ML engineering or research.
Proven track record of working on AI/ML projects from concept to production.
Experience fine-tuning and deploying LLMs in production environments.
Prior experience working with multi-modal models
Benefits
Medical, dental, and vision benefits
Take-what-you-need vacation policy
Daily lunches, dinners and snacks in the office to keep you at your best
Compensation
$250K - $415K + Offers Equity
$108k-163k yearly est. 2d ago
AI Engineer & Researcher, Inference - San Francisco, USA
Clutch Canada
Research internship job in San Francisco, CA
PLEASE APPLY THROUGH THIS LINK: https://job-boards.greenhouse.io/speechify/jobs/**********. DO NOT APPLY BELOW.
The mission of Speechify is to make sure that reading is never a barrier to learning. Over 50 million people use Speechify's text-to-speech products to turn PDFs, books, Google Docs, news articles, websites, and more into audio, so they can read faster, read more, and remember more. Speechify's products include its iOS app, Android app, Mac app, Chrome Extension, and Web App. Google named Speechify Chrome Extension of the Year and Apple named Speechify App of the Day.
Today, nearly 200 people around the globe work in a 100% distributed setting. Our team includes frontend and backend engineers, AI research scientists, and others from Amazon, Microsoft, and Google, with backgrounds from top programs and companies.
This is a key role and ideal for someone who thinks strategically, enjoys fast-paced environments, cares about making product decisions, and has experience building great user experiences that delight users.
We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and obsession with winning are paramount.
Our interview process involves several technical interviews and we aim to complete them within 1 week.
What You'll Do
Work alongside machine learning researchers, engineers, and product managers to bring our AI Voices to customers for a diverse range of use cases
Deploy and operate the core ML inference workloads for our AI Voices serving pipeline
Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models
Build tools to give visibility into bottlenecks and instability, and design and implement solutions to address the highest priority issues
An Ideal Candidate Should Have
Experience shipping Python-based services
Experience being responsible for the successful operation of a critical production service
Experience with public cloud environments, GCP preferred
Experience with infrastructure as code, Docker, and containerized deployments
Preferred: Experience deploying high-availability applications on Kubernetes
Preferred: Experience deploying ML models to production
What We Offer
A dynamic environment where your contributions shape the company and its products
A team that values innovation, intuition, and drive
Autonomy, fostering focus and creativity
The opportunity to have a significant impact in a revolutionary industry
Competitive compensation, a welcoming atmosphere, and a commitment to an exceptional asynchronous work culture
The privilege of working on a product that changes lives for people with learning differences like dyslexia, ADD, and more
An active role at the intersection of artificial intelligence and audio - a rapidly evolving tech domain
Think you're a good fit for this job?
Tell us more about yourself and why you're interested in the role when you apply. And don't forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
$108k-163k yearly est. 5d ago
ML Research Engineer - Production LLMs & Multimodal
DRH Search
Research internship job in San Francisco, CA
A leading AI Customer Support startup located in San Francisco is seeking a Machine Learning Research Engineer. The role involves developing cutting-edge AI models for customer support, focusing on enhancing performance beyond existing solutions. Ideal candidates will have over 3 years of experience in AI/ML engineering and a proven ability to take projects from concept to production. This is an on-site position in the Rincon Hill area, promising exciting challenges and innovative work opportunities.
$108k-163k yearly est. 1d ago
Generative AI Research Engineer (LLMs & Multimodal)
Globalsouthopportunities
Research internship job in San Jose, CA
A leading tech company invites applications for a Machine Learning Engineer to advance AI research in California. The role focuses on Generative AI and requires a PhD or Master's degree, with a minimum of 3 publications in AI. The candidate will collaborate across teams and publish findings, contributing to networking solutions and AI innovations. This full-time position offers a competitive salary and benefits, fostering an inclusive workplace culture.
$108k-163k yearly est. 4d ago
LLM Architecture Research Engineer
Openai 4.2
Research internship job in San Francisco, CA
A leading AI research company in San Francisco seeks a talented architecture team member to enhance its flagship models. Ideal candidates will have a strong understanding of LLM architectures and experience with deep learning. The role offers a hybrid work model of 3 days in the office, alongside relocation assistance. This position presents an exciting opportunity to contribute to groundbreaking AI advancements.
$124k-173k yearly est. 1d ago
Research Engineer - On-Chain Protocols for Payments
Tempo 4.2
Research internship job in San Francisco, CA
A blockchain company in San Francisco is seeking a candidate to innovate the Tempo protocol and collaborate with industry experts. The ideal applicant will possess substantial experience in protocol research, particularly regarding consensus and scaling, as well as strong abilities in Solidity. This role emphasizes the design of new features while addressing privacy and efficiency, contributing to the evolution of payment solutions in the crypto space.
$111k-148k yearly est. 2d ago
Research Engineer
Mercor, Inc.
Research internship job in San Francisco, CA
About Mercor
Mercor is at the intersection of labor markets and AI research. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development.
Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $1.5 million a day.
Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast‑paced and deeply committed team. You'll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society.
Mercor is a profitable Series C company valued at $10 billion. We work in‑person five days a week in our new San Francisco headquarters.
About the Role
As a Research Engineer at Mercor, you'll work at the intersection of engineering and applied AI research. You'll contribute directly to post‑training and RLVR, synthetic data generation, and large‑scale evaluation workflows that meaningfully impact frontier language models.
Your work will be used to train large language models to master tool use, agentic behavior, and real‑world reasoning in real‑world production environments. You'll shape rewards, run post‑training experiments, and build scalable systems that improve model performance. You'll help design and evaluate datasets, create scalable data augmentation pipelines, and build rubrics and evaluators that push the boundaries of what LLMs can learn.
What You'll Do
Work on post‑training and RLVR pipelines to understand how datasets, rewards, and training strategies impact model performance.
Design and run reward‑shaping experiments and algorithmic improvements (e.g., GRPO, DAPO) to improve LLM tool‑use, agentic behavior, and real‑world reasoning.
Quantify data usability, quality, and performance uplift on key benchmarks.
Build and maintain data generation and augmentation pipelines that scale with training needs.
Create and refine rubrics, evaluators, and scoring frameworks that guide training and evaluation decisions.
Build and operate LLM evaluation systems, benchmarks, and metrics at scale.
Collaborate closely with AI researchers, applied AI teams, and experts producing training data.
Operate in a fast‑paced, experimental research environment with rapid iteration cycles and high ownership.
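The GRPO-style reward shaping mentioned above centers on group-relative advantages: each sampled completion's reward is normalized against the mean and standard deviation of its sampling group, removing the need for a learned value baseline. A minimal generic sketch, not Mercor's implementation:

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO-style post-training.

    `rewards` holds the scalar rewards for all completions sampled from
    the same prompt; each advantage is that completion's reward,
    standardized within the group (eps guards against zero variance).
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]


# Three completions for one prompt, scored by a reward model or rubric.
advs = grpo_advantages([1.0, 2.0, 3.0])
```

Completions above the group mean get positive advantages and are reinforced; those below are pushed down, which is what makes rubric and reward quality so consequential for training.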
What We're Looking For
Strong applied research background, with a focus on post‑training and/or model evaluation.
Strong coding proficiency and hands‑on experience working with machine learning models.
Strong understanding of data structures, algorithms, backend systems, and core engineering fundamentals.
Familiarity with APIs, SQL/NoSQL databases, and cloud platforms.
Ability to reason deeply about model behavior, experimental results, and data quality.
Excitement to work in person in San Francisco, five days a week (with optional remote Saturdays), and thrive in a high‑intensity, high‑ownership environment.
Nice To Have
Real‑world post‑training team experience in industry (highest priority).
Publications at top‑tier conferences (NeurIPS, ICML, ACL).
Experience training models or evaluating model performance.
Experience in synthetic data generation, LLM evaluations, or RL‑style workflows.
Work samples, artifacts, or code repositories demonstrating relevant skills.
Benefits
Generous equity grant vested over 4 years
A $20K relocation bonus (if moving to the Bay Area)
A $10K housing bonus (if you live within 0.5 miles of our office)
A $1K monthly stipend for meals
Free Equinox membership
Health insurance
$20k yearly 5d ago
AI Research Engineer - Drug Discovery, Foundation Models
Menlo Ventures
Research internship job in San Francisco, CA
A pioneering AI biology firm in San Francisco is seeking an AI Engineer to develop next-generation AI models for drug discovery. You will collaborate with top talent and work on cutting-edge technology that aims to transform drug creation. Ideal candidates will have strong programming skills, experience with HPC infrastructure, and familiarity with deep learning libraries. This position offers a competitive salary, equity package, and a collaborative culture focused on impactful advancements in the healthcare field.
$108k-163k yearly est. 3d ago
Research Engineer, Privacy
Openai 4.2
Research internship job in San Francisco, CA
About the Team
The Privacy Engineering Team at OpenAI is committed to integrating privacy as a foundational element in OpenAI's mission of advancing Artificial General Intelligence (AGI). Our focus is on all OpenAI products and systems handling user data, striving to uphold the highest standards of data privacy and security.
We build essential production services, develop novel privacy-preserving techniques, and equip cross-functional engineering and research partners with the necessary tools to ensure responsible data use. Our approach to prioritizing responsible data use is integral to OpenAI's mission of safely introducing AGI that offers widespread benefits.
About the Role
As a part of the Privacy Engineering Team, you will work on the front lines of safeguarding user data while ensuring the usability and efficiency of our AI systems. You will help us understand and implement the latest research in privacy-enhancing technologies such as differential privacy and federated learning, and study risks such as data memorization. Moreover, you will focus on investigating the interaction between privacy and machine learning, developing innovative techniques to improve data anonymization, and preventing model inversion and membership inference attacks.
This position is located in San Francisco. Relocation assistance is available.
In this role, you will:
Design and prototype privacy-preserving machine-learning algorithms (e.g., differential privacy, secure aggregation, federated learning) that can be deployed at OpenAI scale.
Measure and strengthen model robustness against privacy attacks such as membership inference, model inversion, and data memorization leaks, balancing utility with provable guarantees.
Develop internal libraries, evaluation suites, and documentation that make cutting‑edge privacy techniques accessible to engineering and research teams.
Lead deep‑dive investigations into the privacy-performance trade‑offs of large models, publishing insights that inform model‑training and product‑safety decisions.
Define and codify privacy standards, threat models, and audit procedures that guide the entire ML lifecycle, from dataset curation to post‑deployment monitoring.
Collaborate across Security, Policy, Product, and Legal to translate evolving regulatory requirements into practical technical safeguards and tooling.
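The differential‑privacy work named in the responsibilities above can be illustrated with a minimal sketch of the classic Laplace mechanism for a private mean (an illustrative example under generic assumptions, not OpenAI's actual implementation; the function name and parameters are hypothetical):

```python
import numpy as np

def laplace_mean(values, lo, hi, epsilon, rng=None):
    """Release an epsilon-differentially-private mean of `values`.

    Clipping each value to [lo, hi] bounds the sensitivity of the
    mean at (hi - lo) / n, so adding Laplace noise with scale
    sensitivity / epsilon yields epsilon-differential privacy.
    """
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(values, lo, hi)
    n = len(clipped)
    sensitivity = (hi - lo) / n
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return float(clipped.mean() + noise)

# With a generous privacy budget (large epsilon), the released
# value stays close to the true mean of 0.5.
data = [0.2, 0.4, 0.6, 0.8]
print(laplace_mean(data, lo=0.0, hi=1.0, epsilon=10.0))
```

The same utility-versus-privacy trade-off the posting describes is visible here: shrinking `epsilon` increases the noise scale, giving a stronger guarantee at the cost of a less accurate released statistic.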
You might thrive in this role if you:
Have hands‑on research or production experience with privacy-enhancing technologies (PETs).
Are fluent in modern deep‑learning stacks (PyTorch/JAX) and comfortable turning cutting‑edge papers into reliable, well‑tested code.
Enjoy stress‑testing models, probing them for private data leakage, and can explain complex attack vectors to non‑experts with clarity.
Have a track record of publishing (or implementing) novel privacy or security work and relish bridging the gap between academia and real‑world systems.
Thrive in fast‑moving, cross‑disciplinary environments where you alternate between open‑ended research and shipping production features under tight deadlines.
Communicate crisply, document rigorously, and care deeply about building AI systems that respect user privacy while pushing the frontiers of capability.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's affirmative action and equal employment opportunity policy statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act for U.S.‑based candidates. For Unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non‑public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non‑compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI global applicant privacy policy.
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
$124k-173k yearly est. 4d ago
Research Engineer
Tempo 4.2
Research internship job in San Francisco, CA
Tempo is a layer-1 blockchain purpose-built for stablecoins and real-world payments, born from Stripe's experience in global payments and Paradigm's expertise in crypto tech.
Tempo's payment-first design provides a scalable, low-cost, predictable backbone that meets the needs of high-volume payment use cases. Our goal is to move money reliably, cheaply, and at scale. Our north star is simplicity for users: fintechs, traditional banks, merchants, platforms, and anyone else looking to move their payments into the 21st century.
We're building Tempo with design partners who are global leaders in AI, e-commerce, and financial services: Anthropic, Coupang, Deutsche Bank, DoorDash, Mercury, Nubank, OpenAI, Revolut, Shopify, Standard Chartered, Visa, and more.
We're a team of crypto-optimists, building the infrastructure needed to bring real, substantial economic flows onchain. Our team primarily works in-person out of our San Francisco and NYC offices. We like to move fast and swing for the fences - join us!
The Role
This person would work closely with researchers at Tempo, Paradigm, Stripe, and our design partners to invent the future of the Tempo protocol.
Responsibilities
Design new protocol-level features to solve problems like privacy, capital efficiency, and scaling
Pressure-test ideas with the rest of the team and the community
Prototype reference implementations of new features and ship them into production
Explain novel inventions to users, partners, and the world
Qualifications
Experience with protocol research on topics like consensus, scaling, cryptography, liquidity, and MEV
Demonstrated experience shipping production-level features for DEXs, L1s, L2s, or other onchain products
Ability to write, test, and optimize Solidity code
Deep understanding of EVM blockchains, including features like account abstraction and gas metering
Interest in using crypto to solve real-world problems around payments
Attributes
Razor-sharp thinker with precise command of language
Concise, evidence-based storytelling ability
Excellent organizational and logistical skills
Intense curiosity and open-mindedness
Scrappiness; willingness to roll up sleeves
Growth mindset
How much does a research internship earn in Walnut Creek, CA?
The average research internship in Walnut Creek, CA earns between $35,000 and $91,000 annually. This compares to the national average research internship range of $26,000 to $59,000.
Average research internship salary in Walnut Creek, CA