Research internship jobs in Fremont, CA - 3,642 jobs
Online Research Participant - Earn Cash for Sharing Your Views
Opinion Bureau
Research internship job in San Mateo, CA
Take quick online surveys and earn rewards for sharing your thoughts. Join today - it's free and easy!
$65k-128k yearly est. 1d ago
Founding AI Researcher for Diamond Product
Menlo Ventures
Research internship job in San Francisco, CA
A dynamic tech startup in San Francisco is seeking a founding AI Researcher to advance their AI product capabilities. The ideal candidate will lead research in large language models and program synthesis, integrating groundbreaking insights into practical applications. With a competitive compensation package of $140-200k base salary plus substantial equity, this role offers a unique opportunity to shape the future of AI in software engineering workflows. Join a passionate team committed to innovation and diversity.
$140k-200k yearly 3d ago
Exascale Storage SRE for AI Research
Pantera Capital
Research internship job in Palo Alto, CA
A cutting-edge technology company in California seeks a Site Reliability Storage Engineer to design and operate scalable storage systems for AI research. The ideal candidate will have strong programming skills in Rust or Go, and experience with IaC tools and Kubernetes. This role offers a competitive salary ranging from $180,000 to $440,000 and comprehensive benefits including equity and health insurance.
$65k-128k yearly est. 2d ago
Machine Learning - Research
Causal Labs, Inc.
Research internship job in San Francisco, CA
About us
Our mission is to build causal intelligence, starting with physics models to predict and control the weather.
We're building a small team driven by a deep passion and urgency to solve this civilizationally important problem.
Our founding team has led & shipped models across self-driving cars, humanoid robotics, protein folding, and video generation at world-class institutions including Google DeepMind, Cruise, Waymo, Meta, Nabla Bio, and Apple.
Responsibilities
Work across the full ML stack (data, model, eval, and infrastructure)
Implement novel model architectures and training algorithms
Build data pipelines and training infrastructure for massive, petabyte-scale, multimodal datasets
Rapidly iterate on experiments and ablations
Stay up-to-date on research to bring new ideas to work
What we're looking for
We value a relentless approach to problem-solving, rapid execution, and the ability to quickly learn in unfamiliar domains.
Strong grasp of machine learning fundamentals, and depth in at least one core domain (e.g. Computer Vision, Sensor Fusion, Language Models, Physics-informed NNs)
Experienced at training models and understanding experiment results through careful analysis and ablation studies.
Experienced at writing and optimizing massive petabyte-scale data pipelines.
Familiarity with distributed training and inference.
[bonus] Familiarity with meteorology, computational fluid dynamics, and/or numerical simulations.
You don't have to meet every single requirement above.
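The list above asks for experience with massive, petabyte-scale multimodal data pipelines and distributed training. As a rough, hedged illustration of the data side only, here is a minimal sketch of a sharded streaming dataset in PyTorch that splits shards across DataLoader workers; the shard paths and record format are hypothetical placeholders, not Causal Labs' actual pipeline.

```python
# Minimal sketch of a sharded streaming dataset in PyTorch, in the spirit of the
# "petabyte-scale data pipelines" item above. Shard paths and the record format
# are hypothetical placeholders.
import torch
from torch.utils.data import IterableDataset, DataLoader, get_worker_info

class ShardedStream(IterableDataset):
    def __init__(self, shard_paths):
        self.shard_paths = shard_paths  # e.g. one file (or object-store URI) per shard

    def _read_shard(self, path):
        # Placeholder reader: assumes each shard is a saved list of pre-tokenized tensors.
        for record in torch.load(path):
            yield record

    def __iter__(self):
        info = get_worker_info()
        # Round-robin shards across DataLoader workers so each shard is read exactly once.
        paths = (self.shard_paths if info is None
                 else self.shard_paths[info.id::info.num_workers])
        for path in paths:
            yield from self._read_shard(path)

# Usage: stream batches without materializing the full dataset in memory.
loader = DataLoader(ShardedStream([f"shard_{i:05d}.pt" for i in range(1024)]),
                    batch_size=32, num_workers=8)
```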
Benefits
Work on deeply challenging, unsolved problems
Competitive cash and equity compensation
Medical, dental, and vision insurance
Catered lunch & dinner
Unlimited paid time off
Visa sponsorship & relocation support
$66k-129k yearly est. 3d ago
ML Researcher - EEG/BCI Brain Decoding
Alljoined, Inc.
Research internship job in San Francisco, CA
A technology company based in San Francisco is seeking a talented Machine Learning Researcher to join its R&D team. The role focuses on developing advanced machine learning models for EEG-based neural decoding, collaborating with experts in the field, and contributing to high-impact research publications. The ideal candidate has a strong background in ML research or engineering, particularly in deep learning, and proficiency with tools like Python and PyTorch. This position offers a competitive salary and benefits, including visa sponsorship and housing support.
$66k-129k yearly est. 20h ago
Machine Learning Researcher
Prima Mente
Research internship job in San Francisco, CA
Prima Mente's goal is to deeply understand the brain, protect it from neurological disease, and enhance it in health. We do this by generating our own data, building brain foundation models, and translating discovery into real clinical and research impact.
Role focus
As a Machine Learning Researcher, you will help design, train, and evaluate foundation models that learn from large-scale biological data (genomics, epigenomics, single-cell, proteomics, clinical signals).
Depending on your strengths, you might skew more towards:
Modelling & algorithms - new architectures, training objectives, scaling strategies, multi-task / multi-modal learning.
Applied research - framing high-impact questions with clinicians and biologists, building end-to-end disease models, and stress-testing them on real data.
Analysis & insight - probing model internals, interpretability, mechanistic understanding, biomarker discovery.
Systems & efficiency - if you enjoy it, helping push training, data, and inference infrastructure to the next scale.
The role is deliberately broad: we're looking for exceptional ML talent with strong research instincts, not a single CV template.
What you'll work on
You won't do all of these on day one; think of this as the space of things you may own.
Design and implement ML models for large-scale biological data, from pre-training to task-specific fine tuning.
Partner with biologists, clinicians, and data scientists to translate biological and clinical questions into tractable ML problems.
Run end-to-end experiments: dataset curation, training, evaluation, error analysis, and iteration.
Develop and refine evaluation suites for robustness, generalisation, and clinical relevance (e.g. across cohorts, sites, populations).
Explore multi-modal and multi-task training across genomic, epigenomic, transcriptomic, proteomic and clinical signals.
Perform in-depth model analysis to extract mechanistic or biomarker-level insights, not just metrics.
Collaborate on papers, internal memos, and external communication of key research results.
(Optional / plus) Contribute to scaling and optimisation of training and data pipelines, in close collaboration with research engineers.
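As a hedged illustration of the evaluation-suite item above (robustness and generalisation across cohorts, sites, and populations), here is a minimal sketch of a cohort-stratified metric report; the AUROC metric and the cohort labels are illustrative choices, not Prima Mente's actual evaluation stack.

```python
# Minimal sketch of a cohort-stratified evaluation: report a metric per cohort
# plus the worst-cohort score, so a model that looks fine on the pooled metric
# can't hide a failure on one site or population. Metric and labels are illustrative.
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def cohort_report(y_true, y_score, cohorts):
    """Return AUROC per cohort plus the worst-cohort score."""
    groups = defaultdict(lambda: ([], []))
    for t, s, c in zip(y_true, y_score, cohorts):
        groups[c][0].append(t)
        groups[c][1].append(s)
    per_cohort = {c: roc_auc_score(t, s) for c, (t, s) in groups.items()}
    return per_cohort, min(per_cohort.values())

# Usage with toy data: three hypothetical collection sites.
per_cohort, worst = cohort_report(
    y_true=[0, 1, 1, 0, 1, 0],
    y_score=[0.1, 0.9, 0.4, 0.2, 0.8, 0.7],
    cohorts=["site_a", "site_a", "site_b", "site_b", "site_c", "site_c"],
)
```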
Expected Growth
This is illustrative; we know great people ramp differently.
1 month:
You've reproduced key baselines, run initial experiments on our internal datasets, and are comfortable with our training stack.
You've shipped your first improvements (e.g. better objective, data pre-processing, or evaluation variant) and presented results to the team.
3 months:
You own a research thread: a model family, disease application, or methodological idea.
You're independently designing experiments, refining hypotheses, and coordinating with relevant partners (ML, wet lab, clinical).
6 months:
You've delivered meaningful research impact: a stronger model, a new capability, a better biomarker, or evidence that changes our direction.
You are a go-to person for your area, helping others design experiments, debug models, and evaluate results.
Why Join Us:
Direct patient impact: Your work sits on the critical path to earlier detection and better treatment of devastating brain diseases.
End-to-end environment: We run the full stack from data generation to models to clinical studies, giving you an unusually tight feedback loop.
Exceptional peers: You'll work with a small, high-calibre team across ML, biology, and clinical medicine.
High autonomy, high bar: You'll have genuine ownership over problems that matter, with the expectation of operating at a very high standard.
Who You Are
You likely recognise yourself in several of these:
Motivated by advancing human health through AI, especially in neuroscience and complex disease.
Deeply curious, with a habit of reading papers, prototyping ideas, and stress-testing your own assumptions.
Comfortable doing real engineering work in service of research - but see yourself first and foremost as a researcher.
Enjoy collaborating across disciplines and explaining your work to people with very different backgrounds.
Able to stay with hard problems for a long time, and to make progress even when the path isn't obvious.
Ideal experience
We don't expect you to check every box. Strong applicants often have depth in some of these and interest in growing into others.
Strong background in machine learning or a closely-related field (e.g. deep learning, statistics, optimisation).
Industry, academic, or hybrid paths are all welcome.
Demonstrated experience training and evaluating modern ML models (e.g. transformers, diffusion, graph models, sequence models).
Solid software skills in Python and at least one major ML framework (PyTorch, JAX, or TensorFlow).
Experience designing and running non-trivial experiments: controlling for confounders, building robust baselines, and doing thorough error analysis.
Ability to write clearly - whether in code comments, research docs, or papers.
At least one of the following (more is a plus, not a requirement):
Experience with large-scale data (e.g. 100B+ tokens or equivalent) or distributed training.
Background in computational biology, genomics, epigenomics, neuroscience, or related areas.
Work on foundation models (language, vision, or multi-modal) and interest in applying that to biology.
Infra/optimisation experience (e.g. FSDP/ZeRO, quantisation, compilation, custom kernels) - especially valuable, but not mandatory.
If you're unsure whether you “count” as an engineer or a researcher: please apply. We care about what you can do and how you think, not your current job title.
Location
Based in San Francisco, US or London, UK. We support visa applications.
Culture Insight
What we are doing is extremely hard. Prima Mente is for great people. We are team players who appreciate challenges, want to be hands-on, and thrive on curiosity by throwing away assumptions. We are focused on excellence at pace and huge personal growth. We are strong communicators who are highly disciplined and rigorous.
Prima Mente operates with a flat organizational structure. We gain and share knowledge by contributing to multiple opportunities. Leadership is given to those who show initiative and consistently deliver excellence.
We arrange our lives so we can work in person as much as possible.
Our Values
Exceptional performance at exceptional pace
The solutions we build demand uncompromising quality and rigour.
The problems we are solving are grave and present.
Inquisitive discovery
We embrace curiosity and creativity.
Every question is a path to a transformational breakthrough.
Radical candour
We practice unwavering honesty and transparency in all our challenges and interactions.
Purposeful individuality
Every individual in our team is celebrated for their identity, uniqueness, and experiences.
We are invested in each one's bespoke personal development.
Nurturing individuality will supercharge our collective purpose and spirit.
Patient impact at scale
We have a steadfast commitment to improve the health and well-being of patients globally.
Every experiment run, every dataset analysed, and every innovation developed, is a step towards achieving a scalable impact.
$66k-129k yearly est. 2d ago
Machine Learning Researcher
Doe 3.8
Research internship job in San Francisco, CA
At Doe, we're building an AI workforce that operates mission-critical workflows across private equity-backed rollups - starting with DSOs. These agents need to be fast, resilient, auditable, and secure.
Here's why we might not be the right fit for you:
We work hard and have a high-velocity environment with lots of growth opportunities.
We value exceptional performance and continuous improvement. We believe that if you aren't constantly learning, you aren't growing.
You will be responsible and accountable for making high-impact decisions.
Who we're looking for:
Someone who has a deep understanding of the ML development cycle, focusing on iteration speed and identifying bottlenecks
Someone who is goal oriented and driven. You must be comfortable being given a goal and doing whatever is necessary to achieve the goal
Proficient in Python and TensorFlow or PyTorch
$74k-122k yearly est. 1d ago
Hedge Fund Research Analyst - Quant & Portfolio Monitoring
Callan 4.3
Research internship job in San Francisco, CA
A leading investment consulting firm in San Francisco seeks a hedge fund investment analyst to conduct research and monitor hedge fund performance. The candidate will collaborate with a team to provide insights into hedge fund strategies and assist in presentations to clients. An ideal candidate will possess a bachelor's degree in finance or a related field, along with two years of related experience. A commitment to strong communication and client relationships is essential.
$110k-169k yearly est. 4d ago
Fund & Co-Investment Research Associate
Allocate Holdings Inc.
Research internship job in Palo Alto, CA
Fund & Co-Investment Research Associate
About Allocate
Allocate is transforming private market investing by enabling wealth advisory firms to seamlessly build and manage high-quality private market programs.
About the Role
We're seeking a Fund & Co-Investment Research Associate to join our Private Investments team. This isn't a traditional allocator role: you'll be a critical connector between world-class fund managers and wealth advisory CIOs, and you'll help build and manage our technology platform.
You'll evaluate venture capital and private equity opportunities, conduct manager diligence, and spend significant time with wealth advisor investment teams discussing curated deals. You need to deeply understand opportunities, answer sophisticated questions, and provide clear, accurate analysis that helps clients make confident investment decisions.
Equally important, you'll own the completeness and accuracy of our platform content, ensuring all investment materials, copy, and compliance documentation stay current.
Key Responsibilities
Investment Research & Diligence: Conduct quantitative and qualitative research on private market managers and co-investment opportunities across venture capital, private equity, and adjacent asset classes. Prepare balanced investment analysis and recommendations.
Client Engagement: Spend substantial time with wealth advisor CIOs and their investment teams discussing curated opportunities. Field detailed questions, articulate investment theses, and provide real-time analysis that demonstrates deep command of each deal.
GP Relationship Development: Build and maintain relationships with fund managers, including structuring conversations, access discussions, and ongoing partnership development.
Platform Completeness: Own platform content integrity by ensuring investment materials are updated, copy is accurate and compliant, and all client-facing documentation meets our quality standards.
Sourcing & Pipeline Development: Proactively source differentiated fund and co-investment opportunities through targeted outreach, industry relationships, and market intelligence.
Cross-Functional Collaboration: Partner with product, technology, and operations teams to refine platform capabilities and enhance the client investment experience.
Portfolio Monitoring: Support post-investment updates, quarterly reporting, and ongoing portfolio analytics for client transparency.
Market Intelligence: Develop insights and content that position Allocate as a leading voice in private markets.
Qualifications Experience & Knowledge:
3+ years of experience in venture capital, private equity, wealth management, investment banking, or related fields.
Deep industry knowledge of the VC/PE ecosystem, fund structures, and how institutional platforms operate.
Strong understanding of private markets and alternative investments.
Skills & Attributes:
Hustler mentality: Proactive, resourceful, and self-directed; you don't wait to be told what to do.
Exceptional communication: You can explain complex investment concepts clearly and handle sophisticated Q&A with institutional allocators.
Highly personable: You build authentic relationships with both GPs and wealth advisory teams.
Commercial mindset: You think about client needs, platform scalability, and growth.
Startup DNA: You thrive in fast-paced environments, embrace ambiguity, and move quickly.
Meticulous attention to detail: You ensure accuracy and compliance across all materials.
Analytical excellence: You combine quantitative rigor with qualitative judgment.
Not a Fit If You're:
A pure allocator looking for a traditional fund-of-funds seat.
More comfortable with an analysis-only role than with relationship-building and client engagement.
Seeking a slow-paced, process-heavy environment.
Unfamiliar with how technology platforms operate.
Why Allocate?
Shape the infrastructure layer for an entire industry.
Ability to immediately work with top VCs and PE firms, along with CIOs in the wealth advisory world.
Allocate is one of the fastest-growing platforms in private markets.
Palo Alto office in the heart of the venture ecosystem.
High impact, high visibility work with real ownership and autonomy.
Additional Information
Location: Palo Alto, CA. Must be able to work in-office 4 days a week.
Compensation: $160K-$200K base + bonus + equity.
Benefits: Medical, dental, vision, 401(k), responsible time off.
Employment: Full-time.
Compliance: This role is subject to Allocate's Code of Ethics and all related compliance obligations.
In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire. Must have legal authorization to work in the U.S. now and in the future without visa sponsorship.
$49k-80k yearly est. 4d ago
Research Engineer - Design Generation Modeling
Black.Ai
Research internship job in San Francisco, CA
Join the team redefining how the world experiences design.
Hey, g'day, mabuhay, kia ora, 你好, hallo, vítejte!
Thanks for stopping by. We know job hunting can be a little time consuming and you're probably keen to find out what's on offer, so we'll get straight to the point.
Where and how you can work
Our flagship office is in Sydney, Australia, but we've made our way from down under, to a hub in San Francisco, which is now home to our US operations. We offer flexibility in how and where you work. We trust our Canvanauts to choose the balance that empowers them and their team to achieve their goals.
Job Description
As a Research Engineer in Design Generation Modeling, you will work in collaboration with researchers across the globe to produce artifacts and knowledge that will help us push the state of the art of generative design modeling forward. The role will require developing and implementing experimental training pipelines from data to training to inference.
At the moment, this role is focused on:
Foundation models for design understanding and generation
Visual models for design generation and decomposition
Development of design representations for discrete modeling with MLMs
Primary Responsibilities:
Contribute to the development and optimization of ML data pipelines and training workflows.
Improve internal training codebases and procedures.
Collaborate with research scientists and ML engineers to design and implement experiments, ablation studies, etc.
You're probably a match if you:
Understand and have experience with distributed training at scale using libraries like DeepSpeed, FSDP, Torch, Titan, etc.
Enjoy diving deep into and understanding complex engineering problems.
Are familiar with the literature around GANs, diffusion modeling, transformer architectures, and vision language modeling.
You have disciplined coding practices and are experienced with code reviews and pull requests.
Have experience working in cloud environments, ideally AWS.
Are passionate about both product-focused and basic research.
Nice to Haves:
Specific experience with modeling design data.
Experience with Graph Neural Networks.
Are able to quickly prototype ML demos with appealing user interfaces. E.g. Gradio and other customized interfaces.
Additional Information
What's in it for you?
Achieving our crazy big goals motivates us to work hard - and we do - but you'll experience lots of moments of magic, connectivity and fun woven throughout life at Canva, too. We also offer a range of benefits to set you up for every success in and outside of work.
Here's a taste of what's on offer:
Equity packages - we want our success to be yours too
Health benefits plans to support you and your wellbeing
401(k) retirement plan with company contribution
Inclusive parental leave policy that supports all parents & carers
An annual Vibe & Thrive allowance to support your wellbeing, social connection, office setup & more
Flexible leave options that empower you to be a force for good, take time to recharge, and support you personally
Check out lifeatcanva.com for more information.
Other stuff to know
We make hiring decisions based on your experience, skills, merit and business needs, in compliance with applicable local laws. We celebrate all types of skills and backgrounds at Canva so even if you don't feel like your skills quite match what's listed above - we still want to hear from you! When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process.
At Canva, we value fairness, and we strive to provide competitive, market-informed compensation whilst ensuring internal equity within the team in each region. We make hiring decisions based on your skills, experience and our overall assessment of what we observed and learnt in the hiring process. The target base salary range for this position is $180,000 - $225,000. When calculating offers, we make salary decisions based on market data, your experience levels, and internal benchmarks of your peers in the same domain and job level.
Please note that interviews are predominantly conducted virtually.
$180k-225k yearly 2d ago
Generative AI Research Engineer (LLMs & Multimodal)
Globalsouthopportunities
Research internship job in San Jose, CA
A leading tech company invites applications for a Machine Learning Engineer to advance AI research in California. The role focuses on Generative AI and requires a PhD or Master's degree, with a minimum of 3 publications in AI. The candidate will collaborate across teams and publish findings, contributing to networking solutions and AI innovations. This full-time position offers a competitive salary and benefits, fostering an inclusive workplace culture.
$108k-163k yearly est. 2d ago
Edge ML Researcher & MLOps Engineer for Vision Systems
Rivet Industries, Inc.
Research internship job in Palo Alto, CA
A technology firm specializing in integrated task systems seeks a Machine Learning Researcher / ML-Ops Engineer to advance computer vision and sensor fusion capabilities. The role involves implementing machine learning pipelines and optimizing models for deployment. Candidates should have a strong Python background and experience in deep learning frameworks like PyTorch and TensorFlow. This position in Palo Alto, California, offers competitive compensation and a collaborative work environment.
$108k-164k yearly est. 4d ago
Robotics Hardware Research Engineer
1X Technologies As
Research internship job in Palo Alto, CA
A cutting-edge robotics company in Palo Alto is seeking an engineer to develop advanced humanoid technologies. The role involves research, design, and prototyping critical components of robots. Candidates should have strong engineering principles and a background in Mechanical or Electrical Engineering. This position requires in-person collaboration to foster innovation. Competitive compensation and inclusive work culture are offered.
$108k-164k yearly est. 20h ago
Machine Learning Research Engineer
Sunday Robotics
Research internship job in Mountain View, CA
Join Us in Building the Future of Home Robotics
At Sunday, we're developing personal robots to reclaim the hours lost to repetitive tasks. We're focused on an ambitious goal to make generalized robots broadly accessible, enabling households to take back quality time.
We have spent the last 18 months building a talented team, securing capital, and validating our technology. We are now seeking passionate individuals to join us in the next phase of our growth. If you are ready to apply your skills to the forefront of robotics innovation, we'd love to hear from you.
What to Expect
In building robots for the home, we are tackling a true grand challenge in robotics: dexterous and safe mobile manipulation in unstructured environments. To make this possible, we are building across the entire robotics stack. We're training state-of-the-art AI models that leverage our large-scale, high-quality, real-world data collection system. At the same time, we're building a new kind of consumer hardware product, which will be deployed into homes and delivering real value for real customers. As an early member of a small, cross-functional team, you'll play a key role in pushing both our technology and product forward: advancing the frontier of embodied AI, and soon giving countless hours back to our customers so they can spend more time on the things they value most.
As a Machine Learning Research Engineer, you will work on the software and algorithms that enable our robots to complete dexterous manipulation tasks in home environments. You will leverage our unique large-scale in-the-wild data collection operation and our growing robot fleet to continuously add new behaviors and improve robustness across tasks and environments.
What You'll Do
Design and develop state-of-the-art robot learning algorithms for manipulation and controls
Leverage large-scale in-the-wild data collection to develop generalizable robot behaviors
Own the end-to-end loop from task definition and data curation to model training, evaluation, and on-robot deployment
Collaborate closely with a full-stack robotics team to develop a consumer personal robot
Write maintainable, production quality code for research and deployment
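As a hedged illustration of the training side of the loop described above, here is a minimal behaviour-cloning sketch: a small policy network regressed onto demonstrated actions. The observation and action dimensions and the synthetic batch are placeholders, not Sunday's actual data or models.

```python
# Minimal behaviour-cloning sketch for manipulation: imitate (observation, action)
# pairs from demonstrations. Dimensions and data are toy placeholders.
import torch
import torch.nn as nn

obs_dim, act_dim = 64, 7          # hypothetical proprioception features and 7-DoF arm action
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                       nn.Linear(256, 256), nn.ReLU(),
                       nn.Linear(256, act_dim))
optim = torch.optim.AdamW(policy.parameters(), lr=3e-4)

# Synthetic demonstration batch standing in for teleoperated, in-the-wild data.
obs = torch.randn(512, obs_dim)
act = torch.randn(512, act_dim)

for step in range(100):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, act)   # regress onto the demonstrated action
    optim.zero_grad()
    loss.backward()
    optim.step()
```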
What You'll Bring
3+ years (or equivalent) working on machine learning for robotics, controls, or perception
Proficiency with Python and any deep learning framework (PyTorch preferred)
Hands-on experience with robot learning for real-world tasks, including offline/online training and evaluation
Experience implementing tooling for visualization, debugging, and evaluation of data and learning pipelines
Clear communication and effective collaboration; you thrive in fast-moving, cross-functional teams
Nice to Have
Familiarity with classical robotics fundamentals (e.g. controls, planning, state estimation)
Comfortable profiling and optimizing for latency, memory, and compute on edge devices
Experience with large-scale model training pipelines (LLM/VLM/VLA)
At Sunday Robotics, we're building technology shaped by real people - curious, creative, and diverse. We're proud to be an equal opportunity employer and consider all qualified applicants regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Even if you don't meet every single requirement, we encourage you to apply. Studies show that women and underrepresented groups often hold back unless they meet 100% of the criteria - we don't want that to be the reason we miss out on great talent.
$108k-164k yearly est. 4d ago
Research Engineer
Corridor 3.7
Research internship job in San Francisco, CA
AI has changed software development. Security hasn't caught up - until now. Corridor is changing the game of product security, giving developers the ability to secure their AI-generated code.
Our team has lived at the intersection of AI and cybersecurity. Collectively, we've led security at some of the world's largest companies, driven cybersecurity policy efforts in the US government, and published AI research at Stanford. We're growing fast and seeking a Research Engineer to push the frontier of agentic systems, reinforcement learning environments, and secure AI behavior for code security.
What You'll Do
Drive innovation in agentic systems and reinforcement learning environments.
Bridge cutting-edge research with the Corridor platform, contribute to research publications, and collaborate with the engineering team to advance the state of secure AI systems.
Develop rigorous security benchmarks to evaluate model robustness and adversarial behavior, and prototype novel architectures that combine large language models (LLMs) and program analysis.
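As a hedged illustration of the benchmark item above, here is a toy harness that scores a placeholder detector on labelled code snippets; a real benchmark would call the model or analysis pipeline under test, and the cases shown are invented examples.

```python
# Toy security-benchmark harness: labelled snippets plus a placeholder detector,
# scored for recall and false positives. Cases and the detector are illustrative only.
cases = [
    ('query = f"SELECT * FROM users WHERE id = {user_id}"', True),                 # string-built SQL: risky
    ('cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))', False),       # parameterized: OK
    ('subprocess.run(cmd, shell=True)', True),                                     # shell injection risk
    ('subprocess.run(["ls", "-l"])', False),
]

def flags_issue(snippet: str) -> bool:
    # Placeholder "detector": a real harness would query the model or analyzer under test.
    return "shell=True" in snippet or 'f"SELECT' in snippet

true_positives = sum(1 for s, vuln in cases if vuln and flags_issue(s))
false_positives = sum(1 for s, vuln in cases if not vuln and flags_issue(s))
recall = true_positives / sum(1 for _, vuln in cases if vuln)
print(f"recall={recall:.2f}, false_positives={false_positives}")
```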
What We're Looking For
Deep experience in AI/ML research - typically a PhD in computer science, machine learning, or related fields, or equivalent industry experience.
Strong publication record or evidence of impactful research; papers in AI/ML venues are a plus.
Expertise building agentic systems and RL environments
Experience building evals, benchmarks, or adversarial tests for model performance, reasoning, or security.
Strong programming ability (Python / TypeScript preferred) and comfort building research pipelines and experimental environments.
Familiarity with program analysis, static/dynamic analysis, or code reasoning (bonus).
Experience with open-weight models or fine-tuning workflows is a plus.
About Us
CEO Jack Cable is a top-ranked bug bounty hunter who previously led Secure by Design at CISA.
CTO Ashwin Ramaswami built large-scale systems at Skiff, Caldera, and Nooks, and published research on AI and foundation models at Stanford.
CSO Alex Stamos is the former CISO of Facebook, Yahoo, and SentinelOne and a current lecturer at Stanford University.
$121k-171k yearly est. 3d ago
Founding RL Research Engineer - Open-Source Data & Infra
The LLM Data Company
Research internship job in San Francisco, CA
A pioneering AI firm in San Francisco is seeking an experienced professional to drive scalable RL research and build innovative data generation pipelines. The role offers significant autonomy and the opportunity to work on cutting-edge projects that are validated in production. Candidates should have a Master's or PhD in Computer Science, expertise in core tooling like PyTorch, and familiarity with modern post-training techniques. Join as an early team member to enjoy a significant equity upside.
$108k-163k yearly est. 20h ago
Research Engineer
Appliedcompute
Research internship job in San Francisco, CA
Applied Compute builds Specific Intelligence for enterprises, unlocking the knowledge inside a company to train custom models and deploy an in-house agent workforce.
Today's state-of-the-art AI isn't one-size-fits-all; it's a tailored system that continuously learns from a company's processes, data, expertise, and goals. The same way companies compete today by having the best human workforce, the companies building for the future will compete by having the best agent workforce supporting their human bosses. We call this Specific Intelligence and we're already building it today.
We are a small, talent-dense team of engineers, researchers, and operators who have built some of the most influential AI systems in the world, including reinforcement learning infrastructure at OpenAI and data foundations at Scale AI, with additional experience from Together, Two Sigma, and Watershed.
We're backed with $80M from Benchmark, Sequoia, Lux, Hanabi, Neo, Elad Gil, Victor Lazarte, Omri Casspi, and others. We work in-person in San Francisco.
The Role
As a founding Research Engineer, you'll train frontier-scale models and adapt them into specialized experts for enterprises. You will design and run experiments at scale, developing novel methods for agentic training.
You'll work closely with researchers to experiment with and invent new algorithms, and you'll collaborate with infrastructure engineers to post-train LLMs on thousands of GPUs. We believe that research velocity is tied to having world-class tooling; you'll build tools and observability for yourself and others, enabling deeper investigation into how models specialize during training. If you get excited by challenging systems and ML problems at scale, this role is for you.
What You'll Do
Post-train frontier scale large language models on enterprise tasks and environments
Explore cutting edge RL techniques, co-designing algorithms and systems
Partner with infrastructure engineers to scale training and inference efficiently across thousands of GPUs
Build high-performance internal tools for probing, debugging, and analyzing training runs
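As a hedged illustration of the multi-GPU scaling mentioned above, here is a minimal data-parallel training skeleton using torch.distributed and DistributedDataParallel, launched with torchrun; the model and objective are placeholders, not Applied Compute's training stack.

```python
# Minimal data-parallel training skeleton with torch.distributed (one process per GPU).
# Model, data, and objective are placeholders; launch with torchrun.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                     # rendezvous handled by torchrun
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(4096, 4096).cuda(rank), device_ids=[rank])
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(8, 4096, device=rank)
        loss = model(x).pow(2).mean()                   # placeholder objective
        optim.zero_grad()
        loss.backward()                                 # gradients are all-reduced by DDP
        optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()   # e.g. torchrun --nproc_per_node=8 train.py
```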
What We're Looking For
Experience training or serving LLMs
Experience building RL environments and evals for language models
Proficiency in PyTorch, JAX, or similar ML frameworks, and experience with distributed training
Strong experimental design skills
Strong Candidates May Also Have
Background in pre- or post-training
Previous experience in high-performance computing environments or working with large-scale clusters
Contributions to open-source ML research or infrastructure
Demonstrated technical creativity through published research, OSS contributions, or side projects
Logistics
Location: This role is based in San Francisco, California.
Benefits: Applied Compute offers generous health benefits, unlimited PTO, paid parental leave, lunches and dinners at the office, and relocation support as needed. We work in-person at a beautiful office in San Francisco's Design District.
Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process with you.
We encourage you to apply even if you do not believe you meet every single qualification. As set forth in Applied Compute's Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
$108k-163k yearly est. 1d ago
Founding Audio AI Research Engineer
David Ai
Research internship job in San Francisco, CA
David AI is the first audio data research company. We bring an R&D approach to data, developing datasets with the same rigor AI labs bring to models. Our mission is to bring AI into the real world, and we believe audio is the gateway. Speech is versatile, accessible, and human; it fits naturally into everyday life. As audio AI advances and new use cases emerge, high-quality training data is the bottleneck. This is where David AI comes in.
David AI was founded in 2024 by a team of former Scale AI engineers and operators. In less than a year, we've brought on most FAANG companies and AI labs as customers. We recently raised a $50M Series B from Meritech, NVIDIA, Jack Altman (Alt Capital), Amplify Partners, First Round Capital and other Tier 1 investors.
Our team is sharp, humble, ambitious, and tight-knit. We're looking for the best research, engineering, product, and operations minds to join us on our mission to push the frontier of audio AI.
About our Research team
As an audio data research company, we believe model capabilities come from high quality, differentiated data. David AI's research team performs ambitious, long-term research into audio capabilities, while working with internal and external stakeholders to productionalize the latest research insights.
About this role
As a Founding Audio AI Research Engineer, you'll define the research agenda that shapes how the world's best AI labs train their audio models. You'll have a world‑class workforce of human AI trainers, compute resources, and independence to drive your roadmap.
In this role, you will
Define and build comprehensive evaluation frameworks for audio AI capabilities across speech, emotion, conversation dynamics, and acoustic patterns.
Research and prototype novel approaches to audio quality assessment, automated labeling, and data collection optimization.
Design targeted data collection pipelines for novel, high-value audio capabilities
Architect automated systems for continuous classifier improvement and prompt engineering evaluation.
Evaluate frontier models and create actionable research directions.
Publish findings and papers in top conferences.
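As a hedged illustration of the evaluation-framework items above, here is a minimal word error rate (WER) implementation, one of the standard building blocks in speech evaluation; it is illustrative only and says nothing about David AI's actual evaluation stack.

```python
# Minimal word error rate (WER): Levenshtein distance over words divided by
# reference length. A basic building block for speech-recognition evaluation.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # substitution, deletion, insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn the living room lights off", "turn the living lights off"))  # one deletion -> ~0.167
```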
Your background looks like
3+ years of experience in AI/ML research or engineering, preferably with multimodal models
Proven ability to translate research insights into production systems and scalable solutions.
Experience with Python and PyTorch.
Experience designing and conducting rigorous ML experiments and evaluations.
Bonus points if you have
PhD or Masters in Computer Science or a related field with a focus on ML or audio/speech.
Published research work in top‑tier CS, AI and/or audio‑related conferences.
Strong record of publication in evaluations or data, particularly for audio/speech.
Experience in large scale data processing or high performance computing.
Benefits
Unlimited PTO.
Top‑notch health, dental, and vision coverage with 100% coverage for most plans.
FSA & HSA access.
401k access.
Meals 2x daily through DoorDash + snacks and beverages available at the office.
Unlimited company‑sponsored Barry's classes.
$108k-163k yearly est. 3d ago
AI Research Engineer - Virtual Collaboration Specialist
Victrays
Research internship job in San Francisco, CA
A leading AI research firm in San Francisco seeks a Research Engineer to develop reinforcement learning systems for virtual collaboration. The role involves designing training environments and collaborating with product teams. Candidates should have strong Python programming skills and experience in machine learning. This position offers competitive compensation, flexible hours, and a hybrid work model, emphasizing the importance of diverse perspectives in AI research.
$108k-163k yearly est. 1d ago
ML/AI Research Engineer - Agentic AI Lab (Founding Team)
Fabrion
Research internship job in San Francisco, CA
Type: Full-Time
Compensation: Competitive salary + meaningful equity (founding tier)
Backed by 8VC, we're building a world‑class team to tackle one of the industry's most critical infrastructure problems.
About the Role
We're designing the future of enterprise AI infrastructure - grounded in agents, retrieval‑augmented generation (RAG), knowledge graphs, and multi‑tenant governance.
We're looking for an ML/AI Research Engineer to join our AI Lab and lead the design, training, evaluation, and optimization of agent‑native AI models. You'll work at the intersection of LLMs, vector search, graph reasoning, and reinforcement learning - building the intelligence layer that sits on top of our enterprise data fabric.
This isn't a prompt engineer role. It's full‑cycle ML: from data curation and fine‑tuning to evaluation, interpretability, and deployment - with cost‑awareness, alignment, and agent coordination all in scope.
Core Responsibilities
Fine‑tune and evaluate open‑source LLMs (e.g. LLaMA 3, Mistral, Falcon, Mixtral) for enterprise use cases with both structured and unstructured data
Build and optimize RAG pipelines using LangChain, LangGraph, LlamaIndex, or Dust - integrated with our vector DBs and internal knowledge graph
Train agent architectures (ReAct, AutoGPT, BabyAGI, OpenAgents) using enterprise task data
Develop embedding‑based memory and retrieval chains with token‑efficient chunking strategies
Create reinforcement learning pipelines to optimize agent behaviors (e.g. RLHF, DPO, PPO)
Establish scalable evaluation harnesses for LLM and agent performance, including synthetic evals, trace capture, and explainability tools
Contribute to model observability, drift detection, error classification, and alignment
Optimize inference latency and GPU resource utilization across cloud and on‑prem environments
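As a hedged, framework-free illustration of the retrieval step in a RAG pipeline like the one described above, here is a minimal sketch using cosine similarity over chunk embeddings; the embed() function is a random-vector placeholder standing in for whatever embedding model and vector DB the real stack uses.

```python
# Framework-free sketch of RAG retrieval: embed chunks, rank by cosine similarity,
# and assemble a grounded prompt. embed() is a placeholder, not a real embedding model.
import numpy as np

def embed(texts):
    # Hypothetical stand-in: a real pipeline would call an embedding model here.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.normal(size=(len(texts), 384))

def retrieve(query, chunks, chunk_vecs, k=3):
    """Return the k chunks most similar to the query by cosine similarity."""
    q = embed([query])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(-sims)[:k]]

chunks = ["contract renewal terms ...", "Q3 revenue summary ...", "escalation policy ..."]
chunk_vecs = embed(chunks)
context = "\n\n".join(retrieve("What are our renewal terms?", chunks, chunk_vecs))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: What are our renewal terms?"
```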
Desired Experience
Model Training
Deep experience fine‑tuning open‑source LLMs using HuggingFace Transformers, DeepSpeed, vLLM, FSDP, LoRA/QLoRA
Worked with both base and instruction‑tuned models; familiar with SFT, RLHF, DPO pipelines
Comfortable building and maintaining custom training datasets, filters, and eval splits
Understand trade‑offs in batch size, token window, optimizer, precision (FP16, bfloat16), and quantization
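As a hedged illustration of the LoRA-style fine-tuning experience listed above, here is a hand-rolled low-rank adapter wrapped around a frozen linear layer; it is a toy version for intuition, not the peft library's implementation.

```python
# Hand-rolled LoRA adapter on a frozen linear layer: only the low-rank matrices
# A and B are trained, so trainable parameters drop by orders of magnitude.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # base weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha/r) * B A x, with only A and B trainable
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")        # ~65k adapter params vs ~16.8M frozen base params
```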
RAG + Knowledge Graphs
Experience building enterprise‑grade RAG pipelines integrated with real‑time or contextual data
Familiar with LangChain, LangGraph, LlamaIndex, and open‑source vector DBs (Weaviate, Qdrant, FAISS)
Experience grounding models with structured data (SQL, graph, metadata) + unstructured sources
Bonus: Worked with Neo4j, Puppygraph, RDF, OWL, or other semantic modeling systems
Agent Intelligence
Experience training or customizing agent frameworks with multi‑step reasoning and memory
Understand common agent loop patterns (e.g. Plan→Act→Reflect), memory recall, and tools
Familiar with self‑correction, multi‑agent communication, and agent ops logging
Optimization
Strong background in token cost optimization, chunking strategies, reranking (e.g. Cohere, Jina), compression, and retrieval latency tuning
Experience running models under quantized (int4/int8) or multi‑GPU settings with inference tuning (vLLM, TGI)
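As a hedged illustration of the quantized inference settings mentioned above, here is a minimal symmetric per-tensor int8 weight quantization sketch; it shows the basic scale-and-round idea only and is not the scheme used by vLLM or TGI.

```python
# Minimal symmetric per-tensor int8 weight quantization: map the largest weight
# magnitude to 127, round, and measure the reconstruction error. Illustrative only.
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0                      # map max magnitude to 127
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.float() * scale

w = torch.randn(4096, 4096)
q, scale = quantize_int8(w)                            # 4 bytes/param -> 1 byte/param
err = (w - dequantize(q, scale)).abs().mean()
print(f"mean abs rounding error: {err:.5f}")
```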
Preferred Tech Stack
LLM Training & Inference: HuggingFace Transformers, DeepSpeed, vLLM, FlashAttention, FSDP, LoRA
Agent Orchestration: LangChain, LangGraph, ReAct, OpenAgents, LlamaIndex
Vector DBs: Weaviate, Qdrant, FAISS, Pinecone, Chroma
Graph Knowledge Systems: Neo4j, Puppygraph, RDF, Gremlin, JSON-LD
Storage & Access: Iceberg, DuckDB, Postgres, Parquet, Delta Lake
Evaluation: OpenLLM Evals, TruLens, Ragas, LangSmith, Weights & Biases
Compute: Ray, Kubernetes, TGI, Sagemaker, LambdaLabs, Modal
Languages: Python (core), optionally Rust (for inference layers) or JS (for UX experimentation)
Soft Skills & Mindset
Startup DNA: resourceful, fast‑moving, and capable of working in ambiguity
Deep curiosity about agent‑based architectures and real‑world enterprise complexity
Comfortable owning model performance end‑to‑end: from dataset to deployment
Strong instincts around explainability, safety, and continuous improvement
Enjoy pair‑designing with product and UX to shape capabilities, not just APIs
Why This Role Matters
This role is foundational to our thesis: that agents + enterprise data + knowledge modeling can create intelligent infrastructure for real‑world, multi‑billion‑dollar workflows. Your work won't be buried in research reports - it will be productionized and activated by hundreds of users and hundreds of thousands of decisions. If this is your dream role - we would love to hear from you.
How much does a research internship earn in Fremont, CA?
The average research internship in Fremont, CA earns between $35,000 and $91,000 annually. This compares to the national average research internship range of $26,000 to $59,000.
Average research internship salary in Fremont, CA
$56,000
What are the biggest employers of Research Interns in Fremont, CA?
The biggest employers of Research Interns in Fremont, CA are: