
Research Scientist jobs at OpenAI - 28 jobs

  • Research Scientist

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    By applying to this role, you will be considered for Research Scientist roles across all teams at OpenAI.

    About the Role
    As a Research Scientist here, you will develop innovative machine learning techniques and advance the research agenda of the team you work on, while also collaborating with peers across the organization. We are looking for people who want to discover simple, generalizable ideas that work well even at large scale, and that form part of a broader research vision unifying the entire company.

    We expect you to:
    * Have a track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects
    * Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects
    * Be excited about OpenAI's approach to research

    Nice to have:
    * Interest in and thoughtfulness about the impacts of AI technology
    * Past experience creating high-performance implementations of deep learning algorithms

    About OpenAI
    OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.
    $106k-174k yearly est. 2d ago

  • Frontier Mathematical AI Research Scientist

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    A leading AI research company in San Francisco is seeking a Research Scientist in the mathematical sciences to design and build AI models that tackle complex scientific problems. The ideal candidate will have strong communication skills and experience with frontier models. This position operates on a hybrid work model and values rigor and reproducibility in scientific research. Competitive compensation is offered.
    $106k-174k yearly est. 2d ago
  • Research Scientist, Mathematical Sciences

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the Team
    The Strategic Deployment team makes frontier models more capable, reliable, and aligned in order to transform high-impact domains. On one hand, this involves deploying models in real-world, high-stakes settings to drive AI-driven transformation and elicit insights (training data, evaluation methods, and techniques) that shape our frontier model development. On the other hand, we leverage these learnings to build the science and engineering of impactful frontier model deployment. As a key element of this effort, OpenAI for Science aims to harness AI to accelerate the process of scientific research. This involves building models and an AI-powered platform that speeds up discovery and helps researchers everywhere do more, faster.

    About the Role
    As a Research Scientist focused on the mathematical sciences, you will help build models, tools, and workflows that move theoretical research forward in fields such as mathematics, theoretical physics, and theoretical computer science. You'll design domain-specific data and signals, shape training and evaluation, guide how to wire models to scientific tools, and work with the academic community to speed up adoption and impact.

    We're looking for people who:
    * Hold a current or recent academic position in the mathematical sciences (mathematics, theoretical physics, theoretical computer science) or a related field
    * Regularly use frontier models in their own research
    * Move easily between theory and code, and are eager to contribute technically as well as academically
    * Either know or are eager to learn modern AI and can run AI experiments end-to-end
    * Are strong scientific communicators
    * Care about rigor and reproducibility in scientific results

    This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

    In this role, you will:
    * Assist in designing and building frontier AI models that excel at solving frontier mathematical sciences problems
    * Build high-quality scientific datasets and synthetic data pipelines (symbolic, numeric, and simulator-based)
    * Design reinforcement and grading signals for the mathematical sciences and run reinforcement learning/optimization loops to improve model reasoning
    * Define and run evals for scientific reasoning, derivations, simulations, and literature grounding; track progress over time
    * Partner with research labs and the academic community
    * Drive adoption of frontier AI within the scientific community
    * Uphold high standards for safety, data governance, and reproducibility

    You might thrive in this role if you:
    * Are passionate about pushing the boundaries of your field using AI
    * Have used ChatGPT to do calculations and prove or improve lemmas in your field of study
    * Communicate clearly to both scientists and AI engineers, and like collaborating across teams and with academia

    Nice to have:
    * Open-source contributions to mathematical science or AI tooling
    * Experience building or curating domain datasets and benchmarks
    * Experience engaging a research community (teaching, workshops, tutorials, standards)

    About OpenAI
    OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

    Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

    OpenAI Global Applicant Privacy Policy
    At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

    Compensation Range: $380K - $460K
    $106k-174k yearly est. 2d ago
  • Research Scientist - Drive Scalable ML & Broad Impact

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    An established industry player is seeking a passionate Research Scientist to innovate and advance machine learning techniques. This role offers the opportunity to develop impactful research while collaborating across teams to push the boundaries of AI. Ideal candidates will have a track record of innovation, a strong understanding of deep learning algorithms, and a commitment to addressing the ethical implications of AI technology. Join a forward-thinking organization dedicated to ensuring AI benefits all of humanity and play a pivotal role in shaping the future of technology.
    $106k-174k yearly est. 2d ago
  • Technical Lead, Safety Research

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the Team
    The Safety Systems team is responsible for the safety work that ensures our best models can be safely deployed to the real world to benefit society. It is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

    The Safety Research team aims to fundamentally advance our capabilities for precisely implementing robust, safe behavior in AI models and systems. As capabilities continue to advance, it is imperative that our approaches to safety continue to improve and scale to address evolving risks. This is important both for ensuring our systems are robust enough to prevent harmful misuse and for ensuring that potential misalignment cannot cause harm. We are working on these problems in a way that is grounded in our current models and methods but that generalizes to future systems. We are growing our team to expand our research on methods that will improve safety for AGI and beyond. This will include exploratory research: for example, new methods to improve safety common sense and generalizable reasoning, new evaluations to elicit or detect misalignment or inner goals of the AI, and new methods to support human oversight of long-running tasks.

    About the Role
    As a tech lead, you will be responsible for developing our strategy in new directions to address potential harms from misalignment or significant mistakes. In practice, this includes:
    * Setting north-star goals and milestones for new research directions, and developing challenging evaluations to track progress
    * Personally driving or leading research in new exploratory directions to demonstrate the feasibility and scalability of the approaches
    * Working horizontally across safety research and related teams to ensure different technical approaches work together to achieve strong safety results

    We're looking for people who have a strong track record of practical research on safety and alignment, ideally in AI and LLMs, and who have led large research efforts in the past. This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

    In this role, you will:
    * Set the research directions and strategies to make our AI systems safer, more aligned, and more robust
    * Coordinate and collaborate with cross-functional teams, including the rest of the research organization, T&S, policy, and related alignment teams, to ensure that our AI meets the highest safety standards
    * Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies
    * Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more
    * Implement new methods in OpenAI's core model training and launch safety improvements in OpenAI's products

    You might thrive in this role if you:
    * Are excited about OpenAI's mission of building safe, universally beneficial AGI and are aligned with OpenAI's charter
    * Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use
    * Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, adversarial training, robustness, and fairness & biases
    * Hold a Ph.D. or other degree in computer science, machine learning, or a related field
    * Possess experience in safety work for AI model deployment
    * Have an in-depth understanding of deep learning research and/or strong engineering skills
    * Are a team player who enjoys collaborative work environments

    OpenAI is an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law. We are committed to providing reasonable accommodations to applicants with disabilities.

    Compensation Range: $460K - $555K
    $75k-112k yearly est. 1d ago
  • Research Scientist, Mathematical Sciences

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the Team
    The Strategic Deployment team makes frontier models more capable, reliable, and aligned in order to transform high-impact domains. On one hand, this involves deploying models in real-world, high-stakes settings to drive AI-driven transformation and elicit insights (training data, evaluation methods, and techniques) that shape our frontier model development. On the other hand, we leverage these learnings to build the science and engineering of impactful frontier model deployment. As a key element of this effort, OpenAI for Science aims to harness AI to accelerate the process of scientific research. This involves building models and an AI-powered platform that speeds up discovery and helps researchers everywhere do more, faster.

    About the Role
    As a Research Scientist focused on the mathematical sciences, you will help build models, tools, and workflows that move theoretical research forward in fields such as mathematics, theoretical physics, and theoretical computer science. You'll design domain-specific data and signals, shape training and evaluation, guide how to wire models to scientific tools, and work with the academic community to speed up adoption and impact.

    We're looking for people who:
    * Hold a current or recent academic position in the mathematical sciences (mathematics, theoretical physics, theoretical computer science) or a related field
    * Regularly use frontier models in their own research
    * Move easily between theory and code, and are eager to contribute technically as well as academically
    * Either know or are eager to learn modern AI and can run AI experiments end-to-end
    * Are strong scientific communicators
    * Care about rigor and reproducibility in scientific results

    This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

    In this role, you will:
    * Assist in designing and building frontier AI models that excel at solving frontier mathematical sciences problems
    * Build high-quality scientific datasets and synthetic data pipelines (symbolic, numeric, and simulator-based)
    * Design reinforcement and grading signals for the mathematical sciences and run reinforcement learning/optimization loops to improve model reasoning
    * Define and run evals for scientific reasoning, derivations, simulations, and literature grounding; track progress over time
    * Partner with research labs and the academic community
    * Drive adoption of frontier AI within the scientific community
    * Uphold high standards for safety, data governance, and reproducibility

    You might thrive in this role if you:
    * Are passionate about pushing the boundaries of your field using AI
    * Have used ChatGPT to do calculations and prove or improve lemmas in your field of study
    * Communicate clearly to both scientists and AI engineers, and like collaborating across teams and with academia

    Nice to have:
    * Open-source contributions to mathematical science or AI tooling
    * Experience building or curating domain datasets and benchmarks
    * Experience engaging a research community (teaching, workshops, tutorials, standards)

    About OpenAI
    OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

    Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

    OpenAI Global Applicant Privacy Policy
    At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
    $106k-174k yearly est. Auto-Apply 60d+ ago
  • Research Scientist

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    By applying to this role, you will be considered for Research Scientist roles across all teams at OpenAI.

    About the Role
    As a Research Scientist here, you will develop innovative machine learning techniques and advance the research agenda of the team you work on, while also collaborating with peers across the organization. We are looking for people who want to discover simple, generalizable ideas that work well even at large scale, and that form part of a broader research vision unifying the entire company.

    We expect you to:
    * Have a track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects
    * Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects
    * Be excited about OpenAI's approach to research

    Nice to have:
    * Interest in and thoughtfulness about the impacts of AI technology
    * Past experience creating high-performance implementations of deep learning algorithms

    About OpenAI
    OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

    Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

    OpenAI Global Applicant Privacy Policy
    At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
    $106k-174k yearly est. 56d ago
  • Research Scientist

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    By applying to this role, you will be considered for Research Scientist roles across all teams at OpenAI.

    About the Role
    As a Research Scientist here, you will develop innovative machine learning techniques and advance the research agenda of the team you work on, while also collaborating with peers across the organization. We are looking for people who want to discover simple, generalizable ideas that work well even at large scale, and that form part of a broader research vision unifying the entire company.

    We expect you to:
    * Have a track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects
    * Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects
    * Be excited about OpenAI's approach to research

    Nice to have:
    * Interest in and thoughtfulness about the impacts of AI technology
    * Past experience creating high-performance implementations of deep learning algorithms

    About OpenAI
    OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

    Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

    OpenAI Global Applicant Privacy Policy
    At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
    $106k-174k yearly est. Auto-Apply 60d+ ago
  • Researcher, Preparedness

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the Team The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI's Preparedness Framework. Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models. The mission of the Preparedness team is to: Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems Preparedness tightly connects capability assessment, evaluations, and internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast paced, exciting work that has far reaching importance for the company and for society. About the Role We are looking to hire exceptional research engineers that can push the boundaries of our frontier models. Specifically, we are looking for those that will help us shape our empirical grasp of the whole spectrum of AI safety concerns and will own individual threads within this endeavor end-to-end. You will own the scientific validity of our frontier preparedness capability evaluations-designing new evals grounded in real threat models (including high-consequence domains like CBRN as well as cyber and other frontier-risk areas), and maintaining existing evals so they don't stale or silently regress. You'll define datasets, graders, rubrics, and threshold guidance, and produce auditable artifacts (evaluation cards, capability reports, system-card inputs) that leadership can trust during high-stakes launches. 
In this role, you'll: Work on identifying emerging AI safety risks and new methodologies for exploring the impact of these risks Build (and then continuously refine) evaluations of frontier AI models that assess the extent of identified risks Design and build scalable systems and processes that can support these kinds of evaluations Contribute to the refinement of risk management and the overall development of "best practice" guidelines for AI safety evaluations You might thrive in this role if you: Are passionate and knowledgeable about short-term and long-term AI safety risks Demonstrate the ability to think outside the box and have a robust “red-teaming mindset” Have experience in ML research engineering, ML observability and monitoring, creating large language model-enabled applications, and/or another technical domain applicable to AI risk Are able to operate effectively in a dynamic and extremely fast-paced research environment as well as scope and deliver projects end-to-end It would be great if you also have: First-hand experience in red-teaming systems-be it computer systems or otherwise A good understanding of the (nuances of) societal aspects of AI deployment Excellent communication skills and the ability to work cross-functionally This role may require access to technology or technical data controlled under the U.S. Export Administration Regulations or International Traffic in Arms Regulations. Therefore, this role is restricted to individuals described in paragraph (a)(1) of the definition of “U.S. person” in the U.S. Export Administration Regulations, 15 C.F.R. § 772.1, and in the International Traffic in Arms Regulations, 22 C.F.R. § 120.62. U.S. persons are U.S. citizens, U.S. legal permanent residents, individuals granted asylum status in the United States, and individuals admitted to the United States as refugees. 
About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. 
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
    $88k-154k yearly est. 9d ago
  • Research Lead, Cyber


About the team
The Safety Systems team is responsible for a wide range of safety work to ensure our best models can be safely deployed to the real world and benefit society, and is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency. We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely.

About the role
As the lead researcher for cybersecurity risks, you will design, implement, and oversee an end-to-end mitigation stack to prevent severe cyber misuse across OpenAI's products. This role demands technical depth, decisive leadership, and cross-company influence to ensure safeguards are enforceable, scalable, and effective. You'll set the technical strategy, drive execution, and ensure our products cannot be misused for severe harm.

In this role, you will:
* Lead the full-stack mitigation strategy and implement solutions for model-enabled cybersecurity misuse, from prevention to monitoring, detection, and enforcement.
* Integrate safeguards across products so protections are consistent, low-latency, and scale with usage and new model surfaces.
* Make decisive technical trade-offs within the cybersecurity risk domain, balancing coverage, latency, model utility, and user privacy.
* Partner with risk/threat modeling leadership to align mitigation design with anticipated attacker behaviors and high-impact scenarios.
* Drive rigorous testing and red-teaming, stress-testing the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across product surfaces.

You might thrive in this role if you:
* Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
* Bring demonstrated experience in deep learning and transformer models.
* Are proficient with frameworks such as PyTorch or TensorFlow.
* Possess a strong foundation in data structures, algorithms, and software engineering principles.
* Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
* Excel at working collaboratively with cross-functional teams across research, security, policy, product, and engineering.
* Show decisive leadership in high-stakes, ambiguous environments.
* Have significant experience designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale.
* (Nice to have) Bring background knowledge in cybersecurity or adjacent fields.
    $88k-154k yearly est. 60d+ ago
  • Researcher, Pretraining Safety


About the Team
The Safety Systems team is responsible for a wide range of safety work to ensure our best models can be safely deployed to the real world and benefit society, and is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Pretraining Safety team's goal is to build safer, more capable base models and enable earlier, more reliable safety evaluation during training. We aim to:
* Develop upstream safety evaluations to monitor how and when unsafe behaviors and goals emerge
* Create safer priors through targeted pretraining and mid-training interventions that make downstream alignment more effective and efficient
* Design safe-by-design architectures that allow for more controllability of model capabilities

In addition, we will conduct the foundational research necessary for understanding how behaviors emerge, generalize, and can be reliably measured throughout training.

About the Role
The Pretraining Safety team is pioneering how safety is built into models before they reach post-training and deployment. In this role, you will work throughout the full stack of model development, with a focus on pretraining:
* Identify safety-relevant behaviors as they first emerge in base models
* Evaluate and reduce risk without waiting for full-scale training runs
* Design architectures and training setups that make safer behavior the default
* Strengthen models by incorporating richer, earlier safety signals

We collaborate across OpenAI's safety ecosystem, from Safety Systems to Training, to ensure that safety foundations are robust, scalable, and grounded in real-world risks.
In this role, you will:
* Develop new techniques to predict, measure, and evaluate unsafe behavior in early-stage models
* Design data curation strategies that improve pretraining priors and reduce downstream risk
* Explore safe-by-design architectures and training configurations that improve controllability
* Introduce novel safety-oriented loss functions, metrics, and evals into the pretraining stack
* Work closely with cross-functional safety teams to unify pre- and post-training risk reduction

You might thrive in this role if you:
* Have experience developing or scaling pretraining architectures (LLMs, diffusion models, multimodal models, etc.)
* Are comfortable working with training infrastructure, data pipelines, and evaluation frameworks (e.g., Python, PyTorch/JAX, Apache Beam)
* Enjoy hands-on research: designing, implementing, and iterating on experiments
* Enjoy collaborating with diverse technical and cross-functional partners (e.g., policy, legal, training)
* Are data-driven, with strong statistical reasoning and rigor in experimental design
* Value building clean, scalable research workflows and streamlining processes for yourself and others
    $88k-154k yearly est. 60d+ ago
  • Researcher, Synthetic RL


About the Team
The Synthetic RL team develops reinforcement learning methods that leverage synthetic data, environments, and feedback to train and evaluate frontier AI models. The team explores approaches such as self-play, simulators, and other synthetic evaluations to push model capability, generalization, and alignment beyond what is possible with the current prevailing methodology.

About the Role
As a Research Scientist on the Synthetic RL team, you will develop novel reinforcement learning techniques that use synthetic environments and feedback to improve large-scale models. You'll work closely with other researchers to design experiments, analyze learning dynamics, and translate research insights into training approaches used in production systems. We're looking for researchers who enjoy working on open-ended problems, value fast iteration, and want their work to directly shape how frontier models are trained.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
* Research and develop reinforcement learning algorithms
* Design and run experiments to study training dynamics and model behavior at scale
* Collaborate with engineers and researchers to integrate successful approaches into model training pipelines

You might thrive in this role if you:
* Have a strong background in reinforcement learning, machine learning research, or related fields
* Have strong engineering and statistical analysis skills
* Enjoy exploring new problem spaces where data, objectives, and evaluation are imperfect or evolving
* Are motivated by seeing research ideas influence real-world AI systems
    $88k-154k yearly est. 20d ago
  • Researcher, Training


About the Team
OpenAI's Training team is responsible for producing the large language models that power our research and our products, and that ultimately bring us closer to AGI. Achieving this goal requires combining deep research into improving our current architectures, datasets, and optimization techniques with long-term bets aimed at improving the efficiency and capability of future generations of models. We are responsible for integrating these techniques, producing the model artifacts used by the rest of the company, and ensuring that these models are world-class in every respect. Recent examples of artifacts with major contributions from our team include GPT-4 Turbo, GPT-4o, and o1-mini.

About the Role
As a member of the architecture team, you will push the frontier of architecture development for OpenAI's flagship models, enhancing intelligence and efficiency and adding new capabilities. Ideal candidates have a deep understanding of LLM architectures, a sophisticated understanding of model inference, and a hands-on empirical approach. A good fit for this role will be equally happy coming up with a creative breakthrough, investing in strengthening a baseline, designing an eval, debugging a thorny regression, or tracking down a bottleneck.

This role is based in San Francisco. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
* Design, prototype, and scale up new architectures to improve model intelligence
* Execute and analyze experiments autonomously and collaboratively
* Study, debug, and optimize both model performance and computational performance
* Contribute to training and inference infrastructure

You might thrive in this role if you:
* Have experience landing contributions to major LLM training runs
* Can thoroughly evaluate and improve deep learning architectures in a self-directed fashion
* Are motivated by safely deploying LLMs in the real world
* Are well-versed in state-of-the-art transformer modifications for efficiency
    $88k-154k yearly est. 56d ago
  • Researcher, Robustness & Safety Training


About the Team
The Safety Systems team is responsible for a wide range of safety work to ensure our best models can be safely deployed to the real world and benefit society, and is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Model Safety Research team aims to fundamentally advance our capabilities for precisely implementing robust, safe behavior in AI models, and to leverage these advances to make OpenAI's deployed models safe and beneficial. This requires a breadth of new ML research to address the growing set of safety challenges as AI becomes more powerful and is used in more settings. Key focus areas include how to enforce nuanced safety policies without trading off helpfulness and capabilities, how to make models robust to adversaries, how to address privacy and security risks, and how to make models trustworthy in safety-critical domains. We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely.

About the Role
OpenAI is seeking a senior researcher with a passion for AI safety and experience in safety research. You will set directions for research that enables and empowers safe AGI, and work on research projects to make our AI systems safer, more aligned, and more robust to adversarial or malicious use. You will play a critical role in shaping what a safe AI system should look like in the future at OpenAI, making a significant impact on our mission to build and deploy safe AGI.

In this role, you will:
* Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.
* Implement new methods in OpenAI's core model training and launch safety improvements in OpenAI's products.
* Set the research directions and strategies to make our AI systems safer, more aligned, and more robust.
* Coordinate and collaborate with cross-functional teams, including T&S, legal, policy, and other research teams, to ensure that our products meet the highest safety standards.
* Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.

You might thrive in this role if you:
* Are excited about OpenAI's mission of building safe, universally beneficial AGI and are aligned with OpenAI's charter.
* Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.
* Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, adversarial training, robustness, and fairness & biases.
* Hold a Ph.D. or other degree in computer science, machine learning, or a related field.
* Possess experience in safety work for AI model deployment.
* Have an in-depth understanding of deep learning research and/or strong engineering skills.
* Are a team player who enjoys collaborative work environments.
    $88k-154k yearly est. 56d ago
  • Researcher, Alignment


About the Team
The Alignment team at OpenAI is dedicated to ensuring that our AI systems are safe, trustworthy, and consistently aligned with human values, even as they scale in complexity and capability. Our work is at the cutting edge of AI research, focusing on developing methodologies that enable AI to robustly follow human intent across a wide range of scenarios, including those that are adversarial or high-stakes. We concentrate on the most pressing challenges, ensuring our work addresses areas where AI could have the most significant consequences. By focusing on risks that we can quantify and where our efforts can make a tangible difference, we aim to ensure that our models are ready for the complex, real-world environments in which they will be deployed.

The two pillars of our approach are: (1) harnessing improved capabilities into alignment, making sure that our alignment techniques improve, rather than break, as capabilities grow; and (2) centering humans by developing mechanisms and interfaces that enable humans to both express their intent and effectively supervise and control AIs, even in highly complex situations.

About the Role
As a Research Engineer / Research Scientist on the Alignment team, you will be at the forefront of ensuring that our AI systems consistently follow human intent, even in complex and unpredictable scenarios. Your role will involve designing and implementing scalable solutions that ensure the alignment of AI systems as their capabilities grow, and that integrate human oversight into AI decision-making.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will help design and implement experiments for alignment research. Responsibilities may include:
* Develop and evaluate alignment capabilities that are subjective, context-dependent, and hard to measure.
* Design evaluations to reliably measure risks and alignment with human intent and values.
* Build tools and evaluations to study and test model robustness in different situations.
* Design experiments to understand how alignment scales as a function of compute, data, lengths of context and action, and the resources of adversaries.
* Design and evaluate new human-AI interaction paradigms and scalable oversight methods that redefine how humans interact with, understand, and supervise our models.
* Train models to be calibrated on correctness and risk.
* Design novel approaches for using AI in alignment research.

You might thrive in this role if you:
* Are a team player, willing to do a variety of tasks that move the team forward.
* Have a PhD or equivalent research experience in computer science, computational science, data science, cognitive science, or a similar field.
* Have strong engineering skills, particularly in designing and optimizing large-scale machine learning systems (e.g., PyTorch).
* Have a deep understanding of the science behind alignment algorithms and techniques.
* Can develop data visualization or data collection interfaces (e.g., TypeScript, Python).
* Enjoy fast-paced, collaborative, and cutting-edge research environments.
* Want to focus on developing AI models that are trustworthy, safe, and reliable, especially in high-stakes scenarios.
    $88k-154k yearly est. 56d ago
  • Researcher, Safety Oversight

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the Team The Safety Systems team is responsible for a broad range of safety work to ensure our best models can be safely deployed to the real world to benefit society, and is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency. The Safety Oversight Research team aims to fundamentally advance our capabilities to maintain oversight over frontier AI models, and leverages these advances to ensure OpenAI's deployed models are safe and beneficial. This requires a breadth of new ML research in the areas of human-AI collaboration, reasoning, robustness, and scalable oversight to keep pace with model capabilities. We invest heavily in developing novel model- and system-level methods of identifying and mitigating AI misuse and misalignment. Our goal is to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. About the Role OpenAI is seeking a senior researcher with a passion for AI safety and experience in safety research. You will set research directions for maintaining effective oversight of safe AGI and work on research projects to identify and mitigate misuse and misalignment in our AI systems. You will play a critical role in defining what a safe AI system at OpenAI should look like, making a significant impact on our mission to build and deploy safe AGI. In this role, you will: * Develop and refine AI monitor models to detect and mitigate known and emerging patterns of misuse and misalignment. * Set research directions and strategies to make our AI systems safer, more aligned, and more robust. * Evaluate and design effective red-teaming pipelines to examine the end-to-end robustness of our safety systems, and identify areas for future improvement. 
* Conduct research to improve models' ability to reason about questions of human values, and apply these improved models to practical safety challenges. * Coordinate and collaborate with cross-functional teams, including T&S, legal, policy, and other research teams, to ensure that our products meet the highest safety standards. You might thrive in this role if you: * Are excited about OpenAI's mission of building safe, universally beneficial AGI and are aligned with OpenAI's charter. * Show enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models for real-world use. * Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, human-AI collaboration, and fairness & biases. * Hold a Ph.D. or other degree in computer science, machine learning, or a related field. * Thrive in environments involving large-scale AI systems. * Possess 4+ years of research engineering experience and proficiency in Python or similar languages. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 
Join us in shaping the future of technology.
    $88k-154k yearly est. 56d ago
  • Researcher, Interpretability

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the Team The Interpretability team studies internal representations of deep learning models. We are interested in using representations to understand model behavior, and in engineering models to have more understandable representations. We are particularly interested in applying our understanding to ensure the safety of powerful AI systems. Our working style is collaborative and curiosity-driven. About the Role OpenAI is seeking a researcher passionate about understanding deep networks, with a strong background in engineering, quantitative reasoning, and the research process. You will develop and carry out a research plan in mechanistic interpretability, in close collaboration with a highly motivated team. You will play a critical role in helping OpenAI ensure future models remain safe even as they grow in capability. This will make a significant impact on our goal of building and deploying safe AGI. In this role, you will: * Develop and publish research on techniques for understanding representations of deep networks. * Engineer infrastructure for studying model internals at scale. * Collaborate across teams to work on projects that OpenAI is uniquely suited to pursue. * Guide research directions toward demonstrable usefulness and/or long-term scalability. You might thrive in this role if you: * Are excited about OpenAI's mission of ensuring AGI benefits all of humanity, and are aligned with OpenAI's charter. * Show enthusiasm for long-term AI safety, and have thought deeply about technical paths to safe AGI. * Bring experience in the field of AI safety, mechanistic interpretability, or spiritually related disciplines. * Hold a Ph.D. or have research experience in computer science, machine learning, or a related field. * Thrive in environments involving large-scale AI systems, and are excited to make use of OpenAI's unique resources in this area. * Possess 2+ years of research engineering experience and proficiency in Python or similar languages. 
* Are deeply curious. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 
Join us in shaping the future of technology.
    $88k-154k yearly est. 56d ago
  • Researcher, Trustworthy AI

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the team The Safety Systems team is responsible for a broad range of safety work to ensure our best models can be safely deployed to the real world to benefit society, and is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency. The Trustworthy AI team works on action-relevant and decision-relevant research to ensure we shape AGI with societal impacts in mind. This includes work on full-stack policy problems, such as building methods for public input into model values and understanding the impacts of anthropomorphism of AI. We aim to translate nebulous policy problems into technically tractable, measurable ones. We then use this work to inform and build interventions that increase societal readiness for increasingly intelligent systems. Our team also works on external assurances for AI, with the aim of increasing independent checks and forming additional layers of validation. About the role We are looking to hire exceptional research scientists and engineers who can push the rigor of work needed to increase societal readiness for AGI. Specifically, we are looking for those who will enable us to translate nebulous policy problems into technically tractable, measurable ones. This role is based in our San Francisco HQ. We offer relocation assistance to new employees. 
In this role, you will: * Set research directions and strategies to study the societal impacts of our models in an action-relevant manner and tie the findings back into model design. * Build creative methods and run experiments that enable public input into model values. * Increase the rigor of external assurances by turning external findings into robust evaluations. * Facilitate and grow our ability to effectively de-risk flagship model deployments in a timely manner. You might thrive in this role if you: * Are excited about OpenAI's mission of building safe, universally beneficial AGI and are aligned with OpenAI's charter. * Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use. * Possess 3+ years of research experience (industry or similar academic experience) and proficiency in Python or similar languages. * Thrive in environments involving large-scale AI systems and multimodal datasets. * Enjoy working on large-scale, difficult, and nebulous problems in a well-resourced environment. * Exhibit proficiency in the field of AI safety, focusing on topics like RLHF, adversarial training, robustness, and LLM evaluations. * Have past experience in interdisciplinary research. * Show enthusiasm for socio-technical topics. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 
Join us in shaping the future of technology.
    $88k-154k yearly est. 56d ago
  • Researcher, Alignment

    OpenAI 4.2 company rating

    Research scientist job at OpenAI

    About the Team The Alignment team at OpenAI is dedicated to ensuring that our AI systems are safe, trustworthy, and consistently aligned with human values, even as they scale in complexity and capability. Our work is at the cutting edge of AI research, focusing on developing methodologies that enable AI to robustly follow human intent across a wide range of scenarios, including those that are adversarial or high-stakes. We concentrate on the most pressing challenges, ensuring our work addresses areas where AI could have the most significant consequences. By focusing on risks that we can quantify and where our efforts can make a tangible difference, we aim to ensure that our models are ready for the complex, real-world environments in which they will be deployed. The two pillars of our approach are: (1) harnessing improved capabilities into alignment, making sure that our alignment techniques improve, rather than break, as capabilities grow, and (2) centering humans by developing mechanisms and interfaces that enable humans to both express their intent and effectively supervise and control AIs, even in highly complex situations. About the Role As a Research Engineer / Research Scientist on the Alignment team, you will be at the forefront of ensuring that our AI systems consistently follow human intent, even in complex and unpredictable scenarios. Your role will involve designing and implementing scalable solutions that keep AI systems aligned as their capabilities grow and that integrate human oversight into AI decision-making. This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees. We are seeking research engineers and research scientists to help design and implement experiments for alignment research. In this role, your responsibilities may include: * Develop and evaluate alignment capabilities that are subjective, context-dependent, and hard to measure. 
* Design evaluations to reliably measure risks and alignment with human intent and values. * Build tools and evaluations to study and test model robustness in different situations. * Design experiments to understand laws for how alignment scales as a function of compute, data, lengths of context and action, as well as resources of adversaries. * Design and evaluate new Human-AI-interaction paradigms and scalable oversight methods that redefine how humans interact with, understand, and supervise our models. * Train models to be calibrated on correctness and risk. * Design novel approaches for using AI in alignment research. You might thrive in this role if you: * Are a team player - willing to do a variety of tasks that move the team forward. * Have a PhD or equivalent research experience in computer science, computational science, data science, cognitive science, or similar fields. * Have strong engineering skills, particularly in designing and optimizing large-scale machine learning systems (e.g., PyTorch). * Have a deep understanding of the science behind alignment algorithms and techniques. * Can develop data visualization or data collection interfaces (e.g., TypeScript, Python). * Enjoy fast-paced, collaborative, and cutting-edge research environments. * Want to focus on developing AI models that are trustworthy, safe, and reliable, especially in high-stakes scenarios. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 
Join us in shaping the future of technology.
    $88k-154k yearly est. Auto-Apply 60d+ ago

Learn more about OpenAI jobs