
Research Engineer jobs at Pacific Northwest National Laboratory - 13765 jobs

  • Power Systems Research Engineer - Grid Modeling & Planning

    Pacific Northwest National Laboratory 4.5 company rating


    At PNNL, our core capabilities are divided among major departments that we refer to as Directorates within the Lab, each focused on a specific area of scientific research or another function, with its own leadership team and dedicated budget. Our Science & Technology Directorates include National Security, Earth and Biological Sciences, Physical and Computational Sciences, and Energy and Environment. In addition, the PNNL campus houses the Environmental Molecular Sciences Laboratory, a Department of Energy Office of Science user facility. The Energy and Environment Directorate (EED) delivers science and technology solutions for the nation's biggest energy and environmental challenges. Our more than 1,700 staff support the Department of Energy (DOE), delivering on key DOE mission areas including: modernizing our nation's power grid to maintain a reliable, affordable, secure, and resilient electricity delivery infrastructure; researching, developing, validating, and effectively utilizing renewable energy and efficiency technologies that improve the affordability, reliability, resiliency, and security of the American energy system; and resolving complex issues in nuclear science, energy, and environmental management. The Electricity Infrastructure and Buildings Division (EI&BD), part of EED, is accelerating the transition to an efficient, resilient, and secure energy system through basic and applied research. We leverage a strong technical foundation in power and energy systems and advanced data analytics to drive innovation, transform markets, and shape energy policy. Within this division, the Power System Modeling Group (PSMG) develops advanced simulation, analysis, and optimization tools to understand and enhance grid performance at all levels, from the bulk energy system to the distribution grid.
**Responsibilities**

PNNL seeks a creative and interdisciplinary power system engineer to develop model enhancements for conventional power system models (primarily production cost models and expansion planning models) to represent the changing system constraints and dependencies imposed by the rapid growth of our nation's energy system. The selected candidate is expected to participate in the development, deployment, and application of open-source power system models that integrate tools for capacity expansion, resource adequacy, production cost modeling, and power flow to answer complex questions across multiple modeling regimes. The successful candidate will use integrated, multi-scale models to explore and quantify the benefits of novel management and optimization strategies that coordinate electric power decisions in light of evolving policies and new load paradigms, and will use these tools to investigate how policies and incentives will impact our nation's electrical infrastructure and recommend strategies to mitigate future challenges.

Key Responsibilities: The successful candidate is expected to serve as a senior subject matter expert on projects within their field of specialization. The position will lead teams of junior scientists and engineers and will be responsible for delivering technical tasks with the quality, timeliness, and cost needed to meet project and client expectations. The desired engineer will:

+ Provide subject matter expertise within multi-disciplinary teams, with a focus on power system optimization, resource adequacy, reliable capacity expansion, and the related policies that must be enacted.
+ Provide electric system simulations grounded in a good working knowledge of grid topology and transmission planning, operational costs and optimization, system constraints, and financial analysis.
+ Lead model enhancement activities and participate in the design of simulation experiments, evaluation of technology performance, or development of scientific and engineered solutions to assess or solve a variety of technical problems.
+ Broadly apply theory to the development and implementation of new tools, including applications, algorithms, models, and visualization.
+ Publish research results, targeting peer-reviewed journals and other written reports.
+ Build strong teams for project execution that transfer knowledge through the organization.
+ Mentor junior staff and other engineers, and collaborate with senior-level staff within PNNL.
+ Travel to execute projects, build relationships, and grow leadership in the field.
+ Manage individual tasks or projects, including directing other staff, ensuring milestones are met, and communicating with senior management.
+ Ensure safe operating practices are followed.
+ Effectively communicate engineering challenges and solutions to a wide range of audiences.
+ Effectively communicate with senior engineers to ensure high-quality delivery of research projects on budget and on time.
+ Deal effectively with ambiguity.

**On-site in Richland, WA is strongly preferred; remote work may be considered.**

**Qualifications**

Minimum Qualifications:

+ BS/BA and 5+ years of relevant production cost and expansion planning model work experience, OR
+ MS/MA and 3+ years of relevant production cost and expansion planning model work experience, OR
+ PhD and 1+ years of relevant production cost and expansion planning model experience

Preferred Qualifications:

+ Experience using and/or contributing to the development of large-scale power system optimization models, with either commercial or open-source tools.
+ Understanding of power grid operations, transmission and resource planning, and long-term investment.
+ Knowledge of utility industry economics and business processes.
+ Ability to broadly apply principles, standards, theories, and concepts of systems engineering to the development and implementation of new tools, including applications, algorithms, models, and visualization techniques.
+ Experience in grid applications or software development, or active participation in related national and international forums for power system modeling.
+ Experience in stochastic optimization and dynamic programming.
+ Fluency in one or more programming or scripting languages (e.g., GAMS, Julia, Perl, Python).
+ Experience working on an interdisciplinary team.
+ Demonstrated success articulating research through publications, particularly in top-tier journals.

**About PNNL**

Pacific Northwest National Laboratory (PNNL) is a world-class research institution powered by a highly educated, diverse workforce committed to the values of Integrity, Creativity, Collaboration, Impact, and Courage. Every year, scores of dynamic, driven people come to PNNL to work with renowned researchers on meaningful science, innovations, and outcomes for the U.S. Department of Energy and other sponsors; here is your chance to be one of them! At PNNL, you will find an exciting research environment and excellent benefits, including health insurance and flexible work schedules. PNNL is located in eastern Washington State, the dry side of Washington known for its stellar outdoor recreation and affordable cost of living. The Lab's campus is only a 45-minute flight (or ~3-hour drive) from Seattle or Portland, and is served by the convenient PSC airport, connected to 8 major hubs.

**Commitment to Excellence and Equal Employment Opportunity**

Our laboratory is committed to fostering a work environment where all individuals are treated with fairness and respect while solving critical challenges in fundamental sciences, national security, and energy resiliency. We are an Equal Employment Opportunity employer.
Pacific Northwest National Laboratory (PNNL) is an Equal Opportunity Employer. PNNL considers all applicants for employment without regard to race, religion, color, sex, national origin, age, disability, genetic information (including family medical history), protected veteran status, and any other status or characteristic protected by federal, state, and/or local laws. We are committed to providing reasonable accommodations for individuals with disabilities and disabled veterans in our job application procedures and in employment. If you need assistance or an accommodation due to a disability, contact us at ****************.

**Drug Free Workplace**

PNNL is committed to a drug-free workplace supported by a Workplace Substance Abuse Program (WSAP) and complies with federal laws prohibiting the possession and use of illegal drugs. If you are offered employment at PNNL, you must pass a drug test prior to commencing employment. PNNL complies with federal law regarding illegal drug use; under federal law, marijuana remains an illegal drug. If you test positive for any illegal controlled substance, including marijuana, your offer of employment will be withdrawn.

**Security, Credentialing, and Eligibility Requirements**

As a national laboratory, PNNL is responsible for adhering to Homeland Security Presidential Directive 12 (HSPD-12) and Department of Energy (DOE) Order 473.1A, which require new employees to obtain and maintain an HSPD-12 Personal Identity Verification (PIV) Credential. To obtain this credential, new employees must successfully complete the applicable tier of federal background investigation post hire and receive a favorable federal adjudication. The tier of federal background investigation will be determined by job duties and the national security or public trust responsibilities associated with the job. All tiers of investigation include a declaration of illegal drug activities, including use, supply, possession, or manufacture within the last 1 to 7 years (depending on the applicable tier of investigation). Illegal drug activities include marijuana and cannabis derivatives, which are still considered illegal under federal law, regardless of state laws.

For foreign national candidates: If you have not resided in the U.S. for three consecutive years, you are not eligible for the PIV credential and instead will need to obtain a favorable Local Site Specific Only (LSSO) Federal risk determination to maintain employment. Once you meet the three-year residency requirement, you will be required to obtain a PIV credential to maintain employment. The tier of federal background investigation required to obtain the PIV credential will be determined by your job duties at the time you become eligible for the PIV credential.

**Mandatory Requirements**

Please be aware that the Department of Energy (DOE) prohibits DOE employees and contractors from having any affiliation with the foreign government of a country DOE has identified as a "country of risk" without explicit approval by DOE and Battelle. If you are offered a position at PNNL and currently have any affiliation with the government of one of these countries, you will be required to disclose this information and either recuse yourself of that affiliation or receive approval from DOE and Battelle prior to your first day of employment.
**Rockstar Rewards**

Employees and their families are offered medical insurance, dental insurance, vision insurance, robust telehealth care options, several mental health benefits, free wellness coaching, a health savings account, flexible spending accounts, basic life insurance, disability insurance*, an employee assistance program, business travel insurance, tuition assistance, relocation, backup childcare, legal benefits, supplemental parental bonding leave, surrogacy and adoption assistance, and fertility support. Employees are automatically enrolled in our company-funded pension plan* and may enroll in our 401(k) savings plan with company match*. Employees may accrue up to 120 vacation hours per year and may receive ten paid holidays per year. *Research Associates excluded. All benefits are dependent upon eligibility. Click Here For Rockstar Rewards (******************************************)

**Notice to Applicants**

PNNL lists the full pay range for the position in the job posting. Starting pay is calculated from the minimum of the pay range, and actual placement in the range is determined based on an individual's relevant job-related skills, qualifications, and experience. This approach is applicable to all positions, with the exception of positions governed by collective bargaining agreements and certain limited-term positions, which have specific pay rules. As part of our commitment to fair compensation practices, we do not ask for or consider current or past salaries in making compensation offers at hire. Instead, our compensation offers are determined by the specific requirements of the position, prevailing market trends, applicable collective bargaining agreements, pay equity for the position type, and individual qualifications and skills relevant to the performance of the position.

**Minimum Salary** USD $122,100.00/Yr. **Maximum Salary** USD $193,000.00/Yr.
    $122.1k-193k yearly 12d ago
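The production cost modeling described in this posting ultimately boils down to cost-minimizing dispatch under capacity constraints. As an illustrative sketch only (toy numbers, not PNNL's actual tools), the core merit-order idea for a single period can be written in a few lines of Python:

```python
def merit_order_dispatch(generators, demand):
    """Toy single-period production cost step: dispatch cheapest units first.

    generators: list of (marginal_cost_usd_per_mwh, capacity_mw) tuples.
    demand: load to serve, in MW. All figures here are illustrative.
    Returns (dispatch in input order, total cost).
    """
    # Sort generator indices by marginal cost (the "merit order").
    order = sorted(range(len(generators)), key=lambda i: generators[i][0])
    dispatch = [0.0] * len(generators)
    remaining = demand
    for i in order:
        cost, cap = generators[i]
        take = min(cap, remaining)  # fill each unit up to its capacity
        dispatch[i] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("insufficient capacity to serve demand")
    total_cost = sum(d * g[0] for d, g in zip(dispatch, generators))
    return dispatch, total_cost

# Three generators: ($/MWh marginal cost, MW capacity) -- made-up data.
gens = [(20.0, 400.0), (50.0, 200.0), (35.0, 300.0)]
dispatch, total = merit_order_dispatch(gens, demand=600.0)
# cheapest units fill first: dispatch == [400.0, 0.0, 200.0], total == 15000.0
```

Real production cost and expansion planning models add network constraints, unit commitment, and multi-period coupling, which is why they are posed as large-scale optimization problems in tools like GAMS or Julia rather than a greedy loop.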

  • Multimodal ML Research Engineer (LLMs & AI Agents)

    Apple Inc. 4.8 company rating

    Cupertino, CA jobs

    A leading technology company in Cupertino is seeking mid-level and junior researchers in machine learning. The role involves innovative research on Multimodal LLMs and AI Agents, collaboration with experts, and the possibility of publishing results. A PhD or MS in Computer Science or Engineering is required, alongside strong expertise in machine learning. Competitive compensation package includes base pay between $181,100 and $318,400, and a full range of benefits including stock options and educational reimbursement.
    $181.1k-318.4k yearly 1d ago
  • ML Safety Research Engineer

    Apple Inc. 4.8 company rating

    San Francisco, CA jobs

    San Francisco, California, United States - Machine Learning and AI

    Apple Services Engineering (ASE) powers many AI features across App Store, Music, Video, and more. We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating systemic biases and to maintain safe and trustworthy experiences across our AI tools and models.

    Description

    Our team, part of Apple Services Engineering, is looking for an ML Research Engineer to lead the design and continuous development of automated safety benchmarking methodologies. In this role, you will investigate how media-related agents behave, develop rigorous evaluation frameworks and techniques, and establish scientific standards for assessing the risks they pose and their safety performance. This role supports the development of scalable evaluation techniques that ensure our engineers have the right tools to assess candidate models and product features for responsible and safe performance. The capabilities you build will allow for the generation of benchmark datasets and evaluation methodologies for model and application outputs, at scale, enabling engineering teams to translate safety insights into actionable engineering and product improvements. This role blends deep technical expertise with strong analytical judgment to develop tools and capabilities for assessing and improving the behavior of advanced AI/ML models. You will work cross-functionally with Engineering and Project Managers, Product, and Governance teams to develop a suite of technologies that ensure AI experiences are reliable, safe, and aligned with human expectations. The successful candidate will take a proactive approach to working independently and collaboratively on a wide range of projects. You will work alongside a small but impactful team, collaborating with ML and data scientists, software developers, project managers, and other teams at Apple to understand requirements and translate them into scalable, reliable, and efficient evaluation frameworks.

    Responsibilities

    + Design scientifically grounded benchmarking methodologies covering multiple dimensions of responsibility and safety across several media and application marketplace use cases
    + Develop automated evaluation pipelines that collect, automatically judge, and analyze model outputs with respect to safety policies, at scale
    + Create and curate datasets, tasks, and feature usage scenarios that represent realistic and adversarial use cases across multiple languages, markets, and domains
    + Define and validate new metrics for complex phenomena such as multi-turn agentic interaction patterns
    + Apply statistical rigor and reproducibility to the above objectives
    + Work closely with engineering and research teams to translate experimental findings into actionable model improvements and safety mitigations
    + Publish internal reports and external papers
    + Monitor evolving industry practices and academic work to ensure benchmarks remain relevant

    Minimum Qualifications

    + Advanced degree (MS or PhD) in Computer Science, Software Engineering, or equivalent research/work experience
    + 1+ years of work experience, either as a postdoc or in industry
    + Strong research background in empirical evaluation, experimental design, or benchmarking
    + Strong proficiency in Python (pandas, NumPy, Jupyter, PyTorch, etc.)
    + Deep familiarity with software engineering workflows and developer tools
    + Experience working with or evaluating AI/ML models, preferably LLMs or program synthesis systems
    + Strong analytical and communication skills, including the ability to write clear reports
    + Experience working with large datasets, annotation tools, and model evaluation pipelines
    + Familiarity with evaluations specific to responsible AI and safety, hallucination detection, and/or model alignment concerns
    + Ability to design taxonomies, categorization schemes, and structured labeling frameworks
    + Ability to interpret unstructured data (text, transcripts, user sessions) and derive meaningful insights
    + Strong ability to stitch together qualitative and quantitative insights into actionable guidance, and to communicate complex architectures and systems to a variety of stakeholders
    + Education in Data Science, Linguistics, Cognitive Science, HCI, Psychology, Social Science, or a related field

    Preferred Qualifications

    + Publications in AI/ML evaluation or related fields
    + Experience with automated testing frameworks
    + Experience constructing human-in-the-loop or multi-turn evaluation setups
    + Intermediate or advanced proficiency in Swift
    + Familiarity with RAG systems, reinforcement learning, agentic architectures, and model fine-tuning
    + Expertise in designing annotation guidelines and validation instruments and techniques
    + Background in human factors, social science, and/or safety assessment methodologies

    At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $181,100 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.

    Note: Apple benefit, compensation, and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program. Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant. Apple accepts applications to this posting on an ongoing basis.
    $181.1k-272.1k yearly 4d ago
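The "collect, automatically judge, and analyze model outputs" pipeline this posting describes can be sketched in miniature. The following is a hedged illustration, not Apple's system: the keyword-based judge and the banned-term policy are placeholders where a real benchmark would plug in a trained classifier or LLM judge.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One benchmark item: a prompt and the model output to be judged."""
    prompt: str
    model_output: str

def keyword_judge(output: str, banned: set) -> bool:
    """Toy automated judge: passes an output unless it contains a banned term."""
    tokens = {t.strip(".,!?").lower() for t in output.split()}
    return not tokens & banned

def run_benchmark(cases, judge: Callable[[str], bool]) -> dict:
    """Judge every output and aggregate a pass rate, as an eval pipeline would."""
    results = [judge(c.model_output) for c in cases]
    return {
        "total": len(results),
        "passed": sum(results),
        "pass_rate": sum(results) / len(results) if results else 0.0,
    }

banned_terms = {"exploit"}  # placeholder safety policy
cases = [
    EvalCase("how do I patch my server?", "Apply the vendor update."),
    EvalCase("how do I break in?", "Here is an exploit you can run."),
]
report = run_benchmark(cases, lambda out: keyword_judge(out, banned_terms))
# report == {"total": 2, "passed": 1, "pass_rate": 0.5}
```

Production-scale versions of this loop replace the judge with model-based graders, add per-language and per-market slices, and attach statistical confidence intervals to the aggregated rates.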
  • Machine Learning Research Engineer, Text Generation

    Apple Inc. 4.8 company rating

    Cupertino, CA jobs

    Cupertino, California, United States - Machine Learning and AI

    Apple is where individual imaginations gather together, committing to the values that lead to great work. Every new product we build or service we create is the result of us making each other's ideas stronger. It's the diversity of our people and their thinking that inspires the innovation that runs through everything we do. When we bring everybody in, we can do the best work of our lives. Here, you'll do more than join something - you'll add something.

    Text generation is a key enabler of accelerated text input and intelligent interaction on Apple platforms. Our team is working on redefining user interaction with generative models for text generation. If you want to be part of an ambitious, organized, and collaborative team that ships user experiences with pioneering ML, partnered with the best UI designs, come join the Input Experience NLP team. Our work has been highlighted in multiple WWDC keynotes, including Intelligent Input in 2023 and Writing Tools and Smart Reply in 2024! We exemplify Apple's outstanding integration of hardware and software to create seamless input experiences. You will have the opportunity to go from building pioneering NLP models offline, to optimizing those models for different hardware backends, to building the user interfaces that make the experience magical. Our vision always includes a deep dedication to strengthening Apple's privacy policy by achieving all of the above on device with powerful ML.

    Description

    As a key pillar of Apple Intelligence, input experience will be the main area where you bring impact to billions of users with your ML expertise, engineering passion, and programming skills. You will work with a hard-working and dedicated set of outstanding ML and software engineers on a wide range of advanced text generation technologies, such as context-augmented text rewriting, safety-controlled text composition, free-form text transformation, and personalized smart interactions. Our team has worked in this area for years and owns the NLP and ML text input stack for keyboard input, including auto-correction and predictive typing, on all Apple platforms. We also work on full-stack ML applied to NLP and expose these key technologies across Apple, on device and to third-party applications through the Natural Language framework. If you want to amplify your strong ML and NLP skills into user experiences that will reach every person around you, this is the perfect opportunity!

    Minimum Qualifications

    + Strong machine learning fundamentals
    + Knowledge of ML techniques such as implementing basic optimizers, applying parameter tuning in model training and evaluation, and reproducing research experiments
    + Strong programming and communication skills
    + Ph.D. in CS/EE/Physics/Statistics or a related field (or a Masters with 4 years of proven experience)

    Preferred Qualifications

    + Familiarity with model compression algorithms, including quantization, pruning, and distillation, and experience optimizing large diffusion or language models
    + Experience with hardware architecture and software/hardware co-design
    + Experience deploying large ML models in real-world products
    + Active programming experience with high-quality code across complex and large repositories
    + Familiarity with common NLP algorithms and applications, including tokenization, language modeling, text decoding, and text classification
    + Experience with multi-modal modeling, and with presenting plans, progress, and results or demos regularly and concisely

    At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.

    Note: Apple benefit, compensation, and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program. Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant. Apple accepts applications to this posting on an ongoing basis.
    $147.4k-272.1k yearly 2d ago
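Among the model compression techniques the posting above lists, quantization is the easiest to show concretely. A minimal sketch of symmetric per-tensor int8 quantization (toy weights, plain Python, not any particular framework's implementation):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127].

    One scale is shared by the whole tensor (per-tensor quantization).
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

w = [0.02, -0.54, 1.27, -1.27]        # toy weight values
q, scale = quantize_int8(w)           # q == [2, -54, 127, -127]
w_hat = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
# the reconstruction error is bounded by roughly scale / 2
```

On-device deployment stacks typically refine this idea with per-channel scales, zero points for asymmetric ranges, and quantization-aware training to recover accuracy.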
  • Machine Learning Research Engineer - Large Language Models (LLMs), Siri Core Modeling

    Apple Inc. 4.8 company rating

    Cupertino, CA jobs

    Cupertino, California, United States - Machine Learning and AI

    Join the Siri Planner team as a Machine Learning Research Engineer and play a pivotal role in shaping the future of virtual assistants. You'll contribute to revolutionizing how millions of Siri users worldwide interact with their Apple devices by building groundbreaking AI solutions.

    Description

    As a Machine Learning Research Engineer, you will help drive initiatives that significantly advance Siri's natural language understanding and planning capabilities using innovative LLM technologies. Your work will include:

    + Developing innovative systems for synthetic training data generation and implementing strategies for the continuous optimization of model performance.
    + Designing and implementing agentic workflows and RAG systems to enhance Siri's capabilities.
    + Optimizing model performance for tool calling and reasoning tasks.
    + Actively staying at the forefront of academic and industry research in LLMs, NLP, and agentic systems, and translating novel insights into practical solutions.
    + Collaborating closely with a multidisciplinary team of researchers, software engineers, and product designers to seamlessly integrate AI innovations into the Siri user experience.

    Minimum Qualifications

    + Advanced degree (MSc/PhD) in Machine Learning, Computer Science, or a related quantitative field; or a BSc with 2+ years of relevant industry experience
    + Hands-on experience in machine learning engineering outside of research
    + Strong Python proficiency, including development, debugging, and design, coupled with extensive experience using ML frameworks (e.g., PyTorch, JAX, HuggingFace)
    + Excellent problem-solving, critical thinking, and interpersonal skills, with a collaborative attitude

    Preferred Qualifications

    + Proven hands-on experience in machine learning engineering for large-scale models, with a strong focus on generative AI, LLMs, Retrieval Augmented Generation (RAG), or agentic systems
    + Experience applying LLMs for synthetic data generation (e.g., for knowledge distillation) or applying reinforcement learning for post-training or fine-tuning of LLMs
    + A successful track record of building and deploying end-to-end ML data pipelines (data preparation, storage, training, and inference) in cloud or on-premise environments
    + Experience with training, fine-tuning, and deploying LLMs in production environments
    + Proficiency in evaluating LLMs for specific product tasks and performance metrics

    At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.

    Note: Apple benefit, compensation, and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program. Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant. Apple accepts applications to this posting on an ongoing basis.
    $147.4k-272.1k yearly 3d ago
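The RAG systems mentioned in the posting above follow one core loop: retrieve the documents most similar to the query, then hand them to the LLM as context. A deliberately tiny sketch (bag-of-words similarity standing in for a real dense encoder; all data invented):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real RAG system uses a dense encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents, k: int = 1):
    """Rank documents by similarity to the query and return the top k as context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "set a timer for ten minutes",
    "play the new album by my favorite artist",
    "what is the weather like in cupertino today",
]
context = retrieve("weather in cupertino", docs, k=1)
# context[0] is the weather document, which would be prepended to the LLM prompt
```

Production RAG replaces each piece: learned embeddings instead of word counts, an approximate nearest-neighbor index instead of a full sort, and a reranker before the context is assembled.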
  • LLM Research Engineer: RAG & Synthetic Data

    Apple Inc. 4.8 company rating

    Cupertino, CA jobs

    A leading tech company in Cupertino is seeking a Machine Learning Research Engineer to enhance Siri's capabilities through innovative AI solutions. This role involves developing systems for synthetic training data and optimizing model performance. The ideal candidate will possess a strong background in machine learning, particularly with large-scale models. Competitive compensation package includes salary range of $147,400 to $272,100, stock options, and comprehensive benefits.
    $147.4k-272.1k yearly 2d ago
  • ML Research Engineer: Data-Driven Foundation Models Innovation

    Apple Inc. 4.8 company rating

    Cupertino, CA jobs

    A leading technology company in Cupertino, California is seeking a Machine Learning Research Engineer. The role involves advancing foundational models to provide state-of-the-art capabilities for Apple Intelligence. Responsibilities include designing models, optimizing performance, and collaborating with cross-functional teams. Candidates should have a Master or PhD in a related field and be proficient in ML frameworks. The compensation package includes a salary range of $147,400 to $272,100, stock options, and comprehensive benefits.
    $147.4k-272.1k yearly 1d ago
  • ML Translation Research Engineer: Build NextGen MT & LLMs

    Apple Inc. 4.8 company rating

    San Francisco, CA jobs

    A leading technology firm based in San Francisco is looking for a Machine Learning Research Engineer to advance core machine translation technology. The ideal candidate will have experience in machine learning engineering, a strong software development background in C/C++, and familiarity with deep learning frameworks. Responsibilities include integrating models into production and optimizing performance. Competitive compensation, including stock options and extensive benefits, is offered in this innovative workplace.
    $151k-193k yearly est. 3d ago
  • Research Engineer - ML / Systems

    Epsilon 4.2company rating

    San Francisco, CA jobs

    About Us We're tackling one of healthcare's most critical challenges in medical imaging and diagnostics. Our company operates at the intersection of cutting‑edge AI and clinical practice, building technology that directly impacts patient outcomes. We've assembled one of the industry's most comprehensive and diverse medical imaging datasets and have a proven product‑market fit with a substantial customer pipeline already in place. Role Overview We're seeking a Research Engineer to bridge the gap between research and production, building ML infrastructure and data systems for medical imaging at scale. You'll own critical data pipelines that unify live production traffic with offline datasets, design storage solutions for multimodal medical data, and build training + inference infrastructure that enables our research team to iterate rapidly. This role requires someone who can move fluidly between model training, data engineering, ML systems, and production deployment. Key Responsibilities Build and optimize distributed ML infrastructure for training foundation models on large‑scale medical imaging datasets. Design and implement robust data pipelines to collect, process, and store large‑scale multimodal medical imaging data from both production traffic and offline sources. Build centralized data storage solutions with standardized formats (e.g., protobufs) that enable efficient retrieval and training across the organization. Create model inference pipelines and evaluation frameworks that work seamlessly across research experimentation and production deployment. Collaborate with researchers to rapidly prototype new ideas and translate them into production‑ready code. Own end‑to‑end delivery of ML systems from experimentation through deployment and monitoring. Qualifications 3+ years building ML infrastructure, data pipelines, or ML systems in production. Strong Python skills and expertise in PyTorch or JAX. 
Hands‑on experience with data pipeline technologies (e.g., Spark, Airflow, BigQuery, Snowflake, Databricks, Chalk) and schema design. Experience with distributed systems, cloud infrastructure (AWS/GCP), and containerization (Docker/Kubernetes). Track record of building scalable data systems and shipping production ML infrastructure. Ability to move quickly and handle competing priorities in a fast‑paced environment. Preferred Qualifications Experience with medical imaging formats (DICOM) and healthcare data standards. Background in distributed training frameworks (PyTorch Lightning, DeepSpeed, Accelerate). Familiarity with MLOps practices and model deployment pipelines. Experience with privacy‑preserving data systems and HIPAA compliance. Contributions to open‑source ML or data infrastructure projects. #J-18808-Ljbffr
    $124k-168k yearly est. 1d ago
  • Research Engineer

    Factory 4.7company rating

    San Francisco, CA jobs

    Factory is seeking innovative AI Engineers to design and integrate advanced AI and ML capabilities that revolutionize productivity and accelerate innovation within software organizations. What you will do and achieve: Design, develop, and deploy AI-driven agentic systems that enhance Factory's core AI capabilities, focusing on retrieval systems, code generation evaluation, agentic user experience development, and agent backend architecture. Collaborate with product and engineering teams to incorporate emerging AI research findings into practical, customer-centric solutions. Engage in iterative development processes, rapidly prototyping and refining AI functionalities based on real-world feedback to meet the evolving needs of developers and enterprises. Take the initiative in identifying and addressing product stability and scalability challenges, ensuring our platform reliably supports large-scale operational demands. Qualifications: 2+ years of hands-on experience in AI/ML engineering roles after earning a Bachelor's or Master's degree in Computer Science, Engineering, AI, or a related technical field. Demonstrated proficiency with LLMs and a track record of applying AI/ML techniques to solve complex, unstructured problems, ideally within the domain of software engineering or related areas. Self-motivated and autonomous, with a proven ability to navigate ambiguous research projects from conception to impactful deployment. Strong communication skills that enable you to articulate technical concepts and strategies clearly at all levels, coupled with a strategic mindset oriented towards innovation. Experience with data-intensive applications and familiarity with the software development lifecycle are highly desirable, though not mandatory. The team goes into the office 5 days a week in San Francisco (walking distance to Caltrain). #J-18808-Ljbffr
    $123k-167k yearly est. 2d ago
  • Decentralized AI LLM Research Engineer

    Near Inc. 4.6company rating

    San Francisco, CA jobs

    A leading AI research organization is seeking a Research Engineer in San Francisco to drive innovations in decentralized AI. You will design and execute fine-tuning pipelines for large language models, conduct novel research, and collaborate with a global team. Ideal candidates will have strong experience in ML training and a passion for advancing decentralized AI ecosystems. The position offers opportunities to contribute to open-source projects and engage in meaningful AI research. #J-18808-Ljbffr
    $123k-173k yearly est. 2d ago
  • AI Research Engineer US - San Francisco

    Near Inc. 4.6company rating

    San Francisco, CA jobs

    The NEAR AI team is building decentralized and confidential machine learning infrastructure to enable user-owned AI. Our mission is to make open-source AI truly accessible by bringing together researchers worldwide to advance model research, fine-tuning, and large-scale training. We are seeking a Research Engineer with a strong background in large language models (LLMs) and reasoning systems to help push the boundaries of AI innovation. What You'll Be Doing Designing and executing fine-tuning pipelines for open-weight LLMs across diverse use cases, from concept to deployment. Conducting research on novel reasoning architectures and training methods to improve model performance. Collaborating with researchers and engineers across institutions to accelerate progress in AI research. Contributing to open-source projects and advancing best practices in decentralized AI development. What We're Looking For Strong hands-on experience in LLM training, fine-tuning, and evaluation. Deep understanding of reinforcement learning (RL), particularly in the context of reasoning. Strong problem-solving skills and the ability to communicate technical ideas clearly to diverse collaborators. A passion for advancing open and decentralized AI ecosystems. We'd Love If You Have Experience with formal verification. Experience with trusted execution environments (TEEs). Contributions to open-source machine learning libraries. Please let us know if you require any accommodations for your interview and we'll do our best to accommodate. Locations: San Francisco. Apply for this job #J-18808-Ljbffr
    $123k-173k yearly est. 2d ago
  • Research Engineer, Frontier AI Evaluations in Finance

    Openai 4.2company rating

    San Francisco, CA jobs

    A leading AI research firm in San Francisco seeks a Research Engineer to evaluate model capabilities in finance. The candidate should possess strong analytical skills, be detail-oriented, and thrive in a dynamic environment. Responsibilities include identifying significant financial workflows and refining AI evaluations. A passion for AGI/ASI measurement and proficiency in Excel are essential. Join us to contribute to advancing AI technology responsibly. #J-18808-Ljbffr
    $124k-173k yearly est. 3d ago
  • Research Engineer, Privacy

    Openai 4.2company rating

    San Francisco, CA jobs

    About the Team The Privacy Engineering Team at OpenAI is committed to integrating privacy as a foundational element in OpenAI's mission of advancing Artificial General Intelligence (AGI). Our focus is on all OpenAI products and systems handling user data, striving to uphold the highest standards of data privacy and security. We build essential production services, develop novel privacy-preserving techniques, and equip cross-functional engineering and research partners with the necessary tools to ensure responsible data use. Our approach to prioritizing responsible data use is integral to OpenAI's mission of safely introducing AGI that offers widespread benefits. About the Role As a part of the Privacy Engineering Team, you will work on the frontlines of safeguarding user data while ensuring the usability and efficiency of our AI systems. You will help us understand and implement the latest research in privacy-enhancing technologies such as differential privacy and federated learning, as well as defenses against data memorization. Moreover, you will focus on investigating the interaction between privacy and machine learning, developing innovative techniques to improve data anonymization, and preventing model inversion and membership inference attacks. This position is located in San Francisco. Relocation assistance is available. In this role, you will: Design and prototype privacy-preserving machine-learning algorithms (e.g., differential privacy, secure aggregation, federated learning) that can be deployed at OpenAI scale. Measure and strengthen model robustness against privacy attacks such as membership inference, model inversion, and data memorization leaks-balancing utility with provable guarantees. Develop internal libraries, evaluation suites, and documentation that make cutting‑edge privacy techniques accessible to engineering and research teams. 
Lead deep‑dive investigations into the privacy-performance trade‑offs of large models, publishing insights that inform model‑training and product‑safety decisions. Define and codify privacy standards, threat models, and audit procedures that guide the entire ML lifecycle-from dataset curation to post‑deployment monitoring. Collaborate across Security, Policy, Product, and Legal to translate evolving regulatory requirements into practical technical safeguards and tooling. You might thrive in this role if you: Have hands‑on research or production experience with PETs. Are fluent in modern deep‑learning stacks (PyTorch/JAX) and comfortable turning cutting‑edge papers into reliable, well‑tested code. Enjoy stress‑testing models-probing them for private data leakage-and can explain complex attack vectors to non‑experts with clarity. Have a track record of publishing (or implementing) novel privacy or security work and relish bridging the gap between academia and real‑world systems. Thrive in fast‑moving, cross‑disciplinary environments where you alternate between open‑ended research and shipping production features under tight deadlines. Communicate crisply, document rigorously, and care deeply about building AI systems that respect user privacy while pushing the frontiers of capability. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's affirmative action and equal employment opportunity policy statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act for U.S.‑based candidates. For Unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non‑public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non‑compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. OpenAI global applicant privacy policy. At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. 
Join us in shaping the future of technology. #J-18808-Ljbffr
    $124k-173k yearly est. 4d ago
  • Research Engineer, Notifications

    Openai 4.2company rating

    San Francisco, CA jobs

    About the Team The ChatGPT team works across research, engineering, product, and design to bring OpenAI's technology to the world. We seek to learn from deployment and broadly distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. We aim to make our innovative tools globally accessible, transcending geographic, economic, or platform barriers. Our commitment is to facilitate the use of AI to enhance lives, fostered by rigorous insights into how people use our products. About the Role We are looking for a Machine Learning Engineer to join our Notifications team, focused on building and scaling intelligent notification systems that provide real value to users even when they are not actively on the product. This role will be central to shaping how ChatGPT communicates proactively and helpfully with users-surfacing the right content, at the right time, through the right channel. You will work on designing and implementing ranking and recommendation systems that leverage both classical ML techniques and large language models (LLMs) to optimize notification relevance, timeliness, and user experience. The ideal candidate has strong ML fundamentals, experience shipping ranking or recommendation systems in production, exposure to LLMs, and sharp product intuition. You should be comfortable operating across research and product boundaries: thinking from first principles, running rigorous experiments, and building scalable systems. This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees. In this role, you will: Design and build end-to-end ranking and recommendation systems for notifications, from modeling to evaluation and deployment. Apply and adapt LLMs to ranking problems, including prompt-based approaches and fine-tuning. 
Develop experiments to evaluate notification relevance, user value, and long-term impact, working closely with product, data science, and engineering teams. Collaborate with research teams to leverage the latest modeling techniques while balancing practical constraints of production systems. Build robust offline and online evaluations to measure system improvements and user outcomes. Contribute to the broader Growth ML stack and help set technical direction for intelligent user engagement systems. You might thrive in this role if you: Have hands-on experience building and deploying ranking, recommendation, or personalization systems at scale. Have a deep understanding of machine learning and its applications, with exposure to LLMs and their integration into product experiences. Are experienced in experimentation, A/B testing, and analyzing impact on user behavior and business metrics. Are comfortable diving into large ML codebases, designing evaluations, and debugging complex modeling issues. Thrive in dynamic, fast-changing environments and can think from first principles to design elegant solutions. Have strong product intuition and can balance modeling sophistication with product impact. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. 
OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. OpenAI Global Applicant Privacy Policy. At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology. #J-18808-Ljbffr
    $124k-173k yearly est. 5d ago
  • Research Engineer, Codex

    Openai 4.2company rating

    San Francisco, CA jobs

    The Codex team is responsible for building state-of-the-art AI systems that can write code, reason about software, and act as intelligent agents for developers and non-developers alike. Our mission is to push the frontier of code generation and agentic reasoning, and deploy these capabilities in real-world products such as ChatGPT and the API, as well as in next-generation tools specifically designed for agentic coding. We operate across research, engineering, product, and infrastructure-owning the full lifecycle of experimentation, deployment, and iteration on novel coding capabilities. About the Role As a member of the Codex team, you will advance the capabilities, performance, and reliability of AI coding models through a combination of research, experimentation, and system optimization. You'll collaborate with world-class researchers and engineers to develop and deploy systems that help millions of users write better code, faster-while also ensuring these systems are efficient, cost-effective, and production-ready. We're looking for people who combine deep curiosity, strong technical fundamentals, and a bias toward impact. Whether your strengths lie in ML research, systems engineering, or performance optimization, you'll play a pivotal role in pushing the state of the art and bringing these advances into the hands of real users. This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees. In this role, you might: Design and run experiments to improve code generation, reasoning, and agentic behavior in Codex models. Develop research insights into model training, alignment, and evaluation. Hunt down and address inefficiencies across the Codex system stack-from agent behavior to LLM inference to container orchestration-and land high-leverage performance improvements. Build tooling to measure, profile, and optimize system performance at scale. 
Work across the stack to prototype new capabilities, debug complex issues, and ship improvements to production. You might thrive in this role if you: Are excited to explore and push the boundaries of large language models, especially in the domain of software reasoning and code generation. Have strong software engineering skills and enjoy quickly turning ideas into working prototypes. Think holistically about performance, balancing speed, cost, and user experience. Bring creativity and rigor to open-ended research problems and thrive in highly iterative, ambiguous environments. Have experience operating across both ML systems and cloud infrastructure. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. 
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology. #J-18808-Ljbffr
    $124k-173k yearly est. 3d ago
  • LLM Architecture Research Engineer

    Openai 4.2company rating

    San Francisco, CA jobs

    A leading AI research company in San Francisco seeks a talented architecture team member to enhance its flagship models. Ideal candidates will have a strong understanding of LLM architectures and experience with deep learning. The role offers a hybrid work model of 3 days in the office, alongside relocation assistance. This position presents an exciting opportunity to contribute to groundbreaking AI advancements. #J-18808-Ljbffr
    $124k-173k yearly est. 1d ago
  • Research Engineer, Frontier Evals & Environments - Finance

    Openai 4.2company rating

    San Francisco, CA jobs

    The Frontier Evals team builds north star model evaluations to drive progress towards safe AGI/ASI. This team builds ambitious evaluations to measure and steer our models, and creates self-improvement loops to steer our training, safety, and launch decisions. Some of the team's open-sourced evaluations include SWE-bench Verified, MLE-bench, PaperBench, and SWE-Lancer, and the team built and ran frontier evaluations for GPT-4o, o1, o3, GPT-4.5, ChatGPT Agent, and GPT-5. If you are interested in feeling firsthand the fast progress of our models, and steering them towards good, this is the team for you. About you We seek exceptional research engineers who can push the boundaries of our frontier models in the finance domain. We are looking for those who will help shape AI evaluations of financial reasoning and related capabilities, and will own individual threads within this endeavor end-to-end. Responsibilities Identify important model capabilities, skills, and behaviors that are crucial to financial workflows, and design methods to quantify performance in these areas Own and pursue a research agenda to identify an important model capability (especially as it relates to financial reasoning) and build evals to measure it Continuously refine evaluations of frontier AI models to assess the extent of frontier capabilities Requirements Have strong engineering and statistical analysis skills (with at least 2-3 years of full-time technical experience) Be passionate about Excel spreadsheets and/or finance Be detail-oriented and thorough Be a team player / willing to do a variety of tasks to move the team forward Be passionate and knowledgeable about AGI/ASI measurement Be able to operate effectively in a dynamic and extremely fast-paced research environment as well as scope and deliver projects end-to-end Nice to have Prior background / domain expertise in finance, especially investment banking or private equity (e.g., through internships, prior jobs) An ability to work 
cross-functionally. Excellent communication skills. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. 
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. OpenAI Global Applicant Privacy Policy At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology. #J-18808-Ljbffr
    $124k-173k yearly est. 3d ago
  • Product Engineer: Build Developer-First AI Tools

    Cognition 4.2company rating

    San Francisco, CA jobs

    A cutting-edge AI lab in San Francisco is seeking experienced end-to-end engineers to join their small, talent-dense team. The role involves building innovative software products, enhancing user experiences, and collaborating closely with product teams. Ideal candidates should have experience with Python, TypeScript, and React. Strong achievers who thrive in fast-moving environments are encouraged to apply. #J-18808-Ljbffr
    $90k-126k yearly est. 4d ago
  • Power Systems Research Engineer - Energy Infrastructure Modeling

    Pacific Northwest National Laboratory 4.5company rating

    Research engineer job at Pacific Northwest National Laboratory

    At PNNL, our core capabilities are divided among major departments that we refer to as Directorates within the Lab, focused on a specific area of scientific research or other function, with its own leadership team and dedicated budget. Our Science & Technology directorates include National Security, Earth and Biological Sciences, Physical and Computational Sciences, and Energy and Environment. In addition, we have an Environmental Molecular Sciences Laboratory, a Department of Energy, Office of Science user facility housed on the PNNL campus. The Energy and Environment Directorate (EED) delivers science and technology solutions for the nation's biggest energy and environmental challenges. Our more than 1,700 staff support the Department of Energy (DOE), delivering on key DOE mission areas including: modernizing our nation's power grid to maintain a reliable, affordable, secure, and resilient electricity delivery infrastructure; research, development, validation, and effective utilization of renewable energy and efficiency technologies that improve the affordability, reliability, resiliency, and security of the American energy system; and resolving complex issues in nuclear science, energy, and environmental management. The Electricity Infrastructure and Buildings Division (EI&BD), part of EED, is accelerating the transition to an efficient, resilient, and secure energy system through basic and applied research. We leverage a strong technical foundation in power and energy systems and advanced data analytics to drive innovation, transform markets, and shape energy policy. Within this division, the Power System Modeling Group (PSMG) develops advanced simulation, analysis, and optimization tools to understand and enhance grid performance across all levels, from the bulk energy system to the distribution grid. 
**Responsibilities**

PNNL seeks a creative, interdisciplinary power system engineer to develop model enhancements for conventional power system models (primarily production cost and expansion planning models) that represent the changing system constraints and dependencies imposed by the rapid growth of our nation's energy system. The selected candidate is expected to participate in science-based, mission-focused research to develop complex solutions that integrate a wide range of power system tools to enhance the security of our nation's energy system. The successful candidate will use integrated, multi-scale models spanning capacity expansion, production cost modeling, and power flow to explore and quantify the impacts of real-world events on energy system performance, and will use these tools to develop and investigate mitigation techniques that affordably improve the reliability and resilience of critical energy infrastructure.

Key Responsibilities:

The successful candidate is expected to serve as a senior subject matter expert on projects within their field of specialization. The position will lead teams of junior scientists and engineers and will be responsible for delivering technical tasks with the quality, timeliness, and cost required to meet project and client expectations. The desired engineer will:

+ Provide subject matter expertise within multi-disciplinary teams, with a focus on power system optimization, resource adequacy, reliable capacity expansion, and the related policies that must be enacted.
+ Perform electric system simulations with a good working knowledge of grid topology and transmission planning, operational costs and optimization, system constraints, and financial analysis.
+ Lead model enhancement activities and participate in the design of simulation experiments, evaluation of technology performance, and development of scientific and engineering solutions to assess or solve a variety of technical problems.
+ Broadly apply theory to the development and implementation of new tools, including applications, algorithms, models, and visualizations.
+ Publish research results, targeting peer-reviewed journals and other written reports.
+ Build strong teams for project execution that transfer knowledge through the organization.
+ Mentor junior staff and other engineers, and collaborate with senior-level staff from within PNNL.
+ Travel to execute projects, build relationships, and grow leadership in the field.
+ Manage individual tasks or projects, including directing other staff, ensuring milestones are met, and communicating with senior management.
+ Ensure safe operating practices are followed.
+ Effectively communicate engineering challenges and solutions to a wide range of audiences.
+ Effectively communicate with senior engineers to ensure high-quality delivery of research projects on budget and on time.
+ Navigate ambiguity effectively.

**On-site in Richland, WA is strongly preferred; remote work may be considered.**

**Qualifications**

Minimum Qualifications:

+ BS/BA and 5+ years of relevant production cost and expansion planning model work experience, OR
+ MS/MA and 3+ years of relevant production cost and expansion planning model work experience, OR
+ PhD and 1+ years of relevant production cost and expansion planning model experience

Preferred Qualifications:

+ Experience using and/or contributing to the development of large-scale power system optimization models, using either commercial or open-source tools.
+ Understanding of power grid operations, transmission and resource planning, and long-term investment.
+ Knowledge of utility industry economics and business processes.
+ Ability to broadly apply principles, standards, theories, and concepts of systems engineering to the development and implementation of new tools, including applications, algorithms, models, and visualization techniques.
+ Experience in grid applications, software development, or active participation in related national and international forums for power system modeling.
+ Experience in stochastic optimization and dynamic programming.
+ Fluency in one or more programming or scripting languages (e.g., GAMS, Julia, Perl, Python).
+ Experience working on an interdisciplinary team.
+ Demonstrated success in articulating research through publications, particularly in top-tier journals.

**Additional Information**

This position requires the ability to obtain and maintain a federal security clearance. A security clearance background investigation includes a review of your employment, education, financial, and criminal history, as well as interviews with you and your personal references, neighbors, and co-workers to determine trustworthiness, reliability, and loyalty to the United States. The investigation also examines your foreign connections, drug and alcohol use, foreign influence, and overall conduct.

Requirements:

+ U.S. Citizenship
+ Background Investigation: Applicants selected will be subject to a federal background investigation and must meet eligibility requirements for access to classified matter in accordance with 10 CFR 710, Appendix B.
+ Drug Testing: All security clearance positions are Testing Designated Positions, which means that the applicant selected for hire is subject to pre-employment drug testing and post-employment random drug testing. In addition, applicants must be able to demonstrate non-use of illegal drugs, including marijuana, for the 12 consecutive months preceding completion of the requisite Questionnaire for National Security Positions (QNSP). Note: Applicants will be considered ineligible for security clearance processing by the U.S. Department of Energy if non-use of illegal drugs, including marijuana, for 12 months cannot be demonstrated.

**Testing Designated Position**

This position is a Testing Designated Position (TDP).
The candidate selected for this position will be subject to pre-employment and random drug testing for illegal drugs, including marijuana, consistent with the Controlled Substances Act and the PNNL Workplace Substance Abuse Program.

**About PNNL**

Pacific Northwest National Laboratory (PNNL) is a world-class research institution powered by a highly educated, diverse workforce committed to the values of Integrity, Creativity, Collaboration, Impact, and Courage. Every year, scores of dynamic, driven people come to PNNL to work with renowned researchers on meaningful science, innovations, and outcomes for the U.S. Department of Energy and other sponsors; here is your chance to be one of them! At PNNL, you will find an exciting research environment and excellent benefits, including health insurance and flexible work schedules. PNNL is located in eastern Washington State, the dry side of Washington known for its stellar outdoor recreation and affordable cost of living. The Lab's campus is only a 45-minute flight (or roughly a 3-hour drive) from Seattle or Portland and is served by the convenient PSC airport, connected to 8 major hubs.

**Commitment to Excellence and Equal Employment Opportunity**

Our laboratory is committed to fostering a work environment where all individuals are treated with fairness and respect while solving critical challenges in fundamental science, national security, and energy resiliency. Pacific Northwest National Laboratory (PNNL) is an Equal Employment Opportunity employer. PNNL considers all applicants for employment without regard to race, religion, color, sex, national origin, age, disability, genetic information (including family medical history), protected veteran status, and any other status or characteristic protected by federal, state, and/or local laws.
We are committed to providing reasonable accommodations for individuals with disabilities and disabled veterans in our job application procedures and in employment. If you need assistance or an accommodation due to a disability, contact us at ****************.

**Drug Free Workplace**

PNNL is committed to a drug-free workplace supported by the Workplace Substance Abuse Program (WSAP) and complies with federal laws prohibiting the possession and use of illegal drugs. If you are offered employment at PNNL, you must pass a drug test prior to commencing employment. PNNL complies with federal law regarding illegal drug use; under federal law, marijuana remains an illegal drug. If you test positive for any illegal controlled substance, including marijuana, your offer of employment will be withdrawn.

**Security, Credentialing, and Eligibility Requirements**

As a national laboratory, PNNL is responsible for adhering to Homeland Security Presidential Directive 12 (HSPD-12) and Department of Energy (DOE) Order 473.1A, which require new employees to obtain and maintain an HSPD-12 Personal Identity Verification (PIV) credential. To obtain this credential, new employees must successfully complete the applicable tier of federal background investigation post hire and receive a favorable federal adjudication. The tier of federal background investigation will be determined by job duties and the national security or public trust responsibilities associated with the job. All tiers of investigation include a declaration of illegal drug activities, including use, supply, possession, or manufacture within the last 1 to 7 years (depending on the applicable tier of investigation). Illegal drug activities include marijuana and cannabis derivatives, which are still considered illegal under federal law, regardless of state laws.

For foreign national candidates: If you have not resided in the U.S.
for three consecutive years, you are not eligible for the PIV credential and will instead need to obtain a favorable Local Site Specific Only (LSSO) federal risk determination to maintain employment. Once you meet the three-year residency requirement, you will be required to obtain a PIV credential to maintain employment. The tier of federal background investigation required to obtain the PIV credential will be determined by your job duties at the time you become eligible for the PIV credential.

**Mandatory Requirements**

Please be aware that the Department of Energy (DOE) prohibits DOE employees and contractors from having any affiliation with the foreign government of a country DOE has identified as a "country of risk" without explicit approval from DOE and Battelle. If you are offered a position at PNNL and currently have any affiliation with the government of one of these countries, you will be required to disclose this information and either recuse yourself of that affiliation or receive approval from DOE and Battelle prior to your first day of employment.

**Rockstar Rewards**

Employees and their families are offered medical insurance, dental insurance, vision insurance, robust telehealth care options, several mental health benefits, free wellness coaching, a health savings account, flexible spending accounts, basic life insurance, disability insurance*, an employee assistance program, business travel insurance, tuition assistance, relocation assistance, backup childcare, legal benefits, supplemental parental bonding leave, surrogacy and adoption assistance, and fertility support. Employees are automatically enrolled in our company-funded pension plan* and may enroll in our 401(k) savings plan with company match*. Employees may accrue up to 120 vacation hours per year and may receive ten paid holidays per year.

* Research Associates excluded. All benefits are dependent upon eligibility.
Click Here For Rockstar Rewards (******************************************)

**Notice to Applicants**

PNNL lists the full pay range for the position in the job posting. Starting pay is calculated from the minimum of the pay range, and actual placement in the range is determined by an individual's relevant job-related skills, qualifications, and experience. This approach applies to all positions, with the exception of positions governed by collective bargaining agreements and certain limited-term positions, which have specific pay rules. As part of our commitment to fair compensation practices, we do not ask for or consider current or past salaries in making compensation offers at hire. Instead, our compensation offers are determined by the specific requirements of the position, prevailing market trends, applicable collective bargaining agreements, pay equity for the position type, and individual qualifications and skills relevant to the performance of the position.

**Minimum Salary** USD $122,100.00/Yr.

**Maximum Salary** USD $193,000.00/Yr.
