Security Engineer jobs at Rockwell Automation - 3448 jobs
Lead AI Security Architect
Rockwell Automation, Inc. 4.4
Rockwell Automation is a global technology leader focused on helping the world's manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers that take pride in how the work we do changes the world for the better.
We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that's you, we would love to have you join us!
Position Summary:
The Lead AI Security Architect defines and drives the security architecture strategy for artificial intelligence (AI) and machine learning (ML) systems across the enterprise. You will design the security of AI-enabled platforms, data pipelines, and models with corporate cybersecurity, privacy, and compliance frameworks in mind. You will serve as a subject-matter expert, ensuring AI projects follow secure-by-design principles while mitigating risks associated with generative AI, large language models (LLMs), and autonomous systems.
You will balance innovation in AI against risk, influence the secure development of AI systems, and ensure responsible adoption of advanced technologies across the enterprise.
Your Responsibilities:
Strategy & Architecture
* Develop the enterprise AI security architecture. Align it with our goals, AI governance frameworks (e.g., NIST AI RMF, ISO/IEC 42001), and cybersecurity standards (e.g., NIST CSF, ISO 27001, IEC 62443).
* Define secure architectures for AI/ML model development, deployment, and integration with enterprise data and cloud platforms.
* Establish security reference architectures for GenAI, LLMOps, MLOps, and AI-driven automation.
Risk Management & Governance
* Conduct AI threat modeling, risk assessments, and red teaming for AI/ML systems.
* Find and address AI-specific risks such as model inversion, prompt injection, data poisoning, and adversarial attacks.
* Support compliance with the latest AI security and ethics regulations (e.g., EU AI Act, U.S. Executive Orders on AI, sector-specific standards).
Technical Leadership
* Guide data scientists and developers on implementing secure model training, validation, and inference pipelines.
* Partner with enterprise architects to integrate AI trust controls (authenticity, traceability, explainability, and accountability) into platforms and services.
* Evaluate and deploy AI security tools for model protection, data governance, and AI behavior monitoring.
Collaboration & Enablement
* Collaborate with product security, DevSecOps, and data engineering teams to embed AI security into the SDLC and CI/CD pipelines.
* Work with legal, risk, and compliance teams to establish AI acceptable use, data residency, and model governance policies.
* Lead security reviews and architecture boards for AI-enabled projects.
Innovation & Continuous Improvement
* Stay current on AI cybersecurity research, frameworks, and the latest AI threats.
* Develop best practices and strategies for responsible AI security and assurance.
* Mentor junior architects and engineers in AI and cybersecurity principles.
Core Competencies
* Technical depth in both cybersecurity and AI domains.
* Ability to translate complex concepts to executives and technical teams both verbally and in writing.
* Expertise in emerging AI security trends and best practices.
* Collaborative and mentoring approach with cross-functional teams.
The Essentials - You Will Have:
* Bachelor's Degree or equivalent years of relevant work experience.
* Legal authorization to work in the U.S. We will not sponsor individuals for employment visas, now or in the future, for this job opening.
* Ability to travel up to 10%.
The Preferred - You Might Also Have:
* Typically requires 12+ years of relevant experience in cybersecurity architecture.
* 3+ years focused on AI/ML or data science security.
* Advanced degree in Computer Science, Engineering, Cybersecurity, or related field
* Experience with AI/ML pipelines, MLOps, Model Context Protocol (MCP), Agentic Identity, and cloud-native architectures (AWS SageMaker, Azure ML, GCP Vertex AI).
* Expertise in data protection, identity and access management, encryption, and secure software development.
* Knowledge of AI threat landscapes, adversarial machine learning, and model integrity protection.
* Experience with compliance frameworks such as NIST AI RMF, ISO/IEC 42001, and data privacy regulations (GDPR, CCPA).
* Professional certifications such as CISSP, CISM, CCSP and enterprise architecture certifications.
* AI/ML certifications (e.g., TensorFlow, AWS ML Specialty, Microsoft Azure AI Engineer).
* Hands-on experience with secure LLM deployments and GenAI security testing.
* Experience in OT or industrial AI environments (IEC 62443 knowledge).
What We Offer:
* Health Insurance including Medical, Dental and Vision
* 401k
* Paid Time off
* Parental and Caregiver Leave
* Flexible Work Schedule: work with your manager to set a schedule that fits your personal life.
* To learn more about our benefits package, please visit ********************
At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles.
#LI-Hybrid
#LI-LifeAtROK
#LI-MG4
We are an Equal Opportunity Employer including disability and veterans.
If you are an individual with a disability and you need assistance or a reasonable accommodation during the application process, please contact our services team at *****************.
Rockwell Automation's hybrid policy is that employees are expected to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.
$95k-124k yearly est. 33d ago
Senior Network & Security Consultant - Remote
Thales Group 4.5
Palo Alto, CA
A leading global security company in California is seeking a Consultant for Network & Security. Key responsibilities include acting as a technical advisor, defining architecture for solutions, and supporting operational teams. Ideal candidates should have expertise in firewalls, encryption, and IT security practices. The role offers hybrid working conditions and emphasizes continuous learning and coaching.
$104k-138k yearly est. 5d ago
Staff Product Security Engineer
Databricks Inc. 3.8
San Francisco, CA
RDQ226R605; This role can be based remotely anywhere in the United States.
The Product Security Team's mission is to left-shift SDLC (Security Development Lifecycle) processes for all code written at Databricks (whether for customer use or for supporting customers internally), reducing the likelihood of introducing new vulnerabilities into production and minimizing the count and impact of externally identified vulnerabilities in Databricks services.
You will be an individual contributor on the product security team at Databricks, managing SDLC functions for features and products within Databricks. This would include, but is not limited to, security design reviews, threat models, manual code reviews, exploit writing and exploit chain creation. You will also support IR and VRP programs when there is a vulnerability report or a product security incident. You will work with a global team, spread across various locations in the US and EMEA.
The impact you will have:
Full SDLC Support for new product features being developed in ENG and non-ENG teams. This would include Threat Modeling, Design Review, Manual Code Review, Exploit writing, etc.
Work with other security teams to provide support for Incident Response and Vulnerability Response as and when needed.
Work with the results of SAST tools to help evaluate and identify false positives and file defects for real issues.
Work on DAST tools and related automation for auto-assessment and defect filing.
Maintain the automation framework and add new features as needed to support the compliance programs Databricks may pursue, such as FedRAMP, PCI, and HIPAA.
Prioritize security from a risk management perspective, rather than an absolute textbook version.
Help develop and implement security processes to improve the overall productivity of the product security organization and the SDLC process in general
What we look for:
3-10 years of experience with the threat modeling process and the ability to find design problems from a block diagram of data flow.
Solid understanding of at least two of the following domains: Web Security, Cloud Security, Systems Security, and Applied Cryptography.
Strong scripting and automation skills for exploit development.
Fuzzing skills are good to have.
Exploit-writing skills are a strong plus.
Zone 1 Pay Range
$178,200 - $249,450 USD
Zone 2 Pay Range
$160,300 - $224,425 USD
Zone 3 Pay Range
$151,400 - $212,000 USD
Zone 4 Pay Range
$142,500 - $199,500 USD
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide - including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 - rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit ****************************************
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
$178.2k-249.5k yearly 2d ago
Senior Security Engineer, Spam & Abuse Lead
Snap Inc. 4.7
Palo Alto, CA
A leading technology company in California is seeking a Security Engineer for the Spam & Abuse team. This role involves architecting solutions against spam threats, collaborating across departments, and leading incident response efforts. The ideal candidate has a strong background in software engineering, experience with big data technologies, and excels in a collaborative setting. Compensation is in the range of $209,000-$313,000 annually, along with additional employee benefits.
A leading AI research company in San Francisco is hiring a Security Engineer specializing in application security. Responsibilities include conducting security assessments, developing security tools, and collaborating with development teams to integrate security best practices throughout the software development lifecycle. The ideal candidate has extensive experience in cybersecurity and strong programming skills. This role offers a hybrid work model with relocation assistance.
$125k-175k yearly est. 5d ago
Security Engineer, Application Security
Openai 4.2
San Francisco, CA
About the Team
Security is at the foundation of OpenAI's mission to ensure that artificial general intelligence benefits all of humanity. The Security team protects OpenAI's technology, people, and products. We are technical in what we build but are operational in how we do our work, and are committed to supporting all products and research at OpenAI. Our Security team tenets include: prioritizing for impact, enabling researchers, preparing for future transformative technologies, and engaging a robust security culture.
About the Role
As a Security Engineer, Application Security, you will be responsible for identifying and mitigating security vulnerabilities within software applications by building security tools and conducting code reviews, penetration testing, and security assessments.
We're looking for people who will work closely with development teams to ensure secure coding practices are integrated throughout the software development lifecycle, preventing security risks before they emerge. You will also provide security guidance to developers and other stakeholders, fostering a culture of security awareness within the organization.
This role is preferably based in San Francisco, Seattle, or New York City, but remote work may be considered. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Perform Security Assessments: Conduct regular security assessments, code reviews, and penetration testing to identify vulnerabilities in applications and software.
Develop and Implement Security Tools: Design, develop, and implement security tools, frameworks, and methodologies to protect applications against security threats.
Collaborate with Development Teams: Work closely with development teams to ensure security best practices are integrated throughout the software development lifecycle (SDLC), including secure coding guidelines.
Threat Modeling and Risk Assessment: Conduct threat modeling and risk assessments to proactively identify potential risks and develop mitigation strategies.
Vulnerability Management: Track, analyze, and manage vulnerabilities in applications, providing guidance and support for remediation efforts.
Incident Response Support: Assist in investigating, analyzing, and responding to security incidents related to applications, ensuring timely resolution and documentation of incidents.
Stay Current on Security Trends: Continuously stay updated on the latest security threats, vulnerabilities, and technologies to enhance security measures in applications.
You might thrive in this role if you have:
Extensive experience in information security, cybersecurity, or a related field, with a significant portion of that experience in leadership or management roles.
Deep understanding of security technologies, tools, and best practices, including experience with secure coding practices, threat modeling, risk assessments, and incident response.
Experience in application security, software development, or related areas with a strong understanding of secure coding practices and application security frameworks.
Proficiency in programming languages (such as Python, Java, C++, etc.), knowledge of security tools (e.g., Burp Suite, OWASP ZAP), and familiarity with security protocols and encryption methods.
Strong written and verbal communication skills, with the ability to explain complex security issues to both technical and non-technical audiences.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's affirmative action and equal employment opportunity policy statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Compensation Range: $325K - $405K
$125k-175k yearly est. 5d ago
Remote Security Observability Engineer - Data Pipelines
Openai 4.2
San Francisco, CA
OpenAI is seeking a Software Engineer, Security Observability to enhance its security infrastructure in a collaborative environment. The role focuses on designing scalable systems and improving data visibility, and requires strong software engineering skills, particularly in Python and Golang. Expanding into cloud platforms like Azure is also a key part of this high-impact position.
$125k-175k yearly est. 5d ago
Security Engineer
Factory 4.7
San Francisco, CA
Factory is seeking a talented Security Engineer to join our team. In this role, you will play a critical part in developing and maintaining the security foundation of our platform. You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. Your expertise in TypeScript, Python, Kubernetes, CI/CD, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.
What you will do and achieve:
Design, implement, and manage security measures for the protection of our cloud infrastructure, applications, and data, focusing on both preventative controls and rapid response capabilities.
Collaborate closely with our engineering teams to integrate security practices into the software development lifecycle, including secure coding standards, automated security testing, and secure architecture design.
Stay up-to-date on the latest security threats, vulnerabilities, and mitigation strategies.
Conduct security code reviews to identify and remediate security vulnerabilities.
Develop and implement automated security testing procedures to identify vulnerabilities and risks, recommending and implementing appropriate mitigation strategies.
Respond to security incidents and participate in incident response procedures.
Document security processes, procedures, and best practices.
Lead security awareness and training programs, empowering all team members to recognize and prevent potential security threats.
Qualifications
5+ years of experience as a Security Engineer with a focus on product security, with a strong background in securing cloud-based environments (AWS, Azure, GCP) and an understanding of Infrastructure as Code (IaC) security practices.
Strong coding skills with proficiency in TypeScript and Python.
Expertise in various security domains such as application security, network security, security operations, and incident response.
Experience with container security (Docker Security, Kubernetes Security).
Familiarity with a wide range of AWS services, including but not limited to VPC, EC2, Lambda, Amazon RDS, and S3.
In-depth knowledge of CI/CD pipeline tools and practices, ideally with experience in GitHub Actions or Jenkins.
Knowledgeable in security compliance frameworks and regulations (e.g., ISO 27001, SOC 2, GDPR) and experience with security assessments and third-party audits.
Proficiency with security tools and technologies, such as firewalls, IDS/IPS, vulnerability scanners, WAF, SIEM, and encryption solutions.
Demonstrated ability to influence security strategies and drive improvements within a team.
The team goes into the office 5 days a week in San Francisco (walking distance to Caltrain).
$116k-164k yearly est. 3d ago
Cloud Security Engineer
Meta 4.8
Menlo Park, CA
We are seeking a Security Engineer who specializes in designing and implementing new systems and tools to enhance the security of Meta's products and infrastructure. This role is ideal for individuals with deep security domain expertise who are passionate about building solutions and using AI to address evolving security requirements and use cases.
Cloud Security Engineer Responsibilities
Design, prototype, and implement AI-driven security systems and tools to protect Meta's products and internal infrastructure
Develop and maintain security-focused code, libraries, and frameworks for use by Security Engineers, Analysts, and engineering teams
Collaborate with cross-functional partners to deliver scalable security solutions aligned with company objectives
Rapidly experiment with and iterate on specialized security technologies, leveraging AI to address emerging threats and requirements
Apply deep security expertise to solve complex challenges, mitigate risks, and mentor other engineers in advanced security domains
Minimum Qualifications
B.S. or M.S. in Computer Science or related field, or equivalent experience
5+ years of experience in designing and implementing security systems, tools, or frameworks
Extensive knowledge of attacker tactics, techniques, and procedures
Proficiency in coding with experience in languages such as Python, C/C++, Go, or equivalent
Experience collaborating with technical and non-technical stakeholders
Ability to rapidly prototype and iterate on security solutions
Preferred Qualifications
Experience securing cloud deployments and Infrastructure as Code (IaC) deployments for cloud (e.g., Terraform)
Experience addressing security problems by building scalable engineering solutions
Experience influencing software engineers building security products
Experience creating metrics to measure service and program effectiveness and consistency
Experience making contributions to the security or privacy community (public research, OSS, blogging, presentations, etc.)
Public Compensation
$147,000/year to $208,000/year + bonus + equity + benefits
Industry
Internet
Equal Opportunity
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
A technology firm in San Francisco is seeking a Corporate Security Engineer to lead initiatives ensuring the safety of its corporate environment. The ideal candidate will have over 4 years of experience in Corporate Security, familiarity with Identity and Access Management, and a strong understanding of security concepts. This role offers hybrid working arrangements and a competitive compensation and benefits package, including mental health support and career coaching.
A leading collaboration platform seeks a Corporate Security Engineer to lead security initiatives and protect the corporate environment. You will collaborate with IT and engineering teams, design security tools, and drive the implementation of Data Loss Prevention strategies. Ideal candidates have 4 years of experience in IT Security, knowledge of security frameworks, and familiarity with various security tools. This role is hybrid, based at the San Francisco office.
$142k-185k yearly est. 5d ago
Principal Enterprise Security Engineer
F5 Networks, Inc. 4.6
Palo Alto, CA
Location: Remote
About the Role
We are seeking a seasoned Principal Enterprise Security Engineer to design, implement, and manage enterprise-wide security solutions. You'll shape our security strategy across endpoint protection, network, SaaS, IAM, and observability, while aligning to NIST and CIS standards. This role reports to the Head of Enterprise Security, with close collaboration with the CISO, CISO staff, and cross-functional teams.
Key Responsibilities
* Experience with Security Service Edge (SSE) and Software-Defined Perimeter / ZTNA solutions such as Netskope, Zscaler, and Palo Alto Networks.
Qualifications & Experience
* Education: Bachelor's degree in cybersecurity, computer science, engineering, or equivalent years of corporate security/SOC experience.
* Experience: 7+ years in enterprise or cloud security with a hands-on background in IAM, endpoint/network/SaaS security, incident management, vulnerability management, and log analytics. Experience with DoD IL-4/IL-5 programs, including their added encryption, personnel restrictions, and control overlays.
* Technical Skills: Proficiency with tools like CrowdStrike, Palo Alto, F5, Splunk/ELK, and IAM platforms (Okta, Entra ID, etc.).
* Soft Skills: Strong verbal and written communication; ability to convey complex topics to executives; experience working with stakeholders across multiple time zones.
* Nice to have: Experience with F5 BIG-IP LTM.
* Personality Traits: Strategic thinker, collaborative, proactive, with the ability to thrive in fast-moving environments.
It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination.
F5 offers a variety of reasonable accommodations for candidates. Requesting an accommodation is completely voluntary. F5 will assess the need for accommodations in the application process separately from those that may be needed to perform the job. Request an accommodation by contacting accommodations@f5.com.
Remote: Primarily work from a designated home location, but you can come into an F5 office to work or travel to an offsite location as needed.
$133k-172k yearly est. 2d ago
Consultant Network & Security
Thales Group 4.5
Palo Alto, CA
Remote type: Hybrid or Remote | Location: Contern_EXC | Time type: Full time | Posted 30+ Days Ago | Job requisition id: R0287400
Businesses and governments rely on Thales to bring trust to the billions of digital interactions they have with users. The Digital Identity & Security (DIS) global business provides technologies and services (from secure software to biometrics and encryption) that enable businesses and governments to verify identities and protect data so that it remains secure.
Tasks and Responsibilities
* Act as the dedicated technical advisor to the customer, within the scope of the consultant's skills
* Gather technical needs from the customer and propose the most suitable solutions
* Define architecture and participate in the design of technical solutions
* Carry out advisory and expertise missions
* Support Account Managers during the presales phase of writing offers to customers, including Proof of Concept realization, writing Statements of Work, technical solution presentations, and preparation of technical aspects for calls for tenders
* Implement and support operational teams during rollout phases
* Manage and ensure engagement on deliverables in due time, within the scope of the consultant's skills
* Maintain up-to-date general skills in network security (best practices, protocols, security fundamentals, risk identification, methodology, etc.)
* Keep technical skills current (certifications / self-training / trainings to follow) in line with the personal targets defined
* Provide skills, tailor-made trainings, and knowledge transfer to customers
* Provide skills, tailor-made trainings, and knowledge transfer to teammates (to ensure coverage in case of absence)
* Coach junior consultants to grow their soft and technical skills
* Coach trainees (manage and follow up on the whole internship)
* Apply creativity in problem solving on a constant basis, in day-to-day tasks and in exceptional tasks (such as incidents), within the scope of the consultant's skills
* Detect and evaluate the level of importance of any crisis situation
* Escalate any crisis situation (previously detected and evaluated by the junior consultant) to confirmed and senior peers when necessary
* Reporting and documentation (French and English)
* Act as back-up for the 1st/2nd line support engineer
* Act as 3rd line for the SOD, within the scope of the consultant's skills
Prerequisites
Expertise in one or more of the following areas:
* Firewalls
* Web and database application firewalls
* Protection systems against unknown threats (sandboxing)
* Messaging security
* Strong authentication and SAML identity federation, OpenID Connect, etc.
* Encryption (IPsec VPN, data encryption, etc.)
* Hardware Security Modules (HSM)
More specifically, you are familiar with the technologies of the following manufacturers:
* Firewalls: Check Point, Palo Alto Networks, Fortinet
* Web Proxy: Blue Coat ProxySG, McAfee Web Gateway
* Load Balancing / WAF: F5, NetScaler, DenyAll, Imperva
* Email Security: Cisco, Trend Micro, Trustwave
* Strong authentication: RSA, Gemalto, RCDevs
* HSM: Gemalto, Thales
* Sandboxing: Blue Coat MAA, Trend Micro
* Identity Federation: Microsoft ADFS, Auth0, Okta
Security
* In-depth knowledge of IT security practices, including identity and access management, vulnerability management, network and operating system security, and application security
* Competence in cryptography and data security
* Knowledge of information security standards and regulations
* Experience detecting and responding to information security incidents, including compliance management and conducting information security audits
Back-up
Other senior consultants / Practice Leader / Team Leader
Thales, a disability-inclusive company, recognizes all talents. Diversity is our greatest asset. Apply and join us!
$104k-138k yearly est. 5d ago
Senior Cloud Visibility & Security PM
Gigamon 4.8
Santa Clara, CA
A leading technology firm is seeking a Senior Product Manager to drive strategy and execution for cloud visibility and security solutions. This role involves collaborating cross-functionally with engineering, sales, and marketing teams. The ideal candidate should have a master's degree with an MBA preferred, along with over 10 years of experience in product management within cloud technologies. The role offers a competitive salary ranging from $192,000 to $240,000 and various employee benefits including health insurance and 401(k) matching.
$192k-240k yearly 3d ago
Offensive Security Engineer, Agent Security
Openai 4.2
San Francisco, CA
About the Team
Security is at the foundation of OpenAI's mission to ensure that artificial general intelligence benefits all of humanity. The Security team protects OpenAI's technology, people, and products. We are technical in what we build but are operational in how we do our work, and are committed to supporting all products and research at OpenAI. Our Security team tenets include: prioritizing for impact, enabling researchers, preparing for future transformative technologies, and engaging a robust security culture.
About the Role
We're seeking an exceptional Principal-level Offensive Security Engineer to challenge and strengthen OpenAI's security posture. This role isn't your typical red team job - it's an opportunity to engage broadly and deeply, craft innovative attack simulations, collaborate closely with defensive teams, and influence strategic security improvements across the organization.
You will have the chance to not only find vulnerabilities but actively drive their resolution, automate offensive techniques with cutting‑edge technologies, and use your unique attacker perspective to shape our security strategy.
This role will be primarily focused on continuously testing our agent powered products like Codex and Operator. These systems are uniquely valuable targets because they're rapidly evolving, have access to perform sensitive actions on behalf of users, and have large, diverse attack surfaces. You will play a crucial role in securing our agents by hunting for realistic vulnerabilities that emerge from the interactions between the applications, infrastructure, and models that power them.
In this role you will:
Continuously hunt for vulnerabilities in the interactions between the applications, infrastructure, and models that power our agentic products.
Conduct open‑scope red and purple team operations, simulating realistic attack scenarios.
Collaborate proactively with defensive security teams to enhance detection, response, and mitigation capabilities.
Perform comprehensive penetration testing on our diverse suite of products.
Leverage advanced automation and OpenAI technologies to optimize your offensive security work.
Present insightful, actionable findings clearly and compellingly to inspire impactful change.
Influence security strategy by providing attacker‑driven insights into risk and threat modeling.
You might thrive in this role if you have:
7+ years of hands‑on red team experience or exceptional accomplishments demonstrating equivalent expertise.
Deep expertise conducting offensive security operations within modern technology companies.
Experience designing, developing, or assessing the security of AI-powered systems.
Experience finding, exploiting, and mitigating common vulnerabilities in AI systems, such as prompt injection, sensitive data leakage, confused deputies, and dynamically generated UI components.
Exceptional skill in code review, identifying novel and subtle vulnerabilities.
Proven experience performing offensive security assessments in at least one hyperscaler cloud environment (Azure preferred).
Demonstrated mastery assessing complex technology stacks, including:
Highly customized Kubernetes clusters
Container environments
CI/CD pipelines
GitHub security
macOS and Linux operating systems
Data science tooling and environments
Python‑based web services
React‑based frontend applications
Strong intuitive understanding of trust boundaries and risk assessment in dynamic contexts.
Excellent coding skills, capable of writing robust tools and automation for offensive operations.
Ability to communicate complex technical concepts effectively through compelling storytelling.
Proven track record of not just finding vulnerabilities but actively contributing to solutions in complex codebases.
Bonus points:
Background or expertise in AI or data science.
Prior experience working in tech startups or fast‑paced technology environments.
Experience in related disciplines such as Software Engineering (SWE), Detection Engineering, Site Reliability Engineering (SRE), Security Engineering, or IT Infrastructure.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's affirmative action and equal employment opportunity policy statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non‑public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non‑compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
$125k-175k yearly est. 6d ago
Offensive Security Engineer, Hardware
Openai 4.2
San Francisco, CA
Security is at the foundation of OpenAI's mission to ensure that artificial general intelligence benefits all of humanity. The Security team protects OpenAI's technology, people, and products. We are technical in what we build but are operational in how we do our work, and are committed to supporting all products and research at OpenAI. Our Security team tenets include: prioritizing for impact, enabling researchers, preparing for future transformative technologies, and engaging a robust security culture.
About the Role
We're seeking an exceptional Principal-level Offensive Security Engineer to challenge and strengthen OpenAI's security posture. This role isn't your typical red team job - it's an opportunity to engage broadly and deeply, craft innovative attack simulations, collaborate closely with defensive teams, and influence strategic security improvements across the organization.
You have the chance to not only find vulnerabilities but actively drive their resolution, automate offensive techniques with cutting-edge technologies, and use your unique attacker perspective to shape our security strategy. This role will be primarily focused on continuously testing our hardware products and related services.
In this role you will:
Collaborate proactively with engineering teams to enhance security and mitigate risks in hardware, firmware, and software.
Perform comprehensive penetration testing on our diverse suite of products.
Leverage advanced automation and OpenAI technologies to optimize your offensive security work.
Present insightful, actionable findings clearly and compellingly to inspire impactful change.
Influence security strategy by providing attacker-driven insights into risk and threat modeling.
You might thrive in this role if you have:
7+ years of hands‑on experience or exceptional accomplishments demonstrating equivalent expertise.
Exceptional skill in code review, identifying novel and subtle vulnerabilities.
Demonstrated mastery assessing complex technology stacks, including:
Proven ability to reverse engineer bootrom images, firmware, or silicon‑level components.
Deep familiarity with low‑level kernel operations, secure boot processes, and hardware‑software interactions.
Hands‑on experience building and validating secure boot chains and threat models.
Proficiency with hardware debugging tools (UART, JTAG, SWD, oscilloscopes, logic analyzers).
Solid programming skills in C/C++, Python, or assembly for embedded systems.
Industry experience securing consumer hardware (e.g., mobile devices, IoT, chipsets).
Excellent written and verbal communication skills for technical and non‑technical audiences.
Strong intuitive understanding of trust boundaries and risk assessment in dynamic contexts.
Excellent coding skills, capable of writing robust tools and automation for offensive operations.
Ability to communicate complex technical concepts effectively through compelling storytelling.
Proven track record of not just finding vulnerabilities but actively contributing to solutions in complex codebases.
Prior experience working in tech startups or fast‑paced technology environments.
Experience in related disciplines such as Software Engineering (SWE), Detection Engineering, Site Reliability Engineering (SRE), Security Engineering, or IT Infrastructure.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non‑public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non‑compliant, please submit a report through this form . No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
$125k-175k yearly est. 2d ago
Principal Offensive Security Engineer - AI Agent Security
Openai 4.2
San Francisco, CA
A leading AI research firm in San Francisco seeks a Principal-level Offensive Security Engineer to enhance its security posture. This role involves hunting for vulnerabilities, conducting red team operations, and collaborating with defensive teams to secure AI-powered products. Ideal candidates have over 7 years of experience, excellent coding skills, and a deep understanding of security operations in modern tech environments.
$125k-175k yearly est. 6d ago
Principal Offensive Security Engineer, Hardware
Openai 4.2
San Francisco, CA
A leading AI research company in San Francisco is hiring a Principal Offensive Security Engineer. In this role, you will craft attack simulations and collaborate with teams to strengthen security posture across products. The ideal candidate has over 7 years of experience, exceptional programming skills, and a strong background in identifying vulnerabilities. This position offers an opportunity to influence security strategy and contribute to innovative projects in a dynamic environment.
About the Team
Security is at the foundation of OpenAI's mission to ensure that artificial general intelligence benefits all of humanity.
The Security team protects OpenAI's technology, people, and products. We are technical in what we build but are operational in how we do our work, and are committed to supporting all products and research at OpenAI. Our Security team tenets include: prioritizing for impact, enabling researchers, preparing for future transformative technologies, and engaging a robust security culture.
About the Role
As a Security Engineer you will join our OpenAI engineers and researchers in building, operating, and securing transformational AI technologies. This role will focus on all aspects of Detection & Response, with a strong emphasis on detecting insider threats and influencing controls to safeguard OpenAI's most sensitive assets. In this role, you will:
Innovate on Detection and Response infrastructure to engineer and automate end-to-end detection and investigation workflows.
Develop, measure, and tune detection rules to ensure effective and sustainable operations.
Drive projects across OpenAI's technology stack with a focus on insider threats, ranging from access abuse and intellectual property theft to novel risks emerging within AI infrastructure.
Partner closely with cross-functional stakeholders, including HR, Legal, and peer investigative teams, providing technical expertise and evidence to support investigations.
Collaborate on cutting‑edge AI research, and use AI to improve OpenAI's Security posture.
You might thrive in this role if you:
5+ years of experience working in a detection/response or insider-risk role. We are seeking mid-level and senior candidates.
Broad familiarity with operating systems and platforms such as macOS, Windows, Linux, and Kubernetes, along with experience in cloud infrastructure.
Knowledge of modern adversary tactics, attack paths, and data exfiltration techniques, and experience running and leading incidents.
Proficiency with a scripting language (e.g. Python, Bash, PowerShell, or similar).
Independently manage and run projects, balance preventative controls with user friction, and prioritize efforts for risk reduction.
You're motivated by securing transformative technology and can adapt familiar security frameworks to new risks in AI infrastructure.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's affirmative action and equal employment opportunity policy statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non‑compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Compensation Range: $260K - $405K
$125k-175k yearly est. 5d ago
Enterprise Security Engineer
Openai 4.2
San Francisco, CA
About the Team
Within the OpenAI Security organization, our IT team works to ensure our team of researchers, engineers, and staff have the tools they need to work comfortably, securely, and with minimal interruptions. As an Enterprise Security Engineer, you will work in a highly technical and employee-focused environment.
Our IT team is a small and nimble team, where you'll have the opportunity to dive into a wide breadth of areas and build from the ground up. We're well supported and well resourced, and have a mandate to deliver a world-class enterprise security program to our teams.
About the Role
As an Enterprise Security Engineer, you will be responsible for implementing and managing the security of OpenAI's internal information systems' infrastructure and processes. You will work closely with our IT and Security teams to develop security capabilities, enforce security policies, and monitor internal systems for security threats.
In this role, you will:
Develop and implement security measures to protect our company's information assets against unauthorized access, disclosure, or misuse.
Monitor internal and external systems for security threats and respond to alerts.
Contribute to and enforce our company's IT and Security policies and procedures.
Work closely with our IT department to harden our infrastructure using best practices in Azure AD, GSuite, GitHub, and other SaaS tooling.
Advise our employees on best practices for maintaining the security of their endpoints, and office AV and network infrastructure.
Devise novel sharing controls and associated monitoring to protect company data, including intelligent groups management, Data Loss Prevention (DLP) and other security controls as appropriate.
Employ forward-thinking models like “secure by default” and “zero trust” to create sustainably secure environments for knowledge workers and developers.
Identify and remediate vulnerabilities in our internal systems, adhering to best practices for data security.
Use our own AI-driven models to develop systems for improved security detection and response, data classification, and other security-related tasks.
Educate employees on the importance of data security, and advise them on best practices for maintaining a secure environment.
Contribute to OpenAI's endpoint and cloud security roadmaps by staying up to date with the latest security threats, and making recommendations for improving our security posture.
You might thrive in this role if you have:
Experience protecting and managing macOS fleets.
Experience deploying and managing endpoint security solutions (e.g. management frameworks, EDR tools).
Experience with public cloud service providers (e.g. Amazon AWS, Microsoft Azure).
Experience with identity and access management frameworks and protocols, including SAML, OAUTH, and SCIM.
Experience with e-mail security protocols (e.g. SPF, DKIM, DMARC) and controls.
Intermediate or advanced proficiency with a scripting language (e.g. Python, Bash, or similar).
Knowledge of modern adversary tactics, techniques, and procedures.
Ability to empathize and collaborate with colleagues, independently manage and run projects, and prioritize efforts for risk reduction.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.