
Requirements engineer jobs in Pico Rivera, CA

- 1,251 jobs
  • Backend Engineer - Python / API (Onsite)

    CGS Business Solutions (4.7 company rating)

    Requirements engineer job in Beverly Hills, CA

    CGS Business Solutions is committed to helping you, as an esteemed IT professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking technical professionals who are searching for challenging and rewarding jobs for the following opportunity: CGS is hiring on behalf of one of our Risk & Protection Services clients in the West LA area for a full-time role. We're looking for a strategic Backend Engineer to join a high-growth team building next-generation technology. In this role, you'll play a critical part in architecting and delivering scalable backend services that power an AI-native agent workspace. You'll translate complex business needs into secure, high-performance, and maintainable systems. This opportunity is ideal for a hands-on engineer who excels at designing cloud-native architectures and thrives in a fast-paced, highly collaborative startup environment.

    What You'll Do
    • Partner closely with engineering, product, and operations to define high-impact problems and craft the right technical solutions.
    • Design and deliver scalable backend systems using modern architectures and best practices.
    • Build Python APIs and complex backend logic on top of AWS serverless infrastructure.
    • Contribute to the architecture and evolution of core system components.
    • Elevate engineering standards, tooling, and backend development processes across the team.

    Who You Are
    • 6+ years of software engineering experience, with deep expertise in building end-to-end systems and a strong backend focus.
    • Expert-level proficiency in Python and API development with Flask.
    • Strong understanding of AWS and cloud-native architecture.
    • Experience with distributed systems, APIs, and data modeling.
    • Proven ability to architect and optimize systems for performance and reliability.
    • Excellent technical judgment and ability to drive clarity and execution in ambiguous environments.
    • Experience in insurance or enterprise SaaS is a strong plus.

    About CGS Business Solutions: CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus on IT Applications, Network Infrastructure, Information Security, and Engineering. CGS is an INC 5000 company and is honored to be selected as one of the Best IT Recruitment Firms in California. After five consecutive Fastest Growing Company titles, CGS continues to break into new markets across the USA. Companies count on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
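    For illustration only (this sketch is not part of the posting): a minimal Flask endpoint of the kind the role describes, building a Python API over a simple store. The module, route, and field names are hypothetical.

    ```python
    # Hypothetical sketch of a small Flask API of the kind this role describes.
    # All names (the /v1/claims route, payload fields, in-memory store) are illustrative;
    # a real service would back this with a persistence layer such as DynamoDB or RDS.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    CLAIMS: dict[str, dict] = {}  # stand-in for a real datastore

    @app.post("/v1/claims")
    def create_claim():
        payload = request.get_json(force=True)
        if "policy_id" not in payload:
            return jsonify(error="policy_id is required"), 400
        claim_id = str(len(CLAIMS) + 1)
        CLAIMS[claim_id] = {"policy_id": payload["policy_id"], "status": "open"}
        return jsonify(claim_id=claim_id), 201

    @app.get("/v1/claims/<claim_id>")
    def get_claim(claim_id: str):
        claim = CLAIMS.get(claim_id)
        return (jsonify(claim), 200) if claim else (jsonify(error="not found"), 404)

    if __name__ == "__main__":
        app.run(debug=True)
    ```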
    $90k-118k yearly est. 22h ago
  • Analytics Engineer

    Subject (3.1 company rating)

    Requirements engineer job in Beverly Hills, CA

    Turn Learning Data Into Learning Breakthroughs. At Subject, we're building AI-powered, personalized education at scale. Backed by Owl Ventures, Kleiner Perkins, Latitude Ventures, and more, we serve students across the country with cinematic, video-based learning. But we have a challenge: data is our superpower, and we need more talented, committed, and passionate people helping us build it out further.

    We're looking for an Analytics Engineer to join our growing data and product organization. You'll sit at the intersection of data, product, and engineering, transforming raw data into accessible, reliable, and actionable insights that guide decision-making across the company. This role will be foundational in building Subject's analytics infrastructure, supporting initiatives like: product engagement and learning outcomes measurement; operational analytics for school implementations; generative AI products (e.g. Subject Spark Homework Helper and SparkTA); and data integration across systems like Postgres, dbt, Pendo, Looker, and more. You'll help define what "good data" means at Subject and ensure that stakeholders, from executives to course designers, can make confident, data-informed decisions.

    What You'll Build:
    Scalable Data Transformation Infrastructure
    • Design and optimize dbt models that handle 100M+ daily events
    • Build modular, tested transformation pipelines that reduce compute costs by 70%+
    • Create data quality frameworks and governance standards that make our warehouse reliable
    • Architect incremental models that process only what's changed
    High-Performance Analytics Dashboards
    • Build fast-loading Looker dashboards
    • Design Hex notebooks turning hours-long reports into one-click updates
    • Create self-service analytics empowering teams to answer their own questions
    • Develop real-time monitoring alerting teams to critical student engagement changes
    Intelligent Data Models for AI-Powered Learning
    • Design dimensional models enabling rapid exploration of learning patterns
    • Build feature stores feeding AI systems with clean, timely learning signals
    • Create cohort analysis frameworks revealing which interventions work for which students
    • Architect data products bridging raw events and business intelligence
    Data Infrastructure That Scales
    • Write SQL optimized for millisecond response times
    • Build Python automation eliminating manual work and catching errors early
    • Design orchestration workflows that run reliably and recover gracefully
    • Optimize cloud costs while improving performance

    The Technical Stack:
    • dbt: transformation layer (50% of your time)
    • SQL (PostgreSQL): complex analytical queries, performance tuning
    • Python: pandas, numpy, matplotlib for analysis and automation
    • Hex: interactive notebooks for reporting
    • Looker: business intelligence and dashboards
    • Cloud data warehouse: BigQuery
    You'll work with billions of learning events, student performance data, video engagement metrics, assessment results, and feedback loops.

    What We're Looking For:
    Required Experience
    • 3-5+ years in analytics engineering or data analytics building production systems
    • Advanced SQL mastery: elegant, performant queries; understanding query plans and optimization
    • dbt expertise: built and maintained dbt projects with 100+ models
    • Python proficiency: pandas, numpy, automation, and data pipelines
    • BI tool experience: production dashboards in Looker, Tableau, or similar
    • Data modeling skills: dimensional models, normalization tradeoffs, schemas that scale
    The Mindset We Need
    • Performance obsession: can't stand slow dashboards or inefficient queries
    • User empathy: build for people who need insights, not just technical elegance
    • Systems thinking: optimize the entire data pipeline from source to dashboard
    • Ownership mentality: maintain what you build, not just ship and move on
    • Educational curiosity: genuine interest in learning science and student success
    • Collaborative spirit: explain concepts clearly and elevate team data literacy
    Bonus Points
    • Education data or student analytics experience
    • Data science or ML workflow exposure
    • Cloud platform experience (GCP, AWS, Azure)
    • Reverse ETL or operational analytics
    • Analytics engineering open source contributions

    Why This Role Matters:
    • Your dashboards inform decisions affecting 5 million students
    • Your optimizations save hundreds of engineering hours monthly
    • Your data models power AI personalization for each student
    • Your work helps teachers understand and improve outcomes

    Compensation & Benefits
    • Base Salary: $140K-$180K based on experience
    • Equity: meaningful ownership that grows with your impact
    • Performance Bonus: tied to infrastructure improvements and outcomes
    • Health & Wellness: comprehensive coverage, gym membership, daily meals
    • Location: Los Angeles, CA (in-office preferred)

    Ready to Build Education's Data Foundation? This isn't just another analytics role. Define how a category leader uses data, build infrastructure that becomes industry-standard, and improve educational outcomes for millions. Apply now and transform education through data.
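    For illustration only (not part of the posting): a minimal Python sketch of the incremental-processing idea behind the "process only what's changed" bullet above, using a stored watermark so each run transforms only new events. The file paths and column names are hypothetical.

    ```python
    # Hypothetical sketch: incremental processing with a watermark, so each run
    # transforms only events newer than the last successful run (the same idea
    # behind dbt incremental models). Paths and column names are illustrative.
    import json
    from pathlib import Path

    import pandas as pd

    STATE_FILE = Path("state/watermark.json")

    def load_watermark() -> pd.Timestamp:
        if STATE_FILE.exists():
            return pd.Timestamp(json.loads(STATE_FILE.read_text())["max_event_ts"])
        return pd.Timestamp.min

    def run_increment(events_path: str) -> pd.DataFrame:
        watermark = load_watermark()
        events = pd.read_parquet(events_path)
        new_events = events[events["event_ts"] > watermark]  # only what's changed
        if new_events.empty:
            return new_events

        # Example transformation: daily engagement counts per student.
        daily = (
            new_events.assign(day=new_events["event_ts"].dt.date)
            .groupby(["student_id", "day"], as_index=False)
            .size()
            .rename(columns={"size": "events"})
        )

        # Advance the watermark only after the transform succeeds.
        STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
        STATE_FILE.write_text(json.dumps({"max_event_ts": str(new_events["event_ts"].max())}))
        return daily
    ```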
    $140k-180k yearly 1d ago
  • Azure Cloud Engineer (Jr/Mid) - (Locals only)

    Maxonic Inc.

    Requirements engineer job in Los Angeles, CA

    Job Title: Cloud Team Charter. Job Type: Contract to Hire. Work Schedule: Hybrid (3 days onsite, 2 days remote). Rate: $60/hr, based on experience.

    Responsibilities (Cloud Team Charter/Scope, 2 resources: 1 Sr and 1 Mid/Jr):
    • Operate and maintain Cloud Foundation Services, such as: Azure Policies; backup engineering and enforcement; logging standards and enforcement; antivirus and malware enforcement; Azure service/resource lifecycle management, including retirement of resources; tagging enforcement.
    • Infrastructure security: ownership of Defender reporting as it relates to infrastructure; collaboration with Cyber Security and App teams to generate necessary reports for infrastructure security review; actively monitoring and remediating infrastructure vulnerabilities in coordination with the App team; driving continuous improvement in cloud security by tracking and maintaining infrastructure vulnerabilities through Azure Security Center.
    • Cloud support: PaaS DB support; support for cloud networking (L2), working with the Network team as needed; developer support in the cloud; support for the CMDB team to track cloud assets; L4 cloud support for the enterprise.

    About Maxonic: Since 2002 Maxonic has been at the forefront of connecting candidate strengths to client challenges. Our award-winning, dedicated team of recruiting professionals is specialized by technology; they are great listeners and will seek to find a position that meets the long-term career needs of our candidates. We take pride in the over 10,000 candidates that we have placed, and the repeat business that we earn from our satisfied clients. Interested in applying? Please apply with your most current resume. Feel free to contact Jhankar Chanda (******************* / ************) for more details.
    $60 hourly 2d ago
  • Kafka Platform Engineer

    Thought Byte

    Requirements engineer job in Irvine, CA

    We are seeking a Senior Kafka Engineer to manage, enhance, and scale an enterprise-grade Apache Kafka implementation deployed on AWS and the Confluent Platform. This person will be responsible for keeping the system reliable, improving it over time, and expanding it to support new applications. This role involves performing detailed architectural reviews, monitoring, performance tuning, optimizing existing Kafka pipelines, and partnering with application teams to deliver reliable, secure, and performant streaming solutions in a FinTech environment.

    Qualifications:
    • 10+ years in platform engineering with 3+ years of hands-on experience with Apache Kafka.
    • Proficiency in Kafka client development using Java or Python.
    • Expertise with Confluent Platform (Brokers, Schema Registry, Control Center, ksqlDB).
    • Experience deploying and managing Kafka on AWS (including MSK or self-managed EC2-based setups).
    • Solid understanding of Kubernetes, especially EKS, for microservices integration.
    • Hands-on experience with Kafka Connect, Kafka Streams, and schema management.
    • Infrastructure automation experience with Terraform and Helm.
    • Familiarity with monitoring and alerting stacks: Prometheus, Grafana, ELK, or similar.

    Preferred Qualifications:
    • Prior experience in the FinTech domain or other regulated industries.
    • Understanding of security best practices, including TLS, authentication (SASL, OAuth), RBAC, and encryption at rest.
    • Exposure to Apache Flink, Spark Streaming, or other stream processing engines.
    • Experience establishing Kafka governance frameworks and multi-tenant topic strategies.

    Must-Have & Desired Skills: Confluent Kafka, Terraform, Kubernetes, EKS. Kafka certification is nice to have.

    Responsibilities:
    • Manage and enhance the existing Apache Kafka and Confluent Platform deployment on AWS.
    • Review existing implementations and recommend improvements.
    • Collaborate with engineering and product teams to integrate new use cases and define scalable streaming patterns.
    • Implement and maintain Kafka producers/consumers, connectors, and Kafka Streams applications.
    • Enforce governance around topic design, schema evolution, partitioning, and data retention.
    • Monitor, troubleshoot, and tune Kafka clusters using Confluent Control Center, Prometheus, and Grafana.
    • Use Kubernetes and Terraform to automate Kafka infrastructure deployment and scaling.
    • Ensure high availability, security, and disaster recovery.
    • Mentor other engineers and provide leadership in Kafka-related initiatives.
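    For illustration only (not part of the posting): a minimal sketch of Kafka client development in Python with the confluent-kafka library, one of the client languages the posting names. The broker address, topic, and consumer group are hypothetical.

    ```python
    # Hypothetical sketch of Kafka client development with the confluent-kafka
    # Python library. The broker address and topic name are illustrative.
    from confluent_kafka import Consumer, Producer

    CONF = {"bootstrap.servers": "localhost:9092"}

    def delivery_report(err, msg):
        # Called once per message to confirm delivery or surface an error.
        if err is not None:
            print(f"delivery failed: {err}")
        else:
            print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

    def produce_one():
        producer = Producer(CONF)
        producer.produce("payments.events", key=b"order-42", value=b'{"amount": 10}',
                         callback=delivery_report)
        producer.flush()  # block until outstanding messages are delivered

    def consume_loop():
        consumer = Consumer({**CONF, "group.id": "payments-audit",
                             "auto.offset.reset": "earliest"})
        consumer.subscribe(["payments.events"])
        try:
            while True:
                msg = consumer.poll(timeout=1.0)
                if msg is None:
                    continue
                if msg.error():
                    print(f"consumer error: {msg.error()}")
                    continue
                print(f"{msg.key()} -> {msg.value()}")
        finally:
            consumer.close()
    ```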
    $86k-122k yearly est. 3d ago
  • Senior Data Engineer/Lead - SoCal only (No C2C)

    JSG (Johnson Service Group, Inc.)

    Requirements engineer job in Calabasas, CA

    JSG is seeking a Senior Data Solutions Architect for a client in Woodland Hills, CA. This position is remote, and our client is looking for local candidates based in Southern California. The Senior Data Solutions Engineer will design, scale, and optimize the company's enterprise data platform. This role will build and maintain cloud-native data pipelines, lakehouse/warehouse architectures, and multi-system integrations that support Finance, CRM, Operations, Marketing, and guest experience analytics. The engineer will focus on building secure, scalable, and cost-efficient systems while applying modern DevOps, ETL, and cloud engineering practices; this requires a strong technologist with hands-on expertise across data pipelines, orchestration, governance, and cloud infrastructure.

    Key Responsibilities:
    • Design, build, and maintain ELT/ETL pipelines across Snowflake, Databricks, Microsoft Fabric Gen 2, Azure Synapse Analytics, and legacy SQL/Oracle platforms.
    • Implement medallion/lakehouse architecture, CDC pipelines, and streaming ingestion frameworks.
    • Leverage Python (90%) and SQL (10%) for data processing, orchestration, and automation.
    • Manage AWS and Azure multi-account environments, enforcing MFA, IAM policies, and governance.
    • Build serverless architectures (AWS Lambda, Azure Functions, EventBridge, SQS, Step Functions) for event-driven data flows.
    • Integrate infrastructure with CI/CD pipelines (GitHub Actions, Azure DevOps, MWAA/Airflow, dbt) for automated testing and deployments.
    • Deploy infrastructure as code using Terraform and Azure DevOps for reproducible, version-controlled environments.
    • Implement observability and monitoring frameworks (Datadog, Prometheus, Grafana, Kibana, Azure Monitor) to ensure system reliability, performance, and cost efficiency.
    • Collaborate with stakeholders to deliver secure, scalable, and cost-efficient data solutions.
    A background in finance or consumer-facing industries is preferred.

    Salary: $160K-$175K. JSG offers medical, dental, vision, life insurance options, short-term disability, 401(k), weekly pay, and more. Johnson Service Group (JSG) is an Equal Opportunity Employer. JSG provides equal employment opportunities to all applicants and employees without regard to race, color, religion, sex, age, sexual orientation, gender identity, national origin, disability, marital status, protected veteran status, or any other characteristic protected by law.
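    For illustration only (not part of the posting): a minimal sketch of the event-driven pattern listed above, an AWS Lambda handler reacting to S3 object-created notifications. The bucket and prefix names are hypothetical.

    ```python
    # Hypothetical sketch of an event-driven data flow: an AWS Lambda handler
    # triggered by S3 object-created events that copies raw files into a
    # "bronze" landing prefix. Bucket and prefix names are illustrative.
    import urllib.parse

    import boto3

    s3 = boto3.client("s3")
    BRONZE_BUCKET = "example-lakehouse-bronze"  # hypothetical destination bucket

    def handler(event, context):
        # An S3 notification event may batch several records.
        for record in event.get("Records", []):
            src_bucket = record["s3"]["bucket"]["name"]
            # Object keys arrive URL-encoded in S3 notifications.
            src_key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            dest_key = f"raw/{src_key}"
            s3.copy_object(
                Bucket=BRONZE_BUCKET,
                Key=dest_key,
                CopySource={"Bucket": src_bucket, "Key": src_key},
            )
            print(f"copied s3://{src_bucket}/{src_key} -> s3://{BRONZE_BUCKET}/{dest_key}")
        return {"status": "ok"}
    ```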
    $160k-175k yearly 22h ago
  • Senior Data Engineer

    Akube

    Requirements engineer job in Glendale, CA

    City: Glendale, CA. Onsite/Hybrid/Remote: Hybrid (3 days a week onsite, Friday remote). Duration: 12 months. Rate Range: Up to $85/hr on W2 depending on experience (no C2C, 1099, or sub-contract). Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B.

    Must Have:
    • 5+ years Data Engineering
    • Airflow
    • Spark DataFrame API
    • Databricks
    • SQL
    • API integration
    • AWS
    • Python, Java, or Scala

    Responsibilities:
    • Maintain, update, and expand Core Data platform pipelines.
    • Build tools for data discovery, lineage, governance, and privacy.
    • Partner with engineering and cross-functional teams to deliver scalable solutions.
    • Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
    • Support platform standards, best practices, and documentation.
    • Ensure data quality, reliability, and SLA adherence across datasets.
    • Participate in Agile ceremonies and continuous process improvement.
    • Work with internal customers to understand needs and prioritize enhancements.
    • Maintain detailed documentation that supports governance and quality.

    Qualifications:
    • 5+ years in data engineering with large-scale pipelines.
    • Strong SQL and one major programming language (Python, Java, or Scala).
    • Production experience with Spark and Databricks.
    • Experience ingesting and interacting with API data sources.
    • Hands-on Airflow orchestration experience.
    • Experience developing APIs with GraphQL.
    • Strong AWS knowledge and infrastructure-as-code familiarity.
    • Understanding of OLTP vs OLAP, data modeling, and data warehousing.
    • Strong problem-solving and algorithmic skills.
    • Clear written and verbal communication.
    • Agile/Scrum experience.
    • Bachelor's degree in a STEM field or equivalent industry experience.
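    For illustration only (not part of the posting): a minimal Airflow DAG sketch of the orchestration work described above, with an ingest task feeding a transform task. The DAG id, schedule, and task bodies are hypothetical.

    ```python
    # Hypothetical sketch of an Airflow DAG: a daily pipeline where a transform
    # task runs only after an ingest task succeeds. All names are illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        print("pull new records from an API source")

    def transform():
        print("clean and load records into the warehouse")

    with DAG(
        dag_id="core_data_daily",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        ingest_task >> transform_task  # dependency: transform waits on ingest
    ```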
    $85 hourly 1d ago
  • Mobile DevOps Engineer

    Epitec (4.4 company rating)

    Requirements engineer job in Los Angeles, CA

    Job Title: Mobile DevOps Engineer. Contract Duration: 8 months, possible extension. Work Arrangement: Onsite. Hours: 9AM - 5PM. Pay Range: $60-$65/hour.

    We are seeking a highly skilled Mobile DevOps Engineer who can bridge development and operations for mobile platforms. The ideal candidate will have deep expertise in Android and iOS deployment, CI/CD automation, mobile device management, and Microsoft mobile technologies, along with strong coding fundamentals and DevOps best practices.

    Key Responsibilities:
    • Design, implement, and maintain CI/CD pipelines for Android and iOS apps using tools like Jenkins, Bitrise, Fastlane, or GitHub Actions.
    • Manage mobile deployments via AirWatch, MTX, and other MDM solutions.
    • Automate build, test, and release processes for mobile applications.
    • Implement infrastructure as code and configuration management for mobile environments.
    • Collaborate with developers, QA, and security teams to ensure smooth releases.
    • Monitor and optimize build systems, deployment pipelines, and app performance.
    • Ensure compliance with security and privacy standards during deployments.
    • Troubleshoot and resolve issues across build, deployment, and runtime environments.

    Required Skills & Qualifications:
    • 5+ years in DevOps or Mobile DevOps roles.
    • Strong experience with Android and iOS deployment workflows.
    • Expertise in CI/CD tools (Jenkins, GitHub Actions, Bitrise, Fastlane).
    • Hands-on experience with AirWatch, MTX, or similar MDM platforms.
    • Solid coding and scripting skills (e.g., Python, Bash, Groovy).
    • Proficiency in C# and understanding of unit testing frameworks (e.g., NUnit, xUnit).
    • Experience working with .NET, MAUI, and other Microsoft mobile solutions.
    • Familiarity with version control systems (Git) and branching strategies.
    • Knowledge of containerization (Docker) and orchestration (Kubernetes).
    • Experience with cloud platforms (AWS, Azure, GCP) for mobile backend integration.
    • Understanding of security best practices for mobile deployments.
    • Strong troubleshooting and problem-solving skills.

    Preferred Skills:
    • Experience with monitoring tools (Datadog, New Relic, Firebase Crashlytics).
    • Familiarity with artifact management (Nexus, Artifactory).
    • Knowledge of mobile app signing, provisioning profiles, and certificate management.
    • Exposure to automated testing frameworks for mobile (Appium, XCTest, Espresso).
    • Ability to work in Agile/Scrum environments.
    $60-65 hourly 2d ago
  • Snowflake/AWS Data Engineer

    Ostechnical

    Requirements engineer job in Irvine, CA

    Sr. Data Engineer, full-time direct hire, hybrid with work location in Irvine, CA. The Senior Data Engineer will help design and build a modern data platform that supports enterprise analytics, integrations, and AI/ML initiatives. This role focuses on developing scalable data pipelines, modernizing the enterprise data warehouse, and enabling self-service analytics across the organization.

    Key Responsibilities:
    • Build and maintain scalable data pipelines using Snowflake, dbt, and Fivetran.
    • Design and optimize enterprise data models for performance and scalability.
    • Support data cataloging, lineage, quality, and compliance efforts.
    • Translate business and analytics requirements into reliable data solutions.
    • Use AWS (primarily S3) for storage, integration, and platform reliability.
    • Perform other data engineering tasks as needed.

    Required Qualifications:
    • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
    • 5+ years of data engineering experience.
    • Hands-on expertise with Snowflake, dbt, and Fivetran.
    • Strong background in data warehousing, dimensional modeling, and SQL.
    • Experience with AWS (S3) and data governance tools such as Alation or Atlan.
    • Proficiency in Python for scripting and automation.
    • Experience with streaming technologies (Kafka, Kinesis, Flink) a plus.
    • Knowledge of data security and compliance best practices.
    • Exposure to AI/ML workflows and modern BI tools like Power BI, Tableau, or Looker.
    • Ability to mentor junior engineers.

    Skills: Snowflake, dbt, Fivetran, data modeling and warehousing, AWS, data governance, SQL, Python, strong communication and cross-functional collaboration, and interest in emerging data and AI technologies.
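    For illustration only (not part of the posting): a minimal sketch of Python automation against Snowflake using the snowflake-connector-python library. The account settings, schema, and table names are hypothetical; in this stack the SQL itself would more likely live in a dbt model, with Python handling orchestration around it.

    ```python
    # Hypothetical sketch of Python automation against Snowflake. Account,
    # credentials, and table names are illustrative placeholders.
    import os

    import snowflake.connector

    def refresh_daily_orders():
        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
            warehouse="ANALYTICS_WH",
            database="ANALYTICS",
            schema="MARTS",
        )
        try:
            cur = conn.cursor()
            # Rebuild a small reporting table from a raw source table.
            cur.execute("""
                CREATE OR REPLACE TABLE DAILY_ORDERS AS
                SELECT ORDER_DATE, COUNT(*) AS ORDER_COUNT
                FROM RAW.ORDERS
                GROUP BY ORDER_DATE
            """)
            print(f"rows affected: {cur.rowcount}")
        finally:
            conn.close()
    ```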
    $99k-139k yearly est. 2d ago
  • Snowflake DBT Data Engineer

    Galent

    Requirements engineer job in Irvine, CA

    We're hiring a Snowflake DBT Data Engineer. Join Galent and help us deliver high-impact technology solutions that shape the future of digital transformation.

    Experience: 12+ years. Mandatory Skills: Snowflake, ANSI SQL, DBT.

    Key Responsibilities:
    • Design, develop, and maintain ELT pipelines using Snowflake and DBT.
    • Build and optimize data models in Snowflake to support analytics and reporting.
    • Implement modular, testable SQL transformations using DBT.
    • Integrate DBT workflows into CI/CD pipelines and manage infrastructure as code using Terraform.
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
    • Optimize Snowflake performance through clustering, partitioning, indexing, and materialized views.
    • Automate data ingestion and transformation workflows using Airflow or similar orchestration tools.
    • Ensure data quality, governance, and security across pipelines.
    • Troubleshoot and resolve performance bottlenecks and data issues.
    • Maintain documentation for data architecture, pipelines, and operational procedures.

    Required Skills & Qualifications:
    • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
    • 7 years of experience in data engineering, with at least 2 years focused on Snowflake and DBT.
    • Strong proficiency in SQL and Python.
    • Experience with cloud platforms (AWS, GCP, or Azure).
    • Familiarity with Git, CI/CD, and Infrastructure as Code tools (Terraform, CloudFormation).
    • Knowledge of data modelling (star schema, normalization) and ELT best practices.

    Why Galent? Galent is a digital engineering firm that brings AI-driven innovation to enterprise IT. We're proud of our diverse and inclusive team culture where bold ideas drive transformation. Ready to apply? Send your resume to *******************
    $99k-139k yearly est. 1d ago
  • DevOps Engineer

    Evona

    Requirements engineer job in Irvine, CA

    DevOps Engineer - Satellite Technology. Onsite in Irvine, CA or Washington, DC. Pioneering space technology | secure cloud | mission-critical systems.

    We're working with a leading organization in the satellite technology sector, seeking a DevOps Engineer to join their growing team. You'll play a key role in shaping, automating, and securing the software infrastructure that supports next-generation space missions. This is a hands-on role within a collaborative, high-impact environment, ideal for someone who thrives on optimizing cloud performance and supporting mission-critical operations in aerospace.

    What You'll Be Doing:
    • Maintain and optimize AWS cloud environments, implementing security updates and best practices
    • Manage daily operations of Kubernetes clusters and ensure system reliability
    • Collaborate with cybersecurity teams to ensure full compliance across AWS infrastructure
    • Support software deployment pipelines and infrastructure automation using Terraform and CI/CD tools
    • Work cross-functionally with teams including satellite operations, software analytics, and systems engineering
    • Troubleshoot and resolve environment issues to maintain uptime and efficiency
    • Apply an "Infrastructure as Code" approach to all system development and management

    What You'll Bring:
    • Degree in Computer Science or a related field
    • 2-3 years' experience with Kubernetes and containerized environments
    • 3+ years' Linux systems administration experience
    • Hands-on experience with cloud services (AWS, GCP, or Azure)
    • Strong understanding of Terraform and CI/CD pipeline tools (e.g. FluxCD, Argo)
    • Skilled in Python or Go
    • Familiarity with software version control systems
    • Solid grounding in cybersecurity principles (networking, authentication, encryption, firewalls)
    • Eligibility to obtain a U.S. Security Clearance
    Preferred: Certified Kubernetes Administrator or Developer; AWS Certified Security credentials.

    This role offers the chance to make a tangible impact in the satellite and space exploration sector, joining a team that's building secure, scalable systems for mission success. If you're passionate about space, cloud infrastructure, and cutting-edge DevOps practices, this is your opportunity to be part of something extraordinary.
    $98k-133k yearly est. 3d ago
  • Senior Data Engineer

    Kellymitchell Group (4.5 company rating)

    Requirements engineer job in Glendale, CA

    Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.

    Responsibilities:
    • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
    • Build tools and services to support data discovery, lineage, governance, and privacy
    • Collaborate with other software and data engineers and cross-functional teams
    • Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
    • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
    • Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
    • Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
    • Participate in agile and scrum ceremonies to collaborate and refine team processes
    • Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
    • Maintain detailed documentation of work and changes to support data quality and data governance requirements

    Desired Skills/Experience:
    • 5+ years of data engineering experience developing large data pipelines
    • Proficiency in at least one major programming language such as Python, Java, or Scala
    • Strong SQL skills and the ability to create queries to analyze complex datasets
    • Hands-on production experience with distributed processing systems such as Spark
    • Experience interacting with and ingesting data efficiently from API data sources
    • Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
    • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
    • Experience developing APIs with GraphQL
    • Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
    • Familiarity with data modeling techniques and data warehousing best practices
    • Strong algorithmic problem-solving skills
    • Excellent written and verbal communication skills
    • Advanced understanding of OLTP versus OLAP environments

    Benefits: medical, dental, & vision insurance plans; employee-owned profit sharing (ESOP); 401K offered. The approximate pay range for this position is between $51.00 and $73.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
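    For illustration only (not part of the posting): a minimal Spark DataFrame API sketch of the kind of Databricks workflow described above: read raw JSON events, aggregate daily counts, and write a Delta table. Paths and column names are hypothetical.

    ```python
    # Hypothetical sketch of a Spark DataFrame workflow: ingest JSON events,
    # aggregate per day and event type, and write the result as Delta.
    # Paths and column names are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("core-data-example").getOrCreate()

    # Read raw events (schema inference kept simple for the sketch).
    events = spark.read.json("s3://example-bucket/raw/events/")

    daily_counts = (
        events
        .withColumn("day", F.to_date("event_ts"))
        .groupBy("day", "event_type")
        .agg(F.count("*").alias("events"))
    )

    # On Databricks this would typically land in a Delta table.
    daily_counts.write.format("delta").mode("overwrite").save(
        "s3://example-bucket/gold/daily_event_counts/"
    )
    ```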
    $51-73 hourly 1d ago
  • Data Engineer

    RSM Solutions, Inc. (4.4 company rating)

    Requirements engineer job in Irvine, CA

    Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn, I appreciate it. If you have read my job descriptions in the past, you will recognize how I write them. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.

    As with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very, very important, but we also have some social fit characteristics that matter. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago: if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.

    This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available. I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.

    The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused, covering the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house. You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.

    Here are some of the main responsibilities:
    • Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
    • Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault style modeling patterns for auditability.
    • Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
    • Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
    • Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
    • Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
    • Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
    • Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
    • Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
    • Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.

    Here is what we are seeking:
    • At least 6 years of experience building data pipelines in Spark or equivalent.
    • At least 2 years deploying workloads on Kubernetes/Kubeflow.
    • At least 2 years of experience with MLflow or similar experiment-tracking tools.
    • At least 6 years of experience in T-SQL and Python/Scala for Spark.
    • At least 6 years of PowerShell/.NET scripting.
    • At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
    • Kubernetes CKA/CKAD, Azure Data Engineer (DP-203), or MLOps-focused certifications (e.g., Kubeflow or MLflow) would be great to see.
    You will also mentor engineers on best practices in containerized data engineering and MLOps.
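    For illustration only (not part of the posting): a minimal sketch of the MLflow Tracking usage the responsibilities mention, logging parameters, metrics, and an artifact so a run can be reproduced or rolled back. The experiment name and values are hypothetical.

    ```python
    # Hypothetical sketch of MLflow experiment tracking: parameters, metrics,
    # and an artifact are logged under a named run for reproducibility.
    # The experiment name and all values are illustrative.
    import mlflow

    mlflow.set_experiment("churn-model-dev")

    with mlflow.start_run(run_name="baseline"):
        # Parameters that define the run, so it can be reproduced later.
        mlflow.log_param("max_depth", 6)
        mlflow.log_param("training_rows", 125_000)

        # Metrics computed during/after training (placeholder values here).
        mlflow.log_metric("auc", 0.87)
        mlflow.log_metric("data_freshness_hours", 3)

        # Arbitrary artifacts (plots, schema snapshots, etc.) can be attached too.
        with open("feature_list.txt", "w") as fh:
            fh.write("tenure_days\nlast_login_gap\nsupport_tickets_30d\n")
        mlflow.log_artifact("feature_list.txt")
    ```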
    $111k-166k yearly est. 22h ago
  • Lead Data Engineer - (Automotive exp)

    Intelliswift-An LTTS Company

    Requirements engineer job in Torrance, CA

    Role: Sr. Technical Lead. Duration: 12+ month contract.

    Daily Tasks Performed:
    • Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product.
    • Architect solutions that integrate with various data sources, APIs, and third-party platforms.
    • Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis.
    • Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services.
    • Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions.
    • Implement data quality checks, logging, and alerting mechanisms within workflows.
    • Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities.
    • Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation.
    • Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA) and ensure adherence to security, compliance, and data governance requirements.
    • Oversee development of real-time and batch data processing systems.
    • Collaborate with cross-functional teams including data analysts, product managers, and software engineers to translate business requirements into technical solutions.
    • Collaborate with stakeholders to define technical requirements, align technical solutions with business goals, and deliver product features.
    • Mentor and guide developers, fostering a culture of technical excellence and continuous improvement.
    • Troubleshoot complex technical issues and provide hands-on support as needed.
    • Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency as needed.
    • Optimize system performance, scalability, and cost efficiency.

    What this person will be working on: As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data.

    Position Success Criteria (Desired) - 'WANTS':
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 8+ years of software development experience, with at least 3+ years in a technical leadership role.
    • Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains.
    • Extensive hands-on experience with Presto, Hive, and Python.
    • Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis.
    • Familiarity with AWS data services such as S3, Athena, Glue, and Lambda.
    • Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing.
    • Experience ensuring data privacy, security, and compliance in SaaS environments.
    • Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools.
    • Excellent communication, leadership, and project management skills.
    • Experience working with Agile methodologies and DevOps practices.
    • Ability to thrive in a fast-paced, agile environment.
    • Collaborative mindset with a proactive approach to problem-solving.
    • Stays current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
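    For illustration only (not part of the posting): a minimal sketch of querying S3-resident data with AWS Athena through boto3, one of the AWS services the listing names. The database, table, and results location are hypothetical.

    ```python
    # Hypothetical sketch of running an Athena query via boto3 and reading back
    # the result rows. Database, table, and the S3 output location are illustrative.
    import time

    import boto3

    athena = boto3.client("athena")

    def run_query(sql: str) -> list[dict]:
        qid = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": "cdp_analytics"},
            ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
        )["QueryExecutionId"]

        # Poll until the query finishes (fine for a sketch; production code
        # would add timeouts and backoff).
        while True:
            state = athena.get_query_execution(
                QueryExecutionId=qid
            )["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)
        if state != "SUCCEEDED":
            raise RuntimeError(f"query ended in state {state}")

        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        header = [col["VarCharValue"] for col in rows[0]["Data"]]
        return [dict(zip(header, [c.get("VarCharValue") for c in r["Data"]]))
                for r in rows[1:]]

    # Example usage: daily event counts by type.
    # print(run_query("SELECT event_type, COUNT(*) AS n FROM events GROUP BY 1"))
    ```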
    $100k-141k yearly est. 4d ago
  • Data Engineer (AWS Redshift, BI, Python, ETL)

    Prosum (4.4 company rating)

    Requirements engineer job in Manhattan Beach, CA

    We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.

    Responsibilities:
    • Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
    • Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
    • Optimize data workflows for performance, scalability, and reliability.
    • Collaborate with BI teams to support dashboards, reporting, and analytics needs.
    • Ensure data quality, governance, and documentation across all solutions.

    Qualifications:
    • Proven experience with data engineering tools (SQL, Python, ETL frameworks).
    • Strong understanding of BI concepts, reporting tools, and dimensional modeling.
    • Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
    • Excellent problem-solving skills and ability to work in a cross-functional environment.
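    For illustration only (not part of the posting): a tiny Python sketch of the star-schema idea mentioned above, splitting raw order records into a customer dimension and an orders fact table. The source file and column names are hypothetical.

    ```python
    # Hypothetical sketch of a tiny ETL step toward a star schema: split raw
    # order records into a dimension table and a fact table joined by a
    # surrogate key. Column names and the source file are illustrative.
    import pandas as pd

    # Assumed columns: order_id, customer_name, region, amount, order_date
    raw = pd.read_csv("orders_raw.csv")

    # Dimension: one row per distinct customer, with a surrogate key.
    dim_customer = (
        raw[["customer_name", "region"]]
        .drop_duplicates()
        .reset_index(drop=True)
        .rename_axis("customer_key")
        .reset_index()
    )

    # Fact: measures plus the foreign key into the dimension.
    fact_orders = raw.merge(dim_customer, on=["customer_name", "region"])[
        ["order_id", "customer_key", "order_date", "amount"]
    ]

    dim_customer.to_parquet("dim_customer.parquet", index=False)
    fact_orders.to_parquet("fact_orders.parquet", index=False)
    ```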
    $99k-139k yearly est. 22h ago
  • Senior DevOps Engineer - AI Platform

    Ispace, Inc.

    Requirements engineer job in Westlake Village, CA

    JOB DETAILS: Sr DevOps Engineer - AI Platform. Contract Duration: 6-month contract to hire (full-time employment). Hourly Rate: $60-$72/hr on W2 contract.

    Job Description / Responsibilities: The Sr DevOps Engineer - AI Platform will:
    • Design, implement, and manage scalable and resilient infrastructure on AWS.
    • Architect and maintain Windows/Linux based environments, ensuring seamless integration with cloud platforms.
    • Develop and maintain infrastructure as code (IaC) using both AWS CloudFormation/CDK and Terraform/OpenTofu.
    • Develop and maintain configuration management for Windows and Linux servers using Chef.
    • Design, build, and optimize CI/CD pipelines using GitLab CI/CD for .NET applications.
    • Integrate and support AI services, including orchestration with AWS Bedrock, Google Agentspace, and other generative AI frameworks, ensuring they can be securely and efficiently consumed by platform services.
    • Enable AI/ML workflows by building and optimizing infrastructure pipelines that support large-scale model training, inference, and deployment across AWS and GCP environments.
    • Automate model lifecycle management (training, deployment, monitoring) through CI/CD pipelines, ensuring reproducibility and seamless integration with development workflows.
    • Collaborate with AI engineering teams to deliver scalable environments, standardized APIs, and infrastructure that accelerate AI adoption at the platform level.
    • Implement observability, security, data privacy, and cost-optimization strategies specifically for AI workloads, including monitoring and resource scaling for inference services.
    • Implement and enforce security best practices across the infrastructure and deployment processes.
    • Collaborate closely with development teams to understand their needs and provide DevOps expertise.
    • Troubleshoot and resolve infrastructure and application deployment issues.
    • Implement and manage monitoring and logging solutions to ensure system visibility and proactive issue detection.
    • Clearly and concisely contribute to the development and documentation of DevOps standards and best practices.
    • Stay up-to-date with the latest industry trends and technologies in cloud computing, DevOps, and security.
    • Provide mentorship and guidance to junior team members.

    Qualifications:
    • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
    • 5+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
    • 1+ year(s) of experience with AI services & LLMs.
    • Extensive hands-on experience with Amazon Web Services (AWS).
    • Solid understanding of Windows/Linux server administration and integration with cloud environments.
    • Proven experience with infrastructure-as-code tools, specifically AWS CDK and Terraform.
    • Strong experience designing and implementing CI/CD pipelines using GitLab CI/CD.
    • Experience deploying and managing .NET applications in cloud environments.
    • Deep understanding of security best practices and their implementation in cloud infrastructure and CI/CD pipelines.
    • Solid understanding of networking principles (TCP/IP, DNS, load balancing, firewalls) in cloud environments.
    • Experience with monitoring and logging tools (e.g., New Relic, CloudWatch).
    • Strong scripting skills (e.g., PowerShell, Python, Ruby, Bash).
    • Excellent problem-solving and troubleshooting skills.
    • Strong communication and collaboration skills.
    • Experience with the configuration management tool Chef.
    • Experience with containerization technologies (e.g., Docker, Kubernetes) is a plus.
    • Relevant AWS and/or GCP certifications are a plus.

    Preferred Qualifications:
    • Knowledge of and a strong understanding of PowerShell and Python scripting.
    • Strong background with AWS EC2 features and services (Auto Scaling and warm pools).
    • Understanding of the Windows server build process using tools like Chocolatey for packages and Packer for AMI/image generation.
    $60-72 hourly 4d ago
  • Staff Engineer

    Premier Group (4.5 company rating)

    Requirements engineer job in West Hollywood, CA

    West Hollywood | $170K - $205K + stock + benefits | Full time

    Are you ready to join one of the most innovative players in the Insurtech space, a company reshaping how data drives decision-making in insurance? Our client has recently secured a significant amount of funding and is driving their next phase of growth. They're now looking to grow their development function here in West Hollywood and want an experienced Technical Lead to join their team.

    What you'll be doing:
    • Build scalable, high-performance systems.
    • Architect and develop core integrations using Python, PostgreSQL, and AWS.
    • Be hands-on while also mentoring the team.
    • Liaise with non-technical stakeholders and decision makers.

    What we're looking for:
    • 10+ years' experience as a software engineer.
    • Experience in a technical leadership/mentorship position, ideally for 2+ years.
    • Strong proficiency in Python, PostgreSQL, and AWS.
    • Ideally some experience with React.js on the front-end.

    If you're an experienced staff-level engineer or technical lead looking for a new opportunity based in West Hollywood, please apply now for immediate consideration.
    $170k-205k yearly 1d ago
  • Plumbing Engineer

    K2D Consulting Engineers

    Requirements engineer job in Marina del Rey, CA

    We are currently seeking a Plumbing Engineer to join our team in Marina Del Rey, California.

    SUMMARY: This position is responsible for managing and performing tests on various materials and equipment, maintaining knowledge of all product specifications, and ensuring adherence to all required standards by performing the following duties.

    DUTIES AND RESPONSIBILITIES:
    • Build long-term customer relationships with existing and potential customers.
    • Effectively manage plumbing and design projects by satisfying clients' needs and meeting budget expectations and project schedules.
    • Provide support during construction phases.
    • Perform other related duties as assigned by management.

    SUPERVISORY RESPONSIBILITIES: Carries out supervisory responsibilities in accordance with the organization's policies and applicable laws.

    QUALIFICATIONS:
    • Bachelor's Degree (BA) from a four-year college or university in Mechanical Engineering or completed course work in Plumbing, or one to two years of related experience and/or training, or an equivalent combination of education and experience.
    • Certificates, licenses, and registrations: LEED Certification is a plus.
    • Computer skills: experienced at using a computer, preferably knowledgeable with MS Word and MS Excel; AutoCAD and REVIT are a plus.
    • Other skills: 5 years of experience minimum; individuals should have recent experience working for a consulting engineering or engineering/architectural firm designing plumbing systems.
    • Experience in the following preferred: residential, commercial, multi-family, and restaurants.
    • Strong interpersonal skills and experience in maintaining strong client relationships are required.
    • Ability to communicate effectively with people through oral presentations and written communications.
    • Ability to motivate multiple-discipline project teams in meeting clients' needs in a timely manner and meeting budget objectives.
    $87k-124k yearly est. 60d+ ago
  • DevOps Engineer

    Sonata Software

    Requirements engineer job in Westlake Village, CA

    In today's market, there is a unique duality in technology adoption: on one side, extreme focus on cost containment by clients, and on the other, deep motivation to modernize their digital storefronts to attract more consumers and B2B customers. As a leading Modernization Engineering company, we aim to deliver modernization-driven hypergrowth for our clients based on the deep differentiation we have created in Modernization Engineering, powered by our Lightening suite and 16-step Platformation™ playbook. In addition, we bring agility and systems thinking to accelerate time to market for our clients. Headquartered in Bengaluru, India, Sonata has a strong global presence, including key regions in the US, UK, Europe, APAC, and ANZ. We are a trusted partner of world-leading companies in BFSI (Banking, Financial Services, and Insurance), HLS (Healthcare and Lifesciences), TMT (Telecom, Media, and Technology), Retail & CPG, and Manufacturing. Our bouquet of Modernization Engineering services cuts across cloud, data, Dynamics, contact centers, and newer technologies like generative AI, MS Fabric, and other modernization platforms.

    Job Title: Sr. DevOps Engineer. Location: Westlake Village, CA (onsite position). Interview: in-person interview.

    Responsibilities: The Sr DevOps Engineer - AI Platform will:
    • Design, implement, and manage scalable and resilient infrastructure on AWS.
    • Architect and maintain Windows/Linux based environments, ensuring seamless integration with cloud platforms.
    • Develop and maintain infrastructure as code (IaC) using both AWS CloudFormation/CDK and Terraform/OpenTofu.
    • Develop and maintain configuration management for Windows and Linux servers using Chef.
    • Design, build, and optimize CI/CD pipelines using GitLab CI/CD for .NET applications.
    • Integrate and support AI services, including orchestration with AWS Bedrock, Google Agentspace, and other generative AI frameworks, ensuring they can be securely and efficiently consumed by platform services.
    • Enable AI/ML workflows by building and optimizing infrastructure pipelines that support large-scale model training, inference, and deployment across AWS and GCP environments.
    • Automate model lifecycle management (training, deployment, monitoring) through CI/CD pipelines, ensuring reproducibility and seamless integration with development workflows.
    • Collaborate with AI engineering teams to deliver scalable environments, standardized APIs, and infrastructure that accelerate AI adoption at the platform level.
    • Implement observability, security, data privacy, and cost-optimization strategies specifically for AI workloads, including monitoring and resource scaling for inference services.
    • Implement and enforce security best practices across the infrastructure and deployment processes.
    • Collaborate closely with development teams to understand their needs and provide DevOps expertise.
    • Troubleshoot and resolve infrastructure and application deployment issues.
    • Implement and manage monitoring and logging solutions to ensure system visibility and proactive issue detection.
    • Clearly and concisely contribute to the development and documentation of DevOps standards and best practices.
    • Stay up-to-date with the latest industry trends and technologies in cloud computing, DevOps, and security.
    • Provide mentorship and guidance to junior team members.

    Qualifications:
    • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
    • 5+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
    • 1+ year(s) of experience with AI services & LLMs.
    • Extensive hands-on experience with Amazon Web Services (AWS).
    • Solid understanding of Windows/Linux server administration and integration with cloud environments.
    • Proven experience with infrastructure-as-code tools, specifically AWS CDK and Terraform.
    • Strong experience designing and implementing CI/CD pipelines using GitLab CI/CD.
    • Experience deploying and managing .NET applications in cloud environments.
    • Deep understanding of security best practices and their implementation in cloud infrastructure and CI/CD pipelines.
    • Solid understanding of networking principles (TCP/IP, DNS, load balancing, firewalls) in cloud environments.
    • Experience with monitoring and logging tools (e.g., New Relic, CloudWatch).
    • Strong scripting skills (e.g., PowerShell, Python, Ruby, Bash).
    • Excellent problem-solving and troubleshooting skills.
    • Strong communication and collaboration skills.
    • Experience with the configuration management tool Chef.
    • Experience with containerization technologies (e.g., Docker, Kubernetes) is a plus.
    • Relevant AWS and/or GCP certifications are a plus.

    Preferred Qualifications:
    • Knowledge of and a strong understanding of PowerShell and Python scripting.
    • Strong background with AWS EC2 features and services (Auto Scaling and warm pools).
    • Understanding of the Windows server build process using tools like Chocolatey for packages and Packer for AMI/image generation.
    $101k-137k yearly est. 1d ago
  • Aerospace System Engineer II

    L'Garde, Inc.

    Requirements engineer job in Irvine, CA

    L'Garde is a full-service design, development, manufacturing, and qualification test supplier to Tier 1 primes and government agencies. We provide systems engineering and skilled technicians to make your Skunk Works-type project a reality. With over 50 years of aerospace expertise, our deployable systems test the limits of what's possible in the harshest of environments: in space, on the moon, and even on other planets. If you're an engineer who thrives on teamwork, clear communication, and seeing your work translate into cutting-edge aerospace solutions, we'd love to talk to you.

    A Day in the Life: We're looking for a Systems Engineer who is passionate about solving complex challenges in aerospace and enjoys working closely with others to make big ideas a reality. In this role, you'll help transform mission requirements into fully engineered space systems, balancing technical performance, schedule, and cost. You'll collaborate across disciplines (design, test, integration, and program management) to ensure our spacecraft and payload systems meet the highest standards of innovation and reliability.

    Key Responsibilities:
    • Lead systems engineering activities across the project lifecycle, from concept through delivery.
    • Develop and maintain system requirements, CONOPS, ICDs, and risk matrices.
    • Support Verification & Validation (V&V) efforts and create and maintain Model Based Systems Engineering (MBSE) models.
    • Partner with engineers, technicians, suppliers, and customers to resolve issues and ensure requirements are met.
    • Write and review test plans, procedures, and reports; analyze and post-process test data.
    • Contribute to design trade studies and product development planning.
    • Participate in major design reviews (SRR, PDR, CDR, TRR) and customer meetings.
    • Support proposal writing for advanced aerospace concepts.
    • Maintain a safe, clean, and organized work area by following 5S and safety guidelines.

    Who You Are:
    • You have a Bachelor's degree in engineering, science, or a related technical field.
    • 2-4 years of satellite systems engineering experience with DoD, NASA, or commercial space programs.
    • At least 2 years in management, project leadership, or team leadership roles.
    • Proficiency with requirements tracking and management. Proficiency with Model Based Systems Engineering and requirements tracking tools such as CAMEO and DOORS is a plus; Systems Engineers will be expected to have completed training for these tools within the first year.
    • Hands-on experience with hardware/software interfaces, aerospace drawings, and GD&T standards.
    • Exposure to SolidWorks CAD, FEA, Matlab, Thermal Desktop, CFD (Star-CCM+), or LabView preferred.
    • The ability to obtain a U.S. Security Clearance, for which the U.S. Government requires U.S. citizenship. Top Secret Security Clearance a plus.
    • Excellent written and verbal communication skills.
    • Strong interpersonal skills with the ability to collaborate across all levels of the organization.
    • Detail-oriented, organized, and adaptable in a fast-paced environment.
    • Strong problem-solving mindset and passion for working in a team-driven culture.

    What We Offer:
    • Be at the forefront of aerospace innovation by working on cutting-edge aerospace technologies.
    • Opportunity to wear multiple hats and grow your skill set.
    • Collaborative and inclusive work culture where your contributions are highly valued.
    • Competitive salary.
    • Top-tier benefits: 100% of premiums for both employees and dependents are covered by the company (medical, dental, vision).
    • Flexible Spending Account.
    • Retirement plan with company match.
    • Company-sponsored life and LTD insurance.
    • Generous paid time off policy, with up to 4 weeks in the first year.
    • Robust paid holiday schedule.

    Pay range: $110,000.00 - $145,000.00 per year. Join our team as an Aerospace Systems Engineer II and contribute to the advancement of aerospace innovation by taking on diverse, impactful projects in a collaborative environment where your contributions are valued and your growth is fostered through hands-on experience. L'Garde is an equal opportunity employer, including individuals with disabilities and veterans, and participates in the E-Verify Program.
    $110k-145k yearly 22h ago
  • Descent Systems Engineer

    In Orbit Aerospace

    Requirements engineer job in Torrance, CA

    In Orbit envisions a world where our most critical resources are accessible when we need them the most. Today, In Orbit is on a mission to provide the most resilient and autonomous cargo delivery solutions for regions suffering from conflict and natural disasters.

    Descent Systems Engineer: In Orbit is looking for a Descent Systems Engineer eager to join a diverse and dynamic team developing solutions for cargo delivery where traditional aircraft and drones fail. As a Descent Systems Engineer at In Orbit you will work on the design, development, and testing of advanced parachutes and decelerator systems. You will work with other engineers on integrating decelerator subsystems into the vehicle. The ideal candidate for this position will have experience manufacturing and testing parachute systems, a solid foundation in aerodynamic and mechanical design principles, and flight testing experience.

    Responsibilities:
    • Lead the development of parafoils, reefing systems, and other decelerator components.
    • Develop fabrication and manufacturing processes including material selection, patterning, sewing, rigging, and hardware integration.
    • Plan and conduct flight tests including drop tests, high-altitude balloon tests, and other captive-carry deployments.
    • Support the development of test plans, procedures, and instrumentation requirements to verify system performance.
    • Collaborate closely with mechanical, avionics, and software teams for vehicle-level integrations.
    • Own documentation and configuration management for parachute assemblies, manufacturing specifications, and test reports.

    Basic Qualifications:
    • Bachelor's degree in Aerospace Engineering or a similar curriculum.
    • Strong understanding of aerodynamics, drag modeling, reefing techniques, and dynamic behaviors of decelerators.
    • Experience with reefing line cutting systems or multi-stage deployment mechanisms.
    • Experience conducting ground and flight tests for decelerator systems, including test planning, instrument integration, data analysis, and anomaly investigation.
    • Expertise with textile materials (e.g., F-111, S-P fabric, Kevlar, Dyneema).
    • Ability to work hands-on with sewing machines and ground test fixtures.
    • Solid teamworking and relationship building skills with the ability to effectively communicate difficult technical problems and solutions with other engineering disciplines.

    Preferred Experience and Skills:
    • Experience with guided parachute systems.
    • Familiarity with FAA coordination for flight testing in and out of controlled airspace.
    • Experience with pattern design tools such as SpaceCAD, Lectra Modaris, or similar.

    Additional Requirements:
    • Willing to work extended hours as needed.
    • Able to stand for extended periods of time.
    • Able to occasionally travel (~25%) and support off-site testing.

    ITAR Requirements: To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State.
    $78k-106k yearly est. 4d ago

Learn more about requirements engineer jobs

How much does a requirements engineer earn in Pico Rivera, CA?

The average requirements engineer in Pico Rivera, CA earns between $74,000 and $144,000 annually. This compares to the national average requirements engineer range of $62,000 to $120,000.

Average requirements engineer salary in Pico Rivera, CA

$103,000

What are the biggest employers of Requirements Engineers in Pico Rivera, CA?

The biggest employers of Requirements Engineers in Pico Rivera, CA are:
  1. Pantar Solutions
  2. Kroger
  3. AHMC Healthcare
  4. G Holdings Inc
  5. Panda Express
  6. Hilton
  7. Aptus Group