
Requirements engineer jobs in East Los Angeles, CA

1,222 jobs
  • Space-Based Environment Monitoring Systems Engineer (Secret clearance)

    Vantor

    Requirements engineer job in El Segundo, CA

    Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people work together to help our customers see the world differently and, in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.

    To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, asylee, or refugee. Note on cleared roles: if this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status. Export Control/ITAR: certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3). Please review the job details below.

    Are you looking for an opportunity to combine your technical skills with big-picture thinking to make an impact in the DoD? You understand your customer's environment and how to develop the right systems for their mission. Your ability to translate real-world needs into technical specifications makes you an integral part of delivering a customer-focused engineering solution. At Vantor, you'll work with the U.S. Space Force as part of the effort to develop and rapidly deploy the next generation of resilient Missile Warning (MW), Tactical Intelligence, Surveillance, and Reconnaissance (TISR), and Environmental Monitoring (EM) capabilities to deter attacks and provide critical information to our warfighters to defeat our enemies in battle. In this role, you will lead a Systems Engineering and Integration (SE&I) team to plan and execute SE&I processes for space programs, including requirements analysis, architecture design, integration, testing, verification, and transition. You will plan and coordinate SE&I activities across the SE&I team and the broader stakeholder community, including Federally Funded Research and Development Centers (FFRDCs), development contractors, and external stakeholders. Grow your skills by researching new requirements, technologies, and threats, and by using innovative engineering methodologies and tools to create tomorrow's solutions. Join our team and create the future of remote sensing in the Space Force. Due to the nature of work performed within this facility, U.S. citizenship is required. Empower change with us.

    Build Your Career: When you join Vantor, you'll have the opportunity to connect with other professionals doing similar work across multiple markets. You'll share best practices and work through challenges as you gain experience and mentoring to develop your career. In addition, you will have access to a wealth of training resources through our Digital University, an online learning portal where you can access more than 5,000 tech courses, certifications, and books. Build your technical skills through hands-on training on the latest tools and tech from our in-house experts. Pursuing certifications? Take advantage of our tuition assistance, on-site courses, vendor relationships, and a network of experts who can give you helpful tips. We'll help you develop the career you want as you chart your own course for success.

    Qualifications:
    • Secret clearance
    • Bachelor's degree in a Science, Technology, Engineering, or Mathematics (STEM) field
    • 10+ years of experience performing SE&I tasks on major DoD or IC space programs
    • 5+ years of experience leading a team performing SE&I on large-scale national security satellite programs
    • Experience leading a team in the development of technical specifications, interface control documents, integration plans and schedules, and inter-service support agreements
    • Knowledge of DoD 5000.01 and 5000.02
    • Ability to communicate and establish collaborative relationships with government clients, FFRDCs, and associate contractor teammates to achieve program goals

    Preferred Qualifications:
    • Experience leading a team performing SE&I tasks on Space-Based Environmental Monitoring (SBEM) systems
    • Experience leading a team using a Model-Based Systems Engineering approach to manage system definitions and technical baselines
    • Knowledge of systems engineering standards, including IEEE 15288.1 and IEEE 15288.2
    • Knowledge of Agile methodologies
    • Highly motivated, with a dynamic work ethic and a strong desire to contribute to the DoD mission
    • Ability to perform multiple systems engineering and program management functions in support of design reviews and requirements verification
    • Ability to identify, analyze, and resolve technical risks and issues, develop technical reports, and collaborate with government and other stakeholders to implement recommended solutions
    • TS/SCI clearance
    • Master's degree in Engineering, Mathematics, Physics, or CS
    • INCOSE Systems Engineering Professional certification (ASEP, CSEP, or ESEP)

    Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skill sets should not expect to receive the upper end of the pay range. The base pay for this position within California, Colorado, Hawaii, New Jersey, the Washington, DC metropolitan area, and all other states is $137,000.00 - $229,000.00.

    Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement, and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************

    Application window: The application window is three days from the date the job is posted, and the job will remain posted until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date. The date of posting can be found on Vantor's Career page at the top of each job posting. To apply, submit your application via Vantor's Career page.

    EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas.
All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
    $78k-106k yearly est. 4d ago
  • Backend Engineer - Python / API (Onsite)

    CGS Business Solutions (4.7 company rating)

    Requirements engineer job in Beverly Hills, CA

    CGS Business Solutions is committed to helping you, as an esteemed IT professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking technical professionals searching for challenging and rewarding work for the following opportunity: CGS is hiring on behalf of one of our Risk & Protection Services clients in the West LA area for a full-time role.

    We're looking for a strategic Backend Engineer to join a high-growth team building next-generation technology. In this role, you'll play a critical part in architecting and delivering scalable backend services that power an AI-native agent workspace. You'll translate complex business needs into secure, high-performance, and maintainable systems. This opportunity is ideal for a hands-on engineer who excels at designing cloud-native architectures and thrives in a fast-paced, highly collaborative startup environment.

    What You'll Do
    • Partner closely with engineering, product, and operations to define high-impact problems and craft the right technical solutions.
    • Design and deliver scalable backend systems using modern architectures and best practices.
    • Build Python APIs and complex backend logic on top of AWS serverless infrastructure (see the sketch after this listing).
    • Contribute to the architecture and evolution of core system components.
    • Elevate engineering standards, tooling, and backend development processes across the team.

    Who You Are
    • 6+ years of software engineering experience, with deep expertise in building end-to-end systems and a strong backend focus.
    • Expert-level proficiency in Python and API development with Flask.
    • Strong understanding of AWS and cloud-native architecture.
    • Experience with distributed systems, APIs, and data modeling.
    • Proven ability to architect and optimize systems for performance and reliability.
    • Excellent technical judgment and the ability to drive clarity and execution in ambiguous environments.
    • Experience in insurance or enterprise SaaS is a strong plus.

    About CGS Business Solutions: CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus on IT applications, network infrastructure, information security, and engineering. CGS is an Inc. 5000 company and is honored to have been selected as one of the best IT recruitment firms in California. After five consecutive fastest-growing-company titles, CGS continues to break into new markets across the USA. Companies count on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
    $90k-118k yearly est. 5d ago
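The role above centers on Python API development with Flask in front of AWS serverless infrastructure. As a minimal sketch of that pattern (Flask 2.x route shortcuts; the /agents resource, its fields, and the in-memory store are hypothetical placeholders, not details from the posting):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database; illustrative only.
AGENTS = {}

@app.post("/agents")
def create_agent():
    """Register an agent-workspace record from a JSON payload."""
    payload = request.get_json(force=True)
    agent_id = str(len(AGENTS) + 1)
    AGENTS[agent_id] = {"id": agent_id, "name": payload.get("name", "unnamed")}
    return jsonify(AGENTS[agent_id]), 201

@app.get("/agents/<agent_id>")
def get_agent(agent_id):
    """Fetch a single agent record, returning 404 if unknown."""
    agent = AGENTS.get(agent_id)
    if agent is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(agent)

if __name__ == "__main__":
    app.run(debug=True)
```

On AWS serverless infrastructure, an app like this would typically sit behind API Gateway and Lambda via a WSGI adapter rather than `app.run()`, but the routing and serialization logic stays the same.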
  • Azure Cloud Engineer (Jr/Mid) - (Locals only)

    Maxonic Inc.

    Requirements engineer job in Los Angeles, CA

    Job Title: Cloud Team Charter
    Job Type: Contract to Hire
    Work Schedule: Hybrid (3 days onsite, 2 days remote)
    Rate: $60/hr, based on experience

    Responsibilities (Cloud Team Charter/Scope, 2 resources: 1 Sr and 1 Mid/Jr):
    Operate and maintain Cloud Foundation Services, such as:
    • Azure Policies
    • Backup engineering and enforcement
    • Logging standards and enforcement
    • Antivirus and malware enforcement
    • Azure service/resource lifecycle management, including retirement of resources
    • Tagging enforcement

    Infrastructure Security:
    • Ownership of Defender reporting as it relates to infrastructure.
    • Collaboration with Cyber Security and the App team to generate necessary reports for infrastructure security review.
    • Actively monitor and remediate infrastructure vulnerabilities, coordinating with the App team to address them.
    • Drive continuous improvement in cloud security by tracking and maintaining infrastructure vulnerabilities through Azure Security Center.

    Cloud Support:
    • PaaS DB support
    • Support for cloud networking (L2), working with the Network team as needed
    • Developer support in the cloud
    • Support for the CMDB team to track cloud assets
    • L4 cloud support for the enterprise

    About Maxonic: Since 2002, Maxonic has been at the forefront of connecting candidate strengths to client challenges. Our award-winning, dedicated team of recruiting professionals is specialized by technology; they are great listeners who seek to find a position that meets the long-term career needs of our candidates. We take pride in the over 10,000 candidates that we have placed, and in the repeat business that we earn from our satisfied clients. Interested in applying? Please apply with your most current resume. Feel free to contact Jhankar Chanda (******************* / ************) for more details.
    $60 hourly 2d ago
  • Snowflake DBT Engineer

    Marvel Infotech Inc.

    Requirements engineer job in Irvine, CA

    W2 Only (Visa Independent)

    Key Responsibilities
    • Design, develop, and maintain ELT pipelines using Snowflake and DBT (a sketch of an Airflow-orchestrated dbt run follows this listing).
    • Build and optimize data models in Snowflake to support analytics and reporting.
    • Implement modular, testable SQL transformations using DBT.
    • Integrate DBT workflows into CI/CD pipelines and manage infrastructure as code using Terraform.
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
    • Optimize Snowflake performance through clustering, partitioning, indexing, and materialized views.
    • Automate data ingestion and transformation workflows using Airflow or similar orchestration tools.
    • Ensure data quality, governance, and security across pipelines.
    • Troubleshoot and resolve performance bottlenecks and data issues.
    • Maintain documentation for data architecture, pipelines, and operational procedures.

    Required Skills and Qualifications
    • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
    • 7 years of experience in data engineering, with at least 2 years focused on Snowflake and DBT.
    • Strong proficiency in SQL and Python.
    • Experience with cloud platforms (AWS, GCP, or Azure).
    • Familiarity with Git, CI/CD, and Infrastructure as Code tools (Terraform, CloudFormation).
    • Knowledge of data modeling, star schema, normalization, and ELT best practices.
    $86k-122k yearly est. 21h ago
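This listing (and the near-identical one that follows) pairs dbt transformations on Snowflake with Airflow orchestration. A minimal sketch of that pattern using Airflow 2.x's BashOperator, assuming dbt is installed on the worker; the DAG id and the /opt/dbt/analytics project path are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical dbt project location; point this at the real project.
DBT_DIR = "/opt/dbt/analytics"

with DAG(
    dag_id="dbt_snowflake_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    # Build the models, then run the schema/data tests defined alongside them.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_run >> dbt_test
```

Snowflake connection details would live in dbt's profiles.yml rather than in the DAG, which keeps credentials out of the orchestration layer.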
  • Snowflake DBT Engineer-- CDC5697451

    Compunnel Inc. (4.4 company rating)

    Requirements engineer job in Irvine, CA

    Key Responsibilities
    • Design, develop, and maintain ELT pipelines using Snowflake and DBT.
    • Build and optimize data models in Snowflake to support analytics and reporting.
    • Implement modular, testable SQL transformations using DBT.
    • Integrate DBT workflows into CI/CD pipelines and manage infrastructure as code using Terraform.
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
    • Optimize Snowflake performance through clustering, partitioning, indexing, and materialized views.
    • Automate data ingestion and transformation workflows using Airflow or similar orchestration tools.
    • Ensure data quality, governance, and security across pipelines.
    • Troubleshoot and resolve performance bottlenecks and data issues.
    • Maintain documentation for data architecture, pipelines, and operational procedures.

    Required Skills and Qualifications
    • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
    • 10 years of experience in data engineering, with at least 3 years focused on Snowflake and DBT.
    • Strong proficiency in SQL and Python.
    • Experience with cloud platforms (AWS, GCP, or Azure).
    • Familiarity with Git, CI/CD, and Infrastructure as Code tools (Terraform, CloudFormation).
    • Knowledge of data modeling, star schema, normalization, and ELT best practices.
    $92k-118k yearly est. 21h ago
  • Senior Data Engineer - Commerce Data Pipelines

    Akube

    Requirements engineer job in Santa Monica, CA

    City: Seattle, WA / Santa Monica, CA / NYC
    Onsite/Hybrid/Remote: Hybrid (4 days a week onsite, Friday remote)
    Duration: 10 months
    Rate Range: Up to $92.50/hr on W2, depending on experience (no C2C, 1099, or sub-contract)
    Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B

    Must Have:
    • SQL
    • ETL design and development
    • Data modeling (dimensional and normalization)
    • ETL orchestration tools (Airflow or similar)
    • Data quality frameworks
    • Performance tuning for SQL and ETL
    • Python or PySpark
    • Snowflake or Redshift

    Responsibilities:
    • Partner with business, analytics, and infrastructure teams to define data and reporting requirements.
    • Collect data from internal and external systems and design table structures for scalable data solutions.
    • Build, enhance, and maintain ETL pipelines with strong performance and reliability.
    • Develop automated data quality checks and support ongoing pipeline monitoring.
    • Implement database deployments using tools such as Schema Change.
    • Conduct SQL and ETL tuning and deliver ad hoc analysis as needed.
    • Support Agile ceremonies and collaborate in a fast-paced environment.

    Qualifications:
    • 3+ years of data engineering experience.
    • Strong grounding in data modeling, including dimensional models and normalization.
    • Deep SQL expertise with advanced tuning skills.
    • Experience with relational or distributed data systems such as Snowflake or Redshift.
    • Familiarity with ETL/orchestration platforms like Airflow or NiFi.
    • Programming experience with Python or PySpark.
    • Strong analytical reasoning, communication skills, and ability to work cross-functionally.
    • Bachelor's degree required.
    $92.5 hourly 1d ago
  • Senior Data Engineer

    Robert Half (4.5 company rating)

    Requirements engineer job in Los Angeles, CA

    Robert Half is partnering with a well-known brand seeking an experienced Data Engineer with Databricks experience. Working alongside data scientists and software developers, your work will directly impact dynamic pricing strategies by ensuring the availability, accuracy, and scalability of data systems. This position is full time with full benefits and 3 days onsite in the Woodland Hills, CA area.

    Responsibilities:
    • Design, build, and maintain scalable data pipelines for dynamic pricing models.
    • Collaborate with data scientists to prepare data for model training, validation, and deployment.
    • Develop and optimize ETL processes to ensure data quality and reliability.
    • Monitor and troubleshoot data workflows for continuous integration and performance.
    • Partner with software engineers to embed data solutions into product architecture.
    • Ensure compliance with data governance, privacy, and security standards.
    • Translate stakeholder requirements into technical specifications.
    • Document processes and contribute to data engineering best practices.

    Requirements:
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 4+ years of experience in data engineering, data warehousing, and big data technologies.
    • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
    • Must have experience with Databricks.
    • Experience working within an Azure, AWS, or GCP environment.
    • Familiarity with big data tools like Spark, Hadoop, or Databricks.
    • Experience with real-time data pipeline tools.
    • Experience with Python.
    $116k-165k yearly est. 4d ago
  • Snowflake/AWS Data Engineer

    Ostechnical

    Requirements engineer job in Irvine, CA

    Sr. Data Engineer, full-time direct hire, hybrid; work location: Irvine, CA.

    The Senior Data Engineer will help design and build a modern data platform that supports enterprise analytics, integrations, and AI/ML initiatives. This role focuses on developing scalable data pipelines, modernizing the enterprise data warehouse, and enabling self-service analytics across the organization.

    Key Responsibilities
    • Build and maintain scalable data pipelines using Snowflake, dbt, and Fivetran.
    • Design and optimize enterprise data models for performance and scalability.
    • Support data cataloging, lineage, quality, and compliance efforts.
    • Translate business and analytics requirements into reliable data solutions.
    • Use AWS (primarily S3) for storage, integration, and platform reliability.
    • Perform other data engineering tasks as needed.

    Required Qualifications
    • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
    • 5+ years of data engineering experience.
    • Hands-on expertise with Snowflake, dbt, and Fivetran.
    • Strong background in data warehousing, dimensional modeling, and SQL.
    • Experience with AWS (S3) and data governance tools such as Alation or Atlan.
    • Proficiency in Python for scripting and automation.
    • Experience with streaming technologies (Kafka, Kinesis, Flink) a plus.
    • Knowledge of data security and compliance best practices.
    • Exposure to AI/ML workflows and modern BI tools like Power BI, Tableau, or Looker.
    • Ability to mentor junior engineers.

    Skills
    • Snowflake, dbt, Fivetran
    • Data modeling and warehousing
    • AWS
    • Data governance
    • SQL and Python
    • Strong communication and cross-functional collaboration
    • Interest in emerging data and AI technologies
    $99k-139k yearly est. 2d ago
  • Data Engineer

    Vaco By Highspring

    Requirements engineer job in Irvine, CA

    Job Title: Data Engineer
    Duration: Direct-Hire Opportunity

    We are looking for a Data Engineer who is hands-on, collaborative, and experienced with Microsoft SQL Server, Snowflake, AWS RDS, and MySQL. The ideal candidate has a strong background in data warehousing, data lakes, ETL pipelines, and business intelligence tools. This role plays a key part in executing data strategy, driving optimization, reliability, and scalable BI capabilities across the organization. It's an excellent opportunity for a data professional who wants to influence architectural direction, contribute technical expertise, and grow within a data-driven company focused on innovation.

    Key Responsibilities
    • Design, develop, and maintain SQL Server and Snowflake data warehouses and data lakes, focusing on performance, governance, and security.
    • Manage and optimize database solutions within Snowflake, SQL Server, MySQL, and AWS RDS.
    • Build and enhance ETL pipelines using tools such as Snowpipe, DBT, Boomi, SSIS, and Azure Data Factory.
    • Utilize data tools such as SSMS, Profiler, Query Store, and Redgate for performance tuning and troubleshooting.
    • Perform database administration tasks, including backup, restore, and monitoring.
    • Collaborate with Business Intelligence Developers and Business Analysts on enterprise data projects.
    • Ensure database integrity, compliance, and adherence to best practices in data security.
    • Configure and manage data integration and BI tools such as Power BI, Tableau, and Power Automate, along with scripting languages (Python, R).

    Qualifications
    • Proficiency with Microsoft SQL Server, including advanced T-SQL development and optimization.
    • 7+ years working as a SQL Server Developer/Administrator, with experience in relational and object-oriented databases.
    • 2+ years of experience with Snowflake data warehouse and data lake solutions.
    • Experience developing pipelines and reporting solutions using Power BI, SSRS, SSIS, Azure Data Factory, or DBT.
    • Scripting and automation experience using Python, PowerShell, or R.
    • Familiarity with data integration and analytics tools such as Boomi, Redshift, or Databricks (a plus).
    • Excellent communication, problem-solving, and organizational skills.
    • Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.

    Technical Skills
    • SQL Server / Snowflake / MySQL / AWS RDS
    • ETL development (Snowpipe, SSIS, Azure Data Factory, DBT)
    • BI tools (Power BI, Tableau)
    • Python, R, PowerShell
    • Data governance and security best practices

    Determining compensation for this role (and others) at Vaco/Highspring depends upon a wide array of factors, including but not limited to the individual's skill sets, experience and training, licensure and certifications, office location and other geographic considerations, and other business and organizational needs. That said, as required by local law in geographies that require salary range disclosure, Vaco/Highspring notes that the salary range for the role is stated in this job posting. The individual may also be eligible for discretionary bonuses, and can participate in medical, dental, and vision benefits as well as the company's 401(k) retirement plan.

    Additional disclaimer: Unless otherwise noted in the job description, the position Vaco/Highspring is filling is occupied. Please note, however, that Vaco/Highspring is regularly asked to provide talent to other organizations.
    By submitting to this position, you are agreeing to be included in our talent pool for future hiring for similarly qualified positions. Submissions to this position are subject to the use of AI to perform preliminary candidate screenings, focused on ensuring the minimum job requirements noted in the position are satisfied. Candidates who advance beyond this initial phase will be assessed by Vaco/Highspring recruiters and hiring managers. Vaco/Highspring does not have knowledge of the tools used by its clients in making final hiring decisions and cannot opine on their use of AI products.
    $99k-139k yearly est. 21h ago
  • Data Analytics Engineer

    Archwest Capital

    Requirements engineer job in Irvine, CA

    We are seeking a Data Analytics Engineer to join our team, serving as a hybrid Database Administrator, Data Engineer, and Data Analyst responsible for managing core data infrastructure, developing and maintaining ETL pipelines, and delivering high-quality analytics and visual insights to executive stakeholders. This role bridges technical execution with business intelligence, ensuring that data across Salesforce, financial, and operational systems is accurate, accessible, and strategically presented.

    Essential Functions
    • Database Administration: Oversee and maintain database servers, ensuring performance, reliability, and security. Manage user access, backups, and data recovery processes while optimizing queries and database operations.
    • Data Engineering (ELT): Design, build, and maintain robust ELT pipelines (SQL/DBT or equivalent) to extract, transform, and load data across Salesforce, financial, and operational sources. Ensure data lineage, integrity, and governance throughout all workflows.
    • Data Modeling & Governance: Design scalable data models and maintain a governed semantic layer and KPI catalog aligned with business objectives. Define data quality checks, SLAs, and lineage standards to reconcile analytics with finance source-of-truth systems.
    • Analytics & Reporting: Develop and manage executive-facing Tableau dashboards and visualizations covering key lending and operational metrics, including pipeline conversion, production, credit quality, delinquency/charge-offs, DSCR, and LTV distributions.
    • Presentation & Insights: Translate complex datasets into clear, compelling stories and presentations for leadership and cross-functional teams. Communicate findings through visual reports and executive summaries to drive strategic decisions.
    • Collaboration & Integration: Partner with Finance, Capital Markets, and Operations to refine KPIs and perform ad-hoc analyses. Collaborate with Engineering to align analytical and operational data, manage integrations, and support system scalability.
    • Enablement & Training: Conduct training sessions, create documentation, and host data office hours to promote data literacy and empower business users across the organization.

    Competencies & Skills
    • Advanced SQL proficiency with strong data modeling, query optimization, and database administration experience (PostgreSQL, MySQL, or equivalent).
    • Hands-on experience managing and maintaining database servers and optimizing performance.
    • Proficiency with ETL/ELT frameworks (DBT, Airflow, or similar) and cloud data stacks (AWS/Azure/GCP).
    • Strong Tableau skills: parameters, LODs, row-level security, executive-level dashboard design, and storytelling through data.
    • Experience with Salesforce data structures and ingestion methods.
    • Proven ability to communicate and present technical data insights to executive and non-technical stakeholders.
    • Solid understanding of lending/financial analytics (pipeline conversion, delinquency, DSCR, LTV).
    • Working knowledge of Python for analytics tasks, cohort analysis, and variance reporting.
    • Familiarity with version control (Git), CI/CD for analytics, and data governance frameworks.
    • Excellent organizational, documentation, and communication skills with a strong sense of ownership and follow-through.

    Education & Experience
    • Bachelor's degree in Computer Science, Engineering, Information Technology, Data Analytics, or a related field.
    • 3+ years of experience in data analytics, data engineering, or database administration roles.
    • Experience supporting executive-level reporting and maintaining database infrastructure in a fast-paced environment.
    $99k-139k yearly est. 1d ago
  • DevOps Engineer

    Evona

    Requirements engineer job in Irvine, CA

    DevOps Engineer - Satellite Technology
    Onsite in Irvine, CA or Washington, DC
    Pioneering Space Technology | Secure Cloud | Mission-Critical Systems

    We're working with a leading organization in the satellite technology sector, seeking a DevOps Engineer to join their growing team. You'll play a key role in shaping, automating, and securing the software infrastructure that supports next-generation space missions. This is a hands-on role within a collaborative, high-impact environment, ideal for someone who thrives on optimizing cloud performance and supporting mission-critical operations in aerospace.

    What You'll Be Doing
    • Maintain and optimize AWS cloud environments, implementing security updates and best practices
    • Manage daily operations of Kubernetes clusters and ensure system reliability
    • Collaborate with cybersecurity teams to ensure full compliance across AWS infrastructure
    • Support software deployment pipelines and infrastructure automation using Terraform and CI/CD tools
    • Work cross-functionally with teams including satellite operations, software analytics, and systems engineering
    • Troubleshoot and resolve environment issues to maintain uptime and efficiency
    • Apply an "Infrastructure as Code" approach to all system development and management

    What You'll Bring
    • Degree in Computer Science or a related field
    • 2-3 years' experience with Kubernetes and containerized environments
    • 3+ years' Linux systems administration experience
    • Hands-on experience with cloud services (AWS, GCP, or Azure)
    • Strong understanding of Terraform and CI/CD pipeline tools (e.g., FluxCD, Argo)
    • Skilled in Python or Go
    • Familiarity with software version control systems
    • Solid grounding in cybersecurity principles (networking, authentication, encryption, firewalls)
    • Eligibility to obtain a U.S. Security Clearance

    Preferred
    • Certified Kubernetes Administrator or Developer
    • AWS Certified Security credentials

    This role offers the chance to make a tangible impact in the satellite and space exploration sector, joining a team that's building secure, scalable systems for mission success. If you're passionate about space, cloud infrastructure, and cutting-edge DevOps practices, this is your opportunity to be part of something extraordinary.
    $98k-133k yearly est. 3d ago
  • Big Data Engineer

    Kellymitchell Group (4.5 company rating)

    Requirements engineer job in Santa Monica, CA

    Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.

    Responsibilities:
    • Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
    • Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads (see the streaming sketch after this listing), with continuous improvements to performance, scalability, reliability, and availability
    • Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
    • Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
    • Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
    • Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment

    Desired Skills/Experience:
    • Bachelor's degree in a STEM field (Science, Technology, Engineering, Mathematics)
    • 5+ years of relevant professional experience
    • 4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
    • 3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
    • Strong understanding of system and application design, architecture principles, and distributed system fundamentals
    • Proven experience building highly available, scalable, and production-grade services
    • Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
    • Experience processing massive datasets at the petabyte scale
    • Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
    • Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
    • Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular

    Benefits:
    • Medical, Dental, & Vision Insurance Plans
    • Employee-Owned Profit Sharing (ESOP)
    • 401K offered

    The approximate pay range for this position is between $52.00 and $75.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $52-75 hourly 2d ago
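The posting above asks for hands-on streaming experience with Spark and Kafka. A minimal PySpark Structured Streaming sketch of the read-aggregate-write loop it describes, assuming the Kafka connector package is on the Spark classpath; the broker address and topic name are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Subscribe to a Kafka topic as an unbounded DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka values arrive as bytes: cast them, then count events per 5-minute window.
counts = (
    events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
    .groupBy(F.window("timestamp", "5 minutes"))
    .count()
)

# Console sink for demonstration; a production job would write to a lake or warehouse.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```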
  • Data Engineer

    RSM Solutions, Inc. (4.4 company rating)

    Requirements engineer job in Irvine, CA

    Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it. If you have read my job descriptions in the past, you will recognize how I write them. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Because of this, I actually write JDs myself: no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia, especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.

    As with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very, very important, but some social fit characteristics matter too. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is a very dear friend of mine, and he said something interesting to me not long ago: if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job', so the ability to jump in and help is needed for success in this role.

    This role is being done onsite in Irvine, California. I prefer working with candidates who are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available. I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone who is not already a US Citizen or Green Card Holder.

    The Data Engineer role is similar to the Data Integration role I posted, but this one is more Ops-focused: orchestrating deployment and MLflow, orchestrating and using data on the clusters, and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house. You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.

    Here are some of the main responsibilities:
    • Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
    • Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault style modeling patterns for auditability.
    • Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
    • Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
    • Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
    • Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks (a small tracking sketch follows this listing).
    • Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
    • Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
    • Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
    • Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.

    Here is what we are seeking:
    • At least 6 years of experience building data pipelines in Spark or equivalent.
    • At least 2 years deploying workloads on Kubernetes/Kubeflow.
    • At least 2 years of experience with MLflow or similar experiment-tracking tools.
    • At least 6 years of experience in T-SQL and Python/Scala for Spark.
    • At least 6 years of PowerShell/.NET scripting.
    • At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
    • Kubernetes CKA/CKAD, Azure Data Engineer (DP-203), or MLOps-focused certifications (e.g., Kubeflow or MLflow) would be great to see.
    • Willingness to mentor engineers on best practices in containerized data engineering and MLOps.
    $111k-166k yearly est. 21h ago
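The responsibilities above lean on MLflow Tracking and the Model Registry for reproducible experiments. A small sketch of the tracking side, assuming scikit-learn is available; the experiment name and model are illustrative stand-ins for the client's actual workloads:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Version the parameters, metric, and artifact so the run can be reproduced
    # or rolled back later via the Model Registry.
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```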
  • Data Engineer (AWS Redshift, BI, Python, ETL)

    Prosum (4.4 company rating)

    Requirements engineer job in Manhattan Beach, CA

    We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.

    Responsibilities:
    • Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
    • Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
    • Optimize data workflows for performance, scalability, and reliability.
    • Collaborate with BI teams to support dashboards, reporting, and analytics needs.
    • Ensure data quality, governance, and documentation across all solutions.

    Qualifications:
    • Proven experience with data engineering tools (SQL, Python, ETL frameworks).
    • Strong understanding of BI concepts, reporting tools, and dimensional modeling.
    • Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
    • Excellent problem-solving skills and ability to work in a cross-functional environment.
    $99k-139k yearly est. 21h ago
  • Lead Data Engineer - (Automotive exp)

    Intelliswift-An LTTS Company

    Requirements engineer job in Torrance, CA

    Role: Sr. Technical Lead
    Duration: 12+ Month Contract

    Daily Tasks Performed:
    • Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product.
    • Architect solutions that integrate with various data sources, APIs, and third-party platforms.
    • Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis.
    • Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services.
    • Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions.
    • Implement data quality checks, logging, and alerting mechanisms within workflows.
    • Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities (see the Athena sketch after this listing).
    • Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation.
    • Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA), and ensure adherence to security, compliance, and data governance requirements.
    • Oversee development of real-time and batch data processing systems.
    • Collaborate with cross-functional teams, including data analysts, product managers, and software engineers, to translate business requirements into technical solutions.
    • Collaborate with stakeholders to define technical requirements, align technical solutions with business goals, and deliver product features.
    • Mentor and guide developers, fostering a culture of technical excellence and continuous improvement.
    • Troubleshoot complex technical issues and provide hands-on support as needed.
    • Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency, and optimize system performance, scalability, and cost efficiency.

    What this person will be working on: As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data.

    Position Success Criteria (Desired) - 'WANTS':
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 8+ years of software development experience, with at least 3+ years in a technical leadership role.
    • Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains.
    • Extensive hands-on experience with Presto, Hive, and Python.
    • Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis.
    • Familiarity with AWS data services such as S3, Athena, Glue, and Lambda.
    • Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing.
    • Experience ensuring data privacy, security, and compliance in SaaS environments.
    • Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools.
    • Excellent communication, leadership, and project management skills.
    • Experience working with Agile methodologies and DevOps practices.
    • Ability to thrive in a fast-paced, agile environment.
    • Collaborative mindset with a proactive approach to problem-solving.
    • Staying current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
    $100k-141k yearly est. 4d ago
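Among the AWS services the listing names, Athena is the one most directly tied to its SQL-heavy responsibilities. A hedged boto3 sketch of starting an Athena query and polling for completion; the database, query, and S3 output location are hypothetical:

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-west-2")

# Kick off the query; Athena writes results to the S3 output location.
resp = athena.start_query_execution(
    QueryString="SELECT customer_id, COUNT(*) AS events FROM events GROUP BY customer_id",
    QueryExecutionContext={"Database": "cdp_analytics"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows)} rows (first page only)")
```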
  • Senior DevOps Engineer - AI Platform

    Ispace, Inc.

    Requirements engineer job in Westlake Village, CA

    JOB DETAILS: Sr. DevOps Engineer - AI Platform
    Contract Duration: 6-month contract-to-hire, full-time employment
    Hourly Rate: $60 - $72/hr on W2 contract

    Responsibilities: The Sr. DevOps Engineer - AI Platform will:
    • Design, implement, and manage scalable and resilient infrastructure on AWS.
    • Architect and maintain Windows/Linux based environments, ensuring seamless integration with cloud platforms.
    • Develop and maintain infrastructure as code (IaC) using both AWS CloudFormation/CDK and Terraform/OpenTofu (a small CDK sketch follows this listing).
    • Develop and maintain configuration management for Windows and Linux servers using Chef.
    • Design, build, and optimize CI/CD pipelines using GitLab CI/CD for .NET applications.
    • Integrate and support AI services, including orchestration with AWS Bedrock, Google Agentspace, and other generative AI frameworks, ensuring they can be securely and efficiently consumed by platform services.
    • Enable AI/ML workflows by building and optimizing infrastructure pipelines that support large-scale model training, inference, and deployment across AWS and GCP environments.
    • Automate model lifecycle management (training, deployment, monitoring) through CI/CD pipelines, ensuring reproducibility and seamless integration with development workflows.
    • Collaborate with AI engineering teams to deliver scalable environments, standardized APIs, and infrastructure that accelerate AI adoption at the platform level.
    • Implement observability, security, data privacy, and cost-optimization strategies specifically for AI workloads, including monitoring and resource scaling for inference services.
    • Implement and enforce security best practices across the infrastructure and deployment processes.
    • Collaborate closely with development teams to understand their needs and provide DevOps expertise.
    • Troubleshoot and resolve infrastructure and application deployment issues.
    • Implement and manage monitoring and logging solutions to ensure system visibility and proactive issue detection.
    • Contribute clearly and concisely to the development and documentation of DevOps standards and best practices.
    • Stay up to date with the latest industry trends and technologies in cloud computing, DevOps, and security.
    • Provide mentorship and guidance to junior team members.

    Qualifications:
    • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
    • 5+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
    • 1+ year(s) of experience with AI services and LLMs.
    • Extensive hands-on experience with Amazon Web Services (AWS).
    • Solid understanding of Windows/Linux server administration and integration with cloud environments.
    • Proven experience with infrastructure-as-code tools, specifically AWS CDK and Terraform.
    • Strong experience designing and implementing CI/CD pipelines using GitLab CI/CD.
    • Experience deploying and managing .NET applications in cloud environments.
    • Deep understanding of security best practices and their implementation in cloud infrastructure and CI/CD pipelines.
    • Solid understanding of networking principles (TCP/IP, DNS, load balancing, firewalls) in cloud environments.
    • Experience with monitoring and logging tools (e.g., New Relic, CloudWatch).
    • Strong scripting skills (e.g., PowerShell, Python, Ruby, Bash).
    • Excellent problem-solving and troubleshooting skills.
    • Strong communication and collaboration skills.
    • Experience with containerization technologies (e.g., Docker, Kubernetes) is a plus.
    • Relevant AWS and/or GCP certifications are a plus.
    • Experience with the configuration management tool Chef.

    Preferred Qualifications:
    • Strong understanding of PowerShell and Python scripting.
    • Strong background with AWS EC2 features and services (Auto Scaling and warm pools).
    • Understanding of the Windows server build process using tools like Chocolatey for packages and Packer for AMI/image generation.
    $60-72 hourly 4d ago
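For the CloudFormation/CDK half of the IaC requirement above, here is a minimal AWS CDK v2 sketch in Python defining one versioned, encrypted S3 bucket; the stack and bucket names are illustrative, not from the posting:

```python
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class PlatformStorageStack(Stack):
    """Hypothetical stack holding shared platform storage."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned, SSE-S3-encrypted bucket, retained if the stack is deleted.
        s3.Bucket(
            self,
            "ArtifactBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,
        )


app = App()
PlatformStorageStack(app, "platform-storage")
app.synth()
```

Deploying this with `cdk deploy` synthesizes CloudFormation under the hood, which is how the CDK and CloudFormation requirements line up; an equivalent Terraform/OpenTofu definition would be a few lines of HCL.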
  • Plumbing Engineer

    K2D Consulting Engineers

    Requirements engineer job in Marina del Rey, CA

    We are currently seeking a Plumbing Engineer to join our team in Marina del Rey, California.

    SUMMARY: This position is responsible for managing and performing tests on various materials and equipment, maintaining knowledge of all product specifications, and ensuring adherence to all required standards by performing the following duties.

    DUTIES AND RESPONSIBILITIES:
    • Build long-term customer relationships with existing and potential customers.
    • Effectively manage plumbing and design projects by satisfying clients' needs and meeting budget expectations and project schedules.
    • Provide support during construction phases.
    • Perform other related duties as assigned by management.

    SUPERVISORY RESPONSIBILITIES: Carries out supervisory responsibilities in accordance with the organization's policies and applicable laws.

    QUALIFICATIONS:
    • Bachelor's degree (BA) from a four-year college or university in Mechanical Engineering, completed coursework in plumbing, one to two years of related experience and/or training, or an equivalent combination of education and experience.
    • Certificates, licenses, and registrations: LEED certification is a plus.
    • Computer skills: experienced at using a computer; knowledge of MS Word, MS Excel, AutoCAD, and Revit is a plus.
    • Other skills: a minimum of 5 years of experience; individuals should have recent experience working for a consulting engineering or engineering/architectural firm designing plumbing systems.
    • Experience in the following preferred: residential, commercial, multi-family, restaurants.
    • Strong interpersonal skills and experience maintaining strong client relationships are required.
    • Ability to communicate effectively through oral presentations and written communications.
    • Ability to motivate multi-discipline project teams to meet clients' needs in a timely manner and meet budget objectives.
    $87k-124k yearly est. 60d+ ago
  • Systems Engineer (Cloud & Infrastructure) - MSP

    Bowman Williams

    Requirements engineer job in Los Angeles, CA

    Systems Engineer (Cloud & Infrastructure) - MSP | Los Angeles, CA

    We're a growth-minded cloud and security MSP with a tight, collaborative engineering team. You'll work on meaningful, mid-to-advanced buildouts for clients who value modern tech (strong footprint in architecture/engineering/construction), with time carved out for design, delivery, and learning. If you want your ideas to land and your skills to accelerate, this is your next step. MSP experience is a must!

    Why you'll love it
    • Hybrid rhythm: about 3 days a week around greater Los Angeles after ramp; the rest remote
    • Real project work: scope, design, and deliver cloud/infrastructure initiatives end-to-end
    • Clear growth: mentorship, certifications, and a path to Senior / Solutions Engineer
    • Low-ego culture: we share knowledge, standardize, and continually improve

    What you'll do
    • Lead and execute projects across Microsoft 365, Azure, Windows Server, and core networking
    • Plan and deliver endpoint and data migrations (including SharePoint/OneDrive) with clean cutovers
    • Implement and tune Intune, Conditional Access, MFA, GPOs, DNS, and DHCP
    • Configure firewalls and site-to-site VPNs; ensure backup/BCDR readiness and periodic test restores
    • Document designs/runbooks and brief clients and teammates on changes and outcomes
    • Handle a focused queue of higher-value technical tasks; light after-hours rotation as needed

    What makes you a great fit
    • 5+ years hands-on in Microsoft-centric environments within a multi-client/MSP model
    • Comfortable owning build/migrate/optimize work across servers, M365, and Azure
    • Strong with Intune, Conditional Access, AD/GPO, DNS/DHCP, and Microsoft 365 migrations
    • Solid grasp of firewalls, VPNs, and secure network fundamentals
    • Clear communicator who can translate design choices into business value

    Nice to have
    • Experience with AEC client environments
    • PowerShell/automation interest
    • Backup/BCDR platform proficiency

    Perks & culture
    • Hybrid flexibility (yes, sweatpants a couple days a week)
    • Idea-friendly team; regular lunches and knowledge shares
    • Modern stack and standards; no heavy legacy anchors
    • Growth opportunities as the team scales

    Logistics & comp
    • Location: Los Angeles, CA + local client visits (about 3 days/week on-site after ramp)
    • Compensation: Up to $120,000 base (DOE)
    $120k yearly 4d ago
  • Senior Data Engineer

    Akube

    Requirements engineer job in Glendale, CA

    City: Glendale, CA
    Onsite/Hybrid/Remote: Hybrid (3 days a week onsite, Friday remote)
    Duration: 12 months
    Rate Range: Up to $85/hr on W2, depending on experience (no C2C, 1099, or sub-contract)
    Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B

    Must Have:
    • 5+ years Data Engineering
    • Airflow
    • Spark DataFrame API
    • Databricks
    • SQL
    • API integration
    • AWS
    • Python, Java, or Scala

    Responsibilities:
    • Maintain, update, and expand Core Data platform pipelines.
    • Build tools for data discovery, lineage, governance, and privacy.
    • Partner with engineering and cross-functional teams to deliver scalable solutions.
    • Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
    • Support platform standards, best practices, and documentation.
    • Ensure data quality, reliability, and SLA adherence across datasets.
    • Participate in Agile ceremonies and continuous process improvement.
    • Work with internal customers to understand needs and prioritize enhancements.
    • Maintain detailed documentation that supports governance and quality.

    Qualifications:
    • 5+ years in data engineering with large-scale pipelines.
    • Strong SQL and one major programming language (Python, Java, or Scala).
    • Production experience with Spark and Databricks.
    • Experience ingesting and interacting with API data sources.
    • Hands-on Airflow orchestration experience.
    • Experience developing APIs with GraphQL.
    • Strong AWS knowledge and infrastructure-as-code familiarity.
    • Understanding of OLTP vs. OLAP, data modeling, and data warehousing.
    • Strong problem-solving and algorithmic skills.
    • Clear written and verbal communication.
    • Agile/Scrum experience.
    • Bachelor's degree in a STEM field or equivalent industry experience.
    $85 hourly 1d ago
  • Senior Data Engineer

    Kellymitchell Group (4.5 company rating)

    Requirements engineer job in Glendale, CA

    Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.

    Responsibilities:
    • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
    • Build tools and services to support data discovery, lineage, governance, and privacy
    • Collaborate with other software and data engineers and cross-functional teams
    • Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
    • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
    • Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
    • Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
    • Participate in agile and scrum ceremonies to collaborate and refine team processes
    • Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
    • Maintain detailed documentation of work and changes to support data quality and data governance requirements

    Desired Skills/Experience:
    • 5+ years of data engineering experience developing large data pipelines
    • Proficiency in at least one major programming language such as Python, Java, or Scala
    • Strong SQL skills and the ability to create queries to analyze complex datasets
    • Hands-on production experience with distributed processing systems such as Spark
    • Experience interacting with and ingesting data efficiently from API data sources
    • Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks (see the sketch after this listing)
    • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
    • Experience developing APIs with GraphQL
    • Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
    • Familiarity with data modeling techniques and data warehousing best practices
    • Strong algorithmic problem-solving skills
    • Excellent written and verbal communication skills
    • Advanced understanding of OLTP versus OLAP environments

    Benefits:
    • Medical, Dental, & Vision Insurance Plans
    • Employee-Owned Profit Sharing (ESOP)
    • 401K offered

    The approximate pay range for this position is between $51.00 and $73.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $51-73 hourly 1d ago
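Both Glendale listings emphasize Spark DataFrame API workflows in Databricks. A self-contained PySpark sketch of a typical cast-filter-aggregate pipeline; the inline rows and column names are fabricated placeholders for data that would normally be ingested from an API source:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("core-data-example").getOrCreate()

# Inline rows standing in for ingested API data.
raw = spark.createDataFrame(
    [
        ("2024-01-01", "play", 12),
        ("2024-01-01", "pause", 3),
        ("2024-01-02", "play", 7),
    ],
    ["event_date", "event_type", "count"],
)

# Cast, filter, and aggregate with the DataFrame API.
daily_plays = (
    raw.withColumn("event_date", F.to_date("event_date"))
    .filter(F.col("event_type") == "play")
    .groupBy("event_date")
    .agg(F.sum("count").alias("plays"))
    .orderBy("event_date")
)

daily_plays.show()
```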

Learn more about requirements engineer jobs

How much does a requirements engineer earn in East Los Angeles, CA?

The average requirements engineer in East Los Angeles, CA earns between $74,000 and $144,000 annually. This compares to the national average requirements engineer range of $62,000 to $120,000.

Average requirements engineer salary in East Los Angeles, CA

$104,000

What are the biggest employers of Requirements Engineers in East Los Angeles, CA?

The biggest employers of Requirements Engineers in East Los Angeles, CA are:
  1. Kroger
  2. Recruiting Solutions
  3. AHMC Healthcare
  4. G Holdings Inc
  5. Panda Express
  6. Hilton
  7. City of Pasadena
  8. Aptus Group