
Requirements engineer jobs in Compton, CA

- 1,207 jobs
  • Space-Based Environment Monitoring Systems Engineer (Secret clearance)

    Vantor

    Requirements engineer job in El Segundo, CA

    Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people are working together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.

    To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, Asylee, or Refugee. Note on Cleared Roles: If this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status. Export Control/ITAR: Certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3). Please review the job details below.

    Are you looking for an opportunity to combine your technical skills with big-picture thinking to make an impact in the DoD? You understand your customer's environment and how to develop the right systems for their mission. Your ability to translate real-world needs into technical specifications makes you an integral part of delivering a customer-focused engineering solution. At Vantor, you'll work with the U.S. Space Force as part of the effort to develop and rapidly deploy the next generation of resilient Missile Warning (MW), Tactical Intelligence, Surveillance, and Reconnaissance (TISR), and Environmental Monitoring (EM) capabilities to deter attacks and provide critical information to our warfighters to defeat our enemies in battle.

    Within this role, you will lead a Systems Engineering and Integration (SE&I) team to plan and execute SE&I processes for space programs, including requirements analysis, architecture design, integration, testing, verification, and transition. You will plan and coordinate SE&I activities across the SE&I team and the broader stakeholder community, including Federally Funded Research and Development Centers (FFRDCs), development contractors, and external stakeholders. Grow your skills by researching new requirements, technologies, and threats and using innovative engineering methodologies and tools to create tomorrow's solutions. Join our team and create the future of Remote Sensing in the Space Force. Due to the nature of work performed within this facility, U.S. citizenship is required. Empower change with us.

    Build Your Career: When you join Vantor, you'll have the opportunity to connect with other professionals doing similar work across multiple markets. You'll share best practices and work through challenges as you gain experience and mentoring to develop your career. In addition, you will have access to a wealth of training resources through our Digital University, an online learning portal where you can access more than 5000 tech courses, certifications, and books. Build your technical skills through hands-on training on the latest tools and tech from our in-house experts. Pursuing certifications? Take advantage of our tuition assistance, on-site courses, vendor relationships, and a network of experts who can give you helpful tips. We'll help you develop the career you want as you chart your own course for success.
    Qualifications:
    • Secret clearance
    • Bachelor's degree in a Science, Technology, Engineering, or Mathematics (STEM) field
    • 10+ years of experience performing SE&I tasks on major DoD or IC space programs
    • 5+ years of experience leading a team performing SE&I on large-scale national security satellite programs
    • Experience leading a team in the development of technical specifications, interface control documents, integration plans and schedules, and inter-service support agreements
    • Knowledge of DoD 5000.01 and 5000.02
    • Ability to communicate and establish collaborative relationships with government clients, FFRDCs, and associate contractor teammates to achieve program goals

    Preferred Qualifications:
    • Experience leading a team performing SE&I tasks on Space-Based Environmental Monitoring (SBEM) systems
    • Experience leading a team using a Model-Based Systems Engineering approach to manage system definitions and technical baselines
    • Knowledge of systems engineering standards, including IEEE 15288.1 and IEEE 15288.2
    • Knowledge of Agile methodologies
    • High motivation, a dynamic work ethic, and a demonstrated desire to contribute to the DoD mission
    • Ability to perform multiple systems engineering and program management functions in support of design reviews and requirements verification
    • Ability to identify, analyze, and resolve technical risks and issues, develop technical reports, and collaborate with government and other stakeholders to implement recommended solutions
    • TS/SCI clearance
    • Master's degree in Engineering, Mathematics, Physics, or CS
    • INCOSE Systems Engineering Professional Certification (ASEP, CSEP, or ESEP)

    Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, the experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range. The base pay for this position within California, Colorado, Hawaii, New Jersey, the Washington, DC metropolitan area, and for all other states is: $137,000.00 - $229,000.00.

    Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement, and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************

    The application window is three days from the date the job is posted, and the job will remain posted until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date. The date of posting can be found on Vantor's Career page at the top of each job posting. To apply, submit your application via Vantor's Career page.

    EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
    $78k-106k yearly est. 4d ago
  • Azure Cloud Engineer (Jr/Mid) - (Locals only)

    Maxonic Inc.

    Requirements engineer job in Los Angeles, CA

    Job Title: Cloud Team Charter
    Job Type: Contract to Hire
    Work Schedule: Hybrid (3 days onsite, 2 days remote)
    Rate: $60/hr, based on experience

    Responsibilities (Cloud Team Charter/Scope - 2 resources, 1 Sr and 1 Mid/Jr):
    Operate and maintain Cloud Foundation Services, such as:
    • Azure Policies
    • Backup engineering and enforcement
    • Logging standards and enforcement
    • Antivirus and malware enforcement
    • Azure service/resource lifecycle management, including retirement of resources
    • Tagging enforcement

    Infrastructure Security:
    • Ownership of Defender reporting as it relates to infrastructure
    • Collaboration with Cyber Security and the App team to generate necessary reports for infrastructure security review
    • Actively monitoring and remediating infrastructure vulnerabilities with the App team
    • Coordinate with the App team to address infrastructure vulnerabilities
    • Drive continuous improvement in cloud security by tracking and maintaining infrastructure vulnerabilities through Azure Security Center

    Cloud Support:
    • PaaS DB support
    • Support for cloud networking (L2), working with the Network team as needed
    • Developer support in the cloud
    • Support for the CMDB team to track cloud assets
    • L4 cloud support for the enterprise

    About Maxonic: Since 2002, Maxonic has been at the forefront of connecting candidate strengths to client challenges. Our award-winning, dedicated team of recruiting professionals is specialized by technology, listens closely, and will seek to find a position that meets the long-term career needs of our candidates. We take pride in the over 10,000 candidates that we have placed, and the repeat business that we earn from our satisfied clients.

    Interested in applying? Please apply with your most current resume. Feel free to contact Jhankar Chanda (******************* / ************) for more details.
    $60 hourly 2d ago
  • Snowflake DBT Engineer

    Marvel Infotech Inc.

    Requirements engineer job in Irvine, CA

    W2 Only (Visa Independent)

    Key Responsibilities:
    • Design, develop, and maintain ELT pipelines using Snowflake and DBT
    • Build and optimize data models in Snowflake to support analytics and reporting
    • Implement modular, testable SQL transformations using DBT
    • Integrate DBT workflows into CI/CD pipelines and manage infrastructure as code using Terraform
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions
    • Optimize Snowflake performance through clustering, partitioning, indexing, and materialized views
    • Automate data ingestion and transformation workflows using Airflow or similar orchestration tools
    • Ensure data quality, governance, and security across pipelines
    • Troubleshoot and resolve performance bottlenecks and data issues
    • Maintain documentation for data architecture, pipelines, and operational procedures

    Required Skills & Qualifications:
    • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
    • 7 years of experience in data engineering, with at least 2 years focused on Snowflake and DBT
    • Strong proficiency in SQL and Python
    • Experience with cloud platforms (AWS, GCP, or Azure)
    • Familiarity with Git, CI/CD, and Infrastructure as Code tools (Terraform, CloudFormation)
    • Knowledge of data modeling, star schema, normalization, and ELT best practices
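    As a rough illustration of the DBT-plus-Airflow orchestration this posting asks for, a minimal Airflow DAG that runs dbt against Snowflake might look like the sketch below; the project directory, DAG name, and schedule are hypothetical, not taken from the posting:

      # Minimal sketch, assuming Airflow 2.x with dbt installed on the worker.
      # The project path and task names are illustrative placeholders.
      from datetime import datetime

      from airflow import DAG
      from airflow.operators.bash import BashOperator

      with DAG(
          dag_id="dbt_snowflake_elt",
          start_date=datetime(2024, 1, 1),
          schedule_interval="@daily",
          catchup=False,
      ) as dag:
          # Build all dbt models; Snowflake credentials live in profiles.yml (not shown).
          dbt_run = BashOperator(
              task_id="dbt_run",
              bash_command="dbt run --project-dir /opt/dbt/analytics",
          )
          # Run dbt tests so data-quality failures stop the pipeline.
          dbt_test = BashOperator(
              task_id="dbt_test",
              bash_command="dbt test --project-dir /opt/dbt/analytics",
          )
          dbt_run >> dbt_test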
    $86k-122k yearly est. 22h ago
  • Snowflake DBT Engineer-- CDC5697451

    Compunnel Inc.

    Requirements engineer job in Irvine, CA

    Key Responsibilities:
    • Design, develop, and maintain ELT pipelines using Snowflake and DBT
    • Build and optimize data models in Snowflake to support analytics and reporting
    • Implement modular, testable SQL transformations using DBT
    • Integrate DBT workflows into CI/CD pipelines and manage infrastructure as code using Terraform
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions
    • Optimize Snowflake performance through clustering, partitioning, indexing, and materialized views
    • Automate data ingestion and transformation workflows using Airflow or similar orchestration tools
    • Ensure data quality, governance, and security across pipelines
    • Troubleshoot and resolve performance bottlenecks and data issues
    • Maintain documentation for data architecture, pipelines, and operational procedures

    Required Skills & Qualifications:
    • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
    • 10 years of experience in data engineering, with at least 3 years focused on Snowflake and DBT
    • Strong proficiency in SQL and Python
    • Experience with cloud platforms (AWS, GCP, or Azure)
    • Familiarity with Git, CI/CD, and Infrastructure as Code tools (Terraform, CloudFormation)
    • Knowledge of data modeling, star schema, normalization, and ELT best practices
    $92k-118k yearly est. 22h ago
  • Senior Data Engineer

    Robert Half

    Requirements engineer job in Los Angeles, CA

    Robert Half is partnering with a well-known brand seeking an experienced Data Engineer with Databricks experience. Working alongside data scientists and software developers, your work will directly impact dynamic pricing strategies by ensuring the availability, accuracy, and scalability of data systems. This position is full time with full benefits and 3 days onsite in the Woodland Hills, CA area.

    Responsibilities:
    • Design, build, and maintain scalable data pipelines for dynamic pricing models.
    • Collaborate with data scientists to prepare data for model training, validation, and deployment.
    • Develop and optimize ETL processes to ensure data quality and reliability.
    • Monitor and troubleshoot data workflows for continuous integration and performance.
    • Partner with software engineers to embed data solutions into product architecture.
    • Ensure compliance with data governance, privacy, and security standards.
    • Translate stakeholder requirements into technical specifications.
    • Document processes and contribute to data engineering best practices.

    Requirements:
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 4+ years of experience in data engineering, data warehousing, and big data technologies.
    • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
    • Must have experience in Databricks.
    • Experience working within an Azure, AWS, or GCP environment.
    • Familiarity with big data tools like Spark, Hadoop, or Databricks.
    • Experience with real-time data pipeline tools.
    • Experience with Python.
    $116k-165k yearly est. 4d ago
  • Senior Data Engineer

    Akube

    Requirements engineer job in Glendale, CA

    City: Glendale, CA
    Onsite/Hybrid/Remote: Hybrid (3 days a week onsite, Friday remote)
    Duration: 12 months
    Rate Range: Up to $85/hr on W2, depending on experience (no C2C, 1099, or sub-contract)
    Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B

    Must Have:
    • 5+ years Data Engineering
    • Airflow
    • Spark DataFrame API
    • Databricks
    • SQL
    • API integration
    • AWS
    • Python or Java or Scala

    Responsibilities:
    • Maintain, update, and expand Core Data platform pipelines.
    • Build tools for data discovery, lineage, governance, and privacy.
    • Partner with engineering and cross-functional teams to deliver scalable solutions.
    • Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
    • Support platform standards, best practices, and documentation.
    • Ensure data quality, reliability, and SLA adherence across datasets.
    • Participate in Agile ceremonies and continuous process improvement.
    • Work with internal customers to understand needs and prioritize enhancements.
    • Maintain detailed documentation that supports governance and quality.

    Qualifications:
    • 5+ years in data engineering with large-scale pipelines.
    • Strong SQL and one major programming language (Python, Java, or Scala).
    • Production experience with Spark and Databricks.
    • Experience ingesting and interacting with API data sources.
    • Hands-on Airflow orchestration experience.
    • Experience developing APIs with GraphQL.
    • Strong AWS knowledge and infrastructure-as-code familiarity.
    • Understanding of OLTP vs OLAP, data modeling, and data warehousing.
    • Strong problem-solving and algorithmic skills.
    • Clear written and verbal communication.
    • Agile/Scrum experience.
    • Bachelor's degree in a STEM field or equivalent industry experience.
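    To give a flavor of the Spark DataFrame API and API-ingestion work listed above, here is a minimal sketch of an ingest job, assuming a Databricks-style runtime with Delta Lake available; the endpoint, columns, and table name are invented for illustration:

      # Minimal sketch: pull records from a REST API and land them as a Delta table.
      # The endpoint URL, schema fields, and table name are illustrative assumptions.
      import requests
      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col, to_date

      spark = SparkSession.builder.appName("core-data-ingest").getOrCreate()

      # Fetch one page of events from a hypothetical upstream API.
      records = requests.get("https://api.example.com/v1/events", timeout=30).json()

      df = (
          spark.createDataFrame(records)
          .withColumn("event_date", to_date(col("event_ts")))
          .dropDuplicates(["event_id"])
      )

      # Append to a partitioned Delta table for downstream Airflow-driven jobs.
      (
          df.write.format("delta")
          .mode("append")
          .partitionBy("event_date")
          .saveAsTable("core.events")
      )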
    $85 hourly 1d ago
  • Snowflake/AWS Data Engineer

    Ostechnical

    Requirements engineer job in Irvine, CA

    Sr. Data Engineer - Full Time Direct Hire - Hybrid, with work location in Irvine, CA.

    The Senior Data Engineer will help design and build a modern data platform that supports enterprise analytics, integrations, and AI/ML initiatives. This role focuses on developing scalable data pipelines, modernizing the enterprise data warehouse, and enabling self-service analytics across the organization.

    Key Responsibilities:
    • Build and maintain scalable data pipelines using Snowflake, dbt, and Fivetran.
    • Design and optimize enterprise data models for performance and scalability.
    • Support data cataloging, lineage, quality, and compliance efforts.
    • Translate business and analytics requirements into reliable data solutions.
    • Use AWS (primarily S3) for storage, integration, and platform reliability.
    • Perform other data engineering tasks as needed.

    Required Qualifications:
    • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
    • 5+ years of data engineering experience.
    • Hands-on expertise with Snowflake, dbt, and Fivetran.
    • Strong background in data warehousing, dimensional modeling, and SQL.
    • Experience with AWS (S3) and data governance tools such as Alation or Atlan.
    • Proficiency in Python for scripting and automation.
    • Experience with streaming technologies (Kafka, Kinesis, Flink) a plus.
    • Knowledge of data security and compliance best practices.
    • Exposure to AI/ML workflows and modern BI tools like Power BI, Tableau, or Looker.
    • Ability to mentor junior engineers.

    Skills: Snowflake, dbt, Fivetran, data modeling and warehousing, AWS, data governance, SQL, Python, strong communication and cross-functional collaboration, and interest in emerging data and AI technologies.
    $99k-139k yearly est. 2d ago
  • Data Engineer

    Vaco By Highspring

    Requirements engineer job in Irvine, CA

    Job Title: Data Engineer
    Duration: Direct-Hire Opportunity

    We are looking for a Data Engineer who is hands-on, collaborative, and experienced with Microsoft SQL Server, Snowflake, AWS RDS, and MySQL. The ideal candidate has a strong background in data warehousing, data lakes, ETL pipelines, and business intelligence tools. This role plays a key part in executing data strategy - driving optimization, reliability, and scalable BI capabilities across the organization. It's an excellent opportunity for a data professional who wants to influence architectural direction, contribute technical expertise, and grow within a data-driven company focused on innovation.

    Key Responsibilities:
    • Design, develop, and maintain SQL Server and Snowflake data warehouses and data lakes, focusing on performance, governance, and security.
    • Manage and optimize database solutions within Snowflake, SQL Server, MySQL, and AWS RDS.
    • Build and enhance ETL pipelines using tools such as Snowpipe, DBT, Boomi, SSIS, and Azure Data Factory.
    • Utilize data tools such as SSMS, Profiler, Query Store, and Redgate for performance tuning and troubleshooting.
    • Perform database administration tasks, including backup, restore, and monitoring.
    • Collaborate with Business Intelligence Developers and Business Analysts on enterprise data projects.
    • Ensure database integrity, compliance, and adherence to best practices in data security.
    • Configure and manage data integration and BI tools such as Power BI, Tableau, Power Automate, and scripting languages (Python, R).

    Qualifications:
    • Proficiency with Microsoft SQL Server, including advanced T-SQL development and optimization.
    • 7+ years working as a SQL Server Developer/Administrator, with experience in relational and object-oriented databases.
    • 2+ years of experience with Snowflake data warehouse and data lake solutions.
    • Experience developing pipelines and reporting solutions using Power BI, SSRS, SSIS, Azure Data Factory, or DBT.
    • Scripting and automation experience using Python, PowerShell, or R.
    • Familiarity with data integration and analytics tools such as Boomi, Redshift, or Databricks (a plus).
    • Excellent communication, problem-solving, and organizational skills.
    • Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.

    Technical Skills: SQL Server / Snowflake / MySQL / AWS RDS; ETL development (Snowpipe, SSIS, Azure Data Factory, DBT); BI tools (Power BI, Tableau); Python, R, PowerShell; data governance and security best practices.

    Determining compensation for this role (and others) at Vaco/Highspring depends upon a wide array of factors including but not limited to the individual's skill sets, experience and training, licensure and certifications, office location and other geographic considerations, as well as other business and organizational needs. With that said, as required by local law in geographies that require salary range disclosure, Vaco/Highspring notes the salary range for the role in this job posting. The individual may also be eligible for discretionary bonuses, and can participate in medical, dental, and vision benefits as well as the company's 401(k) retirement plan.

    Additional disclaimer: Unless otherwise noted in the job description, the position Vaco/Highspring is filling is occupied. Please note, however, that Vaco/Highspring is regularly asked to provide talent to other organizations. By submitting to this position, you are agreeing to be included in our talent pool for future hiring for similarly qualified positions. Submissions to this position are subject to the use of AI to perform preliminary candidate screenings, focused on ensuring the minimum job requirements noted in the position are satisfied. Candidates who pass this initial phase will be further assessed by Vaco/Highspring recruiters and hiring managers. Vaco/Highspring does not have knowledge of the tools used by its clients in making final hiring decisions and cannot opine on their use of AI products.
    $99k-139k yearly est. 22h ago
  • Data Analytics Engineer

    Archwest Capital

    Requirements engineer job in Irvine, CA

    We are seeking a Data Analytics Engineer to join our team. This role serves as a hybrid Database Administrator, Data Engineer, and Data Analyst, responsible for managing core data infrastructure, developing and maintaining ETL pipelines, and delivering high-quality analytics and visual insights to executive stakeholders. The role bridges technical execution with business intelligence, ensuring that data across Salesforce, financial, and operational systems is accurate, accessible, and strategically presented.

    Essential Functions:
    • Database Administration: Oversee and maintain database servers, ensuring performance, reliability, and security. Manage user access, backups, and data recovery processes while optimizing queries and database operations.
    • Data Engineering (ELT): Design, build, and maintain robust ELT pipelines (SQL/DBT or equivalent) to extract, transform, and load data across Salesforce, financial, and operational sources. Ensure data lineage, integrity, and governance throughout all workflows.
    • Data Modeling & Governance: Design scalable data models and maintain a governed semantic layer and KPI catalog aligned with business objectives. Define data quality checks, SLAs, and lineage standards to reconcile analytics with finance source-of-truth systems.
    • Analytics & Reporting: Develop and manage executive-facing Tableau dashboards and visualizations covering key lending and operational metrics, including pipeline conversion, production, credit quality, delinquency/charge-offs, DSCR, and LTV distributions.
    • Presentation & Insights: Translate complex datasets into clear, compelling stories and presentations for leadership and cross-functional teams. Communicate findings through visual reports and executive summaries to drive strategic decisions.
    • Collaboration & Integration: Partner with Finance, Capital Markets, and Operations to refine KPIs and perform ad-hoc analyses. Collaborate with Engineering to align analytical and operational data, manage integrations, and support system scalability.
    • Enablement & Training: Conduct training sessions, create documentation, and host data office hours to promote data literacy and empower business users across the organization.

    Competencies & Skills:
    • Advanced SQL proficiency with strong data modeling, query optimization, and database administration experience (PostgreSQL, MySQL, or equivalent).
    • Hands-on experience managing and maintaining database servers and optimizing performance.
    • Proficiency with ETL/ELT frameworks (DBT, Airflow, or similar) and cloud data stacks (AWS/Azure/GCP).
    • Strong Tableau skills - parameters, LODs, row-level security, executive-level dashboard design, and storytelling through data.
    • Experience with Salesforce data structures and ingestion methods.
    • Proven ability to communicate and present technical data insights to executive and non-technical stakeholders.
    • Solid understanding of lending/financial analytics (pipeline conversion, delinquency, DSCR, LTV).
    • Working knowledge of Python for analytics tasks, cohort analysis, and variance reporting.
    • Familiarity with version control (Git), CI/CD for analytics, and data governance frameworks.
    • Excellent organizational, documentation, and communication skills with a strong sense of ownership and follow-through.

    Education & Experience:
    • Bachelor's degree in Computer Science, Engineering, Information Technology, Data Analytics, or a related field.
    • 3+ years of experience in data analytics, data engineering, or database administration roles.
    • Experience supporting executive-level reporting and maintaining database infrastructure in a fast-paced environment.
    $99k-139k yearly est. 1d ago
  • DevOps Engineer

    Evona

    Requirements engineer job in Irvine, CA

    DevOps Engineer - Satellite Technology
    Onsite in Irvine, CA or Washington, DC
    Pioneering Space Technology | Secure Cloud | Mission-Critical Systems

    We're working with a leading organization in the satellite technology sector, seeking a DevOps Engineer to join their growing team. You'll play a key role in shaping, automating, and securing the software infrastructure that supports next-generation space missions. This is a hands-on role within a collaborative, high-impact environment, ideal for someone who thrives on optimizing cloud performance and supporting mission-critical operations in aerospace.

    What You'll Be Doing:
    • Maintain and optimize AWS cloud environments, implementing security updates and best practices
    • Manage daily operations of Kubernetes clusters and ensure system reliability
    • Collaborate with cybersecurity teams to ensure full compliance across AWS infrastructure
    • Support software deployment pipelines and infrastructure automation using Terraform and CI/CD tools
    • Work cross-functionally with teams including satellite operations, software analytics, and systems engineering
    • Troubleshoot and resolve environment issues to maintain uptime and efficiency
    • Apply an "Infrastructure as Code" approach to all system development and management

    What You'll Bring:
    • Degree in Computer Science or a related field
    • 2-3 years' experience with Kubernetes and containerized environments
    • 3+ years' Linux systems administration experience
    • Hands-on experience with cloud services (AWS, GCP, or Azure)
    • Strong understanding of Terraform and CI/CD pipeline tools (e.g. FluxCD, Argo)
    • Skilled in Python or Go
    • Familiarity with software version control systems
    • Solid grounding in cybersecurity principles (networking, authentication, encryption, firewalls)
    • Eligibility to obtain a U.S. Security Clearance

    Preferred:
    • Certified Kubernetes Administrator or Developer
    • AWS Certified Security credentials

    This role offers the chance to make a tangible impact in the satellite and space exploration sector, joining a team that's building secure, scalable systems for mission success. If you're passionate about space, cloud infrastructure, and cutting-edge DevOps practices, this is your opportunity to be part of something extraordinary.
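    As a small, illustrative slice of the Kubernetes operations work described above (not taken from the posting itself), a health sweep over cluster pods using the official Kubernetes Python client could look like this:

      # Minimal sketch, assuming the official `kubernetes` Python client and a
      # reachable kubeconfig; which namespaces matter is an operational choice.
      from kubernetes import client, config

      config.load_kube_config()  # inside a pod, use config.load_incluster_config()
      v1 = client.CoreV1Api()

      # Flag pods that are not Running or Succeeded so an on-call engineer can triage.
      for pod in v1.list_pod_for_all_namespaces(watch=False).items:
          phase = pod.status.phase
          if phase not in ("Running", "Succeeded"):
              print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")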
    $98k-133k yearly est. 3d ago
  • Big Data Engineer

    Kellymitchell Group

    Requirements engineer job in Santa Monica, CA

    Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.

    Responsibilities:
    • Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
    • Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
    • Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
    • Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
    • Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
    • Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment

    Desired Skills/Experience:
    • Bachelor's degree in a STEM field such as Science, Technology, Engineering, or Mathematics
    • 5+ years of relevant professional experience
    • 4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
    • 3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
    • Strong understanding of system and application design, architecture principles, and distributed system fundamentals
    • Proven experience building highly available, scalable, and production-grade services
    • Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
    • Experience processing massive datasets at the petabyte scale
    • Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
    • Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
    • Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular

    Benefits:
    • Medical, Dental, & Vision Insurance Plans
    • Employee-Owned Profit Sharing (ESOP)
    • 401K offered

    The approximate pay range for this position is between $52.00 and $75.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
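    Since the posting centers on streaming workloads over technologies like Spark and Kafka, a minimal, hedged sketch of what such a job looks like in PySpark Structured Streaming follows; the broker address, topic, and event schema are invented for illustration:

      # Minimal sketch: a Spark Structured Streaming job reading events from Kafka
      # and computing windowed counts. Broker, topic, and fields are assumptions.
      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col, from_json, window
      from pyspark.sql.types import StringType, StructField, StructType, TimestampType

      spark = SparkSession.builder.appName("events-stream").getOrCreate()

      schema = StructType([
          StructField("user_id", StringType()),
          StructField("action", StringType()),
          StructField("ts", TimestampType()),
      ])

      events = (
          spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*")
      )

      # Count actions per 5-minute window; the watermark bounds late-arriving data.
      counts = (
          events.withWatermark("ts", "10 minutes")
          .groupBy(window(col("ts"), "5 minutes"), col("action"))
          .count()
      )

      query = (
          counts.writeStream.outputMode("update")
          .format("console")                      # stand-in for a real sink
          .option("checkpointLocation", "/tmp/chk/events")
          .start()
      )
      query.awaitTermination()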
    $52-75 hourly 2d ago
  • Data Engineer

    RSM Solutions, Inc.

    Requirements engineer job in Irvine, CA

    Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn, I appreciate it. If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc and I have been recruiting technical talent for more than 23 years and been in the tech space since the 1990s. Due to this, I actually write JD's myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.

    So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.

    This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available. I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.

    The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused: orchestrating deployment and MLflow, orchestrating and using data on the clusters, and managing how the models are performing. This role focuses on coding & configuring on the ML side of the house. You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, so that data scientists can focus on experimentation and the business sees rapid, reliable predictive insight.

    Here are some of the main responsibilities:
    • Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
    • Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault style modeling patterns for auditability.
    • Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
    • Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
    • Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
    • Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
    • Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
    • Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
    • Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
    • Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
    • Mentor engineers on best practices in containerized data engineering and MLOps.

    Here is what we are seeking:
    • At least 6 years of experience building data pipelines in Spark or equivalent.
    • At least 2 years deploying workloads on Kubernetes/Kubeflow.
    • At least 2 years of experience with MLflow or similar experiment-tracking tools.
    • At least 6 years of experience in T-SQL and Python/Scala for Spark.
    • At least 6 years of PowerShell/.NET scripting.
    • At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
    • Kubernetes CKA/CKAD, Azure Data Engineer (DP-203), or MLOps-focused certifications (e.g., Kubeflow or MLflow) would be great to see.
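    For readers less familiar with the MLflow Tracking and Model Registry workflow this role leans on, here is a minimal sketch of logging a run and registering a model version; the tracking URI, experiment name, and model are hypothetical, not the client's actual setup:

      # Minimal sketch: log a training run to MLflow and register the model so it
      # can be versioned and rolled back. Names and URIs are illustrative.
      import mlflow
      import mlflow.sklearn
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      mlflow.set_tracking_uri("http://mlflow.internal:5000")  # hypothetical server
      mlflow.set_experiment("churn-model")

      X, y = make_classification(n_samples=1_000, random_state=42)

      with mlflow.start_run():
          model = RandomForestClassifier(n_estimators=200, random_state=42)
          model.fit(X, y)
          mlflow.log_param("n_estimators", 200)
          mlflow.log_metric("train_accuracy", model.score(X, y))
          # Registering the artifact creates a versioned entry for safe rollbacks.
          mlflow.sklearn.log_model(model, "model", registered_model_name="churn-model")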
    $111k-166k yearly est. 22h ago
  • Lead Data Engineer - (Automotive exp)

    Intelliswift-An LTTS Company

    Requirements engineer job in Torrance, CA

    Role: Sr Technical Lead
    Duration: 12+ Month Contract

    Daily Tasks Performed:
    • Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product.
    • Architect solutions that integrate with various data sources, APIs, and third-party platforms.
    • Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis.
    • Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services.
    • Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions.
    • Implement data quality checks, logging, and alerting mechanisms within workflows.
    • Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities.
    • Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation.
    • Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA).
    • Ensure adherence to security, compliance, and data governance requirements.
    • Oversee development of real-time and batch data processing systems.
    • Collaborate with cross-functional teams including data analysts, product managers, and software engineers to translate business requirements into technical solutions.
    • Collaborate with stakeholders to define technical requirements, align technical solutions with business goals, and deliver product features.
    • Mentor and guide developers, fostering a culture of technical excellence and continuous improvement.
    • Troubleshoot complex technical issues and provide hands-on support as needed.
    • Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency as needed.
    • Optimize system performance, scalability, and cost efficiency.

    What this person will be working on: As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data.

    Position Success Criteria (Desired) - 'WANTS':
    • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
    • 8+ years of software development experience, with at least 3+ years in a technical leadership role.
    • Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains.
    • Extensive hands-on experience with Presto, Hive, and Python.
    • Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis.
    • Familiarity with AWS data services such as S3, Athena, Glue, and Lambda.
    • Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing.
    • Experience ensuring data privacy, security, and compliance in SaaS environments.
    • Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools.
    • Excellent communication, leadership, and project management skills.
    • Experience working with Agile methodologies and DevOps practices.
    • Ability to thrive in a fast-paced, agile environment.
    • Collaborative mindset with a proactive approach to problem-solving.
    • Stays current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
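    For a concrete feel of the AWS-side work mentioned above (S3, Lambda, Athena), here is a minimal, hedged sketch of running an Athena query from Python with boto3; the database, query, and output bucket are invented for illustration:

      # Minimal sketch: run an Athena query over S3-backed data and poll for the
      # result. Database, table, and output bucket are illustrative assumptions.
      import time
      import boto3

      athena = boto3.client("athena", region_name="us-west-2")

      qid = athena.start_query_execution(
          QueryString="SELECT customer_id, COUNT(*) FROM events GROUP BY customer_id",
          QueryExecutionContext={"Database": "cdp_analytics"},
          ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
      )["QueryExecutionId"]

      # Poll until the query finishes; production code would add backoff and timeouts.
      while True:
          status = athena.get_query_execution(QueryExecutionId=qid)
          state = status["QueryExecution"]["Status"]["State"]
          if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
              break
          time.sleep(2)

      if state == "SUCCEEDED":
          rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
          print(f"fetched {len(rows) - 1} rows")  # the first row is the header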
    $100k-141k yearly est. 4d ago
  • Senior Data Engineer - Snowflake / ETL (Onsite)

    CGS Business Solutions

    Requirements engineer job in Beverly Hills, CA

    CGS Business Solutions is committed to helping you, as an esteemed IT Professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking Technical Professionals who are searching for challenging and rewarding jobs for the following opportunity:

    Summary: CGS is hiring a Senior Data Engineer to serve as a core member of the Platform team. This is a high-impact role responsible for advancing our foundational data infrastructure. Your primary mission will be to build key components of our Policy Journal - the central source of truth for all policy, commission, and client accounting data. You'll work closely with the Lead Data Engineer and business stakeholders to translate complex requirements into scalable data models and reliable pipelines that power analytics and operational decision-making for agents, managers, and leadership. This role blends greenfield engineering, strategic modernization, and a strong focus on delivering trusted, high-quality data products.

    Overview:
    • Build the Policy Journal - Design and implement the master data architecture unifying policy, commission, and accounting data from sources like IVANS and Applied EPIC to create the platform's "gold record."
    • Ensure Data Reliability - Define and implement data quality checks, monitoring, and alerting to guarantee accuracy, consistency, and timeliness across pipelines, while contributing to best practices in governance.
    • Build the Analytics Foundation - Enhance and scale our analytics stack (Snowflake, dbt, Airflow), transforming raw data into clean, performant dimensional models for BI and operational insights.
    • Modernize Legacy ETL - Refactor our existing Java + SQL (PostgreSQL) ETL system: diagnose duplication and performance issues, rewrite critical components in Python, and migrate orchestration to Airflow.
    • Implement Data Quality Frameworks - Develop automated testing and validation frameworks aligned with our QA strategy to ensure accuracy, completeness, and integrity across pipelines.
    • Collaborate on Architecture & Design - Partner with product and business stakeholders to deeply understand requirements and design scalable, maintainable data solutions.

    Ideal Experience:
    • 5+ years of experience building and operating production-grade data pipelines.
    • Expert-level proficiency in Python and SQL.
    • Hands-on experience with the modern data stack - Snowflake/Redshift, Airflow, dbt, etc.
    • Strong understanding of AWS data services (S3, Glue, Lambda, RDS).
    • Experience working with insurance or insurtech data (policies, commissions, claims, etc.).
    • Proven ability to design robust data models (e.g., dimensional modeling) for analytics.
    • Pragmatic problem-solver capable of analyzing and refactoring complex legacy systems (ability to read Java/Hibernate is a strong plus - but no new Java coding required).
    • Excellent communicator comfortable working with both technical and non-technical stakeholders.

    Huge Plus!
    • Direct experience with Agency Management Systems (Applied EPIC, Nowcerts, EZLynx, etc.)
    • Familiarity with carrier data formats (ACORD XML, IVANS AL3)
    • Experience with BI tools (Tableau, Looker, Power BI)

    About CGS Business Solutions: CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus in IT Applications, Network Infrastructure, Information Security, and Engineering. CGS is an INC 5000 company and is honored to be selected as one of the Best IT Recruitment Firms in California. After five consecutive Fastest Growing Company titles, CGS continues to break into new markets across the USA. Companies are counting on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
    $99k-136k yearly est. 22h ago
  • Data Engineer (AWS Redshift, BI, Python, ETL)

    Prosum

    Requirements engineer job in Manhattan Beach, CA

    We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.

    Responsibilities:
    • Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
    • Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
    • Optimize data workflows for performance, scalability, and reliability.
    • Collaborate with BI teams to support dashboards, reporting, and analytics needs.
    • Ensure data quality, governance, and documentation across all solutions.

    Qualifications:
    • Proven experience with data engineering tools (SQL, Python, ETL frameworks).
    • Strong understanding of BI concepts, reporting tools, and dimensional modeling.
    • Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
    • Excellent problem-solving skills and ability to work in a cross-functional environment.
    $99k-139k yearly est. 22h ago
  • Senior DevOps Engineer - AI Platform

    Ispace, Inc.

    Requirements engineer job in Westlake Village, CA

    JOB DETAILS: Sr DevOps Engineer - AI Platform
    Contract Duration: 6-month contract-to-hire (full-time employment)
    Hourly Rate: $60 - $72/hr on W2 contract

    Job Description / Responsibilities: The Sr DevOps Engineer - AI Platform will:
    • Design, implement, and manage scalable and resilient infrastructure on AWS.
    • Architect and maintain Windows/Linux based environments, ensuring seamless integration with cloud platforms.
    • Develop and maintain infrastructure as code (IaC) using both AWS CloudFormation/CDK and Terraform/OpenTofu.
    • Develop and maintain configuration management for Windows & Linux servers using Chef.
    • Design, build, and optimize CI/CD pipelines using GitLab CI/CD for .NET applications.
    • Integrate and support AI services, including orchestration with AWS Bedrock, Google Agentspace, and other generative AI frameworks, ensuring they can be securely and efficiently consumed by platform services.
    • Enable AI/ML workflows by building and optimizing infrastructure pipelines that support large-scale model training, inference, and deployment across AWS and GCP environments.
    • Automate model lifecycle management (training, deployment, monitoring) through CI/CD pipelines, ensuring reproducibility and seamless integration with development workflows.
    • Collaborate with AI engineering teams to deliver scalable environments, standardized APIs, and infrastructure that accelerate AI adoption at the platform level.
    • Implement observability, security, data privacy, and cost-optimization strategies specifically for AI workloads, including monitoring and resource scaling for inference services.
    • Implement and enforce security best practices across the infrastructure and deployment processes.
    • Collaborate closely with development teams to understand their needs and provide DevOps expertise.
    • Troubleshoot and resolve infrastructure and application deployment issues.
    • Implement and manage monitoring and logging solutions to ensure system visibility and proactive issue detection.
    • Clearly and concisely contribute to the development and documentation of DevOps standards and best practices.
    • Stay up-to-date with the latest industry trends and technologies in cloud computing, DevOps, and security.
    • Provide mentorship and guidance to junior team members.

    Qualifications:
    • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
    • 5+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
    • 1+ year(s) of experience with AI services & LLMs.
    • Extensive hands-on experience with Amazon Web Services (AWS).
    • Solid understanding of Windows/Linux server administration and integration with cloud environments.
    • Proven experience with infrastructure-as-code tools, specifically AWS CDK and Terraform.
    • Strong experience designing and implementing CI/CD pipelines using GitLab CI/CD.
    • Experience deploying and managing .NET applications in cloud environments.
    • Deep understanding of security best practices and their implementation in cloud infrastructure and CI/CD pipelines.
    • Solid understanding of networking principles (TCP/IP, DNS, load balancing, firewalls) in cloud environments.
    • Experience with monitoring and logging tools (e.g., NewRelic, CloudWatch).
    • Strong scripting skills (e.g., PowerShell, Python, Ruby, Bash).
    • Excellent problem-solving and troubleshooting skills.
    • Strong communication and collaboration skills.
    • Experience with the configuration management tool Chef.
    • Experience with containerization technologies (e.g., Docker, Kubernetes) is a plus.
    • Relevant AWS and/or GCP certifications are a plus.

    Preferred Qualifications:
    • Strong understanding of PowerShell and Python scripting.
    • Strong background with AWS EC2 features and services (Autoscaling and WarmPools).
    • Understanding of the Windows server build process using tools like Chocolatey for packages and Packer for AMI/image generation.
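    To illustrate the AWS CDK side of the IaC requirement above, a minimal CDK v2 app in Python might look like the following sketch; the stack and resource names are invented, not part of the posting:

      # Minimal sketch, assuming CDK v2 with the Python bindings installed.
      # Stack and bucket names are illustrative placeholders.
      import aws_cdk as cdk
      from aws_cdk import Stack, aws_s3 as s3
      from constructs import Construct

      class ModelArtifactsStack(Stack):
          def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
              super().__init__(scope, construct_id, **kwargs)
              # Versioned, encrypted, private bucket for ML model artifacts.
              s3.Bucket(
                  self,
                  "ModelArtifacts",
                  versioned=True,
                  encryption=s3.BucketEncryption.S3_MANAGED,
                  block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
              )

      app = cdk.App()
      ModelArtifactsStack(app, "ModelArtifactsStack")
      app.synth()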
    $60-72 hourly 4d ago
  • Plumbing Engineer

    K2D Consulting Engineers

    Requirements engineer job in Marina del Rey, CA

    We are currently seeking a Plumbing Engineer to join our team in Marina Del Rey, California.

    SUMMARY: This position is responsible for managing and performing tests on various materials and equipment, maintaining knowledge of all product specifications, and ensuring adherence to all required standards by performing the following duties.

    DUTIES AND RESPONSIBILITIES:
    • Build long-term customer relationships with existing and potential customers.
    • Effectively manage plumbing and design projects by satisfying clients' needs and meeting budget expectations and project schedules.
    • Provide support during construction phases.
    • Perform other related duties as assigned by management.

    SUPERVISORY RESPONSIBILITIES: Carries out supervisory responsibilities in accordance with the organization's policies and applicable laws.

    QUALIFICATIONS: Bachelor's Degree (BA) from a four-year college or university in Mechanical Engineering or completed course work in Plumbing, or one to two years of related experience and/or training, or an equivalent combination of education and experience.

    Certificates, licenses and registrations required: LEED Certification is a plus.

    Computer skills required: Experienced at using a computer; knowledge of MS Word, MS Excel, AutoCAD, and REVIT is a plus.

    Other skills required: 5 years of experience minimum; individuals should have recent experience working for a consulting engineering or engineering/architectural firm designing plumbing systems. Experience in the following preferred: residential, commercial, multi-family, and restaurants. Strong interpersonal skills and experience in maintaining strong client relationships are required. Ability to communicate effectively with people through oral presentations and written communications. Ability to motivate multiple-discipline project teams in meeting clients' needs in a timely manner and meeting budget objectives.
    $87k-124k yearly est. 60d+ ago
  • Descent Systems Engineer

    In Orbit Aerospace

    Requirements engineer job in Torrance, CA

    In Orbit envisions a world where our most critical resources are accessible when we need them the most. Today, In Orbit is on a mission to provide the most resilient and autonomous cargo delivery solutions for regions suffering from conflict and natural disasters.

    Descent Systems Engineer: In Orbit is looking for a Descent Systems Engineer eager to join a diverse and dynamic team developing solutions for cargo delivery where traditional aircraft and drones fail. As a Descent Systems Engineer at In Orbit, you will work on the design, development, and testing of advanced parachutes and decelerator systems. You will work with other engineers on integrating decelerator subsystems into the vehicle. The ideal candidate for this position will have experience manufacturing and testing parachute systems, a solid foundation in aerodynamic and mechanical design principles, as well as flight testing experience.

    Responsibilities:
    • Lead the development of parafoils, reefing systems, and other decelerator components.
    • Develop fabrication and manufacturing processes including material selection, patterning, sewing, rigging, and hardware integration.
    • Plan and conduct flight tests including drop tests, high-altitude balloon tests, and other captive-carry deployments.
    • Support the development of test plans, procedures, and instrumentation requirements to verify system performance.
    • Collaborate closely with mechanical, avionics, and software teams for vehicle-level integration.
    • Own documentation and configuration management for parachute assemblies, manufacturing specifications, and test reports.

    Basic Qualifications:
    • Bachelor's Degree level of education in Aerospace Engineering or similar curriculum.
    • Strong understanding of aerodynamics, drag modeling, reefing techniques, and dynamic behaviors of decelerators.
    • Experience with reefing line cutting systems or multi-stage deployment mechanisms.
    • Experience conducting ground and flight tests for decelerator systems, including test planning, instrument integration, data analysis, and anomaly investigation.
    • Expertise with textile materials (e.g., F-111, S-P fabric, Kevlar, Dyneema).
    • Ability to work hands-on with sewing machines and ground test fixtures.
    • Solid teamworking and relationship-building skills with the ability to effectively communicate difficult technical problems and solutions to other engineering disciplines.

    Preferred Experience and Skills:
    • Experience with guided parachute systems.
    • Familiarity with FAA coordination for flight testing in and out of controlled airspace.
    • Experience with pattern design tools such as SpaceCAD, Lectra Modaris, or similar.

    Additional Requirements:
    • Willing to work extended hours as needed.
    • Able to stand for extended periods of time.
    • Able to occasionally travel (~25%) and support off-site testing.

    ITAR Requirements: To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State.
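    For readers unfamiliar with the drag modeling mentioned in the qualifications, the standard first-order sizing relation for a decelerator at steady descent (a textbook result, not taken from the posting) balances weight against drag:

      W = \frac{1}{2}\,\rho\, v^2\, C_D\, S
      \qquad\Longrightarrow\qquad
      v = \sqrt{\frac{2W}{\rho\, C_D\, S}}

    where W is the suspended weight, ρ the local air density, C_D the canopy drag coefficient, and S the canopy reference area; lowering the descent rate v for a given load means growing the C_D·S product.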
    $78k-106k yearly est. 4d ago
  • System Engineer (Managed Service Provider)

    Bowman Williams

    Requirements engineer job in Costa Mesa, CA

    We are a long-established Southern California Managed Service Provider supporting SMB clients across Los Angeles and Orange County with proactive IT, cybersecurity, cloud solutions, and hands-on guidance. Our team is known for strong client relationships and clear communication, and we take a steady, service-first approach to solving problems the right way.

    We are hiring a Tier 3 Systems Engineer to be the L3 escalation point and technical backstop for complex issues across diverse client environments. This role requires previous MSP experience and is ideal for someone who enjoys deep troubleshooting, ownership, and helping reduce repeat issues by getting to root cause. Expect about 75 percent escalations and 25 percent project work tied to recurring client needs.

    What You Will Do:
    • Own Tier 3 escalations across servers, networking, virtualization, and Microsoft 365
    • Troubleshoot deeply and drive root cause fixes
    • Handle SonicWall, VLAN, NAT, and site-to-site VPN work
    • Support Windows Server AD, GPO, DNS, DHCP
    • Support VMware ESXi vSphere and Hyper-V
    • Lead Microsoft 365 escalations and hardening
    • Document clearly and communicate client-ready updates

    What You Bring:
    • 5 plus years MSP experience supporting multiple client environments
    • Strong troubleshooting and escalation ownership
    • SonicWall plus strong VLAN and VPN skills
    • Windows Server 2012 to 2022
    • VMware and/or Hyper-V
    • Microsoft 365 plus Intune fundamentals
    • Azure and Entra ID security configuration
    • ConnectWise Command and ConnectWise Manage preferred

    Location, Pay, and Benefits:
    • $95,000 to $105,000 DOE
    • Hybrid after onboarding
    • Medical, dental, vision
    • 401k with 3% company match
    • PTO and sick time plus paid holidays
    • Mileage reimbursement
    $95k-105k yearly 3d ago
  • Aerospace System Engineer II

    L'Garde, Inc.

    Requirements engineer job in Irvine, CA

    L·Garde is a full-service design, development, manufacturing, and qual test supplier to Tier 1 primes and government agencies. We provide systems engineering and skilled technicians to make your Skunk Works-type project a reality. With over 50 years of aerospace expertise, our deployable systems test the limits of what's possible in the harshest of environments in space, on the moon, and even on other planets. If you're an engineer who thrives on teamwork, clear communication, and seeing your work translate into cutting-edge aerospace solutions, we'd love to talk to you.

    A Day in the Life: We're looking for a Systems Engineer who is passionate about solving complex challenges in aerospace and enjoys working closely with others to make big ideas a reality. In this role, you'll help transform mission requirements into fully engineered space systems, balancing technical performance, schedule, and cost. You'll collaborate across disciplines (design, test, integration, and program management) to ensure our spacecraft and payload systems meet the highest standards of innovation and reliability.

    Key Responsibilities:
    • Lead systems engineering activities across the project lifecycle, from concept through delivery.
    • Develop and maintain system requirements, CONOPS, ICDs, and risk matrices.
    • Support Verification & Validation (V&V) efforts and create and maintain Model-Based Systems Engineering (MBSE) models.
    • Partner with engineers, technicians, suppliers, and customers to resolve issues and ensure requirements are met.
    • Write and review test plans, procedures, and reports; analyze and post-process test data.
    • Contribute to design trade studies and product development planning.
    • Participate in major design reviews (SRR, PDR, CDR, TRR) and customer meetings.
    • Support proposal writing for advanced aerospace concepts.
    • Maintain a safe, clean, and organized work area by following 5S and safety guidelines.

    Who You Are:
    • You have a Bachelor's degree in engineering, science, or a related technical field.
    • 2-4 years of satellite systems engineering experience with DoD, NASA, or commercial space programs.
    • At least 2 years in management, project leadership, or team leadership roles.
    • Proficiency with requirements tracking and management.
    • Proficiency with Model-Based Systems Engineering and requirements tracking tools such as CAMEO and DOORS is a plus. Systems Engineers will be expected to have completed training for these tools within the 1st year.
    • Hands-on experience with hardware/software interfaces, aerospace drawings, and GD&T standards.
    • Exposure to SolidWorks CAD, FEA, Matlab, Thermal Desktop, CFD (Star CCM+), or LabView preferred.
    • The ability to obtain a U.S. Security Clearance, for which the U.S. Government requires U.S. Citizenship. Top Secret Security Clearance a plus.
    • Excellent written and verbal communication skills.
    • Strong interpersonal skills with the ability to collaborate across all levels of the organization.
    • Detail-oriented, organized, and adaptable in a fast-paced environment.
    • Strong problem-solving mindset and passion for working in a team-driven culture.

    What We Offer:
    • Be at the forefront of aerospace innovation by working on cutting-edge aerospace technologies.
    • Opportunity to wear multiple hats and grow your skill set.
    • Collaborative and inclusive work culture where your contributions are highly valued.
    • Competitive salary.
    • Top-tier benefits: 100% of coverage for both employees and dependents paid by the company (Medical, Dental, Vision).
    • Flexible Spending Account.
    • Retirement plan with company match.
    • Company-sponsored Life and LTD insurance.
    • Generous Paid Time Off policy with up to 4 weeks in the first year.
    • Robust paid holiday schedule.

    Pay range: $110,000.00 - $145,000.00 per year.

    Join our team as an Aerospace Systems Engineer II and contribute to the advancement of aerospace innovation by taking on diverse, impactful projects in a collaborative environment, where your contributions are valued and your growth is fostered through hands-on experience. L·Garde is an equal opportunity employer, including individuals with disabilities and veterans, and participates in the E-Verify Program.
    $110k-145k yearly 22h ago

Learn more about requirements engineer jobs

How much does a requirements engineer earn in Compton, CA?

The average requirements engineer in Compton, CA earns between $74,000 and $144,000 annually. This compares to the national average requirements engineer range of $62,000 to $120,000.

Average requirements engineer salary in Compton, CA

$103,000

What are the biggest employers of Requirements Engineers in Compton, CA?

The biggest employers of Requirements Engineers in Compton, CA are:
  1. SpaceX
  2. Relativity
  3. True Anomaly
  4. Divergent
  5. Saltchuk
  6. Astrolab
  7. Reflect Orbital
  8. Valar Atomics
  9. Galaxy Technologies
  10. Kroger