Space-Based Environment Monitoring Systems Engineer (Secret clearance)
Requirements engineer job in El Segundo, CA
Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people are working together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.
To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, Asylee, or Refugee.
Note on Cleared Roles: If this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status.
Export Control/ITAR:
Certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3).
Please review the job details below.
Are you looking for an opportunity to combine your technical skills with big-picture thinking to make an impact in the DoD? You understand your customer's environment and how to develop the right systems for their mission. Your ability to translate real-world needs into technical specifications makes you an integral part of delivering a customer-focused engineering solution.
At Vantor, you'll work with the U.S. Space Force as part of the effort to develop and rapidly deploy the next generation of resilient Missile Warning (MW), Tactical Intelligence, Surveillance, and Reconnaissance (TISR), and Environmental Monitoring (EM) capabilities to deter attacks and provide critical information to our warfighters to defeat our enemies in battle. Within this role, you will lead a Systems, Engineering, and Integration (SE&I) team to plan and execute SE&I processes for space programs, including requirements analysis, architecture design, integration, testing, verification, and transition. You will plan and coordinate SE&I activities across the SE&I team and the broader stakeholder community, including Federally Funded Research and Development Centers (FFRDCs), development contractors, and external stakeholders. Grow your skills by researching new requirements, technologies, and threats and by using innovative engineering methodologies and tools to create tomorrow's solutions.
Join our team and create the future of Remote Sensing in the Space Force. Due to the nature of work performed within this facility, U.S. citizenship is required.
Empower change with us.
Build Your Career:
When you join Vantor, you'll have the opportunity to connect with other professionals doing similar work across multiple markets. You'll share best practices and work through challenges as you gain experience and mentoring to develop your career. In addition, you will have access to a wealth of training resources through our Digital University, an online learning portal where you can access more than 5000 tech courses, certifications and books. Build your technical skills through hands-on training on the latest tools and tech from our in-house experts. Pursuing certifications? Take advantage of our tuition assistance, on-site courses, vendor relationships, and a network of experts who can give you helpful tips. We'll help you develop the career you want as you chart your own course for success.
Qualifications:
Secret clearance
Bachelor's degree in a Science, Technology, Engineering, or Mathematics (STEM) field
10+ years of experience performing SE&I tasks on major DoD or IC space programs
5+ years of experience with leading a team performing SE&I on large-scale national security satellite programs
Experience with leading a team for development of technical specifications, interface control documents, integration plans and schedules, and inter-service support agreements
Knowledge of DoD 5000.01 and 5000.02
Ability to communicate and establish collaborative relationships with government clients, FFRDCs, and associate contractor teammates to achieve program goals
Preferred Qualifications:
Experience leading a team performing SE&I tasks on Space-Based Environmental Monitoring (SBEM) systems
Experience leading a team using a Model-Based Systems Engineering approach to manage system definitions and technical baselines
Knowledge of systems engineering standards, including IEEE 15288.1 and IEEE 15288.2
Knowledge of Agile Methodologies
Ability to be highly motivated with a dynamic work ethic and demonstrate a strong desire to contribute to the DoD mission
Ability to perform multiple systems engineering and program management functions in support of design reviews and requirements verification
Ability to identify, analyze, and resolve technical risks and issues, develop technical reports, and collaborate with government and other stakeholders to implement recommended solutions
TS/SCI clearance
Master's degree in Engineering, Mathematics, Physics, or CS
INCOSE Systems Engineering Professional certification (ASEP, CSEP, or ESEP)
Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, the experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range.
The base pay for this position within California, Colorado, Hawaii, New Jersey, the Washington, DC metropolitan area, and for all other states is:
$137,000.00 - $229,000.00
Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************
The application window is three days from the date the job is posted, and the job will remain posted until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date, and the job will remain posted until a qualified candidate has been identified for hire.
The date of posting can be found on Vantor's Career page at the top of each job posting.
To apply, submit your application via Vantor's Career page.
EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
Backend Engineer - Python / API (Onsite)
Requirements engineer job in Beverly Hills, CA
CGS Business Solutions is committed to helping you, as an esteemed IT Professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking Technical Professionals who are searching for challenging and rewarding jobs for the following opportunity:
CGS is hiring on behalf of one of our Risk & Protection Services clients in the West LA area for a full-time role. We're looking for a strategic Backend Engineer to join a high-growth team building next-generation technology. In this role, you'll play a critical part in architecting and delivering scalable backend services that power an AI-native agent workspace. You'll translate complex business needs into secure, high-performance, and maintainable systems. This opportunity is ideal for a hands-on engineer who excels at designing cloud-native architectures and thrives in a fast-paced, highly collaborative startup environment.
What You'll Do
• Partner closely with engineering, product, and operations to define high-impact problems and craft the right technical solutions.
• Design and deliver scalable backend systems using modern architectures and best practices.
• Build Python APIs and complex backend logic on top of AWS serverless infrastructure.
• Contribute to the architecture and evolution of core system components.
• Elevate engineering standards, tooling, and backend development processes across the team.
Who You Are
• 6+ years of software engineering experience, with deep expertise in building end-to-end systems and a strong backend focus.
• Expert-level proficiency in Python and API development with Flask
• Strong understanding of AWS and cloud-native architecture.
• Experience with distributed systems, APIs, and data modeling.
• Proven ability to architect and optimize systems for performance and reliability.
• Excellent technical judgment and ability to drive clarity and execution in ambiguous environments.
• Experience in insurance or enterprise SaaS is a strong plus.
About CGS Business Solutions: CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus on IT Applications, Network Infrastructure, Information Security, and Engineering. CGS is an INC 5000 company and is honored to be selected as one of the Best IT Recruitment Firms in California. After five consecutive Fastest Growing Company titles, CGS continues to break into new markets across the USA. Companies are counting on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
Azure Cloud Engineer (Jr/Mid) - (Locals only)
Requirements engineer job in Los Angeles, CA
Job Title: Cloud Team Charter
Job Type: Contract to Hire
Work Schedule: Hybrid (3 days onsite, 2 days remote)
Rate: $60 (based on experience)
Responsibilities:
Cloud Team Charter/Scope: 2 resources (1 Sr and 1 Mid/Jr)
Operate and maintain Cloud Foundation Services, such as:
Azure Policies
Backup Engineering and Enforcement
Logging Standard and Enforcement
AntiVirus and Malware Enforcement
Azure service/resources life cycle management, including retirement of resources
Tagging enforcement
Infrastructure Security
Ownership of Defender reporting as it relates to Infrastructure.
Collaboration with Cyber Security and App team to generate necessary reports for Infrastructure security review.
Actively monitor and remediate infrastructure vulnerabilities in coordination with the App team.
Drive continuous improvement in Cloud Security by tracking/maintaining infrastructure vulnerabilities through Azure Security Center.
Cloud Support:
PaaS DB support
Support for Cloud Networking (L2) and work with the Network team as needed
Developer support in the Cloud.
Support for the CMDB team to track the Cloud assets.
L4 Cloud support for the enterprise.
About Maxonic:
Since 2002, Maxonic has been at the forefront of connecting candidate strengths to client challenges. Our award-winning, dedicated team of recruiting professionals is specialized by technology; they are great listeners and will seek to find a position that meets the long-term career needs of our candidates. We take pride in the over 10,000 candidates we have placed and the repeat business we earn from our satisfied clients.
Interested in Applying?
Please apply with your most current resume. Feel free to contact
Jhankar Chanda (******************* / ************ ) for more details.
Senior Data Engineer - Commerce Data Pipelines
Requirements engineer job in Santa Monica, CA
City: Seattle, WA / Santa Monica, CA / NYC
Onsite/Hybrid/Remote: Hybrid (4 days a week onsite; Fridays remote)
Duration: 10 months
Rate Range: Up to $92.5/hr on W2, depending on experience (no C2C, 1099, or sub-contract)
Work Authorization: GC, USC, All valid EADs except OPT, CPT, H1B
Must Have:
• SQL
• ETL design and development
• Data modeling (dimensional and normalization)
• ETL orchestration tools (Airflow or similar)
• Data Quality frameworks
• Performance tuning for SQL and ETL
• Python or PySpark
• Snowflake or Redshift
Responsibilities:
• Partner with business, analytics, and infrastructure teams to define data and reporting requirements.
• Collect data from internal and external systems and design table structures for scalable data solutions.
• Build, enhance, and maintain ETL pipelines with strong performance and reliability.
• Develop automated Data Quality checks and support ongoing pipeline monitoring.
• Implement database deployments using tools such as Schema Change.
• Conduct SQL and ETL tuning and deliver ad hoc analysis as needed.
• Support Agile ceremonies and collaborate in a fast-paced environment.
Qualifications:
• 3+ years of data engineering experience.
• Strong grounding in data modeling, including dimensional models and normalization.
• Deep SQL expertise with advanced tuning skills.
• Experience with relational or distributed data systems such as Snowflake or Redshift.
• Familiarity with ETL/orchestration platforms like Airflow or Nifi.
• Programming experience with Python or PySpark.
• Strong analytical reasoning, communication skills, and ability to work cross-functionally.
• Bachelor's degree required.
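For readers less familiar with the dimensional modeling called out in the qualifications above, the star-schema pattern can be sketched in a few lines. This is an illustrative example only, using SQLite and made-up table names, not the client's actual schema:

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables.
# Table and column names are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    revenue     REAL
);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO dim_date VALUES (20240101, '2024-01-01');
INSERT INTO fact_sales VALUES (1, 20240101, 9.99), (2, 20240101, 19.99), (1, 20240101, 5.00);
""")

# A typical dimensional query: aggregate the fact table, label via a dimension.
rows = conn.execute("""
    SELECT p.name, ROUND(SUM(f.revenue), 2)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 19.99), ('widget', 14.99)]
```

The same fact/dimension split is what a Snowflake or Redshift warehouse would use at scale; the join-then-aggregate shape of the query is the part interviewers tend to probe.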
DevOps Engineer
Requirements engineer job in Irvine, CA
DevOps Engineer - Satellite Technology
Onsite in Irvine, CA or Washington, DC
Pioneering Space Technology | Secure Cloud | Mission-Critical Systems
We're working with a leading organization in the satellite technology sector, seeking a DevOps Engineer to join their growing team. You'll play a key role in shaping, automating, and securing the software infrastructure that supports next-generation space missions.
This is a hands-on role within a collaborative, high-impact environment-ideal for someone who thrives on optimizing cloud performance and supporting mission-critical operations in aerospace.
What You'll Be Doing
Maintain and optimize AWS cloud environments, implementing security updates and best practices
Manage daily operations of Kubernetes clusters and ensure system reliability
Collaborate with cybersecurity teams to ensure full compliance across AWS infrastructure
Support software deployment pipelines and infrastructure automation using Terraform and CI/CD tools
Work cross-functionally with teams including satellite operations, software analytics, and systems engineering
Troubleshoot and resolve environment issues to maintain uptime and efficiency
Apply an “Infrastructure as Code” approach to all system development and management
What You'll Bring
Degree in Computer Science or a related field
2-3 years' experience with Kubernetes and containerized environments
3+ years' Linux systems administration experience
Hands-on experience with cloud services (AWS, GCP, or Azure)
Strong understanding of Terraform and CI/CD pipeline tools (e.g. FluxCD, Argo)
Skilled in Python or Go
Familiarity with software version control systems
Solid grounding in cybersecurity principles (networking, authentication, encryption, firewalls)
Eligibility to obtain a U.S. Security Clearance
Preferred:
Certified Kubernetes Administrator or Developer
AWS Certified Security credentials
This role offers the chance to make a tangible impact in the satellite and space exploration sector, joining a team that's building secure, scalable systems for mission success.
If you're passionate about space, cloud infrastructure, and cutting-edge DevOps practices-this is your opportunity to be part of something extraordinary.
Snowflake DBT Data Engineer
Requirements engineer job in Irvine, CA
We're hiring a Snowflake DBT Data Engineer
Join Galent and help us deliver high-impact technology solutions that shape the future of digital transformation
Experience: 12+ years
Mandatory Skills: Snowflake, ANSI SQL, DBT
Key Responsibilities:
Design, develop, and maintain ELT pipelines using Snowflake and DBT
Build and optimize data models in Snowflake to support analytics and reporting
Implement modular, testable SQL transformations using DBT
Integrate DBT workflows into CI/CD pipelines and manage infrastructure as code using Terraform
Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions
Optimize Snowflake performance through clustering, partitioning, indexing, and materialized views
Automate data ingestion and transformation workflows using Airflow or similar orchestration tools
Ensure data quality, governance, and security across pipelines
Troubleshoot and resolve performance bottlenecks and data issues
Maintain documentation for data architecture, pipelines, and operational procedures
Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
7 years of experience in data engineering, with at least 2 years focused on Snowflake and DBT
Strong proficiency in SQL and Python
Experience with cloud platforms (AWS, GCP, or Azure)
Familiarity with Git, CI/CD, and Infrastructure as Code tools (Terraform, CloudFormation)
Knowledge of data modelling (star schema, normalization) and ELT best practices
Why Galent
Galent is a digital engineering firm that brings AI-driven innovation to enterprise IT. We're proud of our diverse and inclusive team culture where bold ideas drive transformation.
Ready to Apply?
Send your resume to *******************
Snowflake/AWS Data Engineer
Requirements engineer job in Irvine, CA
Sr. Data Engineer
Full Time Direct Hire Job
Hybrid with work location-Irvine, CA.
The Senior Data Engineer will help design and build a modern data platform that supports enterprise analytics, integrations, and AI/ML initiatives. This role focuses on developing scalable data pipelines, modernizing the enterprise data warehouse, and enabling self-service analytics across the organization.
Key Responsibilities
• Build and maintain scalable data pipelines using Snowflake, dbt, and Fivetran.
• Design and optimize enterprise data models for performance and scalability.
• Support data cataloging, lineage, quality, and compliance efforts.
• Translate business and analytics requirements into reliable data solutions.
• Use AWS (primarily S3) for storage, integration, and platform reliability.
• Perform other data engineering tasks as needed.
Required Qualifications
• Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field.
• 5+ years of data engineering experience.
• Hands-on expertise with Snowflake, dbt, and Fivetran.
• Strong background in data warehousing, dimensional modeling, and SQL.
• Experience with AWS (S3) and data governance tools such as Alation or Atlan.
• Proficiency in Python for scripting and automation.
• Experience with streaming technologies (Kafka, Kinesis, Flink) a plus.
• Knowledge of data security and compliance best practices.
• Exposure to AI/ML workflows and modern BI tools like Power BI, Tableau, or Looker.
• Ability to mentor junior engineers.
Skills
• Snowflake
• dbt
• Fivetran
• Data modeling and warehousing
• AWS
• Data governance
• SQL
• Python
• Strong communication and cross-functional collaboration
• Interest in emerging data and AI technologies
Data Engineer
Requirements engineer job in Culver City, CA
Robert Half is partnering with a well-known high-tech company seeking an experienced Data Engineer with strong Python and SQL skills. The primary duties involve managing the complete data lifecycle and utilizing extensive datasets across marketing, software, and web platforms. This position is full time with full benefits and 3 days onsite in the Culver City area.
Responsibilities:
4+ years of professional experience, ideally in a combination of data engineering and business intelligence.
Working heavily with SQL and programming in Python.
Ownership mindset to oversee the entire data lifecycle, including collection, extraction, and cleansing processes.
Building reports and data visualization to help advance business.
Leveraging industry-standard tools for data integration, such as Talend.
Working extensively within cloud-based ecosystems such as AWS and GCP.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering, data warehousing, and big data technologies.
Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL Technologies.
Experience working within GCP environments and AWS.
Experience in real-time data pipeline tools.
Hands-on expertise with Google Cloud services including BigQuery.
Deep knowledge of SQL including Dimension tables and experienced in Python programming.
Big Data Engineer
Requirements engineer job in Santa Monica, CA
Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
Desired Skills/Experience:
Bachelor's degree in a STEM field such as: Science, Technology, Engineering, Mathematics
5+ years of relevant professional experience
4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
Strong understanding of system and application design, architecture principles, and distributed system fundamentals
Proven experience building highly available, scalable, and production-grade services
Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
Experience processing massive datasets at the petabyte scale
Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $52.00 and $75.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Data Engineer
Requirements engineer job in Irvine, CA
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn, I appreciate it.
If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
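To make the data-quality and schema-drift bullet above concrete: the core idea is comparing an incoming batch's schema against an expected baseline. This is a minimal, dependency-free sketch; the column names and drift policy are invented for illustration and are not the client's actual checks:

```python
# Compare an incoming batch's schema against an expected baseline and
# flag drift: missing columns, unexpected columns, or changed types.
# Schemas are plain dicts of column name -> type name (hypothetical).
def detect_schema_drift(expected: dict, observed: dict) -> dict:
    return {
        "missing": sorted(set(expected) - set(observed)),
        "unexpected": sorted(set(observed) - set(expected)),
        "type_changed": sorted(
            col for col in set(expected) & set(observed)
            if expected[col] != observed[col]
        ),
    }

expected = {"order_id": "int", "amount": "float", "placed_at": "timestamp"}
observed = {"order_id": "int", "amount": "str", "channel": "str"}

drift = detect_schema_drift(expected, observed)
print(drift)
# {'missing': ['placed_at'], 'unexpected': ['channel'], 'type_changed': ['amount']}
```

In practice a result like this would be surfaced as an alert (e.g. via a Prometheus metric or pipeline failure) rather than a print, so incident response can kick in before bad data reaches training sets.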
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience in T‑SQL, Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
Experience mentoring engineers on best practices in containerized data engineering and MLOps.
Senior Data Engineer
Requirements engineer job in West Hollywood, CA
Remote
$160K - $190K + stock + benefits
Full time
Are you ready to join one of the most innovative players in the Insurtech space, a company reshaping how data drives decision-making in insurance?
Our client has recently secured a significant amount of funding and is driving their next phase of growth. They're looking for 2x experienced Data Engineers to join the team, building scalable data pipelines utilizing Python, SQL, Airflow & AWS.
To be suitable for this position, you must have previous experience working within the insurance / insurtech industry. Ideally, some of this will be within the P&C space.
What you'll be doing:
Implementing data models and pipelines that power their platform.
Transforming raw data into clean and reliable data.
Working closely with cross-functional teams to understand business requirements.
What we're looking for:
5+ years' experience as a Data Engineer
Strong proficiency in Python and SQL for data wrangling and automation.
Proven experience building ETL pipelines and working with AWS data services.
Knowledge of data modeling, APIs, and distributed data systems.
Bonus: familiarity with tools like Airflow, Snowflake, or Databricks.
If this is something of interest, please do apply now for immediate consideration.
Plumbing Engineer
Requirements engineer job in Marina del Rey, CA
We are currently seeking a Plumbing Engineer to join our team in Marina del Rey, California. SUMMARY: This position is responsible for managing and performing tests on various materials and equipment, maintaining knowledge of all product specifications, and ensuring adherence to all required standards by performing the following duties.
DUTIES AND RESPONSIBILITIES:
Build long term customer relationships with existing and potential customers.
Effectively manage Plumbing and design projects by satisfying clients' needs, meeting budget expectations and project schedules.
Provide support during construction phases.
Performs other related duties as assigned by management.
SUPERVISORY RESPONSIBILITIES:
Carries out supervisory responsibilities in accordance with the organization's policies and applicable laws.
QUALIFICATIONS:
Bachelor's Degree (BA) from a four-year college or university in Mechanical Engineering, or completed coursework in Plumbing, or one to two years of related experience and/or training, or an equivalent combination of education and experience.
Certificates, licenses, and registrations: LEED Certification is a plus.
Computer skills required: experienced at using a computer; knowledge of MS Word, MS Excel, AutoCAD, and REVIT is a plus.
Other skills required:
Minimum of 5 years of experience; individuals should have recent experience working for a consulting engineering or engineering/architectural firm designing plumbing systems.
Experience in the following preferred:
Residential
Commercial
Multi-Family
Restaurants
Strong interpersonal skills and experience in maintaining strong client relationships are required.
Ability to communicate effectively with people through oral presentations and written communications.
Ability to motivate multiple-discipline project teams in meeting client's needs in a timely manner and meeting budget objectives.
Lead Data Engineer - (Automotive exp)
Requirements engineer job in Torrance, CA
Role: Sr Technical Lead
Duration: 12+ Month Contract
Daily Tasks Performed:
Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product.
Architect solutions that integrate with various data sources, APIs, and third-party platforms.
Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis.
Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services.
Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions.
Implement data quality checks, logging, and alerting mechanisms within workflows.
Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities.
Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation.
Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA).
Ensure adherence to security, compliance, and data governance requirements.
Oversee development of real-time and batch data processing systems.
Collaborate with cross-functional teams including data analysts, product managers, and software engineers to translate business requirements into technical solutions
Collaborate with the stakeholders to define technical requirements to align technical solutions with business goals and deliver product features.
Mentor and guide developers, fostering a culture of technical excellence and continuous improvement.
Troubleshoot complex technical issues and provide hands-on support as needed.
Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency as needed.
Optimize system performance, scalability, and cost efficiency.
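Several of the tasks above center on Digdag's YAML-based workflow definitions. A minimal sketch of what such a pipeline might look like (all task, query, and table names here are hypothetical, and the `td>` operator assumes a configured Treasure Data connection):

```yaml
# daily_etl.dig -- illustrative Digdag workflow (hypothetical names throughout)
timezone: America/Los_Angeles

schedule:
  daily>: 02:00:00                    # run the ETL every day at 2 AM

+extract:
  td>: queries/extract_events.sql     # pull raw events into a staging table
  create_table: staging_events

+transform:
  td>: queries/build_profiles.sql     # shape staging rows into profiles
  create_table: customer_profiles

+quality_check:
  td>: queries/row_count.sql          # data quality check on the output
  store_last_results: true

+fail_if_empty:
  if>: ${td.last_results.row_count == 0}
  _do:
    fail>: "customer_profiles loaded zero rows"
```

Digdag runs the `+` tasks in order; `store_last_results` exposes the check query's output to the `if>` guard, which is one way to wire up the quality checks and alerting the tasks describe.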
What this person will be working on:
As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data.
Position Success Criteria (Desired) - 'WANTS'
Bachelor's or Master's degree in Computer Science, Engineering, or related field.
8+ years of software development experience, with at least 3+ years in a technical leadership role.
Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains.
Extensive hands-on experience with Presto, Hive, and Python.
Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis.
Familiarity with AWS data services such as S3, Athena, Glue, and Lambda.
Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing.
Experience ensuring data privacy, security, and compliance in SaaS environments.
Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools.
Excellent communication, leadership, and project management skills.
Experience working with Agile methodologies and DevOps practices.
Ability to thrive in a fast-paced, agile environment.
Collaborative mindset with a proactive approach to problem-solving.
Stay current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
Data Engineer (AWS Redshift, BI, Python, ETL)
Requirements engineer job in Manhattan Beach, CA
We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.
Responsibilities:
Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
Optimize data workflows for performance, scalability, and reliability.
Collaborate with BI teams to support dashboards, reporting, and analytics needs.
Ensure data quality, governance, and documentation across all solutions.
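As one illustration of the star-schema modeling the responsibilities mention, here is a minimal sketch using SQLite (table and column names are invented for the example, not taken from the posting): a single fact table joined to two dimension tables for a typical BI rollup.

```python
import sqlite3

# Minimal star-schema sketch: one fact table keyed to two dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity INTEGER,
    revenue REAL
);
""")

cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20240101, "2024-01-01", "2024-01"),
                 (20240102, "2024-01-02", "2024-01")])
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(20240101, 1, 3, 30.0),
                 (20240101, 2, 1, 25.0),
                 (20240102, 1, 2, 20.0)])

# A typical BI rollup: revenue by month and category via dimension joins.
cur.execute("""
SELECT d.month, p.category, SUM(f.revenue)
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY d.month, p.category
""")
rows = cur.fetchall()
print(rows)  # -> [('2024-01', 'Hardware', 75.0)]
```

The same shape scales to a snowflake schema by normalizing the dimensions (e.g., splitting `category` into its own table keyed from `dim_product`).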
Qualifications:
Proven experience with data engineering tools (SQL, Python, ETL frameworks).
Strong understanding of BI concepts, reporting tools, and dimensional modeling.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
Excellent problem-solving skills and ability to work in a cross-functional environment.
System Engineer (Managed Service Provider)
Requirements engineer job in Costa Mesa, CA
We are a long-established Southern California Managed Service Provider supporting SMB clients across Los Angeles and Orange County with proactive IT, cybersecurity, cloud solutions, and hands-on guidance. Our team is known for strong client relationships and clear communication, and we take a steady, service-first approach to solving problems the right way.
We are hiring a Tier 3 Systems Engineer to be the L3 escalation point and technical backstop for complex issues across diverse client environments. This role requires previous MSP experience and is ideal for someone who enjoys deep troubleshooting, ownership, and helping reduce repeat issues by getting to root cause. Expect about 75 percent escalations and 25 percent project work tied to recurring client needs.
What You Will Do
• Own Tier 3 escalations across servers, networking, virtualization, and Microsoft 365
• Troubleshoot deeply and drive root cause fixes
• Handle SonicWall, VLAN, NAT, and site-to-site VPN work
• Support Windows Server AD, GPO, DNS, DHCP
• Support VMware ESXi/vSphere and Hyper-V
• Lead Microsoft 365 escalations and hardening
• Document clearly and communicate client ready updates
What You Bring
• 5+ years of MSP experience supporting multiple client environments
• Strong troubleshooting and escalation ownership
• SonicWall plus strong VLAN and VPN skills
• Windows Server 2012 to 2022
• VMware and/or Hyper-V
• Microsoft 365 plus Intune fundamentals
• Azure and Entra ID security configuration
• ConnectWise Command and ConnectWise Manage preferred
Location, Pay, and Benefits
• $95,000 to $105,000 DOE
• Hybrid after onboarding
• Medical, dental, vision
• 401k with 3% company match
• PTO and sick time plus paid holidays
• Mileage reimbursement
System Engineer
Requirements engineer job in Los Angeles, CA
Job Title: Systems Engineer
Employment Type: Full-Time
TransSIGHT is at the forefront of delivering advanced transportation solutions, providing innovative software and hardware products that enhance system efficiency, reliability, and customer experience. We are seeking a highly motivated Systems Engineer to join our Los Angeles team and contribute to the design, development, and deployment of cutting-edge solutions.
Position Summary:
The Systems Engineer will play a critical role in supporting the development, integration, and deployment of software and hardware systems. This role requires a detail-oriented professional capable of coordinating system development tasks, maintaining thorough technical documentation, and ensuring successful solution implementation for our clients.
Primary Responsibilities:
Support the definition of software and hardware products and interfaces in the areas of CAD/AVL and fare collection.
Coordinate system development tasks, including design, integration, and formal testing.
Oversee all transitions into customer deployment environments.
Develop and execute projects encompassing system specifications, technical and logistical requirements, and other disciplines.
Create and maintain programmatic and technical documentation, including design documents, requirement matrices, and system diagrams, to ensure efficient planning and execution.
Manage and document system configurations.
Ensure the successful deployment of solutions.
Maintain a thorough working knowledge of enterprise applications.
Perform performance and reliability analysis of end-to-end protocols.
Assist with other tasks as needed by the Systems Engineering department.
Provide training and guidance to Associates and Project Engineer I/II staff as needed.
Qualifications:
Bachelor's degree in Systems Engineering, Computer Science, Electrical Engineering, or a related field.
Minimum of 5 years of experience in systems engineering, software/hardware integration, or related disciplines.
Strong understanding of system design, integration, and testing processes.
Experience managing technical documentation and system configurations.
Ability to perform performance and reliability analysis of complex systems.
Excellent problem-solving, organizational, and communication skills.
Ability to work both independently and collaboratively in a fast-paced environment.
Preferred:
Prior experience in transportation, software, or hardware systems engineering.
Familiarity with enterprise applications and deployment processes.
Why Join TransSIGHT:
Work on innovative projects shaping the future of transportation.
Collaborative and supportive team environment.
Opportunities for professional growth and continuous learning.
Descent Systems Engineer
Requirements engineer job in Torrance, CA
In Orbit envisions a world where our most critical resources are accessible when we need them the most. Today, In Orbit is on a mission to provide the most resilient and autonomous cargo delivery solutions for regions suffering from conflict and natural disasters.
Descent Systems Engineer:
In Orbit is looking for a Descent Systems Engineer eager to join a diverse and dynamic team developing solutions for cargo delivery where traditional aircraft and drones fail.
As a Descent Systems Engineer at In Orbit you will work on the design, development, and testing of advanced parachutes and decelerator systems. You will work with other engineers on integrating decelerator subsystems into the vehicle. The ideal candidate for this position will have experience manufacturing and testing parachute systems, a solid foundation of aerodynamic and mechanical design principles as well as flight testing experience.
Responsibilities:
Lead the development of parafoils, reefing systems, and other decelerator components.
Develop fabrication and manufacturing processes including material selection, patterning, sewing, rigging, and hardware integration.
Plan and conduct flight tests including drop tests, high-altitude balloon tests, and other captive-carry deployments.
Support the development of test plans, procedures, and instrumentation requirements to verify system performance.
Collaborate closely with mechanical, avionics, and software teams for vehicle-level integrations.
Own documentation and configuration management for parachute assemblies, manufacturing specifications, and test reports.
Basic Qualifications:
Bachelor's degree in Aerospace Engineering or a similar curriculum.
Strong understanding of aerodynamics, drag modeling, reefing techniques, and dynamic behaviors of decelerators.
Experience with reefing line cutting systems or multi-stage deployment mechanisms.
Experience conducting ground and flight tests for decelerator systems, including test planning, instrument integration, data analysis, and anomaly investigation.
Expertise with textile materials (e.g., F-111, S-P fabric, Kevlar, Dyneema).
Ability to work hands-on with sewing machines and ground test fixtures.
Solid teamwork and relationship-building skills, with the ability to effectively communicate difficult technical problems and solutions to other engineering disciplines.
Preferred Experience and Skills:
Experience with guided parachute systems.
Familiarity with FAA coordination for flight testing in and out of controlled airspace.
Experience with pattern design tools such as SpaceCAD, Lectra Modaris, or similar.
Additional Requirements:
Willing to work extended hours as needed
Able to stand for extended periods of time
Able to occasionally travel (~25%) and support off-site testing.
ITAR Requirements:
To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR) you must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State.
Senior Data Engineer
Requirements engineer job in Glendale, CA
City: Glendale, CA
Onsite/ Hybrid/ Remote: Hybrid (3 days a week onsite, Friday - Remote)
Duration: 12 months
Rate Range: Up to $85/hr on W2 depending on experience (no C2C or 1099 or sub-contract)
Work Authorization: GC, USC, All valid EADs except OPT, CPT, H1B
Must Have:
• 5+ years Data Engineering
• Airflow
• Spark DataFrame API
• Databricks
• SQL
• API integration
• AWS
• Python or Java or Scala
Responsibilities:
• Maintain, update, and expand Core Data platform pipelines.
• Build tools for data discovery, lineage, governance, and privacy.
• Partner with engineering and cross-functional teams to deliver scalable solutions.
• Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
• Support platform standards, best practices, and documentation.
• Ensure data quality, reliability, and SLA adherence across datasets.
• Participate in Agile ceremonies and continuous process improvement.
• Work with internal customers to understand needs and prioritize enhancements.
• Maintain detailed documentation that supports governance and quality.
Qualifications:
• 5+ years in data engineering with large-scale pipelines.
• Strong SQL and one major programming language (Python, Java, or Scala).
• Production experience with Spark and Databricks.
• Experience ingesting and interacting with API data sources.
• Hands-on Airflow orchestration experience.
• Experience developing APIs with GraphQL.
• Strong AWS knowledge and infrastructure-as-code familiarity.
• Understanding of OLTP vs OLAP, data modeling, and data warehousing.
• Strong problem-solving and algorithmic skills.
• Clear written and verbal communication.
• Agile/Scrum experience.
• Bachelor's degree in a STEM field or equivalent industry experience.
Senior Data Engineer
Requirements engineer job in Glendale, CA
Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
Build tools and services to support data discovery, lineage, governance, and privacy
Collaborate with other software and data engineers and cross-functional teams
Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
Participate in agile and scrum ceremonies to collaborate and refine team processes
Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
Maintain detailed documentation of work and changes to support data quality and data governance requirements
Desired Skills/Experience:
5+ years of data engineering experience developing large data pipelines
Proficiency in at least one major programming language such as: Python, Java or Scala
Strong SQL skills and the ability to create queries to analyze complex datasets
Hands-on production experience with distributed processing systems such as Spark
Experience interacting with and ingesting data efficiently from API data sources
Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
Experience developing APIs with GraphQL
Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
Familiarity with data modeling techniques and data warehousing best practices
Strong algorithmic problem-solving skills
Excellent written and verbal communication skills
Advanced understanding of OLTP versus OLAP environments
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $51.00 and $73.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Staff Engineer
Requirements engineer job in West Hollywood, CA
West Hollywood
$170K - $205K + stock + benefits
Full time
Are you ready to join one of the most innovative players in the Insurtech space, a company reshaping how data drives decision-making in insurance?
Our client has recently secured a significant amount of funding and is using it to drive their next phase of growth. They're now looking to grow their development function here in West Hollywood and want an experienced Technical Lead to join their team.
What you'll be doing:
Build scalable, high-performance systems.
Architect and develop core integrations using Python, PostgreSQL and AWS.
Be hands-on while also mentoring the team.
Liaise with non-technical stakeholders & decision makers.
What we're looking for:
10+ years' experience as a software engineer.
Experience in a technical leadership / mentorship position previously, ideally for 2+ years.
Strong proficiency in Python, PostgreSQL & AWS.
Ideally some experience with React.js on the front-end.
If you're an experienced staff level engineer or technical lead who's looking for a new opportunity based in West Hollywood, please do apply now for immediate consideration.