HTM Clinical Systems Engineer - Cybersecurity
Requirements engineer job in Whittier, CA
Lifesaving technology, powered by you. Your expertise impacts the lives of others. Invest in your life and the life of others. Invest in Sodexo.
Sodexo at PIH Health has a great opportunity for an HTM Clinical Systems Engineer - Cybersecurity, located primarily in Whittier, CA.
PIH Health Whittier Hospital was founded in 1959 by community members who needed quality healthcare services close to home. The 523-bed hospital has grown into a healthcare system that serves residents of Los Angeles County, Orange County and the San Gabriel Valley region. In addition to the hospital, the Whittier campus is home to the Washington and Wells Medical Office Buildings, a community pharmacy, an outpatient surgery center and the Patricia L. Scheifly Breast Health Center.
Typical Knowledge & Skills:
Strong Analytical Ability - aggregation of complex data sets, sorting of data into logical segments, identification of relevant data trends, summary of findings, executive-level display of data insights
Translation of Data into Strategy - ability to develop core components of a robust strategy with minimal direction, connection of data evidence & outcomes to progress towards defined goals, adjust strategy based on data, identify opportunities for improvement or pivot
Strong Understanding of Clinical Workflows - ability to identify impact of a change on patient safety, risk, and/or delivery of patient care including the efficient use of medical technologies, common challenges and risks in the clinical environment, understanding of infection control and safety protocols in the clinical environment, some understanding of key clinical metrics.
Software and Server Management - knowledge and experience with hands-on management of highly technical and sensitive hardware and software used to support the delivery of patient care, includes the daily management of key components to ensure high uptime and availability, some experience with the triage and troubleshooting of highly technical scenarios, some ability to oversee the response to both planned and unplanned downtime of key components
Change Management - ability to engage stakeholders proactively to plan for change, ability to monitor progress and identify red-flags, ability to empathize and support stakeholder response to change, ability to promote positive outcomes and benefits of change
This role combines deep clinical, technical, and cybersecurity expertise to ensure medical technology environments are safe, secure, and aligned with both patient care and business goals. The ideal candidate brings systems engineering principles to real-time problem-solving, working across disciplines to manage cybersecurity risk and promote operational excellence in clinical settings.
A valid driver's license and an acceptable driving record check are required.
What You'll Do:
Advise hospital leadership on the selection of medical technologies, with a focus on functionality and cybersecurity.
Lead complex projects to connect medical devices to hospital networks securely and efficiently.
Manage and monitor IoT security tools, analyze alerts, and develop advanced remediation and patching strategies.
Conduct risk assessments and business impact analyses to support informed technology decisions.
Oversee data quality and management for asset inventories, ensuring accuracy and completeness.
Support cybersecurity audits and regulatory compliance efforts, including HIPAA and Joint Commission.
Provide cybersecurity training and guidance to HTM teams and hospital leadership.
Represent Sodexo in industry cybersecurity forums and support strategic innovation initiatives.
What We Offer:
Compensation is fair and equitable, determined in part by a candidate's education level and years of relevant experience. Salary offers reflect candidate-specific criteria such as experience, skills, education, and training. Sodexo offers a comprehensive benefits package that may include:
Medical, Dental, Vision Care and Wellness Programs
401(k) Plan with Matching Contributions
Paid Time Off and Company Holidays
Career Growth Opportunities and Tuition Reimbursement
More extensive information is provided to new employees upon hire.
What You Bring:
Bachelor's degree in Biomedical Engineering, Information Technology, Cybersecurity, or equivalent experience.
3+ years of experience in Healthcare Technology Management with a focus on cybersecurity.
Strong understanding of medical device integration, clinical workflows, and network security principles.
Hands-on experience with IoT security solutions and medical device risk assessment.
Proven ability to lead complex projects across multiple hospital sites.
Excellent communication and leadership skills.
Who We Are:
At Sodexo, our purpose is to create a better everyday for everyone and build a better life for all. We believe in improving the quality of life for those we serve and contributing to the economic, social, and environmental progress in the communities where we operate. Sodexo partners with clients to provide a truly memorable experience for both customers and employees alike. We do this by providing food service, catering, facilities management, and other integrated solutions worldwide.
Our company values you for you; you will be treated fairly and with respect, and you can be yourself. You will have your ideas count and your opinions heard because we can be a stronger team when you're happy at work. This is why we embrace diversity and inclusion as core values, fostering an environment where all employees are valued and respected. We are committed to providing equal employment opportunities to individuals regardless of race, color, religion, national origin, age, sex, gender identity, pregnancy, disability, sexual orientation, military status, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. If you need assistance with the application process, please complete this form.
Qualifications & Requirements:
Minimum Education Requirement: Bachelor's degree or equivalent experience
Minimum Functional Experience: 3 years
Analytics Engineer
Requirements engineer job in Los Angeles, CA
Proper Hospitality is seeking a visionary Analytics Engineer to help build the future of data across our growing portfolio.
You will be the foundational data architect for Proper's next-generation hospitality intelligence platform, joining as the founding member of the data engineering team. Your mission is to design, build, and own the semantic modeling layer, identity-resolution framework, and core data infrastructure that powers analytics, personalization, membership logic, and AI-driven operations across all Proper properties. This is a hands-on role with direct ownership of data modeling, governance, and data quality, with influence over technical direction, vendor management, and cross-functional alignment.
You will collaborate with data engineering vendors, AI/ML engineering vendors and internal business leaders across operations, marketing, revenue management and sales to ensure our data infrastructure is accurate, scalable, governed, and actionable. You will be responsible for portfolio-wide hotel performance analytics, trend identification, and decision-support for Operations, Finance, Revenue Management, and senior executives.
Key Responsibilities
Data Architecture & Modeling
Design and own the company-wide dimensional modeling strategy using dbt (data build tool)
Create and maintain clean, well-documented, version-controlled models for core domains (PMS, POS, spa/wellness, membership, digital, etc.)
Establish and enforce naming conventions, data contracts, lineage, and schema governance
Identity Resolution & Guest Graph
Architect and maintain the Proper guest identity graph, unifying data across all systems into a single, accurate guest profile
Develop deterministic and heuristic matching rules; iterate on feature extraction, merging logic, and identity quality metrics
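The deterministic and heuristic matching rules above can be sketched in a few lines of Python. This is a minimal illustration, not Proper's actual matching logic; the `GuestRecord` fields, the 0.85 similarity threshold, and the source names are all assumptions:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class GuestRecord:
    source: str     # originating system, e.g. "PMS" or "POS" (hypothetical)
    email: str
    phone: str
    full_name: str

def deterministic_match(a: GuestRecord, b: GuestRecord) -> bool:
    """Exact match on a normalized strong identifier (email or phone)."""
    if a.email and a.email.lower() == b.email.lower():
        return True
    digits = lambda s: "".join(ch for ch in s if ch.isdigit())
    return bool(a.phone) and digits(a.phone) == digits(b.phone)

def heuristic_match(a: GuestRecord, b: GuestRecord, threshold: float = 0.85) -> bool:
    """Fuzzy name similarity as a weaker fallback signal."""
    score = SequenceMatcher(None, a.full_name.lower(), b.full_name.lower()).ratio()
    return score >= threshold

def same_guest(a: GuestRecord, b: GuestRecord) -> bool:
    # Deterministic rules win outright; heuristics only merge when they fire.
    return deterministic_match(a, b) or heuristic_match(a, b)
```

For example, a PMS record with an email and a POS record with only a differently formatted phone number would still merge deterministically once both phones are normalized to digits. Real identity-quality metrics would track merge precision and recall over time.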
Data Quality & Reliability
Implement robust data validation, monitoring, and alerting frameworks to ensure completeness, accuracy, and timeliness across all pipelines
Partner with contractors to ensure staging layers ingest data consistently and reliably
Business-Facing Metrics & Semantic Layer
Define and maintain authoritative metric definitions (LTV, ADR, occupancy, conversion, channel attribution, membership value, churn)
Build accessible data marts and semantic layers that can serve BI tools, CRM systems, and AI services
Design metrics and data visualizations with dashboarding tools like Tableau, Sigma, and Mode
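As a worked example of the authoritative metric definitions above: ADR is room revenue per room sold, occupancy is rooms sold over rooms available, and RevPAR is room revenue per available room, which equals ADR × occupancy. A minimal sketch with illustrative numbers (not Proper's actual definitions):

```python
def adr(room_revenue: float, rooms_sold: int) -> float:
    """Average Daily Rate: room revenue per occupied room."""
    return room_revenue / rooms_sold

def occupancy(rooms_sold: int, rooms_available: int) -> float:
    """Fraction of available rooms that were sold."""
    return rooms_sold / rooms_available

def revpar(room_revenue: float, rooms_available: int) -> float:
    """Revenue per available room; algebraically equal to ADR x occupancy."""
    return room_revenue / rooms_available
```

With $45,000 of room revenue over 180 rooms sold and 240 rooms available, ADR is 250.0, occupancy is 0.75, and RevPAR is 187.5 (250 × 0.75). Pinning such identities down in one semantic layer is what keeps BI tools and CRM systems from disagreeing.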
Cross-Functional Collaboration
Work closely with operations, revenue management, marketing, and the executive team to understand data needs and translate them into scalable models
Provide technical guidance and enforce standards with third-party engineering vendors
Champion high data-integrity standards across functions to increase reusability, readability, and standardization
Hotel Performance Analytics
Build recurring analytical frameworks and dashboards for property-level and portfolio-level insights (occupancy, ADR, RevPAR, segmentation mix, pickup behavior, channel performance, cost-per-room, labor productivity, F&B covers, check averages, menu engineering)
Detect structural trends and operational inefficiencies by analyzing PMS, POS, labor, spa, digital, and membership datasets
Partner with property and cluster leadership to interpret trends, validate root causes, and tie data outputs to operational actions
Build forecasting models for occupancy, F&B demand, spa utilization, labor, and revenue
Produce executive-level performance briefs that combine data engineering rigor with applied hospitality interpretation
AI/ML Enablement
Create and maintain feature tables for predictive models (propensity, demand forecasting, churn, LTV)
Support experimentation and real-time personalization use cases by providing clean features and stable data sources
Documentation & Governance
Maintain comprehensive documentation of all datasets, lineage, assumptions, and transformations
Own data governance, security, privacy compliance, and access controls in coordination with leadership
Qualifications
Required
4-7+ years of hands-on experience in analytics engineering, data engineering, or modern data stack architecture
Expert-level SQL
Deep experience with dbt, dimensional modeling, and analytics engineering best practices
Strong understanding of cloud data warehouses (Snowflake, BigQuery, or Databricks)
Experience building and validating ETL/ELT pipelines and working with raw staging layers
Strong understanding of data quality frameworks, testing, lineage, and documentation
Demonstrated ability to unify data across disparate systems and design customer 360 profiles
Proven ability to translate raw data into actionable insights for operators, leaders, and executives
Preferred
Experience in hospitality, retail, wellness, or membership-based businesses
Familiarity with reverse-ETL tools (Hightouch, Census)
Experience with event streaming (Kafka, Pub/Sub) and real-time architecture
Exposure to Python for data modeling or feature engineering
Understanding of marketing automation platforms (Klaviyo, Salesforce, Braze)
Strong data privacy and governance understanding (GDPR/CCPA)
Success in the First 90 Days Looks Like
Proper-wide data modeling standards defined and documented
Unified guest identity graph MVP created and validated on core systems
dbt project structured, version-controlled, and integrated with CI/CD
Vendor pipelines reviewed, documented, and aligned with governance
First wave of clean, tested metric tables delivered to stakeholders
Proper's first set of high-value feature tables ready for AI/ML use cases
Delivery of first hotel performance analytics suite roadmap (occupancy, ADR, RevPAR, segmentation, labor, F&B) with recommended actions
Salary
$155,000-$185,000
Proper Perks & Benefits
Compensation & Recognition
Competitive Salary + Bonus: Rewarding exceptional talent and performance across all levels.
Recognition Programs: Celebrating achievements big and small through company-wide appreciation and milestone rewards.
Annual Performance Reviews: Regular opportunities for feedback, growth, and advancement.
Culture of Growth & Belonging
Culture of Growth: A collaborative, design-forward environment that values creativity, intelligence, and curiosity - where learning and excellence are a daily practice.
Guided Skills Development: Access to training, leadership programs, mentorship, and cross-property mobility to encourage achievement and discovery.
Diversity, Equity, Inclusion & Belonging: We honor individuality while fostering a culture of respect and belonging across all teams.
Community Engagement: Opportunities to give back through local volunteerism, sustainability, and charitable partnerships.
Health & Wellness
Comprehensive Health Coverage: Medical, dental, and vision plans through Aetna, designed to fit a range of personal and family needs.
Wellness Access: Company-subsidized memberships with Equinox and ClassPass, plus wellbeing workshops and mental health resources.
Employee Assistance Program (EAP): Confidential support for emotional wellbeing, financial planning, and life management through Unum.
Time Off & Flexibility
Paid Time Off: Flexible PTO plus 11 paid holidays each year for corporate team members.
Paid Parental Leave: Paid time off for eligible employees welcoming a new child through birth, adoption, or foster placement.
Flexible Work Practices: Hybrid schedules for eligible roles and an emphasis on work-life balance.
Financial Wellbeing & Core Protections
401(k) Program: Company match of 50% of employee deferrals, up to the first 4% of eligible compensation.
Employer-Paid Life & Disability Insurance: Core protections with optional additional coverage.
Financial Education: Access to planning tools and workshops to support long-term stability and growth.
Lifestyle & Travel Perks
Hotel Stay Benefits: 75% off BAR (floor of $100) across the Proper portfolio.
Design Hotels Partnership: 50% off participating Marriott Design Hotels.
Dining Discounts: 75% off food & beverage at all Proper Hospitality outlets.
Lifestyle Perks: Complimentary or subsidized parking, cell phone reimbursement, and exclusive hospitality and retail discounts.
Why Join Proper Hospitality
At Proper, we build experiences that move people - and that begins with the team behind them. As a best-in-class employer, we're committed to creating one of the Best Places to Work in hospitality by nurturing a culture where creativity, excellence, and humanity thrive together.
Everything we do is grounded in the belief that hospitality is more than a profession - it's an opportunity to care for others and make lives better. Guided by the Pillars of Proper, we show up with warmth and authenticity (Care Proper), strive for excellence in everything we do (Achieve Proper), think creatively and resourcefully (Imagine Proper), and take pride in the style and culture that make us who we are (Present Proper).
We believe our people are our greatest strength, and we invest deeply in their wellbeing, growth, and sense of belonging. From comprehensive benefits to meaningful development programs, Proper is designed to help you build a career, and a life, that feels as inspiring as the experiences we create for our guests.
Our Commitment: Building the Best Place to Work
Our Best Place to Work initiative is a living commitment - a continuous investment in our people, our culture, and our purpose. We listen, learn, and evolve together to create an environment where everyone feels empowered to imagine boldly, achieve confidently, care deeply, and present themselves authentically.
At Proper, joining the team means more than finding a job - it means joining a community that believes in building beautiful experiences together, for our guests and for one another.
Thermal Engineer
Requirements engineer job in Los Angeles, CA
Basic qualifications
Bachelor's degree in mechanical engineering or related discipline, or equivalent experience.
3+ years of experience with Thermal Desktop (SINDA) and/or Ansys thermal analysis tools (Icepak, Fluent, Mechanical, etc.)
2+ years of experience with test planning, test setup (thermocouple and heater installation), operating DAQs and power supplies, results correlation, and system verification for production
Experience in documentation and writing test reports.
Preferred qualifications
Experience with avionics thermal design and analysis.
CAD skills (NX or Solidworks).
Experience with interpreting and correlating test data to thermal models.
Specific tasks this individual will support are as follows:
Kuiper Modem Module
Provide design inputs to the mechanical and electrical teams.
Complete thermal modeling analysis using Ansys analysis tools.
Develop comprehensive thermal analysis documentation
Develop thermal testing and qualification plan.
Conduct thermal testing and model validation.
Optical Communications Terminal v2 (OCT v2)
Perform trade studies to select architecture of the electronic enclosures.
Perform preliminary thermal analysis of printed circuit board assemblies.
Collaborate with design team to develop preliminary designs.
Define and execute development testing.
Pavement Engineering
Requirements engineer job in Rancho Santa Margarita, CA
Career Opportunity: Pavement Engineer
Levels: Project, Senior, Associate
About the Company
They are a well-established engineering consulting firm with a decades-long reputation for delivering technical excellence and practical solutions across Southern California's most complex and high-profile projects. Their services span Pavement Engineering, Geotechnical Engineering, Structural Engineering, Construction Management, Instrumentation, Laboratory Testing, and Forensic Consulting.
They work on diverse projects, including master-planned communities, high-rise office towers, major infrastructure, custom homes, and pavement improvement programs for public and private clients. They offer a collaborative environment with opportunities for professional development, hands-on experience, and meaningful contributions to infrastructure that supports communities.
Major Responsibilities
As a Pavement Engineer, you will engage in technical analysis, fieldwork, and cross-disciplinary collaboration. Depending on experience and level, responsibilities include:
Field Evaluations & Investigations: Perform on-site pavement evaluations through visual surveys, subsurface explorations, and non-destructive testing.
Pavement Testing & Analysis: Utilize tools such as Falling Weight Deflectometer (FWD), Ground Penetrating Radar (GPR), and Dynamic Cone Penetrometer (DCP).
Pavement Management Plans & PCI Surveys: Conduct Pavement Condition Index (PCI) surveys and assist in developing pavement management plans.
Pavement Design: Design new and rehabilitated pavement sections (flexible and rigid systems).
Construction Observation & Materials Testing: Provide or manage field inspection and materials testing during construction.
Data Processing & Technical Reporting: Compile and analyze lab results, core logs, and test data; contribute to technical reports and cost estimates.
Forensic Evaluations: Assess causes of pavement distress and recommend corrective actions.
Technical Documentation: Prepare drawings, reports, figures, and other deliverables.
Project Management: Manage projects, including proposals, budgets, schedules, and deliverables.
Software: Use AutoCAD, ArcGIS, MicroPAVER, StreetSaver, and GPR data processing tools.
Interdisciplinary Collaboration: Work with geotechnical, civil, and structural engineering teams.
Minimum Requirements
Education: BS in Civil Engineering with interest in pavement engineering.
Experience: 4-10+ years in pavement engineering.
Skills: Strong communication, organization, and attention to detail.
Physical: Ability to lift up to 30 lbs. and walk project sites.
Transportation: Reliable vehicle for site visits.
Attributes: Self-driven, detail-oriented, and collaborative.
Preferred Qualifications
EIT certification
California PE license (for senior roles)
MS in Civil Engineering with pavement emphasis
Proficiency in Microsoft Office and Adobe Acrobat
Experience with pavement engineering software
Field and lab testing experience
ServiceNow CMDB Engineer
Requirements engineer job in Irvine, CA
Employment Type: Full-Time, Direct Hire (W2 Only - No sponsorship available)
About the Role
We're seeking a skilled and driven ServiceNow CMDB Engineer to join our team in Irvine, CA. This is a hands-on, onsite role focused on designing, implementing, and maintaining a robust Configuration Management Database (CMDB) aligned with ServiceNow's Common Service Data Model (CSDM). You'll play a critical role in enhancing IT operations, asset management, and service delivery across the enterprise.
Responsibilities
Architect, configure, and maintain the ServiceNow CMDB to support ITOM and ITAM initiatives
Implement and optimize CSDM frameworks to ensure data integrity and alignment with business services
Collaborate with cross-functional teams to define CI classes, relationships, and lifecycle processes
Develop and enforce CMDB governance, data quality standards, and reconciliation rules
Integrate CMDB with discovery tools and external data sources
Support audits, compliance, and reporting requirements related to ITIL processes
Troubleshoot and resolve CMDB-related issues and performance bottlenecks
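Reconciliation rules of the kind described above can be illustrated outside ServiceNow with a small sketch: duplicate CI records keyed by serial number are collapsed, and a source-precedence table decides which record is authoritative. The source names and record fields here are hypothetical and this is not ServiceNow's actual reconciliation engine:

```python
# Hypothetical source precedence: lower number wins when two sources
# report the same configuration item (CI).
PRECEDENCE = {"discovery": 0, "sccm": 1, "manual": 2}

def reconcile(records: list[dict]) -> dict[str, dict]:
    """Collapse duplicate CI records keyed by serial number, keeping
    the record from the most authoritative source."""
    cis: dict[str, dict] = {}
    for rec in records:
        key = rec["serial_number"]
        current = cis.get(key)
        if current is None or PRECEDENCE[rec["source"]] < PRECEDENCE[current["source"]]:
            cis[key] = rec
    return cis
```

In practice the CMDB's reconciliation rules serve the same purpose: when Discovery and a manually entered record disagree about the same asset, governance decides, deterministically, whose attributes survive.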
Qualifications
3+ years of hands-on experience with ServiceNow CMDB and CSDM implementation
Strong understanding of ITIL practices and ITOM/ITAM modules
Proven ability to manage CI lifecycle and maintain data accuracy
Experience with ServiceNow Discovery, Service Mapping, and integrations
ServiceNow Certified System Administrator (CSA) or higher certifications preferred
Excellent communication and documentation skills
Must be authorized to work in the U.S. without sponsorship
Perks & Benefits
Competitive compensation package
Collaborative and innovative work environment
Opportunity to work with cutting-edge ServiceNow technologies
Azure & Microsoft Fabric Data Engineer & Architect
Requirements engineer job in Los Angeles, CA
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, as well as internationally in Mexico and India. Our global solutions team is seeking an Azure & Microsoft Fabric Data Engineer/Architect to support and lead our Media & Entertainment client in building a next-generation financial data platform. We're looking for someone who can contribute strategically and has the hands-on skills to provide subject-matter expertise. In this role, you'll design and build enterprise-level data models, lead data migration efforts into Azure, and develop cutting-edge data processing pipelines using Microsoft Fabric. If you thrive at the intersection of architecture and hands-on engineering, and want to help shape a modern financial system with complex upstream data processing, this is the opportunity for you!
This project requires working onsite at a location adjacent to Burbank / Studio City 3 days per week. We are setting up interviews immediately and look forward to hearing from you!
This is a hybrid position requiring 3-4 days per week onsite.
Responsibilities
Architect, design, and hands-on develop end-to-end data solutions using Azure Data Services and Microsoft Fabric.
Build and maintain complex data models in Azure SQL, Lakehouse, and Fabric environments that support advanced financial calculations.
Lead and execute data migration efforts from multiple upstream and legacy systems into Azure and Fabric.
Develop, optimize, and maintain ETL/ELT pipelines using Microsoft Fabric Data Pipelines, Data Factory, and Azure engineering tools.
Perform hands-on SQL development, including stored procedures, query optimization, performance tuning, and data transformation logic.
Partner with finance, engineering, and product stakeholders to translate requirements into scalable, maintainable data solutions.
Ensure data quality, lineage, profiling, and governance across ingestion and transformation layers.
Tune and optimize Azure SQL databases and Fabric Lakehouse environments for performance and cost efficiency.
Troubleshoot data processing and pipeline issues to maintain stability and reliability.
Document architecture, data flows, engineering standards, and best practices.
Qualifications
Expert, hands-on experience with Azure Data Services (Azure SQL, Data Factory, Data Lake Storage, Synapse, Azure Storage).
Deep working knowledge of Microsoft Fabric, including Data Engineering workloads, Lakehouse, Fabric SQL, Pipelines, and governance.
Strong experience designing and building data models within Azure SQL and Fabric architectures.
Proven track record delivering large-scale data migrations into Azure environments.
Advanced proficiency in SQL/T-SQL, including stored procedures, indexing, and performance tuning.
Demonstrated success building and optimizing ETL/ELT pipelines for complex financial or multi-source datasets.
Understanding of financial systems, data structures, and complex calculation logic.
Excellent communication and documentation skills with the ability to collaborate across technical and business teams.
Additional Details
The base range for this contract position is $70-85/per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Qualified applicants with arrest or conviction records will be considered.
Benefits
Medical coverage and Health Savings Account (HSA) through Anthem
Dental/Vision/Various Ancillary coverages through Unum
401(k) retirement savings plan
Company-paid Employee Assistance Program (EAP)
Discount programs through ADP WorkforceNow
About Us
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States and globally with offices in Los Angeles, Atlanta, New York, Mexico, Japan, India, and more. STAND 8 focuses on the "bleeding edge" of technology and leverages automation, process, marketing, and over fifteen years of success and growth to provide a world-class experience for our customers, partners, and employees. Our mission is to impact the world positively by creating success through PEOPLE, PROCESS, and TECHNOLOGY.
Check out more at ************** and reach out today to explore opportunities to grow together!
By applying to this position, your data will be processed in accordance with the STAND 8 Privacy Policy.
Data Engineer
Requirements engineer job in Culver City, CA
Robert Half is partnering with a well-known high-tech company seeking an experienced Data Engineer with strong Python and SQL skills. The primary duties involve managing the complete data lifecycle and utilizing extensive datasets across marketing, software, and web platforms. This position is full time with full benefits and 3 days onsite in the Culver City area.
Responsibilities:
4+ years of professional experience, ideally in a combination of data engineering and business intelligence.
Work heavily with SQL and program in Python.
Own the entire data lifecycle, including collection, extraction, and cleansing processes.
Build reports and data visualizations to help advance the business.
Leverage industry-standard data integration tools such as Talend.
Work extensively within cloud-based ecosystems such as AWS and GCP.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering, data warehousing, and big data technologies.
Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL technologies.
Experience working within GCP environments and AWS.
Experience in real-time data pipeline tools.
Hands-on expertise with Google Cloud services including BigQuery.
Deep knowledge of SQL, including dimension tables, and experience in Python programming.
Senior Data Engineer
Requirements engineer job in Glendale, CA
City: Glendale, CA
Onsite/Hybrid/Remote: Hybrid (3 days a week onsite, Friday remote)
Duration: 12 months
Rate Range: Up to $85/hr on W2 depending on experience (no C2C, 1099, or sub-contract)
Work Authorization: GC, USC, All valid EADs except OPT, CPT, H1B
Must Have:
• 5+ years Data Engineering
• Airflow
• Spark DataFrame API
• Databricks
• SQL
• API integration
• AWS
• Python or Java or Scala
Responsibilities:
• Maintain, update, and expand Core Data platform pipelines.
• Build tools for data discovery, lineage, governance, and privacy.
• Partner with engineering and cross-functional teams to deliver scalable solutions.
• Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
• Support platform standards, best practices, and documentation.
• Ensure data quality, reliability, and SLA adherence across datasets.
• Participate in Agile ceremonies and continuous process improvement.
• Work with internal customers to understand needs and prioritize enhancements.
• Maintain detailed documentation that supports governance and quality.
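At the core of the Airflow work above is expressing a pipeline as a DAG of tasks and letting the scheduler resolve execution order. A dependency-resolution sketch in plain Python (no Airflow required; the task names are hypothetical):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical Core Data pipeline: each task maps to the set of
# upstream tasks it depends on.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform", "quality_check"},
}

def run_order(dag: dict[str, set[str]]) -> list[str]:
    """Return one valid execution order, much as a scheduler would:
    every task appears after all of its upstream dependencies."""
    return list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and SLA tracking on top of this core idea, but the dependency contract is the same: `load` can never run before `quality_check` has passed.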
Qualifications:
• 5+ years in data engineering with large-scale pipelines.
• Strong SQL and one major programming language (Python, Java, or Scala).
• Production experience with Spark and Databricks.
• Experience ingesting and interacting with API data sources.
• Hands-on Airflow orchestration experience.
• Experience developing APIs with GraphQL.
• Strong AWS knowledge and infrastructure-as-code familiarity.
• Understanding of OLTP vs OLAP, data modeling, and data warehousing.
• Strong problem-solving and algorithmic skills.
• Clear written and verbal communication.
• Agile/Scrum experience.
• Bachelor's degree in a STEM field or equivalent industry experience.
Big Data Engineer
Requirements engineer job in Santa Monica, CA
Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
Desired Skills/Experience:
Bachelor's degree in a STEM field (Science, Technology, Engineering, or Mathematics)
5+ years of relevant professional experience
4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
Strong understanding of system and application design, architecture principles, and distributed system fundamentals
Proven experience building highly available, scalable, and production-grade services
Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
Experience processing massive datasets at the petabyte scale
Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $52.00 and $75.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Data Analytics Engineer
Requirements engineer job in Irvine, CA
We are seeking a Data Analytics Engineer who will serve as a hybrid Database Administrator, Data Engineer, and Data Analyst, responsible for managing core data infrastructure, developing and maintaining ETL pipelines, and delivering high-quality analytics and visual insights to executive stakeholders. This role bridges technical execution with business intelligence, ensuring that data across Salesforce, financial, and operational systems is accurate, accessible, and strategically presented.
Essential Functions
Database Administration: Oversee and maintain database servers, ensuring performance, reliability, and security. Manage user access, backups, and data recovery processes while optimizing queries and database operations.
Data Engineering (ELT): Design, build, and maintain robust ELT pipelines (SQL/DBT or equivalent) to extract, transform, and load data across Salesforce, financial, and operational sources. Ensure data lineage, integrity, and governance throughout all workflows.
Data Modeling & Governance: Design scalable data models and maintain a governed semantic layer and KPI catalog aligned with business objectives. Define data quality checks, SLAs, and lineage standards to reconcile analytics with finance source-of-truth systems.
Analytics & Reporting: Develop and manage executive-facing Tableau dashboards and visualizations covering key lending and operational metrics - including pipeline conversion, production, credit quality, delinquency/charge-offs, DSCR, and LTV distributions.
Presentation & Insights: Translate complex datasets into clear, compelling stories and presentations for leadership and cross-functional teams. Communicate findings through visual reports and executive summaries to drive strategic decisions.
Collaboration & Integration: Partner with Finance, Capital Markets, and Operations to refine KPIs and perform ad-hoc analyses. Collaborate with Engineering to align analytical and operational data, manage integrations, and support system scalability.
Enablement & Training: Conduct training sessions, create documentation, and host data office hours to promote data literacy and empower business users across the organization.
Competencies & Skills
Advanced SQL proficiency with strong data modeling, query optimization, and database administration experience (PostgreSQL, MySQL, or equivalent).
Hands-on experience managing and maintaining database servers and optimizing performance.
Proficiency with ETL/ELT frameworks (DBT, Airflow, or similar) and cloud data stacks (AWS/Azure/GCP).
Strong Tableau skills - parameters, LODs, row-level security, executive-level dashboard design, and storytelling through data.
Experience with Salesforce data structures and ingestion methods.
Proven ability to communicate and present technical data insights to executive and non-technical stakeholders.
Solid understanding of lending/financial analytics (pipeline conversion, delinquency, DSCR, LTV).
Working knowledge of Python for analytics tasks, cohort analysis, and variance reporting.
Familiarity with version control (Git), CI/CD for analytics, and data governance frameworks.
Excellent organizational, documentation, and communication skills with a strong sense of ownership and follow-through.
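The Python analytics work noted above (e.g., cohort analysis) can be sketched with the standard library alone; the event records and month labels here are hypothetical:

```python
from collections import defaultdict

# Hypothetical user events: (user_id, signup_month, active_month)
events = [
    ("u1", "2024-01", "2024-01"), ("u1", "2024-01", "2024-02"),
    ("u2", "2024-01", "2024-01"),
    ("u3", "2024-02", "2024-02"), ("u3", "2024-02", "2024-03"),
]

def cohort_retention(events):
    """Map each signup-month cohort to {active_month: distinct active users}."""
    table = defaultdict(lambda: defaultdict(set))
    for user, cohort, month in events:
        table[cohort][month].add(user)
    return {c: {m: len(users) for m, users in months.items()}
            for c, months in table.items()}

print(cohort_retention(events))
# {'2024-01': {'2024-01': 2, '2024-02': 1}, '2024-02': {'2024-02': 1, '2024-03': 1}}
```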
Education & Experience
Bachelor's degree in Computer Science, Engineering, Information Technology, Data Analytics, or a related field.
3+ years of experience in data analytics, data engineering, or database administration roles.
Experience supporting executive-level reporting and maintaining database infrastructure in a fast-paced environment.
Data Engineer
Requirements engineer job in Irvine, CA
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it.
If you have read my posts in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Because of this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very important, but we also have some social fit characteristics that matter. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine, and he said something interesting to me not long ago: if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job', so the ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused, with orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, so data scientists can focus on experimentation and the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
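As one concrete illustration of the Prometheus instrumentation described above, a pipeline can publish gauges in the Prometheus text exposition format. This is a minimal stdlib sketch; the metric names are hypothetical, and a real exporter would typically use the `prometheus_client` library instead of formatting by hand:

```python
def render_metrics(metrics):
    """Render {name: (help_text, value)} as Prometheus text exposition format."""
    lines = []
    for name, (help_text, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical pipeline health metrics, as scraped from a /metrics endpoint.
payload = render_metrics({
    "pipeline_last_success_timestamp": ("Unix time of last good run", 1700000000),
    "pipeline_rows_processed": ("Rows in the last batch", 125000),
})
print(payload)
```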
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience with T-SQL and Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
Ability to mentor engineers on best practices in containerized data engineering and MLOps.
Data Engineer (AWS Redshift, BI, Python, ETL)
Requirements engineer job in Manhattan Beach, CA
We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.
Responsibilities:
Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
Optimize data workflows for performance, scalability, and reliability.
Collaborate with BI teams to support dashboards, reporting, and analytics needs.
Ensure data quality, governance, and documentation across all solutions.
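The star-schema modeling mentioned above can be illustrated with an in-memory SQLite sketch: one fact table keyed to a dimension table, then a typical BI rollup. The table and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Dimension table: one row per product.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
# Fact table: one row per sale, keyed to the dimension.
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "widgets"), (2, "gadgets")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 10.0), (1, 15.0), (2, 7.5)])
# Typical BI rollup: revenue by category via a star join.
rows = cur.execute("""
    SELECT d.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('gadgets', 7.5), ('widgets', 25.0)]
```

A snowflake schema extends the same idea by normalizing the dimension (e.g., splitting category into its own table).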
Qualifications:
Proven experience with data engineering tools (SQL, Python, ETL frameworks).
Strong understanding of BI concepts, reporting tools, and dimensional modeling.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
Excellent problem-solving skills and ability to work in a cross-functional environment.
Lead Data Engineer - (Automotive exp)
Requirements engineer job in Torrance, CA
Role: Sr Technical Lead
Duration: 12+ Month Contract
Daily Tasks Performed:
Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product.
Architect solutions that integrate with various data sources, APIs, and third-party platforms.
Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis
Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services
Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions
Implement data quality checks, logging, and alerting mechanisms within workflows
Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities
Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation.
Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA).
Ensure adherence to security, compliance, and data governance requirements.
Oversee development of real-time and batch data processing systems.
Collaborate with cross-functional teams including data analysts, product managers, and software engineers to translate business requirements into technical solutions
Collaborate with the stakeholders to define technical requirements to align technical solutions with business goals and deliver product features.
Mentor and guide developers, fostering a culture of technical excellence and continuous improvement.
Troubleshoot complex technical issues and provide hands-on support as needed.
Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency as needed
Optimize system performance, scalability, and cost efficiency.
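A Digdag workflow like the ones described above is defined in YAML. This is an illustrative sketch only; the task names, query file, table, and S3 path are hypothetical:

```yaml
# daily_etl.dig - illustrative Digdag workflow (names are hypothetical)
timezone: America/Los_Angeles
schedule:
  daily>: 02:00:00

+extract:
  td>: queries/extract_events.sql
  create_table: staged_events

+quality_check:
  py>: tasks.checks.validate_row_counts

+export:
  sh>: aws s3 cp export/events.csv s3://example-bucket/events/
```

Tasks prefixed with `+` run in order by default, and operators such as `td>`, `py>`, and `sh>` delegate each step to Treasure Data, Python, or the shell.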
What this person will be working on:
As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data.
Position Success Criteria (Desired) - 'WANTS'
Bachelor's or Master's degree in Computer Science, Engineering, or related field.
8+ years of software development experience, with at least 3+ years in a technical leadership role.
Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains
Extensive hands-on experience with Presto, Hive, and Python
Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis
Familiarity with AWS data services such as S3, Athena, Glue, and Lambda
Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing
Experience ensuring data privacy, security, and compliance in SaaS environments
Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools
Excellent communication, leadership, and project management skills
Experience working with Agile methodologies and DevOps practices
Ability to thrive in a fast-paced, agile environment
Collaborative mindset with a proactive approach to problem-solving
Stay current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
DevOps Engineer
Requirements engineer job in Westlake Village, CA
In today's market, there is a unique duality in technology adoption. On one side, clients focus intensely on cost containment; on the other, they are deeply motivated to modernize their digital storefronts to attract more consumers and B2B customers.
As a leading Modernization Engineering company, we aim to deliver modernization-driven hypergrowth for our clients based on the deep differentiation we have created in Modernization Engineering, powered by our Lightening suite and 16-step Platformation™ playbook. In addition, we bring agility and systems thinking to accelerate time to market for our clients.
Headquartered in Bengaluru, India, Sonata has a strong global presence, including key regions in the US, UK, Europe, APAC, and ANZ. We are a trusted partner of world-leading companies in BFSI (Banking, Financial Services, and Insurance), HLS (Healthcare and Lifesciences), TMT (Telecom, Media, and Technology), Retail & CPG, and Manufacturing space. Our bouquet of Modernization Engineering Services cuts across Cloud, Data, Dynamics, Contact Centers, and around newer technologies like Generative AI, MS Fabric, and other modernization platforms.
Job Role : Sr. DevOps Engineer, platforms
Work Location: Westlake Village, CA (5 Days Onsite)
Duration: Contract to Hire
Job Description:
Responsibilities:
Design, implement, and manage scalable and resilient infrastructure on AWS.
Architect and maintain Windows/Linux based environments, ensuring seamless integration with cloud platforms.
Develop and maintain infrastructure-as-code (IaC) using both AWS CloudFormation/CDK and Terraform.
Develop and maintain Configuration Management for Windows servers using Chef.
Design, build, and optimize CI/CD pipelines using GitLab CI/CD for .NET applications.
Implement and enforce security best practices across the infrastructure and deployment processes.
Collaborate closely with development teams to understand their needs and provide DevOps expertise.
Troubleshoot and resolve infrastructure and application deployment issues.
Implement and manage monitoring and logging solutions to ensure system visibility and proactive issue detection.
Clearly and concisely contribute to the development and documentation of DevOps standards and best practices.
Stay up-to-date with the latest industry trends and technologies in cloud computing, DevOps, and security.
Provide mentorship and guidance to junior team members.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
5+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
Extensive hands-on experience with Amazon Web Services (AWS)
Solid understanding of Windows/Linux Server administration and integration with cloud environments.
Proven experience with infrastructure-as-code tools, specifically AWS CDK and Terraform.
Strong experience designing and implementing CI/CD pipelines using GitLab CI/CD.
Experience deploying and managing .NET applications in cloud environments.
Deep understanding of security best practices and their implementation in cloud infrastructure and CI/CD pipelines.
Solid understanding of networking principles (TCP/IP, DNS, load balancing, firewalls) in cloud environments.
Experience with monitoring and logging tools (e.g., NewRelic, CloudWatch, Cloud Logging, Prometheus).
Strong scripting skills (e.g., PowerShell, Python, Ruby, Bash).
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration skills.
Experience with containerization technologies (e.g., Docker, Kubernetes) is a plus.
Relevant AWS and/or GCP certifications are a plus.
Experience with the configuration management tool Chef
Preferred Qualifications
Strong understanding of PowerShell and Python scripting
Strong background with AWS EC2 features and services (Auto Scaling and Warm Pools)
Understanding of the Windows Server build process using tools like Chocolatey for packages and Packer for AMI/image generation.
Why join Sonata Software?
At Sonata, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you will not be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.
Sonata Software is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity, age, religion, disability, sexual orientation, veteran status, marital status, or any other characteristics protected by law.
DevOps Engineer
Requirements engineer job in Westlake Village, CA
At Akkodis, we use our insight, knowledge, and global resources to make exceptional connections every day. With 60 branch offices located strategically throughout North America, we are positioned perfectly to deliver the industry's top talent to each of our clients. Clients choose Akkodis as their workforce partner to solve staffing challenges that range from locating hard-to-find niche talent to completing quick-fill demands.
Akkodis is seeking a DevOps Engineer with a Westlake Village, CA-based client to join their team.
JOB TITLE: DevOps Engineer
EMPLOYMENT TYPE/DURATION: Contract role - 6 months + (possible conversion to FTE)
COMPENSATION: Pay rate $60.00-$62.50/hour
LOCATION DETAILS: On-site Mon-Fri, 8am-5pm PST - Westlake Village, CA
Top Required Skills:
3+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
Extensive hands-on experience with Amazon Web Services (AWS)
Solid understanding of Windows/Linux Server administration and integration with cloud environments.
Proven experience with infrastructure-as-code (IaC) tools, specifically Terraform (OpenTofu) and AWS CDK.
Strong experience designing and implementing CI/CD pipelines using GitLab CI/CD.
Experience deploying and managing .NET applications in cloud environments.
We're looking for an experienced, forward-thinking engineer to strengthen our DevOps capabilities across AWS and Windows/Linux environments. In this role, you'll drive the design and evolution of scalable, secure, and automated infrastructure to support our Infrastructure and Application stack. You'll work closely with development teams to streamline CI/CD pipelines, embed security best practices, and champion infrastructure-as-code. If you're passionate about automation, cloud-native patterns, and making systems run smarter and faster, we want to hear from you.
Design, implement, and manage scalable and resilient infrastructure on AWS.
Architect and maintain Windows/Linux based environments, ensuring seamless integration with cloud platforms.
Develop and maintain infrastructure-as-code (IaC) using both AWS CloudFormation/CDK and Terraform.
Develop and maintain Configuration Management for Windows servers using Chef.
Design, build, and optimize CI/CD pipelines using GitLab CI/CD for .NET applications.
Implement and enforce security best practices across the infrastructure and deployment processes.
Collaborate closely with development teams to understand their needs and provide DevOps expertise.
Troubleshoot and resolve infrastructure and application deployment issues.
Implement and manage monitoring and logging solutions to ensure system visibility and proactive issue detection.
Clearly and concisely contribute to the development and documentation of DevOps standards and best practices.
Stay up to date with the latest industry trends and technologies in cloud computing, DevOps, and security.
Provide mentorship and guidance to junior team members.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
3+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
Extensive hands-on experience with Amazon Web Services (AWS)
Solid understanding of Windows/Linux Server administration and integration with cloud environments.
Proven experience with infrastructure-as-code (IaC) tools, specifically Terraform (OpenTofu) and AWS CDK.
Strong experience designing and implementing CI/CD pipelines using GitLab CI/CD.
Experience deploying and managing .NET applications in cloud environments.
Deep understanding of security best practices and their implementation in cloud infrastructure and CI/CD pipelines.
Solid understanding of networking principles (TCP/IP, DNS, load balancing, firewalls) in cloud environments.
Experience with monitoring and logging tools (e.g., NewRelic, CloudWatch, Cloud Logging, Prometheus).
Strong scripting skills (e.g., Python, Ruby, PowerShell, Bash).
Experience with the configuration management tool Chef
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration skills.
Preferred Qualifications
Experience with containerization & orchestration technologies (e.g., Docker, Kubernetes) is a plus.
Relevant AWS and/or GCP certifications are a plus.
Strong understanding of PowerShell and Python scripting
Strong background with AWS EC2 features and Services (Autoscaling and WarmPools)
Understanding of the Windows Server build process using tools like Chocolatey for packages and Packer for AMI/image generation.
Solid experience with the Windows server operating system and server tools such as IIS.
If you are interested in this role, then please click APPLY NOW. For other opportunities available at Akkodis go to **************** If you have questions about the position, please contact Dana More at **************************
Equal Opportunity Employer/Veterans/Disabled
Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits and 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State, or local law; and Holiday pay upon meeting eligibility criteria. Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs which are direct hire to a client
To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit **********************************************
The Company will consider qualified applicants with arrest and conviction record.
Software Engineer, Entry Level (New Grad)
Requirements engineer job in Los Angeles, CA
About the role
We are hiring an Entry Level Software Engineer to join a collaborative engineering team building modern web and backend systems. This is a great opportunity for recent graduates to work on real production features, learn best practices, and grow with mentorship.
What you will do
• Build and enhance backend services and APIs
• Develop UI features and improve user experience (depending on team)
• Write clean, testable code and participate in code reviews
• Troubleshoot issues, fix bugs, and improve system reliability
• Collaborate with product, QA, and other engineers in agile sprints
• Document technical work and contribute to team knowledge bases
What we are looking for
• Bachelor's or Master's degree in Computer Science, Software Engineering, or related field (or equivalent experience)
• Strong fundamentals in data structures, algorithms, and OOP
• Experience with at least one programming language (Java, Python, JavaScript, C#, etc.)
• Familiarity with Git and basic CI/CD concepts
• Comfort working with SQL or basic database concepts
• Strong communication and willingness to learn
Nice to have
• Internship, capstone, or personal projects (GitHub preferred)
• Exposure to cloud platforms (AWS, Azure, GCP)
• Familiarity with Docker, REST, microservices, or React
System Engineer
Requirements engineer job in Los Angeles, CA
Job Title: Systems Engineer
Employment Type: Full-Time
TransSIGHT is at the forefront of delivering advanced transportation solutions, providing innovative software and hardware products that enhance system efficiency, reliability, and customer experience. We are seeking a highly motivated Systems Engineer to join our Los Angeles team and contribute to the design, development, and deployment of cutting-edge solutions.
Position Summary:
The Systems Engineer will play a critical role in supporting the development, integration, and deployment of software and hardware systems. This role requires a detail-oriented professional capable of coordinating system development tasks, maintaining thorough technical documentation, and ensuring successful solution implementation for our clients.
Primary Responsibilities:
Support the definition of software and hardware products and interfaces in the areas of CAD/AVL and fare collection.
Coordinate system development tasks, including design, integration, and formal testing.
Oversee all transitions into customer deployment environments.
Develop and execute projects encompassing system specifications, technical and logistical requirements, and other disciplines.
Create and maintain programmatic and technical documentation, including design documents, requirement matrices, and system diagrams, to ensure efficient planning and execution.
Manage and document system configurations.
Ensure the successful deployment of solutions.
Maintain a thorough working knowledge of enterprise applications.
Perform performance and reliability analysis of end-to-end protocols.
Assist with other tasks as needed by the Systems Engineering department.
Provide training and guidance to Associates and Project Engineer I/II staff as needed.
Qualifications:
Bachelor's degree in Systems Engineering, Computer Science, Electrical Engineering, or a related field.
Minimum of 5 years of experience in systems engineering, software/hardware integration, or related disciplines.
Strong understanding of system design, integration, and testing processes.
Experience managing technical documentation and system configurations.
Ability to perform performance and reliability analysis of complex systems.
Excellent problem-solving, organizational, and communication skills.
Ability to work both independently and collaboratively in a fast-paced environment.
Preferred:
Prior experience in transportation, software, or hardware systems engineering.
Familiarity with enterprise applications and deployment processes.
Why Join TransSIGHT:
Work on innovative projects shaping the future of transportation.
Collaborative and supportive team environment.
Opportunities for professional growth and continuous learning.
Descent Systems Engineer
Requirements engineer job in Torrance, CA
In Orbit envisions a world where our most critical resources are accessible when we need them the most. Today, In Orbit is on a mission to provide the most resilient and autonomous cargo delivery solutions for regions suffering from conflict and natural disasters.
Descent Systems Engineer:
In Orbit is looking for a Descent Systems Engineer eager to join a diverse and dynamic team developing solutions for cargo delivery where traditional aircraft and drones fail.
As a Descent Systems Engineer at In Orbit, you will work on the design, development, and testing of advanced parachutes and decelerator systems. You will work with other engineers on integrating decelerator subsystems into the vehicle. The ideal candidate will have experience manufacturing and testing parachute systems, a solid foundation in aerodynamic and mechanical design principles, and flight-testing experience.
Responsibilities:
Lead the development of parafoils, reefing systems, and other decelerator components.
Develop fabrication and manufacturing processes including material selection, patterning, sewing, rigging, and hardware integration.
Plan and conduct flight tests including drop tests, high-altitude balloon tests, and other captive-carry deployments.
Support the development of test plans, procedures, and instrumentation requirements to verify system performance.
Collaborate closely with mechanical, avionics, and software teams for vehicle-level integration.
Own documentation and configuration management for parachute assemblies, manufacturing specifications, and test reports.
Basic Qualifications:
Bachelor's Degree level of education in Aerospace Engineering or similar curriculum.
Strong understanding of aerodynamics, drag modeling, reefing techniques, and dynamic behaviors of decelerators.
Experience with reefing line cutting systems or multi-stage deployment mechanisms.
Experience conducting ground and flight tests for decelerator systems, including test planning, instrument integration, data analysis, and anomaly investigation.
Expertise with textile materials (e.g., F-111, S-P fabric, Kevlar, Dyneema).
Ability to work hands-on with sewing machines and ground test fixtures.
Solid teamwork and relationship-building skills, with the ability to effectively communicate difficult technical problems and solutions to other engineering disciplines.
Preferred Experience and Skills:
Experience with guided parachute systems.
Familiarity with FAA coordination for flight testing in and out of controlled airspace.
Experience with pattern design tools such as SpaceCAD, Lectra Modaris, or similar.
Additional Requirements:
Willing to work extended hours as needed.
Able to stand for extended periods of time.
Able to occasionally travel (~25%) and support off-site testing.
ITAR Requirements:
To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State.
System Engineer (Managed Service Provider)
Requirements engineer job in Costa Mesa, CA
We are a long-established Southern California Managed Service Provider supporting SMB clients across Los Angeles and Orange County with proactive IT, cybersecurity, cloud solutions, and hands-on guidance. Our team is known for strong client relationships and clear communication, and we take a steady, service-first approach to solving problems the right way.
We are hiring a Tier 3 Systems Engineer to be the L3 escalation point and technical backstop for complex issues across diverse client environments. This role requires previous MSP experience and is ideal for someone who enjoys deep troubleshooting, ownership, and reducing repeat issues by getting to root cause. Expect about 75 percent escalations and 25 percent project work tied to recurring client needs.
What You Will Do
• Own Tier 3 escalations across servers, networking, virtualization, and Microsoft 365
• Troubleshoot deeply and drive root-cause fixes
• Handle SonicWall, VLAN, NAT, and site-to-site VPN work
• Support Windows Server AD, GPO, DNS, and DHCP
• Support VMware ESXi/vSphere and Hyper-V
• Lead Microsoft 365 escalations and hardening
• Document clearly and communicate client ready updates
What You Bring
• 5+ years of MSP experience supporting multiple client environments
• Strong troubleshooting skills and ownership of escalations
• SonicWall expertise plus strong VLAN and VPN skills
• Windows Server 2012 through 2022
• VMware and/or Hyper-V
• Microsoft 365 plus Intune fundamentals
• Azure and Entra ID security configuration
• ConnectWise Command and ConnectWise Manage preferred
Location, Pay, and Benefits
• $95,000 to $105,000 DOE
• Hybrid after onboarding
• Medical, dental, vision
• 401(k) with 3% company match
• PTO and sick time plus paid holidays
• Mileage reimbursement
System Engineer
Requirements engineer job in Costa Mesa, CA
Must Have Technical/Functional Skills
1. Basic to moderate understanding of Salesforce.
2. Strong understanding of SQL, including the ability to write and analyze queries.
3. Familiarity with Linux file systems and basic commands.
4. Experience with, or an understanding of, ServiceNow for IT service management.
5. Basic knowledge of ETL (Extract, Transform, Load) processes and tools.
6. Proficiency in working with XML and JSON file formats.
7. Experience interacting with RESTful APIs.
8. Familiarity with Splunk for searching, monitoring, and analyzing machine-generated data.
9. Experience with API testing and development tools such as Postman.
Roles & Responsibilities
1. Collaborates with client teams to improve their technical knowledge and understanding of products.
2. Assists client teams with coordinating the scheduling of project processing steps and with using priority requests appropriately.
3. Acts as a technical resource to sales personnel on existing accounts and runs tests for prospective accounts on behalf of client teams.
4. Documents changes and additions to internal technical processes and client-specific projects, and disseminates that information to the appropriate personnel.
5. Interprets highly complex client specifications and instructions for Technical Solutions personnel and explains how to apply theory appropriately in practice.
6. Defines and implements quality control and troubleshooting standards and procedures for the department.
7. Creates and provides the necessary quality control reports, output files, and summarized data reports.
Salary Range: $110,000 to $115,000 a year.