
Data engineer jobs in Eastvale, CA

- 1,878 jobs
All
Data Engineer
Data Scientist
Software Engineer
Senior Software Engineer
Data Consultant
Senior Data Architect
Requirements Engineer
Senior Systems Developer
Devops Engineer
  • Data Scientist

    Stand 8 Technology Consulting

    Data engineer job in Long Beach, CA

    STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India. We are seeking a highly analytical and technically skilled Data Scientist to transform complex, multi-source data into unified, actionable insights used for executive reporting and decision-making. This role requires expertise in business intelligence design, data modeling, metadata management, data integrity validation, and the development of dashboards, reports, and analytics used across operational and strategic environments. The ideal candidate thrives in a fast-paced environment, demonstrates strong investigative skills, and can collaborate effectively with technical teams, business stakeholders, and leadership.

    Essential Duties & Responsibilities
    • Participate across the full solution lifecycle: business case, planning, design, development, testing, migration, and production support.
    • Analyze large and complex datasets with accuracy and attention to detail.
    • Collaborate with users to develop effective metadata and data relationships.
    • Identify reporting and dashboard requirements across business units.
    • Determine strategic placement of business logic within ETL or metadata models.
    • Build enterprise data warehouse metadata/semantic models.
    • Design and develop unified dashboards, reports, and data extractions from multiple data sources.
    • Develop and execute testing methodologies for reports and metadata models.
    • Document BI architecture, data lineage, and project report requirements.
    • Provide technical specifications and data definitions to support the enterprise data dictionary.
    • Apply analytical skills and data science techniques to understand business processes, financial calculations, data flows, and application interactions.
    • Identify and implement improvements, workarounds, or alternative solutions related to ETL processes, ensuring integrity and timeliness.
    • Create UI components or portal elements (e.g., SharePoint) for dynamic or interactive stakeholder reporting.
    • Download and process SQL database information to build Power BI or Tableau reports (including cybersecurity awareness campaigns).
    • Utilize SQL, Python, R, or similar languages for data analysis and modeling.
    • Support process optimization through advanced modeling.

    Required Knowledge & Attributes
    • Highly self-motivated, with strong organizational skills and the ability to manage multiple verbal and written assignments.
    • Experience collaborating across organizational boundaries for data sourcing and usage.
    • Analytical understanding of business processes, forecasting, capacity planning, and data governance.
    • Proficiency with BI tools (Power BI, Tableau, PBIRS, SSRS, SSAS).
    • Strong Microsoft Office skills (Word, Excel, Visio, PowerPoint).
    • High attention to detail and accuracy.
    • Ability to work independently, demonstrate ownership, and ensure high-quality outcomes.
    • Strong communication, interpersonal, and stakeholder engagement skills.
    • Deep understanding that data integrity and consistency are essential for adoption and trust.
    • Ability to shift priorities and adapt within fast-paced environments.

    Required Education & Experience
    • Bachelor's degree in Computer Science, Mathematics, or Statistics (or equivalent experience).
    • 3+ years of BI development experience.
    • 3+ years with Power BI and supporting Microsoft stack tools (SharePoint 2019, PBIRS/SSRS, Excel 2019/2021).
    • 3+ years of experience with SDLC/project lifecycle processes.
    • 3+ years of experience with data warehousing methodologies (ETL, data modeling).
    • 3+ years of VBA experience in Excel and Access.
    • Strong ability to write SQL queries and work with SQL Server 2017-2022.
    • Experience with BI tools including PBIRS, SSRS, SSAS, and Tableau.
    • Strong analytical skills in business processes, financial modeling, forecasting, and data flow understanding.
    • Critical thinking and problem-solving capabilities.
    • Experience producing high-quality technical documentation and presentations.
    • Excellent communication and presentation skills, with the ability to explain insights to leadership and business teams.

    Benefits
    • Medical coverage and Health Savings Account (HSA) through Anthem
    • Dental/vision/various ancillary coverages through Unum
    • 401(k) retirement savings plan
    • Paid-time-off options
    • Company-paid Employee Assistance Program (EAP)
    • Discount programs through ADP WorkforceNow

    Additional Details
    The base range for this contract position is $73-$83 per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Qualified applicants with arrest or conviction records will be considered.

    About Us
    STAND 8 provides end-to-end IT solutions to enterprise partners across the United States and globally, with offices in Los Angeles, Atlanta, New York, Mexico, Japan, India, and more. STAND 8 focuses on the "bleeding edge" of technology and leverages automation, process, marketing, and over fifteen years of success and growth to provide a world-class experience for our customers, partners, and employees. Our mission is to impact the world positively by creating success through PEOPLE, PROCESS, and TECHNOLOGY. Check out more at ************** and reach out today to explore opportunities to grow together! By applying to this position, your data will be processed in accordance with the STAND 8 Privacy Policy.
    $73-83 hourly 1d ago
  • Senior Data Consultant - Supply Chain Planning

    Bristlecone (3.9 company rating)

    Data engineer job in Corona, CA

    🚀 We're Hiring: Senior Data Consultant (Supply Chain Planning)

    Bristlecone, a Mahindra company, is a leading supply chain and business analytics advisor, rated by Gartner as one of the top ten system integrators in the supply chain space. We have been a trusted partner to global enterprises such as Applied Materials, Exxon Mobil, Flextronics, Nestle, Unilever, Whirlpool, and many others.

    🔍 Project Overview
    We are looking for a strong Data Consultant to support our planning projects. The ideal candidate will have a solid understanding of planning processes and data management within a supply chain or business planning environment. While deep configuration knowledge of SAP IBP is not mandatory, the consultant must have a strong grasp of planning data, business rules, and their impact on planning outcomes. This is a strategic initiative aimed at transforming planning processes across raw materials, finished goods, and packaging materials. You'll be the go-to expert for managing end-to-end planning data across SAP IBP and ECC systems (SD, MM, PP).

    🛠️ Key Responsibilities
    • Collaborate with planning teams to analyze, validate, and manage data relevant to planning processes.
    • Demonstrate a clear understanding of basic planning functionalities and how data supports them.
    • Identify, define, and manage data elements that impact demand, supply, and inventory planning.
    • Understand and document business rules and prerequisites related to data maintenance and planning accuracy.
    • Coordinate data collection activities from super users and end users across multiple functions.
    • Support data readiness for project milestones, including testing, validation, and go-live.
    • Explain how different data elements influence planning outcomes to non-technical stakeholders.
    • Work closely with functional and technical teams to ensure data integrity and consistency across systems.

    Required Skills & Qualifications
    • Strong understanding of planning processes (demand, supply, or S&OP).
    • Proven experience working with planning master data (e.g., product, location, BOM, resources).
    • Ability to analyze complex datasets and identify inconsistencies or dependencies.
    • Excellent communication and coordination skills with cross-functional teams.
    • Exposure to SAP IBP, APO, or other advanced planning tools (preferred but not mandatory).
    • Strong business acumen with the ability to link data quality to planning outcomes.
    • 5-10 years of relevant experience in data management, planning, or supply chain roles.

    Preferred Qualifications
    • Experience with large-scale planning transformation or ERP implementation projects.
    • Knowledge of data governance and data quality frameworks.
    • Experience working with super users/end users for data validation and readiness.

    Privacy Notice Declarations for California-based candidates/jobs: ********************************************************
    $85k-113k yearly est. 1d ago
  • Data Engineer

    Robert Half Recruiting (4.5 company rating)

    Data engineer job in Culver City, CA

    Robert Half is partnering with a well-known high-tech company seeking an experienced Data Engineer with strong Python and SQL skills. The primary duties involve managing the complete data lifecycle and utilizing extensive datasets across marketing, software, and web platforms. This position is full time with full benefits and 3 days onsite in the Culver City area.

    Responsibilities:
    • Work heavily with SQL and program in Python.
    • Take ownership of the entire data lifecycle, including collection, extraction, and cleansing processes.
    • Build reports and data visualizations to help advance the business.
    • Leverage industry-standard tools for data integration, such as Talend.
    • Work extensively within cloud-based ecosystems such as AWS and GCP.

    Requirements:
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 4+ years of professional experience, ideally in a combination of data engineering and business intelligence.
    • 5+ years of experience in data engineering, data warehousing, and big data technologies.
    • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL technologies.
    • Experience working within GCP and AWS environments.
    • Experience with real-time data pipeline tools.
    • Hands-on expertise with Google Cloud services, including BigQuery.
    • Deep knowledge of SQL, including dimension tables, and experience in Python programming.
    $116k-165k yearly est. 4d ago
  • Senior Data Engineer - Snowflake / ETL (Onsite)

    CGS Business Solutions (4.7 company rating)

    Data engineer job in Beverly Hills, CA

    CGS Business Solutions is committed to helping you, as an esteemed IT professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking technical professionals who are searching for challenging and rewarding jobs for the following opportunity.

    Summary
    CGS is hiring a Senior Data Engineer to serve as a core member of the Platform team. This is a high-impact role responsible for advancing our foundational data infrastructure. Your primary mission will be to build key components of our Policy Journal - the central source of truth for all policy, commission, and client accounting data. You'll work closely with the Lead Data Engineer and business stakeholders to translate complex requirements into scalable data models and reliable pipelines that power analytics and operational decision-making for agents, managers, and leadership. This role blends greenfield engineering, strategic modernization, and a strong focus on delivering trusted, high-quality data products.

    Overview
    • Build the Policy Journal - Design and implement the master data architecture unifying policy, commission, and accounting data from sources like IVANS and Applied EPIC to create the platform's "gold record."
    • Ensure Data Reliability - Define and implement data quality checks, monitoring, and alerting to guarantee accuracy, consistency, and timeliness across pipelines, while contributing to best practices in governance.
    • Build the Analytics Foundation - Enhance and scale our analytics stack (Snowflake, dbt, Airflow), transforming raw data into clean, performant dimensional models for BI and operational insights.
    • Modernize Legacy ETL - Refactor our existing Java + SQL (PostgreSQL) ETL system: diagnose duplication and performance issues, rewrite critical components in Python, and migrate orchestration to Airflow.
    • Implement Data Quality Frameworks - Develop automated testing and validation frameworks aligned with our QA strategy to ensure accuracy, completeness, and integrity across pipelines.
    • Collaborate on Architecture & Design - Partner with product and business stakeholders to deeply understand requirements and design scalable, maintainable data solutions.

    Ideal Experience
    • 5+ years of experience building and operating production-grade data pipelines.
    • Expert-level proficiency in Python and SQL.
    • Hands-on experience with the modern data stack (Snowflake/Redshift, Airflow, dbt, etc.).
    • Strong understanding of AWS data services (S3, Glue, Lambda, RDS).
    • Experience working with insurance or insurtech data (policies, commissions, claims, etc.).
    • Proven ability to design robust data models (e.g., dimensional modeling) for analytics.
    • Pragmatic problem-solver capable of analyzing and refactoring complex legacy systems (ability to read Java/Hibernate is a strong plus, but no new Java coding is required).
    • Excellent communicator comfortable working with both technical and non-technical stakeholders.

    Huge Plus!
    • Direct experience with agency management systems (Applied EPIC, Nowcerts, EZLynx, etc.)
    • Familiarity with carrier data formats (ACORD XML, IVANS AL3)
    • Experience with BI tools (Tableau, Looker, Power BI)

    About CGS Business Solutions
    CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus on IT Applications, Network Infrastructure, Information Security, and Engineering. CGS is an Inc. 5000 company and is honored to be selected as one of the Best IT Recruitment Firms in California. After five consecutive Fastest Growing Company titles, CGS continues to break into new markets across the USA. Companies are counting on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
    $99k-136k yearly est. 23h ago
  • Principal Data Scientist

    Hiretalent-Staffing & Recruiting Firm

    Data engineer job in Alhambra, CA

    The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.

    The Principal Data Scientist will possess:
    • Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
    • Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
    • Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
    • Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
    • A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
    • A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
    • Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.

    Required Experience
    • 5+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
    • 3+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
    • 3+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
    • 2+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
    • 2+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
    • 2+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
    • 2+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
    • 1+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.

    Education
    This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis.

    Certifications
    At least one of the following industry-recognized certifications in data science or cloud analytics:
    • Microsoft Azure Data Scientist Associate (DP-100)
    • Databricks Certified Data Scientist or Machine Learning Professional
    • AWS Machine Learning Specialty
    • Google Professional Data Engineer
    • or equivalent advanced analytics certifications
    The certification is required and may not be substituted with additional experience.
    $97k-141k yearly est. 3d ago
  • Data Scientist

    US Tech Solutions (4.4 company rating)

    Data engineer job in Alhambra, CA

    Title: Principal Data Scientist
    Duration: 12-month contract

    Additional Information
    California resident candidates only. This position is hybrid (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.

    Job Description
    The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.

    The Principal Data Scientist will possess:
    • Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
    • Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
    • Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
    • Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
    • A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
    • A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
    • Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.

    Experience Required
    • Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
    • Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
    • Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
    • Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
    • Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
    • Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
    • Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
    • One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.

    Education & Certifications Required
    This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis.

    At least one of the following industry-recognized certifications in data science or cloud analytics:
    • Microsoft Azure Data Scientist Associate (DP-100)
    • Databricks Certified Data Scientist or Machine Learning Professional
    • AWS Machine Learning Specialty
    • Google Professional Data Engineer
    • or equivalent advanced analytics certifications
    The certification is required and may not be substituted with additional experience.

    About US Tech Solutions
    US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************

    US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

    Recruiter Details
    Name: T Saketh Ram Sharma
    Email: *****************************
    Internal ID: 25-54101
    $92k-133k yearly est. 3d ago
  • Senior Data Engineer

    Akube

    Data engineer job in Glendale, CA

    City: Glendale, CA
    Onsite/Hybrid/Remote: Hybrid (3 days a week onsite; Friday remote)
    Duration: 12 months
    Rate Range: Up to $85/hr on W2, depending on experience (no C2C, 1099, or subcontract)
    Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B

    Must Have:
    • 5+ years of data engineering
    • Airflow
    • Spark DataFrame API
    • Databricks
    • SQL
    • API integration
    • AWS
    • Python, Java, or Scala

    Responsibilities:
    • Maintain, update, and expand Core Data platform pipelines.
    • Build tools for data discovery, lineage, governance, and privacy.
    • Partner with engineering and cross-functional teams to deliver scalable solutions.
    • Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
    • Support platform standards, best practices, and documentation.
    • Ensure data quality, reliability, and SLA adherence across datasets.
    • Participate in Agile ceremonies and continuous process improvement.
    • Work with internal customers to understand needs and prioritize enhancements.
    • Maintain detailed documentation that supports governance and quality.

    Qualifications:
    • 5+ years in data engineering with large-scale pipelines.
    • Strong SQL and one major programming language (Python, Java, or Scala).
    • Production experience with Spark and Databricks.
    • Experience ingesting and interacting with API data sources.
    • Hands-on Airflow orchestration experience.
    • Experience developing APIs with GraphQL.
    • Strong AWS knowledge and infrastructure-as-code familiarity.
    • Understanding of OLTP vs. OLAP, data modeling, and data warehousing.
    • Strong problem-solving and algorithmic skills.
    • Clear written and verbal communication.
    • Agile/Scrum experience.
    • Bachelor's degree in a STEM field or equivalent industry experience.
    $85 hourly 1d ago
  • Lead Data Engineer - (Automotive exp)

    Intelliswift-An LTTS Company

    Data engineer job in Torrance, CA

    Role: Sr. Technical Lead
    Duration: 12+ month contract

    Daily Tasks Performed:
    • Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product.
    • Architect solutions that integrate with various data sources, APIs, and third-party platforms.
    • Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis.
    • Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services.
    • Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions.
    • Implement data quality checks, logging, and alerting mechanisms within workflows.
    • Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities.
    • Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation.
    • Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA).
    • Ensure adherence to security, compliance, and data governance requirements.
    • Oversee development of real-time and batch data processing systems.
    • Collaborate with cross-functional teams, including data analysts, product managers, and software engineers, to translate business requirements into technical solutions.
    • Collaborate with stakeholders to define technical requirements, align technical solutions with business goals, and deliver product features.
    • Mentor and guide developers, fostering a culture of technical excellence and continuous improvement.
    • Troubleshoot complex technical issues and provide hands-on support as needed.
    • Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency as needed.
    • Optimize system performance, scalability, and cost efficiency.

    What this person will be working on:
    As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data.

    Position Success Criteria (Desired) - "WANTS":
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 8+ years of software development experience, with at least 3+ years in a technical leadership role.
    • Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains.
    • Extensive hands-on experience with Presto, Hive, and Python.
    • Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis.
    • Familiarity with AWS data services such as S3, Athena, Glue, and Lambda.
    • Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing.
    • Experience ensuring data privacy, security, and compliance in SaaS environments.
    • Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools.
    • Excellent communication, leadership, and project management skills.
    • Experience working with Agile methodologies and DevOps practices.
    • Ability to thrive in a fast-paced, agile environment.
    • Collaborative mindset with a proactive approach to problem-solving.
    • Stays current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
    $100k-141k yearly est. 4d ago
  • Data Engineer (AWS Redshift, BI, Python, ETL)

    Prosum (4.4 company rating)

    Data engineer job in Manhattan Beach, CA

    We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.

    Responsibilities:
    • Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
    • Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
    • Optimize data workflows for performance, scalability, and reliability.
    • Collaborate with BI teams to support dashboards, reporting, and analytics needs.
    • Ensure data quality, governance, and documentation across all solutions.

    Qualifications:
    • Proven experience with data engineering tools (SQL, Python, ETL frameworks).
    • Strong understanding of BI concepts, reporting tools, and dimensional modeling.
    • Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
    • Excellent problem-solving skills and the ability to work in a cross-functional environment.
    $99k-139k yearly est. 23h ago
  • Lead Data Scientist

    TPI Global Solutions (4.6 company rating)

    Data engineer job in Alhambra, CA

    Role: Principal Data Scientist Duration: 12+ Months contract The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms including Databricks, or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship. The Principal Data Scientist will possess exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights; strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles; working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams; outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel; a highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight; a meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables; and proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts. Required Experience • Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments. 
• Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration. • Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing). • Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring. • Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms). • Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development. • Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases. • One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices. Education This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics, such as: • Microsoft Azure Data Scientist Associate (DP-100) • Databricks Certified Data Scientist or Machine Learning Professional • AWS Machine Learning Specialty • Google Professional Data Engineer • or equivalent advanced analytics certifications. 
The certification is required and may not be substituted with additional experience. Additional Information • California Resident Candidates Only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
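As a rough illustration of the "outlier detection ... and core data analysis principles" this posting lists among its foundational skills, here is a minimal, dependency-free sketch of Tukey's IQR-fence method. The function name, sample data, and k=1.5 threshold are illustrative assumptions, not part of the posting:

```python
def iqr_outliers(values, k=1.5):
    """Return values outside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR]."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):  # linear-interpolation quantile estimate
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

print(iqr_outliers([10, 12, 11, 13, 12, 95]))  # -> [95]
```

In practice this logic would run against Databricks/Spark columns rather than Python lists, but the fence calculation is the same.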
    $90k-125k yearly est. 4d ago
  • Snowflake/AWS Data Engineer

    Ostechnical

    Data engineer job in Irvine, CA

    Sr. Data Engineer: Full-Time, Direct-Hire, Hybrid, with work location in Irvine, CA. The Senior Data Engineer will help design and build a modern data platform that supports enterprise analytics, integrations, and AI/ML initiatives. This role focuses on developing scalable data pipelines, modernizing the enterprise data warehouse, and enabling self-service analytics across the organization. Key Responsibilities • Build and maintain scalable data pipelines using Snowflake, dbt, and Fivetran. • Design and optimize enterprise data models for performance and scalability. • Support data cataloging, lineage, quality, and compliance efforts. • Translate business and analytics requirements into reliable data solutions. • Use AWS (primarily S3) for storage, integration, and platform reliability. • Perform other data engineering tasks as needed. Required Qualifications • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field. • 5+ years of data engineering experience. • Hands-on expertise with Snowflake, dbt, and Fivetran. • Strong background in data warehousing, dimensional modeling, and SQL. • Experience with AWS (S3) and data governance tools such as Alation or Atlan. • Proficiency in Python for scripting and automation. • Experience with streaming technologies (Kafka, Kinesis, Flink) a plus. • Knowledge of data security and compliance best practices. • Exposure to AI/ML workflows and modern BI tools like Power BI, Tableau, or Looker. • Ability to mentor junior engineers. Skills • Snowflake • dbt • Fivetran • Data modeling and warehousing • AWS • Data governance • SQL • Python • Strong communication and cross-functional collaboration • Interest in emerging data and AI technologies
    $99k-139k yearly est. 2d ago
  • Data Engineer

    Vaco By Highspring

    Data engineer job in Irvine, CA

    Job Title: Data Engineer Duration: Direct-Hire Opportunity We are looking for a Data Engineer who is hands-on, collaborative, and experienced with Microsoft SQL Server, Snowflake, AWS RDS, and MySQL. The ideal candidate has a strong background in data warehousing, data lakes, ETL pipelines, and business intelligence tools. This role plays a key part in executing data strategy - driving optimization, reliability, and scalable BI capabilities across the organization. It's an excellent opportunity for a data professional who wants to influence architectural direction, contribute technical expertise, and grow within a data-driven company focused on innovation. Key Responsibilities Design, develop, and maintain SQL Server and Snowflake data warehouses and data lakes, focusing on performance, governance, and security. Manage and optimize database solutions within Snowflake, SQL Server, MySQL, and AWS RDS. Build and enhance ETL pipelines using tools such as Snowpipe, DBT, Boomi, SSIS, and Azure Data Factory. Utilize data tools such as SSMS, Profiler, Query Store, and Redgate for performance tuning and troubleshooting. Perform database administration tasks, including backup, restore, and monitoring. Collaborate with Business Intelligence Developers and Business Analysts on enterprise data projects. Ensure database integrity, compliance, and adherence to best practices in data security. Configure and manage data integration and BI tools such as Power BI, Tableau, Power Automate, and scripting languages (Python, R). Qualifications Proficiency with Microsoft SQL Server, including advanced T-SQL development and optimization. 7+ years working as a SQL Server Developer/Administrator, with experience in relational and object-oriented databases. 2+ years of experience with Snowflake data warehouse and data lake solutions. Experience developing pipelines and reporting solutions using Power BI, SSRS, SSIS, Azure Data Factory, or DBT. 
    Scripting and automation experience using Python, PowerShell, or R. Familiarity with data integration and analytics tools such as Boomi, Redshift, or Databricks (a plus). Excellent communication, problem-solving, and organizational skills. Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field. Technical Skills: SQL Server / Snowflake / MySQL / AWS RDS; ETL Development (Snowpipe, SSIS, Azure Data Factory, DBT); BI Tools (Power BI, Tableau); Python, R, PowerShell; Data Governance & Security Best Practices. Determining compensation for this role (and others) at Vaco/Highspring depends upon a wide array of factors including but not limited to the individual's skill sets, experience and training, licensure and certifications, office location and other geographic considerations, as well as other business and organizational needs. With that said, as required by local law in geographies that require salary range disclosure, the salary range for the role is noted in this job posting. The individual may also be eligible for discretionary bonuses, and can participate in medical, dental, and vision benefits as well as the company's 401(k) retirement plan. Additional disclaimer: Unless otherwise noted in the job description, the position Vaco/Highspring is filling is occupied. Please note, however, that Vaco/Highspring is regularly asked to provide talent to other organizations. By submitting to this position, you are agreeing to be included in our talent pool for future hiring for similarly qualified positions. Submissions to this position are subject to the use of AI to perform preliminary candidate screenings, focused on ensuring minimum job requirements noted in the position are satisfied. Candidates who advance beyond this initial phase will be further assessed by Vaco/Highspring recruiters and hiring managers. 
Vaco/Highspring does not have knowledge of the tools used by its clients in making final hiring decisions and cannot opine on their use of AI products.
    $99k-139k yearly est. 23h ago
  • Data Analytics Engineer

    Archwest Capital

    Data engineer job in Irvine, CA

    We are seeking a Data Analytics Engineer to join our team, serving as a hybrid Database Administrator, Data Engineer, and Data Analyst responsible for managing core data infrastructure, developing and maintaining ETL pipelines, and delivering high-quality analytics and visual insights to executive stakeholders. This role bridges technical execution with business intelligence, ensuring that data across Salesforce, financial, and operational systems is accurate, accessible, and strategically presented. Essential Functions Database Administration: Oversee and maintain database servers, ensuring performance, reliability, and security. Manage user access, backups, and data recovery processes while optimizing queries and database operations. Data Engineering (ELT): Design, build, and maintain robust ELT pipelines (SQL/DBT or equivalent) to extract, transform, and load data across Salesforce, financial, and operational sources. Ensure data lineage, integrity, and governance throughout all workflows. Data Modeling & Governance: Design scalable data models and maintain a governed semantic layer and KPI catalog aligned with business objectives. Define data quality checks, SLAs, and lineage standards to reconcile analytics with finance source-of-truth systems. Analytics & Reporting: Develop and manage executive-facing Tableau dashboards and visualizations covering key lending and operational metrics - including pipeline conversion, production, credit quality, delinquency/charge-offs, DSCR, and LTV distributions. Presentation & Insights: Translate complex datasets into clear, compelling stories and presentations for leadership and cross-functional teams. Communicate findings through visual reports and executive summaries to drive strategic decisions. Collaboration & Integration: Partner with Finance, Capital Markets, and Operations to refine KPIs and perform ad-hoc analyses. 
Collaborate with Engineering to align analytical and operational data, manage integrations, and support system scalability. Enablement & Training: Conduct training sessions, create documentation, and host data office hours to promote data literacy and empower business users across the organization. Competencies & Skills Advanced SQL proficiency with strong data modeling, query optimization, and database administration experience (PostgreSQL, MySQL, or equivalent). Hands-on experience managing and maintaining database servers and optimizing performance. Proficiency with ETL/ELT frameworks (DBT, Airflow, or similar) and cloud data stacks (AWS/Azure/GCP). Strong Tableau skills - parameters, LODs, row-level security, executive-level dashboard design, and storytelling through data. Experience with Salesforce data structures and ingestion methods. Proven ability to communicate and present technical data insights to executive and non-technical stakeholders. Solid understanding of lending/financial analytics (pipeline conversion, delinquency, DSCR, LTV). Working knowledge of Python for analytics tasks, cohort analysis, and variance reporting. Familiarity with version control (Git), CI/CD for analytics, and data governance frameworks. Excellent organizational, documentation, and communication skills with a strong sense of ownership and follow-through. Education & Experience Bachelor's degree in Computer Science, Engineering, Information Technology, Data Analytics, or a related field. 3+ years of experience in data analytics, data engineering, or database administration roles. Experience supporting executive-level reporting and maintaining database infrastructure in a fast-paced environment.
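The "Python for analytics tasks, cohort analysis, and variance reporting" skill this posting asks for could look something like the following minimal cohort-retention sketch. The function, sample events, and month encoding are invented for illustration; a production version would read from the warehouse rather than hard-coded tuples:

```python
from collections import defaultdict

def cohort_retention(events):
    """events: (customer_id, signup_month, activity_month) tuples, months as
    (year, month). Returns {cohort: {months_since_signup: active_customer_count}}."""
    cohorts = defaultdict(lambda: defaultdict(set))
    for cust, signup, active in events:
        offset = (active[0] - signup[0]) * 12 + (active[1] - signup[1])
        cohorts[signup][offset].add(cust)  # count each customer once per offset
    return {cohort: {off: len(ids) for off, ids in sorted(buckets.items())}
            for cohort, buckets in cohorts.items()}

events = [  # invented sample activity
    ("a", (2024, 1), (2024, 1)), ("a", (2024, 1), (2024, 2)),
    ("b", (2024, 1), (2024, 1)),
    ("c", (2024, 2), (2024, 2)), ("c", (2024, 2), (2024, 3)),
]
print(cohort_retention(events))
# -> {(2024, 1): {0: 2, 1: 1}, (2024, 2): {0: 1, 1: 1}}
```

The January cohort starts with two customers and retains one into month 1; the same table, divided through by the month-0 count, becomes the familiar retention-percentage dashboard.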
    $99k-139k yearly est. 1d ago
  • Data Engineer

    RSM Solutions, Inc. (4.4 company rating)

    Data engineer job in Irvine, CA

    Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it. If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc and I have been recruiting technical talent for more than 23 years and been in the tech space since the 1990s. Due to this, I actually write JD's myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is. So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned that if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role. This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available. I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role. The Data Engineer role is similar to the Data Integration role I posted. 
    However, this one is more Ops-focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding & configuring on the ML side of the house. You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, so that data scientists can focus on experimentation and the business sees rapid, reliable predictive insight. Here are some of the main responsibilities: Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets. Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability. Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component. Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error-budget metrics to enable proactive alerting. Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process. Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks. Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals. Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor. 
    Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines. Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards. Here is what we are seeking: At least 6 years of experience building data pipelines in Spark or equivalent. At least 2 years deploying workloads on Kubernetes/Kubeflow. At least 2 years of experience with MLflow or similar experiment‑tracking tools. At least 6 years of experience in T‑SQL, Python/Scala for Spark. At least 6 years of PowerShell/.NET scripting. At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS. Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see. You will also mentor engineers on best practices in containerized data engineering and MLOps.
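As a rough illustration of the "automated data quality and schema drift checks" responsibility above, here is a minimal pure-Python sketch that diffs an expected column/type map against an observed one. All names and types are invented for illustration; a production check would pull the observed schema from live Spark or SQL Server metadata and feed the report into alerting:

```python
def schema_drift(expected, observed):
    """Diff an expected column->type map against an observed one."""
    missing = sorted(set(expected) - set(observed))        # columns that disappeared
    unexpected = sorted(set(observed) - set(expected))     # columns that appeared
    type_changed = sorted((col, expected[col], observed[col])
                          for col in set(expected) & set(observed)
                          if expected[col] != observed[col])
    return {"missing": missing, "unexpected": unexpected, "type_changed": type_changed}

# Invented example schemas
expected = {"order_id": "int", "amount": "decimal", "placed_at": "timestamp"}
observed = {"order_id": "int", "amount": "float", "channel": "varchar"}
print(schema_drift(expected, observed))
# -> {'missing': ['placed_at'], 'unexpected': ['channel'],
#     'type_changed': [('amount', 'decimal', 'float')]}
```

Any non-empty field in the report is an anomaly worth surfacing before a downstream model trains on the drifted data.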
    $111k-166k yearly est. 23h ago
  • Senior Data Engineer

    Kellymitchell Group (4.5 company rating)

    Data engineer job in Glendale, CA

    Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California. Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines. Build tools and services to support data discovery, lineage, governance, and privacy. Collaborate with other software and data engineers and cross-functional teams. Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS. Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform. Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more. Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics. Participate in agile and scrum ceremonies to collaborate and refine team processes. Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements. Maintain detailed documentation of work and changes to support data quality and data governance requirements. Desired Skills/Experience: 5+ years of data engineering experience developing large data pipelines. Proficiency in at least one major programming language such as Python, Java, or Scala. Strong SQL skills and the ability to create queries to analyze complex datasets. Hands-on production experience with distributed processing systems such as Spark. Experience interacting with and ingesting data efficiently from API data sources. Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks. Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines. Experience developing APIs with GraphQL. Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code. Familiarity with data modeling techniques and data warehousing best practices. Strong algorithmic problem-solving skills. Excellent written and verbal communication skills. Advanced understanding of OLTP versus OLAP environments. Benefits: Medical, Dental, & Vision Insurance Plans; Employee-Owned Profit Sharing (ESOP); 401K offered. The approximate pay range for this position is between $51.00 and $73.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
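To illustrate the OLTP-versus-OLAP distinction this posting calls out: an OLTP system records one row per transaction or event, while OLAP-style analytical queries aggregate across many rows. A minimal sketch using Python's built-in sqlite3 (the schema and sample data are invented; at this client the equivalent query would run over Spark/Databricks tables):

```python
import sqlite3

# OLTP side: one row per playback event.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plays (title TEXT, region TEXT, minutes INT)")
conn.executemany("INSERT INTO plays VALUES (?, ?, ?)", [
    ("ShowA", "US", 40), ("ShowA", "US", 35),
    ("ShowA", "EU", 50), ("ShowB", "US", 20),
])

# OLAP side: aggregate across events to answer an analytical question.
rows = conn.execute("""
    SELECT title, SUM(minutes) AS total_minutes, COUNT(*) AS plays
    FROM plays GROUP BY title ORDER BY total_minutes DESC
""").fetchall()
print(rows)  # -> [('ShowA', 125, 3), ('ShowB', 20, 1)]
```

OLTP schemas are tuned for fast single-row writes; OLAP warehouses denormalize and pre-partition the same data so scans and GROUP BYs like this stay fast at billions of rows.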
    $51-73 hourly 1d ago
  • Senior Data Architect

    Ispace, Inc.

    Data engineer job in Torrance, CA

    Title: Senior Data Architect Duration: 12 Months Pay rate: $90 per hour on W2 Daily Tasks Performed: Translates high-level business requirements into data models and appropriate metadata, test data, and data quality standards. Manages senior business stakeholders to secure strong engagement and ensures that the delivery of the project aligns with longer-term strategic roadmaps. Leads and participates in the peer review and quality assurance of project architectural artifacts. Defines and manages standards, guidelines, and processes to ensure data quality. Works with IT teams, business analysts, and data analytics teams to understand data consumers' needs and develop solutions. Evaluates and recommends emerging technologies for data management, storage, and analytics. Establishes and maintains governance frameworks for team and vendor partners to ensure the effectiveness of architecture services. What this person will be working on: Understand data confidentiality, security, and compliance needs, and apply data protection rules including data sharing, filtering, and fencing at the storage, compute, and consumption layers. Design data protection solutions at the database, table, and column level and for APIs, based on enterprise-standard data security, privacy, and architecture principles and reference architectures. Design the structure and layout of data systems, including databases, warehouses, and lakes. Select and implement database management systems that meet the organization's needs by defining data schemas, optimizing data storage, and establishing data access controls and security measures. Deliver exceptional business value by enhancing data pipeline performance, ensuring timely orchestration, and upholding data governance. Position Success Criteria (Desired) - 'WANTS': A bachelor's degree in computer science, data science, engineering, or related field. 
    At least 10 years of relevant experience in the design and implementation of data models (Erwin) for enterprise data warehouse initiatives. Experience leading projects involving cloud data lakes, data warehousing, data modeling, and data analysis. Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS), real-time data distribution (Kinesis, Kafka, Dataflow), and modern data warehouse tools (Redshift). Experience with various database platforms, including DB2, MS SQL Server, PostgreSQL, MongoDB, etc. Understanding of entity-relationship modeling, metadata systems, and data security and quality tools and techniques. Ability to design traditional relational, analytics, data lake, and lakehouse architectures based on business needs. Experience with business intelligence tools and technologies such as Informatica, Power BI, and Tableau. Exceptional communication and presentation skills. Strong analytical and problem-solving skills. Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders. Ability to guide solution design and architecture to meet business needs. If you're interested in the above role, please send your updated resume to *******************************
    $90 hourly 3d ago
  • Sr. Developer eCommerce Systems

    Pacsun (3.9 company rating)

    Data engineer job in Anaheim, CA

    Join the Pacsun Community Co-created in Los Angeles, Pacsun inspires the next generation of youth, building community at the intersection of fashion, music, art and sport. Pacsun is a leading lifestyle brand offering an exclusive collection of the most relevant brands and styles such as adidas, Brandy Melville, Essentials Fear of God, our own brands, and many more. Our Pacsun community believes in and understands the importance of using our voice, platform, and resources to inspire and bring about positive development. Through our PacCares program, we are committed to our responsibility in using our platform to drive change and take action on the issues important to our community. Join the Pacsun Community. Learn more here: LinkedIn - Our Community About the Job: Pacsun's IT eCommerce team uses AI and innovative technologies to enhance customer experience and improve operational efficiency. As a key member of the team, the Senior eCommerce Developer contributes to the architecture, development and optimization of the company's digital commerce experiences. This role is responsible for both back‑end and front-end development on Salesforce Commerce Cloud (SFCC), ensuring high‑performance, secure and accessible storefronts, with robust system integration in the eCommerce ecosystem. The Senior eCommerce Developer will lead end‑to‑end delivery of new features, mentor junior developers and the offshore team, and collaborate closely with UX, product, QA and business teams to create compelling online experiences that drive revenue and customer loyalty. This role will work on the full stack of Pacsun's Salesforce Commerce Cloud, mobile app, AI initiatives and system integrations, supporting Commerce, Loyalty, CRM, OMS, and other eCommerce platforms. A day in the life, what you'll be doing: Back‑End Development & Integration Design, build and maintain SFCC server‑side components, including controllers, pipelines, cartridges and custom business logic. 
Develop and manage robust APIs that connect SFCC with tax engines, payment processors, fraud management services and the order management system. Ensure reliable data synchronization between SFCC and external platforms such as CRM, Loyalty, OMS, ERP and analytics systems. Optimize database models, caching strategies and performance tuning to support high transaction volumes and peak traffic periods. Checkout & Transaction Optimization Own the end‑to‑end checkout experience, ensuring seamless, secure and performant workflows from cart to order confirmation. Integrate payment gateways and fraud protections to deliver accurate pricing and effortless transactions. Collaborate with UX and product teams to identify friction points in the checkout process and implement improvements that boost conversion and customer satisfaction. Tax, Shipping & OMS Integration Implement and maintain integrations with third‑party tax services to handle complex jurisdictional tax rules. Connect SFCC to shipping providers and fulfillment platforms to provide real‑time shipping options and tracking. Build and support integrations with the order management system to ensure accurate order routing, inventory updates and status synchronization. AI & Innovation Support Partner with data science and innovation teams to embed AI‑driven personalization, recommendation and search solutions into the platform. Develop integration points for machine‑learning models and real‑time personalization engines, ensuring data security and compliance. Prototype and implement new technologies that enhance the customer experience and streamline operations. Technical Leadership & Collaboration Lead code reviews, define backend architecture standards and mentor less experienced developers on integration patterns and best practices. Participate in IT management and technical teams to develop and deploy processes to ensure rapid, reliable releases. 
Work closely with product, UX, QA and DevOps teams to define requirements, plan sprints and deliver high‑quality software on schedule. What it takes to Join: 8+ years of experience in web development and at least 5 years focused on Salesforce Commerce Cloud and SFRA. Deep knowledge of modern front‑end technologies (HTML5, CSS3/SCSS, JavaScript, React or similar frameworks) and back‑end development (Node.js, Java or equivalent). Hands‑on experience with SFCC OCAPI/SCAPI, cartridge development, API integrations and Business Manager configurations. Proven track record integrating third‑party services (payments, tax, shipping, CRM, loyalty, analytics) and implementing secure, scalable solutions. Familiarity with Agile methodologies, version control (Git) and CI/CD pipelines. Strong understanding of web performance optimization, SEO and accessibility standards. Ability to lead discussions, mentor teammates and collaborate with technical teams. Bachelor's degree in Computer Science, Information Systems or related field; Salesforce B2C Commerce Developer certification is preferred. Salesforce Commerce Cloud SFRA certified developer is preferred. Proven ability to excel in fast-growing, dynamic business environments with competing priorities, with a positive, solution-oriented mindset. Excellent analytical and problem-solving skills. Salary Range: $149,000 - $159,000 Pac Perks: Dog friendly office environment On-site Cafe On-site Gym $1,000 referral incentive program Generous associate discount of 30-50% off merchandise online and in-stores Competitive long term and short-term incentive program Immediate 100% vested 401K contributions and employer match Calm Premium access for all employees Employee perks throughout the year Physical Requirements: The physical demands described here are representative of those that are required by an associate to successfully perform the essential functions of this job. 
While performing the duties of this job, the associate is regularly required to talk or hear. The associate is frequently required to sit; stand; walk; use hands to finger, handle or feel; as well as reach with hands and arms. Specific vision abilities required by this job include close vision, distance vision, depth perception and ability to adjust focus. Ability to work in open environment with fluctuating temperatures and standard lighting. Ability to work on computer and mobile phone for multiple hours; with frequent interruptions. Required to travel in elevator or stairwells to attend meetings and engage with associates on multiple floors throughout building. Hotel, Airplane, and Car Travel may be required. Position Type/Expected Hours of Work: This is a full-time position. As a National Retailer, occasional evening and/or weekend work may be required during periods of high volume. This role operates in a professional office environment and routinely uses standard office equipment. Other Considerations: Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the associate for this job. Duties, responsibilities and activities may change at any time with or without notice. Reasonable accommodations may be made to qualified individuals with disabilities to enable them to perform the essential functions of the role. Equal Opportunity Employer This employer is required to notify all applicants of their rights pursuant to federal employment laws. For further information, please review the Know Your Rights notice from the Department of Labor.
    $149k-159k yearly 2d ago
  • DevOps Engineer

    Evona

    Data engineer job in Irvine, CA

    DevOps Engineer - Satellite Technology Onsite in Irvine, CA or Washington, DC Pioneering Space Technology | Secure Cloud | Mission-Critical Systems We're working with a leading organization in the satellite technology sector, seeking a DevOps Engineer to join their growing team. You'll play a key role in shaping, automating, and securing the software infrastructure that supports next-generation space missions. This is a hands-on role within a collaborative, high-impact environment, ideal for someone who thrives on optimizing cloud performance and supporting mission-critical operations in aerospace. What You'll Be Doing Maintain and optimize AWS cloud environments, implementing security updates and best practices Manage daily operations of Kubernetes clusters and ensure system reliability Collaborate with cybersecurity teams to ensure full compliance across AWS infrastructure Support software deployment pipelines and infrastructure automation using Terraform and CI/CD tools Work cross-functionally with teams including satellite operations, software analytics, and systems engineering Troubleshoot and resolve environment issues to maintain uptime and efficiency Apply an “Infrastructure as Code” approach to all system development and management What You'll Bring Degree in Computer Science or a related field 2-3 years' experience with Kubernetes and containerized environments 3+ years' Linux systems administration experience Hands-on experience with cloud services (AWS, GCP, or Azure) Strong understanding of Terraform and CI/CD pipeline tools (e.g. FluxCD, Argo) Skilled in Python or Go Familiarity with software version control systems Solid grounding in cybersecurity principles (networking, authentication, encryption, firewalls) Eligibility to obtain a U.S. 
Security Clearance Preferred: Certified Kubernetes Administrator or Developer AWS Certified Security credentials This role offers the chance to make a tangible impact in the satellite and space exploration sector, joining a team that's building secure, scalable systems for mission success. If you're passionate about space, cloud infrastructure, and cutting-edge DevOps practices-this is your opportunity to be part of something extraordinary.
    $98k-133k yearly est. 3d ago
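The "Infrastructure as Code" approach the listing above calls for can be sketched in a few lines: instead of hand-editing YAML, a deployment manifest is built, validated, and unit-tested in code. This is only an illustrative sketch, not this employer's actual tooling; the service name and image are hypothetical, and a real pipeline would feed the result through Terraform or a GitOps tool like FluxCD.

```python
import json

def make_deployment(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a Python dict.

    Keeping manifests in code lets them be versioned, reviewed, and
    unit-tested before anything touches the cluster.
    """
    if replicas < 1:
        raise ValueError("a Deployment needs at least one replica")
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Hypothetical service; render the manifest as JSON (valid YAML input too).
manifest = make_deployment("telemetry-api", "registry.example.com/telemetry-api:1.4", replicas=3)
print(json.dumps(manifest, indent=2))
```

A CI job could assert invariants (replica counts, image registries, required labels) on such manifests before deployment, which is the "testable infrastructure" idea in miniature.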
  • Snowflake DBT Engineer - CDC5697451

    Compunnel Inc. (4.4 company rating)

    Data engineer job in Irvine, CA

    Key Responsibilities
    - Design, develop, and maintain ELT pipelines using Snowflake and DBT
    - Build and optimize data models in Snowflake to support analytics and reporting
    - Implement modular, testable SQL transformations using DBT
    - Integrate DBT workflows into CI/CD pipelines and manage infrastructure as code using Terraform
    - Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions
    - Optimize Snowflake performance through clustering, partitioning, indexing, and materialized views
    - Automate data ingestion and transformation workflows using Airflow or similar orchestration tools
    - Ensure data quality, governance, and security across pipelines
    - Troubleshoot and resolve performance bottlenecks and data issues
    - Maintain documentation for data architecture, pipelines, and operational procedures

    Required Skills & Qualifications
    - Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
    - 10 years of experience in data engineering, with at least 3 years focused on Snowflake and DBT
    - Strong proficiency in SQL and Python
    - Experience with cloud platforms (AWS, GCP, or Azure)
    - Familiarity with Git, CI/CD, and Infrastructure as Code tools (Terraform, CloudFormation)
    - Knowledge of data modeling (star schema, normalization) and ELT best practices
    $92k-118k yearly est. 23h ago
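The "modular, testable SQL transformations" and data-quality duties in the listing above follow a common ELT pattern: build a cleaned model from raw data, then run automated checks (DBT's `not_null` and `unique` tests are the canonical examples). Here is a toy sketch of that pattern with SQLite standing in for Snowflake; the table and column names are hypothetical, not from any real pipeline.

```python
import sqlite3

# Raw landing table with the usual defects: a duplicate, a NULL amount,
# and a row missing its key entirely.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id TEXT, amount REAL);
    INSERT INTO raw_orders VALUES
        ('A1', 10.0), ('A1', 10.0), ('A2', NULL), (NULL, 5.0);

    -- The "model": drop keyless rows, deduplicate, default missing amounts.
    CREATE TABLE stg_orders AS
        SELECT order_id, COALESCE(amount, 0.0) AS amount
        FROM raw_orders
        WHERE order_id IS NOT NULL
        GROUP BY order_id;
""")

# Data-quality gates analogous to DBT's not_null and unique tests.
null_keys = conn.execute(
    "SELECT COUNT(*) FROM stg_orders WHERE order_id IS NULL").fetchone()[0]
dupes = conn.execute(
    "SELECT COUNT(*) FROM (SELECT order_id FROM stg_orders"
    " GROUP BY order_id HAVING COUNT(*) > 1)").fetchone()[0]
assert null_keys == 0 and dupes == 0, "quality gate failed"

print(sorted(conn.execute("SELECT order_id, amount FROM stg_orders").fetchall()))
```

In a real DBT project the model would live in its own `.sql` file and the checks would be declared in a schema YAML, with CI running `dbt build` so a failed test blocks the deploy.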
  • Senior Data Engineer

    Robert Half (4.5 company rating)

    Data engineer job in Los Angeles, CA

    Robert Half is partnering with a well-known brand seeking an experienced Data Engineer with Databricks experience. Working alongside data scientists and software developers, your work will directly impact dynamic pricing strategies by ensuring the availability, accuracy, and scalability of data systems. This position is full time with full benefits and 3 days onsite in the Woodland Hills, CA area.

    Responsibilities:
    - Design, build, and maintain scalable data pipelines for dynamic pricing models.
    - Collaborate with data scientists to prepare data for model training, validation, and deployment.
    - Develop and optimize ETL processes to ensure data quality and reliability.
    - Monitor and troubleshoot data workflows for continuous integration and performance.
    - Partner with software engineers to embed data solutions into product architecture.
    - Ensure compliance with data governance, privacy, and security standards.
    - Translate stakeholder requirements into technical specifications.
    - Document processes and contribute to data engineering best practices.

    Requirements:
    - Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    - 4+ years of experience in data engineering, data warehousing, and big data technologies.
    - Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
    - Must have experience in Databricks.
    - Experience working within an Azure, AWS, or GCP environment.
    - Familiarity with big data tools like Spark, Hadoop, or Databricks.
    - Experience with real-time data pipeline tools.
    - Experience with Python.
    $116k-165k yearly est. 4d ago
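The dynamic-pricing pipeline described above boils down to an extract-transform-load step with a data-quality gate. The sketch below shows that shape in plain Python; the pricing rule (a demand-based surge multiplier), thresholds, and field names are all invented for illustration, and a production version would run equivalent logic in Spark on Databricks over far larger data.

```python
from dataclasses import dataclass

@dataclass
class PricedItem:
    sku: str
    price: float

def transform(raw_rows: list, surge: float = 1.2,
              demand_threshold: int = 100) -> list:
    """Validate raw rows and apply a hypothetical demand-based price rule."""
    out = []
    for row in raw_rows:
        # Data-quality gate: drop rows that cannot be priced.
        if row.get("base_price") is None or row["base_price"] <= 0:
            continue
        # Toy dynamic-pricing rule: surge multiplier above a demand threshold.
        multiplier = surge if row.get("demand", 0) >= demand_threshold else 1.0
        out.append(PricedItem(sku=row["sku"],
                              price=round(row["base_price"] * multiplier, 2)))
    return out

rows = [
    {"sku": "S1", "base_price": 50.0, "demand": 150},  # surge applies
    {"sku": "S2", "base_price": 20.0, "demand": 10},   # normal price
    {"sku": "S3", "base_price": None, "demand": 999},  # dropped by the gate
]
print(transform(rows))
```

Keeping the transform a pure function of its input rows is what makes a pipeline like this easy to unit-test before it is embedded in an orchestrated workflow.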

Learn more about data engineer jobs

How much does a data engineer earn in Eastvale, CA?

The average data engineer in Eastvale, CA earns between $85,000 and $163,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Eastvale, CA

$118,000

What are the biggest employers of Data Engineers in Eastvale, CA?

The biggest employers of Data Engineers in Eastvale, CA are:
  1. CACI International
  2. Capgemini