Data Scientist
Data engineer job in Long Beach, CA
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India. We are seeking a highly analytical and technically skilled Data Scientist to transform complex, multi-source data into unified, actionable insights used for executive reporting and decision-making.
This role requires expertise in business intelligence design, data modeling, metadata management, data integrity validation, and the development of dashboards, reports, and analytics used across operational and strategic environments.
The ideal candidate thrives in a fast-paced environment, demonstrates strong investigative skills, and can collaborate effectively with technical teams, business stakeholders, and leadership.
Essential Duties & Responsibilities
Solution Development & BI Architecture
Participate across the full solution lifecycle: business case, planning, design, development, testing, migration, and production support.
Analyze large and complex datasets with accuracy and attention to detail.
Collaborate with users to develop effective metadata and data relationships.
Identify reporting and dashboard requirements across business units.
Determine strategic placement of business logic within ETL or metadata models.
Build enterprise data warehouse metadata/semantic models.
Design and develop unified dashboards, reports, and data extractions from multiple data sources.
Develop and execute testing methodologies for reports and metadata models.
Document BI architecture, data lineage, and project report requirements.
Provide technical specifications and data definitions to support the enterprise data dictionary.
Data Analysis, Modeling & Process Optimization
Apply analytical skills to understand business processes, financial calculations, data flows, and application interactions.
Identify and implement improvements, workarounds, or alternative solutions related to ETL processes, ensuring integrity and timeliness.
Create UI components or portal elements (e.g., SharePoint) for dynamic or interactive stakeholder reporting.
Extract and process data from SQL databases to build Power BI or Tableau reports (including reporting for cybersecurity awareness campaigns).
Utilize SQL, Python, R, or similar languages for data analysis and modeling.
Required Knowledge & Attributes
Highly self-motivated with strong organizational skills and ability to manage multiple verbal and written assignments.
Experience collaborating across organizational boundaries for data sourcing and usage.
Analytical understanding of business processes, forecasting, capacity planning, and data governance.
Proficient with BI tools (Power BI, Tableau, PBIRS, SSRS, SSAS).
Strong Microsoft Office skills (Word, Excel, Visio, PowerPoint).
High attention to detail and accuracy.
Ability to work independently, demonstrate ownership, and ensure high-quality outcomes.
Strong communication, interpersonal, and stakeholder engagement skills.
Deep understanding that data integrity and consistency are essential for adoption and trust.
Ability to shift priorities and adapt within fast-paced environments.
Required Education & Experience
Bachelor's degree in Computer Science, Mathematics, or Statistics (or equivalent experience).
3+ years of BI development experience.
3+ years with Power BI and supporting Microsoft stack tools (SharePoint 2019, PBIRS/SSRS, Excel 2019/2021).
3+ years of experience with SDLC/project lifecycle processes.
3+ years of experience with data warehousing methodologies (ETL, Data Modeling).
3+ years of VBA experience in Excel and Access.
Strong ability to write SQL queries and work with SQL Server 2017-2022.
Experience with BI tools including PBIRS, SSRS, SSAS, Tableau.
Strong analytical skills in business processes, financial modeling, forecasting, and data flow understanding.
Critical thinking and problem-solving capabilities.
Experience producing high-quality technical documentation and presentations.
Excellent communication and presentation skills, with the ability to explain insights to leadership and business teams.
Benefits
Medical coverage and Health Savings Account (HSA) through Anthem
Dental/Vision/Various Ancillary coverages through Unum
401(k) retirement savings plan
Paid-time-off options
Company-paid Employee Assistance Program (EAP)
Discount programs through ADP WorkforceNow
Additional Details
The base range for this contract position is $73 - $83 per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Qualified applicants with arrest or conviction records will be considered.
About Us
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States and globally with offices in Los Angeles, Atlanta, New York, Mexico, Japan, India, and more. STAND 8 focuses on the "bleeding edge" of technology and leverages automation, process, marketing, and over fifteen years of success and growth to provide a world-class experience for our customers, partners, and employees.
Our mission is to impact the world positively by creating success through PEOPLE, PROCESS, and TECHNOLOGY.
Check out more at ************** and reach out today to explore opportunities to grow together!
By applying to this position, your data will be processed in accordance with the STAND 8 Privacy Policy.
Senior Data Consultant - Supply Chain Planning
Data engineer job in Corona, CA
🚀 We're Hiring: Senior Data Consultant - (Supply Chain Planning)
Bristlecone, a Mahindra company, is a leading supply chain and business analytics advisor, rated by Gartner as one of the top ten system integrators in the supply chain space. We have been a trusted partner to global enterprises such as Applied Materials, Exxon Mobil, Flextronics, Nestle, Unilever, Whirlpool, and many others.
🔍 Project Overview:
We are looking for a strong Data Consultant to support our planning projects. The ideal candidate will have a solid understanding of planning processes and data management within a supply chain or business planning environment. While deep configuration knowledge of SAP IBP is not mandatory, the consultant must have a strong grasp of planning data, business rules, and their impact on planning outcomes.
This is a strategic initiative aimed at transforming planning processes across Raw Materials, Finished Goods, and Packaging materials. You'll be the go-to expert for managing end-to-end planning data across SAP IBP and ECC systems (SD, MM, PP).
🛠️ Key Responsibilities:
Collaborate with planning teams to analyze, validate, and manage data relevant to planning processes.
Demonstrate a clear understanding of basic planning functionalities and how data supports them.
Identify, define, and manage data elements that impact demand, supply, and inventory planning.
Understand and document business rules and prerequisites related to data maintenance and planning accuracy.
Coordinate data collection activities from super users and end users across multiple functions.
Support data readiness for project milestones including testing, validation, and go-live.
Explain how different data elements influence planning outcomes to non-technical stakeholders.
Work closely with functional and technical teams to ensure data integrity and consistency across systems.
Required Skills & Qualifications:
Strong understanding of planning processes (demand, supply, or S&OP).
Proven experience working with planning master data (e.g., product, location, BOM, resources, etc.).
Ability to analyze complex datasets and identify inconsistencies or dependencies.
Excellent communication and coordination skills with cross-functional teams.
Exposure to SAP IBP, APO, or other advanced planning tools (preferred but not mandatory).
Strong business acumen with the ability to link data quality to planning outcomes.
5-10 years of relevant experience in data management, planning, or supply chain roles.
Preferred Qualifications:
Experience with large-scale planning transformation or ERP implementation projects.
Knowledge of data governance and data quality frameworks.
Experience in working with super users/end users for data validation and readiness.
Privacy Notice Declarations for California-based candidates/jobs: ********************************************************
Principal Data Scientist
Data engineer job in Alhambra, CA
Duration: 12-month contract
Additional Information
California Resident Candidates Only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
Job description:
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship. The Principal Data Scientist will possess:
Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.
Experience Required:
Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring (see the sketch after this list).
Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
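To illustrate the MLOps item above: a minimal experiment-tracking sketch with MLflow might look like the following. The experiment name, model choice, and parameters are hypothetical, illustrative assumptions rather than anything specified by this posting.

```python
# Minimal MLflow tracking sketch; experiment name, model, and
# hyperparameters are hypothetical. Requires: pip install mlflow scikit-learn
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-forecast-poc")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # version the hyperparameters
    mlflow.log_metric("accuracy", accuracy)   # track performance across runs
    mlflow.sklearn.log_model(model, "model")  # store a registerable artifact
```

Runs logged this way can be compared in the MLflow UI and promoted through the Model Registry, which is what makes versioned, monitored deployments possible.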
Education Required & Certifications:
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics is also required:
Microsoft Azure Data Scientist Associate (DP-100)
Databricks Certified Data Scientist or Machine Learning Professional
AWS Machine Learning Specialty
Google Professional Data Engineer
or equivalent advanced analytics certifications.
The certification is required and may not be substituted with additional experience.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Name: Raviteja Yarram
Email: *********************************
Internal ID: 25-54101
Data Engineer
Data engineer job in Los Angeles, CA
We are seeking a highly motivated Data Engineer.
Key Responsibilities:
Build data pipelines that clean, transform, and aggregate data from disparate sources (see the sketch after this list)
Work closely with our business analysis team to provide unique insights into our data
Build robust systems for data quality assurance and validation at scale
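To make the pipeline responsibility referenced above concrete, here is a minimal pandas sketch of a clean/transform/aggregate step. The file and column names are hypothetical, since the posting does not specify a stack.

```python
# Illustrative clean -> transform -> aggregate step with pandas.
# File names and columns are hypothetical placeholders.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])     # source A
refunds = pd.read_csv("refunds.csv", parse_dates=["refund_date"])  # source B

# Clean: drop exact duplicates and rows missing a key.
orders = orders.drop_duplicates().dropna(subset=["order_id", "customer_id"])

# Transform: join the disparate sources and derive net revenue.
merged = orders.merge(refunds[["order_id", "refund_amount"]],
                      on="order_id", how="left")
merged["refund_amount"] = merged["refund_amount"].fillna(0.0)
merged["net_revenue"] = merged["amount"] - merged["refund_amount"]

# Aggregate: monthly net revenue per customer, ready for BI.
summary = (merged
           .groupby([pd.Grouper(key="order_date", freq="MS"), "customer_id"])
           ["net_revenue"].sum()
           .reset_index())
print(summary.head())
```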
Qualifications:
BS/MS degree in a technical field
Advanced Structured Query Language (SQL) and data warehousing experience
Bonus Qualifications:
Exceptional Python or Scala skills
Experience designing and building complex data infrastructure at scale
Experience with large scale streaming platforms (e.g. Kafka, Kinesis), processing frameworks (e.g. Spark, Hadoop) and storage engines (e.g. HDFS, HBase)
Compensation & Benefits:
Competitive salary and benefits package.
Opportunities for professional growth and development.
Collaborative and inclusive work environment.
About Us:
Founded in 2009, IntelliPro is a global leader in talent acquisition and HR solutions. Our commitment to delivering unparalleled service to clients, fostering employee growth, and building enduring partnerships sets us apart. We continue leading global talent solutions with a dynamic presence in over 160 countries, including the USA, China, Canada, Singapore, Japan, Philippines, UK, India, Netherlands, and the EU.
IntelliPro, a global leader connecting individuals with rewarding employment opportunities, is dedicated to understanding your career aspirations. As an Equal Opportunity Employer, IntelliPro values diversity and does not discriminate based on race, color, religion, sex, sexual orientation, gender identity, national origin, age, genetic information, disability, or any other legally protected group status. Moreover, our Inclusivity Commitment emphasizes embracing candidates of all abilities and ensures that our hiring and interview processes accommodate the needs of all applicants. Learn more about our commitment to diversity and inclusivity at *****************************
Senior Data Engineer - Snowflake / ETL (Onsite)
Data engineer job in Beverly Hills, CA
CGS Business Solutions is committed to helping you, as an esteemed IT Professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking Technical Professionals who are searching for challenging and rewarding jobs for the following opportunity:
Summary
CGS is hiring for a Senior Data Engineer to serve as a core member of the Platform team. This is a high-impact role responsible for advancing our foundational data infrastructure.
Your primary mission will be to build key components of our Policy Journal - the central source of truth for all policy, commission, and client accounting data. You'll work closely with the Lead Data Engineer and business stakeholders to translate complex requirements into scalable data models and reliable pipelines that power analytics and operational decision-making for agents, managers, and leadership.
This role blends greenfield engineering, strategic modernization, and a strong focus on delivering trusted, high-quality data products.
Overview
• Build the Policy Journal - Design and implement the master data architecture unifying policy, commission, and accounting data from sources like IVANS and Applied EPIC to create the platform's “gold record.”
• Ensure Data Reliability - Define and implement data quality checks, monitoring, and alerting to guarantee accuracy, consistency, and timeliness across pipelines - while contributing to best practices in governance (see the sketch after this list).
• Build the Analytics Foundation - Enhance and scale our analytics stack (Snowflake, dbt, Airflow), transforming raw data into clean, performant dimensional models for BI and operational insights.
• Modernize Legacy ETL - Refactor our existing Java + SQL (PostgreSQL) ETL system - diagnose duplication and performance issues, rewrite critical components in Python, and migrate orchestration to Airflow.
• Implement Data Quality Frameworks - Develop automated testing and validation frameworks aligned with our QA strategy to ensure accuracy, completeness, and integrity across pipelines.
• Collaborate on Architecture & Design - Partner with product and business stakeholders to deeply understand requirements and design scalable, maintainable data solutions.
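As referenced in the Data Reliability bullet above, such checks often reduce to small assertions over key fields. A minimal, hypothetical Python sketch follows; in practice these rules might live in dbt tests or a dedicated quality framework, and the table and column names are assumptions, not details from this posting.

```python
# Sketch of assertion-style data quality checks for a policy feed.
# Table/column names are hypothetical.
import pandas as pd

def check_policies(df: pd.DataFrame) -> list[str]:
    failures = []
    if df["policy_id"].duplicated().any():
        failures.append("duplicate policy_id values")
    if df["policy_id"].isna().any():
        failures.append("null policy_id values")
    if (df["commission_amount"] < 0).any():
        failures.append("negative commission_amount")
    if df["updated_at"].max() < pd.Timestamp.now() - pd.Timedelta(days=1):
        failures.append("feed is stale (no updates in 24h)")
    return failures

policies = pd.read_parquet("policy_journal.parquet")  # hypothetical extract
problems = check_policies(policies)
if problems:
    raise ValueError("Data quality checks failed: " + "; ".join(problems))
```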
Ideal Experience
• 5+ years of experience building and operating production-grade data pipelines.
• Expert-level proficiency in Python and SQL.
• Hands-on experience with the modern data stack - Snowflake/Redshift, Airflow, dbt, etc.
• Strong understanding of AWS data services (S3, Glue, Lambda, RDS).
• Experience working with insurance or insurtech data (policies, commissions, claims, etc.).
• Proven ability to design robust data models (e.g., dimensional modeling) for analytics.
• Pragmatic problem-solver capable of analyzing and refactoring complex legacy systems (ability to read Java/Hibernate is a strong plus - but no new Java coding required).
• Excellent communicator comfortable working with both technical and non-technical stakeholders.
Huge Plus!
• Direct experience with Agency Management Systems (Applied EPIC, Nowcerts, EZLynx, etc.)
• Familiarity with carrier data formats (ACORD XML, IVANS AL3)
• Experience with BI tools (Tableau, Looker, Power BI)
About CGS Business Solutions: CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus on IT Applications, Network Infrastructure, Information Security, and Engineering. CGS is an INC 5000 company and is honored to be selected as one of the Best IT Recruitment Firms in California. After five consecutive Fastest Growing Company titles, CGS continues to break into new markets across the USA. Companies are counting on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
AWS Data Engineer
Data engineer job in Torrance, CA
Job Title: AWS Data Engineer
Informatica Data Catalog Engineer/Governance Administrator
This Informatica Cloud Catalog/Governance/Marketplace Administrator/Engineer will focus on helping customers configure Cloud Catalog, Governance, and Marketplace with Informatica CDGC (Cloud Data Governance and Catalog) and data products from CDGC. This person will configure security and access roles, as well as integrate SAML/SSO with ticketing services. The ideal candidate will have strong knowledge and experience in CDGC and the overall IDMC (Informatica Intelligent Data Management Cloud) platform.
Responsibilities will include the following:
Configure security; implement Role-Based Access Controls (RBAC) and Policy-Based Access Controls (PBAC).
Create and configure connections to SaaS platforms, mainframes, databases, NoSQL stores, and cloud services (AWS S3, Athena, Redshift, etc.).
Identify and resolve issues that arise when creating connections.
Identify performance bottlenecks when profiling and resolve them.
Monitor IPU (Informatica Processing Unit) consumption and automate consumption reporting to the relevant domain teams.
Configure Cloud Marketplace with Informatica CDGC and CDMC to apply policy-based data protections.
Review Informatica upgrade schedules, explore the new features and communicate to Technical/Business Teams.
Communicate and coordinate with Business Teams regarding upgrade timelines/Implementations.
Setup/work with Infrastructure Teams to establish AWS ECS/EKS cluster to manage profiling workloads.
Support governance efforts to classify and protect sensitive financial/customer data.
Develop APIs to fetch cross-region (Japan) metadata and glossary content into North America.
Utilize knowledge in regulatory compliance standards as well as understanding of Governance policies (e.g., GDPR, CCPA, SOX etc.).
Set up SAML/SSO, users, and groups.
Set up certificates and IP whitelisting.
Scan and profile metadata for the Data Catalog and Data Observability.
Define data quality rules.
Required Skills and Expertise:
6+ years of progressive experience in Informatica Platform Administration is required.
Strong knowledge and experience in CDGC (Catalog Data Governance Cloud) and overall IDMC (Informatica Intelligent Data Management Cloud).
Ability to perform all Catalog Development/Admin Activities
Must have 5+ years of progressive experience in Informatica Platform Data Catalog and Governance (CDGC) Administration
Needs to have experience administering Governance Policies of Organizations
3+ years of progressive experience with Informatica APIs, including creating APIs
3+ years of progressive experience in Informatica data integration.
2+ years of progressive experience with Informatica Data Marketplace and Data Access Management tools.
Must be skilled in performing Cloud Data Marketplace process setup.
Expertise in defining and managing RBAC and PBAC Policies
Overall IDMC tool experience also required
Familiarity with regulatory compliance standards and understanding Governance policies (e.g., GDPR, CCPA, SOX etc.).
Team player/collaborator who builds relationships with customers and team members, earns their trust, and creates influence to meet tactical or strategic objectives.
Collaboration and Communication: Foster a culture of collaboration, knowledge sharing, and effective communication within the team and across departments. Facilitate meetings, workshops, and training sessions to enhance technical skills and foster innovation.
Strong verbal communication and presentation skills.
Strong domain experience in auto finance: originations, servicing, collections, and customer data.
Strong knowledge of the overall IDMC Platform.
Desired Skills and Expertise:
Informatica CDGC Certification.
AWS Certifications
Data Engineer (AWS Redshift, BI, Python, ETL)
Data engineer job in Manhattan Beach, CA
We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.
Responsibilities:
Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
Design and enhance data warehouse models (star/snowflake schemas) and BI datasets (see the star-schema sketch after this list).
Optimize data workflows for performance, scalability, and reliability.
Collaborate with BI teams to support dashboards, reporting, and analytics needs.
Ensure data quality, governance, and documentation across all solutions.
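As referenced in the modeling bullet above, a star schema separates a flat extract into dimension and fact tables. Here is a minimal pandas sketch; the column names are hypothetical illustrations, not details from this posting.

```python
# Sketch: deriving a star schema (one dimension, one fact) from a flat
# extract with pandas. Column names are hypothetical.
import pandas as pd

sales = pd.read_csv("sales_extract.csv", parse_dates=["sold_at"])

# Dimension: one row per product, with a surrogate key.
dim_product = (sales[["product_code", "product_name", "category"]]
               .drop_duplicates()
               .reset_index(drop=True))
dim_product["product_key"] = dim_product.index + 1

# Fact: measures plus a foreign key into the dimension.
fact_sales = (sales
              .merge(dim_product[["product_code", "product_key"]],
                     on="product_code")
              [["product_key", "sold_at", "quantity", "amount"]])
```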
Qualifications:
Proven experience with data engineering tools (SQL, Python, ETL frameworks).
Strong understanding of BI concepts, reporting tools, and dimensional modeling.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
Excellent problem-solving skills and ability to work in a cross-functional environment.
Data Scientist
Data engineer job in Garden Grove, CA
# Job Description: AI Task Evaluation & Statistical Analysis Specialist
## Role Overview

We're seeking a data-driven analyst to conduct comprehensive failure analysis on AI agent performance across finance-sector tasks. You'll identify patterns, root causes, and systemic issues in our evaluation framework by analyzing task performance across multiple dimensions (task types, file types, criteria, etc.).

## Key Responsibilities

- **Statistical Failure Analysis**: Identify patterns in AI agent failures across task components (prompts, rubrics, templates, file types, tags) (see the sketch at the end of this posting)
- **Root Cause Analysis**: Determine whether failures stem from task design, rubric clarity, file complexity, or agent limitations
- **Dimension Analysis**: Analyze performance variations across finance sub-domains, file types, and task categories
- **Reporting & Visualization**: Create dashboards and reports highlighting failure clusters, edge cases, and improvement opportunities
- **Quality Framework**: Recommend improvements to task design, rubric structure, and evaluation criteria based on statistical findings
- **Stakeholder Communication**: Present insights to data labeling experts and technical teams

## Required Qualifications

- **Statistical Expertise**: Strong foundation in statistical analysis, hypothesis testing, and pattern recognition
- **Programming**: Proficiency in Python (pandas, scipy, matplotlib/seaborn) or R for data analysis
- **Data Analysis**: Experience with exploratory data analysis and creating actionable insights from complex datasets
- **AI/ML Familiarity**: Understanding of LLM evaluation methods and quality metrics
- **Tools**: Comfortable working with Excel, data visualization tools (Tableau/Looker), and SQL

## Preferred Qualifications

- Experience with AI/ML model evaluation or quality assurance
- Background in finance or willingness to learn finance domain concepts
- Experience with multi-dimensional failure analysis
- Familiarity with benchmark datasets and evaluation frameworks
- 2-4 years of relevant experience
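The statistical failure analysis described above lends itself to a simple contingency-table test. Here is a minimal, hypothetical sketch (the file and column names are assumptions, not part of the posting) of checking whether failure rates differ across task types:

```python
# Sketch: testing whether failure rate varies by task type.
# The data file and columns are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

tasks = pd.read_csv("task_results.csv")  # columns: task_type, passed

# Contingency table: task types x pass/fail counts.
table = pd.crosstab(tasks["task_type"], tasks["passed"])
chi2, p, dof, expected = chi2_contingency(table)

print(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
if p < 0.05:
    print("Failure rate differs across task types; inspect residuals.")
```

A significant result would justify drilling into per-task-type residuals to locate the failure clusters mentioned above.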
Snowflake/AWS Data Engineer
Data engineer job in Irvine, CA
Sr. Data Engineer
Full Time Direct Hire Job
Hybrid, with work location in Irvine, CA.
The Senior Data Engineer will help design and build a modern data platform that supports enterprise analytics, integrations, and AI/ML initiatives. This role focuses on developing scalable data pipelines, modernizing the enterprise data warehouse, and enabling self-service analytics across the organization.
Key Responsibilities
• Build and maintain scalable data pipelines using Snowflake, dbt, and Fivetran (see the sketch after this list).
• Design and optimize enterprise data models for performance and scalability.
• Support data cataloging, lineage, quality, and compliance efforts.
• Translate business and analytics requirements into reliable data solutions.
• Use AWS (primarily S3) for storage, integration, and platform reliability.
• Perform other data engineering tasks as needed.
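To make the pipeline responsibility concrete (as referenced in the first bullet), here is a minimal sketch of one load step using the snowflake-connector-python package. The account, stage, and table identifiers are hypothetical, and in this stack Fivetran and dbt would typically own most ingestion and transformation.

```python
# Sketch: loading staged S3 files into Snowflake with
# snowflake-connector-python. All identifiers are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # hypothetical account locator
    user="ETL_SVC",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # @raw_stage would be an external stage pointing at the S3 bucket.
    cur.execute("""
        COPY INTO raw.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```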
Required Qualifications
• Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field.
• 5+ years of data engineering experience.
• Hands-on expertise with Snowflake, dbt, and Fivetran.
• Strong background in data warehousing, dimensional modeling, and SQL.
• Experience with AWS (S3) and data governance tools such as Alation or Atlan.
• Proficiency in Python for scripting and automation.
• Experience with streaming technologies (Kafka, Kinesis, Flink) a plus.
• Knowledge of data security and compliance best practices.
• Exposure to AI/ML workflows and modern BI tools like Power BI, Tableau, or Looker.
• Ability to mentor junior engineers.
Skills
• Snowflake
• dbt
• Fivetran
• Data modeling and warehousing
• AWS
• Data governance
• SQL
• Python
• Strong communication and cross-functional collaboration
• Interest in emerging data and AI technologies
Data Engineer
Data engineer job in Loma Linda, CA
ABI Document Support Services is seeking a highly skilled Data Engineer with proven experience in Power BI and data pipeline development. This person will ideally work onsite out of the Loma Linda, CA office on Tuesdays, Wednesdays, and Thursdays, and remotely on Mondays and Fridays.
The ideal candidate will design, build, and maintain scalable data infrastructure to support analytics and reporting needs across the organization. You'll work closely with business stakeholders and analysts to ensure data accuracy, performance, and accessibility.
Design, build, and maintain ETL/ELT pipelines for structured and unstructured data from various sources.
Develop and manage data models, data warehouses, and data lakes to support analytics and BI initiatives.
Create and optimize Power BI dashboards and reports, ensuring accurate data representation and performance.
Collaborate with cross-functional teams to identify business requirements and translate them into technical solutions.
Implement and maintain data quality, governance, and security standards.
Monitor and optimize data pipeline performance, identifying bottlenecks and opportunities for improvement.
Integrate data from multiple systems using tools such as Azure Synapse, Microsoft Fabric, SQL, Python, or Spark.
Support data validation, troubleshooting, and root cause analysis for data inconsistencies.
Document data processes, architecture, and system configurations.
Required:
Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
4-6 years of experience as a Data Engineer, BI Developer, or related role.
Strong proficiency in SQL and data modeling techniques.
Hands-on experience with Power BI (DAX, Power Query, and data visualization best practices).
Experience with ETL tools and data orchestration frameworks (e.g., Azure Synapse, Airflow, or SSIS).
Familiarity with cloud data platforms (Azure, AWS, or GCP).
Strong understanding of data warehousing concepts (e.g., star schema, snowflake schema).
Preferred:
Experience with Python or Scala for data processing.
Knowledge of Azure Synapse, Databricks, or similar technologies.
Understanding of CI/CD pipelines and version control systems (Git).
Exposure to data governance and security frameworks.
Soft Skills
Strong analytical and problem-solving abilities.
Excellent communication and collaboration skills.
Detail-oriented with a focus on data accuracy and integrity.
Ability to work independently and manage multiple priorities in a fast-paced environment.
WHO WE ARE
ABI Document Support Services is the largest nationwide provider of records retrieval, subpoena services, and document management for the legal and insurance industries. There is no other company in the market that provides the volume of successfully retrieved records or the document management solutions that ABI offers. Our singular focus is records retrieval and the most advanced technology solutions for our clients to manage, analyze and summarize those retrieved records. We are committed to continually raising the bar for cost effective record retrieval and more thorough analysis and summarization.
Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, pregnancy, genetic information, disability, status as a protected veteran, or any other protected category under applicable federal, state, and local laws.
Equal Opportunity Employer - Minorities/Females/Disabled/Veterans
ABI offers a fast-paced team atmosphere with competitive benefits (medical, vision, dental), paid time off, and 401k.
Data Engineer
Data engineer job in Irvine, CA
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn. I appreciate it.
If you have read my posts in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding & configuring on the ML side of the house.
You will be designing, automating, and observing end to end data pipelines that feed this client's Kubeflow driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting (see the sketch after this list).
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
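As referenced in the instrumentation item above, a Prometheus exporter for a pipeline can be as small as a few metric objects plus an HTTP endpoint for scraping. A minimal sketch with the prometheus_client package; the job and metric names are hypothetical:

```python
# Sketch: instrumenting a pipeline with prometheus_client so Grafana
# can chart SLA/latency/error metrics. Metric and job names are hypothetical.
import time
import random
from prometheus_client import Counter, Gauge, Histogram, start_http_server

ROWS = Counter("pipeline_rows_total", "Rows processed", ["job"])
ERRORS = Counter("pipeline_errors_total", "Failed batches", ["job"])
LATENCY = Histogram("pipeline_batch_seconds", "Batch duration", ["job"])
LAST_OK = Gauge("pipeline_last_success_ts", "Unix time of last success", ["job"])

def process_batch():
    time.sleep(random.uniform(0.1, 0.5))  # stand-in for real work
    return 1000

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        with LATENCY.labels(job="feature_hydration").time():
            try:
                n = process_batch()
                ROWS.labels(job="feature_hydration").inc(n)
                LAST_OK.labels(job="feature_hydration").set_to_current_time()
            except Exception:
                ERRORS.labels(job="feature_hydration").inc()
```

Grafana dashboards and alert rules can then be built directly on these series, for example alerting when pipeline_last_success_ts falls too far behind the current time.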
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience in T‑SQL, Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
The ability to mentor engineers on best practices in containerized data engineering and MLOps.
Lead Data Architect
Data engineer job in Burbank, CA
Role: Data Architect Lead
Contract
Key Responsibilities:
Conduct Comprehensive Data Assessment: Lead the analysis and auditing of current P2P data systems to evaluate their effectiveness, identify pain points (e.g., data silos, quality issues), and document data flows from source to consumption.
Design Future-State Architecture: Develop conceptual, logical, and physical data models for the P2P domain that support long-term business objectives and advanced analytics capabilities.
Develop Data Strategy & Roadmap: Create a strategic roadmap for data acquisition, integration, storage, and governance within P2P finance, outlining the transition from current to future-state architectures.
Establish Data Governance & Quality: Define and implement data governance frameworks, policies, and standards to ensure data quality, consistency, security, and compliance with financial regulations (e.g., SOX, GDPR).
Oversee Data Integration & Migration: Design and provide guidance on end-to-end data integration patterns and data migration processes, ensuring seamless data flow between various P2P systems (e.g., procurement software, general ledger, accounts payable).
Collaborate with Stakeholders: Work closely with finance teams, data engineers, data analysts, and IT leadership to understand business requirements, translate them into technical specifications, and build consensus on proposed solutions.
Performance Optimization: Analyze query performance and optimize database systems for improved efficiency and real-time insights into P2P operations.
Provide Technical Leadership: Act as a subject matter expert, providing guidance and mentorship to development teams and ensuring adherence to sound data management principles and best practices.
Best Regards,
Bismillah Arzoo (AB)
Synthetic Data Engineer (Observability & DevOps)
Data engineer job in Los Angeles, CA
About the Role: We're building a large-scale synthetic data generation engine to produce realistic observability datasets - metrics, logs, and traces - to support AI/ML training and benchmarking. You will design, implement, and scale pipelines that simulate complex production environments and emit controllable, parameterized telemetry data.
🧠 What You'll Do
• Design and implement generators for metrics (CPU, latency, throughput) and logs (structured/unstructured).
• Build configurable pipelines to control data rate, shape, and anomaly injection (see the sketch after this list).
• Develop reproducible workload simulations and system behaviors (microservices, failures, recoveries).
• Integrate synthetic data storage with Prometheus, ClickHouse, or Elasticsearch.
• Collaborate with ML researchers to evaluate realism and coverage of generated datasets.
• Optimize for scale and reproducibility using Docker containers.
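As referenced in the list above, a metrics generator of this kind reduces to a parameterized signal plus injected anomalies. A minimal Python sketch with illustrative default parameters (the shapes and rates are assumptions, not from the posting):

```python
# Minimal sketch of a parameterized CPU-metric generator with anomaly
# injection. Defaults are illustrative assumptions.
import math
import random
import time

def cpu_metric_stream(base=0.40, daily_amp=0.20, noise=0.05,
                      anomaly_prob=0.001, anomaly_spike=0.45,
                      interval_s=1.0):
    """Yield (timestamp, cpu_utilization, is_anomaly) samples with a
    diurnal shape, Gaussian noise, and rare injected spikes."""
    t = time.time()
    while True:
        phase = 2 * math.pi * (t % 86400) / 86400  # daily cycle
        value = base + daily_amp * math.sin(phase) + random.gauss(0, noise)
        is_anomaly = random.random() < anomaly_prob
        if is_anomaly:
            value += anomaly_spike  # injected fault, labeled for ML training
        yield t, max(0.0, min(1.0, value)), is_anomaly
        t += interval_s

for ts, cpu, label in cpu_metric_stream():
    print(f"{ts:.0f} cpu={cpu:.3f} anomaly={label}")
    break  # remove to stream continuously
```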
✅ Who You Are
• Strong programming skills in Python.
• Familiarity with observability tools (Grafana, Prometheus, ELK, OpenTelemetry).
• Solid understanding of distributed systems metrics and log structures.
• Experience building data pipelines or synthetic data generators.
• (Bonus) Knowledge of anomaly detection, time-series analysis, or generative ML models.
💸 Pay
$50 - 75/hr depending on experience
Remote, flexible hours
Project timeline: 5-6 weeks
Senior Software Engineer
Data engineer job in Burbank, CA
Our client is seeking a Senior Software Engineer to join their team! This position is located in Burbank CA, Seattle WA, Orlando FL, New York NY and Bristol CT.
Engage in full-cycle software development, from design and implementation to deployment
Troubleshoot and resolve technical issues across the entire technology stack
Collaborate closely with product managers, designers, QA engineers, and other cross-functional partners to deliver high-quality solutions
Write clean, efficient, and well-structured code following best engineering practices
Perform code reviews and provide mentorship to junior developers
Integrate third-party APIs and services to enhance system functionality
Ensure strong engineering standards by implementing CI/CD practices, automated testing, and DevOps methodologies
Desired Skills/Experience:
10+ years of professional experience in software development with a strong focus on Ruby on Rails technologies
10+ years with SDLC tools such as Jira, Confluence, Git, GitLab, GitHub
5+ years developing applications in React or similar JavaScript front end frameworks
3+ years with web performance technologies such as CloudFront, Redis, Batcache, ElastiCache
Strong understanding of software design patterns, principles, and best practices
Experience with front end technologies such as Angular, React, or Blazor is a plus
Familiarity with cloud platforms, AWS preferred, and containerization with Docker or Kubernetes
Excellent problem solving skills and attention to detail
Strong interpersonal, analytical, problem solving, negotiating, and influencing skills
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $60.00 and $85.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
DevOps Engineer
Data engineer job in Irvine, CA
DevOps Engineer - Satellite Technology
Onsite in Irvine, CA or Washington, DC
Pioneering Space Technology | Secure Cloud | Mission-Critical Systems
We're working with a leading organization in the satellite technology sector, seeking a DevOps Engineer to join their growing team. You'll play a key role in shaping, automating, and securing the software infrastructure that supports next-generation space missions.
This is a hands-on role within a collaborative, high-impact environment-ideal for someone who thrives on optimizing cloud performance and supporting mission-critical operations in aerospace.
What You'll Be Doing
Maintain and optimize AWS cloud environments, implementing security updates and best practices
Manage daily operations of Kubernetes clusters and ensure system reliability
Collaborate with cybersecurity teams to ensure full compliance across AWS infrastructure
Support software deployment pipelines and infrastructure automation using Terraform and CI/CD tools
Work cross-functionally with teams including satellite operations, software analytics, and systems engineering
Troubleshoot and resolve environment issues to maintain uptime and efficiency
Apply an “Infrastructure as Code” approach to all system development and management
What You'll Bring
Degree in Computer Science or a related field
2-3 years' experience with Kubernetes and containerized environments
3+ years' Linux systems administration experience
Hands-on experience with cloud services (AWS, GCP, or Azure)
Strong understanding of Terraform and CI/CD pipeline tools (e.g. FluxCD, Argo)
Skilled in Python or Go
Familiarity with software version control systems
Solid grounding in cybersecurity principles (networking, authentication, encryption, firewalls)
Eligibility to obtain a U.S. Security Clearance
Preferred:
Certified Kubernetes Administrator or Developer
AWS Certified Security credentials
This role offers the chance to make a tangible impact in the satellite and space exploration sector, joining a team that's building secure, scalable systems for mission success.
If you're passionate about space, cloud infrastructure, and cutting-edge DevOps practices-this is your opportunity to be part of something extraordinary.
Analytics Engineer
Data engineer job in Beverly Hills, CA
Turn Learning Data Into Learning Breakthroughs
At Subject, we're building AI-powered, personalized education at scale. Backed by Owl Ventures, Kleiner Perkins, Latitude Ventures, and more, we serve students across the country with cinematic, video-based learning. But we have a challenge: data is our superpower, and we need more talented, committed, and passionate people helping us build it out further.
We're looking for an Analytics Engineer to join our growing data and product organization. You'll sit at the intersection of data, product, and engineering-transforming raw data into accessible, reliable, and actionable insights that guide decision-making across the company.
This role will be foundational in building Subject's analytics infrastructure, supporting initiatives like:
Product engagement and learning outcomes measurement
Operational analytics for school implementations
Generative AI products (e.g. Subject Spark Homework Helper and SparkTA)
Data integration across systems like Postgres, dbt, Pendo, Looker, and more
You'll help define what “good data” means at Subject and ensure that stakeholders-from executives to course designers-can make confident, data-informed decisions.
What You'll Build:
Scalable Data Transformation Infrastructure
Design and optimize dbt models that handle 100M+ daily events
Build modular, tested transformation pipelines that reduce compute costs by 70%+
Create data quality frameworks and governance standards that make our warehouse reliable
Architect incremental models that process only what's changed
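The incremental-model item above would normally be a dbt incremental materialization; the underlying watermark idea can be sketched in plain Python, with all file and column names hypothetical:

```python
# The watermark idea behind an incremental model: only rows newer than
# the last processed timestamp are transformed. Names are hypothetical;
# in dbt this would be an incremental materialization.
import pandas as pd

def load_watermark(path="watermark.txt"):
    try:
        return pd.Timestamp(open(path).read().strip())
    except FileNotFoundError:
        return pd.Timestamp.min  # first run processes everything

def save_watermark(ts, path="watermark.txt"):
    open(path, "w").write(str(ts))

events = pd.read_parquet("events.parquet")  # hypothetical raw source
wm = load_watermark()
new_rows = events[events["event_ts"] > wm]  # process only what's changed
if not new_rows.empty:
    transformed = new_rows.assign(day=new_rows["event_ts"].dt.date)
    # A real pipeline would append to the target rather than overwrite.
    transformed.to_parquet("events_transformed.parquet", index=False)
    save_watermark(new_rows["event_ts"].max())
```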
High-Performance Analytics Dashboards
Build Looker dashboards that load quickly
Design Hex notebooks turning hours-long reports into one-click updates
Create self-service analytics empowering teams to answer their own questions
Develop real-time monitoring alerting teams to critical student engagement changes
Intelligent Data Models for AI-Powered Learning
Design dimensional models enabling rapid exploration of learning patterns
Build feature stores feeding AI systems with clean, timely learning signals
Create cohort analysis frameworks revealing which interventions work for which students (see the sketch after this list)
Architect data products bridging raw events and business intelligence
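As referenced in the cohort-analysis item above, here is a minimal, hypothetical pandas sketch of a weekly retention cohort table; the event file and column names are assumptions, not details from this posting.

```python
# Sketch of a cohort analysis: group students by start week and track
# weekly retention. Column names are hypothetical.
import pandas as pd

events = pd.read_parquet("learning_events.parquet")  # student_id, event_ts

# Each student's cohort is the week they were first seen.
first_seen = events.groupby("student_id")["event_ts"].min().rename("cohort_week")
events = events.join(first_seen, on="student_id")
events["cohort_week"] = events["cohort_week"].dt.to_period("W")
events["week_offset"] = (events["event_ts"].dt.to_period("W")
                         - events["cohort_week"]).apply(lambda d: d.n)

# Rows: cohorts; columns: active students N weeks after starting.
retention = (events.groupby(["cohort_week", "week_offset"])["student_id"]
             .nunique()
             .unstack(fill_value=0))
print(retention.head())
```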
Data Infrastructure That Scales
Write SQL optimized for millisecond response times
Build Python automation eliminating manual work and catching errors early
Design orchestration workflows that run reliably and recover gracefully
Optimize cloud costs while improving performance
The Technical Stack:
dbt - Transformation layer (50% of your time)
SQL (PostgreSQL) - Complex analytical queries, performance tuning
Python - pandas, numpy, matplotlib for analysis and automation
Hex - Interactive notebooks for reporting
Looker - Business intelligence and dashboards
Cloud Data Warehouse - BigQuery
You'll work with billions of learning events, student performance data, video engagement metrics, assessment results, and feedback loops.
What We're Looking For:
Required Experience
3-5+ years in analytics engineering or data analytics building production systems
Advanced SQL mastery - Elegant, performant queries. Understanding query plans and optimization
dbt expertise - Built and maintained dbt projects with 100+ models
Python proficiency - pandas, numpy, automation, and data pipelines
BI tool experience - Production dashboards in Looker, Tableau, or similar
Data modeling skills - Dimensional models, normalization tradeoffs, schemas that scale
The Mindset We Need
Performance obsession - Can't stand slow dashboards or inefficient queries
User empathy - Build for people who need insights, not just technical elegance
Systems thinking - Optimize the entire data pipeline from source to dashboard
Ownership mentality - Maintain what you build, not just ship and move on
Educational curiosity - Genuine interest in learning science and student success
Collaborative spirit - Explain concepts clearly and elevate team data literacy
Bonus Points
Education data or student analytics experience
Data science or ML workflow exposure
Cloud platform experience (GCP, AWS, Azure)
Reverse ETL or operational analytics
Analytics engineering open source contributions
Why This Role Matters:
Your dashboards inform decisions affecting 5 million students
Your optimizations save hundreds of engineering hours monthly
Your data models power AI personalization for each student
Your work helps teachers understand and improve outcomes
Compensation & Benefits
Base Salary: $140K - $180K based on experience
Equity: Meaningful ownership that grows with your impact
Performance Bonus: Tied to infrastructure improvements and outcomes
Health & Wellness: Comprehensive coverage, gym membership, daily meals
Location: Los Angeles, CA (in-office preferred)
Ready to Build Education's Data Foundation? This isn't just another analytics role. Define how a category leader uses data, build infrastructure that becomes industry-standard, and improve educational outcomes for millions. Apply now and transform education through data.
Senior Software Engineer
Data engineer job in Irvine, CA
The Sr. Software Engineer will be responsible for the design/implementation of new software applications, maintenance and enhancement of various software products / solutions. They assist in successful execution of projects with minimal direction and guidance.
What You'll Be Doing
Spend 90% of your time actively designing and coding in support of the immediate team. 10% of your time will be spent researching new technology, coaching, and mentoring other engineers.
As a senior member of the development team, provide feedback and training where necessary, and ensure that technical initiatives align with organizational goals, working closely with Principal Engineers / Development Managers.
As a Full Stack Engineer assigned to the product/project, ensure performance, maintainability, and functional requirements are met from design and development through testing, rollout, and support.
Work with cross-engineering staff, collaborating on hardware and system monitoring requirements to ensure expected performance and reliability of the application / system developed.
Proactively communicate and work to mitigate changes to project timelines, degradation in performance of applications, troubleshooting / problem solving production issues.
Education
The Ideal Candidate:
Bachelor's degree in Computer Science, Engineering or related industry experience
Experience
A minimum of 6 years of professional software development experience in business process automation applications.
A minimum of 5 years' experience in .NET, C#, Windows tools and languages, as well as modern web frameworks (Angular via TypeScript, React, Vue)
Understanding of data repository models is a must. Understanding of SQL and NoSQL is preferred.
Understanding of Agile methodologies, Domain Driven Design, Test/Behavior Driven Design, Event Driven via Asynchronous messaging approaches, microservice architecture.
Preferred Experience
ASP.NET, WCF, Web Services, NServiceBus, Azure Cloud, Infrastructure as Code (IaC)
DevOps experience as a full stack developer owning the Software Development Lifecycle.
Strong understanding and experience writing unit and integration tests for all code produced.
Specialized Skills
Can effectively lead technical initiatives and collaborative design/requirements meetings while gathering the necessary information for software development.
Ownership and accountability mindset, strong decision making along with communication and analytical skills that helps to partner with Product Owners and cross functional teams.
Leadership in project execution and delivery. Must be an excellent team player with the ability to handle stressful situations.
The individual has deep expertise in their chosen technology stack and a broader knowledge of various programming languages, frameworks, and tools.
Brings a wealth of experience and a nuanced understanding of the specific domain, enabling insightful decisions and innovative problem-solving.
Ability to break up larger projects into individual pieces, assess complexity of each piece, and balance the work amongst team members.
Ability to work in fast paced / flexible environment that practices SAFe / Agile based SDLC.
Sets high standards for behavior and performance, models the values and principles of the organization, and inspires others through action.
Practices Test Driven Design leveraging unit tests, mocks, and data factories.
Experience with event driven design and microservice architecture best practices.
Possesses a strong sense of interpersonal awareness, has a bias for action, builds trust, is technically deep, and has good judgment.
Pay Range: $111k - $165k
The specific compensation for this position will be determined by a number of factors, including the scope, complexity and location of the role as well as the cost of labor in the market; the skills, education, training, credentials and experience of the candidate; and other conditions of employment. Our full-time consultants have access to benefits including medical, dental, vision as well as 401K contributions.
Senior Software Engineer
Data engineer job in Orange, CA
Job Title: Sr. Software Engineer
Reports to: CTO
FLSA Status: Full-time, Exempt
About Our Organization: RIS Rx (pronounced “RISE”) is a healthcare technology organization with a strong imprint in the patient access and affordability space. RIS Rx has quickly become an industry leader in delivering impactful solutions to stakeholders across the healthcare continuum. RIS Rx is proud to offer an immersive service portfolio to help address common access barriers. We don't believe in a “one size fits all” approach to our service offerings. Our philosophy is to bring forward innovation, value and service to everything that we do. This approach has allowed us to have the opportunity to serve countless patients to help produce better treatment outcomes and an overall improved quality of life. Here at RIS Rx, we invite our partners and colleagues to “Rise Up” with us to bring accessible healthcare and solutions for all.
Job Summary
We are seeking a highly skilled Senior Software Engineer to lead the design, development, and optimization of advanced technology solutions that address revenue leakage and operational challenges for pharmaceutical manufacturers. This role will play a key part in shaping scalable healthcare technology platforms, mentoring engineering talent, and driving architectural and process improvements. The Senior Software Engineer will collaborate with cross-functional teams, including product, clinical, and operations stakeholders to deliver secure, high-quality, and innovative software solutions. The ideal candidate is a hands-on technical leader with expertise in modern software development practices, cloud-native architectures, and healthcare or pharmaceutical systems.
Responsibilities
Lead the design, development, and maintenance of complex technology solutions that identify and mitigate gross-to-net (GTN) revenue leakage for pharmaceutical manufacturers
Mentor junior engineers and provide technical guidance on architecture decisions, code quality, and best practices
Collaborate with cross-functional teams including product managers, pharmacists, operations, and other software engineers to deliver high-quality software solutions
Drive technical initiatives and lead architectural discussions for scalable healthcare technology platforms serving multiple pharmaceutical manufacturers
Write clean, efficient, and well-documented code following established coding standards and best practices while establishing new standards for the team
Lead code reviews to ensure code quality, maintainability, and knowledge sharing across the team
Debug and troubleshoot complex software issues, implementing fixes and optimizations for mission-critical systems
Provide advanced production support for systems, including monitoring, incident response, resolution of critical issues, and post-incident analysis
Research and evaluate emerging technologies and industry trends, making recommendations for technology adoption and development process improvements
Lead agile development processes including sprint planning, daily standups, and retrospectives, while coaching team members on agile best practices
Skills
5+ years of experience in software development with advanced proficiency in languages like TypeScript and frameworks like React
Strong commitment to software quality with deep understanding of design patterns, clean code practices, and software architecture principles
Advanced experience with AWS cloud services, infrastructure-as-code, and cloud-native development patterns
Experience with database systems like PostgreSQL, SQL query optimization, and data modeling
Advanced experience with web development technologies including HTML/CSS and modern JavaScript frameworks
Experience leading technical projects and mentoring other developers
Proven experience leading Agile/Scrum teams and development practices
Experience with system design, scalability considerations, and performance optimization
Understanding of healthcare data standards and pharmaceutical industry processes preferred
Worked on projects that used CI/CD pipelines, automated testing, and DevOps practices
Strong leadership and mentoring skills with ability to guide technical decision-making
Excellent problem-solving skills and ability to work independently while leading cross-functional initiatives
Exceptional communication skills and ability to explain complex technical concepts to both technical and non-technical stakeholders
Education
This position requires a Bachelor's degree in Computer Science, Software Engineering, or a related technical field
Senior Staff Software Engineer
Data engineer job in Los Angeles, CA
We are seeking a highly experienced Senior Staff Software Engineer to lead and deliver complex technical projects from inception to deployment. This role requires a strong background in software architecture, hands-on development, and technical leadership across the full software development lifecycle.
This role is with a fast-growing technology company pioneering AI-driven solutions for real-world infrastructure. Backed by significant recent funding and valued at over $5 billion, the company is scaling rapidly across multiple verticals, including mobility, retail, and hospitality. Its platform leverages computer vision and cloud technologies to create frictionless, intelligent experiences, positioning it as a leader in the emerging Recognition Economy-a paradigm where physical environments adapt in real time to user presence and context.
Required Qualifications
10+ years of professional software engineering experience.
Proven track record of leading and delivering technical projects end-to-end.
Strong proficiency in Java or Scala.
Solid understanding of cloud technologies (AWS, GCP, or Azure).
Experience with distributed systems, microservices, and high-performance applications.
Preferred / Bonus Skills
Advanced expertise in Scala.
Prior experience mentoring engineers and building high-performing teams.
Background spanning FAANG companies or high-growth startups.
Exposure to AI/ML or general AI technologies.