Data Scientist
Data engineer job in Long Beach, CA
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India. We are seeking a highly analytical and technically skilled Data Scientist to transform complex, multi-source data into unified, actionable insights used for executive reporting and decision-making.
This role requires expertise in business intelligence design, data modeling, metadata management, data integrity validation, and the development of dashboards, reports, and analytics used across operational and strategic environments.
The ideal candidate thrives in a fast-paced environment, demonstrates strong investigative skills, and can collaborate effectively with technical teams, business stakeholders, and leadership.
Essential Duties & Responsibilities
As a Data Scientist, participate across the full solution lifecycle: business case, planning, design, development, testing, migration, and production support.
Analyze large and complex datasets with accuracy and attention to detail.
Collaborate with users to develop effective metadata and data relationships.
Identify reporting and dashboard requirements across business units.
Determine strategic placement of business logic within ETL or metadata models.
Build enterprise data warehouse metadata/semantic models.
Design and develop unified dashboards, reports, and data extractions from multiple data sources.
Develop and execute testing methodologies for reports and metadata models.
Document BI architecture, data lineage, and project report requirements.
Provide technical specifications and data definitions to support the enterprise data dictionary.
Apply analytical skills and Data Science techniques to understand business processes, financial calculations, data flows, and application interactions.
Identify and implement improvements, workarounds, or alternative solutions related to ETL processes, ensuring integrity and timeliness.
Create UI components or portal elements (e.g., SharePoint) for dynamic or interactive stakeholder reporting.
Query and process SQL database data to build Power BI or Tableau reports, including reports for cybersecurity awareness campaigns (a brief example follows this list).
Utilize SQL, Python, R, or similar languages for data analysis and modeling.
Support process optimization through advanced modeling.
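To ground the SQL-to-BI duties above, here is a minimal sketch of pulling SQL Server data into Python and writing an extract that a Power BI or Tableau report could consume. The connection string, table, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: pull SQL Server data into pandas and write an extract
# for Power BI or Tableau. Server, database, table, and column names
# are hypothetical.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=reporting-db.example.com;DATABASE=Analytics;"
    "Trusted_Connection=yes;"
)

query = """
    SELECT campaign_id, department, completion_rate, reported_phish_rate
    FROM awareness_campaigns
    WHERE fiscal_year = 2024
"""
df = pd.read_sql(query, conn)

# Light validation before the BI layer sees the data.
df = df.dropna(subset=["campaign_id"])
df["completion_rate"] = df["completion_rate"].clip(0, 1)

df.to_csv("awareness_campaigns_extract.csv", index=False)
```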
Required Knowledge & Attributes
Highly self-motivated with strong organizational skills and ability to manage multiple verbal and written assignments.
Experience collaborating across organizational boundaries for data sourcing and usage.
Analytical understanding of business processes, forecasting, capacity planning, and data governance.
Proficient with BI tools (Power BI, Tableau, PBIRS, SSRS, SSAS).
Strong Microsoft Office skills (Word, Excel, Visio, PowerPoint).
High attention to detail and accuracy.
Ability to work independently, demonstrate ownership, and ensure high-quality outcomes.
Strong communication, interpersonal, and stakeholder engagement skills.
Deep understanding that data integrity and consistency are essential for adoption and trust.
Ability to shift priorities and adapt within fast-paced environments.
Required Education & Experience
Bachelor's degree in Computer Science, Mathematics, or Statistics (or equivalent experience).
3+ years of BI development experience.
3+ years with Power BI and supporting Microsoft stack tools (SharePoint 2019, PBIRS/SSRS, Excel 2019/2021).
3+ years of experience with SDLC/project lifecycle processes.
3+ years of experience with data warehousing methodologies (ETL, Data Modeling).
3+ years of VBA experience in Excel and Access.
Strong ability to write SQL queries and work with SQL Server 2017-2022.
Experience with BI tools including PBIRS, SSRS, SSAS, Tableau.
Strong analytical skills in business processes, financial modeling, forecasting, and data flow understanding.
Critical thinking and problem-solving capabilities.
Experience producing high-quality technical documentation and presentations.
Excellent communication and presentation skills, with the ability to explain insights to leadership and business teams.
Benefits
Medical coverage and Health Savings Account (HSA) through Anthem
Dental/Vision/Various Ancillary coverages through Unum
401(k) retirement savings plan
Paid-time-off options
Company-paid Employee Assistance Program (EAP)
Discount programs through ADP WorkforceNow
Additional Details
The base range for this contract position is $73-$83 per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Qualified applicants with arrest or conviction records will be considered.
About Us
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States and globally with offices in Los Angeles, Atlanta, New York, Mexico, Japan, India, and more. STAND 8 focuses on the "bleeding edge" of technology and leverages automation, process, marketing, and over fifteen years of success and growth to provide a world-class experience for our customers, partners, and employees.
Our mission is to impact the world positively by creating success through PEOPLE, PROCESS, and TECHNOLOGY.
Check out more at ************** and reach out today to explore opportunities to grow together!
By applying to this position, your data will be processed in accordance with the STAND 8 Privacy Policy.
Data Scientist
Data engineer job in Alhambra, CA
Title: Principal Data Scientist
Duration: 12 Months Contract
Additional Information
California Resident Candidates Only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
Job description:
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.
The Principal Data Scientist will possess:
Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.
Experience Required:
Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring (a brief illustration follows this list).
Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
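As a rough illustration of the MLOps expectations above, here is a minimal MLflow Tracking and Model Registry sketch; the experiment name, model choice, and registered model name are hypothetical, and registering a model assumes a tracking server with a registry backend.

```python
# Minimal MLflow sketch: log parameters and metrics for a run, then
# register the model so versions can be promoted or rolled back.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)

    # Registry call assumes a tracking server with a registry backend.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-rf")
```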
Education Required & certifications:
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one industry-recognized certification in data science or cloud analytics is also required, such as:
Microsoft Azure Data Scientist Associate (DP-100)
Databricks Certified Data Scientist or Machine Learning Professional
AWS Machine Learning Specialty
Google Professional Data Engineer
or equivalent advanced analytics certifications.
The certification is required and may not be substituted with additional experience.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Name: T Saketh Ram Sharma
Email: *****************************
Internal Id: 25-54101
Principal Data Scientist
Data engineer job in Alhambra, CA
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions.
The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.
The Principal Data Scientist will possess:
Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.
5+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
3+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
3+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
2+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
2+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
2+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
2+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
1+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
Education:
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis.
At least one industry-recognized certification in data science or cloud analytics is required, such as:
• Microsoft Azure Data Scientist Associate (DP-100)
• Databricks Certified Data Scientist or Machine Learning Professional
• AWS Machine Learning Specialty
• Google Professional Data Engineer
• or equivalent advanced analytics certifications.
The certification is required and may not be substituted with additional experience.
Data Engineer
Data engineer job in Culver City, CA
Robert Half is partnering with a well-known high-tech company seeking an experienced Data Engineer with strong Python and SQL skills. The primary duties involve managing the complete data lifecycle and utilizing extensive datasets across marketing, software, and web platforms. This position is full-time with full benefits and 3 days onsite in the Culver City area.
Responsibilities:
4+ years of professional experience, ideally in a combination of data engineering and business intelligence.
Working heavily with SQL and programming in Python.
Ownership mindset to oversee the entire data lifecycle, including collection, extraction, and cleansing processes.
Building reports and data visualization to help advance business.
Leverage industry-standard tools for data integration such as Talend.
Work extensively within cloud-based ecosystems such as AWS and GCP.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering, data warehousing, and big data technologies.
Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL Technologies.
Experience working within GCP environments and AWS.
Experience in real-time data pipeline tools.
Hands-on expertise with Google Cloud services including BigQuery.
Deep knowledge of SQL, including dimension tables, and experience in Python programming.
Senior Data Engineer - Commerce Data Pipelines
Data engineer job in Santa Monica, CA
City: Seattle, WA/ Santa Monica, CA or NYC
Onsite/ Hybrid/ Remote: Hybrid (4 days a week onsite, Friday - Remote)
Duration: 10 months
Rate Range: Up to $92.50/hr on W2 depending on experience (no C2C or 1099 or sub-contract)
Work Authorization: GC, USC, All valid EADs except OPT, CPT, H1B
Must Have:
• SQL
• ETL design and development
• Data modeling (dimensional and normalization)
• ETL orchestration tools (Airflow or similar)
• Data Quality frameworks
• Performance tuning for SQL and ETL
• Python or PySpark
• Snowflake or Redshift
Responsibilities:
• Partner with business, analytics, and infrastructure teams to define data and reporting requirements.
• Collect data from internal and external systems and design table structures for scalable data solutions.
• Build, enhance, and maintain ETL pipelines with strong performance and reliability.
• Develop automated Data Quality checks and support ongoing pipeline monitoring (see the sketch after this list).
• Implement database deployments using tools such as schemachange.
• Conduct SQL and ETL tuning and deliver ad hoc analysis as needed.
• Support Agile ceremonies and collaborate in a fast-paced environment.
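As a sketch of the orchestration and Data Quality pattern described above, here is what a minimal Airflow DAG (2.4+ TaskFlow API) with an explicit quality gate might look like; the DAG name, tasks, and check are hypothetical.

```python
# Minimal Airflow sketch: extract -> transform -> quality gate -> load.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def commerce_orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling rows from an internal or external system.
        return [{"order_id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        return [r for r in rows if r["amount"] >= 0]

    @task
    def quality_check(rows: list[dict]) -> list[dict]:
        # Fail the run loudly rather than load bad data downstream.
        if not rows:
            raise ValueError("Data quality check failed: no rows to load")
        return rows

    @task
    def load(rows: list[dict]) -> None:
        print(f"Would load {len(rows)} rows into Snowflake/Redshift")

    load(quality_check(transform(extract())))


commerce_orders_pipeline()
```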
Qualifications:
• 3+ years of data engineering experience.
• Strong grounding in data modeling, including dimensional models and normalization.
• Deep SQL expertise with advanced tuning skills.
• Experience with relational or distributed data systems such as Snowflake or Redshift.
• Familiarity with ETL/orchestration platforms like Airflow or NiFi.
• Programming experience with Python or PySpark.
• Strong analytical reasoning, communication skills, and ability to work cross-functionally.
• Bachelor's degree required.
Senior Data Consultant - Supply Chain Planning
Data engineer job in Corona, CA
🚀 We're Hiring: Senior Data Consultant (Supply Chain Planning)
Bristlecone, a Mahindra company, is a leading supply chain and business analytics advisor, rated by Gartner as one of the top ten system integrators in the supply chain space. We have been a trusted partner to global enterprises such as Applied Materials, Exxon Mobil, Flextronics, Nestle, Unilever, Whirlpool, and many others.
🔍 Project Overview:
We are looking for a strong Data Consultant to support our planning projects. The ideal candidate will have a solid understanding of planning processes and data management within a supply chain or business planning environment. While deep configuration knowledge of SAP IBP is not mandatory, the consultant must have a strong grasp of planning data, business rules, and their impact on planning outcomes.
This is a strategic initiative aimed at transforming planning processes across Raw Materials, Finished Goods, and Packaging materials. You'll be the go-to expert for managing end-to-end planning data across SAP IBP and ECC systems (SD, MM, PP).
🛠️ Key Responsibilities:
Collaborate with planning teams to analyze, validate, and manage data relevant to planning processes.
Demonstrate a clear understanding of basic planning functionalities and how data supports them.
Identify, define, and manage data elements that impact demand, supply, and inventory planning.
Understand and document business rules and prerequisites related to data maintenance and planning accuracy.
Coordinate data collection activities from super users and end users across multiple functions.
Support data readiness for project milestones including testing, validation, and go-live.
Explain how different data elements influence planning outcomes to non-technical stakeholders.
Work closely with functional and technical teams to ensure data integrity and consistency across systems.
Required Skills & Qualifications:
Strong understanding of planning processes (demand, supply, or S&OP).
Proven experience working with planning master data (e.g., product, location, BOM, resources).
Ability to analyze complex datasets and identify inconsistencies or dependencies.
Excellent communication and coordination skills with cross-functional teams.
Exposure to SAP IBP, APO, or other advanced planning tools (preferred but not mandatory).
Strong business acumen with the ability to link data quality to planning outcomes.
5-10 years of relevant experience in data management, planning, or supply chain roles.
Preferred Qualifications:
Experience with large-scale planning transformation or ERP implementation projects.
Knowledge of data governance and data quality frameworks.
Experience in working with super users/end users for data validation and readiness.
Privacy Notice Declarations for California-based candidates/jobs: ********************************************************
Senior Data Engineer - Snowflake / ETL (Onsite)
Data engineer job in Beverly Hills, CA
CGS Business Solutions is committed to helping you, as an esteemed IT Professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking Technical Professionals who are searching for challenging and rewarding jobs for the following opportunity:
Summary
CGS is hiring for a Senior Data Engineer to serve as a core member of the Platform team. This is a high-impact role responsible for advancing our foundational data infrastructure.
Your primary mission will be to build key components of our Policy Journal - the central source of truth for all policy, commission, and client accounting data. You'll work closely with the Lead Data Engineer and business stakeholders to translate complex requirements into scalable data models and reliable pipelines that power analytics and operational decision-making for agents, managers, and leadership.
This role blends greenfield engineering, strategic modernization, and a strong focus on delivering trusted, high-quality data products.
Overview
• Build the Policy Journal - Design and implement the master data architecture unifying policy, commission, and accounting data from sources like IVANS and Applied EPIC to create the platform's “gold record.”
• Ensure Data Reliability - Define and implement data quality checks, monitoring, and alerting to guarantee accuracy, consistency, and timeliness across pipelines - while contributing to best practices in governance.
• Build the Analytics Foundation - Enhance and scale our analytics stack (Snowflake, dbt, Airflow), transforming raw data into clean, performant dimensional models for BI and operational insights.
• Modernize Legacy ETL - Refactor our existing Java + SQL (PostgreSQL) ETL system - diagnose duplication and performance issues, rewrite critical components in Python, and migrate orchestration to Airflow.
• Implement Data Quality Frameworks - Develop automated testing and validation frameworks aligned with our QA strategy to ensure accuracy, completeness, and integrity across pipelines.
• Collaborate on Architecture & Design - Partner with product and business stakeholders to deeply understand requirements and design scalable, maintainable data solutions.
Ideal Experience
• 5+ years of experience building and operating production-grade data pipelines.
• Expert-level proficiency in Python and SQL.
• Hands-on experience with the modern data stack - Snowflake/Redshift, Airflow, dbt, etc.
• Strong understanding of AWS data services (S3, Glue, Lambda, RDS).
• Experience working with insurance or insurtech data (policies, commissions, claims, etc.).
• Proven ability to design robust data models (e.g., dimensional modeling) for analytics.
• Pragmatic problem-solver capable of analyzing and refactoring complex legacy systems (ability to read Java/Hibernate is a strong plus - but no new Java coding required).
• Excellent communicator comfortable working with both technical and non-technical stakeholders.
Huge Plus!
• Direct experience with Agency Management Systems (Applied EPIC, Nowcerts, EZLynx, etc.)
• Familiarity with carrier data formats (ACORD XML, IVANS AL3)
• Experience with BI tools (Tableau, Looker, Power BI)
About CGS Business Solutions: CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus on IT Applications, Network Infrastructure, Information Security, and Engineering. CGS is an INC 5000 company and is honored to be selected as one of the Best IT Recruitment Firms in California. After five consecutive Fastest Growing Company titles, CGS continues to break into new markets across the USA. Companies are counting on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
Lead Data Engineer - (Automotive exp)
Data engineer job in Torrance, CA
Role: Sr Technical Lead
Duration: 12+ Month Contract
Daily Tasks Performed:
Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product.
Architect solutions that integrate with various data sources, APIs, and third-party platforms.
Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis
Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services
Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions
Implement data quality checks, logging, and alerting mechanisms within workflows
Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities (see the sketch after this list)
Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation.
Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA).
Ensure adherence to security, compliance, and data governance requirements.
Oversee development of real-time and batch data processing systems.
Collaborate with cross-functional teams including data analysts, product managers, and software engineers to translate business requirements into technical solutions
Collaborate with stakeholders to define technical requirements, align technical solutions with business goals, and deliver product features.
Mentor and guide developers, fostering a culture of technical excellence and continuous improvement.
Troubleshoot complex technical issues and provide hands-on support as needed.
Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency as needed
Optimize system performance, scalability, and cost efficiency.
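To illustrate the AWS-side work referenced in the list above, here is a minimal boto3 sketch that runs an Athena query and polls for the result; the region, database, table, and results bucket are all hypothetical.

```python
# Minimal Athena sketch: submit a query, wait for it, print the rows.
import time

import boto3

athena = boto3.client("athena", region_name="us-west-2")  # hypothetical region

resp = athena.start_query_execution(
    QueryString="SELECT channel, COUNT(*) AS events FROM events GROUP BY channel",
    QueryExecutionContext={"Database": "cdp"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```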
What this person will be working on:
As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data.
Position Success Criteria (Desired) - 'WANTS'
Bachelor's or Master's degree in Computer Science, Engineering, or related field.
8+ years of software development experience, with 3+ years in a technical leadership role.
Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains
Extensive hands-on experience with Presto, Hive, and Python
Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis
Familiarity with AWS data services such as S3, Athena, Glue, and Lambda
Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing
Experience ensuring data privacy, security, and compliance in SaaS environments
Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools
Excellent communication, leadership, and project management skills
Experience working with Agile methodologies and DevOps practices
Ability to thrive in a fast-paced, agile environment
Collaborative mindset with a proactive approach to problem-solving
Stay current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
Data Engineer (AWS Redshift, BI, Python, ETL)
Data engineer job in Manhattan Beach, CA
We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.
Responsibilities:
Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
Design and enhance data warehouse models (star/snowflake schemas) and BI datasets (see the sketch after this list).
Optimize data workflows for performance, scalability, and reliability.
Collaborate with BI teams to support dashboards, reporting, and analytics needs.
Ensure data quality, governance, and documentation across all solutions.
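As a minimal illustration of the star-schema modeling mentioned above, here is one fact table joined to two dimensions, created through a hypothetical psycopg2 connection; every table, column, and credential is invented for the example.

```python
# Minimal star-schema sketch: two dimensions plus one fact table.
import psycopg2

DDL = """
CREATE TABLE dim_date (
    date_key        INT PRIMARY KEY,
    full_date       DATE NOT NULL,
    fiscal_quarter  SMALLINT NOT NULL
);

CREATE TABLE dim_product (
    product_key   INT PRIMARY KEY,
    product_name  VARCHAR(255) NOT NULL,
    category      VARCHAR(100) NOT NULL
);

CREATE TABLE fact_sales (
    date_key     INT REFERENCES dim_date (date_key),
    product_key  INT REFERENCES dim_product (product_key),
    units_sold   INT NOT NULL,
    revenue      NUMERIC(12, 2) NOT NULL
);
"""

conn = psycopg2.connect("dbname=warehouse user=etl")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute(DDL)
```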
Qualifications:
Proven experience with data engineering tools (SQL, Python, ETL frameworks).
Strong understanding of BI concepts, reporting tools, and dimensional modeling.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
Excellent problem-solving skills and ability to work in a cross-functional environment.
Lead Data Scientist
Data engineer job in Alhambra, CA
Role: Principal Data Scientist
Duration: 12+ Months contract
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.
The Principal Data Scientist will possess:
• Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
• Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
• Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
• Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
• A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
• A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
• Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.
Required Experience
• Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
• Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
• Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
• Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
• Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
• Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
• Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
• One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
Education
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis.
At least one of the following industry-recognized certifications in data science or cloud analytics, such as:
• Microsoft Azure Data Scientist Associate (DP-100)
• Databricks Certified Data Scientist or Machine Learning Professional
• AWS Machine Learning Specialty
• Google Professional Data Engineer
• or equivalent advanced analytics certifications.
The certification is required and may not be substituted with additional experience.
Additional Information
• California Resident Candidates Only.
This position is HYBRID (2 days onsite, 2 days telework).
Interviews will be conducted via Microsoft Teams.
The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager.
Shifts may range between 7:15 a.m. and 6:00 p.m.
Snowflake/AWS Data Engineer
Data engineer job in Irvine, CA
Sr. Data Engineer
Full Time Direct Hire Job
Hybrid with work location-Irvine, CA.
The Senior Data Engineer will help design and build a modern data platform that supports enterprise analytics, integrations, and AI/ML initiatives. This role focuses on developing scalable data pipelines, modernizing the enterprise data warehouse, and enabling self-service analytics across the organization.
Key Responsibilities
• Build and maintain scalable data pipelines using Snowflake, dbt, and Fivetran (see the sketch after this list).
• Design and optimize enterprise data models for performance and scalability.
• Support data cataloging, lineage, quality, and compliance efforts.
• Translate business and analytics requirements into reliable data solutions.
• Use AWS (primarily S3) for storage, integration, and platform reliability.
• Perform other data engineering tasks as needed.
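As a rough sketch of how this stack fits together, here is a minimal Snowflake transformation step via the Python connector (in practice dbt would typically own this layer, with Fivetran landing the raw data); the account, credentials, and table names are hypothetical.

```python
# Minimal Snowflake sketch: build a clean dimension on top of raw
# Fivetran-landed data. All identifiers are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="ETL_SVC",
    password="***",  # use a secrets manager in real pipelines
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)

conn.cursor().execute("""
    CREATE OR REPLACE TABLE ANALYTICS.MARTS.DIM_CUSTOMER AS
    SELECT
        customer_id,
        INITCAP(customer_name) AS customer_name,
        MAX(updated_at)        AS last_updated
    FROM RAW.FIVETRAN.CUSTOMERS
    GROUP BY customer_id, INITCAP(customer_name)
""")
```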
Required Qualifications
• Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field.
• 5+ years of data engineering experience.
• Hands-on expertise with Snowflake, dbt, and Fivetran.
• Strong background in data warehousing, dimensional modeling, and SQL.
• Experience with AWS (S3) and data governance tools such as Alation or Atlan.
• Proficiency in Python for scripting and automation.
• Experience with streaming technologies (Kafka, Kinesis, Flink) a plus.
• Knowledge of data security and compliance best practices.
• Exposure to AI/ML workflows and modern BI tools like Power BI, Tableau, or Looker.
• Ability to mentor junior engineers.
Skills
• Snowflake
• dbt
• Fivetran
• Data modeling and warehousing
• AWS
• Data governance
• SQL
• Python
• Strong communication and cross-functional collaboration
• Interest in emerging data and AI technologies
Data Engineer
Data engineer job in Irvine, CA
Job Title: Data Engineer
Duration: Direct-Hire Opportunity
We are looking for a Data Engineer who is hands-on, collaborative, and experienced with Microsoft SQL Server, Snowflake, AWS RDS, and MySQL. The ideal candidate has a strong background in data warehousing, data lakes, ETL pipelines, and business intelligence tools.
This role plays a key part in executing data strategy - driving optimization, reliability, and scalable BI capabilities across the organization. It's an excellent opportunity for a data professional who wants to influence architectural direction, contribute technical expertise, and grow within a data-driven company focused on innovation.
Key Responsibilities
Design, develop, and maintain SQL Server and Snowflake data warehouses and data lakes, focusing on performance, governance, and security.
Manage and optimize database solutions within Snowflake, SQL Server, MySQL, and AWS RDS.
Build and enhance ETL pipelines using tools such as Snowpipe, DBT, Boomi, SSIS, and Azure Data Factory.
Utilize data tools such as SSMS, Profiler, Query Store, and Redgate for performance tuning and troubleshooting.
Perform database administration tasks, including backup, restore, and monitoring.
Collaborate with Business Intelligence Developers and Business Analysts on enterprise data projects.
Ensure database integrity, compliance, and adherence to best practices in data security.
Configure and manage data integration and BI tools such as Power BI, Tableau, Power Automate, and scripting languages (Python, R).
Qualifications
Proficiency with Microsoft SQL Server, including advanced T-SQL development and optimization.
7+ years working as a SQL Server Developer/Administrator, with experience in relational and object-oriented databases.
2+ years of experience with Snowflake data warehouse and data lake solutions.
Experience developing pipelines and reporting solutions using Power BI, SSRS, SSIS, Azure Data Factory, or DBT.
Scripting and automation experience using Python, PowerShell, or R.
Familiarity with data integration and analytics tools such as Boomi, Redshift, or Databricks (a plus).
Excellent communication, problem-solving, and organizational skills.
Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
Technical Skills
SQL Server / Snowflake / MySQL / AWS RDS
ETL Development (Snowpipe, SSIS, Azure Data Factory, DBT)
BI Tools (Power BI, Tableau)
Python, R, PowerShell
Data Governance & Security Best Practices
Determining compensation for this role (and others) at Vaco/Highspring depends upon a wide array of factors including but not limited to the individual's skill sets, experience and training, licensure and certifications, office location and other geographic considerations, as well as other business and organizational needs. With that said, as required by local law in geographies that require salary range disclosure, the salary range for the role is noted in this job posting. The individual may also be eligible for discretionary bonuses, and can participate in medical, dental, and vision benefits as well as the company's 401(k) retirement plan. Additional disclaimer: Unless otherwise noted in the job description, the position Vaco/Highspring is filling is occupied. Please note, however, that Vaco/Highspring is regularly asked to provide talent to other organizations. By submitting to this position, you are agreeing to be included in our talent pool for future hiring for similarly qualified positions. Submissions to this position are subject to the use of AI to perform preliminary candidate screenings, focused on ensuring minimum job requirements noted in the position are satisfied. Further assessment of candidates beyond this initial phase within Vaco/Highspring will be otherwise assessed by recruiters and hiring managers. Vaco/Highspring does not have knowledge of the tools used by its clients in making final hiring decisions and cannot opine on their use of AI products.
Data Analytics Engineer
Data engineer job in Irvine, CA
We are seeking a Data Analytics Engineer to join our team. Serving as a hybrid Database Administrator, Data Engineer, and Data Analyst, this role is responsible for managing core data infrastructure, developing and maintaining ETL pipelines, and delivering high-quality analytics and visual insights to executive stakeholders. It bridges technical execution with business intelligence, ensuring that data across Salesforce, financial, and operational systems is accurate, accessible, and strategically presented.
Essential Functions
Database Administration: Oversee and maintain database servers, ensuring performance, reliability, and security. Manage user access, backups, and data recovery processes while optimizing queries and database operations.
Data Engineering (ELT): Design, build, and maintain robust ELT pipelines (SQL/DBT or equivalent) to extract, transform, and load data across Salesforce, financial, and operational sources. Ensure data lineage, integrity, and governance throughout all workflows.
Data Modeling & Governance: Design scalable data models and maintain a governed semantic layer and KPI catalog aligned with business objectives. Define data quality checks, SLAs, and lineage standards to reconcile analytics with finance source-of-truth systems.
Analytics & Reporting: Develop and manage executive-facing Tableau dashboards and visualizations covering key lending and operational metrics - including pipeline conversion, production, credit quality, delinquency/charge-offs, DSCR, and LTV distributions.
Presentation & Insights: Translate complex datasets into clear, compelling stories and presentations for leadership and cross-functional teams. Communicate findings through visual reports and executive summaries to drive strategic decisions.
Collaboration & Integration: Partner with Finance, Capital Markets, and Operations to refine KPIs and perform ad-hoc analyses. Collaborate with Engineering to align analytical and operational data, manage integrations, and support system scalability.
Enablement & Training: Conduct training sessions, create documentation, and host data office hours to promote data literacy and empower business users across the organization.
Competencies & Skills
Advanced SQL proficiency with strong data modeling, query optimization, and database administration experience (PostgreSQL, MySQL, or equivalent).
Hands-on experience managing and maintaining database servers and optimizing performance.
Proficiency with ETL/ELT frameworks (DBT, Airflow, or similar) and cloud data stacks (AWS/Azure/GCP).
Strong Tableau skills - parameters, LODs, row-level security, executive-level dashboard design, and storytelling through data.
Experience with Salesforce data structures and ingestion methods.
Proven ability to communicate and present technical data insights to executive and non-technical stakeholders.
Solid understanding of lending/financial analytics (pipeline conversion, delinquency, DSCR, LTV).
Working knowledge of Python for analytics tasks, cohort analysis, and variance reporting (see the sketch after this list).
Familiarity with version control (Git), CI/CD for analytics, and data governance frameworks.
Excellent organizational, documentation, and communication skills with a strong sense of ownership and follow-through.
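To make the Python analytics expectation above concrete, here is a minimal pandas cohort-analysis sketch with invented loan data; the columns and values are purely illustrative.

```python
# Minimal cohort sketch: delinquency rate by origination month and age.
import pandas as pd

loans = pd.DataFrame({
    "loan_id": [1, 2, 3, 4],
    "originated": pd.to_datetime(["2024-01-15", "2024-01-20", "2024-02-02", "2024-02-28"]),
    "as_of": pd.to_datetime(["2024-03-01"] * 4),
    "delinquent": [0, 1, 0, 0],
})

loans["cohort"] = loans["originated"].dt.to_period("M")
loans["age_months"] = (
    (loans["as_of"].dt.year - loans["originated"].dt.year) * 12
    + (loans["as_of"].dt.month - loans["originated"].dt.month)
)

# Rows: origination cohort; columns: months on book; values: delinquency rate.
cohort_table = (
    loans.groupby(["cohort", "age_months"])["delinquent"].mean().unstack(fill_value=0)
)
print(cohort_table)
```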
Education & Experience
Bachelor's degree in Computer Science, Engineering, Information Technology, Data Analytics, or a related field.
3+ years of experience in data analytics, data engineering, or database administration roles.
Experience supporting executive-level reporting and maintaining database infrastructure in a fast-paced environment.
Data Engineer
Data engineer job in Irvine, CA
Thank you for stopping by to take a look at the Data Engineer role I posted here on LinkedIn, I appreciate it.
If you have read my JD's in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc, and I have been recruiting technical talent for more than 23 years; I have been in the tech space since the 1990s. Because of this, I actually write JD's myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very, very important. But we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine, and he said something interesting to me not long ago: if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So the ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused: it covers orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.
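As a sketch of that observability loop, here is how a pipeline process might expose health metrics with the prometheus_client library for Grafana to chart; the metric names and batch logic are hypothetical.

```python
# Minimal Prometheus sketch: expose throughput, latency, and freshness
# metrics on /metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

ROWS_PROCESSED = Counter("pipeline_rows_processed_total", "Rows processed")
LAST_SUCCESS = Gauge("pipeline_last_success_timestamp", "Unix time of last good run")
BATCH_LATENCY = Histogram("pipeline_batch_seconds", "Batch processing latency")

start_http_server(8000)  # serves /metrics on port 8000

while True:
    with BATCH_LATENCY.time():
        time.sleep(random.uniform(0.1, 0.5))  # stand-in for real batch work
        ROWS_PROCESSED.inc(1000)
    LAST_SUCCESS.set_to_current_time()
```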
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment-tracking tools.
At least 6 years of experience with T-SQL and Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP-203), or MLOps-focused certifications (e.g., Kubeflow or MLflow) would be great to see.
Mentor engineers on best practices in containerized data engineering and MLOps.
Big Data Engineer
Data engineer job in Santa Monica, CA
Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
Desired Skills/Experience:
Bachelor's degree in a STEM field such as Science, Technology, Engineering, or Mathematics
5+ years of relevant professional experience
4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
Strong understanding of system and application design, architecture principles, and distributed system fundamentals
Proven experience building highly available, scalable, and production-grade services
Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
Experience processing massive datasets at the petabyte scale
Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $52.00 and $75.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Senior Data Architect
Data engineer job in Torrance, CA
Title: Senior Data Architect
Duration: 12 Months
Pay rate: $90 per hour on W2
Daily Tasks Performed:
Translates high-level business requirements into data models and appropriate metadata, test data, and data quality standards.
Manages senior business stakeholders to secure strong engagement and ensures that the delivery of the project aligns with longer-term strategic roadmaps.
Leads and participates in the peer review and quality assurance of project architectural artifacts.
Defines and manages standards, guidelines, and processes to ensure data quality.
Works with IT teams, business analysts, and data analytics teams to understand data consumers' needs and develop solutions.
Evaluates and recommends emerging technologies for data management, storage, and analytics.
Establish and maintain governance frameworks for team and vendor partners to ensure the effectiveness of architecture services.
What this person will be working on:
Understand data confidentiality, security, and compliance needs, and apply data protection rules, including data sharing, filtering, and fencing at the storage, compute, and consumption layers.
Design data protection solutions at database, table, column level and APIs based on enterprise standard data security, privacy, architecture principles and reference architectures.
Design the structure and layout of data systems, including databases, warehouses, and lakes.
Select and implement database management systems that meet the organization's needs by defining data schemas, optimizing data storage, and establishing data access controls and security measures.
Deliver exceptional business value by enhancing data pipeline performance, ensuring timely orchestration, and upholding data governance.
Position Success Criteria (Desired) - 'WANTS'
A bachelor's degree in computer science, data science, engineering, or related field.
At least 10 years of relevant experience in design and implementation of data models (Erwin) for enterprise data warehouse initiatives
Experience leading projects involving cloud data lakes, data warehousing, data modeling, and data analysis
Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS), real-time data distribution (Kinesis, Kafka, Dataflow), and modern data warehouse tools (Redshift)
Experience with various database platforms, including DB2, MS SQL Server, PostgreSQL, MongoDB, etc.
Understanding of entity-relationship modeling, metadata systems, and data security and quality tools and techniques
Ability to design traditional relational, analytics, data lake, and lakehouse architectures based on business needs
Experience with business intelligence tools and technologies such as Informatica, Power BI, and Tableau
Exceptional communication and presentation skills
Strong analytical and problem-solving skills
Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders
Ability to guide solution design and architecture to meet business needs.
If you're interested in the above role, please send your updated resume to *******************************
Senior Data Engineer
Data engineer job in Los Angeles, CA
Robert Half is partnering with a well-known brand seeking an experienced Data Engineer with Databricks experience. Working alongside data scientists and software developers, your work will directly impact dynamic pricing strategies by ensuring the availability, accuracy, and scalability of data systems. This position is full-time with full benefits and 3 days onsite in the Woodland Hills, CA area.
Responsibilities:
Design, build, and maintain scalable data pipelines for dynamic pricing models.
Collaborate with data scientists to prepare data for model training, validation, and deployment.
Develop and optimize ETL processes to ensure data quality and reliability.
Monitor and troubleshoot data workflows for continuous integration and performance.
Partner with software engineers to embed data solutions into product architecture.
Ensure compliance with data governance, privacy, and security standards.
Translate stakeholder requirements into technical specifications.
Document processes and contribute to data engineering best practices.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
4+ years of experience in data engineering, data warehousing, and big data technologies.
Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
Must have experience in Databricks.
Experience working within Azure, AWS, or GCP environments.
Familiarity with big data tools like Spark, Hadoop, or Databricks.
Experience in real-time data pipeline tools.
Experience with Python.
Senior Data Engineer
Data engineer job in Glendale, CA
City: Glendale, CA
Onsite/ Hybrid/ Remote: Hybrid (3 days a week onsite, Friday - Remote)
Duration: 12 months
Rate Range: Up to $85/hr on W2 depending on experience (no C2C or 1099 or sub-contract)
Work Authorization: GC, USC, All valid EADs except OPT, CPT, H1B
Must Have:
• 5+ years Data Engineering
• Airflow
• Spark DataFrame API
• Databricks
• SQL
• API integration
• AWS
• Python or Java or Scala
Responsibilities:
• Maintain, update, and expand Core Data platform pipelines.
• Build tools for data discovery, lineage, governance, and privacy.
• Partner with engineering and cross-functional teams to deliver scalable solutions.
• Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
• Support platform standards, best practices, and documentation.
• Ensure data quality, reliability, and SLA adherence across datasets.
• Participate in Agile ceremonies and continuous process improvement.
• Work with internal customers to understand needs and prioritize enhancements.
• Maintain detailed documentation that supports governance and quality.
Qualifications:
• 5+ years in data engineering with large-scale pipelines.
• Strong SQL and one major programming language (Python, Java, or Scala).
• Production experience with Spark and Databricks.
• Experience ingesting and interacting with API data sources.
• Hands-on Airflow orchestration experience.
• Experience developing APIs with GraphQL.
• Strong AWS knowledge and infrastructure-as-code familiarity.
• Understanding of OLTP vs OLAP, data modeling, and data warehousing.
• Strong problem-solving and algorithmic skills.
• Clear written and verbal communication.
• Agile/Scrum experience.
• Bachelor's degree in a STEM field or equivalent industry experience.
Senior Data Engineer
Data engineer job in Glendale, CA
Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
Build tools and services to support data discovery, lineage, governance, and privacy
Collaborate with other software and data engineers and cross-functional teams
Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
Participate in agile and scrum ceremonies to collaborate and refine team processes
Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
Maintain detailed documentation of work and changes to support data quality and data governance requirements
Desired Skills/Experience:
5+ years of data engineering experience developing large data pipelines
Proficiency in at least one major programming language such as: Python, Java or Scala
Strong SQL skills and the ability to create queries to analyze complex datasets
Hands-on production experience with distributed processing systems such as Spark
Experience interacting with and ingesting data efficiently from API data sources
Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks (see the sketch after this list)
Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
Experience developing APIs with GraphQL
Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
Familiarity with data modeling techniques and data warehousing best practices
Strong algorithmic problem-solving skills
Excellent written and verbal communication skills
Advanced understanding of OLTP versus OLAP environments
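As a brief illustration of the Spark DataFrame work described above, here is a minimal PySpark sketch (on Databricks the SparkSession and Delta Lake support are provided); the S3 paths and column names are hypothetical.

```python
# Minimal Spark DataFrame sketch: aggregate raw events into a daily
# Delta table. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("core-data-example").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")

daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("events"))
)

# Writing as Delta assumes the Delta Lake libraries are on the cluster.
daily_counts.write.format("delta").mode("overwrite").save(
    "s3://example-bucket/curated/daily_event_counts/"
)
```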
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $51.00 and $73.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Software Engineer
Data engineer job in Burbank, CA
Sr. Software Engineer
Pay Range: $75/hour to $85/hour
Our team is seeking a Sr. Software Engineer who will be an important team member for our advertising machine learning platform, which focuses on prediction and optimization engines for Disney's addressable ad platforms. The right person for this role should be experienced in machine learning technologies and solid in backend services. If you are someone who is proactive, inquisitive, and innovative in these domains, this is a phenomenal role for you!
Responsibilities
Build next-gen experiment platform for advertising decisioning and A/B testing to fit evolving business needs
Build next-gen simulation platform to apply state-of-the-art solutions for complicated ad challenges and to further enhance business performance
Develop scalable and efficient approaches for large scale data analysis
Collaborate with researchers to productize cutting edge innovations
Design scalable distributed systems with performance, scalability, reusability and flexibility
Advocate best engineering practices, including the use of design patterns, CI/CD, code reviews, and automated testing
As a key member of the team, contribute to all aspects of the software lifecycle: design, experimentation, implementation and testing.
Collaborate with program managers, product managers, SDET, and researchers in an open and innovative environment
Basic Qualifications
At least 4 years of professional programming and design experience in Java, Python, Scala, etc.
Experience building highly available, scalable, industry-grade microservices
Knowledge of system, application design and architecture
Knowledge of big data processing and big data technologies
Passionate about understanding the ad business and seeking innovation opportunities to enhance business effectiveness.
Passionate about technology, and open to interdisciplinary collaborations
Preferred Qualifications
• Domain knowledge about advertising
• Knowledge of AI/ML technologies and typical technical stacks
• Experience with big data solutions like Airflow, Databricks, etc.
Required Education
Bachelor's Degree + 5 years of relevant experience