Data Engineer
Data engineer job in Pleasanton, CA
Job Title: Data Engineer
The hiring manager prefers the candidate to be on site in Pleasanton.
Proficiency in Spark, Python, and SQL is essential for this role. 10+ years of experience with relational databases such as Oracle, NoSQL databases including MongoDB and Cassandra, and big data technologies, particularly Databricks, is required. Strong knowledge of data modeling techniques is necessary for designing efficient and scalable data structures. Familiarity with APIs and web services, including REST and SOAP, is important for integrating various data sources and ensuring seamless data flow. This role involves leveraging these technical skills to build and maintain robust data pipelines and support advanced data analytics.
SKILLS:
- Spark/Python/SQL
- Relational Database (Oracle) / NoSQL Database (MongoDB/Cassandra) / Databricks
- Big Data technologies - Databricks preferred
- Data modeling techniques
- APIs and web services (REST/SOAP)
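The Spark/Python/SQL pipeline work described above can be pictured with a toy extract-transform-load step. This is only an illustrative sketch: stdlib sqlite3 stands in for Spark/Databricks so it is self-contained, and the table, fields, and sample records are invented.

```python
import sqlite3

# Toy ETL step illustrating the Python + SQL skills the posting lists.
# In the actual role this would run on Spark/Databricks; stdlib sqlite3
# stands in here so the sketch is self-contained.

raw_orders = [  # extract: pretend these came from an upstream feed
    {"order_id": 1, "region": "west", "amount": "120.50"},
    {"order_id": 2, "region": "east", "amount": "75.00"},
    {"order_id": 3, "region": "west", "amount": "none"},  # bad record
]

def transform(rows):
    """Drop rows whose amount does not parse to a float."""
    for r in rows:
        try:
            yield r["order_id"], r["region"], float(r["amount"])
        except ValueError:
            continue  # would route to a dead-letter table in a real pipeline

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", transform(raw_orders))

# serve: aggregate by region with plain SQL
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"))
print(totals)  # e.g. {'west': 120.5, 'east': 75.0}
```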
If interested, please share the details below along with an updated resume:
Full Name:
Phone:
E-mail:
Rate:
Location:
Visa Status:
Availability:
SSN (last 4 digits):
Date of Birth:
LinkedIn Profile:
Availability for the interview:
Availability for the project:
MDM Engineer III
Data engineer job in Dublin, CA
W2 Contract-to-Hire
Salary Range: $156,000 - $176,800 per year
The MDM Engineer III is responsible for the development, integration, and implementation of configuration processes, procedures, and solutions within the Master Data Management (MDM) platform. The candidate will develop solutions on the MDM platform, participate in system/design/code reviews, address configuration and administration issues, as well as directly influence the direction of the Product domain on the MDM platform. The candidate will also collaborate with other technology partners to design and build quality, highly scalable solutions/applications, as well as interact with business teams as part of Agile development.
Duties and Responsibilities:
Design, develop, and test incoming and outgoing data feeds, data modeling, governance, and system administration as it pertains to MDM.
Responsible for providing technical consulting to management, business analysts, and technical associates, while working with the integration, architecture, and business teams to deliver MDM solutions.
Deliver high-quality solutions independently, while working collaboratively to share knowledge and ideas, and adapt quickly to the needs of the business.
Partner with Data Governance & Operations teams to deliver based on program/project needs.
Drive the architecture, design, and delivery of the end-state MDM solution in a hands-on manner, including modeling the MDM domains.
Establish monitoring and reporting capabilities for the MDM platform.
Engage with all levels across IT to deliver an Enterprise MDM Program solution (Product), including cross-functional coordination.
Help lead master data integration activities, which include, but are not limited to, data cleansing, data creation, data conversion, issue resolution, and data validation.
Identify, manage, and communicate issues, risks, and dependencies to project management.
Configure the MDM solution in a hands-on manner (Web UI, business rules, and workflow changes).
Provide support for the Master Data Management (MDM) platform, including technical architecture, inbound/outbound data integration (ETL), maintenance/tuning of match rules and exceptions, data model changes, executing and monitoring incremental updates, and working with infrastructure and DBA teams to maintain multiple environments.
Contribute to the design of logical and physical Data modeling to support the Enterprise Master Data Management system.
Establish & refine monitoring and reporting capabilities for the new MDM platform.
Provide level 3 support for the MDM platform as needed.
Manage code configuration and code release management in non-production environments.
Exceptional verbal communication and technical writing skills
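The match-rule maintenance mentioned in the duties above can be pictured with a toy sketch. This is purely illustrative: the threshold, weights, and sample records are invented, and real MDM platforms (e.g. STIBO STEP) configure match rules declaratively rather than in Python.

```python
import difflib

# Toy illustration of MDM match-rule tuning: compare incoming records
# against a golden record using a weighted similarity score.
# All data and the 0.6 threshold are invented for this sketch.

golden = {"name": "Acme Corporation", "city": "Dublin"}
incoming = [
    {"name": "ACME Corp.", "city": "Dublin"},
    {"name": "Apex Industries", "city": "Fresno"},
]

def similarity(a, b):
    """Case-insensitive string similarity in [0, 1]."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_match(candidate, threshold=0.6):
    """Weighted match rule: name similarity dominates, city must agree."""
    score = 0.8 * similarity(candidate["name"], golden["name"]) \
          + 0.2 * (1.0 if candidate["city"] == golden["city"] else 0.0)
    return score >= threshold

matches = [c["name"] for c in incoming if is_match(c)]
print(matches)  # ['ACME Corp.']
```

Tuning a match rule in practice means adjusting the weights and threshold against known duplicate/non-duplicate pairs, which is the "maintenance/tuning of match rules and exceptions" the posting describes.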
Requirements and Qualifications:
8+ years of experience with Master Data Management solutions
5 - 8 years of experience working within the entire Software Development Lifecycle, including requirements gathering, design, implementation, integration testing, deployment, and post-production support
5 - 8+ years of experience in system design, implementation, and Level 3 support activities
Strong understanding of Master Data Management concepts, including Object Oriented Design, Programming, and Data Modeling
Strong experience in development configuration of Workflow, Web UI, and Business Rules Action components of MDM (STIBO STEP solution preferred)
Strong experience in identifying performance bottlenecks, providing a solution, and implementing functional performance recommendations
Experience in implementing deployment automation to support the data model / Web UI / JavaScript binds / configurations and product information between environments
Experience in developing enterprise applications using Java, J2EE, JavaScript, HTML, CSS, Spring Framework
Working experience with at least one major MDM platform such as Oracle, Informatica, or Stibo (preferred)
Working experience in data profiling, data quality designing, and configuring MDM UI, workflows, and rules for business processes, preferably in a retail domain
Experience working in an Agile Environment
Experience working in an infrastructure environment (ability to assist in logins, restarting servers, etc.)
Experience working with Oracle as the main database and with Linux operating systems
Experience with data modeling and data migration
Experience with security best practices of web applications to address vulnerabilities
Experience with application integration and middleware
Strong communication skills are required, with the ability to give and receive information, explain complex information in simple terms, and maintain a strong customer service approach to all users.
Knowledge of/prior DBA experience with SQL Server and/or Oracle is a plus. At minimum, working knowledge of/experience with UNIX
Ability to work independently, creatively solve complex technical problems, and provide guidance and training to others
Ability to provide accurate estimates of timeframes necessary to complete potential projects and develop project implementation plans
Bachelor's Degree in Computer Science or related experience
Desired Skills and Experience
Master Data Management (MDM), STIBO STEP, Oracle MDM, Informatica MDM, Java, J2EE, JavaScript, HTML, CSS, Spring Framework, Data Modeling, ETL, Data Integration, Data Migration, Data Profiling, Data Quality, Data Governance, Workflow Configuration, Web UI Development, Business Rules Configuration, System Design, Software Development Lifecycle (SDLC), Agile Development, SQL, Oracle Database, SQL Server, Linux, UNIX, Performance Tuning, Match Rules Configuration, Data Cleansing, Data Validation, API Integration, Middleware, Application Security, Requirements Gathering, Technical Architecture, Level 3 Support, Code Configuration Management, Release Management, Object Oriented Design, Data Conversion, Infrastructure Management, Technical Consulting, Cross-functional Collaboration, Issue Resolution, Risk Management, Technical Documentation
Bayside Solutions, Inc. is not able to sponsor any candidates at this time. Additionally, candidates for this position must qualify as a W2 candidate.
Bayside Solutions, Inc. may collect your personal information during the position application process. Please reference Bayside Solutions, Inc.'s CCPA Privacy Policy at *************************
Sr Node.js Developer
Data engineer job in Pleasanton, CA
About the role
The client Product Design and Development team is looking for a Software Engineer to help build and scale client's e-commerce and product lifecycle management (PLM) platforms. This role involves contributing to technical design, implementing cloud-based solutions, and working in an Agile environment.
This is a long-term contract, onsite position (5 days a week) in Pleasanton, CA.
W2 only.
Mandatory Skills: Node.js and TypeScript.
Engages actively in building out a dynamic and productive development organization and continuously improving practices and methodology
Excellent problem-solving skills; meticulous and methodical
Ability to learn and apply new technologies quickly and be self-directed
Minimum 7+ years of experience in backend application development
Strong knowledge of writing best-practice code using Node.js, TypeScript, and Docker
Experience integrating and leveraging RESTful services
Good experience designing scalable microservices architecture
Experienced with Design Patterns, Object-Oriented Programming, and Functional Programming concepts
Writing runtime and test code; supports (2nd level) and troubleshoots problems with existing applications
Experience with GitHub Actions (or any CI/CD pipeline)
Understanding of performance scripts / performance improvements for microservices
Java Software Engineer
Data engineer job in Pleasanton, CA
Backend Developer (Java) - 12-Month W2 Contract
Pay Rate: $55-$65/hour (Depending on Experience)
Contract Type: W2 | 12 Months
Russell Tobin is supporting a leading enterprise retailer in hiring a skilled Backend Developer for a long-term onsite contract in Pleasanton, CA. This role is ideal for an experienced backend engineer with a strong background in Java, cloud technologies, microservices, and DevOps.
Key Responsibilities
Design, develop, and maintain backend services using Java and Spring Boot
Build and maintain RESTful APIs following best practices
Develop and support microservices-based architectures
Work with MongoDB and MySQL for data storage and retrieval
Implement event-driven solutions using Kafka or RabbitMQ
Deploy and manage containerized applications using Docker and Kubernetes (AKS or GKE)
Collaborate using DevOps tools such as GitHub, Jenkins, Chef, Puppet, or ArgoCD
Implement monitoring and alerting using Nagios, New Relic, GCP, or Splunk
Participate in Agile and Scrum ceremonies
Ensure adherence to SDLC and security compliance standards
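The event-driven pattern in the responsibilities above can be sketched in miniature. This is only an illustration of the producer/consumer shape: a stdlib queue stands in for Kafka or RabbitMQ so the example is self-contained, and the event names are invented.

```python
import queue
import threading

# Minimal producer/consumer sketch of the event-driven pattern the role
# describes. A stdlib Queue replaces the real Kafka/RabbitMQ broker.

events = queue.Queue()
processed = []

def consumer():
    """Drain events until the None sentinel arrives."""
    while True:
        evt = events.get()
        if evt is None:  # sentinel: shut down
            break
        processed.append(f"handled:{evt['type']}")

t = threading.Thread(target=consumer)
t.start()

# producer side: publish two events, then signal shutdown
for evt_type in ("order_created", "order_paid"):
    events.put({"type": evt_type})
events.put(None)
t.join()

print(processed)  # ['handled:order_created', 'handled:order_paid']
```

With a real broker the queue becomes a topic, the sentinel becomes consumer-group shutdown, and delivery/ordering guarantees come from the broker's configuration rather than from a single in-process thread.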
Required Qualifications
7-10 years of total IT experience
6+ years of hands-on experience with Java, MongoDB, and MySQL
Strong hands-on experience with Java Spring Boot
Hands-on experience with API management and microservices development
Experience with public cloud platforms such as Azure or GCP
Hands-on experience with Kafka or RabbitMQ
Hands-on experience with Docker and Kubernetes (AKS or GKE)
Experience using DevOps tools including GitHub, Jenkins, Chef, Puppet, or ArgoCD
Experience with monitoring and alerting tools such as Nagios, New Relic, GCP, or Splunk
Strong understanding of RESTful API design principles
Strong knowledge of the Software Development Lifecycle, security compliance, Agile, and Scrum
Soft Skills
Inquisitive team player with an innovative mindset
Quick learner with strong adaptability to new technologies
Strong communication, collaboration, and problem-solving skills
Russell Tobin offers eligible employees comprehensive healthcare coverage (medical, dental, and vision plans), supplemental coverage (accident insurance, critical illness insurance, and hospital indemnity), 401(k) retirement savings, life & disability insurance, an employee assistance program, legal support, auto and home insurance, pet insurance, and employee discounts with preferred vendors.
Senior Software Engineer
Data engineer job in Morgan Hill, CA
The Sr. Engineer will be part of a product team enhancing and sustaining enterprise systems through multiple programming platforms, primarily Java. This individual develops, maintains and improves internally developed applications and integrates with third party applications. As a Sr. Engineer, you will be helping to transform the way applications are built and how data is processed.
Key Responsibilities
Support the development of a modern application platform using microservices architecture, Apache Kafka, Spring Boot, Docker, and Kubernetes.
Enhance and maintain customer-facing systems, including CRM and loyalty program platforms, leveraging the new architecture.
Implement CI/CD pipelines, monitoring, and other DevOps best practices to ensure reliability and scalability.
Mentor and support fellow engineers in adopting new technologies and engineering practices.
Drive innovation by identifying and delivering high-impact technical solutions that improve project outcomes.
Champion engineering excellence by promoting best practices such as code reviews, source control standards, and automated testing.
Qualifications
5+ years experience in Java backend development
Proficiency with the Spring Framework; experience with Spring Boot is a plus
Strong experience with both relational and NoSQL databases such as MongoDB
Skilled in building RESTful APIs using JSON and/or XML
Familiarity with messaging systems such as MQ, JMS, RabbitMQ, or ActiveMQ; experience with Apache Kafka is a strong plus.
Understanding of containerized development using Docker; experience with Kubernetes is beneficial.
Exposure to frontend technologies (HTML, CSS, JavaScript); experience with frameworks like Angular, React, or Ember is a plus but not required.
Experience with build tools (Gradle, Maven, Ant) and CI/CD tools (e.g., Jenkins).
Strong engineering mindset with a focus on clean code, automated testing, and peer reviews.
Experience with Supply Chain or ERP systems is a plus.
Proven ability to collaborate with software architects and cross-functional teams.
Demonstrated leadership in mentoring and guiding other engineers.
Excellent communication and team-building skills with a passion for knowledge sharing.
Eagerness to learn and adopt new technologies to drive continuous improvement.
Eight Eleven Group (Brooksource) provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Systems Software Engineer
Data engineer job in Pleasanton, CA
Now Hiring: Systems Software Engineer II
📍 Pleasanton, CA | 💰 $108,000 - $135,000 per year
🏢 About the Role
We're looking for an experienced Systems Software Engineer II to join Sunbelt Controls, a leading provider of Building Automation System (BAS) solutions across the Western U.S.
In this role, you'll develop and program databases, create custom graphics, and integrate control systems for smart buildings. You'll also support project startups, commissioning, and troubleshooting - working closely with project managers and engineers to deliver high-quality, energy-efficient building automation solutions.
If you have a passion for technology, problem-solving, and helping create intelligent building systems, this opportunity is for you.
⚙️ What You'll Do
Design and program BAS control system databases and graphics for assigned projects.
Lead the startup, commissioning, and troubleshooting of control systems.
Work with networked systems and diagnose LAN/WAN connectivity issues.
Perform pre-functional and functional system testing, including LEED and Title 24 requirements.
Manage project documentation, including as-builts and commissioning records.
Coordinate with project teams, subcontractors, and clients for smooth execution.
Mentor and support junior Systems Software Engineers.
🧠 What We're Looking For
2-5 years of experience in Building Automation Systems or a related field.
Associate's degree in a technical field (Bachelor's in Mechanical or Electrical Engineering preferred).
Proficiency in MS Office, Windows, and basic TCP/IP networking.
Strong organizational skills and the ability to manage multiple priorities.
Excellent communication and customer-service skills.
Valid California driver's license.
💎 Why You'll Love Working With Us
At Sunbelt Controls, we don't just build smart buildings - we build smart careers. As a 100% employee-owned company (ESOP), we offer a supportive, growth-oriented environment where innovation and teamwork thrive.
What we offer:
Competitive salary: $108K - $135K, based on experience
Employee-owned company culture with a family-oriented feel
Comprehensive health, dental, and vision coverage
Paid time off, holidays, and 401(k)/retirement plan
Professional growth, mentorship, and ongoing learning opportunities
Veteran-friendly employer & Equal Opportunity workplace
🌍 About Sunbelt Controls
Sunbelt Controls is a premier BAS solutions provider serving clients across multiple industries, including data centers, healthcare, education, biotech, and commercial real estate. We specialize in smart building technology, system retrofits, analytics, and energy efficiency - helping clients reduce operational costs and achieve sustainable performance.
👉 Apply today to join a team that's shaping the future of intelligent buildings.
#Sunbelt #BuildingAutomation #SystemsEngineer #HVACControls #BASCareers
Principal Data Scientist : Product to Market (P2M) Optimization
Data engineer job in Pleasanton, CA
About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials.
This simple idea-that we all deserve to belong, and on our own terms-is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team.
About the Role
Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization.
The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels.
What You'll Do
* Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges.
* Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals.
* Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment.
* Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility.
* Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly.
* Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights.
* Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success.
* Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP.
* Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality.
Who You Are
* Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline.
* 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains.
* Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes.
* Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation.
* Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib.
* Hands-on experience with enterprise platforms such as Databricks and cloud environments.
* Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions.
* Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences.
* Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations.
* Experience managing code quality, CI/CD pipelines, and GitHub-based workflows.
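The pricing/inventory optimization work described above can be pictured with a toy example. Everything here is invented for illustration: the linear demand curve, the inventory level, and the price grid. A production P2M solution would use mixed-integer programming via libraries like Pyomo, Gurobi, or OR-Tools rather than a brute-force search.

```python
# Toy price-optimization sketch in the spirit of the P2M work described:
# pick the price that maximizes revenue subject to an inventory cap.
# The demand model and all numbers are hypothetical.

INVENTORY = 400  # units on hand

def demand(price):
    """Hypothetical linear demand curve: higher price, fewer units sold."""
    return max(0, 1000 - 12 * price)

def revenue(price):
    # cannot sell more than is in stock
    return price * min(demand(price), INVENTORY)

candidate_prices = range(20, 81)  # search a discrete price grid
best_price = max(candidate_prices, key=revenue)
print(best_price, revenue(best_price))  # -> 50 20000
```

Here the unconstrained optimum would sell more than 400 units, so the inventory cap binds and the best price is the one where demand exactly meets supply, the kind of interaction between pricing and inventory that makes joint optimization worthwhile.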
Preferred Qualifications
* Experience shaping and executing multi-year analytics strategies in retail or supply chain domains.
* Proven ability to balance long-term innovation with short-term deliverables.
* Background in agile product development and stakeholder alignment for enterprise-scale initiatives.
Benefits at Gap Inc.
* Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees.
* One of the most competitive Paid Time Off plans in the industry.*
* Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.*
* Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.*
* Employee stock purchase plan.*
* Medical, dental, vision and life insurance.*
* See more of the benefits we offer.
* For eligible employees
Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
Salary Range: $201,700 - $267,300 USD
Employee pay will vary based on factors such as qualifications, experience, skill level, competencies and work location. We will meet minimum wage or minimum of the pay range (whichever is higher) based on city, county and state requirements.
Senior Data Engineer
Data engineer job in Pleasanton, CA
Clorox is the place that's committed to growth - for our people and our brands. Guided by our purpose and values, and with people at the center of everything we do, we believe every one of us can make a positive impact on consumers, communities, and teammates. Join our team. #CloroxIsThePlace
Your role at Clorox:
At Clorox, we're revolutionizing the consumer product goods industry with cutting-edge technology and innovative solutions with data at the center of it all. We believe in harnessing the power of data to drive meaningful change and deliver exceptional results. Join our dynamic enterprise data team and be at the forefront of the data engineering revolution to build data as a product and use data as a competitive advantage.
Are you passionate about building the next generation of data solutions? We are looking for an experienced and highly skilled Senior Data Engineer who thrives on solving complex problems and loves working with the latest cloud technologies. If you're eager to shape the future of data in a fast-paced, tech-forward environment, we want to hear from you!
As a Senior Data Engineer at Clorox, you will lead exciting projects, design, build and maintain scalable data solution architectures and make a real impact on our data strategy. In this role, you will leverage our cutting-edge technology, your strong background in business intelligence and engineering in cloud platforms to build futuristic data products that create value across the organization. You will also serve as a key collaborator with our business analytics and enterprise technology stakeholders to innovate, build and sustain the cloud data infrastructure that will further Clorox's data strategy.
In this role, you will:
Architect & Innovate
* Use Azure Data Factory, Databricks, Python, Spark, and other tools to build cutting-edge data pipelines, data models, semantic modeling layers, and dynamic data solutions.
* Develop high-quality code to process and move the vast amount of data and integrate systems across diverse platforms.
* Develop, sustain, implement data improvements and optimization techniques to enhance the data products.
* Continuously evaluate and adopt new technologies and tools to enhance data engineering capabilities and efficiency.
Optimize and Scale
* Build and maintain data pipelines to integrate data from various source systems.
* Optimize data pipelines for performance, reliability and cost-effectiveness
* Work with enterprise infrastructure and technology teams to implement best practices for performance monitoring, cloud resource management, including scaling, cost control and security.
Ensure Quality and Governance
* Ensure safe custody, transport and storage of data in the data platforms.
* Collaborate with Data Governance Stewards and Business Stakeholders to enforce the business rules, data quality rules and data cataloging activities.
* Ensure data quality, security and compliance for the data products responsible under this role.
Enhance BI Capabilities
* Developing and managing business intelligence solutions for the organization to transform data into insights that can drive business value.
* Provide technical guidance to Analytics Product Owners and Business Leaders to improve business decisions through data analytics, data visualization, and data modeling techniques and technologies.
Collaborate and Lead
* Work closely with analytics product owners, data scientists, analysts and business users to understand data needs and provide technical solutions.
* Provide technical guidance to junior engineers and BI teams to use data efficiently and effectively.
What we look for:
* 7+ years of experience with data engineering, data warehousing, and business intelligence, with substantial experience managing large-scale data projects
* 5+ years of experience with data solution implementations on cloud platforms such as Microsoft Azure and AWS (Azure preferred)
* 4+ years with business intelligence technologies such as Power BI and Tableau (Power BI preferred)
* 4+ years of experience with Azure services like Data Factory, Databricks, and Delta Lake is an added advantage
* Experience in end-to-end support for data engineering solutions (Data Pipelines), including designing, developing, deploying, and supporting solutions for existing platforms
* Strong knowledge of cloud services like Microsoft Azure and associated data services such as Data Factory, Databricks, Delta Lake, Synapse, SQL DB, etc.
* Ability to design and implement scalable and performant data pipelines and architecture. Deep understanding of data modeling, data access, and data storage techniques.
* Strong proficiency in Python (required), Spark, Pandas, and CI/CD methodologies.
* Strong BI skills to build reports & dashboards using Power BI and Tableau etc.
* Strong in design, building complex reports, dashboards, performance tuning etc.
* Strong in reporting security like row level, column level, object level and masking etc.
* Strong with SQL and DML to recast data in backend database for data changes, restatements and data processing errors, etc.
* Knowledge or experience in D365 Dataverse and reporting will be an added advantage.
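The data-quality and governance responsibilities above can be sketched as a small rule check. This is a hedged illustration only: the rule names, fields, and sample records are invented, and at enterprise scale these checks would run inside the pipeline platform (e.g. Databricks) rather than as standalone Python.

```python
# Minimal sketch of data-quality rule enforcement: each rule is a
# predicate, and validate() reports which rules a record violates.
# Rules and sample records are hypothetical.

rules = {
    "sku_present": lambda r: bool(r.get("sku")),
    "qty_non_negative": lambda r: r.get("qty", 0) >= 0,
    "unit_known": lambda r: r.get("unit") in {"case", "each", "pallet"},
}

def validate(record):
    """Return the list of rule names the record violates."""
    return [name for name, check in rules.items() if not check(record)]

records = [
    {"sku": "CLX-001", "qty": 10, "unit": "case"},
    {"sku": "", "qty": -2, "unit": "box"},  # fails every rule
]
failures = {r["sku"] or "<missing>": validate(r) for r in records}
print(failures)
```

Keeping rules as data (a dict of named predicates) is what lets governance stewards add or retire checks without touching pipeline code, which mirrors the collaboration with Data Governance Stewards described above.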
#LI-HYBRID
Workplace type:
Hybrid- In office Tuesday- Thursday; WFH Monday and Friday
Our values-based culture connects to our purpose and empowers people to be their best, professionally and personally. We serve a diverse consumer base which is why we believe teams that reflect our consumers bring fresh perspectives, drive innovation, and help us stay attuned to the world around us. That's why we foster an inclusive culture where every person can feel respected, valued, and fully able to participate, and ultimately able to thrive. Learn more.
[U.S.]Additional Information:
At Clorox, we champion people to be well and thrive, starting with our own people. To help make this possible, we offer comprehensive, competitive benefits that prioritize all aspects of wellbeing and provide flexibility for our teammates' unique needs. This includes robust health plans, a market-leading 401(k) program with a company match, flexible time off benefits (including half-day summer Fridays depending on location), inclusive fertility/adoption benefits, and more.
We are committed to fair and equitable pay and are transparent with current and future teammates about our full salary ranges. We use broad salary ranges that reflect the competitive market for similar jobs, provide sufficient opportunity for growth as you gain experience and expand responsibilities, while also allowing for differentiation based on performance. Based on the breadth of our ranges, most new hires will start at Clorox in the first half of the applicable range. Your starting pay will depend on job-related factors, including relevant skills, knowledge, experience and location. The applicable salary range for every role in the U.S. is based on your work location and is aligned to one of three zones according to the cost of labor in your area.
-Zone A: $128,000 - $252,200
-Zone B: $117,400 - $231,200
-Zone C: $106,700 - $210,200
All ranges are subject to change in the future. Your recruiter can share more about the specific salary range for your location during the hiring process.
This job is also eligible for participation in Clorox's incentive plans, subject to the terms of the applicable plan documents and policies.
Please apply directly to our job postings and do not submit your resume to any person via text message. Clorox does not conduct text-based interviews and encourages you to be cautious of anyone posing as a Clorox recruiter via unsolicited texts during these uncertain times.
To all recruitment agencies: Clorox (and its brand families) does not accept agency resumes. Please do not forward resumes to Clorox employees, including any members of our leadership team. Clorox is not responsible for any fees related to unsolicited resumes.
Data Scientist
Data engineer job in Pleasanton, CA
Sajix Inc. is a global health-tech company headquartered in Pleasanton, California, focused on transforming healthcare delivery through advanced digital solutions. Since its founding in 2006, Sajix has specialized in developing integrated healthcare information systems that streamline clinical, financial, and administrative operations for healthcare organizations around the world.
With operations in the United States, United Kingdom, Singapore, and India, Sajix serves clients across North America, Europe, Asia, and Africa. Its flagship platform, iHelix, is a modular, scalable solution designed for use in diverse healthcare settings, from single-doctor practices to large hospital networks.
Sajix offers a comprehensive suite of digital healthcare solutions, including:
iHelix
Lifeeazy
AI-based Revenue Cycle Management - optimizing financial workflows through automation
All products are built with global standards in mind, supporting multiple languages and currencies, and comply with international healthcare data standards such as CCR (Continuity of Care Record) and CCD (Continuity of Care Document).
Through strategic partnerships and a commitment to innovation, Sajix continues to lead the industry in delivering intelligent, interoperable, and patient-centric healthcare solutions.
Website:
*************
Job Description
About the Role:
As a Trainee Data Scientist at Sajix, you will assist in building predictive models and running experiments to derive insights from complex healthcare data.
Key Responsibilities:
Assist in cleaning and preprocessing structured and unstructured data.
Support the development of statistical and machine learning models.
Participate in exploratory data analysis and feature engineering.
Document findings and present insights to non-technical stakeholders.
Qualifications
Requirements:
Bachelor's or Master's in Computer Science, Statistics, or Data Science.
Basic knowledge of Python/R and libraries like Pandas, Scikit-learn, or NumPy.
Understanding of statistics, probability, and machine learning fundamentals.
Additional Information
These trainee roles are offered in collaboration with Sajix's __init__py program, a structured, hands-on learning initiative aimed at nurturing the next generation of full-stack developers and technology professionals. The program offers tiered training tracks.
Participants gain real-world experience through live projects, personalized mentorship, and exposure to industry-relevant tools and practices. This collaboration ensures that trainees are well-equipped with practical skills and knowledge to excel in their roles.
For more information about the __init__py program, visit initpy.sajix.com.
Data Scientist IV
Data engineer job in Pleasanton, CA
This individual contributor is primarily responsible for designing and developing data pipelines and automation for data acquisition and ingestion of raw data from multiple data sources and data formats by transforming, cleansing, and storing data for consumption. This role is also responsible for developing detailed problem statements outlining hypotheses and their effect on target clients/customers; analyzing and investigating complex data sets and summarizing key characteristics; selecting, manipulating, and transforming data into features used in machine learning algorithms; training statistical models; deploying and maintaining reliable and efficient models through production; verifying model performance; and collaborating with internal and external stakeholders across domains to develop and deliver statistically driven outcomes.
Essential Responsibilities:
Promotes learning in others by proactively providing and/or developing information, resources, advice, and expertise with coworkers and members; builds relationships with cross-functional/external stakeholders and customers. Listens to, seeks, and addresses performance feedback; proactively provides actionable feedback to others and to managers. Pursues self-development; creates and executes plans to capitalize on strengths and develop weaknesses; leads by influencing others through technical explanations and examples and provides options and recommendations. Adopts new responsibilities; adapts to and learns from change, challenges, and feedback; demonstrates flexibility in approaches to work; champions change and helps others adapt to new tasks and processes. Facilitates team collaboration to support a business outcome.
Completes work assignments autonomously and supports business-specific projects by applying expertise in subject area and business knowledge to generate creative solutions; encourages team members to adapt to and follow all procedures and policies. Collaborates cross-functionally and/or externally to achieve effective business decisions; provides recommendations and solves complex problems; escalates high-priority issues or risks, as appropriate; monitors progress and results. Supports the development of work plans to meet business priorities and deadlines; identifies resources to accomplish priorities and deadlines. Identifies, speaks up, and capitalizes on improvement opportunities across teams; uses influence to guide others and engages stakeholders to achieve appropriate solutions.
Develops detailed problem statements outlining hypotheses and their effect on target clients/customers by defining scope, objectives, outcome statements and metrics.
Designs and develops data pipelines and automation for data acquisition and ingestion of raw data from multiple data sources and data formats by transforming, cleansing, and storing data for consumption by downstream processes; writing and optimizing diverse SQL queries; and demonstrating advanced knowledge of database fundamentals.
Analyzes and investigates complex data sets and summarizes key characteristics by employing data visualization methods; and determining how best to manipulate data sources to discover patterns, spot anomalies, test hypotheses, and/or check assumptions.
Selects, manipulates, and transforms data into features used in machine learning algorithms by leveraging techniques to conduct dimensionality reduction, feature importance, and feature selection.
Trains statistical models by using algorithms and data mining techniques; testing models with various algorithms to assess the input dataset and related features; and applying techniques to prevent overfitting such as cross-validation.
Deploys and maintains reliable and efficient models through production.
Verifies model performance by demonstrating expertise in the practice of a variety of model validation techniques to assess and discriminate the goodness of model fit; and leveraging feedback and output to manage and strengthen model performance.
Collaborates with internal and external stakeholders across domains to develop and deliver statistically driven outcomes by delivering insights and value from heterogeneous data to investigate complex problems for multiple use cases; driving informed decision-making; and presenting findings to both technical and non-technical audiences.
Qualifications Minimum Qualifications:
Minimum three (3) years experience working with Exploratory Data Analysis (EDA) and visualization methods.
Minimum three (3) years machine learning and/or algorithmic experience.
Minimum three (3) years statistical analysis and modeling experience.
Minimum three (3) years programming experience.
Minimum one (1) year experience in a leadership role with or without direct reports.
Bachelor's degree in Mathematics, Statistics, Computer Science, Engineering, Economics, Public Health, or related field AND Minimum five (5) years experience in data science or a directly related field. Additional equivalent work experience in a directly related field may be substituted for the degree requirement. Advanced degrees may be substituted for the work experience requirements.
Additional Requirements:
Knowledge, Skills, and Abilities (KSAs): Advanced Quantitative Data Modeling; Algorithms; Applied Data Analysis; Data Extraction; Data Visualization Tools; Machine Learning; Relational Database Management; Microsoft Excel; Design Thinking; Business Intelligence Tools; Data Manipulation/Wrangling; Data Ensemble Techniques; Feature Analysis/Engineering; Open Source Languages & Tools; Model Optimization; Strategic Thinking; Deep Learning/Neural Networks; Project Management
Primary Location: California-Pleasanton-Pleasanton Tech Cntr Building A
Regular Scheduled Hours: 40
Shift: Day
Working Days: Mon, Tue, Wed, Thu, Fri
Start Time: 09:00 AM End Time: 05:00 PM
Job Schedule: Full-time
Job Type: Standard
Employee Status: Regular
Job Level: Individual Contributor
Job Category: Data Science
Public Department Name: Po/Ho Corp - KP Insight HQAA - 0308
Travel: No
Employee Group: NUE-PO-01|NUE|Non Union Employee
Posting Salary Low: 162900
Posting Salary High: 210760
Kaiser Permanente is an equal opportunity employer committed to a diverse and inclusive workforce. Applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), age, sexual orientation, national origin, marital status, parental status, ancestry, disability, gender identity, veteran status, genetic information, other distinguishing characteristics of diversity and inclusion, or any other protected status. Click here for Important Additional Job Requirements.
Data Model Engineer - 3
Data engineer job in Pleasanton, CA
**This position is onsite in Pleasanton, CA - Remote is not an option.**
Oracle Analytics is used by customers across the world to discover deep insights about their business, improve collaboration around a single view by securely including all relevant data, and increase agility by quickly spotting patterns and powering data-driven decisions with AI and machine learning.
Oracle Fusion Data Intelligence platform (FDI) is the next generation of Oracle Fusion Analytics Warehouse built for Oracle Fusion Cloud Applications, bringing together business data, ready-to-use analytics, and prebuilt AI and machine learning (ML) models to deliver deeper insights and accelerate the decision-making process into actionable results.
The backbone of FDI is the lights-out data pipeline that manages the data warehouse for all the customers. For details about the product, visit
***************************************************************
The FDI Pipeline Data Model team defines the application development language and uses it to build applications, delivering analytic data models for Fusion, NetSuite, Salesforce, and other sources.
As a member of the team, you'll build and support scalable data models for analytics. You'll have opportunities to learn and gain hands-on experience with business processes, data processing, semantics, the modern data stack, and AI-driven development using large language models and intelligent agents. You'll also collaborate on developing language processors, user interfaces, and automation solutions that showcase your creative thinking-all within an agile, innovative environment that values your growth.
**Responsibilities**
You will:
- Learn from senior engineers how to translate business processes/analytics requirements into effective data model designs.
- Build, test, and maintain scalable data models and pipelines.
- Work with QA to validate data accuracy and resolve issues as they arise.
- Troubleshoot and fix routine data or performance problems with support from experienced teammates.
- Create and maintain technical documentation for team reference, knowledge sharing, and even for customers.
- Contribute to automation, language processing, and user interface projects within Pipeline team.
- Stay curious and keep learning about new tools, technologies, and approaches in data engineering and AI.
Qualifications:
You have:
+ BS or MS degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
+ 3+ years of experience developing and delivering data products, data pipelines, or software/data engineering solutions.
+ Strong programming skills in Python/PySpark/Scala/Java and SQL.
+ Solid experience with cloud databases, data modeling concepts, ETL processes, and at least one public cloud platform (Oracle Cloud, AWS, Azure, or Google Cloud), especially for data services.
+ Good understanding of data management and software design fundamentals.
+ Proven problem-solving and analytical skills with attention to detail.
+ Ability to effectively communicate technical concepts verbally and through design and documentation.
+ Interest or exposure to data science, machine learning, or AI-driven development tools is a plus.
+ Willingness and enthusiasm to learn about business processes, data processing, semantics, and the modern data stack.
+ Positive attitude, curiosity, and eagerness to contribute new ideas while thriving in an inclusive, innovative, and collaborative team culture.
**This position is onsite in Pleasanton, CA - Remote is not an option.**
Disclaimer:
**Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.**
**Range and benefit information provided in this posting are specific to the stated locations only**
US: Hiring Range in USD from: $79,200 to $178,100 per annum. May be eligible for bonus and equity.
Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle's differing products, industries and lines of business.
Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.
Oracle US offers a comprehensive benefits package which includes the following:
1. Medical, dental, and vision insurance, including expert medical opinion
2. Short term disability and long term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match
8. Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime eligible) position. Accrued Vacation is provided to all other employees eligible for vacation benefits. For employees working at least 35 hours per week, the vacation accrual rate is 13 days annually for the first three years of employment and 18 days annually for subsequent years of employment. Vacation accrual is prorated for employees working between 20 and 34 hours per week. Employees working fewer than 20 hours per week are not eligible for vacation.
9. 11 paid holidays
10. Paid sick leave: 72 hours of paid sick leave upon date of hire. Refreshes each calendar year. Unused balance will carry over each year up to a maximum cap of 112 hours.
11. Paid parental leave
12. Adoption assistance
13. Employee Stock Purchase Plan
14. Financial planning and group legal
15. Voluntary benefits including auto, homeowner and pet insurance
The role will generally accept applications for at least three calendar days from the posting date or as long as the job remains posted.
Career Level - IC3
**About Us**
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry-leaders in almost every sector-and continue to thrive after 40+ years of change by operating with integrity.
We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_************* or by calling *************** in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Data Engineer
Data engineer job in Dublin, CA
Kforce has a client in Dublin, CA that is looking for a Data Engineer to provide L3 escalation support for existing production systems and assist with related enhancement activities. The Data Engineer will ensure proper testing and adherence to business change management practices and procedures. Day to day includes the installation, patching, configuration, and maintenance of StreamSets on-prem runtime instances.
* StreamSets SME with knowledge of data landing needs into Snowflake for data analytics and reporting
* Strong technical expertise in Cloud applications, Data ingestion, and Data Lake architecture
* In-depth knowledge of StreamSets cloud and on-prem architecture, including environment configuration and deployment models
* Hands-on experience and strong technical knowledge with platforms like Linux/Windows OS, authentication systems, networking, clustering, load balancers, Java, SSL, certificates, etc.
Data Engineer - Manager
Data engineer job in Pleasanton, CA
We are looking for a strategic and hands-on Manager of Data Engineering to lead a team focused on building scalable data infrastructure for Retail Media Network (RMN) initiatives. This role requires deep technical expertise in GCP, strong leadership skills, and a passion for driving data-driven media monetization strategies. Your Impact:
Lead and mentor a team of data engineers in designing and implementing scalable data solutions.
Architect and oversee the development of data pipelines using BigQuery, Dataflow, Cloud Composer, and GCS.
Drive the adoption of best practices in ETL, data warehousing, and event streaming.
Partner with product, analytics, and media teams to deliver high-impact data solutions for RMN monetization.
Ensure data governance, security, and compliance across all engineering efforts.
Stay current with emerging technologies and trends in cloud data engineering and media monetization.
Your Skills & Experience:
7+ years of experience in data engineering, with at least 2 years in a leadership role.
Expert-level proficiency in BigQuery, GCS, Dataflow, and Cloud Composer.
Strong background in ETL architecture, data warehousing, and event streaming.
Advanced Python skills for data engineering and automation.
Proven experience working with Retail Media Network (RMN) data and media monetization strategies.
Excellent leadership, communication, and stakeholder management skills.
Lead Data Engineer/ETL
Data engineer job in Pleasanton, CA
Core Competencies - Must Have:
5-10 years Informatica (ETL) experience
BDM
TDM
Data Quality
Big Data
Data Warehouse
Star Schema
Very strong verbal and written communication skills
Bachelor's degree in a technical field such as computer science, computer engineering, or a related field required
Nice to Haves:
Unix hands on knowledge
Additional Information
All your information will be kept confidential according to EEO guidelines.
Sr. Data Engineer
Data engineer job in Hughson, CA
Job Description
Apply now: Sr. Data Engineer, Data Solutions
Location: Onsite, Houston, TX (4 days/week, Mon-Thurs)
Start Date: 2 weeks from offer for this 6-month contract position
Job Title: Sr. Data Engineer, Data Solutions
Start Date Is: 2 weeks from offer
Duration: 6 month contract (potential to extend)
Compensation Range: $50 - $70/hr W2
Role Overview:
Client is seeking a Sr. Data Engineer to build and optimize data pipelines, ETL processes, and cloud-based data architectures. This role focuses on Python development, cloud data integration, and supporting advanced analytics and BI initiatives. You'll collaborate with cross-functional teams to enhance data flows and drive data product innovation.
Day-to-Day Responsibilities:
Develop Python modules using Numpy, Pandas, and dynamic programming techniques
Design, build, and optimize ELT/ETL pipelines using AWS and Snowflake
Manage data orchestration, transformation, and job automation
Troubleshoot complex SQL queries and optimize data queries
Collaborate with Data Architects, Analysts, BI Engineers, and Product Owners to define data transformation needs
Utilize cloud integration tools (Matillion, Informatica Cloud, AWS Glue, etc.)
Develop data validation processes to ensure data quality
Support continuous improvement and issue resolution of data processes
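The data-validation responsibility above can be sketched with a minimal, stdlib-only example. The column names and rules here are hypothetical stand-ins, not taken from the posting; a real pipeline would drive them from the source's data contract:

```python
import csv
import io

# Hypothetical required columns for an incoming feed (illustrative only).
REQUIRED = {"order_id", "amount", "region"}

def validate_rows(text):
    """Return (good_rows, errors): rows failing a rule are reported, not loaded."""
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    good, errors = [], []
    for i, row in enumerate(reader, start=2):  # line 1 is the header
        try:
            amount = float(row["amount"])  # type check
            if amount < 0:
                raise ValueError("negative amount")  # range check
            good.append({**row, "amount": amount})
        except ValueError as exc:
            errors.append((i, str(exc)))
    return good, errors

feed = "order_id,amount,region\n1,10.5,west\n2,-3,east\n3,oops,west\n"
good, errors = validate_rows(feed)
print(len(good), len(errors))  # 1 valid row, 2 rejected
```

Quarantining bad rows with their line numbers, rather than failing the whole load, is what lets a pipeline surface data-quality issues without blocking downstream consumers.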
Must-Haves:
Bachelor's degree in Computer Science, Engineering, or related field
8 years of experience in data engineering, ETL, data warehouse, and data lake development
Strong SQL expertise and experience with relational databases
Experience with Snowflake, Amazon Redshift, or Google BigQuery
Proficiency in AWS services: EC2, S3, Lambda, SQS, SNS, etc.
Experience with cloud integration tools (Matillion, Dell Boomi, Informatica Cloud, Talend, AWS Glue)
GitHub version control knowledge and its integration with ETL pipelines
Familiarity with Spark, Hadoop, NoSQL, APIs, and streaming data platforms
Python (preferred), Java, or Scala scripting experience
Agile/Scrum development experience
Soft Skills:
Excellent communication and collaboration skills
Strong problem-solving abilities and attention to detail
Ability to multitask and meet tight deadlines
Highly motivated and self-directed
Comfortable working with technical and business stakeholders
Nice-to-Haves:
Experience with data quality and metadata management tools
Exposure to BI platforms and data visualization tools
Experience in building event-driven data architectures
Data Engineer Consultant
Data engineer job in Pleasanton, CA
Job Title: Data Engineer Consultant We are seeking a highly skilled Data Engineer Consultant to design, build, and optimize scalable data pipelines that power advanced analytics and business insights. This role is ideal for someone passionate about transforming raw data into actionable intelligence through modern data engineering practices. You will collaborate with cross-functional teams, leverage cutting-edge tools, and ensure data integrity, performance, and usability across the organization.
Key Responsibilities:
Data Pipeline Development:
Architect and maintain robust, automated pipelines for ingesting, transforming, and delivering data from diverse sources using modern ELT frameworks.
DBT Expertise:
Develop modular, reusable, and well-documented transformations in DBT, ensuring adherence to best practices for maintainability and scalability.
Version Control & CI/CD:
Manage code repositories in GitHub, implement branching strategies, and contribute to automated testing and deployment workflows.
Advanced SQL Engineering:
Write optimized, complex SQL queries leveraging Common Table Expressions (CTEs) and window functions to improve clarity and performance.
Data Modeling:
Design and implement dimensional models, star schemas, and dynamic tables to support analytics and reporting needs.
Performance Optimization:
Continuously monitor and tune data pipelines for efficiency, scalability, and cost-effectiveness in cloud environments (e.g., Snowflake).
Testing & Quality Assurance:
Build robust testing frameworks for data validation, schema checks, and transformation accuracy to ensure reliability.
Documentation & Standards:
Maintain comprehensive documentation of data flows, models, and processes to promote transparency and knowledge sharing.
Collaboration:
Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver high-quality solutions.
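The "Advanced SQL Engineering" responsibility above can be illustrated with a small, self-contained sketch using SQLite (the table and column names are hypothetical, not from the posting): a CTE isolates the aggregation step, and a window function ranks results within each partition in a single pass.

```python
import sqlite3

# Hypothetical sales table for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
INSERT INTO sales VALUES
  ('west', '2024-01', 100), ('west', '2024-02', 150),
  ('east', '2024-01', 200), ('east', '2024-02', 120);
""")

# The CTE names the aggregation step; RANK() then orders months
# within each region without a second query or self-join.
query = """
WITH monthly AS (
    SELECT region, month, SUM(amount) AS total
    FROM sales
    GROUP BY region, month
)
SELECT region, month, total,
       RANK() OVER (PARTITION BY region ORDER BY total DESC) AS rnk
FROM monthly
ORDER BY region, rnk;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

Structuring the query this way keeps each transformation step named and testable, which is the same readability goal DBT's modular models pursue at the project level. (Window functions require SQLite 3.25+.)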
Preferred Qualifications:
Technical Skills:
Proficiency in SQL, DBT, and GitHub workflows.
Experience with Snowflake or similar cloud data warehouses.
Familiarity with Fivetran or other ingestion tools.
Strong understanding of data modeling principles and ETL/ELT best practices.
Soft Skills:
Excellent problem-solving and analytical abilities.
Strong communication skills for cross-functional collaboration.
Ability to work independently and manage multiple priorities.
Why Join Us?
Work on high-impact projects that shape global operations and decision-making.
Collaborate with a world-class data team using modern tools and methodologies.
Opportunity to innovate and influence data engineering standards and practices.
Data Scientist
Data engineer job in Livermore, CA
Join us and make YOUR mark on the World! Are you interested in joining some of the brightest talent in the world to strengthen the United States' security? Come join Lawrence Livermore National Laboratory (LLNL) where our employees apply their expertise to create solutions for BIG ideas that make our world a better place.
We are dedicated to fostering a culture that values individuals, talents, partnerships, ideas, experiences, and different perspectives, recognizing their importance to the continued success of the Laboratory's mission.
Pay Range:
$117,180 - $178,392 Annually
$117,180 - $148,608 Annually for the SES.1 level
$140,700 - $178,392 Annually for the SES.2 level
This is the lowest to highest salary we in good faith believe we would pay for this role at the time of this posting; pay will not be below any applicable local minimum wage. An employee's position within the salary range will be based on several factors including, but not limited to, specific competencies, relevant education, qualifications, certifications, experience, skills, seniority, geographic location, performance, and business or organizational needs.
Job Description
We have multiple openings for a Data Scientist to provide solutions for various projects. You will work in a dynamic, multidisciplinary team of independent/entrepreneurial computer scientists, engineers, and scientific staff who research, develop, and integrate state-of-the-art algorithms, software, hardware, and computer systems solutions to challenging research and development problems. These positions are in the Global Security Computing Applications Division (GS-CAD) within the Computing Directorate.
These positions will be filled at either level based on knowledge and related experience as assessed by the hiring team. Additional job responsibilities (outlined below) will be assigned if hired at the higher level.
You will
* Collaborate with scientists and researchers in one or more of the following areas: data intensive applications, natural language processing, graph analysis, machine learning, statistical learning, information visualization, low-level data management, data integration, data streaming, scientific data mining, data fusion, massive-scale knowledge fusion using semantic graphs, database technology, programming models for scalable parallel computing, application performance modeling and analysis, scalable tool development, novel architectures (e.g., FPGAs, GPUs and embedded systems), and HPC architecture simulation and evaluation.
* Partner with LLNL scientists and application developers to bring research results to practical use in LLNL programs.
* Assess the requirements for data sciences research from LLNL programs and external government sponsors.
* Contribute to the development of data analysis algorithms to address program and sponsor data sciences requirements.
* Engage with developers frequently to share relevant knowledge, opinions, and recommendations, working to fulfill deliverables as a team.
* Contribute to technical solutions, participate as a member of a multidisciplinary team to analyze sponsor requirements and designs, and implement software and perform analyses to address these requirements.
* Participate in the development and integration of components-such as web-based user interfaces, access control mechanisms, and commercial indexing products-for creating an operational information and knowledge discovery system.
* Perform other duties as assigned.
Additional job responsibilities, at the SES.2 level
* Contribute to multiple parallel tasks and priorities of customers and partners, ensuring deadlines are met.
* Solve abstract problems, converting them into useable algorithms and software modules.
* Provide solutions that require analysis of multiple factors and the creative use of established methods.
Qualifications
* Ability to secure and maintain a U.S. DOE Q-level security clearance which requires U.S. citizenship.
* Bachelor's degree in data science, computer science, mathematics, statistics, or related technical field, or the equivalent combination of education and related experience.
* Fundamental knowledge of one or more of the following: scientific data analysis, statistical analysis, knowledge discovery, supervised learning, unsupervised learning, deep learning, reinforcement learning, natural language processing, and big data technologies.
* Knowledge of adversarial AI methods, including evasion attacks, privacy attacks, and data poisoning attacks.
* Skilled in all aspects of the data science life cycle: feasibility/background research, data exploration, feature engineering, modeling, visualization, and deployment.
* Fundamental experience developing data science algorithms with C++, Python, or R in Linux, UNIX, Windows environments, sufficient to integrate solutions into larger applications.
* Experience developing extensible and maintainable software leveraging software design principles.
* Experience with scikit-learn, PyTorch, TensorFlow, or similar machine learning (AI/ML) development API for the purpose of developing data science solutions.
* Ability to effectively handle concurrent technical tasks with conflicting priorities, to approach difficult problems with enthusiasm and creativity and to change focus when necessary, and to work independently and implement research concepts in a multi-disciplinary team environment, where commitments and deadlines are important to project success.
* Sufficient interpersonal skills necessary to interact with all levels of personnel.
* Sufficient verbal and written communication skills necessary to effectively collaborate in a team environment and present and explain technical information.
Additional qualifications at the SES.2 level
* Comprehensive analytical, problem-solving, and decision-making skills to develop creative solutions to complex problems.
* Broad experience with one or more of the following technical languages, concepts, or constructs: Python, scientific data analysis, statistical analysis, knowledge discovery, supervised learning, unsupervised learning, deep learning, reinforcement learning, natural language processing, and big data technologies.
* Proficient experience with at least one of the following advanced ML concepts: Transfer Learning, distributed ML (data/model), ML operations, generative models, Bayesian optimization, computer vision modeling, transformers, graph neural networks, uncertainty quantification, surrogate modeling, or techniques for data-poor ML (low-shot, coresets, etc).
Additional Information
#LI-Hybrid
Position Information
This is a Flexible Term appointment, which is for a definite period not to exceed six years. If final candidate is a Career Indefinite employee, Career Indefinite status may be maintained (should funding allow).
Why Lawrence Livermore National Laboratory?
* Included in 2025 Best Places to Work by Glassdoor!
* Flexible Benefits Package
* 401(k)
* Relocation Assistance
* Education Reimbursement Program
* Flexible schedules (*depending on project needs)
* Our values - visit *****************************************
Security Clearance
This position requires a Department of Energy (DOE) Q-level clearance. If you are selected, we will initiate a Federal background investigation to determine if you meet eligibility requirements for access to classified information or matter. Also, all L or Q cleared employees are subject to random drug testing. Q-level clearance requires U.S. citizenship.
Pre-Employment Drug Test
External applicant(s) selected for this position must pass a post-offer, pre-employment drug test. This includes testing for use of marijuana as Federal Law applies to us as a Federal Contractor.
Wireless and Medical Devices
Per the Department of Energy (DOE), Lawrence Livermore National Laboratory must meet certain restrictions on the use and/or possession of mobile devices in Limited Areas. Depending on your job duties, you may be required to work in a Limited Area where you are not permitted to have a personal and/or laboratory mobile device in your possession. This includes, but is not limited to, cell phones, tablets, fitness devices, wireless headphones, and other Bluetooth/wireless-enabled devices.
If you use a medical device, which pairs with a mobile device, you must still follow the rules concerning the mobile device in individual sections within Limited Areas. Sensitive Compartmented Information Facilities require separate approval. Hearing aids without wireless capabilities or wireless that has been disabled are allowed in Limited Areas, Secure Space and Transit/Buffer Space within buildings.
How to identify fake job advertisements
Please be aware of recruitment scams where people or entities are misusing the name of Lawrence Livermore National Laboratory (LLNL) to post fake job advertisements. LLNL never extends an offer without a personal interview and will never charge a fee for joining our company. All current job openings are displayed on the Career Page under "Find Your Job" of our website. If you have encountered a job posting or have been approached with a job offer that you suspect may be fraudulent, we strongly recommend you do not respond.
To learn more about recruitment scams: *****************************************************************************************
Equal Employment Opportunity
We are an equal opportunity employer that is committed to providing all with a work environment free of discrimination and harassment. All qualified applicants will receive consideration for employment without regard to race, color, religion, marital status, national origin, ancestry, sex, sexual orientation, gender identity, disability, medical condition, pregnancy, protected veteran status, age, citizenship, or any other characteristic protected by applicable laws.
Reasonable Accommodation
Our goal is to create an accessible and inclusive experience for all candidates applying and interviewing at the Laboratory. If you need a reasonable accommodation during the application or the recruiting process, please use our online form to submit a request.
California Privacy Notice
The California Consumer Privacy Act (CCPA) grants privacy rights to all California residents. The law also entitles job applicants, employees, and non-employee workers to be notified of what personal information LLNL collects and for what purpose. The Employee Privacy Notice can be accessed here.
Software Engineer
Data engineer job in Pleasanton, CA
Hi
Job Title: Software Engineer
Duration: 12 months
Engages actively in building out a dynamic and productive development organization and continuously improving practices and methodology
Excellent problem-solving skills; meticulous and methodical; able to learn and apply new technologies quickly and work self-directed
Minimum 7+ years of experience in backend application development
Deep knowledge of writing best-practice code using Node.js, TypeScript, and Docker
Experience integrating and leveraging RESTful services
Solid experience designing scalable microservices architectures
Experienced with design patterns, object-oriented programming, and functional programming concepts
Writes runtime and test code; provides second-level support and troubleshoots problems with existing applications
Experience with GitHub Actions (or any CI/CD pipelines)
Understanding of performance scripts and performance improvements for microservices.
If interested, please share the details below along with an updated resume:
Full Name:
Phone:
E-mail:
Rate:
Location:
Visa Status:
Availability:
SSN (Last 4 digit):
Date of Birth:
LinkedIn Profile:
Availability for the interview:
Availability for the project:
Senior Data Engineer
Data engineer job in Pleasanton, CA
Clorox is the place that's committed to growth - for our people and our brands. Guided by our purpose and values, and with people at the center of everything we do, we believe every one of us can make a positive impact on consumers, communities, and teammates. Join our team. #CloroxIsThePlace (**************************************************************************** UpdateUrns=urn%3Ali%3Aactivity%3A**********048001024)
**Your role at Clorox:**
At Clorox, we're revolutionizing the consumer product goods industry with cutting-edge technology and innovative solutions with data at the center of it all. We believe in harnessing the power of data to drive meaningful change and deliver exceptional results. Join our dynamic enterprise data team and be at the forefront of the data engineering revolution to build data as a product and use data as a competitive advantage.
Are you passionate about building the next generation of data solutions? We are looking for an experienced and highly skilled senior data engineer who thrives on solving complex problems and loves working with the latest cloud technologies. If you're eager to shape the future of data in a fast-paced, tech-forward environment, we want to hear from you!
As a Senior Data Engineer at Clorox, you will lead exciting projects, design, build and maintain scalable data solution architectures and make a real impact on our data strategy. In this role, you will leverage our cutting-edge technology, your strong background in business intelligence and engineering in cloud platforms to build futuristic data products that create value across the organization. You will also serve as a key collaborator with our business analytics and enterprise technology stakeholders to innovate, build and sustain the cloud data infrastructure that will further Clorox's data strategy.
**In this role, you will:**
**Architect & Innovate**
+ Use Azure Data Factory, Databricks, Python, Spark, and other tools to build cutting-edge data pipelines, data models, semantic modeling layers, and dynamic data solutions.
+ Develop high-quality code to process and move vast amounts of data and integrate systems across diverse platforms.
+ Develop and implement data improvements and optimization techniques to enhance data products.
+ Continuously evaluate and adopt new technologies and tools to enhance data engineering capabilities and efficiency.
**Optimize and Scale**
+ Build and maintain data pipelines to integrate data from various source systems.
+ Optimize data pipelines for performance, reliability, and cost-effectiveness.
+ Work with enterprise infrastructure and technology teams to implement best practices for performance monitoring, cloud resource management, including scaling, cost control and security.
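As a minimal illustration of the pipeline-building duties above (and not Clorox's actual stack), the extract-transform-load pattern can be sketched with Python's standard library alone, using an in-memory CSV source and a SQLite table standing in for a cloud warehouse; all table and column names here are invented:

```python
import csv
import io
import sqlite3

# Extract: a CSV source (in-memory here; in practice a file, API, or landing zone).
raw = io.StringIO("sku,units_sold\nBLE-001,120\nCLX-002,85\n")

# Load target: SQLite standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sku TEXT, units_sold INTEGER)")

# Transform: parse and type-cast each row before loading.
rows = [(r["sku"], int(r["units_sold"])) for r in csv.DictReader(raw)]
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# A downstream consumer can now query the loaded data.
total = conn.execute("SELECT SUM(units_sold) FROM sales").fetchone()[0]
print(total)  # 205
```

In a production pipeline the same shape holds, with tools like Data Factory orchestrating the extract step and Spark or Databricks performing the transform at scale.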
**Ensure Quality and Governance**
+ Ensure safe custody, transport and storage of data in the data platforms.
+ Collaborate with Data Governance Stewards and Business Stakeholders to enforce the business rules, data quality rules and data cataloging activities.
+ Ensure data quality, security and compliance for the data products responsible under this role.
**Enhance BI Capabilities**
+ Develop and manage business intelligence solutions for the organization, transforming data into insights that drive business value.
+ Provide technical guidance to Analytics Product Owners and Business Leaders to improve business decisions through data analytics, data visualization, and data modeling techniques and technologies.
**Collaborate and Lead**
+ Work closely with analytics product owners, data scientists, analysts and business users to understand data needs and provide technical solutions.
+ Provide technical guidance to junior engineers and BI teams to use data efficiently and effectively.
**What we look for:**
+ 7+ years of experience with data engineering, data warehousing, and business intelligence, with substantial experience managing large-scale data projects
+ 5+ years' experience implementing data solutions on cloud platforms such as Microsoft Azure and AWS (Azure preferred)
+ 4+ years of business intelligence experience using technologies such as Power BI and Tableau (Power BI preferred)
+ 4+ years of experience with Azure services like Data Factory, Databricks, and Delta Lake will be an added advantage.
+ Experience in end-to-end support for data engineering solutions (Data Pipelines), including designing, developing, deploying, and supporting solutions for existing platforms
+ Strong knowledge of cloud services such as Microsoft Azure and associated data services such as Data Factory, Databricks, Delta Lake, Synapse, SQL DB, etc.
+ Ability to design and implement scalable and performant data pipelines and architecture. Deep understanding of data modeling, data access, and data storage techniques.
+ Strong proficiency in Python (required), Spark, Pandas, and CI/CD methodologies.
+ Strong BI skills for building reports and dashboards using Power BI, Tableau, etc.
+ Strong in designing and building complex reports and dashboards, performance tuning, etc.
+ Strong in reporting security, including row-level, column-level, and object-level security and masking.
+ Strong SQL and DML skills to recast backend data for data changes, restatements, data processing errors, etc.
+ Knowledge or experience in D365 Dataverse and reporting will be an added advantage.
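The row-level security and masking item in the list above can be sketched in miniature with Python's stdlib `sqlite3`: each caller sees only their region's rows, and a sensitive column is masked unless the role permits it. Table, column, and role names are invented for illustration; real deployments would use the BI platform's or database's native security features rather than hand-rolled filtering.

```python
import sqlite3

# Hypothetical orders table with a sensitive margin column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, revenue REAL, margin REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("WEST", 100.0, 0.30),
    ("EAST", 200.0, 0.25),
])

def report(region, role):
    # Row-level security: filter rows to the caller's region.
    # Column masking: expose margin only to the 'finance' role.
    margin_expr = "margin" if role == "finance" else "NULL"
    sql = f"SELECT revenue, {margin_expr} FROM orders WHERE region = ?"
    return conn.execute(sql, (region,)).fetchall()

print(report("WEST", "analyst"))  # [(100.0, None)]
print(report("WEST", "finance"))  # [(100.0, 0.3)]
```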
#LI-HYBRID
**Workplace type:**
Hybrid: in office Tuesday-Thursday; WFH Monday and Friday
**Our values-based culture connects to our purpose and empowers people to be their best, professionally and personally. We serve a diverse consumer base which is why we believe teams that reflect our consumers bring fresh perspectives, drive innovation, and help us stay attuned to the world around us. That's why we foster an inclusive culture where every person can feel respected, valued, and fully able to participate, and ultimately able to thrive.** Learn more (*********************************************************************************************************).
**[U.S.]Additional Information:**
At Clorox, we champion people to be well and thrive, starting with our own people. To help make this possible, we offer comprehensive, competitive benefits that prioritize all aspects of wellbeing and provide flexibility for our teammates' unique needs. This includes robust health plans, a market-leading 401(k) program with a company match, flexible time off benefits (including half-day summer Fridays depending on location), inclusive fertility/adoption benefits, and more.
We are committed to fair and equitable pay and are transparent with current and future teammates about our full salary ranges. We use broad salary ranges that reflect the competitive market for similar jobs, provide sufficient opportunity for growth as you gain experience and expand responsibilities, while also allowing for differentiation based on performance. Based on the breadth of our ranges, most new hires will start at Clorox in the first half of the applicable range. Your starting pay will depend on job-related factors, including relevant skills, knowledge, experience and location. The applicable salary range for every role in the U.S. is based on your work location and is aligned to one of three zones according to the cost of labor in your area.
-Zone A: $128,000 - $252,200
-Zone B: $117,400 - $231,200
-Zone C: $106,700 - $210,200
All ranges are subject to change in the future. Your recruiter can share more about the specific salary range for your location during the hiring process.
This job is also eligible for participation in Clorox's incentive plans, subject to the terms of the applicable plan documents and policies.
Please apply directly to our job postings and do not submit your resume to any person via text message. Clorox does not conduct text-based interviews and encourages you to be cautious of anyone posing as a Clorox recruiter via unsolicited texts during these uncertain times.
To all recruitment agencies: Clorox (and its brand families) does not accept agency resumes. Please do not forward resumes to Clorox employees, including any members of our leadership team. Clorox is not responsible for any fees related to unsolicited resumes.
**Who we are.**
We champion people to be well and thrive every single day. We're proud to be in every corner of homes, schools, and offices-making daily life simpler and easier through our beloved brands. Working with us, you'll join a team of passionate problem solvers and relentless innovators fueled by curiosity, growth, and progress. We relish taking on new, interesting challenges that allow our people to collaborate and thrive at work. And most importantly, we care about each other as multifaceted, whole humans. Join us as we reimagine what's possible and work with purpose to make a difference in the world.
**This is the place where doing the right thing matters.**
Doing the right thing is the compass that guides every decision we make-and we're proud to be globally recognized and awarded for our continuous corporate responsibility efforts. Clorox is a signatory of the United Nations Global Compact and the Ellen MacArthur Foundation's New Plastics Economy Global Commitment. The Clorox Company and its Foundation prioritize giving back to the communities we call home and contribute millions annually in combined cash grants, product donations, and cause-marketing. For more information, visit TheCloroxCompany.com and follow us on social media at @CloroxCo.
**Our commitment to diversity, inclusion, and equal employment opportunity.**
We seek out and celebrate diverse backgrounds and experiences. We're always looking for fresh perspectives, a desire to bring your best, and a nonstop drive to keep growing and learning. Learn more about our Inclusion, Diversity, Equity, and Allyship (IDEA) journey here (*********************************************** .
The Clorox Company and its subsidiaries are an EEO/AA/Minorities/Women/LGBT/Protected Veteran/Disabled employer. Learn more to Know Your Rights (*********************************************************************************************** .
Clorox is committed to providing reasonable accommodations for qualified applicants with disabilities and disabled veterans during the hiring and interview process. If you need assistance or accommodations due to a disability, please contact us at ***************** . Please note: this inbox is reserved for individuals with disabilities in need of assistance and is not a means of inquiry about positions/application statuses.
The Clorox Company and its subsidiaries are an EEO/AA/ Minorities/Women/LGBT/Protected Veteran/Disabled employer.
Lead Data Engineer/ETL
Data engineer job in Pleasanton, CA
Core Competencies - Must Have:
5-10 years of Informatica (ETL) experience
BDM
TDM
Data Quality
Big Data
Data Warehouse
Star Schema
Very strong verbal and written communication skills
Bachelor's degree in a technical field such as computer science, computer engineering or related field required
Nice to Haves:
Hands-on Unix knowledge
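The star schema listed among the must-haves above can be sketched with Python's stdlib `sqlite3`: a central fact table keyed to surrounding dimension tables, which reporting queries join back together. All table and column names here are invented for illustration:

```python
import sqlite3

# A minimal star schema: one fact table referencing two dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales  (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    units      INTEGER
);
INSERT INTO dim_date    VALUES (1, '2024-01-01');
INSERT INTO dim_product VALUES (10, 'Widget');
INSERT INTO fact_sales  VALUES (1, 10, 5);
""")

# A typical reporting query joins the fact table out to each dimension.
row = conn.execute("""
    SELECT d.day, p.name, f.units
    FROM fact_sales f
    JOIN dim_date d    ON d.date_id = f.date_id
    JOIN dim_product p ON p.product_id = f.product_id
""").fetchone()
print(row)  # ('2024-01-01', 'Widget', 5)
```

The same layout appears in warehouse tools like Informatica, where mappings populate the dimension tables first and then the fact table that references them.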
Additional Information
All your information will be kept confidential according to EEO guidelines.