Databricks Data Engineer - Manager - Consulting - Location Open
Ernst & Young Oman 4.7
Data engineer job in San Francisco, CA
At EY, we're all in to shape your future with confidence.
We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Technology - Data and Decision Science - Data Engineering - Manager
We are looking for a dynamic and experienced Manager of Data Engineering to lead our team in designing and implementing complex cloud analytics solutions with a strong focus on Databricks. The ideal candidate will possess deep technical expertise in data architecture, cloud technologies, and analytics, along with exceptional leadership and client management skills.
The opportunity
In this role, you will design and build analytics solutions that deliver significant business value. You will collaborate with other data and analytics professionals, management, and stakeholders to ensure that business requirements are translated into effective technical solutions. Key responsibilities include:
Understanding and analyzing business requirements to translate them into technical requirements.
Designing, building, and operating scalable data architecture and modeling solutions.
Staying up to date with the latest trends and emerging technologies to maintain a competitive edge.
Key Responsibilities
As a Data Engineering Manager, you will play a crucial role in managing and delivering complex technical initiatives. Your time will be spent across various responsibilities, including:
Leading workstream delivery and ensuring quality in all processes.
Engaging with clients on a daily basis, actively participating in working sessions, and identifying opportunities for additional services.
Implementing resource plans and budgets while managing engagement economics.
This role offers the opportunity to work in a dynamic environment where you will face challenges that require innovative solutions. You will learn and grow as you guide others and interpret internal and external issues to recommend quality solutions. Travel may be required regularly based on client needs.
Skills and attributes for success
To thrive in this role, you should possess a blend of technical and interpersonal skills. The following attributes will make a significant impact:
Lead the design and development of scalable data engineering solutions using Databricks on cloud platforms (e.g., AWS, Azure, GCP).
Oversee the architecture of complex cloud analytics solutions, ensuring alignment with business objectives and best practices.
Manage and mentor a team of data engineers, fostering a culture of innovation, collaboration, and continuous improvement.
Collaborate with clients to understand their analytics needs and deliver tailored solutions that drive business value.
Ensure the quality, integrity, and security of data throughout the data lifecycle, implementing best practices in data governance.
Drive end-to-end data pipeline development, including data ingestion, transformation, and storage, leveraging Databricks and other cloud services.
Communicate effectively with stakeholders, including technical and non-technical audiences, to convey complex data concepts and project progress.
Manage client relationships and expectations, ensuring high levels of satisfaction and engagement.
Stay abreast of the latest trends and technologies in data engineering, cloud computing, and analytics.
Strong analytical and problem‑solving abilities.
Excellent communication skills, with the ability to convey complex information clearly.
Proven experience in managing and delivering projects effectively.
Ability to build and manage relationships with clients and stakeholders.
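The pipeline responsibilities above (ingestion, transformation, storage) follow the layered pattern Databricks documents as the medallion architecture. A minimal pure-Python sketch of the bronze → silver → gold flow; the table fields and source name are invented for illustration, and a real Databricks implementation would use Spark DataFrames rather than lists of dicts:

```python
from collections import defaultdict

def to_bronze(raw_rows):
    """Bronze layer: land raw records as-is, tagging each with its source."""
    return [dict(row, _source="pos_feed") for row in raw_rows]

def to_silver(bronze_rows):
    """Silver layer: enforce schema and drop records failing basic quality checks."""
    cleaned = []
    for row in bronze_rows:
        if row.get("store_id") and isinstance(row.get("amount"), (int, float)):
            cleaned.append({"store_id": row["store_id"], "amount": float(row["amount"])})
    return cleaned

def to_gold(silver_rows):
    """Gold layer: business-level aggregate (revenue per store)."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["store_id"]] += row["amount"]
    return dict(totals)

# Two valid rows, one missing its key, one with a malformed amount.
raw = [{"store_id": "S1", "amount": 10}, {"store_id": "S1", "amount": 2.5},
       {"store_id": None, "amount": 99}, {"store_id": "S2", "amount": "bad"}]
gold = to_gold(to_silver(to_bronze(raw)))
```

The point of the layering is that each stage is independently re-runnable and auditable, which is what the data-governance bullet above is asking for.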
To qualify for the role, you must have
Bachelor's degree in Computer Science, Engineering, or a related field required; Master's degree preferred.
Typically no less than 4-6 years of relevant experience in data engineering, with a focus on cloud data solutions and analytics.
Proven expertise in Databricks and experience with Spark for big data processing.
Strong background in data architecture and design, with experience in building complex cloud analytics solutions.
Experience in leading and managing teams, with a focus on mentoring and developing talent.
Strong programming skills in languages such as Python, Scala, or SQL.
Excellent problem‑solving skills and the ability to work independently and as part of a team.
Strong communication and interpersonal skills, with a focus on client management.
Required Expertise for Managerial Role
Strategic Leadership: Ability to align data engineering initiatives with organizational goals and drive strategic vision.
Project Management: Experience in managing multiple projects and teams, ensuring timely delivery and adherence to project scope.
Stakeholder Engagement: Proficiency in engaging with various stakeholders, including executives, to understand their needs and present solutions effectively.
Change Management: Skills in guiding clients through change processes related to data transformation and technology adoption.
Risk Management: Ability to identify potential risks in data projects and develop mitigation strategies.
Technical Leadership: Experience in leading technical discussions and making architectural decisions that impact project outcomes.
Documentation and Reporting: Proficiency in creating comprehensive documentation and reports to communicate project progress and outcomes to clients.
Large-Scale Implementation Programs
Enterprise Data Lake Implementation: Led the design and deployment of a cloud-based data lake solution for a Fortune 500 retail client, integrating data from multiple sources (e.g., ERPs, POS systems, e‑commerce platforms) to enable advanced analytics and reporting capabilities.
Real‑Time Analytics Platform: Managed the development of a real‑time analytics platform using Databricks for a financial services organization, enabling real‑time fraud detection and risk assessment through streaming data ingestion and processing.
Data Warehouse Modernization: Oversaw the modernization of a legacy data warehouse to a cloud‑native architecture for a healthcare provider, implementing ETL processes with Databricks and improving data accessibility for analytics and reporting.
Ideally, you'll also have
Experience with advanced data analytics tools and techniques.
Familiarity with machine learning concepts and applications.
Knowledge of industry trends and best practices in data engineering.
Familiarity with cloud platforms (AWS, Azure, GCP) and their data services.
Knowledge of data governance and compliance standards.
Experience with machine learning frameworks and tools.
What we look for
We seek individuals who are not only technically proficient but also possess the qualities of top performers, including a strong sense of collaboration, adaptability, and a passion for continuous learning. If you are driven by results and have a desire to make a meaningful impact, we want to hear from you.
What we offer you
At EY, we'll develop you with future‑focused skills and equip you with world‑class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more.
We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $125,500 to $230,200. The base salary range for New York City Metro Area, Washington State and California (excluding Sacramento) is $150,700 to $261,600. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
Join us in our team‑led and leader‑enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40‑60% of the time over the course of an engagement, project or year.
Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well‑being.
Are you ready to shape your future with confidence? Apply today.
EY accepts applications for this position on an on‑going basis.
EY focuses on high‑ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi‑disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1‑800‑EY‑HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at **************************.
$150.7k-261.6k yearly 23h ago
Full-Stack Engineer: AI Data Editor
Hex 3.9
Data engineer job in San Francisco, CA
A cutting-edge data analytics firm in San Francisco is seeking a full-stack engineer to enhance user experiences and integrate AI tools within their platform. You will work on innovative projects that shape data interactions, collaborate with teams on product initiatives, and tackle UX challenges. Ideal candidates should possess 3+ years of software engineering experience, proficiency in React and Typescript, and a strong desire to work in AI development. This position offers a competitive salary and benefits, with a hybrid work model.
$126k-178k yearly est. 1d ago
Senior Applications Consultant - Workday Data Consultant
Capgemini 4.5
Data engineer job in San Francisco, CA
Job Description - Senior Applications Consultant - Workday Data Consultant (054374)
Qualifications & Experience:
Certified in Workday HCM
Experience in Workday data conversion
At least one implementation as a data consultant
Ability to work with clients on data conversion requirements and load data into Workday tenants
Flexible to work across delivery landscape including Agile Applications Development, Support, and Deployment
Valid US work authorization (no visa sponsorship required)
6‑8 years overall experience (minimum 2 years relevant), Bachelor's degree
SE Level 1 certification; pursuing Level 2
Experience in package configuration, business analysis, architecture knowledge, technical solution design, vendor management
Responsibilities:
Translate business cases into detailed technical designs
Manage operational and technical issues, translating blueprints into requirements and specifications
Lead integration testing and user acceptance testing
Act as stream lead guiding team members
Participate as an active member within technology communities
Capgemini is an Equal Opportunity Employer encouraging diversity and providing accommodations for disabilities.
All qualified applicants will receive consideration without regard to race, national origin, gender identity or expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law.
Physical, mental, or environmental demands may be referenced. Reasonable accommodations will be considered where possible.
$101k-134k yearly est. 1d ago
Staff Data Scientist - Sales Analytics
Harnham
Data engineer job in San Francisco, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're looking for a Staff Data Scientist to drive Sales and Go-to-Market (GTM) analytics, applying advanced modeling and experimentation to accelerate revenue growth and optimize the full sales funnel.
About the Role
As the senior data scientist supporting Sales and GTM, you will combine statistical modeling, experimentation, and advanced analytics to inform strategy and guide decision-making across our revenue organization. Your work will help leadership understand pipeline health, predict outcomes, and identify the levers that unlock sustainable growth.
Key Responsibilities
Model the Business: Build forecasting and propensity models for pipeline generation, conversion rates, and revenue projections.
Optimize the Sales Funnel: Analyze lead scoring, opportunity progression, and deal velocity to recommend improvements in acquisition, qualification, and close rates.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of pricing, incentives, and campaign initiatives.
Advanced Analytics for GTM: Apply machine learning and statistical techniques to segment accounts, predict churn/expansion, and identify high-value prospects.
Cross-Functional Partnership: Work closely with Sales, Marketing, RevOps, and Product to influence GTM strategy and ensure data-driven decisions.
Data Infrastructure Collaboration: Partner with Analytics Engineering to define data requirements, ensure data quality, and enable self-serve reporting.
Strategic Insights: Present findings to executive leadership, translating complex analyses into actionable recommendations.
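The experimentation responsibilities above reduce, in the simplest case, to comparing conversion rates between a control and a treatment arm. A minimal sketch of a two-proportion z-test in plain Python; the campaign numbers are invented, and a production workflow would typically use scipy or statsmodels instead:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference in conversion rates between two arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical pricing test: 200/2000 control vs 260/2000 treatment conversions.
z = two_proportion_z(200, 2000, 260, 2000)
# |z| > 1.96 indicates significance at the 5% level (two-sided).
```

Uplift modeling and causal inference build on this same machinery, but even a basic test like this is often the first gate for a GTM experiment readout.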
About You
Experience: 6+ years in data science or advanced analytics roles, with significant time spent in B2B SaaS or developer tools environments.
Technical Depth: Expert in SQL and proficient in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Strong understanding of sales analytics, revenue operations, and product-led growth (PLG) motions.
Analytical Rigor: Skilled in experimentation design, causal inference, and building predictive models that influence GTM strategy.
Communication: Exceptional ability to tell a clear story with data and influence senior stakeholders across technical and business teams.
Business Impact: Proven record of driving measurable improvements in pipeline efficiency, conversion rates, or revenue outcomes.
$200k-250k yearly 3d ago
Data Partnerships Lead - Equity & Growth (SF)
Exa
Data engineer job in San Francisco, CA
A cutting-edge AI search engine company in San Francisco is seeking a Data Partnerships specialist to build their data pipeline. The role involves owning the partnerships cycle, making strategic decisions, negotiating contracts, and potentially building a team. Candidates should have experience in contract negotiation and a Juris Doctor degree. This in-person role offers a competitive salary range of $160,000 - $250,000 with above-market equity.
$160k-250k yearly 1d ago
Senior Energy Data Engineer - API & Spark Pipelines
Medium 4.0
Data engineer job in San Francisco, CA
A technology finance firm in San Francisco is seeking an experienced Data Engineer. The role involves building data pipelines, integrating data across various platforms, and developing scalable web applications. The ideal candidate will have a strong background in data analysis, software development, and experience with AWS. The salary range for this position is between $160,000 and $210,000, with potential bonuses and equity.
$160k-210k yearly 4d ago
Global Data ML Engineer for Multilingual Speech & AI
Cartesia
Data engineer job in San Francisco, CA
A leading technology company in San Francisco is seeking a Machine Learning Engineer to ensure the quality and coverage of data across diverse languages. You will design large-scale datasets, evaluate models, and implement quality control systems. The ideal candidate has expertise in multilingual datasets and a strong background in applied ML. This full-time role offers competitive benefits, including fully covered insurance and in-office perks, in a supportive team environment.
$110k-157k yearly est. 1d ago
Founding ML Infra Engineer - Audio Data Platform
David Ai
Data engineer job in San Francisco, CA
A pioneering audio tech company based in San Francisco is searching for a Founding Machine Learning Infrastructure Engineer. In this role, you will build and scale the core infrastructure that powers cutting-edge audio ML products. You will lead the development of systems for training and deploying models. Candidates should have over 5 years of backend experience with strong skills in cloud infrastructure and machine learning principles. The company offers benefits like unlimited PTO and comprehensive health coverage.
$110k-157k yearly est. 4d ago
Data/Full Stack Engineer, Data Storage & Ingestion Consultant
Eon Systems PBC
Data engineer job in San Francisco, CA
About us
At Eon, we are at the forefront of large-scale neuroscientific data collection. Our mission is to enable the safe and scalable development of brain emulation technology to empower humanity over the next decade, beginning with the creation of a fully emulated digital twin of a mouse.
Role
We're a San Francisco team collecting very large microscopy datasets and we need an expert to design and implement our end-to-end data pipeline, from high-rate ingest to multi-petabyte storage and downstream processing. You'll own the strategy (on-prem vs. S3 or hybrid), the bill of materials, and the deployment, and you'll be on the floor wiring, racking, tuning, and validating performance.
Our current instruments generate data at ~1+ GB/s sustained (higher during bursts), and the program will accumulate multiple petabytes over time. You'll help us choose and implement the right architecture, considering reliability and cost controls.
Outcomes (what success looks like)
Within 2 weeks: Implement an immediate data-handling strategy that reliably ingests our initial data streams.
Within 2 weeks: Deliver a documented medium-term data architecture covering storage, networking, ingest, and durability.
Within 1 month: Operationalize the medium-term pipeline in production (ingest → buffer → long-term store → compute access).
Ongoing: Maintain ≥95% uptime for the end-to-end data-handling pipeline after setup.
Responsibilities
Architect ingest & storage: Choose and implement an on-prem hardware and data pipeline design or a cloud/S3 alternative with explicit cost and performance tradeoffs at multi-petabyte scale.
Set up a sustained-write ingest path ≥1 GB/s with adequate burst headroom (camera/frame-to-disk), including networking considerations, cooling, and throttling safeguards.
Optimize footprint & cost: Incorporate on-the-fly compression/downsampling options and quantify CPU budget vs. write-speed tradeoffs; document when/where to compress to control $/PB.
Integrate with acquisition workflows ensuring image data and metadata are compatible with downstream stitching/flat-field correction pipelines.
Enable downstream compute: Expose the data to segmentation/analysis stacks (local GPU nodes or cloud).
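As a sanity check on the numbers the posting quotes, back-of-envelope arithmetic for a ~1 GB/s sustained ingest path; the storage rate and compression ratio below are illustrative assumptions, not figures from the posting:

```python
# Back-of-envelope sizing for a sustained ingest path (illustrative figures).
INGEST_GB_S = 1.0            # sustained write rate from the instruments
SECONDS_PER_DAY = 86_400

tb_per_day = INGEST_GB_S * SECONDS_PER_DAY / 1_000               # GB -> TB
days_per_petabyte = 1_000_000 / (INGEST_GB_S * SECONDS_PER_DAY)  # 1 PB = 1e6 GB

# Effect of assumed 2x on-the-fly compression on long-term storage cost,
# at a hypothetical $20/TB-month object-storage rate.
COST_PER_TB_MONTH = 20.0
raw_pb_month_cost = 1_000 * COST_PER_TB_MONTH        # $/PB-month, uncompressed
compressed_pb_month_cost = raw_pb_month_cost / 2     # $/PB-month, 2x compressed
```

At 1 GB/s the system lands roughly 86 TB/day and crosses a petabyte in under two weeks, which is why the compression-vs-CPU tradeoff in the responsibilities above matters from day one.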
Skills
5+ years designing and deploying high-throughput storage or HPC pipelines (≥1 GB/s sustained ingest) in production.
Deep hands-on with: NVMe RAID/striping, ZFS/MDRAID/erasure coding, PCIe topology, NUMA pinning, Linux performance tuning, and NIC offload features.
Proven delivery of multi-GB/s ingest systems and petabyte-scale storage in production (life-sciences, vision, HPC, or media).
Experience building tiered storage systems (NVMe → HDD/object) and validating real-world throughput under sustained load.
Practical S3/object-storage know-how (AWS S3 and/or on-prem S3-compatible systems) with lifecycle, versioning, and cost controls.
Data integrity & reliability: snapshots, scrubs, replication, erasure coding, and backup/DR for PB-scale systems.
Networking: 25/40/100 GbE (SFP+/SFP28), RDMA/RoCE/iWARP familiarity; switch config and path tuning.
Ability to spec and rack hardware: selecting chassis/backplanes, RAID/HBA cards, NICs, and cooling strategies to prevent NVMe throttling under sustained writes.
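The S3 lifecycle and cost-control know-how in the list above boils down to a policy document of the kind S3's `put_bucket_lifecycle_configuration` API accepts. The dict below is a sketch built offline without calling AWS; the prefix, rule ID, and day thresholds are invented for illustration:

```python
# Sketch of an S3 lifecycle configuration for tiering cold microscopy data.
# Apply with boto3's put_bucket_lifecycle_configuration in a real deployment.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-raw-frames",
            "Filter": {"Prefix": "raw-frames/"},     # only raw acquisition data
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},    # warm tier
                {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},   # cold tier
            ],
            # With versioning on, expire superseded object versions after 30 days.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }
    ]
}
```

The same tiering logic (hot NVMe → HDD/object → archive) is what the "tiered storage systems" bullet describes on-prem; in the cloud it is expressed declaratively like this.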
Ideal skills:
Experience with microscopy or scientific imaging ingest at frame-to-disk speeds, including Micro-Manager-based pipelines and raw-to-containerized format conversions.
Experience with life science imaging data a plus.
Engagement details
Contract (1099 or corp-to-corp); contract-to-hire if there's a mutual fit.
On-site requirement: You must be physically present in San Francisco during build-out and initial operations; local field work (e.g., UCSF) as needed.
Compensation: Contract, $100-300/hour
Timeline: Immediate start
$110k-157k yearly est. 23h ago
Senior Data Engineer: ML Pipelines & Signal Processing
Zendar
Data engineer job in Berkeley, CA
An innovative tech firm in Berkeley seeks a Senior Data Engineer to manage complex data engineering pipelines. You will ensure data quality, support ML engineers across locations, and establish infrastructure standards. The ideal candidate has over 5 years of experience in Data Science or MLOps, strong algorithmic skills, and proficiency in GCP, Python, and SQL. This role offers competitive salary and the chance to impact a growing team in a dynamic field.
$110k-157k yearly est. 4d ago
Senior Data Engineer, Card Data Platform
Capital One 4.7
Data engineer job in San Francisco, CA
A financial services company in San Francisco seeks a Distinguished Data Engineer to lead innovation in data architecture and management. The role involves building critical data solutions, mentoring teams, and leveraging cloud technologies like AWS. Ideal candidates will have significant experience in data engineering, a Bachelor's degree, and proficiency in modern data practices to drive customer value through analytics and automation.
$106k-144k yearly est. 1d ago
Staff Machine Learning Data Engineer
Backflip 3.7
Data engineer job in San Francisco, CA
Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper?
Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.”
Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team.
We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in.
If you're excited to define the next generation of hard tech, come build it with us.
The Role
We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD.
You'll design the systems, tools, and strategies that turn the world's engineering knowledge (text, geometry, and design intent) into high-quality training data.
This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution.
You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.
What You'll Do
Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation.
Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale.
Design scalable data systems for multimodal training (text, geometry, CAD, and more).
Develop and automate data collection, curation, and validation workflows.
Collaborate with MLEs to design and execute experiments that measure and improve model performance.
Build tools and metrics for dataset analysis, monitoring, and quality assurance.
Contribute to model development through insights grounded in data, shaping what, how, and when we train.
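The curation and validation workflows above often reduce to deduplication plus quality gating before any sampling or augmentation. A toy sketch in plain Python; the length heuristic and record fields are invented, and real pipelines would run this logic distributed (e.g., in PySpark or Ray) over far richer quality signals:

```python
import hashlib

def curate(records, min_len=20):
    """Deduplicate by content hash, then keep records passing a length gate."""
    seen, kept = set(), []
    for rec in records:
        digest = hashlib.sha256(rec["text"].encode()).hexdigest()
        if digest in seen:          # exact duplicate: skip
            continue
        seen.add(digest)
        if len(rec["text"]) >= min_len:   # crude quality filter
            kept.append(rec)
    return kept

docs = [
    {"text": "A parametric bracket with four mounting holes."},
    {"text": "A parametric bracket with four mounting holes."},  # duplicate
    {"text": "too short"},                                       # fails gate
]
curated = curate(docs)
```

Hash-based dedup and simple gates like this are usually the first, cheapest pass; the "augmentation, filtering, and sampling" strategy work sits on top of them.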
Who You Are
You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world.
You have deep experience with data engineering for ML, including distributed systems, data extraction, transformation, and loading, and large-scale data processing (e.g. PySpark, Beam, Ray, or similar).
You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.).
You've developed data augmentation, sampling, or curation strategies that improved model performance.
You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence.
You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible.
You care deeply about data quality, reproducibility, and scalability.
You're excited to help shape the future of AI for physical design.
Bonus points if:
You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines.
You have an interest in math, geometry, topology, rendering, or computational geometry.
You've worked in 3D printing, CAD, or computer graphics domains.
Why Backflip
This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world.
You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it.
Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future.
Let's build the tools the future will be made in.
$126k-178k yearly est. 4d ago
Foundry Data Engineer: ETL Automation & Dashboards
Data Freelance Hub 4.5
Data engineer job in San Francisco, CA
A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
$114k-160k yearly est. 3d ago
Multi-Channel Demand Gen Leader - Data SaaS
Motherduck Corporation
Data engineer job in San Francisco, CA
A growing technology firm based in San Francisco is seeking a Demand Generation Marketer to drive campaigns that turn prospects into lifelong customers. This role emphasizes creativity in marketing, collaboration with teams, and a strong data-driven mindset. The ideal candidate will have experience in B2B SaaS environments and a passion for engaging technical audiences. Flexible work environment and competitive compensation offered.
$112k-157k yearly est. 23h ago
Data Scientist
Talent Software Services 3.6
Data engineer job in Novato, CA
Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA.
Client's Data Science is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas of Client. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions. The Data Science team develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions across Client.
The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at the data, data-structure, and documentation levels, following international standards and GCP. The DMS Center of Risk Based Quality Management (RBQM) sub-function is responsible for the implementation of a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management.
The RBQM Data Scientist supports central monitoring and risk-based quality management for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with a strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies.
Primary Responsibilities/Accountabilities:
The RBQM Data Scientist may perform a range of the following responsibilities, depending upon the study's complexity and the study's development stage:
Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies.
Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets.
Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review.
Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient.
Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes.
Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics.
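The posting is SAS-centric, but the KRI logic itself is language-agnostic. As an illustration only (not the client's actual metrics), a Python sketch of one common KRI: per-site adverse-event rate flagged against a pre-defined threshold; the subject data and threshold are invented:

```python
def site_ae_kri(subjects, threshold=0.25):
    """Flag sites whose adverse-event rate exceeds a pre-defined KRI threshold."""
    by_site = {}
    for s in subjects:
        n, ae = by_site.get(s["site"], (0, 0))
        by_site[s["site"]] = (n + 1, ae + (1 if s["had_ae"] else 0))
    # Report only sites breaching the threshold, with their observed rate.
    return {site: ae / n for site, (n, ae) in by_site.items() if ae / n > threshold}

subjects = [
    {"site": "101", "had_ae": True},  {"site": "101", "had_ae": True},
    {"site": "101", "had_ae": False}, {"site": "102", "had_ae": False},
    {"site": "102", "had_ae": True},  {"site": "102", "had_ae": False},
    {"site": "102", "had_ae": False},
]
flags = site_ae_kri(subjects)   # site 101 (2/3) breaches; site 102 (1/4) does not
```

In the SAS environment the posting describes, the same computation would typically live in a validated macro run across studies, with the thresholds documented in the RBQM specifications.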
Qualifications:
PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field.
Relevant clinical development experience (programming, RBM/RBQM, Data Management), for example:
PhD: 3+ years
MS: 5+ years
BA/BS: 8+ years
Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets).
Hands-on experience working with clinical trial data.
Proficiency with Microsoft Word, Excel, and PowerPoint.
Technical - Preferred / Strong Plus
Experience with RAVE EDC.
Awareness or working knowledge of CDISC, CDASH, SDTM standards.
Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms.
Preferred:
Knowledge of GCP, ICH, FDA guidance related to clinical trials and risk-based monitoring.
Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs.
Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams.
Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
$99k-138k yearly est. 4d ago
Staff Data Engineer
PG Forsta
Data engineer job in Emeryville, CA
PG Forsta is the leading experience measurement, data analytics, and insights provider for complex industries - a status we earned over decades of deep partnership with clients to help them understand and meet the needs of their key stakeholders. Our earliest roots are in U.S. healthcare - perhaps the most complex of all industries. Today we serve clients around the globe in every industry to help them improve the Human Experiences at the heart of their business. We serve our clients through an unparalleled offering that combines technology, data, and expertise to enable them to pinpoint and prioritize opportunities, accelerate improvement efforts and build lifetime loyalty among their customers and employees.
Like all great companies, our success is a function of our people and our culture. Our employees have world-class talent, a collaborative work ethic, and a passion for the work that have earned us trusted advisor status among the world's most recognized brands. As a member of the team, you will help us create value for our clients, and you will make us better through your contribution to the work and your voice in the process. Ours is a path of learning and continuous improvement; team efforts chart the course for corporate success.
Our Mission:
We empower organizations to deliver the best experiences. With industry expertise and technology, we turn data into insights that drive innovation and action.
Our Values:
To put Human Experience at the heart of organizations so every person can be seen and understood.
Energize the customer relationship: Our clients are our partners. We make their goals our own, working side by side to turn challenges into solutions.
Success starts with me: Personal ownership fuels collective success. We each play our part and empower our teammates to do the same.
Commit to learning: Every win is a springboard. Every hurdle is a lesson. We use each experience as an opportunity to grow.
Dare to innovate: We challenge the status quo with creativity and innovation as our true north.
Better together: We check our egos at the door. We work together, so we win together.
We are seeking an experienced Staff Data Engineer to join our Unified Data Platform team. The ideal candidate will design, develop, and maintain enterprise-scale data infrastructure leveraging Azure and Databricks technologies. This role involves building robust data pipelines, optimizing data workflows, and ensuring data quality and governance across the platform. You will collaborate closely with analytics, data science, and business teams to enable data-driven decision-making.
Duties & Responsibilities:
Design, build, and optimize data pipelines and workflows in Azure and Databricks, including Data Lake and SQL Database integrations.
Implement scalable ETL/ELT frameworks using Azure Data Factory, Databricks, and Spark.
Optimize data structures and queries for performance, reliability, and cost efficiency.
Drive data quality and governance initiatives, including metadata management and validation frameworks.
Collaborate with cross-functional teams to define and implement data models aligned with business and analytical requirements.
Maintain clear documentation and enforce engineering best practices for reproducibility and maintainability.
Ensure adherence to security, compliance, and data privacy standards.
Mentor junior engineers and contribute to establishing engineering best practices.
Support CI/CD pipeline development for data workflows using GitLab or Azure DevOps.
Partner with data consumers to publish curated datasets into reporting tools such as Power BI.
Stay current with advancements in Azure, Databricks, Delta Lake, and data architecture trends.
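The validation-framework duty above can be illustrated with a platform-agnostic sketch: run a set of named rules over incoming rows and partition them into passing and failing before a curated dataset is published. The rules, field names, and rows are invented for illustration; a real Azure/Databricks pipeline would express similar checks in Spark or a governance tool.

```python
# Hypothetical rows awaiting publication to a curated dataset
rows = [
    {"order_id": 1, "amount": 120.0, "region": "EMEA"},
    {"order_id": 2, "amount": -5.0, "region": "AMER"},
    {"order_id": 3, "amount": 40.0, "region": None},
]

# Named data-quality rules; each returns True when the row passes
rules = {
    "amount_non_negative": lambda r: r["amount"] >= 0,
    "region_present": lambda r: r["region"] is not None,
}

def validate(rows, rules):
    """Partition rows into passing rows and (row, broken_rule_names) failures."""
    passed, failed = [], []
    for row in rows:
        broken = [name for name, check in rules.items() if not check(row)]
        if broken:
            failed.append((row, broken))
        else:
            passed.append(row)
    return passed, failed

passed, failed = validate(rows, rules)
print(len(passed), len(failed))
```

Keeping rules named and data-driven like this is what makes the failures reportable to a metadata or governance layer rather than silently dropped.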
Technical Skills:
Advanced proficiency in Azure (5+ years), including Data Lake, ADF, and SQL.
Strong expertise in Databricks (5+ years), Apache Spark (5+ years), and Delta Lake (5+ years).
Proficient in SQL (10+ years) and Python (5+ years); familiarity with Scala is a plus.
Strong understanding of data modeling, data governance, and metadata management.
Knowledge of source control (Git), CI/CD, and modern DevOps practices.
Familiarity with the Power BI visualization tool.
Minimum Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
7+ years of experience in data engineering, with significant hands-on work in cloud-based data platforms (Azure).
Experience building real-time data pipelines and streaming frameworks.
Strong analytical and problem-solving skills.
Proven ability to lead projects and mentor engineers.
Excellent communication and collaboration skills.
Preferred Qualifications:
Master's degree in Computer Science, Engineering, or a related field.
Exposure to machine learning integration within data engineering pipelines.
Don't meet every single requirement? Studies have shown that women and people of color are less likely to apply to jobs unless they meet every single qualification. At PG Forsta we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your past experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.
Additional Information for US based jobs:
Press Ganey Associates LLC is an Equal Employment Opportunity/Affirmative Action employer and is committed to a diverse workforce. We do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, veteran status, disability, or any other federal, state, or local protected class.
Pay Transparency Non-Discrimination Notice - Press Ganey will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information.
The expected base salary for this position ranges from $110,000 to $170,000. It is not typical for offers to be made at or near the top of the range. Salary offers are based on a wide range of factors including relevant skills, training, experience, education, and, where applicable, licensure or certifications obtained. Market and organizational factors are also considered. In addition to base salary and a competitive benefits package, successful candidates are eligible to receive a discretionary bonus or commission tied to achieved results.
All your information will be kept confidential according to EEO guidelines.
Our privacy policy can be found here: legal-privacy/
$110k-170k yearly 23h ago
Principal Software Development Build Engineer
Dell 4.8
Data engineer job in Pleasanton, CA
The Software Engineering team delivers next-generation application enhancements and new products for a changing world. Working at the cutting edge, we design and develop software for platforms, peripherals, applications and diagnostics - all with the most advanced technologies, tools, software engineering methodologies and the collaboration of internal and external partners.
Join us to do the best work of your career and make a profound social impact as a Principal Software Development Build Engineer in Santa Clara, California.
What you'll achieve
As a Principal Software Development Build Engineer, you will own and evolve CI/CD pipelines, build automation and release processes for our scale-out storage and data protection platform. You'll drive modernization of build systems, ensure fast/reliable releases and mentor junior engineers while collaborating closely with development, QA and operations.
You will:
Architect and optimize build/release pipelines for complex, distributed software
Lead improvements in CI/CD workflows, automation, and developer productivity
Troubleshoot build failures and enforce branching, versioning, and governance standards
Integrate test automation and security checks into pipelines
Mentor engineers and drive adoption of modern build tools and practices
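One small, concrete piece of the "branching, versioning, and governance standards" responsibility is automated release-version bumping. The sketch below shows one plausible way to do it in Python; the `vMAJOR.MINOR.PATCH` tag convention and function name are assumptions for illustration, not a description of Dell's tooling.

```python
import re

def bump_version(tag: str, part: str) -> str:
    """Bump a semantic-version tag like 'v1.4.2'; part is 'major', 'minor', or 'patch'."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", tag)
    if not m:
        raise ValueError(f"not a semver tag: {tag}")
    major, minor, patch = map(int, m.groups())
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    elif part == "patch":
        patch += 1
    else:
        raise ValueError(f"unknown part: {part}")
    return f"v{major}.{minor}.{patch}"

print(bump_version("v1.4.2", "minor"))  # v1.5.0
```

In a CI/CD pipeline (Jenkins, GitLab CI, GitHub Actions) a step like this would typically read the latest git tag, decide the bump from commit metadata, and push the new tag, so version governance is enforced by the pipeline rather than by hand.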
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here's what we are looking for with this role:
Essential Requirements
8+ years of experience in build/release engineering or DevOps (or equivalent skill)
Expertise with CI/CD platforms, e.g., Jenkins, GitLab CI, GitHub Actions
Proficiency in Python, Bash or Groovy for automation
Experience with Git-based SCM, artifact management (Artifactory/Nexus), and containerized builds (Docker/K8s)
Desirable Skills:
Bachelor's or Master's degree in Computer Science, Engineering or related field
Knowledge of modern build systems (e.g., Bazel, CMake) and cloud CI/CD
Compensation
Dell is committed to fair and equitable compensation practices. The base salary range for this position is $205,700-$266,200.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member - right now at MyWellatDell.com
Who we are
We believe that each of us has the power to make an impact. That's why we put our team members at the center of everything we do. If you're looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we're looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us. Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here . Job ID: R283658
$205.7k-266.2k yearly 1d ago
Lead AI Engineer - Build Autonomous AI Agents & Real-Time Infra
CEF Ai
Data engineer job in San Francisco, CA
A pioneering AI infrastructure company in San Francisco is seeking a Lead AI Engineer to drive the development and implementation of cutting-edge AI solutions. The ideal candidate will have extensive experience in launching tech solutions and a strong understanding of modern AI workflows. As part of a close-knit team led by SV startup veterans, this position offers the chance to make a significant impact in building real-time, privacy-preserving AI systems and to work directly with company leaders.
$76k-116k yearly est. 1d ago
Security Engineering Lead: Build Secure, Scalable Systems
Airbyte
Data engineer job in San Francisco, CA
A growing tech company in San Francisco is seeking a Security Engineering Lead to own security, compliance, and privacy. The role involves leading security initiatives, setting priorities, and collaborating with cross-functional teams. Candidates should have extensive security experience, including hands-on knowledge of cloud security and compliance frameworks such as SOC 2 and ISO 27001. Strong communication and risk management skills are essential to foster a secure environment as the company scales.
$76k-116k yearly est. 4d ago
Staff Data Scientist - Post Sales
Harnham
Data engineer job in San Francisco, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're expanding our data science organization to accelerate customer success after the initial sale, driving onboarding, retention, expansion, and long-term revenue growth.
About the Role
As the senior data scientist supporting post-sales teams, you will use advanced analytics, experimentation, and predictive modeling to guide strategy across Customer Success, Account Management, and Renewals. Your insights will help leadership forecast expansion, reduce churn, and identify the levers that unlock sustainable net revenue retention.
Key Responsibilities
Forecast & Model Growth: Build predictive models for renewal likelihood, expansion potential, churn risk, and customer health scoring.
Optimize the Customer Journey: Analyze onboarding flows, product adoption patterns, and usage signals to improve activation, engagement, and time-to-value.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of onboarding programs, success initiatives, and pricing changes on retention and expansion.
Revenue Insights: Partner with Customer Success and Sales to identify high-value accounts, cross-sell opportunities, and early warning signs of churn.
Cross-Functional Partnership: Collaborate with Product, RevOps, Finance, and Marketing to align post-sales strategies with company growth goals.
Data Infrastructure Collaboration: Work with Analytics Engineering to define data requirements, maintain data quality, and enable self-serve dashboards for Success and Finance teams.
Executive Storytelling: Present clear, actionable recommendations to senior leadership that translate complex analysis into strategic decisions.
About You
Experience: 6+ years in data science or advanced analytics, with a focus on post-sales, customer success, or retention analytics in a B2B SaaS environment.
Technical Skills: Expert SQL and proficiency in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Deep understanding of SaaS metrics such as net revenue retention (NRR), gross churn, expansion ARR, and customer health scoring.
Analytical Rigor: Strong background in experimentation design, causal inference, and predictive modeling to inform customer-lifecycle strategy.
Communication: Exceptional ability to translate data into compelling narratives for executives and cross-functional stakeholders.
Business Impact: Demonstrated success improving onboarding efficiency, retention rates, or expansion revenue through data-driven initiatives.
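Of the SaaS metrics named above, net revenue retention is the one with a standard worked formula: revenue retained and expanded from an existing cohort, divided by that cohort's starting ARR, excluding new-customer revenue. The figures below are hypothetical, purely to show the arithmetic.

```python
def net_revenue_retention(starting_arr, expansion, contraction, churn):
    """NRR over a period for a fixed customer cohort: start plus expansion,
    minus contraction (downgrades) and churned revenue, over starting ARR.
    New-customer revenue is deliberately excluded."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# Hypothetical cohort: $1.0M starting ARR, $200k expansion,
# $50k contraction, $80k churned
nrr = net_revenue_retention(1_000_000, 200_000, 50_000, 80_000)
print(f"{nrr:.0%}")  # 107%
```

An NRR above 100% means the existing customer base grows revenue on its own, which is exactly the "sustainable net revenue retention" lever the role is asked to find.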
How much does a data engineer earn in Richmond, CA?
The average data engineer in Richmond, CA earns between $94,000 and $184,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Richmond, CA
$131,000
What are the biggest employers of Data Engineers in Richmond, CA?
The biggest employers of Data Engineers in Richmond, CA are: