We design, build and maintain infrastructure to support agentic workflows for Siri. Our team is in charge of the data generation, introspection and evaluation frameworks that are key to efficiently developing foundation models and agentic workflows for Siri applications. In this team you will have the opportunity to work at the intersection of cutting-edge foundation models and products.
Minimum Qualifications
Strong background in computer science: algorithms, data structures and system design
3+ years of experience in large-scale distributed system design, operation and optimization
Experience with SQL/NoSQL database technologies, data warehouse frameworks like BigQuery/Snowflake/Redshift/Iceberg and data pipeline frameworks like GCP Dataflow/Apache Beam/Spark/Kafka
Experience processing data for ML applications at scale
Excellent interpersonal skills; able to work independently as well as cross-functionally
Preferred Qualifications
Experience fine-tuning and evaluating Large Language Models
Experience with Vector Databases
Experience deploying and serving LLMs
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses - including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
$147.4k-272.1k yearly 4d ago
Data Partnerships Lead - Equity & Growth (SF)
Exa
Data engineer job in San Francisco, CA
A cutting-edge AI search engine company in San Francisco is seeking a Data Partnerships specialist to build their data pipeline. The role involves owning the partnerships cycle, making strategic decisions, negotiating contracts, and potentially building a team. Candidates should have experience in contract negotiation and a Juris Doctor degree. This in-person role offers a competitive salary range of $160,000 - $250,000 with above-market equity.
$160k-250k yearly 5d ago
Staff Data Engineer, Energy
Medium 4.0
Data engineer job in San Francisco, CA
About GoodLeap
GoodLeap is a technology company delivering best-in-class financing and software products for sustainable solutions, from solar panels and batteries to energy-efficient HVAC, heat pumps, roofing, windows, and more. Over 1 million homeowners have benefited from our simple, fast, and frictionless technology that makes the adoption of these products more affordable, accessible, and easier to understand. Thousands of professionals deploying home efficiency and solar solutions rely on GoodLeap's proprietary, AI-powered applications and developer tools to drive more transparent customer communication, deeper business intelligence, and streamlined payment and operations. Our platform has led to more than $30 billion in financing for sustainable solutions since 2018.
GoodLeap is also proud to support our award-winning nonprofit, GivePower, which is building and deploying life-saving water and clean electricity systems, changing the lives of more than 1.6 million people across Africa, Asia, and South America.
Position Summary
The GoodLeap team is looking for a hands-on Data Engineer with a strong background in API data integrations, Spark processing and data lake development. The focus of this role will be on ingesting production energy data and delivering the aggregated metrics to the many teams in GoodLeap that need them. The successful candidate is a highly motivated individual with the strong technical skills needed to create secure and performant data pipelines and to support our foundational enterprise data warehouse. The ideal candidate is passionate about quality and has a bold, visionary approach to data practices in a modern finance enterprise.
The candidate in this role will be required to work closely with cross‑functional teams to effectively coordinate the complex interdependencies inherent in the applications. Typical teams we collaborate with are Analytics & Reporting, Origination Platform engineers and AI developers. We are looking for a hardworking and passionate engineer who wants to make a difference with the tools they develop.
Essential Job Duties and Responsibilities
Implement data integrations across the organization as well as with business applications
Develop and maintain data-oriented web applications with scalable web services
Participate in the design and development of projects, either independently or in a team
Utilize agile software development lifecycle and DevOps principles
Be the data stewards of the organization, upholding quality and availability standards for our downstream consumers
Be self‑sufficient and fully own the responsibility of executing projects from inception to delivery
Provide mentorship to team members including pair programming and skills development
Participate in data design and architecture discussions, considering solutions in the context of the larger GoodLeap ecosystem
Required Skills, Knowledge & Abilities
6-10 years of full‑time Data Analysis and/or Software Development experience
Experience with end-to-end reporting & analytics technology, from data warehousing (SQL, NoSQL) to BI/visualization (Tableau, Power BI, Excel)
Degree in Computer Science or related discipline
Experience with DataBricks/Spark processing
Expertise with relational databases (including functional SQL/stored procedures) and non-relational databases (MongoDB, DynamoDB, Elasticsearch)
Experience with orchestrating data pipelines with modern tools such as Airflow
Strong knowledge and hands-on experience with open source web frameworks (e.g. Vue/React)
Solid understanding of performance implications and scalability of code
Experience with Amazon Web Services (IAM, Cognito, EC2, S3, RDS, Cloud Formation)
Experience with messaging paradigms and serverless technologies (Lambda, SQS, SNS, SES)
Experience working with serverless applications on public clouds (e.g. AWS)
Experience with large, complex codebases and knowledge of how to maintain them
$160,000 - $210,000 a year
In addition to the above salary, this role may be eligible for a bonus and equity.
Additional Information Regarding Job Duties and Responsibilities
Job duties include additional responsibilities as assigned by one's supervisor or other managers related to the position/department. This job description is meant to describe the general nature and level of work being performed; it is not intended to be construed as an exhaustive list of all responsibilities, duties and other skills required for the position. The Company reserves the right at any time with or without notice to alter or change job responsibilities, reassign or transfer job position or assign additional job responsibilities, subject to applicable law. The Company shall provide reasonable accommodations of known disabilities to enable a qualified applicant or employee to apply for employment, perform the essential functions of the job, or enjoy the benefits and privileges of employment as required by the law.
If you are an extraordinary professional who thrives in a collaborative work culture and values a rewarding career, then we want to work with you! Apply today!
We are committed to protecting your privacy. To learn more about how we collect, use, and safeguard your personal information during the application process, please review our Employment Privacy Policy and Recruiting Policy on AI.
$160k-210k yearly 3d ago
Senior Data Engineer: ML Pipelines & Signal Processing
Zendar
Data engineer job in Berkeley, CA
An innovative tech firm in Berkeley seeks a Senior Data Engineer to manage complex data engineering pipelines. You will ensure data quality, support ML engineers across locations, and establish infrastructure standards. The ideal candidate has over 5 years of experience in Data Science or MLOps, strong algorithmic skills, and proficiency in GCP, Python, and SQL. This role offers competitive salary and the chance to impact a growing team in a dynamic field.
$110k-157k yearly est. 3d ago
Staff Data Scientist - Post Sales
Harnham
Data engineer job in San Francisco, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're expanding our data science organization to accelerate customer success after the initial sale: driving onboarding, retention, expansion, and long-term revenue growth.
About the Role
As the senior data scientist supporting post-sales teams, you will use advanced analytics, experimentation, and predictive modeling to guide strategy across Customer Success, Account Management, and Renewals. Your insights will help leadership forecast expansion, reduce churn, and identify the levers that unlock sustainable net revenue retention.
Key Responsibilities
Forecast & Model Growth: Build predictive models for renewal likelihood, expansion potential, churn risk, and customer health scoring.
Optimize the Customer Journey: Analyze onboarding flows, product adoption patterns, and usage signals to improve activation, engagement, and time-to-value.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of onboarding programs, success initiatives, and pricing changes on retention and expansion.
Revenue Insights: Partner with Customer Success and Sales to identify high-value accounts, cross-sell opportunities, and early warning signs of churn.
Cross-Functional Partnership: Collaborate with Product, RevOps, Finance, and Marketing to align post-sales strategies with company growth goals.
Data Infrastructure Collaboration: Work with Analytics Engineering to define data requirements, maintain data quality, and enable self-serve dashboards for Success and Finance teams.
Executive Storytelling: Present clear, actionable recommendations to senior leadership that translate complex analysis into strategic decisions.
About You
Experience: 6+ years in data science or advanced analytics, with a focus on post-sales, customer success, or retention analytics in a B2B SaaS environment.
Technical Skills: Expert SQL and proficiency in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Deep understanding of SaaS metrics such as net revenue retention (NRR), gross churn, expansion ARR, and customer health scoring.
Analytical Rigor: Strong background in experimentation design, causal inference, and predictive modeling to inform customer-lifecycle strategy.
Communication: Exceptional ability to translate data into compelling narratives for executives and cross-functional stakeholders.
Business Impact: Demonstrated success improving onboarding efficiency, retention rates, or expansion revenue through data-driven initiatives.
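To make the SaaS metrics above concrete, here is a minimal sketch of the common net revenue retention (NRR) calculation for an existing-customer cohort; the dollar figures are invented for illustration:

```python
# Net revenue retention (NRR) as commonly defined for a B2B SaaS cohort:
# ending recurring revenue from an existing cohort divided by its starting
# revenue. All figures below are hypothetical.
def net_revenue_retention(starting_arr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """NRR = (start + expansion - contraction - churned) / start."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# Cohort starts at $10M ARR; expands $2M, contracts $0.5M, churns $1M.
nrr = net_revenue_retention(10_000_000, 2_000_000, 500_000, 1_000_000)
print(f"NRR: {nrr:.0%}")   # NRR: 105%
```

An NRR above 100% means expansion from existing customers outpaces churn and contraction, which is the "sustainable net revenue retention" lever the role targets.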
$200k-250k yearly 2d ago
Staff Machine Learning Data Engineer
Backflip 3.7
Data engineer job in San Francisco, CA
Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper?
Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.”
Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team.
We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in.
If you're excited to define the next generation of hard tech, come build it with us.
The Role
We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD.
You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data.
This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution.
You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.
What You'll Do
Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation.
Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale.
Design scalable data systems for multimodal training (text, geometry, CAD, and more).
Develop and automate data collection, curation, and validation workflows.
Collaborate with MLEs to design and execute experiments that measure and improve model performance.
Build tools and metrics for dataset analysis, monitoring, and quality assurance.
Contribute to model development through insights grounded in data, shaping what, how, and when we train.
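The filtering, deduplication, and sampling stages mentioned above can be sketched roughly as follows; this is an illustrative pattern only, and the record shape, thresholds, and field names are invented, not Backflip's actual pipeline:

```python
# Hedged sketch of a filter -> dedup -> sample curation pass over training
# records. Real pipelines would run these stages distributed (e.g. on
# Spark/Beam/Ray); this shows only the per-record logic.
import hashlib
import random

def curate(records, min_len=20, sample_rate=0.5, seed=0):
    """Drop short records, remove exact duplicates, then subsample."""
    rng = random.Random(seed)            # seeded for reproducibility
    seen, out = set(), []
    for rec in records:
        if len(rec["text"]) < min_len:
            continue                     # quality filter: too short
        digest = hashlib.sha256(rec["text"].encode()).hexdigest()
        if digest in seen:
            continue                     # exact-duplicate removal
        seen.add(digest)
        if rng.random() < sample_rate:   # uniform subsampling
            out.append(rec)
    return out
```

Keeping the sampler seeded makes curation runs reproducible, which matters when comparing model-training experiments against different dataset versions.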
Who You Are
You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world.
You have deep experience with data engineering for ML, including distributed systems; data extraction, transformation, and loading; and large-scale data processing (e.g. PySpark, Beam, Ray, or similar).
You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.).
You've developed data augmentation, sampling, or curation strategies that improved model performance.
You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence.
You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible.
You care deeply about data quality, reproducibility, and scalability.
You're excited to help shape the future of AI for physical design.
Bonus points if:
You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines.
You have an interest in math, geometry, topology, rendering, or computational geometry.
You've worked in 3D printing, CAD, or computer graphics domains.
Why Backflip
This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world.
You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it.
Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future.
Let's build the tools the future will be made in.
$126k-178k yearly est. 3d ago
Founding ML Infra Engineer - Audio Data Platform
David Ai
Data engineer job in San Francisco, CA
A pioneering audio tech company based in San Francisco is searching for a Founding Machine Learning Infrastructure Engineer. In this role, you will build and scale the core infrastructure that powers cutting-edge audio ML products. You will lead the development of systems for training and deploying models. Candidates should have over 5 years of backend experience with strong skills in cloud infrastructure and machine learning principles. The company offers benefits like unlimited PTO and comprehensive health coverage.
$110k-157k yearly est. 3d ago
Data/Full Stack Engineer, Data Storage & Ingestion Consultant
Eon Systems PBC
Data engineer job in San Francisco, CA
About us
At Eon, we are at the forefront of large-scale neuroscientific data collection. Our mission is to enable the safe and scalable development of brain emulation technology to empower humanity over the next decade, beginning with the creation of a fully emulated digital twin of a mouse.
Role
We're a San Francisco team collecting very large microscopy datasets and we need an expert to design and implement our end-to-end data pipeline, from high-rate ingest to multi-petabyte storage and downstream processing. You'll own the strategy (on-prem vs. S3 or hybrid), the bill of materials, and the deployment, and you'll be on the floor wiring, racking, tuning, and validating performance.
Our current instruments generate data at ~1+ GB/s sustained (higher during bursts), and the program will accumulate multiple petabytes in total over time. You'll help us choose and implement the right architecture, with reliability and cost controls in mind.
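The posting's ~1 GB/s sustained figure implies storage accumulates quickly; a back-of-envelope check (using only the rate stated above, assuming decimal units and continuous acquisition) looks like this:

```python
# Back-of-envelope data-volume arithmetic for a ~1 GB/s sustained ingest
# rate, assuming decimal (SI) units and round-the-clock acquisition.
SUSTAINED_GBPS = 1.0            # GB written per second, sustained
SECONDS_PER_DAY = 86_400

tb_per_day = SUSTAINED_GBPS * SECONDS_PER_DAY / 1_000   # GB -> TB
pb_per_year = tb_per_day * 365 / 1_000                  # TB -> PB

print(f"{tb_per_day:.1f} TB/day")     # 86.4 TB/day
print(f"{pb_per_year:.1f} PB/year")   # 31.5 PB/year
```

Even with duty cycles well below 100%, this is why the role pairs ingest throughput with compression/downsampling and $/PB cost control.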
Outcomes (what success looks like)
Within 2 weeks: Implement an immediate data-handling strategy that reliably ingests our initial data streams.
Within 2 weeks: Deliver a documented medium-term data architecture covering storage, networking, ingest, and durability.
Within 1 month: Operationalize the medium-term pipeline in production (ingest → buffer → long-term store → compute access).
Ongoing: Maintain ≥95% uptime for the end-to-end data-handling pipeline after setup.
Responsibilities
Architect ingest & storage: Choose and implement an on-prem hardware and data pipeline design or a cloud/S3 alternative with explicit cost and performance tradeoffs at multi-petabyte scale.
Set up a sustained-write ingest path ≥1 GB/s with adequate burst headroom (camera/frame-to-disk), including networking considerations, cooling, and throttling safeguards.
Optimize footprint & cost: Incorporate on-the-fly compression/downsampling options and quantify CPU budget vs. write-speed tradeoffs; document when/where to compress to control $/PB.
Integrate with acquisition workflows ensuring image data and metadata are compatible with downstream stitching/flat-field correction pipelines.
Enable downstream compute: Expose the data to segmentation/analysis stacks (local GPU nodes or cloud).
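The ingest → buffer → long-term store flow described above follows a common two-tier landing pattern; a minimal sketch (assumed pattern, not Eon's actual design, with invented paths and a drain step that in practice would run as a background worker) might look like:

```python
# Hedged sketch of a two-tier landing pattern: frames land on a fast NVMe
# buffer on the latency-critical path, then migrate to slower archive
# storage off the hot path. Tier locations are passed in by the caller.
import shutil
from pathlib import Path

def ingest_frame(fast_tier: Path, name: str, payload: bytes) -> Path:
    """Land one frame on the fast tier; this is the hot write path."""
    target = fast_tier / name
    target.write_bytes(payload)
    return target

def migrate_to_archive(fast_tier: Path, archive_tier: Path) -> None:
    """Drain the fast-tier buffer into the archive tier."""
    for frame in sorted(fast_tier.iterdir()):
        shutil.move(str(frame), archive_tier / frame.name)
```

Keeping migration off the ingest path is what lets the NVMe tier absorb bursts while the HDD/object tier provides cheap capacity, the NVMe → HDD/object tiering the skills list below asks for.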
Skills
5+ years designing and deploying high-throughput storage or HPC pipelines (≥1 GB/s sustained ingest) in production.
Deep hands-on with: NVMe RAID/striping, ZFS/MDRAID/erasure coding, PCIe topology, NUMA pinning, Linux performance tuning, and NIC offload features.
Proven delivery of multi-GB/s ingest systems and petabyte-scale storage in production (life-sciences, vision, HPC, or media).
Experience building tiered storage systems (NVMe → HDD/object) and validating real-world throughput under sustained load.
Practical S3/object-storage know-how (AWS S3 and/or on-prem S3-compatible systems) with lifecycle, versioning, and cost controls.
Data integrity & reliability: snapshots, scrubs, replication, erasure coding, and backup/DR for PB-scale systems.
Networking: 25/40/100 GbE (SFP+/SFP28); familiarity with RDMA/RoCE/iWARP; switch config and path tuning.
Ability to spec and rack hardware: selecting chassis/backplanes, RAID/HBA cards, NICs, and cooling strategies to prevent NVMe throttling under sustained writes.
Ideal skills:
Experience with microscopy or scientific imaging ingest at frame-to-disk speeds, including Micro-Manager-based pipelines and raw-to-containerized format conversions.
Experience with life science imaging data a plus.
Engagement details
Contract (1099 or corp-to-corp); contract-to-hire if there's a mutual fit.
On-site requirement: You must be physically present in San Francisco during build-out and initial operations; local field work (e.g., UCSF) as needed.
Compensation: Contract, $100-300/hour
Timeline: Immediate start
$110k-157k yearly est. 4d ago
Global Data ML Engineer for Multilingual Speech & AI
Cartesia
Data engineer job in San Francisco, CA
A leading technology company in San Francisco is seeking a Machine Learning Engineer to ensure the quality and coverage of data across diverse languages. You will design large-scale datasets, evaluate models, and implement quality control systems. The ideal candidate has expertise in multilingual datasets and a strong background in applied ML. This full-time role offers competitive benefits, including fully covered insurance and in-office perks, in a supportive team environment.
$110k-157k yearly est. 5d ago
ML Engineer: Fraud Detection & Big Data at Scale
Datavisor 4.5
Data engineer job in Mountain View, CA
A leading security technology firm in California is seeking a skilled Data Science Engineer. You will harness the power of unsupervised machine learning to detect fraudulent activities across various sectors. Ideal candidates have experience with Java/C++, data structures, and machine learning. The company offers competitive pay, flexible schedules, equity participation, health benefits, a collaborative environment, and unique perks such as catered lunches and game nights.
$125k-177k yearly est. 4d ago
ML Data Engineer: Systems & Retrieval for LLMs
Zyphra Technologies Inc.
Data engineer job in Palo Alto, CA
A leading AI technology company based in Palo Alto, CA is seeking a Machine Learning Data Engineer. You will build and optimize the data infrastructure for our machine learning systems while collaborating with ML engineers and infrastructure teams. The ideal candidate has a strong engineering background in Python, experience in production data pipelines, and a deep understanding of distributed systems. This role offers comprehensive benefits, a collaborative environment, and opportunities for innovative contributions.
$110k-157k yearly est. 2d ago
Foundry Data Engineer: ETL Automation & Dashboards
Data Freelance Hub 4.5
Data engineer job in San Francisco, CA
A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
$114k-160k yearly est. 2d ago
Data Engineer
Luxoft
Data engineer job in Irvine, CA
Project description
Luxoft is looking for a Senior Data Engineer to develop a new application to be used by investors and investment committees to review their portfolio data, tailored to specific user groups.
Responsibilities
Work with complex data structures and provide innovative solutions for complex data delivery requirements
Evaluate new and alternative data sources and new integration techniques
Contribute to data models and designs for the data warehouse
Establish standards for documentation and ensure your team adheres to those standards
Influence and develop a thorough understanding of standards and best practices used by your team
Skills
Must have
Seasoned data engineer with hands-on AWS experience conducting end-to-end data analysis and data pipeline build-out using Python, Glue, S3, Airflow, DBT, Redshift, RDS, etc.
Extensive Python API design experience, preferably with FastAPI
Strong SQL knowledge
Nice to have
PySpark
Databricks
ETL design
$99k-139k yearly est. 1d ago
Data Scientist
Talent Software Services 3.6
Data engineer job in Novato, CA
Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA.
Client's Data Science organization is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas of Client. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions, and develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions across Client.
The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at the data, data-structure, and documentation levels, following international standards and GCP. The DMS Center of Risk Based Quality Management (RBQM) sub-function is responsible for implementing a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management.
The RBQM Data Scientist supports central monitoring and risk-based quality management for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with a strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies.
Primary Responsibilities/Accountabilities:
The RBQM Data Scientist may perform a range of the following responsibilities, depending upon the study's complexity and the study's development stage:
Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies.
Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets.
Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review.
Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient.
Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes.
Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics.
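To illustrate the shape of a key risk indicator (KRI) like those above, here is a deliberately simplified sketch, in Python rather than the SAS the role requires, of one common pattern: flagging sites whose adverse-event rate exceeds a multiple of the study mean. The data, threshold, and function names are invented for illustration:

```python
# Hedged illustration of a simple KRI: per-site adverse-event (AE) rate
# flagged when it exceeds a fixed multiple of the study-wide mean rate.
# Real RBQM metrics are pre-defined per study plan; this only shows shape.
from statistics import mean

def flag_sites(site_rates: dict, multiple: float = 2.0) -> list:
    """Return site IDs whose AE rate exceeds `multiple` x the study mean."""
    study_mean = mean(site_rates.values())
    return [site for site, rate in sorted(site_rates.items())
            if rate > multiple * study_mean]

# Hypothetical per-site AE rates (events per subject-visit).
rates = {"site_A": 0.10, "site_B": 0.12, "site_C": 0.55}
print(flag_sites(rates))   # ['site_C']
```

In practice such signals feed central-monitoring review rather than acting automatically, and thresholds come from the study's pre-defined risk plan.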
Qualifications:
PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field.
Relevant clinical development experience (programming, RBM/RBQM, Data Management), for example:
PhD: 3+ years
MS: 5+ years
BA/BS: 8+ years
Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets).
Hands-on experience working with clinical trial data.
Proficiency with Microsoft Word, Excel, and PowerPoint.
Technical - Preferred / Strong Plus
Experience with RAVE EDC.
Awareness or working knowledge of CDISC, CDASH, SDTM standards.
Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms.
Preferred:
Knowledge of GCP, ICH, FDA guidance related to clinical trials and risk-based monitoring.
Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs.
Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams.
Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
$99k-138k yearly est. 3d ago
Training Assessment Data Scientist
Booz Allen Hamilton 4.9
Data engineer job in Twentynine Palms, CA
The Opportunity:
As a data scientist, you're excited at the prospect of unlocking the secrets held by a data set, and you're fascinated by the possibilities presented by IoT, machine learning, and artificial intelligence. In an increasingly connected world, massive amounts of structured and unstructured data open new opportunities. As a data scientist at Booz Allen, you can help turn these complex data sets into useful information to solve global challenges. Across private and public sectors, from fraud detection to cancer research to national intelligence, we need you to help find the answers in the data.
On our team, you'll use your data science expertise to help the client conduct training assessments and after actions based on data generated during live, virtual and constructive training. You'll work closely with clients to understand their questions and needs, and then dig into their data-rich environments to find the pieces of their information puzzle. You'll use the right combination of tools and frameworks to turn sets of disparate data points into objective answers to increase technical and tactical expertise of Marine Corps units. Ultimately, you'll provide a deep understanding of the data, what it all means, and how it can be used to improve Marine Corps training readiness.
Work with us as we use data science for good.
Join us. The world can't wait.
You Have:
5+ years of experience with data exploration, data cleaning, data analysis, data visualization, or data mining
5+ years of experience with statistical and general-purpose programming languages for data analysis
5+ years of experience analyzing structured and unstructured data sources
Experience developing predictive data models, quantitative analyses and visualization of targeted data sources
Experience leading the development of solutions to complex programs
Experience with natural language processing, text mining, or machine learning techniques
Secret clearance
Bachelor's degree
Nice If You Have:
Experience working with Marine Corps Live, Virtual and Constructive Training Systems
Experience with distributed data and computing tools, including MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL
Experience with visualization packages, including Plotly, Seaborn, or ggplot2
Clearance:
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information; Secret clearance is required.
Compensation
At Booz Allen, we celebrate your contributions, provide you with opportunities and choices, and support your total well-being. Our offerings include health, life, disability, financial, and retirement benefits, as well as paid leave, professional development, tuition assistance, work-life programs, and dependent care. Our recognition awards program acknowledges employees for exceptional performance and superior demonstration of our values. Full-time and part-time employees working at least 20 hours a week on a regular basis are eligible to participate in Booz Allen's benefit programs. Individuals that do not meet the threshold are only eligible for select offerings, not inclusive of health benefits. We encourage you to learn more about our total benefits by visiting the Resource page on our Careers site and reviewing Our Employee Benefits page.
Salary at Booz Allen is determined by various factors, including but not limited to location, the individual's particular combination of education, knowledge, skills, competencies, and experience, as well as contract-specific affordability and organizational requirements. The projected compensation range for this position is $77,600.00 to $176,000.00 (annualized USD). The estimate displayed represents the typical salary range for this position and is just one component of Booz Allen's total compensation package for employees. This posting will close within 90 days from the Posting Date.
Identity Statement
As part of the application process, you are expected to be on camera during interviews and assessments. We reserve the right to take your picture to verify your identity and prevent fraud.
Work Model
Our people-first culture prioritizes the benefits of flexibility and collaboration, whether that happens in person or remotely.
If this position is listed as remote or hybrid, you'll periodically work from a Booz Allen or client site facility.
If this position is listed as onsite, you'll work with colleagues and clients in person, as needed for the specific role.
Commitment to Non-Discrimination
All qualified applicants will receive consideration for employment without regard to disability, status as a protected veteran or any other status protected by applicable federal, state, local, or international law.
$77.6k-176k yearly 5d ago
Engineer | Arrive Palm Springs
Arrive Hotel Palm Springs
Data engineer job in Palm Springs, CA
We're looking for a seasoned Engineer that's savvy with preventative maintenance and ongoing repairs to ensure our hotel is safe and comfortable for guests and our team!
ABOUT ARRIVE PALM SPRINGS
Located in the Uptown Design District, ARRIVE Palm Springs is a striking design and architectural landmark, honoring the city's rich modernist legacy. Our 32-room boutique hotel features bright, residential-style guest rooms, a 42-foot-long pool and hot tub, firepits, bocce ball, ping pong tables, and a poolside restaurant and bar serving refreshing cocktails and California-centric classics. If you're passionate about creating genuine connections, thrive in a dynamic hospitality environment, and find joy in elevating guest experiences, we invite you to join our team at ARRIVE Palm Springs!
THE TASK AT HAND:
Conducting ongoing room inspections to identify repair needs
Installing or repairing sheet rock and other wall coverings
Painting and painting touch-ups as needed throughout the property
Installing and repairing basic electrical fixtures, from replacing light switches to swapping lightbulbs
Repairing fixtures and furniture
Installing, replacing, and programming televisions
Performing minor plumbing functions
Replacing and repairing heating and cooling pumps as well as preventative maintenance on HVAC units
Tracing and repairing all types of water lines
Troubleshooting and repairing kitchen equipment
Maintaining repair and preventive maintenance records while following service recovery guidelines
Adhering to local, state, and federal codes while performing all building maintenance needs.
Supporting the operations team and completing some House Person functions in the event of staffing shortages or busy periods.
Practicing safe work habits by wearing protective safety equipment and complying with MSDS and OSHA standards
Helping to ensure overall guest satisfaction
Working a flexible schedule based on hotel occupancy or emergency repair needs
WHAT WE'RE LOOKING FOR:
A positive, upbeat attitude and a passion for building maintenance
A collaborative team member that's happy to pitch in, support coworkers, and try things differently if the situation calls for a quick pivot.
A good communicator
Top-notch organization skills and the ability to prioritize projects
The ability to safely work throughout a shift. Tasks may include walking, standing, bending, and lifting supplies up to 50lbs.
Comfort in a fast-paced environment
5+ years of experience in general repair and building maintenance
Professional skilled trade licensing in plumbing & electrical preferred, but not required
A flexible work schedule as weekend and holiday shifts may be required from time to time
Requires mobility and prolonged standing, walking, bending and lifting up to 50 lbs
Extensive knowledge of AC systems & refrigeration
WHAT'S IN IT FOR YOU:
A competitive compensation package including medical, dental, vision, and life insurance.
401(k) retirement plan (future you will love this one!)
Paid time off, holiday pay, and sick pay when you're under the weather.
Career advancement in an organization committed to helping star employees thrive.
There's also an opportunity to expand your career trajectory as we are a fast-growing company with hotels and restaurants in multiple cities.
Professional development that sets you up for success across multiple hospitality career paths.
A collaborative work environment where your creative ideas can come to fruition.
Amazing employee discounts on hotels and dining across our entire portfolio (18 hotels and more to come!)
Hands-on training with a nimble team.
Palisociety is an Equal Opportunity Employer committed to hiring a diverse workforce and sustaining an inclusive culture. Palisociety does not discriminate on the basis of disability, veteran status or any other basis protected under federal, state, or local laws.
Privacy Notice:
For information on the California Consumer Privacy Act of 2018 (“CCPA”), California Privacy Rights Act of 2020 (“CPRA”), and other California privacy laws, please go to the Palisociety Careers page at ******************* and ******************** to view the notice.
For more information, visit ******************* or follow @palisociety
For more information, visit lepetitpali.com or follow @lepetitpali
For more information, visit ******************** or follow @arrivehotels
We are an E-Verify Employer/Somos un empleador de E-Verify.
$86k-121k yearly est. 60d+ ago
Engineer
Huntremotely
Data engineer job in Palm Springs, CA
What you will be doing
Efficiently and safely operate central HVAC equipment and mechanical equipment using sound engineering practices and specified corporate operating procedures.
Accurately and efficiently repair and maintain food production and related kitchen equipment, laundry equipment, ice machines and refrigeration systems, boilers and plumbing systems and all electrical and natural gas distribution systems.
Troubleshoot electrical and pneumatic problems and repair them as quickly and economically as possible.
Perform preventive and predictive maintenance on an on-going basis.
The hourly rate for this position ranges from $17-20, depending on experience and qualifications.
$17-20 hourly 2d ago
Engineer
Remington Hotels 4.3
Data engineer job in Palm Springs, CA
What you will be doing
Efficiently and safely operate central HVAC equipment and mechanical equipment using sound engineering practices and specified corporate operating procedures.
Accurately and efficiently repair and maintain food production and related kitchen equipment, laundry equipment, ice machines and refrigeration systems, boilers and plumbing systems and all electrical and natural gas distribution systems.
Troubleshoot electrical and pneumatic problems and repair them as quickly and economically as possible.
Perform preventive and predictive maintenance on an on-going basis.
The hourly rate for this position ranges from $17-20, depending on experience and qualifications.
$17-20 hourly 2d ago
Class 2 Engineer PM
Omni Hotels & Resorts
Data engineer job in Rancho Mirage, CA
The 444-room Rancho Las Palmas Resort & Spa is classic Rancho Mirage re-imagined for the 21st-century traveler. Our luxurious Palm Springs hotel rooms surround you in Spanish Colonial-inspired style and a soothing desert palette of beige, sand, and ivory. With plenty of space for your peace and your quiet, you'll also open French doors to your very own private patio or balcony where the warm desert air and breathtaking views await.
Omni Rancho Las Palmas Resort and Spa's associates enjoy a dynamic and exciting work environment, comprehensive training and mentoring, along with the pride that comes from working for a company with a reputation for exceptional service. We embody a culture of respect, gratitude and empowerment day in and day out. If you are a friendly, motivated person, with a passion to serve others, the Omni Rancho Las Palmas may be your perfect match.
Job Description
The Engineer 2 is a valuable member of the Resort's Engineering Team dedicated to ensuring our Resort is in working order at all times. The Engineer 2 diagnoses problems, performs repairs and completes preventive maintenance tasks throughout the Resort according to Omni standards and with “intermediate” proficiency.
Responsibilities
Assist hotel guests with any guest room maintenance issues (plumbing, lighting, electrical, painting, etc.)
Perform preventative maintenance responsibilities on all guest rooms as assigned
Maintain all mechanical items in guest rooms and public areas.
Work as a team to keep the back of house areas and equipment in safe, good working order.
Clean all work areas after completing job.
Assists mechanics and external contractors in repairs of hotel property and equipment.
Completion of all assigned work orders and daily tasks.
Receive direction for house calls via radio.
Be familiar with all hotel amenities and hotel facilities
Be familiar with the inter-relationship between the different departments (including PBX, Guest Services, Housekeeping, F&B outlets, Banquets, Sales, Engineering, and Purchasing)
Follow all company safety and security policies and procedures; report accidents, injuries, and unsafe work conditions; complete safety training and certifications.
Follow all company policies and procedures; ensure clean uniform and professional personal appearance; maintain confidentiality of proprietary information; protect company assets.
Effectively operate the computer, two-way radio, and power and hand tools required to complete responsibilities.
Deliver personalized, memorable guest experiences by utilizing the Power of One
Perform other duties and special projects as assigned by Engineering Management
Qualifications
Excellent customer service and problem solving skills
EPA certificate holder preferred
1-2 years previous maintenance experience required
Prior building maintenance experience and/or relevant technical training required.
Related Hotel/Resort experience preferred
Must possess painting, basic plumbing, basic electrical, minor carpentry, lighting and computer skills.
Must be willing to work based on business levels and occupancy levels. Must be able to work AM, PM, weekend and holiday shifts.
Walk, stand, climb, bend, reach over-head, squat and kneel for long periods at a time, as the position requires constant motion. Crawling for short periods of time.
Move, lift, carry, and push items weighing up to 50-100 lbs without assistance.
Candidates must be able to utilize step stools and ladders effectively.
Pay: $22.00/hour. The pay scale provided is a range that Omni Hotels & Resorts reasonably expects to pay. Actual compensation offered may fluctuate based on a candidate's qualifications and/or experience.
Omni Hotels & Resorts is an equal opportunity/AA/Disability/Veteran employer. We will consider qualified applicants with criminal histories in a manner consistent with the CA Fair Chance initiative for hiring. The EEO is the Law poster and its supplement are available using the following links: EEOC is the Law Poster and the following link is the OFCCP's Pay Transparency Nondiscrimination policy statement
If you are interested in applying for employment with Omni Hotels & Resorts and need special assistance to apply for a posted position, please send an email to applicationassistance@omnihotels.com
The average data engineer in Indio, CA earns between $84,000 and $160,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.