We design, build, and maintain infrastructure to support agentic workflows for Siri. Our team is in charge of the data generation, introspection, and evaluation frameworks that are key to efficiently developing foundation models and agentic workflows for Siri applications. On this team you will have the opportunity to work at the intersection of cutting-edge foundation models and products.
Minimum Qualifications
Strong background in computer science: algorithms, data structures and system design
3+ years of experience in large-scale distributed system design, operation, and optimization
Experience with SQL/NoSQL database technologies, data warehouse frameworks like BigQuery/Snowflake/RedShift/Iceberg and data pipeline frameworks like GCP Dataflow/Apache Beam/Spark/Kafka
Experience processing data for ML applications at scale
Excellent interpersonal skills; able to work independently as well as cross-functionally
Preferred Qualifications
Experience fine-tuning and evaluating Large Language Models
Experience with Vector Databases
Experience deploying and serving LLMs
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses - including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
$147.4k-272.1k yearly 5d ago
Data Partnerships Lead - Equity & Growth (SF)
Exa
Data engineer job in San Francisco, CA
A cutting-edge AI search engine company in San Francisco is seeking a Data Partnerships specialist to build their data pipeline. The role involves owning the partnerships cycle, making strategic decisions, negotiating contracts, and potentially building a team. Candidates should have experience in contract negotiation and a Juris Doctor degree. This in-person role offers a competitive salary range of $160,000 - $250,000 with above-market equity.
$160k-250k yearly 1d ago
Senior Energy Data Engineer - API & Spark Pipelines
Medium 4.0
Data engineer job in San Francisco, CA
A technology finance firm in San Francisco is seeking an experienced Data Engineer. The role involves building data pipelines, integrating data across various platforms, and developing scalable web applications. The ideal candidate will have a strong background in data analysis, software development, and experience with AWS. The salary range for this position is between $160,000 and $210,000, with potential bonuses and equity.
$160k-210k yearly 4d ago
Senior Data Engineer: ML Pipelines & Signal Processing
Zendar
Data engineer job in Berkeley, CA
An innovative tech firm in Berkeley seeks a Senior Data Engineer to manage complex data engineering pipelines. You will ensure data quality, support ML engineers across locations, and establish infrastructure standards. The ideal candidate has over 5 years of experience in Data Science or MLOps, strong algorithmic skills, and proficiency in GCP, Python, and SQL. This role offers a competitive salary and the chance to impact a growing team in a dynamic field.
$110k-157k yearly est. 4d ago
Staff Data Scientist - Post Sales
Harnham
Data engineer job in San Jose, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're expanding our data science organization to accelerate customer success after the initial sale, driving onboarding, retention, expansion, and long-term revenue growth.
About the Role
As the senior data scientist supporting post-sales teams, you will use advanced analytics, experimentation, and predictive modeling to guide strategy across Customer Success, Account Management, and Renewals. Your insights will help leadership forecast expansion, reduce churn, and identify the levers that unlock sustainable net revenue retention.
Key Responsibilities
Forecast & Model Growth: Build predictive models for renewal likelihood, expansion potential, churn risk, and customer health scoring.
Optimize the Customer Journey: Analyze onboarding flows, product adoption patterns, and usage signals to improve activation, engagement, and time-to-value.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of onboarding programs, success initiatives, and pricing changes on retention and expansion.
Revenue Insights: Partner with Customer Success and Sales to identify high-value accounts, cross-sell opportunities, and early warning signs of churn.
Cross-Functional Partnership: Collaborate with Product, RevOps, Finance, and Marketing to align post-sales strategies with company growth goals.
Data Infrastructure Collaboration: Work with Analytics Engineering to define data requirements, maintain data quality, and enable self-serve dashboards for Success and Finance teams.
Executive Storytelling: Present clear, actionable recommendations to senior leadership that translate complex analysis into strategic decisions.
About You
Experience: 6+ years in data science or advanced analytics, with a focus on post-sales, customer success, or retention analytics in a B2B SaaS environment.
Technical Skills: Expert SQL and proficiency in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Deep understanding of SaaS metrics such as net revenue retention (NRR), gross churn, expansion ARR, and customer health scoring.
Analytical Rigor: Strong background in experimentation design, causal inference, and predictive modeling to inform customer-lifecycle strategy.
Communication: Exceptional ability to translate data into compelling narratives for executives and cross-functional stakeholders.
Business Impact: Demonstrated success improving onboarding efficiency, retention rates, or expansion revenue through data-driven initiatives.
$200k-250k yearly 3d ago
Staff Machine Learning Data Engineer
Backflip 3.7
Data engineer job in San Francisco, CA
Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper?
Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.”
Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team.
We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in.
If you're excited to define the next generation of hard tech, come build it with us.
The Role
We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD.
You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data.
This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution.
You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.
What You'll Do
Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation.
Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale.
Design scalable data systems for multimodal training (text, geometry, CAD, and more).
Develop and automate data collection, curation, and validation workflows.
Collaborate with MLEs to design and execute experiments that measure and improve model performance.
Build tools and metrics for dataset analysis, monitoring, and quality assurance.
Contribute to model development through insights grounded in data, shaping what, how, and when we train.
Who You Are
You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world.
You have deep experience with data engineering for ML, including distributed systems; data extraction, transformation, and loading; and large-scale data processing (e.g. PySpark, Beam, Ray, or similar).
You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.).
You've developed data augmentation, sampling, or curation strategies that improved model performance.
You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence.
You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible.
You care deeply about data quality, reproducibility, and scalability.
You're excited to help shape the future of AI for physical design.
Bonus points if:
You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines.
You have an interest in math, geometry, topology, rendering, or computational geometry.
You've worked in 3D printing, CAD, or computer graphics domains.
Why Backflip
This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world.
You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it.
Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future.
Let's build the tools the future will be made in.
$126k-178k yearly est. 4d ago
Founding ML Infra Engineer - Audio Data Platform
David Ai
Data engineer job in San Francisco, CA
A pioneering audio tech company based in San Francisco is searching for a Founding Machine Learning Infrastructure Engineer. In this role, you will build and scale the core infrastructure that powers cutting-edge audio ML products. You will lead the development of systems for training and deploying models. Candidates should have over 5 years of backend experience with strong skills in cloud infrastructure and machine learning principles. The company offers benefits like unlimited PTO and comprehensive health coverage.
$110k-157k yearly est. 4d ago
Data/Full Stack Engineer, Data Storage & Ingestion Consultant
Eon Systems PBC
Data engineer job in San Francisco, CA
About us
At Eon, we are at the forefront of large-scale neuroscientific data collection. Our mission is to enable the safe and scalable development of brain emulation technology to empower humanity over the next decade, beginning with the creation of a fully emulated digital twin of a mouse.
Role
We're a San Francisco team collecting very large microscopy datasets and we need an expert to design and implement our end-to-end data pipeline, from high-rate ingest to multi-petabyte storage and downstream processing. You'll own the strategy (on-prem vs. S3 or hybrid), the bill of materials, and the deployment, and you'll be on the floor wiring, racking, tuning, and validating performance.
Our current instruments generate data at ~1+ GB/s sustained (higher during bursts), and the program will accumulate multiple petabytes in total over time. You'll help us choose and implement the right architecture, considering reliability and cost controls.
Outcomes (what success looks like)
Within 2 weeks: Implement an immediate data-handling strategy that reliably ingests our initial data streams.
Within 2 weeks: Deliver a documented medium-term data architecture covering storage, networking, ingest, and durability.
Within 1 month: Operationalize the medium-term pipeline in production (ingest → buffer → long-term store → compute access).
Ongoing: Maintain ≥95% uptime for the end-to-end data-handling pipeline after setup.
Responsibilities
Architect ingest & storage: Choose and implement an on-prem hardware and data pipeline design or a cloud/S3 alternative with explicit cost and performance tradeoffs at multi-petabyte scale.
Set up a sustained-write ingest path ≥1 GB/s with adequate burst headroom (camera/frame-to-disk), including networking considerations, cooling, and throttling safeguards.
Optimize footprint & cost: Incorporate on-the-fly compression/downsampling options and quantify CPU budget vs. write-speed tradeoffs; document when/where to compress to control $/PB.
Integrate with acquisition workflows ensuring image data and metadata are compatible with downstream stitching/flat-field correction pipelines.
Enable downstream compute: Expose the data to segmentation/analysis stacks (local GPU nodes or cloud).
Skills
5+ years designing and deploying high-throughput storage or HPC pipelines (≥1 GB/s sustained ingest) in production.
Deep hands-on with: NVMe RAID/striping, ZFS/MDRAID/erasure coding, PCIe topology, NUMA pinning, Linux performance tuning, and NIC offload features.
Proven delivery of multi-GB/s ingest systems and petabyte-scale storage in production (life-sciences, vision, HPC, or media).
Experience building tiered storage systems (NVMe → HDD/object) and validating real-world throughput under sustained load.
Practical S3/object-storage know-how (AWS S3 and/or on-prem S3-compatible systems) with lifecycle, versioning, and cost controls.
Data integrity & reliability: snapshots, scrubs, replication, erasure coding, and backup/DR for PB-scale systems.
Networking: 25/40/100 GbE (SFP+/SFP28); RDMA/RoCE/iWARP familiarity; switch config and path tuning.
Ability to spec and rack hardware: selecting chassis/backplanes, RAID/HBA cards, NICs, and cooling strategies to prevent NVMe throttling under sustained writes.
Ideal skills:
Experience with microscopy or scientific imaging ingest at frame-to-disk speeds, including Micro-Manager-based pipelines and raw-to-containerized format conversions.
Experience with life science imaging data a plus.
Engagement details
Contract (1099 or corp-to-corp); contract-to-hire if there's a mutual fit.
On-site requirement: You must be physically present in San Francisco during build-out and initial operations; local field work (e.g., UCSF) as needed.
Compensation: Contract, $100-300/hour
Timeline: Immediate start
$110k-157k yearly est. 5d ago
Global Data ML Engineer for Multilingual Speech & AI
Cartesia
Data engineer job in San Francisco, CA
A leading technology company in San Francisco is seeking a Machine Learning Engineer to ensure the quality and coverage of data across diverse languages. You will design large-scale datasets, evaluate models, and implement quality control systems. The ideal candidate has expertise in multilingual datasets and a strong background in applied ML. This full-time role offers competitive benefits, including fully covered insurance and in-office perks, in a supportive team environment.
$110k-157k yearly est. 1d ago
ML Engineer: Fraud Detection & Big Data at Scale
Datavisor 4.5
Data engineer job in Mountain View, CA
A leading security technology firm in California is seeking a skilled Data Science Engineer. You will harness the power of unsupervised machine learning to detect fraudulent activities across various sectors. Ideal candidates have experience with Java/C++, data structures, and machine learning. The company offers competitive pay, flexible schedules, equity participation, health benefits, a collaborative environment, and unique perks such as catered lunches and game nights.
$125k-177k yearly est. 5d ago
ML Data Engineer: Systems & Retrieval for LLMs
Zyphra Technologies Inc.
Data engineer job in Palo Alto, CA
A leading AI technology company based in Palo Alto, CA is seeking a Machine Learning Data Engineer. You will build and optimize the data infrastructure for our machine learning systems while collaborating with ML engineers and infrastructure teams. The ideal candidate has a strong engineering background in Python, experience in production data pipelines, and a deep understanding of distributed systems. This role offers comprehensive benefits, a collaborative environment, and opportunities for innovative contributions.
$110k-157k yearly est. 3d ago
Foundry Data Engineer: ETL Automation & Dashboards
Data Freelance Hub 4.5
Data engineer job in San Francisco, CA
A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
$114k-160k yearly est. 3d ago
Multi-Channel Demand Gen Leader - Data SaaS
Motherduck Corporation
Data engineer job in San Francisco, CA
A growing technology firm based in San Francisco is seeking a Demand Generation Marketer to drive campaigns that turn prospects into lifelong customers. This role emphasizes creativity in marketing, collaboration with teams, and a strong data-driven mindset. The ideal candidate will have experience in B2B SaaS environments and a passion for engaging technical audiences. Flexible work environment and competitive compensation offered.
$112k-157k yearly est. 5d ago
Data Engineer
Luxoft
Data engineer job in Irvine, CA
Project description
Luxoft is looking for a Senior Data Engineer to develop a new application to be used by investors and investment committees to review their portfolio data, tailored to specific user groups.
Responsibilities
Work with complex data structures and provide innovative solutions for complex data delivery requirements
Evaluate new and alternative data sources and new integration techniques
Contribute to data models and designs for the data warehouse
Establish standards for documentation and ensure your team adheres to those standards
Influence and develop a thorough understanding of standards and best practices used by your team
Skills
Must have
Seasoned data engineer with hands-on AWS experience conducting end-to-end data analysis and data pipeline build-out using Python, Glue, S3, Airflow, DBT, Redshift, RDS, etc.
Extensive Python API design experience, preferably with FastAPI
Strong SQL knowledge
Nice to have
PySpark
Databricks
ETL design
$99k-139k yearly est. 2d ago
Data Scientist
Talent Software Services 3.6
Data engineer job in Novato, CA
Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA.
Client's Data Science team is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas of Client. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions. The Data Science team develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions across Client.
The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at the data, data structure, and documentation levels, following international standards and GCP. The DMS Center of Risk Based Quality Management (RBQM) sub-function is responsible for the implementation of a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management.
The RBQM Data Scientist supports central monitoring and risk-based quality management for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with a strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies.
Primary Responsibilities/Accountabilities:
The RBQM Data Scientist may perform a range of the following responsibilities, depending upon the study's complexity and the study's development stage:
Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies.
Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets.
Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review.
Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient.
Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes.
Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics.
Qualifications:
PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field.
Relevant clinical development experience (programming, RBM/RBQM, Data Management), for example:
PhD: 3+ years
MS: 5+ years
BA/BS: 8+ years
Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets).
Hands-on experience working with clinical trial data.
Proficiency with Microsoft Word, Excel, and PowerPoint.
Technical - Preferred / Strong Plus
Experience with RAVE EDC.
Awareness or working knowledge of CDISC, CDASH, SDTM standards.
Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms.
Preferred:
Knowledge of GCP, ICH, FDA guidance related to clinical trials and risk-based monitoring.
Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs.
Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams.
Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
$99k-138k yearly est. 4d ago
Data Platform Engineer III
Cambia Health 3.9
Data engineer job in Medford, OR
DATA PLATFORM ENGINEER III (HEALTHCARE)
Hybrid (Office 3 days/wk - Onsite-Flex) within Oregon, Washington, Idaho or Utah
Build a career with purpose. Join our Cause to create a person-focused and economically sustainable health care system.
Who We Are Looking For:
Every day, Cambia's Data/Software Engineering Team is living our mission to make health care easier and lives better. We are the Data and Analytics Solutions division within Cambia that builds and delivers data analytics products driven by value and focuses on our members' health care journey. We provide enterprise data technology services by crafting data solutions that enable Analytics and AI capabilities. Our engineers specialize in a variety of technology stacks such as Snowpark, DBT, Apache Airflow, and Streamlit, with integration with tools like Collibra, Sigma, Tableau, DBT Cloud, Alteryx, and AWS Glue over the Snowflake Cloud Platform.
The Senior Data Platform Engineer will have extensive data product development experience, specializing in database design and system testing on a cloud platform.
We are looking for a seasoned engineer who can work with Product to build our software and data products with a good technical vision - all in service of making our members' health journeys easier.
If you're a motivated and experienced Data Platform Engineer looking to make a difference in the healthcare industry, apply for this exciting opportunity today!
What You Bring to Cambia:
Qualifications and Certifications:
* Bachelor's degree in Computer Science, Engineering, or related field
* 6-8+ years of relevant experience in application and database development
* 6+ years of launching, maintaining and testing data products and 3+ years of experience in data modeling, design and architecture
* 6+ years of demonstrated proficiency writing complex, efficient SQL scripts, including complex joins, aggregations, and use of analytics/windowing functions.
* 4+ years of experience in cloud platforms such as Snowflake and AWS
* Equivalent combination of education and experience
Skills and Attributes (Not limited to):
* Experience in building and maintaining batch data pipelines using technologies like Airflow, Spark, EMR, S3, etc
* Strong dedication to code quality, automation and operational excellence including CI/CD pipelines, unit/integration tests.
* Value SQL as a flexible and extensible tool and are comfortable with modern SQL data orchestration tools like DBT, Mode, or Airflow.
* Experience working with different performant warehouses and data lakes like Snowflake or equivalent.
* Maintain data privacy and integrity, and always act in the best interest of consumers
* Experience integrating data from multiple sources into one or more targets.
* After initial training, able to maintain awareness, monitor, and manage direct computing costs, such as Snowflake credits.
* Intermediate knowledge around object oriented languages like Java would be desirable but not required.
* Proficient with defensive programming.
* Adhere to Cambia Coding Standards and guidelines and contribute to improving our technology and coding standards.
* Able to adapt to changing technologies and methodologies and apply them to technological and/or business needs of limited scope
What You Will Do at Cambia (Not limited to):
Note that these responsibilities are representative but not exhaustive. Higher-level roles involve successively stronger degrees of initiative taking and innovation beyond the core responsibilities listed here.
* Design, Build and Maintain scalable data pipelines using ETL and ELT on cloud platforms
* Develop Data Integration Solutions to connect various data sources to a single unified data source while ensuring data architecture that supports a single source of truth
* Develop efficient, effective, and maintainable program and system solutions to solve complex business problems
* Develop automated workflows for data ingestion, transformation, and applications integration
* Write efficient code in languages such as SQL and Python
* Responsible for supporting our Product and Business partners by researching, identifying and resolving highly technical programming problems
* Meets established deadlines while maintaining a high level of quality of work
* Determines program design and prepares work estimates for development or changes for assigned work
* Performs testing and documents the results
* Expected to be proficient in using version control software like GitHub, GitLab
* Expected deliverables include but are not limited to requirement analysis, system analysis, system design, data models, program design, source code development, test case development, testing, and documentation
* Adheres to policies, procedures, and standards in place within IT/Engineering as well as all corporate policies, procedures and standards established by Cambia. Those include, but are not limited to, technical and architecture standards, production implementation standards, regular status reporting, regular participation in team, regular one on one meetings with Lead or Manager, and providing work estimates and regular time tracking
* May be responsible for on-call duties as defined by management.
The expected hiring range for the Data Platform Engineer III is $120k-$145k, depending on skills, experience, education, and training; relevant licensure/certifications; performance history; and work location. The bonus target for this position is 15%. The current full salary range for this position is $104k (low) to $169k (MRP).
About Cambia
Working at Cambia means being part of a purpose-driven, award-winning culture built on trust and innovation anchored in our 100+ year history. Our caring and supportive colleagues are some of the best and brightest in the industry, innovating together toward sustainable, person-focused health care. Whether we're helping members, lending a hand to a colleague or volunteering in our communities, our compassion, empathy and team spirit always shine through.
Why Join the Cambia Team?
At Cambia, you can:
* Work alongside diverse teams building cutting-edge solutions to transform health care.
* Earn a competitive salary and enjoy generous benefits while doing work that changes lives.
* Grow your career with a company committed to helping you succeed.
* Give back to your community by participating in Cambia-supported outreach programs.
* Connect with colleagues who share similar interests and backgrounds through our employee resource groups.
We believe a career at Cambia is more than just a paycheck - and your compensation should be too. Our compensation package includes competitive base pay as well as a market-leading 401(k) with a significant company match, bonus opportunities and more.
In exchange for helping members live healthy lives, we offer benefits that empower you to do the same. Just a few highlights include:
* Medical, dental and vision coverage for employees and their eligible family members, including mental health benefits.
* Annual employer contribution to a health savings account.
* Generous paid time off varying by role and tenure in addition to 10 company-paid holidays.
* Market-leading retirement plan including a company match on employee 401(k) contributions, with a potential discretionary contribution based on company performance (no vesting period).
* Up to 12 weeks of paid parental time off (eligibility requires 12 months of continuous service with Cambia immediately preceding leave).
* Award-winning wellness programs that reward you for participation.
* Employee Assistance Fund for those in need.
* Commute and parking benefits.
Learn more about our benefits.
We are happy to offer work from home options for most of our roles. To take advantage of this flexible option, we require employees to have a wired internet connection that is not satellite or cellular, with a minimum upload speed of 5 Mbps and a minimum download speed of 10 Mbps.
We are an Equal Opportunity employer dedicated to a drug and tobacco-free workplace. All qualified applicants will receive consideration for employment without regard to race, color, national origin, religion, age, sex, sexual orientation, gender identity, disability, protected veteran status or any other status protected by law. A background check is required.
If you need accommodation for any part of the application process because of a medical condition or disability, please email ******************************. Information about how Cambia Health Solutions collects, uses, and discloses information is available in our Privacy Policy.
$120k-145k yearly 20d ago
Staff Engineer - Cloud Software
BD Systems 4.5
Data engineer job in Ashland, OR
Summary
We are seeking a highly skilled and experienced Staff Engineer - Cloud Software to design, develop, and maintain robust, scalable, and secure cloud-native applications and infrastructure. This role will be pivotal in shaping our cloud strategy and ensuring the high availability and performance of our critical life sciences platforms.
FlowJo is the world's leading flow cytometry analysis software, used by researchers and scientists globally to analyze complex biological data. FlowJo serves over 100,000 users worldwide in academic institutions, pharmaceutical companies, and research institutions. The software plays a critical role in advancing medical research, including cancer immunology, vaccine development, and cellular biology studies. We are transitioning FlowJo from desktop software to a cloud deployment model.
Job Description
We are the makers of possible
BD is one of the largest global medical technology companies in the world. Advancing the world of health™ is our Purpose, and it's no small feat. It takes the imagination and passion of all of us, from design and engineering to the manufacturing and marketing of our billions of MedTech products per year, to look at the impossible and find transformative solutions that turn dreams into possibilities.
We believe that the human element, across our global teams, is what allows us to continually evolve. Join us and discover an environment in which you'll be supported to learn, grow and become your best self. Become a maker of possible with us.
Job Responsibilities
Guide the architecture, design, and implementation of complex cloud-native software solutions using AWS services.
Develop high-quality, maintainable, and well-documented code in languages such as TypeScript, Rust, and C++.
Drive best practices in software development, including code reviews, automated testing, continuous integration, and continuous deployment (CI/CD).
Collaborate closely with product managers, data scientists, and other engineering teams to translate business requirements into technical solutions.
Champion security best practices and ensure compliance with relevant industry regulations (e.g., HIPAA, GxP, GDPR) within cloud environments.
Mentor junior engineers, provide technical guidance, and foster a culture of innovation and excellence within the team.
Proactively identify and resolve technical challenges, performance bottlenecks, and scalability issues within distributed systems.
Evaluate and recommend new cloud technologies and tools to improve efficiency, performance, and cost-effectiveness.
Required Skills/Experience:
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related STEM field.
8+ years of professional experience in software development, with at least 4 years focused on cloud-native application development and architecture.
Cloud Expertise: AWS (compute, storage, networking, databases), Azure or GCP.
Hands-on Experience with AWS Services:
ALB/NLB (Application/Network Load Balancers)
TLS (Transport Layer Security)
NATS/KEDA (messaging and event-driven autoscaling for distributed systems)
Auto-scaling
Direct Connect (DX)
FinOps (Cost optimization with metrics visibility)
Cloud Security
Versioned Object Storage & Lifecycle Policies
Caching Layers
FSx for Lustre (High-performance file system)
Strong programming skills in TypeScript/Node.js and C++.
Extensive experience with containerization technologies (e.g., Docker, Kubernetes) and serverless architectures.
Proven track record of designing and implementing scalable, fault-tolerant, and secure distributed systems.
Deep understanding of microservices architecture and event-driven systems.
Experience with Infrastructure as Code (IaC) tools such as Terraform.
Best Practices - CI/CD Tools: Jenkins, GitHub Actions, or similar.
Soft Skills: Communication, problem-solving, mentoring.
At BD, we prioritize on-site collaboration because we believe it fosters creativity, innovation, and effective problem-solving, which are essential in the fast-paced healthcare industry. For most roles, we require a minimum of 4 days of in-office presence per week to maintain our culture of excellence and ensure smooth operations, while also recognizing the importance of flexibility and work-life balance. Remote or field-based positions will have different workplace arrangements which will be indicated in the job posting.
For certain roles at BD, employment is contingent upon the Company's receipt of sufficient proof that you are fully vaccinated against COVID-19. In some locations, testing for COVID-19 may be available and/or required. Consistent with BD's Workplace Accommodations Policy, requests for accommodation will be considered pursuant to applicable law.
Why Join Us?
A career at BD means being part of a team that values your opinions and contributions and that encourages you to bring your authentic self to work. It's also a place where we help each other be great, we do what's right, we hold each other accountable, and learn and improve every day.
To find purpose in the possibilities, we need people who can see the bigger picture, who understand the human story that underpins everything we do. We welcome people with the imagination and drive to help us reinvent the future of health. At BD, you'll discover a culture in which you can learn, grow, and thrive. And find satisfaction in doing your part to make the world a better place.
To learn more about BD visit **********************
Becton, Dickinson, and Company is an Equal Opportunity Employer. We evaluate applicants without regard to race, color, religion, age, sex, creed, national origin, ancestry, citizenship status, marital or domestic or civil union status, familial status, affectional or sexual orientation, gender identity or expression, genetics, disability, military eligibility or veteran status, and other legally-protected characteristics.
Primary Work Location: USA OR Ashland - FlowJo
Additional Locations: None
Work Shift: NA (United States of America)
At BD, we are strongly committed to investing in our associates-their well-being and development, and in providing rewards and recognition opportunities that promote a performance-based culture. We demonstrate this commitment by offering a valuable, competitive package of compensation and benefits programs which you can learn more about on our Careers Site under Our Commitment to You.
Salary or hourly rate ranges have been implemented to reward associates fairly and competitively, as well as to support recognition of associates' progress, ranging from entry level to experts in their field, and talent mobility. There are many factors, such as location, that contribute to the range displayed. The salary or hourly rate offered to a successful candidate is based on experience, education, skills, and any step rate pay system of the actual work location, as applicable to the role or position. Salary or hourly pay ranges may vary for Field-based and Remote roles.
Salary Range Information
$113,400.00 - $186,900.00 USD Annual
$113.4k-186.9k yearly 2d ago
OSP Engineer
Pearce Services 4.7
Data engineer job in Medford, OR
Job Description
At PEARCE, we've got a career for you!
Pearce is a leading technology-enabled provider of asset management solutions for mission-critical electromechanical infrastructure throughout North America. Pearce provides technical maintenance, repair, operations, and engineering services for uninterruptible power supply (UPS) systems, backup power generators, battery energy storage systems (BESS), critical cooling systems, and other electrical and mechanical infrastructure across end markets such as renewable energy, telecom, and data centers. Founded in 1998, Pearce has more than 4,000 employees and 28 locations across the U.S. Pearce is a wholly owned subsidiary of CBRE Group, Inc., the world's largest commercial real estate services and investment firm. To learn more about Pearce visit *******************************
Your Impact:
Pearce is seeking a detail-oriented and experienced OSP Engineer to support telecommunications infrastructure projects. This role focuses on the design, planning, and implementation of outside plant (OSP) projects involving copper and fiber networks. The ideal candidate will have a strong background in engineering, permitting, and construction coordination, along with the ability to manage budgets and ensure compliance with company and industry standards.
Core Responsibilities:
Engineering & Design
Design and plan outside plant facilities for copper and fiber infrastructure (aerial, buried, conduit).
Prepare and review engineering drawings, loop loss calculations, and switch serving area designs.
Conduct field measurements for poles, conduits, and buried facilities to support network design.
Develop project budgets, monitor actual costs, and conduct job inspections for quality assurance.
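The loop loss calculations above typically mean summing per-kilometer attenuation with splice and connector losses against an allowed budget. The sketch below illustrates the idea; the attenuation, splice, and connector figures are illustrative assumptions (typical single-mode values), not Ziply or Pearce specifications.

```python
# Hypothetical fiber loop loss-budget estimate. All loss figures are
# assumed typical values for illustration, not employer specifications.

def loop_loss_db(length_km, splices, connectors,
                 atten_db_per_km=0.35,   # assumed single-mode attenuation
                 splice_loss_db=0.1,     # assumed per fusion splice
                 connector_loss_db=0.5): # assumed per mated connector
    """Estimate end-to-end optical loss for a fiber span in dB."""
    return (length_km * atten_db_per_km
            + splices * splice_loss_db
            + connectors * connector_loss_db)

# Example: a 12 km span with 4 splices and 2 connectors.
loss = loop_loss_db(12, 4, 2)
print(round(loss, 2))  # 12*0.35 + 4*0.1 + 2*0.5 = 5.6
```

A design would then compare this estimate against the link's power budget (transmit power minus receiver sensitivity, less a safety margin) before approving the route.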
Permitting & Compliance
Prepare, organize, and submit permit packages to relevant agencies (Township, County, State, DNR, Railroad, Pipeline, etc.).
Ensure designs meet permitting regulations and engineering standards.
Field Coordination
Meet with municipal and government officials to coordinate around road projects (travel required; mileage reimbursed).
Collaborate with other utilities to coordinate relocations, minimize conflicts, and avoid project delays.
Support service orders, subdivision expansions, road relocations, dark fiber MDUs, and more.
Update FROGS (CAD), Varasset, and SiteTracker tools as required.
Project & Construction Support
Coordinate with SSP contractors to ensure timely and budget-conscious project completion.
Assist with make-ready work and joint-use coordination for Ziply-owned poles.
Conduct site surveys, support PUC complaint investigations, and create capital projects (e.g., for cell towers or plant repairs).
Address and resolve plant damage requests (DOR/PDR), cable count changes, and other network service needs.
Administrative Support
Maintain records, update engineering files, and process departmental documentation.
Draft and type correspondence, reports, and engineering support documents.
Core Experience:
3-5+ years of hands-on experience in copper and fiber optic network engineering.
Minimum 5 years in network engineering or OSP-related work, including permitting and fiber record management.
Strong project management and organizational skills with the ability to handle multiple projects simultaneously.
Proficiency in Ziply engineering tools and systems, including FROGS, Varasset, and SiteTracker.
Skilled in communication and collaboration with municipal officials, utility companies, contractors, and cross-functional teams.
In-depth knowledge of permitting procedures and OSP construction practices.
Willingness to travel locally for fieldwork and meetings (mileage reimbursed).
Preferred candidates are located in the Pacific Standard Time (PST) zone. Highly qualified candidates in the Mountain Standard Time (MST) zone may also be considered.
At Pearce, we are committed to fair and transparent pay practices. Actual compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, and location.
In addition to wages, employees may also be eligible for performance and referral bonuses, production incentives, tool/equipment and fuel stipends, company vehicle, per diem or other applicable compensation. We also offer all full-time employees a comprehensive benefits package including health and life insurance, 401k with employer match, paid time off, tuition reimbursement, and professional development courses.
This pay range reflects our commitment to pay equity and compliance with state and federal pay transparency laws. If you have questions about compensation, we encourage open discussions during the hiring process.
Base Pay Range: $36-$38 USD
What We Offer
Pearce offers a family-friendly and innovative culture with opportunities for growth, competitive compensation, comprehensive health benefits including medical, dental and vision insurance, flexible spending accounts, HSA option. To help you recharge, we have paid vacation and paid holidays. For your future, we offer a company-matching 401(k) Retirement, Life Insurance, Tuition reimbursement, and professional development training. To help you be successful at work, as required for the role, we will provide a company vehicle, phone, laptop, or tablet along with all necessary tools and safety equipment.
We are an equal opportunity employer. All aspects of employment including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate based on race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.
Learn more about us at ************************
$36-38 hourly 8d ago
2nd Assistant Engineer
Transocean 3.9
Data engineer job in Talent, OR
Offshore Maintenance Group
Transocean is focused on being the employer of choice for the drilling industry.
We are challenging ourselves every day to push the performance of the company through technological advances and passion for our work.
Job Summary
- Supervise the maintenance and repair of mechanical, pneumatic and hydraulic equipment onboard the vessel
- Exercise effective control and efficient operation of the technical and mechanical aspects of the rig with due regard to personnel safety, the rig, and protection of the environment
Job Description
REPORTING:
Reports to 1st Assistant Engineer
SUPERVISION:
Supervises 3rd Assistant Engineer and Motor Operator
PRE-REQUISITE/QUALIFICATION:
High School Diploma (U.S.)/ Proof of Completion of Formal Education or Relevant Craft Certificate (where applicable)
Successful Military Discharge - Preferred for U.S.
Graduate of Trade School or University where knowledge and training acquired are applicable to job requirements - Preferred
Valid medical examination and vaccination certificates
Knowledge of all technical calculations required for the safe operation of the drilling unit
Basic computer skills
Valid 2nd Engineer Unlimited License (Class 2 Motor above 3000kW)
Norway Specific: Must hold qualifications in accordance with relevant authority requirements and the requirements of the Petroleum Safety Authority Norway (PSAN ref. Activities Regulations).
GENERAL REQUIREMENTS:
Promotes a Safe and Respectful Work Environment: Proactively promotes and maintains a healthy respect for open and transparent dialogue between all levels of personnel.
Demonstrates care for the safety and well-being of all personnel, at all times. Invests in each person to provide equal opportunity for development and advancement of those qualified, to fulfill their career objectives. Respects the dignity of all personnel and recognizes their merit.
Attend and participate in all required Safety and Operational meetings.
Participate fully in the annual performance appraisal process and in competency assessments based on performance against Operational Discipline requirements.
Complete company training requirements for the 2nd Assistant Engineer job level as per the training matrix, including local/regulatory requirements.
If applicable, the duties and responsibilities in the safety case for this position must be observed.
Participate fully in emergency drills and respond to emergency situations as per assigned duties on the station bill.
Norway Specific: Responsible to follow up on the working environment within the rig department.
OPERATIONAL DISCIPLINE REQUIREMENTS:
Disciplined application and participation in the Task Planning & Risk Assessment Process.
This is inclusive of:
Understanding and performing the roles and responsibilities as assigned at the pre-job meeting.
Understanding the hazards associated with the task.
Understanding and effectively implementing the control measures required to mitigate identified hazards.
Execute all tasks and task steps in a disciplined manner, following the sequence detailed in the relevant procedures.
Recognize at-risk behavior or conditions during tasks and call a 'Time Out for Safety' when unsure, or when the job does not go as planned.
Participate in After Action Reviews to feedback lessons learned from performing the task.
HSE REQUIREMENTS:
Demonstrate commitment and leadership in the protection of personnel, the environment, and assets.
Coach personnel in the understanding of applicable Company OI & HSE requirements.
Ensure work is executed in accordance with Company OI & HSE requirements.
Call a Time Out when a real or perceived at risk behavior or unsafe condition is observed.
Coach personnel to use WorkSight ENGAGE during the execution of planned work.
Use WorkSight COACH during the execution of planned work and daily operations to reinforce Company requirements.
Report all incidents, potential hazards, or abnormal situations. Ensure reported situations are acknowledged properly.
Ensure the Control of Work process is complied with when planning and managing the execution of work.
Ensure major accident barriers / controls are maintained.
Ensure safe and organized work areas are maintained.
Understand your role in an emergency, as designated in the Emergency Response Plan and Station Bill.
Lead or participate in all emergency drills and exercises, as required.
Ensure change is properly controlled and managed.
DUTIES:
The underlying expectation is that all duties are carried out in compliance with the Company Operational Discipline expectations and HSE requirements. When considering assignment of duties for the 2nd Assistant Engineer, the Supervisor and the 2nd Assistant Engineer will apply the rules of task planning (CAKES) to determine if the assigned role is appropriate. This means confirming that the individuals involved have the knowledge, skills, and experience to carry out the task, and the task plan complies with policy before they are authorized to proceed.
GENERAL DUTIES:
Operations/Maintenance:
Coordinate and ensure that all mechanical, pneumatic and hydraulic equipment maintenance and repair is done in a safe and prudent manner.
Ensure conformance to policies, local and international regulations relating to the operation of the rig, and current pollution regulations.
Monitor the power plants' controls and power distribution; ensure that power is available at all times.
Ensure that the permit to work and isolation system is in place and followed.
Participate in the effective management of the AIM (Asset & Inventory Management) system and ensure all records are maintained on a timely basis.
Operate, maintain and repair as necessary the engine cooling water system, lube oil system, and fuel system.
Supervise the maintenance and repair on all pumps and valves of the ballast system, thrusters, associated driven pumps and auxiliary equipment.
Liaise with the marine department regarding the loading, ordering and use of fuel, potable water, and drill water in consideration of the rig's stability.
Carry out equipment periodic maintenance according to AIM guidelines and coordinate same with First Engineer.
Ensure that reports for repair and maintenance of equipment are accurate and complete.
Carry out classification society surveys as part of continuous survey of machinery.
Assist the First Engineer in ensuring that all third-party equipment is fit for purpose, certified, correctly installed, and maintained while on the rig.
In conjunction with the First Engineer, monitor and control the use and distribution of consumables/materials; generate requisitions as required.
Maintain an adequate supply of spares to fulfill maintenance requirements and facilitate a safe and efficient operation.
Inform the First Engineer of any technical problems or limitations that may affect the safe operation of the rig.
Implement outstanding recommendations from audits, as issued by Clients, Regulatory Authorities or rig management.
Assist the First Engineer in supplying information for maintenance and repair budget.
Assist in communicating equipment problems or breakdown with Field Support group and equipment vendors.
Personnel:
Promote and maintain good working relationships with other departments, third-party personnel and Customer representatives.
Ensure all Maintenance personnel meet the training requirements as per their applicable training matrix.
Mentor, coach, develop and train crew members to ensure they are competent to work at their next job level.
Provide leadership and motivation to the maintenance crews. Apply continuous improvement process.
Mentor the 3rd Assistant Engineer and deputize when operations, security of well, and personnel safety are not at risk.
Assist in training to ensure that all personnel are competent to perform their allocated jobs.
Participate fully in the annual performance appraisal process.
Apply appropriate accountability regarding recognition; recommend promotion or disciplinary action up to and including discharge.
If you want to push yourself to great achievement, let Transocean develop your career.
$87k-118k yearly est. 60d+ ago
Machine Learning Data Engineer - Systems & Retrieval
Zyphra Technologies Inc.
Data engineer job in Palo Alto, CA
Zyphra is an artificial intelligence company based in Palo Alto, California.
The Role:
As a Machine Learning Data Engineer - Systems & Retrieval, you will build and optimize the data infrastructure that fuels our machine learning systems. This includes designing high-performance pipelines for collecting, transforming, indexing, and serving massive, heterogeneous datasets, from raw web-scale data to enterprise document corpora. You'll play a central role in architecting retrieval systems for LLMs and enabling scalable training and inference with clean, accessible, and secure data. You'll have an impact across both research and product teams by shaping the data foundation on which intelligent systems are trained, and over which they retrieve and reason.
You'll work across:
Design and implementation of distributed data ingestion and transformation pipelines
Building retrieval and indexing systems that support RAG and other LLM-based methods
Mining and organizing large unstructured datasets, both in research and production environments
Collaborating with ML engineers, systems engineers, and DevOps to scale pipelines and observability
Ensuring compliance and access control in data handling, with security and auditability in mind
Requirements:
Strong software engineering background with fluency in Python
Experience designing, building, and maintaining data pipelines in production environments
Deep understanding of data structures, storage formats, and distributed data systems
Familiarity with indexing and retrieval techniques for large-scale document corpora
Understanding of database systems (SQL and NoSQL), their internals, and performance characteristics
Strong attention to security, access controls, and compliance best practices (e.g., GDPR, SOC2)
Excellent debugging, observability, and logging practices to support reliability at scale
Strong communication skills and experience collaborating across ML, infra, and product teams
Bonus Skill Set:
Experience building or maintaining LLM-integrated retrieval systems (e.g., RAG pipelines)
Academic or industry background in data mining, search, recommendation systems, or IR literature
Experience with large-scale ETL systems and tools like Apache Beam, Spark, or similar
Familiarity with vector databases (e.g., FAISS, Weaviate, Pinecone) and embedding-based retrieval
Understanding of data validation and quality assurance in machine learning workflows
Experience working on cross-functional infra and MLOps teams
Knowledge of how data infrastructure supports training pipelines, inference serving, and feedback loops
Comfort working across raw, unstructured data, structured databases, and model-ready formats
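The embedding-based retrieval mentioned above reduces, at its core, to nearest-neighbor search over vectors. A minimal sketch, with toy hand-written vectors standing in for real model embeddings and a plain dictionary standing in for a vector database like FAISS or Pinecone:

```python
# Minimal sketch of embedding-based retrieval, the core step in a RAG
# pipeline. The vectors below are toy placeholders; in practice they come
# from an embedding model and live in a vector index.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, corpus, k=2):
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

corpus = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.0],
    "doc_c": [0.8, 0.2, 0.1],
}
print(top_k([1.0, 0.0, 0.0], corpus))  # ['doc_a', 'doc_c']
```

Production systems replace the brute-force scan with an approximate nearest-neighbor index so retrieval stays fast over millions of documents, which is exactly where the vector-database experience listed above comes in.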
Why Work at Zyphra:
Our research methodology is to make grounded, methodical steps toward ambitious goals. Both deep research and engineering excellence are equally valued
We strongly value new and crazy ideas and are very willing to bet big on new ideas
We move as quickly as we can; we aim to keep the bar to impact as low as possible
We all enjoy what we do and love discussing AI
Benefits and Perks:
Comprehensive medical, dental, vision, and FSA plans
Competitive compensation and 401(k)
Relocation and immigration support on a case-by-case basis
On-site meals prepared by a dedicated culinary team; Thursday Happy Hours
In-person team in Palo Alto, CA, with a collaborative, high-energy environment
If you're excited by the challenge of high-scale, high-performance data engineering in the context of cutting-edge AI, you'll thrive in this role. Apply Today!
How much does a data engineer earn in Medford, OR?
The average data engineer in Medford, OR earns between $74,000 and $142,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Medford, OR
$102,000
What are the biggest employers of Data Engineers in Medford, OR?
The biggest employers of Data Engineers in Medford, OR are: