Research Data Analyst 3 Oakland, CA, Job ID 81754
Data analyst job in Oakland, CA
Gathers, analyzes, and interprets a wide variety of programmatic data. Designs and conducts data analysis. Prepares reports, charts, tables, and other visual aids to interpret and communicate data and results. Manages operations of several information systems and provides technical assistance to academic and administrative users statewide.
This position is a career appointment fixed at 100% time.
The home department is the Program Planning & Evaluation Department. While this position is normally based in Oakland, CA, it is currently eligible for hybrid flexible work arrangements for applicants living in the State of California. Please note that hybrid flexible work arrangements are subject to change by the University.
Pay Scale: $88,900.00/year to $126,400.00/year
Job Posting Close Date: This job is open until filled. The first application review date will be 10/24/2025.
Key Responsibilities:
35%
Gathers, analyzes, prepares, and summarizes information and data; recommends approaches, trends, sources, and uses. Writes complex queries to gather programmatic data from multiple systems, using SQL in Microsoft SQL Server Management Studio and Microsoft Access, for annual reporting and ad hoc requests. Analyzes and prepares data to inform administrative and program decision making. Provides leadership in compiling data for the Program Planning and Evaluation team and contributes to the analysis to meet annual federal program reporting requirements, strategic communications, and advocacy efforts.
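As a purely illustrative sketch of the kind of cross-system reporting query this duty describes (table names and data are invented, and Python's built-in sqlite3 stands in for SQL Server):

```python
import sqlite3

# Hypothetical schema: program activities joined to office records,
# the kind of cross-table rollup used for annual reporting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE offices (office_id INTEGER PRIMARY KEY, county TEXT);
    CREATE TABLE activities (activity_id INTEGER PRIMARY KEY,
                             office_id INTEGER, participants INTEGER);
    INSERT INTO offices VALUES (1, 'Alameda'), (2, 'Fresno');
    INSERT INTO activities VALUES (10, 1, 40), (11, 1, 25), (12, 2, 60);
""")

# Summarize participants per county for a report table.
rows = conn.execute("""
    SELECT o.county, SUM(a.participants) AS total
    FROM activities a
    JOIN offices o ON o.office_id = a.office_id
    GROUP BY o.county
    ORDER BY o.county
""").fetchall()
print(rows)  # [('Alameda', 65), ('Fresno', 60)]
```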
20%
Manages day-to-day operations for several information systems, including the following federal systems: the National Institute of Food and Agriculture's (NIFA) Reporting System; the Research, Extension, and Education Project Online Reporting Tool; and the National Information Management and Support System, with over 500 UC ANR academic users. Provides training and technical assistance to users and to campus partners. Is a core member of the multicampus Program Reporting Group with administrative staff from the Deans' Offices. Liaises with Deans' Offices across multiple campuses and with USDA NIFA on policy and technical questions. Is a core member of the Product Owner Group for Project Board, the UC Cooperative Extension information system developed in-house, working closely with the IT team responsible for programming the system. Conducts testing when enhancements are deployed. Provides technical assistance to users statewide. Serves as a core team member, working with IT, to replace the Bibliography Project publications data collection system to improve administrative efficiency.
20%
Develops systems for organizing data to analyze, identify, and report trends. Structures and categorizes raw data to make it easily accessible, usable, and analyzable. Completes tasks such as data cleaning, standardization, classification, and potentially data warehousing. For example, retrieves, analyzes, and compiles data from UCPath, the UC systemwide personnel system, for all UC ANR academics across multiple campuses and locations, to provide a quarterly report to leadership that illustrates trends and informs academic staffing decisions. Manages an annual statewide survey of community educators to collect data on programmatic scope. Organizes and updates this academic and programmatic staff data in large, complex tables.
15%
Prepares data for presentation in clear and compelling ways for a variety of audiences including administrators across the UC system and academics across disciplines, from senior leadership to researchers in the field, as well as UC ANR clientele/members of the public. For example, manages the programmatic and non-sensitive personnel data tasks for presentation in the online ArcGIS UC ANR Programmatic Footprint Maps. Tests maps to ensure the data is represented properly. Acts as a member of the project team that works on continued process improvements and map data enhancements.
5%
Analyzes the interrelationships of data and defines logical aspects of data sets. Contributes to researching, assessing, and selecting new data reporting products.
5%
Implements related business processes. Adapts processes to retrieve and analyze programmatic data when the organization makes related changes. Manages proposal submission and provides process oversight for academic position planning and funding opportunities.
Requirements:
Bachelor's degree in a related area and/or equivalent experience/training.
Thorough skills in analysis and consultation.
Skills to communicate complex information in a clear and concise manner both verbally and in writing.
Skills in project management.
Skills at a level to evaluate alternate solutions and develop recommendations.
Ability to create and edit tables in Structured Query Language (SQL).
Ability to work independently and with a team.
Interpersonal and verbal skills to effectively communicate with diplomacy and to interact with a wide range of academics and staff.
Special Conditions of Employment:
Must possess a valid California Driver's License to drive a County or University vehicle. Ability and means to travel on a flexible schedule as needed; proof of liability damage insurance on the vehicle used is required. Job-related travel will be reimbursed according to University policies.
The University reserves the right to make employment contingent upon successful completion of the background check. This is a designated position requiring a background check and may require fingerprinting due to the nature of the job responsibilities. UC ANR does hire people with conviction histories and reviews information received in the context of the job responsibilities.
As of January 1, 2014, ANR is a smoke- and tobacco-free environment in which smoking, the use of smokeless tobacco products, and the use of unregulated nicotine products (e-cigarettes) are strictly prohibited.
As a condition of employment, you will be required to comply with the University of California Policy on Vaccination Programs, as may be amended or revised from time to time. Federal, state, or local public health directives may impose additional requirements.
Exercises the utmost discretion in managing sensitive information learned in the course of performing their duties. Sensitive information includes, but is not limited to, employee and student records, health and patient records, financial data, strategic plans, proprietary information, and any other sensitive or non-public information learned during the course and scope of employment. Understands that sensitive information should be shared on a limited basis and actively takes steps to limit access to individuals who have a legitimate business need to know. Ensures that sensitive information is properly safeguarded. Follows all organizational policies and laws on data protection and privacy, including secure handling of physical and digital records and proper use of IT systems to prevent data leaks. The unauthorized or improper disclosure of confidential work-related information obtained from any source on any work-related matter is a violation of these expectations.
Misconduct Disclosure Requirement: As a condition of employment, the final candidate who accepts a conditional offer of employment will be required to disclose if they have been subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct; received notice of any allegations or are currently the subject of any administrative or disciplinary proceedings involving misconduct; have left a position after receiving notice of allegations or while under investigation in an administrative or disciplinary proceeding involving misconduct; or have filed an appeal of a finding of misconduct with a previous employer.
a. "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment, discrimination, dishonesty, or unethical conduct, as defined by the employer. For reference, below are UC's policies addressing some forms of misconduct:
UC Sexual Violence and Sexual Harassment Policy
UC Anti-Discrimination Policy
Abusive Conduct in the Workplace
To apply, please visit: https://careerspub.universityofcalifornia.edu/psc/ucanr/EMPLOYEE/HRMS/c/HRS_HRAM_FL.HRS_CG_SEARCH_FL.GBL?Page=HRS_APP_JBPST_FL&JobOpeningId=81754&PostingSeq=1&SiteId=17&languageCd=ENG&FOCUS=Applicant
Air Quality Data Analyst
Data analyst job in San Francisco, CA
Please review the sections below, especially the "How to Apply" section, to complete your application and be considered for this position!
Title: Air Quality Data Analyst
Salary: $36/hour to $45/hour, commensurate with experience
Job Type: Part-time, Temporary
Benefits: Sick leave accrual
Duration of Appointment: Est. 2 months from start date
Location/Schedule: Hybrid, with potential for remote. For remote consideration, the individual must be California-based and able to be on site for their first day. Estimated 10 hours/week.
About the Position: This is a part-time, temporary role (10 hours/week) that will assist with our air quality data work in the areas identified below.
Air Quality Data
Work with the Program Manager to help bridge the technical components of our air quality data work and translate that information for the general public and community members, including into written materials, presentations, and reports
Evolve Brightline's air quality program to the next generation and help prepare us for the next level of grantmaking.
Review and assess our current air quality data and network to identify opportunities for expansion or new directions, as well as any gaps, and communicate those findings to the team. This includes researching other air quality data programs and materials and providing a written report with recommendations.
Collaborate with Brightline staff, partners and volunteers who are working on analyzing our air quality data.
Participate in meetings related to our air quality work, including with vendors and other key partners.
Draft, review, and analyze air quality data and other documents; conduct inquiries, and compile and research information.
Additional program support as needed; could include supporting site visits, off-site community meetings, air quality sensor network maintenance, etc.
General Support:
Provide grant support, including progress report deliverables, locating documentation, tracking deadlines, following up with partners, etc.
Assist Brightline team members with other projects as needed
Required Qualifications, Skills, and Abilities:
3-5 years of related experience, including some direct work with environmental mapping
Experience with utilizing ArcGIS
Experience with R, modeling or other coding languages
Familiarity utilizing Google Suite, Canva (or other design software)
Strong interpersonal, written and oral communication skills
Ability to work with data and identify trends and areas where further data points or analysis are needed
Ability to translate and bridge the technical components of the data analysis and findings to those not in the field, including individuals in the community
Ability to collaborate with multiple stakeholders and take and incorporate feedback/input
Outstanding relationship-building skills, as well as ability to adapt communication style
Strong organizational skills, adaptability, and attention to detail
Desire to learn more about Brightline's work
Passion for working in environmental justice
Desire to work with diverse communities and neighborhoods in San Francisco
Preferred Skills & Qualifications
Master's or PhD in a related field
Experience with Tableau or other data visualization software
Experience with website coding
Spanish, Cantonese, or Arabic language skills
Experience working with community-based organizations and/or low-income communities
How to Apply
Please email a short cover letter, resume, and three references (preferably direct supervisors; include an e-mail address and phone number for each) to ************************** with the subject line “Air Quality Data Analyst Application - [Your Name].” Applications will be reviewed on a rolling basis until the position is filled, with the first round of interviews occurring the week of December 1, 2025.
Staff Data Scientist
Data analyst job in San Francisco, CA
Staff Data Scientist | San Francisco | $250K-$300K + Equity
We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI.
In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.
What you'll do:
Drive deep-dive analyses on user behavior, product performance, and growth drivers
Design and interpret A/B tests to measure product impact at scale
Build scalable data models, pipelines, and dashboards for company-wide use
Partner with Product and Engineering to embed experimentation best practices
Evaluate ML models, ensuring business relevance, performance, and trade-off clarity
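For illustration only (the counts and thresholds below are invented, not part of the role description), the A/B-test interpretation named above often reduces to a two-proportion z-test, which can be sketched in plain Python:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between control (a) and treatment (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (expressed with erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5.0% vs 6.5% conversion, 4,000 users per arm.
z, p = two_proportion_z(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
```

At real product scale the same machinery applies, with multiple-testing and sequential-peeking corrections layered on top.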
What we're looking for:
5+ years in data science or product analytics at scale (consumer or marketplace preferred)
Advanced SQL and Python skills, with strong foundations in statistics and experimental design
Proven record of designing, running, and analyzing large-scale experiments
Ability to analyze and reason about ML models (classification, recommendation, LLMs)
Strong communicator with a track record of influencing cross-functional teams
If you're excited by the sound of this challenge, apply today and we'll be in touch.
Data Scientist
Data analyst job in San Francisco, CA
We're working with a Series A health tech start-up pioneering a revolutionary approach to healthcare AI, developing neurosymbolic systems that combine statistical learning with structured medical knowledge. Their technology is being adopted by leading health systems and insurers to enhance patient outcomes through advanced predictive analytics.
We're seeking Machine Learning Engineers who excel at the intersection of data science, modeling, and software engineering. You'll design and implement models that extract insights from longitudinal healthcare data, balancing analytical rigor, interpretability, and scalability.
This role offers a unique opportunity to tackle foundational modeling challenges in healthcare, where your contributions will directly influence clinical, actuarial, and policy decisions.
Key Responsibilities
Develop predictive models to forecast disease progression, healthcare utilization, and costs using temporal clinical data (claims, EHR, laboratory results, pharmacy records)
Design interpretable and explainable ML solutions that earn the trust of clinicians, actuaries, and healthcare decision-makers
Research and prototype innovative approaches leveraging both classical and modern machine learning techniques
Build robust, scalable ML pipelines for training, validation, and deployment in distributed computing environments
Collaborate cross-functionally with data engineers, clinicians, and product teams to ensure models address real-world healthcare needs
Communicate findings and methodologies effectively through visualizations, documentation, and technical presentations
Required Qualifications
Strong foundation in statistical modeling, machine learning, or data science, with preference for experience in temporal or longitudinal data analysis
Proficiency in Python and ML frameworks (PyTorch, JAX, NumPyro, PyMC, etc.)
Proven track record of transitioning models from research prototypes to production systems
Experience with probabilistic methods, survival analysis, or Bayesian inference (highly valued)
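As an illustrative aside (the cohort data below is invented), the survival-analysis experience mentioned above can be sketched with a minimal Kaplan-Meier estimator:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up time per subject; events: 1 = event observed, 0 = censored."""
    survival, s = {}, 1.0
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)  # events at t
        n = sum(1 for ti in times if ti >= t)                               # at risk at t
        if d:
            s *= 1 - d / n
            survival[t] = s
    return survival

# Hypothetical cohort of six subjects, two of them censored.
curve = kaplan_meier(times=[2, 3, 3, 5, 8, 8], events=[1, 1, 0, 1, 0, 1])
# Survival drops stepwise at each observed event time: ~0.83, 0.67, 0.44, 0.22.
```

In practice a library such as lifelines would typically replace this hand-rolled version; the sketch just shows the estimator itself.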
Bonus Qualifications
Experience working with clinical data and healthcare terminologies (ICD, CPT, SNOMED CT, LOINC)
Background in actuarial modeling, claims forecasting, or risk adjustment methodologies
Data Scientist
Data analyst job in Pleasanton, CA
Key Responsibilities
Design and develop marketing-focused machine learning models, including:
Customer segmentation
Propensity, churn, and lifetime value (LTV) models
Campaign response and uplift models
Attribution and marketing mix models (MMM)
Build and deploy NLP solutions for:
Customer sentiment analysis
Text classification and topic modeling
Social media, reviews, chat, and voice-of-customer analytics
Apply advanced statistical and ML techniques to solve real-world business problems.
Work with structured and unstructured data from multiple marketing channels (digital, CRM, social, email, web).
Translate business objectives into analytical frameworks and actionable insights.
Partner with stakeholders to define KPIs, success metrics, and experimentation strategies (A/B testing).
Optimize and productionize models using MLOps best practices.
Mentor junior data scientists and provide technical leadership.
Communicate complex findings clearly to technical and non-technical audiences.
Required Skills & Qualifications
7+ years of experience in Data Science, with a strong focus on marketing analytics.
Strong expertise in Machine Learning (supervised & unsupervised techniques).
Hands-on experience with NLP techniques, including:
Text preprocessing and feature extraction
Word embeddings (Word2Vec, GloVe, Transformers)
Experience with Large Language Models (LLMs) is a plus
Proficiency in Python (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch).
Experience with SQL and large-scale data processing.
Strong understanding of statistics, probability, and experimental design.
Experience working with cloud platforms (AWS, Azure, or GCP).
Ability to translate data insights into business impact.
Nice to Have
Experience with marketing automation or CRM platforms.
Knowledge of MLOps, model monitoring, and deployment pipelines.
Familiarity with GenAI/LLM-based NLP use cases for marketing.
Prior experience in consumer, e-commerce, or digital marketing domains.
EEO
Centraprise is an equal opportunity employer. Your application and candidacy will not be considered based on race, color, sex, religion, creed, sexual orientation, gender identity, national origin, disability, genetic information, pregnancy, veteran status or any other characteristic protected by federal, state or local laws.
Senior Data Visualization Analyst
Data analyst job in Oakland, CA
Fractal Analytics is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets. An ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite empowers imagination with intelligence. And that it will be such Fractalites that will continue to build the company for the next 100 years.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Job Title: Senior Data Visualization Analyst / Consultant
Location: SF, CA; Oakland, CA
Employment Type: Full-Time
About the Role
We are seeking a highly skilled Senior Data Visualization Analyst/Consultant to join our advanced analytics team supporting a large health payer organization. This role focuses on delivering actionable insights through data visualization, reporting, and analytics to drive operational excellence and improve member outcomes.
You will work closely with stakeholders across middle-office functions such as provider performance reporting, value-based care analytics, care management operations, and compliance reporting. The ideal candidate combines strong technical expertise with deep understanding of health plan operations.
Key Responsibilities
Design, develop, and maintain interactive dashboards and reports using Tableau and other visualization tools.
Build and optimize data pipelines, including creating SQL views and integrating data from multiple sources.
Translate complex healthcare data into clear, actionable insights for business leaders and operational teams.
Partner with cross-functional teams to support value-based care initiatives, provider performance analytics, and care management programs.
Ensure compliance with regulatory reporting requirements and internal data governance standards.
Collaborate with data engineers and analysts to enhance data quality, accessibility, and scalability.
Present findings and recommendations to senior stakeholders in a clear and compelling manner.
Required Qualifications
Bachelor's degree in Analytics, Data Science, Engineering, or related field; Master's degree in MIS, Analytics, or Data Science preferred.
5+ years of experience in data analytics and visualization, preferably in healthcare or health payer environment.
Strong proficiency in:
SQL (complex queries, views, optimization)
Tableau (dashboard design, advanced calculations)
Azure (Data Factory, Synapse, or similar)
SAS (for statistical analysis and reporting)
Python (data wrangling, automation, analytics)
Experience with data pipeline development and integration across multiple systems.
Familiarity with health plan operations, including provider performance, value-based care, care management, and compliance reporting.
Nice-to-Have Skills
Exposure to behavioral health program analytics.
Knowledge of machine learning concepts for predictive modeling.
Experience with cloud-based data architecture and ETL tools.
Core Competencies
Strong analytical and problem-solving skills.
Excellent communication and stakeholder management abilities.
Ability to work independently and in a collaborative team environment.
Detail-oriented with a focus on data accuracy and visualization best practices.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $131,000 - $159,000. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Fractal does not offer sponsorship at this time.
Business System Analyst
Data analyst job in San Leandro, CA
Peterson Power Systems, a Peterson Cat company, has a need for a Senior Business Systems Analyst at our San Leandro, CA location.
The Senior Business Systems Analyst analyzes business processes throughout the Power Systems business unit, identifies improvement opportunities, and collaborates with stakeholders to design new and improved system solutions. Utilizing a strong understanding of business system requirements, this role serves as the key liaison between business teams, Information Technology (IT) department, and external partners to implement new systems and system enhancements. This position is also responsible for providing user support and training to ensure effective system adoption.
ESSENTIAL JOB FUNCTIONS
The following reflects management's definition of essential functions for this job but does not restrict the tasks that may be assigned. Management may assign or reassign the functions to this job at any time due to reasonable accommodation or other reasons. Job functions include the following. Other duties may be assigned.
Collaborate with Power Systems managers to analyze current business systems and processes, identify inefficiencies, and design system solutions to automate and improve them.
Develop a working understanding of business system requirements related to, but not limited to, sales pipeline management, quoting, inventory management, project planning, and logistics planning.
Partner with technical teams including Information Technology (IT), Marketing, and external partners to design, develop, and implement new systems and system enhancements.
Communicate system solutions and project updates, acting as a key liaison between technical and non-technical stakeholders.
Provide oversight and drive continuous improvement of Peterson Power's business systems, particularly Salesforce, to ensure strong user adoption and experience.
Provide business systems support and training for end users.
Maintain regular, punctual, and predictable attendance.
QUALIFICATIONS
Bachelor's Degree from a fully accredited college in Business or other closely related field; and a minimum of six (6) years of directly related experience with design and implementation of business systems, including Salesforce; or an equivalent combination of education and work experience.
3+ years of Salesforce experience required.
Salesforce CRM and CPQ module experience highly preferred.
Certinia experience preferred.
Staff Data Engineer
Data analyst job in San Francisco, CA
🌎 San Francisco (Hybrid)
💼 Founding/Staff Data Engineer
💵 $200k-$300k base
Our client is an elite applied AI research and product lab building AI-native systems for finance-and pushing frontier models into real production environments. Their work sits at the intersection of data, research, and high-stakes financial decision-making.
As the Founding Data Engineer, you will own the data platform that powers everything: models, experiments, and user-facing products relied on by demanding financial customers. You'll make foundational architectural decisions, work directly with researchers and product engineers, and help define how data is built, trusted, and scaled from day one.
What you'll do:
Design and build the core data platform, ingesting, transforming, and serving large-scale financial and alternative datasets.
Partner closely with researchers and ML engineers to ship production-grade data and feature pipelines that power cutting-edge models.
Establish data quality, observability, lineage, and reproducibility across both experimentation and production workloads.
Deploy and operate data services using Docker and Kubernetes in a modern cloud environment (AWS, GCP, or Azure).
Make foundational choices on tooling, architecture, and best practices that will define how data works across the company.
Continuously simplify and evolve systems, rewriting pipelines or infrastructure when it's the right long-term decision.
Ideal candidate:
Have owned or built high-performance data systems end-to-end, directly supporting production applications and ML models.
Are strongest in backend and data infrastructure, with enough frontend literacy to integrate cleanly with web products when needed.
Can design and evolve backend services and pipelines (Node.js or Python) to support new product features and research workflows.
Are an expert in at least one statically typed language, with a strong bias toward type safety, correctness, and maintainable systems.
Have deployed data workloads and services using Docker and Kubernetes on a major cloud provider.
Are comfortable making hard calls: simplifying, refactoring, or rebuilding legacy pipelines when quality and scalability demand it.
Use AI tools to accelerate your work, but rigorously review and validate AI-generated code, insisting on sound system design.
Thrive in a high-bar, high-ownership environment with other exceptional engineers.
Love deep technical problems in data infrastructure, distributed systems, and performance.
Nice to have:
Experience working with financial data (market, risk, portfolio, transactional, or alternative datasets).
Familiarity with ML infrastructure, such as feature stores, experiment tracking, or model serving systems.
Background in a high-growth startup or a foundational infrastructure role.
Compensation & setup:
Competitive salary and founder-level equity
Hybrid role based in San Francisco, with close collaboration and significant ownership
Small, elite team building core infrastructure with outsized impact
Application Analyst
Data analyst job in San Francisco, CA
Required Qualifications:
Epic Cupid certification
Radiant certification
Involves the design, building, testing, and implementation of clinical application systems. Provides support to clinical users through knowledge of clinical processes, documentation needs, workflows, and clinical practice standards, when adapting software to meet their needs. Works with clinicians to create or adapt written protocols. Prepares detailed specs encompassing clinical processes, information flow, risk, and impact analysis. May provide customer service, troubleshooting, and maintenance.
Generic Scope:
Experienced professional who knows how to apply theory and put it into practice with in-depth understanding of the professional field; independently performs the full range of responsibilities within the function; possesses broad job knowledge; analyzes problems / issues of diverse scope and determines solutions.
Custom Scope:
Applies skills as a seasoned clinical applications professional to projects of medium size at all levels of complexity, or portions of large projects.
The Clinical Applications Professional III functions as the primary support contact and expert for technology solutions used within the cardiology service lines. They work under the direction of the Team Lead and/or Manager to configure, build, and install applications, and they coordinate all issues that arise during the project for their application area. Key operational activities include primary responsibility for analyzing workflows and understanding the policies, procedures, and constraints of the clinical or business operations supported by the applications. In-depth, precise investigation and documentation of operational specifications and application functionality is required. Key technical activities include analyzing new releases to determine how workflows should be modified, building and populating databases and tables during initial system configuration, and conducting system testing and conversion data validation. The application analyst develops and documents internal procedures and establishes change control processes for the application.
The Clinical Application Analyst also develops user training aids and trains end users in workflow and use of the applications. They function as the primary contact for troubleshooting problems and questions from end users during training, go-live, stabilization, and ongoing support (24x7). Successful candidates are skilled communicators who make decisions independently and in collaboration with others up and down the project structure. Attention to detail is a critical skill for this position. Successful candidates enjoy helping other users learn and adopt the technology solutions.
Compensation: $65-70/hr
Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
AI Data Engineer
Data analyst job in Sonoma, CA
Member of Technical Staff - AI Data Engineer
San Francisco (In-Office)
$150K to $225K + Equity
A high-growth, AI-native startup coming out of stealth is hiring AI Data Engineers to build the systems that power production-grade AI. The company has recently signed a Series A term sheet and is scaling rapidly. This role is central to unblocking current bottlenecks across data engineering, context modeling, and agent performance.
Responsibilities:
• Build distributed, reliable data pipelines using Airflow, Temporal, and n8n
• Model SQL, vector, and NoSQL databases (Postgres, Qdrant, etc.)
• Build API and function-based services in Python
• Develop custom automations (Playwright, Stagehand, Zapier)
• Work with AI researchers to define and expose context as services
• Identify gaps in data quality and drive changes to upstream processes
• Ship fast, iterate, and own outcomes end-to-end
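Querying a vector database such as Qdrant ultimately comes down to nearest-neighbor search over embeddings. As a hedged, stdlib-only sketch of that idea (no real vector-store client; the document IDs and embeddings below are invented for illustration):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, collection, k=2):
    # Rank stored vectors by similarity to the query, best first.
    scored = sorted(collection.items(),
                    key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy "collection" standing in for a vector store; embeddings are made up.
docs = {
    "refund-policy": [1.0, 0.0, 0.0],
    "shipping-times": [0.0, 1.0, 0.0],
    "returns-faq": [0.9, 0.1, 0.0],
}
print(top_k([1.0, 0.0, 0.0], docs))  # → ['refund-policy', 'returns-faq']
```

A production store replaces the linear scan with an approximate index (HNSW or similar), but the ranking contract is the same.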
Required Experience:
• Strong background in data engineering
• Hands-on experience working with LLMs or LLM-powered applications
• Data modeling skills across SQL and vector databases
• Experience building distributed systems
• Experience with Airflow, Temporal, n8n, or similar workflow engines
• Python experience (API/services)
• Startup mindset and bias toward rapid execution
Nice To Have:
• Experience with stream processing (Flink)
• dbt or Clickhouse experience
• CDC pipelines
• Experience with context construction, RAG, or agent workflows
• Analytical tooling (PostHog)
What You Can Expect:
• High-intensity, in-office environment
• Fast decision-making and rapid shipping cycles
• Real ownership over architecture and outcomes
• Opportunity to work on AI systems operating at meaningful scale
• Competitive compensation package
• Meals provided plus full medical, dental, and vision benefits
If this sounds like you, please apply now.
Epic Cupid Analyst - USC, GC - Local only
Data analyst job in San Francisco, CA
Job Title: Epic Cupid Analyst
Candidate Location: Must be local or commute distance to San Francisco, CA 94102
Duration: 6 months
Visa: USC and GC Only
The skills listed below as required are mandatory; please review the requisition carefully before submitting profiles.
Required Skills:
• Epic Cupid Certification is required
• Epic Radiant Certification is required
• Deep expertise in Epic Cupid and Radiant, along with strong clinical workflow knowledge and excellent stakeholder collaboration skills.
• Strong hands-on experience with Epic Cupid, including both implementation and production support.
• Epic Radiant experience supporting imaging workflows.
• Proven background in clinical application analysis within healthcare environments.
• Ability to independently manage medium-sized projects or contribute to large, complex initiatives.
• Strong analytical skills with the ability to document and translate clinical requirements into system configurations.
• Excellent communication skills and a high level of attention to detail.
• Comfortable working in fast-paced clinical environments with multiple stakeholders.
• Able to apply theory to practice, work independently, and act as a trusted advisor to clinical users.
• Strong problem-solving abilities, a customer-focused mindset, and a passion for enabling clinicians through effective technology solutions.
Key Responsibilities:
• Design, build, test, and implement Epic clinical application systems supporting cardiology workflows.
• Serve as the primary application expert and support contact for Cupid and Radiant modules.
• Analyze clinical workflows, policies, and procedures to ensure systems align with operational needs.
• Configure applications, build and populate databases and tables, and validate conversion data.
• Evaluate new Epic releases and assess workflow impacts, recommending configuration changes as needed.
• Develop and maintain detailed documentation including specifications, workflows, risk assessments, and change control processes.
• Provide troubleshooting and on-call support during training, go-live, stabilization, and ongoing operations (24x7 support model).
• Create user training materials and conduct end-user training sessions.
• Collaborate closely with clinicians, project managers, and technical teams to ensure successful project delivery.
Data Engineer
Data analyst job in San Francisco, CA
Midjourney is a research lab exploring new mediums to expand the imaginative powers of the human species. We are a small, self-funded team focused on design, human infrastructure, and AI. We have no investors, no big company controlling us, and no advertisers. We are 100% supported by our amazing community.
Our tools are already used by millions of people to dream, to explore, and to create. But this is just the start. We think the story of the 2020s is about building the tools that will remake the world for the next century. We're making those tools, to expand what it means to be human.
Core Responsibilities:
Design and maintain data pipelines to consolidate information across multiple sources (subscription platforms, payment systems, infrastructure and usage monitoring, and financial systems) into a unified analytics environment
Build and manage interactive dashboards and self-service BI tools that enable leadership to track key business metrics including revenue performance, infrastructure costs, customer retention, and operational efficiency
Serve as technical owner of our financial planning platform (Pigment or similar), leading implementation and build-out of models, data connections, and workflows in partnership with Finance leadership to translate business requirements into functional system architecture
Develop automated data quality checks and cleaning processes to ensure accuracy and consistency across financial and operational datasets
Partner with Finance, Product and Operations teams to translate business questions into analytical frameworks, including cohort analysis, cost modeling, and performance trending
Create and maintain documentation for data models, ETL processes, dashboard logic, and system workflows to ensure knowledge continuity
Support strategic planning initiatives by building financial models, scenario analyses, and data-driven recommendations for resource allocation and growth investments
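The cohort analysis mentioned above reduces to grouping users by signup month and measuring what fraction remain active in later months. A hedged, stdlib-only sketch with invented event data (real pipelines would read from the warehouse):

```python
from collections import defaultdict

# Invented events: (user_id, signup_month, active_month)
events = [
    ("u1", "2025-01", "2025-01"), ("u1", "2025-01", "2025-02"),
    ("u2", "2025-01", "2025-01"),
    ("u3", "2025-02", "2025-02"), ("u3", "2025-02", "2025-03"),
]

def retention(events):
    # cohort month -> active month -> set of active users
    cohorts = defaultdict(lambda: defaultdict(set))
    for user, signup, active in events:
        cohorts[signup][active].add(user)
    out = {}
    for cohort, by_month in cohorts.items():
        size = len(by_month[cohort])  # users active in their signup month
        out[cohort] = {m: len(u) / size for m, u in sorted(by_month.items())}
    return out

print(retention(events))
# → {'2025-01': {'2025-01': 1.0, '2025-02': 0.5},
#    '2025-02': {'2025-02': 1.0, '2025-03': 1.0}}
```

The same shape (group by cohort, divide by cohort size) carries over directly to SQL or a BI tool.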
Required Qualifications:
3-5+ years experience in data engineering, analytics engineering, or similar role with demonstrated ability to work with large-scale datasets
Strong SQL skills and experience with modern data warehousing solutions (BigQuery, Snowflake, Redshift, etc.)
Proficiency in at least one programming language (Python, R) for data manipulation and analysis
Experience with BI/visualization tools (Looker, Tableau, Power BI, or similar)
Hands-on experience administering enterprise financial systems (NetSuite, SAP, Oracle, or similar ERP platforms)
Experience working with Stripe Billing or similar subscription management platforms, including data extraction and revenue reporting
Ability to communicate technical concepts clearly to non-technical stakeholders
Senior Data Engineer - Spark, Airflow
Data analyst job in Sonoma, CA
We are seeking an experienced Data Engineer to design and optimize scalable data pipelines that drive our global data and analytics initiatives.
In this role, you will leverage technologies such as Apache Spark, Airflow, and Python to build high performance data processing systems and ensure data quality, reliability, and lineage across Mastercard's data ecosystem.
The ideal candidate combines strong technical expertise with hands-on experience in distributed data systems, workflow automation, and performance tuning to deliver impactful, data-driven solutions at enterprise scale.
Responsibilities:
Design and optimize Spark-based ETL pipelines for large-scale data processing.
Build and manage Airflow DAGs for scheduling, orchestration, and checkpointing.
Implement partitioning and shuffling strategies to improve Spark performance.
Ensure data lineage, quality, and traceability across systems.
Develop Python scripts for data transformation, aggregation, and validation.
Execute and tune Spark jobs using spark-submit.
Perform DataFrame joins and aggregations for analytical insights.
Automate multi-step processes through shell scripting and variable management.
Collaborate with data, DevOps, and analytics teams to deliver scalable data solutions.
Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or related field (or equivalent experience).
At least 7 years of experience in data engineering or big data development.
Strong expertise in Apache Spark architecture, optimization, and job configuration.
Proven experience authoring, scheduling, checkpointing, and monitoring Airflow DAGs.
Skilled in data shuffling, partitioning strategies, and performance tuning in distributed systems.
Expertise in Python programming including data structures and algorithmic problem-solving.
Hands-on with Spark DataFrames and PySpark transformations using joins, aggregations, filters.
Proficient in shell scripting, including managing and passing variables between scripts.
Experienced with spark-submit for deployment and tuning.
Solid understanding of ETL design, workflow automation, and distributed data systems.
Excellent debugging and problem-solving skills in large-scale environments.
Experience with AWS Glue, EMR, Databricks, or similar Spark platforms.
Knowledge of data lineage and data quality frameworks like Apache Atlas.
Familiarity with CI/CD pipelines, Docker/Kubernetes, and data governance tools.
Sr Data Platform Engineer
Data analyst job in Elk Grove, CA
Hybrid role 3X a week in office in Elk Grove, CA; no remote capabilities
This is a direct hire opportunity.
We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
Implement ETL workflows with Spark, Delta Lake, and cloud‑native tools.
Support big data platforms (Databricks, Snowflake, GCP) in production.
Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
Ensure governance, security, and compliance across data systems.
Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
Collaborate cross‑functionally to translate business needs into technical solutions.
Qualifications:
7+ years in data engineering with production pipeline experience.
Expertise in Spark ecosystem, Databricks, Snowflake, GCP.
Strong skills in PySpark, Python, SQL.
Experience with RAG systems, semantic search, and LLM integration.
Familiarity with Kafka, Pub/Sub, vector databases.
Proven ability to optimize ETL jobs and troubleshoot production issues.
Agile team experience and excellent communication skills.
Certifications in Databricks, Snowflake, GCP, or Azure.
Exposure to Airflow, BI tools (Power BI, Looker Studio).
Research Data Analyst 3 or 4 Oakland, CA, Job ID 80657
Data analyst job in Oakland, CA
The University of California Agriculture and Natural Resources (UC ANR) seeks a Financial Data Analyst (classified as Research Data Analyst 4) to develop, maintain, and optimize financial software solutions that support the Division's financial operations. UC ANR is a complex organization with operations in 58 county offices, three campuses, and nine research and extension centers. As the land-grant arm of the UC system, ANR manages over $300 million in funding and employs over 1,600 academic and staff personnel.
The incumbent applies advanced data analytics, programming, and data integration skills to extract, clean, and analyze large and complex financial datasets from multiple enterprise sources.
This role will collaborate with Resource Planning and Management, Financial Services, IT, and administrative teams to customize and implement applications that enhance financial reporting, budgeting, and operational efficiency. The ideal candidate has a strong background with financial data systems in higher education, proficiency in tools such as SQL, R, SAS, Tableau, and Power BI, with a passion for improving processes in a large, complex academic institution.
This position is a contract appointment that is 100% fixed, and ends three years from date of hire with the possibility of extension if funding permits.
This position is posted as a Research Data Analyst 4 but a Research Data Analyst 3 may be considered depending on the level of experience of the hired applicant.
The home department is Resource, Planning & Management. While this position normally is based in Oakland, CA, this position is eligible for hybrid flexible work arrangements for applicants living in the State of California at this time. Please note that hybrid flexible work arrangements are subject to change by the University.
Pay Scale:
Research Data Analyst 3: $88,900.00/year to $126,400.00/year
Research Data Analyst 4: $109,200.00/year to $158,500.00/year
Job Posting Close Date: This job is open until filled. The first application review date will be 9/8/2025.
Key Responsibilities:
25%
Financial Data Integration & Reporting
Create and maintain scripts, queries, and reports that integrate data from multiple systems to support cross-functional financial analysis.
Build automated tools to extract, clean, and analyze large financial datasets from Oracle, Cognos, and other university financial systems.
Translate complex reporting and analysis needs into scalable, user-friendly dashboards and visualizations using tools such as Tableau, Cognos, or Power BI.
20%
Financial Application Design & Enhancement
Design and enhance financial applications that support budgeting, forecasting, and reporting within a university financial ecosystem.
Implement financial models and process automation solutions to improve operational efficiency across departments.
15%
Process Improvement & Operational Efficiency
Improve speed, accuracy, and efficiency of financial calculations and reporting systems.
Contribute to continuous improvement initiatives with university finance and IT stakeholders.
15%
Compliance & Risk Management
Ensure the use of all financial applications complies with university policies, state and federal regulations (GAAP, IFRS, OMB Uniform Guidance), and cybersecurity standards.
15%
Technical Support & Issue Resolution
Diagnose and resolve system issues, provide technical support to finance teams, and ensure seamless financial operations across all campuses.
10%
Stakeholder Collaboration & Documentation
Work with university finance and IT stakeholders, maintain clear documentation of system workflows, and support cross-campus coordination.
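The automated extract-and-clean tooling described above usually starts with a validation pass that separates usable records from flagged ones before any numbers reach a report. A hedged, stdlib-only sketch (the field names and sample rows are invented, not an actual UC extract):

```python
import csv
import io

# Invented sample standing in for a financial extract.
raw = """fund,dept,amount
19900,ANR-010,1250.00
19900,ANR-011,not-a-number
,ANR-012,300.00
"""

def validate(reader):
    # Split rows into clean records and flagged errors -- the core of an
    # automated data-quality check ahead of reporting.
    good, errors = [], []
    for lineno, row in enumerate(reader, start=2):  # header is line 1
        problems = []
        if not row["fund"]:
            problems.append("missing fund")
        try:
            row["amount"] = float(row["amount"])
        except ValueError:
            problems.append("bad amount")
        (errors if problems else good).append((lineno, row, problems))
    return good, errors

good, errors = validate(csv.DictReader(io.StringIO(raw)))
print(len(good), len(errors))  # → 1 2
```

Keeping the flagged rows (with line numbers and reasons) rather than silently dropping them is what makes the check auditable.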
Requirements:
Bachelor's degree in Computer Science, Finance, Data Science, Business Analytics, or a related field or equivalent experience.
Proficiency in using programming or scripting languages (such as SQL, Python, or similar) to extract, analyze, and automate processes using data from financial systems.
Experience with ERP platforms (Oracle, UCPath), and business intelligence tools (Cognos, Tableau, Power BI).
Familiarity with data warehousing, cloud computing (AWS, Azure, GCP), and automation tools.
Ability to analyze large datasets and develop financial reports that align with UC's financial reporting standards.
Strong problem-solving, analytical thinking, and collaboration skills.
Preferred Skills:
Strong understanding of fund accounting, grant management, and UC financial policies.
Experience working in higher education finance, UC financial operations, or public sector budgeting.
Knowledge of UC policies related to finance, grants, and compliance.
Certifications such as CPA, CFA, or data analytics certifications.
Special Conditions of Employment:
Must possess a valid California Driver's License to drive a County or University vehicle. Ability and means to travel on a flexible schedule as needed; proof of liability damage insurance on the vehicle used is required. Job-related travel will be reimbursed according to University policies.
The University reserves the right to make employment contingent upon successful completion of the background check. This is a designated position requiring a background check and may require fingerprinting due to the nature of the job responsibilities. UC ANR does hire people with conviction histories and reviews information received in the context of the job responsibilities.
As of January 1, 2014, ANR is a smoke- and tobacco-free environment in which smoking, the use of smokeless tobacco products, and the use of unregulated nicotine products (e-cigarettes), is strictly prohibited.
As a condition of employment, you will be required to comply with the University of California Policy on Vaccination Programs, as may be amended or revised from time to time. Federal, state, or local public health directives may impose additional requirements.
Employees must exercise the utmost discretion in managing sensitive information learned in the course of performing their duties. Sensitive information includes, but is not limited to, employee and student records, health and patient records, financial data, strategic plans, proprietary information, and any other sensitive or non-public information learned during the course and scope of employment. Sensitive information should be shared on a limited basis; employees must actively take steps to limit access to individuals who have a legitimate business need to know, ensure that sensitive information is properly safeguarded, and follow all organizational policies and laws on data protection and privacy. This includes secure handling of physical and digital records and proper usage of IT systems to prevent data leaks. The unauthorized or improper disclosure of confidential work-related information obtained from any source on any work-related matter is a violation of these expectations.
Misconduct Disclosure Requirement: As a condition of employment, the final candidate who accepts a conditional offer of employment will be required to disclose if they have been subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct; received notice of any allegations or are currently the subject of any administrative or disciplinary proceedings involving misconduct; have left a position after receiving notice of allegations or while under investigation in an administrative or disciplinary proceeding involving misconduct; or have filed an appeal of a finding of misconduct with a previous employer.
a. "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment, discrimination, dishonesty, or unethical conduct, as defined by the employer. For reference, below are UC's policies addressing some forms of misconduct:
UC Sexual Violence and Sexual Harassment Policy
UC Anti-Discrimination Policy
Abusive Conduct in the Workplace
To apply, please visit: https://careerspub.universityofcalifornia.edu/psc/ucanr/EMPLOYEE/HRMS/c/HRS_HRAM_FL.HRS_CG_SEARCH_FL.GBL?Page=HRS_APP_JBPST_FL&JobOpeningId=80657&PostingSeq=1&SiteId=17&languageCd=ENG&FOCUS=Applicant
Copyright ©2025 Jobelephant.com Inc. All rights reserved.
Staff Data Scientist
Data analyst job in Santa Rosa, CA
Staff Data Scientist | San Francisco | $250K-$300K + Equity
We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI.
In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance.
What you'll do:
Drive deep-dive analyses on user behavior, product performance, and growth drivers
Design and interpret A/B tests to measure product impact at scale
Build scalable data models, pipelines, and dashboards for company-wide use
Partner with Product and Engineering to embed experimentation best practices
Evaluate ML models, ensuring business relevance, performance, and trade-off clarity
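Measuring product impact with an A/B test typically comes down to a two-proportion comparison between control and variant conversion rates. A hedged sketch with invented numbers (real analyses would also check power, sample ratio, and multiple-testing corrections):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Pooled two-proportion z-test for an A/B conversion comparison.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: control converts 100/1000, variant 130/1000.
z = two_proportion_z(100, 1000, 130, 1000)
print(round(z, 2))  # → 2.1, beyond the 1.96 cutoff for p < 0.05 (two-sided)
```

At scale the arithmetic is identical; the hard part is the experimental design around it (randomization units, guardrail metrics, stopping rules).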
What we're looking for:
5+ years in data science or product analytics at scale (consumer or marketplace preferred)
Advanced SQL and Python skills, with strong foundations in statistics and experimental design
Proven record of designing, running, and analyzing large-scale experiments
Ability to analyze and reason about ML models (classification, recommendation, LLMs)
Strong communicator with a track record of influencing cross-functional teams
If you're excited by the sound of this challenge, apply today and we'll be in touch.
Application Analyst
Data analyst job in San Mateo, CA
Required Qualifications:
Epic Cupid certification
Radiant certification
Involves the design, building, testing, and implementation of clinical application systems. Provides support to clinical users through knowledge of clinical processes, documentation needs, workflows, and clinical practice standards, when adapting software to meet their needs. Works with clinicians to create or adapt written protocols. Prepares detailed specs encompassing clinical processes, information flow, risk, and impact analysis. May provide customer service, troubleshooting, and maintenance.
Generic Scope:
Experienced professional who knows how to apply theory and put it into practice with in-depth understanding of the professional field; independently performs the full range of responsibilities within the function; possesses broad job knowledge; analyzes problems / issues of diverse scope and determines solutions.
Custom Scope:
Applies skills as a seasoned clinical applications professional to projects of medium size at all levels of complexity, or portions of large projects.
The Clinical Applications Professional III functions as the primary support contact and expert for technology solutions used within the cardiology service lines. They work under the direction of the Team Lead and/or Manager to configure, build, and install applications, and they coordinate all issues that arise during the project for their application area. Key operational activities include primary responsibility for analyzing workflows and understanding the policies, procedures, and constraints of the clinical or business operations supported by the applications. In-depth, precise investigation and documentation of operational specifications and application functionality is required. Key technical activities include analyzing new releases to determine how workflows should be modified, building and populating databases and tables during initial system configuration, and conducting system testing and conversion data validation. The application analyst develops and documents internal procedures and establishes change control processes for the application.
The Clinical Application Analyst also develops user training aids and trains end users in workflow and use of the applications. They function as the primary contact for troubleshooting problems and questions from end users during training, go-live, stabilization, and ongoing support (24x7). Successful candidates are skilled communicators who make decisions independently and in collaboration with others up and down the project structure. Attention to detail is a critical skill for this position. Successful candidates enjoy helping other users learn and adopt the technology solutions.
Compensation: $65-70/hr
Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
AWS Data Architect
Data analyst job in Santa Rosa, CA
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for:
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for:
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
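The bronze/silver/gold layering described above is, at heart, a series of staged transformations: raw as ingested, cleansed and typed, then curated aggregates. A hedged, stdlib-only sketch of that flow (the real thing would use Delta tables and Spark; all field names and values here are invented):

```python
# Bronze: raw records as ingested, including duplicates and bad rows.
bronze = [
    {"sku": "A1", "qty": "2", "price": "9.99"},
    {"sku": "A1", "qty": "2", "price": "9.99"},    # exact duplicate
    {"sku": "B2", "qty": "bad", "price": "4.50"},  # unparseable qty
    {"sku": "B2", "qty": "3", "price": "4.50"},
]

def to_silver(rows):
    # Silver: deduplicate and cast types, dropping rows that fail validation.
    seen, out = set(), []
    for r in rows:
        key = tuple(sorted(r.items()))
        if key in seen:
            continue
        seen.add(key)
        try:
            out.append({"sku": r["sku"], "qty": int(r["qty"]),
                        "price": float(r["price"])})
        except ValueError:
            continue
    return out

def to_gold(rows):
    # Gold: curated aggregate, here revenue per SKU.
    agg = {}
    for r in rows:
        agg[r["sku"]] = agg.get(r["sku"], 0.0) + r["qty"] * r["price"]
    return agg

silver = to_silver(bronze)
print(to_gold(silver))  # → {'A1': 19.98, 'B2': 13.5}
```

Delta Lake adds transactional writes, schema enforcement, and time travel on top, but each layer remains a deterministic function of the one below it.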
Requirements:
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, SQL.
Excellent hands-on experience with workflow automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources
In-depth experience with AWS Cloud services such as Glue, S3, Redshift etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Senior ML Data Engineer
Data analyst job in Sonoma, CA
We're the data team behind Midjourney's image generation models. We handle the dataset side: processing, filtering, scoring, captioning, and all the distributed compute that makes high-quality training data possible.
What you'd be working on:
Large-scale dataset processing and filtering pipelines
Training classifiers for content moderation and quality assessment
Models for data quality and aesthetic evaluation
Data visualization tools for experimenting on dataset samples
Testing/simulating distributed inference pipelines
Monitoring dashboards for data quality and pipeline health
Performance optimization and infrastructure scaling
Occasionally jumping into inference optimization and other cross-team projects
Our current stack: PySpark, Slurm, and distributed batch processing across a hybrid cloud setup. We're pragmatic about tools - if there's something better, we'll switch.
We're looking for someone strong in either:
Data engineering/ML pipelines at scale, or
Cloud/infrastructure with distributed systems experience
You don't need exact tech matches - comfort with adjacent technologies and willingness to learn matter more. We work with our own hardware plus GCP and other providers, so adaptability across different environments is valuable.
Location: SF office a few times per week (we may make exceptions on location for truly exceptional candidates)
The role offers variety: our team members often get pulled into different projects across the company, from dataset work to inference optimization. If you're interested in the intersection of large-scale data processing and cutting-edge generative AI, we'd love to hear from you.
Application Analyst
Data analyst job in Hayward, CA
Required Qualifications:
Epic Cupid certification
Radiant certification
Involves the design, building, testing, and implementation of clinical application systems. Provides support to clinical users through knowledge of clinical processes, documentation needs, workflows, and clinical practice standards, when adapting software to meet their needs. Works with clinicians to create or adapt written protocols. Prepares detailed specs encompassing clinical processes, information flow, risk, and impact analysis. May provide customer service, troubleshooting, and maintenance.
Generic Scope:
Experienced professional who knows how to apply theory and put it into practice with in-depth understanding of the professional field; independently performs the full range of responsibilities within the function; possesses broad job knowledge; analyzes problems / issues of diverse scope and determines solutions.
Custom Scope:
Applies skills as a seasoned clinical applications professional to projects of medium size at all levels of complexity, or portions of large projects.
The Clinical Applications Professional III functions as the primary support contact and expert for technology solutions used within the cardiology service lines. They work under the direction of the Team Lead and/or Manager to configure, build, and install applications. They coordinate all issues that arise during the project for their application area. Key operational activities include primary responsibility for analyzing workflows and understanding the policies, procedures, and constraints of the clinical or business operations supported by the applications. In-depth and precise investigation and documentation of operational specifications and application functionality is required. Key technical activities include analyzing new releases to determine how workflows should be modified, building and populating databases and tables during initial system configuration, and conducting system testing and conversion data validation. The application analyst develops and documents internal procedures and establishes change control processes for the application.
The Clinical Application Analyst also develops user training aids and trains end users in workflow and use of applications. They function as the primary contact for troubleshooting problems and questions from end users during training, go-live, stabilization, and ongoing support (24x7). Successful candidates are skilled communicators who make decisions independently and in collaboration with others up and down the project structure. Attention to detail is a critical skill for this position. Successful candidates enjoy helping other users learn and adopt the technology solutions.
Compensation: $65-70/hr
Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.