Senior data scientist jobs in Santa Barbara, CA - 2,180 jobs
Senior Product Data Scientist, Product, App Safety Engineering
Google Inc. 4.8
Senior data scientist job in Mountain View, CA
Minimum qualifications:
Bachelor's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.
8 years of experience using analytics to solve product or business problems, performing statistical analysis, and coding (e.g., Python, R, SQL), or 5 years of experience with an advanced degree.
Preferred qualifications:
Master's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.
About the job
Help serve Google's worldwide user base of more than a billion people. Data Scientists provide quantitative support, market understanding and a strategic perspective to our partners throughout the organization. As a data-loving member of the team, you serve as an analytics expert for your partners, using numbers to help them make better decisions. You will weave stories with meaningful insight from data. You'll make critical recommendations for your fellow Googlers in Engineering and Product Management. You relish tallying up the numbers one minute and communicating your findings to a team leader the next.
The Platforms and Devices team encompasses Google's various computing software platforms across environments (desktop, mobile, applications), as well as our first-party devices and services that combine the best of Google AI, software, and hardware. Teams across this area research, design, and develop new technologies to make our users' interaction with computing faster and more seamless, building innovative experiences for our users around the world.
The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
Perform analysis utilizing relevant tools (e.g., SQL, R, Python). Help solve problems, narrowing down multiple options into the best approach, and take ownership of open-ended ambiguous business problems to reach an optimal solution.
Build new processes, procedures, methods, tests, and components with foresight to anticipate and address future issues.
Report on Key Performance Indicators (KPIs) to support business reviews with the cross-functional/organizational leadership team. Translate analysis results to business insights or product improvement opportunities.
Build and prototype analysis and business cases iteratively to provide insights at scale. Develop knowledge of Google data structures and metrics, advocating for changes where needed for product development.
Influence across teams to align resources and direction.
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
We design, build, and maintain infrastructure to support agentic workflows for Siri. Our team is in charge of the data generation, introspection, and evaluation frameworks that are key to efficiently developing foundation models and agentic workflows for Siri applications. On this team, you will have the opportunity to work at the intersection of cutting-edge foundation models and products.
Minimum Qualifications
Strong background in computer science: algorithms, data structures and system design
3+ years of experience with large-scale distributed system design, operation, and optimization
Experience with SQL/NoSQL database technologies, data warehouse frameworks like BigQuery/Snowflake/RedShift/Iceberg and data pipeline frameworks like GCP Dataflow/Apache Beam/Spark/Kafka
Experience processing data for ML applications at scale
Excellent interpersonal skills; able to work independently as well as cross-functionally
Preferred Qualifications
Experience fine-tuning and evaluating Large Language Models
Experience with Vector Databases
Experience deploying and serving LLMs
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses - including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant .
$147.4k-272.1k yearly 4d ago
Sr. Data Scientist
T3W Business Solutions, Inc.
Senior data scientist job in San Diego, CA
T3W Business Solutions, Inc. is a Woman-Owned Small Business with Headquarters located in San Diego, CA. It is our mission to help our clients develop strategies to optimize their use of space and resources resulting in maximum benefits; we also deliver quality data and analysis to support our client's daily facility operations, planning, and compliance programs. We are looking for a Sr. Data Scientist in San Diego, California.
**Contingent Upon Contract Award**
Summary
Builds advanced analytics, machine learning models, forecasting tools, and data products to support FRCSW strategic and operational decisions. Analyzes large structured/unstructured datasets, constructs pipelines, and develops dashboards visualizing key performance indicators. Leads data standardization, modeling, statistical analysis, and automation initiatives. Guides team members on analytic methods and ensures alignment with enterprise data strategy.
Responsibilities
Apply statistical modeling, machine learning, and data visualization techniques.
Develop predictive models and dashboards using Power BI, Qlik, or Tableau.
Analyze large structured and unstructured datasets.
Collaborate with IT, program management, and financial teams to support data-driven decisions.
Requirements
Bachelor's degree in Data Science, Statistics, or a related field.
10+ years of professional data analytics experience.
Proficiency in Python, R, SQL, and visualization tools.
Must possess an active Secret clearance (required).
This contractor and subcontractor shall abide by the requirements of 41 CFR §§ 60-1.4(a), 60-300.5(a) and 60-741.5(a). These regulations prohibit discrimination against qualified individuals based on their status as protected veterans or individuals with disabilities and prohibit discrimination against all individuals based on their race, color, religion, sex, sexual orientation, gender identity or national origin. Moreover, these regulations require that covered prime contractors and subcontractors take affirmative action to employ and advance in employment individuals without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disability.
$105k-152k yearly est. 1d ago
Staff Data Scientist - Sales Analytics
Harnham
Senior data scientist job in San Francisco, CA
Salary: $200-250k base + RSUs
This fast-growing Series E AI SaaS company is redefining how modern engineering teams build and deploy applications. We're looking for a Staff Data Scientist to drive Sales and Go-to-Market (GTM) analytics, applying advanced modeling and experimentation to accelerate revenue growth and optimize the full sales funnel.
About the Role
As the senior data scientist supporting Sales and GTM, you will combine statistical modeling, experimentation, and advanced analytics to inform strategy and guide decision-making across our revenue organization. Your work will help leadership understand pipeline health, predict outcomes, and identify the levers that unlock sustainable growth.
Key Responsibilities
Model the Business: Build forecasting and propensity models for pipeline generation, conversion rates, and revenue projections.
Optimize the Sales Funnel: Analyze lead scoring, opportunity progression, and deal velocity to recommend improvements in acquisition, qualification, and close rates.
Experimentation & Causal Analysis: Design and evaluate experiments (A/B tests, uplift modeling) to measure the impact of pricing, incentives, and campaign initiatives.
Advanced Analytics for GTM: Apply machine learning and statistical techniques to segment accounts, predict churn/expansion, and identify high-value prospects.
Cross-Functional Partnership: Work closely with Sales, Marketing, RevOps, and Product to influence GTM strategy and ensure data-driven decisions.
Data Infrastructure Collaboration: Partner with Analytics Engineering to define data requirements, ensure data quality, and enable self-serve reporting.
Strategic Insights: Present findings to executive leadership, translating complex analyses into actionable recommendations.
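The experimentation work described above, A/B tests on pricing, incentives, and campaigns, can be sketched as a minimal two-proportion z-test on conversion rates. This is a hypothetical illustration using only the Python standard library; the posting names no specific stack, and the function name and sample figures below are invented for the example.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B conversion experiment.

    conv_a/conv_b: conversion counts; n_a/n_b: sample sizes per arm.
    Returns the z statistic and two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided
    return z, p_value

# Hypothetical funnel data: control vs. a pricing-incentive variant.
z, p = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice, a role like this would layer uplift modeling and proper experiment design (power analysis, guardrail metrics) on top of a basic significance check like this one.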
About You
Experience: 6+ years in data science or advanced analytics roles, with significant time spent in B2B SaaS or developer tools environments.
Technical Depth: Expert in SQL and proficient in Python or R for statistical modeling, forecasting, and machine learning.
Domain Knowledge: Strong understanding of sales analytics, revenue operations, and product-led growth (PLG) motions.
Analytical Rigor: Skilled in experimentation design, causal inference, and building predictive models that influence GTM strategy.
Communication: Exceptional ability to tell a clear story with data and influence senior stakeholders across technical and business teams.
Business Impact: Proven record of driving measurable improvements in pipeline efficiency, conversion rates, or revenue outcomes.
$200k-250k yearly 2d ago
Data Partnerships Lead - Equity & Growth (SF)
Exa
Senior data scientist job in San Francisco, CA
A cutting-edge AI search engine company in San Francisco is seeking a Data Partnerships specialist to build their data pipeline. The role involves owning the partnerships cycle, making strategic decisions, negotiating contracts, and potentially building a team. Candidates should have experience in contract negotiation and a Juris Doctor degree. This in-person role offers a competitive salary range of $160,000 - $250,000 with above-market equity.
$160k-250k yearly 19h ago
Data Scientist
Talent Software Services 3.6
Senior data scientist job in Novato, CA
Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA.
Client's Data Science is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas of Client. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions. The Data Science team develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions across Client. The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at the data, data structure, and documentation levels, following international standards and GCP. The DMS Center of Risk-Based Quality Management (RBQM) sub-function is responsible for the implementation of a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management.

The RBQM Data Scientist supports central monitoring and RBQM for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies.
Primary Responsibilities/Accountabilities:
The RBQM Data Scientist may perform a range of the following responsibilities, depending on the study's complexity and development stage:
Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies.
Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets.
Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review.
Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient.
Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes.
Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics.
Qualifications:
PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field.
Relevant clinical development experience (programming, RBM/RBQM, Data Management), for example:
PhD: 3+ years
MS: 5+ years
BA/BS: 8+ years
Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets).
Hands-on experience working with clinical trial data.
Proficiency with Microsoft Word, Excel, and PowerPoint.
Technical - Preferred / Strong Plus
Experience with RAVE EDC.
Awareness or working knowledge of CDISC, CDASH, SDTM standards.
Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms.
Preferred:
Knowledge of GCP, ICH, FDA guidance related to clinical trials and risk-based monitoring.
Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs.
Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams.
Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
$99k-138k yearly est. 3d ago
Senior Energy Data Engineer - API & Spark Pipelines
Medium 4.0
Senior data scientist job in San Francisco, CA
A technology finance firm in San Francisco is seeking an experienced Data Engineer. The role involves building data pipelines, integrating data across various platforms, and developing scalable web applications. The ideal candidate will have a strong background in data analysis, software development, and experience with AWS. The salary range for this position is between $160,000 and $210,000, with potential bonuses and equity.
$160k-210k yearly 3d ago
Staff Machine Learning Data Engineer
Backflip 3.7
Senior data scientist job in San Francisco, CA
Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper?
Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.”
Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team.
We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in.
If you're excited to define the next generation of hard tech, come build it with us.
The Role
We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD.
You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data.
This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution.
You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.
What You'll Do
Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation.
Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale.
Design scalable data systems for multimodal training (text, geometry, CAD, and more).
Develop and automate data collection, curation, and validation workflows.
Collaborate with MLEs to design and execute experiments that measure and improve model performance.
Build tools and metrics for dataset analysis, monitoring, and quality assurance.
Contribute to model development through insights grounded in data, shaping what, how, and when we train.
Who You Are
You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world.
You have deep experience with data engineering for ML, including distributed systems, data extraction, transformation, and loading, and large-scale data processing (e.g. PySpark, Beam, Ray, or similar).
You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.).
You've developed data augmentation, sampling, or curation strategies that improved model performance.
You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence.
You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible.
You care deeply about data quality, reproducibility, and scalability.
You're excited to help shape the future of AI for physical design.
Bonus points if:
You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines.
You have an interest in math, geometry, topology, rendering, or computational geometry.
You've worked in 3D printing, CAD, or computer graphics domains.
Why Backflip
This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world.
You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it.
Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future.
Let's build the tools the future will be made in.
$126k-178k yearly est. 3d ago
ML Engineer: Fraud Detection & Big Data at Scale
Datavisor 4.5
Senior data scientist job in Mountain View, CA
A leading security technology firm in California is seeking a skilled Data Science Engineer. You will harness the power of unsupervised machine learning to detect fraudulent activities across various sectors. Ideal candidates have experience with Java/C++, data structures, and machine learning. The company offers competitive pay, flexible schedules, equity participation, health benefits, a collaborative environment, and unique perks such as catered lunches and game nights.
$125k-177k yearly est. 4d ago
ML Data Engineer: Systems & Retrieval for LLMs
Zyphra Technologies Inc.
Senior data scientist job in Palo Alto, CA
A leading AI technology company based in Palo Alto, CA is seeking a Machine Learning Data Engineer. You will build and optimize the data infrastructure for our machine learning systems while collaborating with ML engineers and infrastructure teams. The ideal candidate has a strong engineering background in Python, experience in production data pipelines, and a deep understanding of distributed systems. This role offers comprehensive benefits, a collaborative environment, and opportunities for innovative contributions.
$110k-157k yearly est. 2d ago
Founding ML Infra Engineer - Audio Data Platform
David Ai
Senior data scientist job in San Francisco, CA
A pioneering audio tech company based in San Francisco is searching for a Founding Machine Learning Infrastructure Engineer. In this role, you will build and scale the core infrastructure that powers cutting-edge audio ML products. You will lead the development of systems for training and deploying models. Candidates should have over 5 years of backend experience with strong skills in cloud infrastructure and machine learning principles. The company offers benefits like unlimited PTO and comprehensive health coverage.
$110k-157k yearly est. 3d ago
Data/Full Stack Engineer, Data Storage & Ingestion Consultant
Eon Systems PBC
Senior data scientist job in San Francisco, CA
About us
At Eon, we are at the forefront of large-scale neuroscientific data collection. Our mission is to enable the safe and scalable development of brain emulation technology to empower humanity over the next decade, beginning with the creation of a fully emulated digital twin of a mouse.
Role
We're a San Francisco team collecting very large microscopy datasets and we need an expert to design and implement our end-to-end data pipeline, from high-rate ingest to multi-petabyte storage and downstream processing. You'll own the strategy (on-prem vs. S3 or hybrid), the bill of materials, and the deployment, and you'll be on the floor wiring, racking, tuning, and validating performance.
Our current instruments generate data at ~1+ GB/s sustained (higher during bursts), and the program will accumulate multiple petabytes total over time. You'll help us choose and implement the right architecture considering reliability and cost controls.
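The ingest numbers above translate directly into storage planning. A quick back-of-envelope sketch (the 50% acquisition duty cycle is a hypothetical assumption, not from the posting):

```python
# Back-of-envelope storage math for a sustained ~1 GB/s ingest stream.
GB = 10**9
TB = 10**12
PB = 10**15

ingest_rate = 1 * GB            # bytes/second, sustained (bursts go higher)
seconds_per_day = 86_400

daily_bytes = ingest_rate * seconds_per_day
print(f"Daily volume: {daily_bytes / TB:.1f} TB")  # ~86.4 TB/day at 100% duty cycle

# Days to accumulate 1 PB, assuming a hypothetical 50% acquisition duty cycle.
duty_cycle = 0.5
days_per_pb = PB / (daily_bytes * duty_cycle)
print(f"Days per petabyte at {duty_cycle:.0%} duty cycle: {days_per_pb:.1f}")
```

At that rate a petabyte arrives roughly every few weeks, which is why the role emphasizes tiered storage, compression tradeoffs, and $/PB cost control.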
Outcomes (what success looks like)
Within 2 weeks: Implement an immediate data-handling strategy that reliably ingests our initial data streams.
Within 2 weeks: Deliver a documented medium-term data architecture covering storage, networking, ingest, and durability.
Within 1 month: Operationalize the medium-term pipeline in production (ingest → buffer → long-term store → compute access).
Ongoing: Maintain ≥95% uptime for the end-to-end data-handling pipeline after setup.
Responsibilities
Architect ingest & storage: Choose and implement an on-prem hardware and data pipeline design or a cloud/S3 alternative with explicit cost and performance tradeoffs at multi-petabyte scale.
Set up a sustained-write ingest path ≥1 GB/s with adequate burst headroom (camera/frame-to-disk), including networking considerations, cooling, and throttling safeguards.
Optimize footprint & cost: Incorporate on-the-fly compression/downsampling options and quantify CPU budget vs. write-speed tradeoffs; document when/where to compress to control $/PB.
Integrate with acquisition workflows ensuring image data and metadata are compatible with downstream stitching/flat-field correction pipelines.
Enable downstream compute: Expose the data to segmentation/analysis stacks (local GPU nodes or cloud).
Skills
5+ years designing and deploying high-throughput storage or HPC pipelines (≥1 GB/s sustained ingest) in production.
Deep hands-on with: NVMe RAID/striping, ZFS/MDRAID/erasure coding, PCIe topology, NUMA pinning, Linux performance tuning, and NIC offload features.
Proven delivery of multi-GB/s ingest systems and petabyte-scale storage in production (life-sciences, vision, HPC, or media).
Experience building tiered storage systems (NVMe → HDD/object) and validating real-world throughput under sustained load.
Practical S3/object-storage know-how (AWS S3 and/or on-prem S3-compatible systems) with lifecycle, versioning, and cost controls.
Data integrity & reliability: snapshots, scrubs, replication, erasure coding, and backup/DR for PB-scale systems.
Networking: 25/40/100 GbE (SFP+/SFP28); RDMA/RoCE/iWARP familiarity; switch config and path tuning.
Ability to spec and rack hardware: selecting chassis/backplanes, RAID/HBA cards, NICs, and cooling strategies to prevent NVMe throttling under sustained writes.
Ideal skills:
Experience with microscopy or scientific imaging ingest at frame-to-disk speeds, including Micro-Manager-based pipelines and raw-to-containerized format conversions.
Experience with life science imaging data a plus.
Engagement details
Contract (1099 or corp-to-corp); contract-to-hire if there's a mutual fit.
On-site requirement: You must be physically present in San Francisco during build-out and initial operations; local field work (e.g., UCSF) as needed.
Compensation: Contract, $100-300/hour
Timeline: Immediate start
$110k-157k yearly est. 4d ago
Global Data ML Engineer for Multilingual Speech & AI
Cartesia
Senior data scientist job in San Francisco, CA
A leading technology company in San Francisco is seeking a Machine Learning Engineer to ensure the quality and coverage of data across diverse languages. You will design large-scale datasets, evaluate models, and implement quality control systems. The ideal candidate has expertise in multilingual datasets and a strong background in applied ML. This full-time role offers competitive benefits, including fully covered insurance and in-office perks, in a supportive team environment.
$110k-157k yearly est. 19h ago
Foundry Data Engineer: ETL Automation & Dashboards
Data Freelance Hub 4.5
Senior data scientist job in San Francisco, CA
A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
$114k-160k yearly est. 2d ago
Senior Data Engineer: ML Pipelines & Signal Processing
Zendar
Senior data scientist job in Berkeley, CA
An innovative tech firm in Berkeley seeks a Senior Data Engineer to manage complex data engineering pipelines. You will ensure data quality, support ML engineers across locations, and establish infrastructure standards. The ideal candidate has over 5 years of experience in Data Science or MLOps, strong algorithmic skills, and proficiency in GCP, Python, and SQL. This role offers competitive salary and the chance to impact a growing team in a dynamic field.
$110k-157k yearly est. 3d ago
Data Scientist
Del Rey Systems & Technology, Inc. 4.3
Senior data scientist job in Oxnard, CA
Data Scientist II (2 positions)
STATUS: Contingency - Announcement of Award Imminent
SSC: Active Secret Security Clearance (required)
SALARY: Please see labor category posted below
SUMMARY:
The Naval Surface Warfare Center, Port Hueneme Division (NSWC PHD) is part of the larger Naval Sea Systems Command. The NSWC PHD mission is to provide research, development, test and evaluation, and in-service engineering and logistics support to the U.S. Navy, other military services, and government agencies. Its focus areas include combat systems, unmanned systems, surface ship systems, and information systems.
LABOR CATEGORIES: All positions require an Active Secret Clearance and experience in DoD
Data Scientist II - $105,872.00
Desired Education: Bachelor's degree in a related technical field.
Desired Experience: Three (3) years of experience with software integration or testing, including analyzing and implementing test plans and scripts. Frequent use of scripting languages such as Python and R, including packages commonly used in data science applications or advanced analytics. Experience with data science, data mining, statistics, or graph algorithms to support analytics objectives.
COMPANY OVERVIEW
DEL REY Systems & Technology, Inc. (DEL REY) is a small Veteran-owned defense contractor founded in 1995 and headquartered in San Diego, California. We are an equal opportunity employer and believe in recruiting and developing the very best professionals in the field. Although our corporate office is in California, we have employees supporting our customers from coast-to-coast and many states in-between.
For employment consideration, please submit your resume to this posting in MS-Word and let us know the position for which you are applying. DEL REY is proud to offer competitive compensation and a comprehensive benefit package. Employee benefits include both a Traditional 401k and ROTH Retirement Accounts; Medical, Dental, Vision, FSA, Vacation, Sick, Basic Term Life Insurance, Employee Assistance Program and voluntary supplemental insurance.
DEL REY complies with applicable Federal civil rights laws and does not discriminate. We welcome all applicants as we are always looking for skilled employees possessing a desire to join and contribute to an employee-focused company committed to sustaining superior customer satisfaction. For employment consideration, please respond to the job board where we have our posting or to our Career Page and reference the position which you are seeking.
DISCLAIMER: The information in this job description indicates the general nature of the opportunity. It should not be construed as a complete or final description.
*** Time-Sensitive - Apply if interested ***
$105.9k yearly 3d ago
Data Engineer
MetroSys
Senior data scientist job in Santa Barbara, CA
MetroSys is seeking an experienced Data Engineer with a strong background in Python and Microsoft Azure environments. The ideal candidate will have at least 5 years of experience in building and optimizing data pipelines, managing data storage solutions, and integrating systems through APIs. This role will focus on developing robust data pipelines and warehouse solutions to support enterprise-level data initiatives.
Key Responsibilities
Design, develop, and maintain scalable data pipelines to support analytics, reporting, and operational workloads.
Build and optimize data storage solutions in Microsoft Azure, including Azure Data Lake and related services.
Integrate third-party and internal systems using APIs for data ingestion and synchronization.
Collaborate with data architects, analysts, and business stakeholders to ensure data solutions meet requirements.
Implement best practices for data governance, quality, and security across the pipeline lifecycle.
Monitor, troubleshoot, and improve pipeline performance and reliability.
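The responsibilities above follow a classic extract-transform-load shape. A minimal Python sketch of that pattern (all names and the sample payload are hypothetical; a real implementation would pull from a live API and write to Azure Data Lake rather than an in-memory list):

```python
import json
from datetime import datetime, timezone

def extract(raw: str) -> list[dict]:
    """Parse a raw API payload (JSON) into records."""
    return json.loads(raw)

def transform(records: list[dict]) -> list[dict]:
    """Normalize field names and stamp each record for lineage."""
    loaded_at = datetime.now(timezone.utc).isoformat()
    return [
        {"id": r["Id"], "amount": float(r["Amount"]), "loaded_at": loaded_at}
        for r in records
        if r.get("Id") is not None  # drop malformed rows early
    ]

def load(records: list[dict], sink: list) -> int:
    """Append to the destination (a stand-in for an Azure Data Lake writer)."""
    sink.extend(records)
    return len(records)

# Hypothetical payload: one valid record, one with a null key.
payload = '[{"Id": 1, "Amount": "19.99"}, {"Id": null, "Amount": "0"}]'
warehouse: list[dict] = []
rows_written = load(transform(extract(payload)), warehouse)
print(rows_written)  # 1 — the malformed row is rejected in transform()
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable and swappable, which is the point of the "robust data pipelines" the posting describes.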
Qualifications
5+ years of hands-on experience as a Data Engineer or similar role.
Strong proficiency with Python for data manipulation, automation, and pipeline development.
Proven experience in Microsoft Azure data services (Azure Data Factory, Data Lake, Synapse, SQL Database).
Solid understanding of data warehouse concepts and storage optimization techniques.
Experience designing and consuming APIs for system integration.
Strong problem-solving skills and ability to work independently in a remote environment.
Preferred Skills:
Knowledge of cloud security practices and compliance requirements.
Familiarity with CI/CD pipelines for data workflows.
Experience with large-scale enterprise data projects.
$103k-146k yearly est. 60d+ ago
Staff Data Engineer
Artera
Senior data scientist job in Santa Barbara, CA
Our Mission: Make healthcare #1 in customer service. What We Deliver: Artera, a SaaS leader in digital health, transforms patient experience with AI-powered virtual agents (voice and text) for every step of the patient journey. Trusted by 1,000+ provider organizations - including specialty groups, FQHCs, large IDNs and federal agencies - engaging 100 million patients annually. Artera's virtual agents support front desk staff to improve patient access including self-scheduling, intake, forms, billing and more. Whether augmenting a team or unleashing a fully autonomous digital workforce, Artera offers multiple virtual agent options to meet healthcare organizations where they are in their AI journey. Artera helps support 2B communications in 109 languages across voice, text and web. A decade of healthcare expertise, powered by AI.
Our Impact: Hear from our CEO, Guillaume de Zwirek, about why we are standing at the edge of the biggest technological shift in healthcare's history!
Our award-winning culture: Since founding in 2015, Artera has consistently been recognized for its innovative technology, business growth, and named a top place to work. Examples of these accolades include: Inc. 5000 Fastest Growing Private Companies (2020, 2021, 2022, 2023, 2024); Deloitte Technology Fast 500 (2021, 2022, 2023, 2024, 2025); Built In Best Companies to Work For (2021, 2022, 2023, 2024, 2025, 2026). Artera has also been recognized by Forbes as one of “America's Best Startup Employers,” Newsweek as one of the “World's Best Digital Health Companies,” and named one of the top “44 Startups to Bet your Career on in 2024” by Business Insider.
SUMMARY
We are seeking a highly skilled and motivated Staff Data Engineer to join our team at Artera. This role is critical to maintaining and improving our data infrastructure, ensuring that our data pipelines are robust, efficient, and capable of delivering high-quality data to both internal and external stakeholders. As a key player in our data team, you will have the opportunity to make strategic decisions about the tools we use, how we organize our data, and the best methods for orchestrating and optimizing our data processes.
Your contributions will be essential to ensuring the uninterrupted flow of data across our platform, supporting the analytics needs of our clients and internal teams. If you are passionate about data, problem-solving, and continuous improvement, this is an opportunity to take the lead on investigating and implementing solutions that enhance our data infrastructure.
RESPONSIBILITIES
Continuous Enhancement: Maintain and elevate Artera's data infrastructure, ensuring peak performance and dependability.
Strategic Leadership: Drive the decision-making process for the selection and implementation of data tools and technologies.
Streamlining: Design and refine data pipelines to ensure smooth and efficient data flow.
Troubleshooting: Manage the daily operations of the Artera platform, swiftly identifying and resolving data-related challenges.
Cross-Functional Synergy: Partner with cross-functional teams to develop new data requirements and refine existing processes.
Guidance: Provide mentorship to junior engineers, supporting their growth and assisting with complex projects.
Collaborative Innovation: Contribute to ongoing platform improvements, ensuring a culture of continuous innovation.
Knowledge Expansion: Stay informed on industry trends and best practices in data infrastructure and cloud technologies.
Dependability: Guarantee consistent data delivery to customers and stakeholders, adhering to or surpassing service level agreements.
Oversight: Monitor and sustain the data infrastructure, covering areas like recalls, message delivery, and reporting functions.
Proactiveness: Improve the stability and performance of the architecture for team implementations.
Requirements
Bachelor's Degree in STEM preferred; additional experience is also accepted in lieu of a degree.
Proven experience with Kubernetes and Cloud infrastructure (AWS preferred)
Strong proficiency in Python and SQL for data processing and automation.
Expertise in orchestration tools such as Airflow and Docker.
Understanding of performance optimization and cost-effectiveness in Snowflake.
Ability to work effectively in a collaborative, cross-functional environment.
Strong problem-solving skills with a proactive and solution-oriented mindset.
Experience with event sourced and microservice architecture
Experienced working with asynchronous requests in large scale applications
Commitment to testing best practices
Experience in Large-scale data architecture
Demonstrated ability to build and maintain complex data pipelines and data flows.
Bonus Experience
Knowledge of DBT & Meltano
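The orchestration requirement above (tools such as Airflow) boils down to running tasks in dependency order. A toy illustration using only the Python standard library — the task names and the tiny DAG are invented for the example, and this is not Airflow's API:

```python
from graphlib import TopologicalSorter

# Shared state for the toy tasks; a real orchestrator passes results
# between tasks via persistent storage, not a dict.
results = {}

def extract():   results["extract"] = [1, 2, 3]
def transform(): results["transform"] = [x * 10 for x in results["extract"]]
def load():      results["load"] = sum(results["transform"])

# DAG edges: each task maps to the set of tasks it depends on.
dag = {"transform": {"extract"}, "load": {"transform"}}
tasks = {"extract": extract, "transform": transform, "load": load}

# static_order() yields a valid topological ordering: extract,
# then transform, then load.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()

print(results["load"])  # 60
```

Airflow adds scheduling, retries, and distributed execution on top of this core idea, but the dependency-ordered DAG is the mental model the requirement is asking for.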
The compensation for this role will be based on level of experience and the geographic tier in which you are located. This position also comes with equity and a variety of benefits.
SECURITY REQUIREMENTS
This engineering role contributes to a secure, federally compliant platform. Candidates must be eligible for a government background check and operate within strict code management, access, and documentation standards. Security-conscious development and participation in compliance practices are core to the role.
OUR APPROACH TO WORK LOCATION
Artera has hybrid office locations in Santa Barbara, CA, and Philadelphia (Wayne), PA, where team members typically come in three days a week. Specific frequency can vary depending on your team's needs, manager expectations and/or role responsibilities.
In addition to our U.S. office locations, we are intentionally building geographically concentrated teams in several key metropolitan areas, which we call our “Hiring Hubs.” We are currently hiring remote candidates located within the following hiring hubs:
- Boston Metro Area, MA
- Chicago Metro Area, IL
- Denver Metro Area, CO
- Kansas City Metro Area (KS/MO)
- Los Angeles Metro Area, CA
- San Francisco / Bay Area, CA
- Seattle Metro Area, WA
This hub-based model helps us cultivate strong local connections and team cohesion, even in a distributed environment.
To be eligible for employment at Artera, candidates must reside in one of our hybrid office cities or one of the designated hiring hubs. Specific roles may call out location preferences when relevant.
As our hubs grow, we may establish local offices to further enhance in-person connection and collaboration. While there are no current plans in place, should an office open in your area, we anticipate implementing a hybrid model. Any future attendance expectations would be developed thoughtfully, considering factors like typical commute times and access to public transit, to ensure they are fair and practical for the local team.
WORKING AT ARTERA
Company benefits - Full health benefits (medical, dental, and vision), flexible spending accounts, company paid life insurance, company paid short-term & long-term disability, company equity, voluntary benefits, 401(k) and more!
Career development - Manager development cohorts, employee development funds
Generous time off - Company holidays, Winter & Summer break, and flexible time off
Employee Resource Groups (ERGs) - We believe that everyone should belong at their workplace. Our ERGs are available for identifying employees or allies to join.
EQUAL EMPLOYMENT OPPORTUNITY (EEO) STATEMENT
Artera is an Equal Opportunity Employer and is committed to fair and equitable hiring practices. All hiring decisions at Artera are based on strategic business needs, job requirements and individual qualifications. All candidates are considered without regard to race, color, religion, gender, sexuality, national origin, age, disability, genetics or any other protected status.
Artera is committed to providing employees with a work environment free of discrimination and harassment; Artera will not tolerate discrimination or harassment of any kind.
Artera provides reasonable accommodations for applicants and employees in compliance with state and federal laws. If you need an accommodation, please reach out to ************.
DATA PRIVACY
Artera values your privacy. By submitting your application, you consent to the processing of your personal information provided in conjunction with your application. For more information please refer to our Privacy Policy.
SECURITY REQUIREMENTS
All employees are responsible for protecting the confidentiality, integrity, and availability of the organization's systems and data, including safeguarding Artera's sensitive information such as Personally Identifiable Information (PII) and Protected Health Information (PHI). Those with specific security or privacy responsibilities must ensure compliance with organizational policies, regulatory requirements, and applicable standards and frameworks by implementing safeguards, monitoring for threats, reporting incidents, and addressing data handling risks or breaches.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
$103k-146k yearly est. 14d ago
Data Engineer
KBR 4.7
Senior data scientist job in Camarillo, CA
Title: Data Engineer
Belong. Connect. Grow. with KBR!
KBR's National Security Solutions team provides high-end engineering and advanced technology solutions to our customers in the intelligence and national security communities. In this position, your work will have a profound impact on the country's most critical role - protecting our national security.
Why Join Us?
* Innovative Projects: KBR's work is at the forefront of engineering, logistics, operations, science, program management, mission IT and cybersecurity solutions.
* Collaborative Environment: Be part of a dynamic team that thrives on collaboration and innovation, fostering a supportive and intellectually stimulating workplace.
* Impactful Work: Your contributions will be pivotal in designing and optimizing defense systems that ensure national security and shape the future of space defense.
Come join the ITEA award winning TRMC BDKM team and be a part of the team responsible for revolutionizing how analysis is performed across the entire Department of Defense!
Key Responsibilities:
As a Data Engineer, you will be a critical part of the team that is responsible for enabling the development of data-driven decision analysis products through the innovative application, and promotion, of novel methods from data science, machine learning, and operations research to provide robust and flexible testing and evaluation capabilities to support DoD modernization.
* Analytic Experience: Candidate will be a part of the technical team responsible for providing analytic consulting services, supporting analytic workflow and product development and testing, promoting the user adoption of methods and best practices from data science, conducting applied methods projects, and supporting the creation of analysis-ready data.
* Onsite Support: Candidate will be the face of the CHEETAS Team and will be responsible for ensuring stakeholders have the analytical tools, data products and reports they need to make insightful recommendations based on your data driven analysis.
* Stakeholder Assistance: Candidate will directly assist both analyst / technical and non-analyst / non-technical stakeholders with the analysis of DoD datasets, demonstrating the 'art of the possible' to stakeholders and VIPs with insights gained from your analysis of DoD Test and Evaluation (T&E) data.
* Communication: Must effectively communicate at both a programmatic and technical level. Although you potentially may be the only team member physically on-site providing support, you will not be alone. You will have support from other data science team members as well as the software engineering and system administration teams.
* Technical Support: Candidate will be responsible for running and operating CHEETAS (and other tools); demonstrating these tools to stakeholders & VIPs; conveying analysis results; adapting internally-developed tools, notebooks and reports to meet emerging needs; gathering use cases, requirements, gaps and needs from stakeholders and for larger development items providing that information as feature requests or bug reports to the CHEETAS development team; and performing impromptu hands-on training sessions with end users and potentially troubleshooting problems from within closed networks without internet access (with support from distributed team members).
* Independent Work: Candidate must be self-motivated and capable of working independently with little supervision / direct tasking.
Work Environment:
* Location: Onsite; Honolulu, HI
* Travel Requirements: This position will require travel of 25% with potential surge to 50% to support end users located at various DoD ranges & labs located across the US (including Alaska and Hawaii). When not supporting a site, this position can work remotely or from a nearby KBR office (if available and desired).
* Working Hours: Standard, although you potentially may be the only team member physically on-site providing support, you will not be alone.
Basic Qualifications:
* Security Clearance: Active or current TS/SCI Clearance is required
* Education: A degree in operations research, engineering, applied math, statistics, computer science, or information technology, with 15+ years of experience within the DoD preferred. Candidates with 10-15 years of DoD experience will be considered on a case-by-case basis. Entry level candidates will not be considered.
* Technical Experience: Previous experience must include five (5) years of hands-on experience in big data analytics, five (5) years of hands-on experience with object-oriented and functional languages (e.g., Python, R, C++, C#, Java, Scala, etc.).
* Data Experience: Experience in dealing with imperfections in data. Experience should demonstrate competency in key concepts from software engineering, computer programming, statistical analysis, data mining algorithms, machine learning, and modeling sufficient to inform technical choices and infrastructure configuration.
* Data Analytics: Proven analytical skills and experience in preparing and handling large volumes of data for ETL processes. Experience should include working with teams in the development and interpretation of the results of analytic products with DoD specific data types.
* Big Data Infrastructure: Experience in the installation, configuration, and use of big data infrastructure (Spark, Trino, Hadoop, Hive, Neo4J, JanusGraph, HBase, MS SQL Server with Polybase, VMWare as examples). Experience in implementing Data Visualization solutions.
Qualifications Required:
* Experience using scripting languages (Python and R) to process, analyze and visualize data.
* Experience using notebooks (Jupyter Notebooks and RMarkdown) to create reproducible and explainable products.
* Experience using interactive visualization tools (R Shiny, Shiny for Python, Dash) to create interactive analytics.
* Experience generating and presenting reports, visualizations and findings to customers.
* Experience building and optimizing 'big data' data pipelines, architectures and data sets.
* Experience cleaning and preparing time series and geospatial data for analysis.
* Experience working with Windows, Linux, and containers.
* Experience querying databases using SQL and working with and configuring distributed storage and computing environments to conduct analysis (Spark, Trino, Hadoop, Hive, Neo4J, JanusGraph, MongoDB, Accumulo, HBase as examples).
* Experience working with code repositories in a collaborative team.
* Ability to make insightful recommendations based on data driven analysis and customer interactions.
* Ability to effectively communicate both orally and in writing with customers and teammates.
* Ability to speak and present findings in front of large technical and non-technical groups.
* Ability to create documentation and repeatable procedures to enable reproducible research.
* Ability to create training and educational content for novice end users on the use of tools and novel analytic methods.
* Ability to solve problems, debug, and troubleshoot while under pressure and time constraints is required.
* Should be self-motivated to design, develop, enhance, reengineer or integrate software applications to improve the quality of data outputs available for end users.
* Ability to work closely with data scientists to develop and subsequently implement the best technical design and approach for new analytical products.
* Strong analytical skills related to working with both structured and unstructured datasets.
* Excellent programming, testing, debugging, and problem-solving skills.
* Experience designing, building, and maintaining both new and existing data systems and solutions
* Understanding of ETL processes, how they function and experience implementing ETL processes required.
* Knowledge of message queuing, stream processing and extracting value from large disparate datasets.
* Knowledge of software design patterns and Agile Development methodologies is required.
* Knowledge of methods from operations research, statistics, machine learning, data science, and computer science sufficient to select appropriate methods and to configure the data preparation and computing architecture needed to implement these approaches.
* Knowledge of computer programming concepts, data structures, and storage architecture, including relational and non-relational databases, distributed computing frameworks, and modeling and simulation experimentation, sufficient to select appropriate methods and to configure the data preparation and computing architecture needed to implement these approaches.
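Several of the qualifications above come down to querying and aggregating imperfect data with SQL. A self-contained sketch using Python's built-in sqlite3 as a stand-in for a distributed engine like Trino or Hive — the table, column names, and values are invented for the example:

```python
import sqlite3

# In-memory database standing in for a distributed SQL engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (sensor TEXT, ts INTEGER, reading REAL)")
conn.executemany(
    "INSERT INTO telemetry VALUES (?, ?, ?)",
    [("alpha", 1, 10.0), ("alpha", 2, 12.0), ("bravo", 1, 7.5), ("bravo", 2, None)],
)

# Aggregate per sensor. COUNT(reading) and AVG(reading) both skip NULLs,
# a common way to tolerate imperfections in sensor data.
rows = conn.execute(
    """
    SELECT sensor, COUNT(reading) AS n, AVG(reading) AS mean_reading
    FROM telemetry
    GROUP BY sensor
    ORDER BY sensor
    """
).fetchall()
print(rows)  # [('alpha', 2, 11.0), ('bravo', 1, 7.5)]
```

The same GROUP BY / NULL-handling semantics carry over to the distributed engines named in the posting; only the connection layer and scale change.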
Scheduled Weekly Hours: 40
Basic Compensation:
$119,900 - $179,800
The offered rate will be based on the selected candidate's knowledge, skills, abilities and/or experience and in consideration of internal parity.
Additional Compensation:
KBR may offer bonuses, commissions, or other forms of compensation to certain job titles or levels, per internal policy or contractual designation. Additional compensation may be in the form of sign on bonus, relocation benefits, short-term incentives, long-term incentives, or discretionary payments for exceptional performance.
Ready to Make a Difference?
If you're excited about making a significant impact in the field of space defense and working on projects that matter, we encourage you to apply and join our team at KBR. Let's shape the future together.
KBR Benefits
KBR offers a selection of competitive lifestyle benefits which could include 401K plan with company match, medical, dental, vision, life insurance, AD&D, flexible spending account, disability, paid time off, or flexible work schedule. We support career advancement through professional training and development.
Belong, Connect and Grow at KBR
At KBR, we are passionate about our people and our Zero Harm culture. These inform all that we do and are at the heart of our commitment to, and ongoing journey toward being a People First company. That commitment is central to our team of team's philosophy and fosters an environment where everyone can Belong, Connect and Grow. We Deliver - Together.
KBR is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, disability, sex, sexual orientation, gender identity or expression, age, national origin, veteran status, genetic information, union status and/or beliefs, or any other characteristic protected by federal, state, or local law.
$119.9k-179.8k yearly 13d ago
Data Engineer
City of Oxnard, CA 4.3
Senior data scientist job in Oxnard, CA
This recruitment is open until filled. Early submissions are encouraged as applications will be reviewed on a regular and ongoing basis. The City of Oxnard is seeking a skilled and proactive Data Engineer to support our modernization efforts and optimize data workflows across our 300+ applications. This role will be responsible for designing, developing, and maintaining ETL pipelines, data transformations, and legacy systems integrations to ensure seamless data flow between city systems and third-party vendors. You will work closely with integration specialists, database administrators, and business stakeholders to enhance accessibility, reporting, and usability of critical data.
This role is essential in monitoring and maintaining the data health of the city and aiding decision making through analysis and visualization. Therefore, the ideal candidate will have experience in BI and working with stakeholders to create dashboards and reports.
WHAT YOU'LL DO:
* Maintain disparate datasets through ETL pipelines, improve data accessibility, ensure data integrity, and drive and enforce security compliance.
* Prepare strategies for modernizing legacy systems through database migrations or hybrid integrations.
* Collaborate with database administrators and integration specialists to streamline workflows and optimize performance.
* Work with system administrators to ensure data security and access control best practices.
* Support BI initiatives by structuring data and building dashboards for analysts.
* Help train and support business users to use BI tools effectively.
* Maintain technical documentation for data flows, integrations, and system dependencies.
* Analyze data trends, discrepancies, and vendor requests to guide informed decisions.
* Participate in evaluating and recommending software tools and platforms for data work.
Payroll Title/Classification: Business Systems Analyst, Senior
WORK SCHEDULE:
The normal work week is Monday through Friday 8:00 am-6:00 pm with every other Friday off. This position may be required to be on an on-call (stand-by) rotation and you may be required to be available to work additional hours as needed to respond to workload needs.
The City does not offer hybrid or remote work.
Please note: The Information Technology Department supports public safety personnel including the Police Department on a 24-hour, 7-day-per-week schedule, therefore, the candidate may be required to be on call on a rotating basis, subject to callback. As part of the selection process, applicants will be required to successfully complete a thorough background investigation, which may include a polygraph exam.
This class specification represents only the core areas of responsibilities; specific position assignments will vary depending on the needs of the Department.
* Design, build, and maintain ETL pipelines to support data exchange between systems.
* Ensure data consistency, integrity, and compliance with governance regulations such as CJIS.
* Develop integration solutions using APIs, web services, and direct database connections.
* Optimize and transform legacy and modern data for use across applications.
* Collaborate with technical teams to structure and optimize databases for accessibility.
* Implement observability mechanisms to ensure critical data workflows and integrations are traceable, auditable, and monitored for failures or anomalies.
* Perform root cause analysis of data errors, failures, and bottlenecks.
* Write documentation for databases, workflows, custom integrations, and reports.
* Conduct unit and integration testing for pipelines and transformations.
* Research, evaluate, and recommend software tools, platforms, and third-party solutions to meet business and technical requirements.
* Work closely with business leaders, analysts, and department heads to gather requirements and ensure BI solutions align with business objectives.
* Build dashboards and interactive reports, empowering end-users to access and explore data independently.
* Work cross-functionally with stakeholders and non-technical business users.
* Other assigned duties as the role may require.
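Duties such as observability, unit testing, and root cause analysis of data errors often start with a simple validation gate that logs what it rejects. A minimal sketch of that idea — the field names and sample records are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("etl")

# Data-quality contract: every record must carry these fields.
REQUIRED = {"permit_id", "status"}

def validate(record: dict) -> bool:
    """Reject records missing required fields, logging why for later audit."""
    missing = REQUIRED - record.keys()
    if missing:
        log.warning("rejected record %s: missing %s", record, sorted(missing))
        return False
    return True

def run_batch(batch: list[dict]) -> tuple[list[dict], int]:
    """Return (clean rows, rejected count) so failures are observable."""
    clean = [r for r in batch if validate(r)]
    rejected = len(batch) - len(clean)
    log.info("batch complete: %d loaded, %d rejected", len(clean), rejected)
    return clean, rejected

clean, rejected = run_batch(
    [{"permit_id": "P-100", "status": "open"}, {"status": "closed"}]
)
```

Because `run_batch` returns its rejection count instead of silently dropping rows, the pipeline is unit-testable and its failure rate can feed a monitoring dashboard — the traceable, auditable behavior the duties above call for.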
The following are the minimum qualifications necessary for entry into the classification:
Education:
* Bachelor's degree in Computer Science, Business Administration, Information Technology, or a related field.
Experience:
* Minimum of 5 years of hands-on experience in Data Engineering, Integration Engineering, Data Analysis, or a similar field.
* Proven experience with custom reporting, dashboards, or BI tools (Power BI, Tableau, Qlik, or similar).
* Strong proficiency in SQL, Python, or other scripting/ETL languages.
* Strong understanding of database systems (MS SQL Server, MySQL, or similar) including database design and optimization.
* Hands-on experience with ETL processes and tools (SSIS, Talend, Apache Airflow, or similar).
* Knowledge of data modeling, warehousing, and integration protocols (APIs, SFTP, message queues, XML/JSON data exchanges).
Highly Desirable Experience, Qualifications and/or Certifications:
* Local government or public sector experience especially public safety (police and fire).
* Experience with version control systems (e.g., Git or TFS) and collaborative development workflows (e.g., GitFlow).
* Previous experience in a hybrid role combining technical and business analysis responsibilities.
* Knowledge of cloud-based data storage and processing (AWS, Azure, etc.).
Licensing/Certifications:
* Valid CA Class C Driver's License
Other Requirements:
* Must be able to speak and understand English to effectively communicate with fellow employees, customers, and vendors.
APPLICATION PROCESS:
* Submit NEOGOV/Government Jobs on-line application.
* Complete and submit responses to the supplemental questions, if required.
* Upload resume, cover letter, proof of degree (transcript), or other requested documents.
Your application may be rejected as incomplete if you do not include the relevant information in the online application and include the information only on the resume. Applications and/or Supplemental Questionnaires that state "see my resume" or "see my personnel file" are considered incomplete and will not be accepted. Cover letters and/or optional resumes are not accepted in lieu of a completed application.
The list of qualified candidates established from this recruitment may be used to fill other full-time, part-time, and temporary assignments. There is currently one (1) full-time vacancy within the Information Technology Department.
Selected candidate(s) must pass a thorough background investigation.
UNION MEMBERSHIP: Positions in this classification are represented by the Oxnard Mid Manager's Association (OMMA).
NOTE: For most positions, the City of Oxnard relies on office automation (Microsoft Office/Google) and web-based enabled tools, therefore candidates must be proficient and comfortable with computer use to perform functions associated with on-going work.
Regular and reliable attendance, effective communication skills, and development of effective working relationships are requirements of all positions.
Employees are required to participate in the City's direct deposit plan and are paid on a bi-weekly basis.
This position requires a 12 month probationary period.
Pursuant to California Government Code Section 3100, all public employees are required to serve as disaster service workers subject to such disaster service activities as may be assigned to them.
EQUAL OPPORTUNITY: The City of Oxnard is an Equal Opportunity Employer and welcomes applications from all qualified applicants. We do not discriminate on the basis of race, color, religion, sex, national origin, age, marital status, medical condition, disability or sexual orientation.
REASONABLE ACCOMMODATION: The City of Oxnard makes reasonable accommodation for individuals/people with disabilities. If you believe you require special arrangements to participate in the testing process, you must inform the Human Resources Department in writing no later than the filing date. Applicants who request such accommodation must document their request with an explanation of the type and extent of accommodation required.
LEGAL REQUIREMENT: On the first day of employment, new employees must provide proof of citizenship or documentation of legal right to work in the United States in compliance with the Immigration Reform and Control Act of 1986, as amended. The City participates in E-Verify and will provide the federal government with your form I-9 information to confirm that you are authorized to work in the U.S. If E-Verify cannot confirm that you are authorized to work, this employer is required to give you written instructions and an opportunity to contact Department of Homeland Security (DHS) or Social Security Administration (SSA) so you can begin to resolve the issue before the employer can take any action against you, including terminating your employment. Employers can only use E-Verify once you have accepted a job offer and completed the Form I-9. For more information on E-Verify, please contact DHS. ************ dhs.gov/e-verify
If you have any questions regarding this recruitment, please contact Ashley Costello at **************************.
NOTE: The provisions of this bulletin do not constitute an expressed or implied contract. Any provision contained in this bulletin may be modified or revoked without notice.
How much does a senior data scientist earn in Santa Barbara, CA?
The average senior data scientist in Santa Barbara, CA earns between $97,000 and $195,000 annually. This compares to the national average senior data scientist range of $90,000 to $170,000.
Average senior data scientist salary in Santa Barbara, CA