Principal Platform Software Engineer - OpenBMC Platform Architect
Data engineer job in Santa Clara, CA
NVIDIA's invention of the GPU in 1999 fueled the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI - the next era of computing - with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as “the AI computing company.” We're looking to grow our company, and form teams with the most inquisitive people in the world. Join us at the forefront of technological advancement.
Are you ready to shape the next generation of computing? We are looking for a principal platform software architect who can lead next-generation data center server platform architecture, drive bring-up, and carry the solution through to production.
What you'll be doing:
Platform architecture and hardware bring-up of NVIDIA HGX GPU baseboards. Software architecture and design for various firmware, applying an understanding of embedded-system limitations and Linux kernel internals to meet performance, scalability, and resiliency requirements for firmware running on embedded devices.
Work closely with hardware teams to influence hardware design and review HW architecture & schematics.
Work with internal and external team members to converge on performance and resiliency requirements for firmware running on NVIDIA data center products. Hands-on coding, code review, and BMC firmware development, including various manageability features for NVIDIA's server platforms.
Actively engage in designing and developing the CI/CD framework that ensures the best quality for firmware. Write and review design documents, review QA test plans, and work closely with all collaborators to achieve consensus on design and testability per product requirements.
Design solutions for errors, statistics, and configuration appropriate to CPUs, GPUs, DIMMs, SSDs, NICs, InfiniBand, PSUs, BMCs, FPGAs, CPLDs, etc., for enterprise readiness of NVIDIA server platforms.
Actively work with the whole organization to instrument code for maximum code coverage, writing and automating unit tests for each implemented module and maintaining detailed unit test case reports.
Mentor the team on best practices for writing efficient, bug-free code. Work with internal and external partners to drive design architecture into real products.
Work with the security team to ensure developed code is in line with product security goals.
What we need to see:
Bachelor of Science Degree (or higher) or equivalent experience in Electrical or Computer Engineering or Computer Science.
15+ years of active development using C/C++ as your primary programming language on Linux.
8+ years of experience technically leading a sizable team to deliver large firmware or software projects. 5+ years of experience working across internal and external stakeholders to converge on requirements, convert those requirements into an architecture, and drive a team to deliver it with quality.
Proven track record of delivering solutions to customers, and a deep understanding of deployments at scale.
Domain expertise in data center firmware/software development on x86 or ARM platforms, including BMC-BIOS communication, thermal management, power management, firmware update, device monitoring, firmware security, etc.
Board bring-up expertise with hands-on experience in device drivers (I2C/I3C, SPI, PCIe, SMBus, mailbox, etc.) as well as device trees for U-Boot and the Linux kernel.
Understanding of the REST architectural style, especially JSON over HTTPS with OAuth (see the sketch after this list).
Strong C/C++ programming in a Linux environment, a strong understanding of Linux kernel internals, and strong code-review skills.
You should possess excellent written and oral communication skills, a strong work ethic, and a high sense of teamwork; you love producing quality work and are committed to finishing your tasks every single day. You are a self-starter who loves to find creative solutions to complicated problems.
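As a concrete illustration of the REST requirement above, here is a minimal sketch of querying a BMC's Redfish service from Python. The host address, credentials, and chassis path are hypothetical placeholders; BMCs expose /redfish/v1 per the DMTF Redfish specification, but exact resource paths vary by platform.

```python
# Minimal sketch: Redfish session login + thermal telemetry read.
# Host, credentials, and chassis path are illustrative placeholders.
import requests

BMC_HOST = "https://192.0.2.10"  # hypothetical BMC address

# Create a Redfish session; the service returns an X-Auth-Token header.
resp = requests.post(
    f"{BMC_HOST}/redfish/v1/SessionService/Sessions",
    json={"UserName": "admin", "Password": "example-password"},
    verify=False,  # lab-only: BMCs commonly ship self-signed certs
    timeout=10,
)
resp.raise_for_status()
token = resp.headers["X-Auth-Token"]

# Read thermal sensors for the first chassis (path varies by platform).
thermal = requests.get(
    f"{BMC_HOST}/redfish/v1/Chassis/1/Thermal",
    headers={"X-Auth-Token": token},
    verify=False,
    timeout=10,
).json()

for sensor in thermal.get("Temperatures", []):
    print(sensor.get("Name"), sensor.get("ReadingCelsius"))
```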
Ways to stand out from the crowd:
Consistent track record in delivering 100,000+ lines of code for a single project.
Proven record of technically leading an organization of 30+ engineers.
Expertise in system software and platform security for x86/ARM based Rack/Blade server systems.
NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 272,000 USD - 425,500 USD.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until December 10, 2025. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Data Scientist
Data engineer job in San Francisco, CA
We're working with a Series A health tech start-up pioneering a revolutionary approach to healthcare AI, developing neurosymbolic systems that combine statistical learning with structured medical knowledge. Their technology is being adopted by leading health systems and insurers to enhance patient outcomes through advanced predictive analytics.
We're seeking Machine Learning Engineers who excel at the intersection of data science, modeling, and software engineering. You'll design and implement models that extract insights from longitudinal healthcare data, balancing analytical rigor, interpretability, and scalability.
This role offers a unique opportunity to tackle foundational modeling challenges in healthcare, where your contributions will directly influence clinical, actuarial, and policy decisions.
Key Responsibilities
Develop predictive models to forecast disease progression, healthcare utilization, and costs using temporal clinical data (claims, EHR, laboratory results, pharmacy records); a sketch follows this list
Design interpretable and explainable ML solutions that earn the trust of clinicians, actuaries, and healthcare decision-makers
Research and prototype innovative approaches leveraging both classical and modern machine learning techniques
Build robust, scalable ML pipelines for training, validation, and deployment in distributed computing environments
Collaborate cross-functionally with data engineers, clinicians, and product teams to ensure models address real-world healthcare needs
Communicate findings and methodologies effectively through visualizations, documentation, and technical presentations
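As one concrete illustration of interpretable modeling on temporal clinical data, here is a minimal sketch using the open-source lifelines library, a plausible choice given the survival-analysis experience valued below. The columns and values are invented stand-ins for real claims/EHR features.

```python
# Minimal sketch: Cox proportional-hazards model on toy longitudinal features.
# Column names and data are hypothetical illustrations only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_observed": [12, 30, 24, 6, 48, 36, 18, 60],
    "progressed":      [1, 0, 1, 1, 0, 0, 1, 0],   # event indicator
    "age":             [71, 54, 63, 80, 49, 58, 62, 77],
    "prior_admits":    [2, 0, 1, 3, 0, 1, 0, 2],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_observed", event_col="progressed")

# Hazard ratios give clinicians a directly interpretable per-feature effect.
cph.print_summary()
```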
Required Qualifications
Strong foundation in statistical modeling, machine learning, or data science, with preference for experience in temporal or longitudinal data analysis
Proficiency in Python and ML frameworks (PyTorch, JAX, NumPyro, PyMC, etc.)
Proven track record of transitioning models from research prototypes to production systems
Experience with probabilistic methods, survival analysis, or Bayesian inference (highly valued)
Bonus Qualifications
Experience working with clinical data and healthcare terminologies (ICD, CPT, SNOMED CT, LOINC)
Background in actuarial modeling, claims forecasting, or risk adjustment methodologies
Lead Data Scientist - Computer Vision
Data engineer job in Santa Clara, CA
Lead Data Scientist - Computer Vision/Image Processing
About the Role
We are seeking a Lead Data Scientist to drive the strategy and execution of data science initiatives, with a particular focus on computer vision systems and image processing. The ideal candidate has deep expertise in image processing techniques including filtering, binary morphology, perspective/affine transformation, and edge detection.
Qualifications
Solid knowledge of computer vision programs and image processing techniques: filtering, binary morphology, perspective/affine transformation, edge detection (see the sketch after this list)
Strong understanding of machine learning: Regression, Supervised and Unsupervised Learning
Proficiency in Python and libraries such as OpenCV, NumPy, scikit-learn, TensorFlow/PyTorch.
Familiarity with version control (Git) and collaborative development practices
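As a quick illustration of the four named techniques, here is a minimal OpenCV sketch; the input file is a hypothetical placeholder.

```python
# Minimal sketch: filtering, edge detection, binary morphology, and a
# perspective transform with OpenCV. "sample.png" is a hypothetical file.
import cv2
import numpy as np

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

# Filtering: Gaussian blur suppresses noise before edge detection.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Edge detection: Canny with conventional hysteresis thresholds.
edges = cv2.Canny(blurred, 50, 150)

# Binary morphology: closing fills small gaps in the edge map.
kernel = np.ones((3, 3), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Perspective transform: map a quadrilateral region onto a 200x200 square.
src = np.float32([[10, 10], [210, 20], [200, 220], [20, 210]])
dst = np.float32([[0, 0], [200, 0], [200, 200], [0, 200]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (200, 200))
```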
Data Scientist
Data engineer job in Pleasanton, CA
Key Responsibilities
Design and develop marketing-focused machine learning models (see the sketch after this responsibilities list), including:
Customer segmentation
Propensity, churn, and lifetime value (LTV) models
Campaign response and uplift models
Attribution and marketing mix models (MMM)
Build and deploy NLP solutions for:
Customer sentiment analysis
Text classification and topic modeling
Social media, reviews, chat, and voice-of-customer analytics
Apply advanced statistical and ML techniques to solve real-world business problems.
Work with structured and unstructured data from multiple marketing channels (digital, CRM, social, email, web).
Translate business objectives into analytical frameworks and actionable insights.
Partner with stakeholders to define KPIs, success metrics, and experimentation strategies (A/B testing).
Optimize and productionize models using MLOps best practices.
Mentor junior data scientists and provide technical leadership.
Communicate complex findings clearly to technical and non-technical audiences.
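As one illustration of the propensity/churn modeling named above, here is a minimal scikit-learn sketch; the features, column names, and data are invented for demonstration.

```python
# Minimal sketch: gradient-boosted churn classifier on toy features.
# All columns and values are hypothetical stand-ins for real CRM data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "tenure_months":   [3, 40, 12, 60, 8, 25, 2, 48],
    "monthly_spend":   [20.0, 85.5, 40.0, 99.0, 25.0, 60.0, 15.0, 90.0],
    "email_opens_90d": [1, 12, 4, 20, 0, 7, 1, 15],
    "churned":         [1, 0, 1, 0, 1, 0, 1, 0],
})

X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```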
Required Skills & Qualifications
7+ years of experience in Data Science, with a strong focus on marketing analytics.
Strong expertise in Machine Learning (supervised & unsupervised techniques).
Hands-on experience with NLP techniques, including:
Text preprocessing and feature extraction
Word embeddings (Word2Vec, GloVe, Transformers)
Experience with Large Language Models (LLMs) is a plus
Proficiency in Python (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch).
Experience with SQL and large-scale data processing.
Strong understanding of statistics, probability, and experimental design.
Experience working with cloud platforms (AWS, Azure, or GCP).
Ability to translate data insights into business impact.
Nice to Have
Experience with marketing automation or CRM platforms.
Knowledge of MLOps, model monitoring, and deployment pipelines.
Familiarity with GenAI/LLM-based NLP use cases for marketing.
Prior experience in consumer, e-commerce, or digital marketing domains.
EEO
Centraprise is an equal opportunity employer. Your application and candidacy will not be considered based on race, color, sex, religion, creed, sexual orientation, gender identity, national origin, disability, genetic information, pregnancy, veteran status or any other characteristic protected by federal, state or local laws.
Data Engineer
Data engineer job in San Francisco, CA
Elevate Data Engineer
Hybrid, CA
Brooksource is searching for an Associate Data Engineer to join our healthcare partner to support their data analytics groups. This position is part of Brooksource's Elevate Program and will include additional technical training in areas including, but not limited to, SQL, Python, dbt, and Azure.
Responsibilities
Assist in the design, development, and implementation of ELT/ETL data pipelines using Azure-based technologies
Support data warehouse environments for large-scale enterprise systems
Help implement and maintain data models following best practices
Participate in data integration efforts to support reporting and analytics needs
Perform data validation, troubleshooting, and incident resolution for data pipelines
Support documentation of data flows, transformations, and architecture
DevOps & Platform Support
Assist with DevOps activities related to data platforms, including deployments and environment support
Help build and maintain automation scripts and reusable frameworks for data operations
Support CI/CD pipelines for data engineering workflows
Assist with monitoring, alerting, and basic performance optimization
Collaborate with senior engineers to support infrastructure-as-code and cloud resource management
Collaboration & Delivery
Work closely with data engineers, solution leads, data modelers, analysts, and business partners
Help translate business requirements into technical data solutions
Participate in code reviews, sprint planning, and team ceremonies
Follow established architecture, security, and data governance standards
Required Qualifications
Bachelor's degree in Computer Science, Engineering, Information Systems, or related field (or equivalent experience)
Foundational knowledge of data engineering concepts, including ETL/ELT and data warehousing
Experience or coursework with SQL and relational databases
Familiarity with Microsoft Azure or another cloud platform
Basic scripting experience (Python, SQL, PowerShell, or Bash)
Understanding of version control (Git)
Preferred / Nice-to-Have Skills
Exposure to Azure services such as Azure Data Factory, Synapse Analytics, Azure SQL, or Data Lake
Basic understanding of CI/CD pipelines and DevOps concepts
Familiarity with data modeling concepts (star schema, normalization)
Interest in automation, cloud infrastructure, and reliability engineering
Internship or project experience in data engineering or DevOps environments
Data Scientist with Gen Ai and Python experience
Data engineer job in Palo Alto, CA
About the Company
Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction.
Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.
Here are the job details:
Data Scientist with Gen Ai and Python experience
Palo Alto, CA - 5 days onsite
Interview Mode: Phone & F2F
Job Overview:
We are looking for a competent Data Scientist who is independent, results-driven, and capable of taking business requirements and building out the technology to generate statistically sound analysis and production-grade ML models.
Data science skills with GenAI and LLM knowledge.
Expertise in Python/Spark and their related libraries and frameworks.
Experience building ML training pipelines and with the work involved in ML model deployment.
Experience with other ML concepts: real-time distributed model inference pipelines, champion/challenger frameworks, and A/B testing.
Familiarity with DS/ML production implementation.
Excellent problem-solving skills, with attention to detail, focus on quality and timely delivery of assigned tasks.
Prior knowledge of Azure cloud and Databricks will be a big plus.
Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
Founding Data Scientist (GTM)
Data engineer job in San Francisco, CA
An early-stage investment of ours is looking to make their first IC hire in data science. This company builds tools that help teams understand how their AI systems perform and improve them over time (and they already have a lot of enterprise customers).
We're looking for a Sr Data Scientist to lead analytics for sales, marketing, and customer success. The job is about finding insights in data, running analyses and experiments, and helping the business make better decisions.
Responsibilities:
Analyze data to improve how the company finds, converts, and supports customers
Create models that predict lead quality, conversion, and customer value
Build clear dashboards and reports for leadership
Work with teams across the company to answer key questions
Take initiative, communicate clearly, and dig into data to solve problems
Try new methods and tools to keep improving the company's GTM approach
Qualifications:
5+ years of related industry experience working with data and supporting business teams.
Solid experience analyzing GTM or revenue-related data
Strong skills in SQL and modern analytics tools (Snowflake, Hex, dbt etc.)
Comfortable owning data workflows, from cleaning and modeling to presenting insights.
Able to work independently, prioritize well, and move projects forward without much direction
Clear thinker and communicator who can turn data into actionable recommendations
Adaptable and willing to learn new methods in a fast-paced environment
About Us:
Greylock is an early-stage investor in hundreds of remarkable companies including Airbnb, LinkedIn, Dropbox, Workday, Cloudera, Facebook, Instagram, Roblox, Coinbase, Palo Alto Networks, among others. More can be found about us here: *********************
How We Work:
We are full-time, salaried employees of Greylock and provide free candidate referrals/introductions to our active investments. We will contact anyone who looks like a potential match, requesting to schedule a call with you immediately.
Due to the selective nature of this service and the volume of applicants we typically receive from our job postings, a follow-up email will not be sent until a match is identified with one of our investments.
Please note: We are not recruiting for any roles within Greylock at this time. This job posting is for direct employment with a startup in our portfolio.
Data Eng with Commercial Knowledge
Data engineer job in Foster City, CA
Seeking a Data Stewardship and Governance team member with strong expertise in Pharma Commercial Marketing and Sales domains.
The role involves managing HCP/HCO master data, payer/formulary structures, sales roster and territory alignment, and campaign and channel taxonomy, and ensuring compliance with governance policies.
Responsibilities include defining data standards, stewardship workflows, and metadata management; performing data profiling, quality checks, and remediation; and supporting privacy/compliance (HIPAA, GDPR, Sunshine Act).
Candidate must have strong SQL skills for data validation, reconciliation, and analysis, along with experience in data integration and transformation using AWS Glue or similar ETL tools.
Familiarity with cloud data platforms (Snowflake, Redshift, Databricks), governance tools (Collibra, Informatica Axon/EDC), and MDM solutions (Veeva Network, Reltio, Informatica MDM) is preferred.
Ability to work with Pharma data sources (IQVIA, claims, specialty pharmacy, CRM, marketing automation, sales performance data) and collaborate with commercial, marketing, and compliance teams is essential.
Strong analytical, documentation, and communication skills required.
Optical Sensing, Hardware Data Analysis Engineer for a Global Consumer Device Company
Data engineer job in Cupertino, CA
Our optical sensing team develops optical sensors for next-generation products. The team is seeking a self-driven go-getter with strong Python skills and strong experience in optical instruments, data analysis, and data visualization.
Responsibilities:
Manage and report the engineering build process, using Python and JMP to analyze large sets of data and track key figures of merit.
Validate the ambient light sensors' color and lux sensing performance using Python and spectrometers.
Assist with miscellaneous lab work to conduct failure analysis or research, such as display light leakage, cover glass properties, effects from the thermal environment, etc.
Support the creation of a performance simulation model using MATLAB.
Lead end-to-end lab validation to support new optical sensor development.
Develop and implement validation plans for hardware/software designs.
Benchmark optical sensor performance from early prototype to product launch.
Provide guidance and recommendation to production line testing requirements.
Analyze data to draw conclusions and provide feedback to product design.
Convert data to visual plots and charts (see the sketch after this list).
Collaborate with cross-functional teams including Optical Engineering, Mechanical Engineering, Electrical Engineering and Process Engineering to deliver state-of-the-art sensing solutions.
Deliver presentations of results in regular review with cross-functional teams.
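As referenced in the list above, here is a minimal sketch of summarizing and plotting sensor build data with pandas and matplotlib; the file name and columns are hypothetical stand-ins for a real build-test export.

```python
# Minimal sketch: per-stage error of sensor lux vs. a spectrometer reference.
# "als_build_data.csv" and its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("als_build_data.csv")

# Percent error of the sensor's lux reading against the reference instrument.
df["lux_error_pct"] = 100 * (df["sensor_lux"] - df["ref_lux"]) / df["ref_lux"]
print(df.groupby("build_stage")["lux_error_pct"].describe())

df.boxplot(column="lux_error_pct", by="build_stage")
plt.ylabel("Lux error (%)")
plt.savefig("lux_error_by_stage.png")
```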
Requirements:
Degree in Optics, Physics, Electrical Engineering or equivalent. B.S./M.S. and industry experience, or Ph.D.
Strong background in optical measurements and data analysis.
Experience in using Python or other coding languages for lab equipment control, data acquisition, and instrument automation.
Ability to write, rewrite, revise, customize, and automate scripts.
Hands-on experience with optical lab equipment (light sources, spectrometers, detectors, oscilloscopes, free space optics on optical bench, etc.).
Excellent written and verbal communication skills.
Solid teamwork and self-motivation for technical challenges.
Preferred Skillset:
Both Hardware and Software background
Type: Contract (12+ months)
Location: Cupertino, CA (100% onsite)
Data Engineer
Data engineer job in Pleasanton, CA
Hi
Job Title: Data Engineer
The hiring manager prefers the candidate to be onsite in Pleasanton.
Proficiency in Spark, Python, and SQL is essential for this role. 10+ years of experience with relational databases such as Oracle, NoSQL databases including MongoDB and Cassandra, and big data technologies, particularly Databricks, is required. Strong knowledge of data modeling techniques is necessary for designing efficient and scalable data structures. Familiarity with APIs and web services, including REST and SOAP, is important for integrating various data sources and ensuring seamless data flow. This role involves leveraging these technical skills to build and maintain robust data pipelines and support advanced data analytics; a brief sketch follows the skills list below.
SKILLS:
- Spark/Python/SQL
- Relational Database (Oracle) / NoSQL Database (MongoDB/ Cassandra) / Databricks
- Big Data technologies - Databricks preferred
- Data modelling techniques
- APIs and web services (REST/ SOAP)
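As mentioned above, a brief sketch of the Spark/Python/SQL combination; the source path and schema are hypothetical.

```python
# Minimal sketch: batch aggregation in PySpark exposed to SQL consumers.
# Input path and columns are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

orders = spark.read.json("/data/raw/orders/")  # hypothetical source

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Register a SQL view for downstream analysts.
daily.createOrReplaceTempView("daily_revenue")
spark.sql("SELECT * FROM daily_revenue ORDER BY order_date DESC LIMIT 10").show()
```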
If interested, please share the details below along with an updated resume:
Full Name:
Phone:
E-mail:
Rate:
Location:
Visa Status:
Availability:
SSN (last 4 digits):
Date of Birth:
LinkedIn Profile:
Availability for the interview:
Availability for the project:
Data Engineer III
Data engineer job in Cupertino, CA
This is a data engineer role focused on processing battery test data to facilitate battery algorithm delivery, and on supporting battery algorithm simulations that validate the algorithms and project the product KPIs.
Requires battery modeling and algorithm knowledge and hands-on experience in data analysis and MATLAB programming.
Experience with MATLAB is required; C++/Python is a plus.
Experience with machine learning, optimization, and control algorithms is a plus
Degree in Data Science/EE/CS/ChemE/MechE is preferred.
About PTR Global: PTR Global is a leading provider of information technology and workforce solutions. PTR Global has become one of the largest providers in its industry, with over 5000 professionals providing services across the U.S. and Canada. For more information visit *****************
At PTR Global, we understand the importance of your privacy and security. We NEVER ASK job applicants to:
Pay any fee to be considered for, submitted to, or selected for any opportunity.
Purchase any product, service, or gift cards from us or for us as part of an application, interview, or selection process.
Provide sensitive financial information such as credit card numbers or banking information. Successfully placed or hired candidates would only be asked for banking details after accepting an offer from us during our official onboarding processes as part of payroll setup.
Pay Range: $75 - $85 per hour
The specific compensation for this position will be determined by a number of factors, including the scope, complexity and location of the role as well as the cost of labor in the market; the skills, education, training, credentials and experience of the candidate; and other conditions of employment. Our full-time consultants have access to benefits including medical, dental, vision and 401K contributions as well as any other PTO, sick leave, and other benefits mandated by applicable states or localities where you reside or work.
If you receive a suspicious message, email, or phone call claiming to be from PTR Global do not respond or click on any links. Instead, contact us directly at ***************. To report any concerns, please email us at *******************
Imaging Data Engineer/Architect
Data engineer job in San Francisco, CA
About us:
Intuitive is an innovation-led engineering company delivering business outcomes for hundreds of enterprises globally. With a reputation as a tiger team and a trusted partner of enterprise technology leaders, we help solve the most complex digital transformation challenges across the following Intuitive Superpowers:
Modernization & Migration
Application & Database Modernization
Platform Engineering (IaC/EaC, DevSecOps & SRE)
Cloud Native Engineering, Migration to Cloud, VMware Exit
FinOps
Data & AI/ML
Data (Cloud Native / DataBricks / Snowflake)
Machine Learning, AI/GenAI
Cybersecurity
Infrastructure Security
Application Security
Data Security
AI/Model Security
SDx & Digital Workspace (M365, G-suite)
SDDC, SD-WAN, SDN, NetSec, Wireless/Mobility
Email, Collaboration, Directory Services, Shared Files Services
Intuitive Services:
Professional and Advisory Services
Elastic Engineering Services
Managed Services
Talent Acquisition & Platform Resell Services
About the job:
Title: Imaging Data Engineer/Architect
Start Date: Immediate
# of Positions: 1
Position Type: Contract/ Full-Time
Location: San Francisco, CA
Notes:
An Imaging Data Engineer/Architect who understands radiology and digital pathology, along with the related clinical data and metadata.
Hands-on experience with the above technologies, and good knowledge of biomedical imaging and data pipelines overall.
About the Role
We are seeking a highly skilled Imaging Data Engineer/Architect to join our San Francisco team as a Subject Matter Expert (SME) in radiology and digital pathology. This role will design and manage imaging data pipelines, ensuring seamless integration of clinical data and metadata to support advanced diagnostic and research applications. The ideal candidate will have deep expertise in medical imaging standards, cloud-based data architectures, and healthcare interoperability, contributing to innovative solutions that enhance patient outcomes.
Responsibilities
Design and implement scalable data architectures for radiology and digital pathology imaging data, including DICOM, HL7, and FHIR standards (see the sketch after this list).
Develop and optimize data pipelines to process and store large-scale imaging datasets (e.g., MRI, CT, histopathology slides) and associated metadata.
Collaborate with clinical teams to understand radiology and pathology workflows, ensuring data solutions align with clinical needs.
Ensure data integrity, security, and compliance with healthcare regulations (e.g., HIPAA, GDPR).
Integrate imaging data with AI/ML models for diagnostic and predictive analytics, working closely with data scientists.
Build and maintain metadata schemas to support data discoverability and interoperability across systems.
Provide technical expertise to cross-functional teams, including product managers and software engineers, to drive imaging data strategy.
Conduct performance tuning and optimization of imaging data storage and retrieval systems in cloud environments (e.g., AWS, Google Cloud, Azure).
Document data architectures and processes, ensuring knowledge transfer to internal teams and external partners.
Stay updated on emerging imaging technologies and standards, proposing innovative solutions to enhance data workflows.
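As referenced above, here is a minimal sketch of indexing DICOM metadata with the open-source pydicom library, one plausible building block for the pipelines described; the directory and output paths are hypothetical.

```python
# Minimal sketch: build a tabular index of DICOM metadata for a pipeline.
# The input directory and output file are hypothetical placeholders.
from pathlib import Path
import pandas as pd
import pydicom

rows = []
for path in Path("/data/imaging/incoming").glob("**/*.dcm"):
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # metadata only
    rows.append({
        "sop_instance_uid": ds.SOPInstanceUID,
        "study_uid": ds.StudyInstanceUID,
        "modality": ds.Modality,
        "study_date": ds.get("StudyDate"),
        "path": str(path),
    })

index = pd.DataFrame(rows)
index.to_parquet("imaging_index.parquet")  # feeds downstream pipelines
```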
Qualifications
Education: Bachelor's degree in Computer Science, Biomedical Engineering, or a related field (master's preferred).
Experience:
5+ years in data engineering or architecture, with at least 3 years focused on medical imaging (radiology and/or digital pathology).
Proven experience with DICOM, HL7, FHIR, and imaging metadata standards (e.g., SNOMED, LOINC).
Hands-on experience with cloud platforms (AWS, Google Cloud, or Azure) for imaging data storage and processing.
Technical Skills:
Proficiency in programming languages (e.g., Python, Java, SQL) for data pipeline development.
Expertise in ETL processes, data warehousing, and database management (e.g., Snowflake, BigQuery, PostgreSQL).
Familiarity with AI/ML integration for imaging data analytics.
Knowledge of containerization (e.g., Docker, Kubernetes) for deploying data solutions.
Domain Knowledge:
Deep understanding of radiology and digital pathology workflows, including PACS and LIS systems.
Familiarity with clinical data integration and healthcare interoperability standards.
Soft Skills:
Strong analytical and problem-solving skills to address complex data challenges.
Excellent communication skills to collaborate with clinical and technical stakeholders.
Ability to work independently in a fast-paced environment, with a proactive approach to innovation.
Certifications (preferred):
AWS Certified Solutions Architect, Google Cloud Professional Data Engineer, or equivalent.
Certifications in medical imaging (e.g., CIIP - Certified Imaging Informatics Professional).
Data Engineer (SQL / SQL Server Focus)
Data engineer job in San Francisco, CA
Data Engineer (SQL / SQL Server Focus) (Kindly note: we cannot provide sponsorship for this role.)
A leading professional services organization is seeking an experienced Data Engineer to join its team. This role supports enterprise-wide systems, analytics, and reporting initiatives, with a strong emphasis on SQL Server-based data platforms.
Key Responsibilities
Design, develop, and optimize SQL Server-centric ETL/ELT pipelines to ensure reliable, accurate, and timely data movement across enterprise systems.
Develop and maintain SQL Server data models, schemas, and tables to support financial analytics and reporting.
Write, optimize, and maintain complex T-SQL queries, stored procedures, functions, and views with a strong focus on performance and scalability (see the sketch after this list).
Build and support SQL Server Reporting Services (SSRS) solutions, translating business requirements into clear, actionable reports.
Partner with finance and business stakeholders to define KPIs and ensure consistent, trusted reporting outputs.
Monitor, troubleshoot, and tune SQL Server workloads, including query performance, indexing strategies, and execution plans.
Ensure adherence to data governance, security, and access control standards within SQL Server environments.
Support documentation, version control, and change management for database and reporting solutions.
Collaborate closely with business analysts, data engineers, and IT teams to deliver end-to-end data solutions.
Mentor junior team members and contribute to database development standards and best practices.
Act as a key contributor to enterprise data architecture and reporting strategy, particularly around SQL Server platforms.
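As referenced above, here is a minimal sketch of running a parameterized T-SQL query from Python via pyodbc; the server, database, and table names are placeholders.

```python
# Minimal sketch: parameterized T-SQL from Python via pyodbc.
# Connection string and object names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlprod.example.com;DATABASE=FinanceDW;"
    "Trusted_Connection=yes;"
)
cur = conn.cursor()

# Parameterized query: avoids injection and lets SQL Server reuse the plan.
cur.execute(
    """
    SELECT TOP (10) account_id, SUM(amount) AS total
    FROM dbo.ledger_entries
    WHERE posted_date >= ?
    GROUP BY account_id
    ORDER BY total DESC
    """,
    "2025-01-01",
)
for row in cur.fetchall():
    print(row.account_id, row.total)
```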
Required Education & Experience
Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
8+ years of hands-on experience working with SQL Server in enterprise data warehouse or financial reporting environments.
Advanced expertise in T-SQL, including:
Query optimization
Index design and maintenance
Stored procedures and performance tuning
Strong experience with SQL Server Integration Services (SSIS) and SSRS.
Solid understanding of data warehousing concepts, including star and snowflake schemas, and OLAP vs. OLTP design.
Experience supporting large, business-critical databases with high reliability and performance requirements.
Familiarity with Azure-based SQL Server deployments (Azure SQL, Managed Instance, or SQL Server on Azure VMs) is a plus.
Strong analytical, problem-solving, and communication skills, with the ability to work directly with non-technical stakeholders.
Data Engineer
Data engineer job in San Francisco, CA
You'll work closely with engineering, analytics, and product teams to ensure data is accurate, accessible, and efficiently processed across the organization.
Key Responsibilities:
Design, develop, and maintain scalable data pipelines and architectures.
Collect, process, and transform data from multiple sources into structured, usable formats.
Ensure data quality, reliability, and security across all systems.
Work with data analysts and data scientists to optimize data models for analytics and machine learning.
Implement ETL (Extract, Transform, Load) processes and automate workflows.
Monitor and troubleshoot data infrastructure, ensuring minimal downtime and high performance.
Collaborate with cross-functional teams to define data requirements and integrate new data sources.
Maintain comprehensive documentation for data systems and processes.
Requirements:
Proven experience as a Data Engineer, ETL Developer, or similar role.
Strong programming skills in Python, SQL, or Scala.
Experience with data pipeline tools (Airflow, dbt, Luigi, etc.); a minimal sketch follows this list.
Familiarity with big data technologies (Spark, Hadoop, Kafka, etc.).
Hands-on experience with cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks).
Understanding of data modeling, warehousing, and schema design.
Solid knowledge of database systems (PostgreSQL, MySQL, NoSQL).
Strong analytical and problem-solving skills.
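As mentioned above, here is a minimal Airflow sketch of the pipeline tooling named in the requirements; the DAG id and task bodies are placeholders, assuming Airflow 2.4+.

```python
# Minimal sketch: a two-step extract -> transform DAG.
# Task bodies are placeholders for real pipeline logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source API")  # placeholder

def transform():
    print("clean and load to warehouse")  # placeholder

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2
```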
Data Engineer - Scientific Data Ingestion
Data engineer job in San Francisco, CA
We envision a world where novel drugs and therapies reach patients in months, not years, accelerating breakthroughs that save lives.
Mithrl is building the world's first commercially available AI Co-Scientist, a discovery engine that empowers life science teams to go from messy biological data to novel insights in minutes. Scientists ask questions in natural language, and Mithrl answers with real analysis, novel targets, and patent-ready reports. No coding. No waiting. No bioinformatics bottlenecks.
We are the fastest growing tech-bio startup in the Bay Area with over 12X YoY revenue growth. Our platform is already being used by teams at some of the largest biotechs and big pharma across three continents to accelerate and uncover breakthroughs, from target discovery to mechanism of action.
WHAT YOU WILL DO
Build and own an AI-powered ingestion & normalization pipeline to import data from a wide variety of sources: unprocessed Excel/CSV uploads, lab and instrument exports, as well as processed data from internal pipelines (see the sketch after this list).
Develop robust schema mapping, coercion, and conversion logic (think: units normalization, metadata standardization, variable-name harmonization, vendor-instrument quirks, plate-reader formats, reference-genome or annotation updates, batch-effect correction, etc.).
Use LLM-driven and classical data-engineering tools to structure “semi-structured” or messy tabular data - extracting metadata, inferring column roles/types, cleaning free-text headers, fixing inconsistencies, and preparing final clean datasets.
Ensure all transformations that should only happen once (normalization, coercion, batch-correction) execute during ingestion - so downstream analytics / the AI “Co-Scientist” always works with clean, canonical data.
Build validation, verification, and quality-control layers to catch ambiguous, inconsistent, or corrupt data before it enters the platform.
Collaborate with product teams, data science / bioinformatics colleagues, and infrastructure engineers to define and enforce data standards, and ensure pipeline outputs integrate cleanly into downstream analysis and storage systems.
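As referenced above, here is a minimal pandas sketch of the header-harmonization and unit-normalization steps described; the file name, column aliases, and unit rule are invented stand-ins for real lab conventions.

```python
# Minimal sketch: normalize a messy instrument export at ingestion time.
# File name, aliases, and unit rule are hypothetical illustrations.
import pandas as pd

raw = pd.read_excel("plate_reader_export.xlsx", sheet_name=0)

# Harmonize free-text headers to canonical names.
ALIASES = {"Sample ID": "sample_id", "sample": "sample_id",
           "Conc. (ng/uL)": "conc_ng_per_ul", "conc_ug_ml": "conc_ug_per_ml"}
raw = raw.rename(columns=lambda c: ALIASES.get(str(c).strip(),
                                               str(c).strip().lower()))

# Coerce types and normalize units once, at ingestion.
if "conc_ug_per_ml" in raw.columns:
    # 1 ug/mL == 1 ng/uL, so the numeric value carries over directly.
    raw["conc_ng_per_ul"] = pd.to_numeric(raw["conc_ug_per_ml"],
                                          errors="coerce")

clean = raw.dropna(subset=["sample_id"]).reset_index(drop=True)
clean.to_parquet("clean/plate_reader.parquet")
```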
WHAT YOU BRING
Must-have
5+ years of experience in data engineering / data wrangling with real-world tabular or semi-structured data.
Strong fluency in Python, and data processing tools (Pandas, Polars, PyArrow, or similar).
Excellent experience dealing with messy Excel / CSV / spreadsheet-style data - inconsistent headers, multiple sheets, mixed formats, free-text fields - and normalizing it into clean structures.
Comfort designing and maintaining robust ETL/ELT pipelines, ideally for scientific or lab-derived data.
Ability to combine classical data engineering with LLM-powered data normalization / metadata extraction / cleaning.
Strong desire and ability to own the ingestion & normalization layer end-to-end - from raw upload → final clean dataset - with an eye for maintainability, reproducibility, and scalability.
Good communication skills; able to collaborate across teams (product, bioinformatics, infra) and translate real-world messy data problems into robust engineering solutions.
Nice-to-have
Familiarity with scientific data types and “modalities” (e.g. plate-readers, genomics metadata, time-series, batch-info, instrumentation outputs).
Experience with workflow orchestration tools (e.g. Nextflow, Prefect, Airflow, Dagster), or building pipeline abstractions.
Experience with cloud infrastructure and data storage (AWS S3, data lakes/warehouses, database schemas) to support multi-tenant ingestion.
Past exposure to LLM-based data transformation or cleansing agents - building or integrating tools that clean or structure messy data automatically.
Any background in computational biology / lab-data / bioinformatics is a bonus - though not required.
WHAT YOU WILL LOVE AT MITHRL
Mission-driven impact: you'll be the gatekeeper of data quality - ensuring that all scientific data entering Mithrl becomes clean, consistent, and analysis-ready. You'll have outsized influence over the reliability and trustworthiness of our entire data + AI stack.
High ownership & autonomy: this role is yours to shape. You decide how ingestion works, define the standards, build the pipelines. You'll work closely with our product, data science, and infrastructure teams - shaping how data is ingested, stored, and exposed to end users or AI agents.
Team: Join a tight-knit, talent-dense team of engineers, scientists, and builders
Culture: We value consistency, clarity, and hard work. We solve hard problems through focused daily execution
Speed: We ship fast (2x/week) and improve continuously based on real user feedback
Location: Beautiful SF office with a high-energy, in-person culture
Benefits: Comprehensive PPO health coverage through Anthem (medical, dental, and vision) + 401(k) with top-tier plans
Snowflake Data Architect
Data engineer job in Santa Clara, CA
Senior Snowflake Data Engineer (Contract | Long-Term)
We're partnering with an enterprise data platform team on a long-term initiative where Snowflake is the primary cloud data warehouse supporting analytics and reporting at scale. This role is ideal for someone whose core strength is Snowflake, with some experience working alongside Databricks in modern data ecosystems.
What you'll be doing
Building and maintaining ELT pipelines primarily in Snowflake
Writing, optimizing, and troubleshooting complex Snowflake SQL
Managing Snowflake objects: virtual warehouses, schemas, streams, tasks, and secure views (see the sketch after this list)
Supporting performance tuning, cost optimization, and warehouse sizing
Collaborating with analytics and business teams to deliver trusted datasets
Integrating upstream or adjacent processing from Databricks where applicable
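As referenced above, here is a minimal sketch of driving a stream-consuming MERGE from Python with the official snowflake-connector-python; the account, credentials, and object names are placeholders.

```python
# Minimal sketch: consume change rows from a Snowflake stream via MERGE.
# Account, credentials, and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account", user="ETL_USER", password="...",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Stream + task pattern: apply captured changes to the curated table.
cur.execute("""
    MERGE INTO ANALYTICS.CORE.ORDERS AS tgt
    USING ANALYTICS.STAGING.ORDERS_STREAM AS src
      ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET tgt.status = src.status
    WHEN NOT MATCHED THEN INSERT (order_id, status)
      VALUES (src.order_id, src.status)
""")
cur.close()
conn.close()
```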
What we're looking for
Strong, hands-on Snowflake Data Engineering experience (primary platform)
Advanced SQL expertise within Snowflake
Experience designing ELT pipelines and analytical data models
Working knowledge of Databricks / Spark in a production environment
Understanding of Snowflake governance, security, and cost controls
Nice to have
dbt experience
Experience supporting enterprise analytics or reporting teams
Exposure to cloud-based data platforms in large-scale environments
Engagement details
Contract (long-term)
Competitive hourly rate
Remote or hybrid (US-based)
This role is best suited for engineers who go deep in Snowflake and can collaborate across platforms when Databricks is part of the stack.
Senior Data Warehouse & BI Developer
Data engineer job in San Leandro, CA
About the Role
We're looking for a Senior Data Warehouse & BI Developer to join our Data & Analytics team and help shape the future of Ariat's enterprise data ecosystem. You'll design and build data solutions that power decision-making across the company, from eCommerce to finance and operations.
In this role, you'll take ownership of data modeling and BI reporting using Cognos and Tableau, and contribute to the development of SAP HANA Calculation Views. If you're passionate about data architecture, visualization, and collaboration - and love learning new tools - this role is for you.
You'll Make a Difference By
Designing and maintaining Ariat's enterprise data warehouse and reporting architecture.
Developing and optimizing Cognos reports for business users.
Collaborating with the SAP HANA team to develop and enhance Calculation Views.
Translating business needs into technical data models and actionable insights.
Ensuring data quality through validation, testing, and governance practices.
Partnering with teams across the business to improve data literacy and reporting capabilities.
Staying current with modern BI and data technologies to continuously evolve Ariat's analytics stack.
About You
7+ years of hands-on experience in BI and Data Warehouse development.
Advanced skills in Cognos (Framework Manager, Report Studio).
Strong SQL skills and experience with data modeling (star schemas, dimensional modeling).
Experience building and maintaining ETL processes.
Excellent analytical and communication skills.
A collaborative, learning-oriented mindset.
Experience developing SAP HANA Calculation Views preferred
Experience with Tableau (Desktop, Server) preferred
Knowledge of cloud data warehouses (Snowflake, BigQuery, etc.).
Background in retail or eCommerce analytics.
Familiarity with Agile/Scrum methodologies.
About Ariat
Ariat is an innovative, outdoor global brand with roots in equestrian performance. We develop high-quality footwear and apparel for people who ride, work, and play outdoors, and care about performance, quality, comfort, and style.
The salary range for this position is $120,000 - $150,000 per year.
The salary is determined by the education, experience, knowledge, skills, and abilities of the applicant, internal equity, and alignment with market data for geographic locations. Ariat in good faith believes that this posted compensation range is accurate for this role at this location at the time of this posting. This range may be modified in the future.
Ariat's holistic benefits package for full-time team members includes (but is not limited to):
Medical, dental, vision, and life insurance options
Expanded wellness and mental health benefits
Paid time off (PTO), paid holidays, and paid volunteer days
401(k) with company match
Bonus incentive plans
Team member discount on Ariat merchandise
Note: Availability of benefits may be subject to location & employment type and may have certain eligibility requirements. Ariat reserves the right to alter these benefits in whole or in part at any time without advance notice.
Ariat will consider qualified applicants, including those with criminal histories, in a manner consistent with state and local laws. Ariat is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis protected under federal, state, or local law. Ariat is committed to providing reasonable accommodations to candidates with disabilities. If you need an accommodation during the application process, email *************************.
Please see our Employment Candidate Privacy Policy at ********************* to learn more about how we collect, use, retain and disclose Personal Information.
Please note that Ariat does not accept unsolicited resumes from recruiters or employment agencies. In the absence of a signed Agreement, Ariat will not consider or agree to payment of any referral compensation or recruiter/agency placement fee. In the event a recruiter or agency submits a resume or candidate without a previously signed Agreement, Ariat explicitly reserves the right to pursue and hire those candidate(s) without any financial obligation to the recruiter or agency. Any unsolicited resumes, including those submitted directly to hiring managers, are deemed to be the property of Ariat.
Data Architect - Azure Databricks
Data engineer job in Palo Alto, CA
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Job Posting Title: Principal Architect - Azure Databricks
Job Description
Seeking a visionary and hands-on Principal Architect to lead large-scale, complex technical initiatives leveraging Databricks within the healthcare payer domain. This role is pivotal in driving data modernization, advanced analytics, and AI/ML solutions for our clients. You will serve as a strategic advisor, technical leader, and delivery expert across multiple engagements.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs such as sales forecasting, trade promotions, and supply chain optimization.
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management); a sketch follows this section.
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
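As referenced above, here is a minimal PySpark sketch of a Bronze-to-Silver hop with Delta Lake; the paths, columns, and cleansing rule are illustrative, assuming a Databricks runtime where Delta is preconfigured.

```python
# Minimal sketch: promote raw (Bronze) Delta data to a cleansed Silver table.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

bronze = spark.read.format("delta").load("/mnt/lake/bronze/pos_sales")

silver = (
    bronze
    .dropDuplicates(["transaction_id"])
    .withColumn("sale_date", F.to_date("sale_ts"))
    .filter(F.col("amount") > 0)            # basic cleansing rule
)

(silver.write.format("delta")
       .mode("overwrite")
       .save("/mnt/lake/silver/pos_sales"))
```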
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using PySpark, SQL, DLT (Delta Live Tables), and Databricks Workflows.
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks observability tooling, Ganglia, and cloud-native tools.
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for:
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for:
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
12-18 years of hands-on experience in data engineering, with at least 5 years on Databricks architecture and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on Azure Databricks using PySpark, SQL, and Databricks-native features.
Familiarity with ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage (Azure Data Lake Storage Gen2)
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Expertise in optimizing Databricks performance using Delta Lake features such as OPTIMIZE, VACUUM, ZORDER, and Time Travel
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Databricks SQL warehouse and integrating with BI tools (Power BI, Tableau, etc.).
Hands-on experience designing solutions using Workflows (Jobs), Delta Lake, Delta Live Tables (DLT), Unity Catalog, and MLflow.
Familiarity with Databricks REST APIs, Notebooks, and cluster configurations for automated provisioning and orchestration.
Experience in integrating Databricks with CI/CD pipelines using tools such as Azure DevOps, GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning Databricks workspaces and resources
In-depth experience with Azure Cloud services such as ADF, Synapse, ADLS, Key Vault, Azure Monitor, and Azure Security Centre.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with Unity Catalog, RBAC, tokenization, and data classification frameworks
Worked as a consultant for 4-5+ years with multiple clients.
Contribute to pre-sales, proposals, and client presentations as a subject matter expert.
Participated in and led RFP responses for your organization. Experience providing solutions to technical problems and producing cost estimates.
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $ 200,000 - $300,000. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Staff Data Scientist, Full Stack
Data engineer job in Santa Clara, CA
Our Mission At Palo Alto Networks everything starts and ends with our mission: Being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we're looking for innovators who are as committed to shaping the future of cybersecurity as we are.
Who We Are
We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration to execution, from showing up for each other with integrity to creating an environment where we all feel included.
As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few!
At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.
Job Description
Your Career
As a Staff Data Engineer and Scientist, you will be an integral member of our Customer Analytics team, responsible for shaping the future of our business operations through robust data infrastructure and advanced analytical solutions. This unique hybrid role combines data engineering and applied AI/ML, requiring an entrepreneurial problem-solver who thrives on tackling ambiguous business problems through a deep understanding of the business paired with deep technical expertise. You will act as both a strategic partner and a builder: developing deep insights, building and curating new datasets, and owning end-to-end ML/AI model deployment for key customer success initiatives.
You will be constantly challenged by tough engineering and design tasks, working in a fast-paced setting to deliver high-quality, impactful work.
This is an in-office role, 3 days/week at our HQ in Santa Clara, CA.
Your Impact
In this versatile role, you will drive impact across both data engineering and data science domains:
Data Engineering Foundations
Design & Development: Design and implement scalable data architectures and datasets that support the organization's evolving data needs, providing the technical foundations for our analytics team and business users.
Data Engineering: Support and implement large datasets in batch/real-time analytical solutions leveraging data transformation technologies.
Data Security & Scalability: Enable robust data-level security features and build scalable solutions to support dynamic cloud environments, including financial considerations.
Process Improvement: Perform code reviews with peers and make recommendations on how to improve our end-to-end development processes.
AI/ML Innovation & Business Impact
Develop & Deploy Classical ML Models: Own the end-to-end lifecycle of machine learning projects. You'll build and productionize sophisticated models for critical business areas such as marketing attribution, customer churn prediction, case escalation, and other use cases relevant to post-sales (see the sketch after this list).
Optimize AI Agentic Systems: Play a key role in our generative AI initiatives. You will be responsible for characterizing, evaluating, and fine-tuning AI agents, such as conversational systems that allow users to query massive datasets using natural language, to improve their accuracy, efficiency, and reliability.
Partner with Business Stakeholders: Act as an internal consultant to our Go-to-Market (GTM), Global Customer Services (GCS), Product, and Finance teams. You'll translate business challenges into data science use cases, identify opportunities for AI-driven solutions, and present your findings in a clear, actionable manner.
Own the Full Data Science Lifecycle: Your responsibilities will cover the entire project workflow: working with the business to understand the problem, charting a path to solve it, feature engineering, model selection and training, robust evaluation, deployment, and, in partnership with the data platform team, ongoing monitoring for performance degradation.
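The classical-ML work described above follows a standard train-and-evaluate loop. Below is a minimal, hypothetical sketch of a churn-prediction pipeline in Python; the data is synthetic and the feature names (tenure_months, support_cases_90d, product_tier) are invented for illustration, not the team's actual model.

# Hypothetical churn-prediction sketch: synthetic data, invented features.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, n),      # hypothetical features
    "support_cases_90d": rng.poisson(2, n),
    "product_tier": rng.choice(["basic", "pro", "enterprise"], n),
})
# Synthetic label: churn risk rises with support load, falls with tenure.
churn_p = 1 / (1 + np.exp(-(0.4 * df["support_cases_90d"] - 0.05 * df["tenure_months"])))
df["churned"] = rng.random(n) < churn_p

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.2, random_state=0
)
# Preprocess numeric and categorical features, then fit a gradient-boosted classifier.
model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), ["tenure_months", "support_cases_90d"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["product_tier"]),
    ])),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_train, y_train)
print("test ROC-AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

In practice the same fitted pipeline object would be serialized and served behind a scoring endpoint, with the ongoing monitoring noted above watching for drift in the input features.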
Qualifications
Your Experience
7+ years of experience building and maintaining data pipelines for reporting, analysis, and feature engineering.
Experience building and optimizing clean, well-structured analytical datasets for business and data science use cases. This includes implementing and supporting Big Data solutions for both batch (scheduled) and real-time (streaming) analytics.
Prior experience working extensively within dynamic cloud environments, specifically Google Cloud services such as BigQuery and Vertex AI.
Prior experience developing dashboards in Tableau, Looker, or a similar data visualization platform.
Nice to have: Experience implementing and managing data-level security features to ensure data is protected and access is properly controlled.
Expert-level programming skills in Python and familiarity with core data science and machine learning libraries (e.g., Scikit-learn, Pandas, PyTorch/TensorFlow, XGBoost).
A solid command of SQL for complex querying and data manipulation.
Proven ability to work autonomously, navigate ambiguity, and drive projects from concept to completion.
Preferred Qualifications
Prior working experience in the customer analytics space and with customer experience use cases, e.g., escalation, risk prediction, renewals, and efficiency of project delivery in the professional services space.
Direct experience with generative AI, including hands-on work with LLMs and frameworks like LangChain, LlamaIndex, or the Hugging Face ecosystem.
Experience in evaluating and optimizing the performance of AI systems or agents.
Demonstrated expertise in specialized modeling domains such as causal inference and time-series analysis.
An MS or PhD in a quantitative field such as Computer Science, AI, or Statistics, or equivalent practical or military experience.
Additional Information
The Team
Working at a high-tech cybersecurity company within Information Technology is a once-in-a-lifetime opportunity. You'll join the brightest minds in technology, creating, building, and supporting tools and enabling our global teams on the front line of defense against cyberattacks.
We're connected by one mission but driven by the impact of that mission and what it means to protect our way of life in the digital age. Join a dynamic and fast-paced team of people who feel excited by the prospect of a challenge and feel a thrill at resolving technical gaps that inhibit productivity.
Compensation Disclosure
The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary + commission target (for sales/commissioned roles) is expected to be between $143,000 and $231,000 per year. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits may be found here.
Our Commitment
We're problem solvers that take risks and challenge cybersecurity's status quo. It's simple: we can't accomplish our mission without diverse teams innovating, together.
We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com.
Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.
All your information will be kept confidential according to EEO guidelines.
Is this role eligible for Immigration Sponsorship? No. Please note that we will not sponsor applicants for work visas for this position.
Senior Snowflake Data Engineer
Data engineer job in Santa Clara, CA
About the job
Why Zensar?
We're a bunch of hardworking, fun-loving, people-oriented technology enthusiasts. We love what we do, and we're passionate about helping our clients thrive in an increasingly complex digital world. Zensar is an organization focused on building relationships with our clients and with each other, and happiness is at the core of everything we do. In fact, we're so into happiness that we've created a Global Happiness Council, and we send out a Happiness Survey to our employees each year. We've learned that employee happiness requires more than a competitive paycheck, and our employee value proposition (grow, own, achieve, learn: GOAL) lays out the core opportunities we seek to foster for every employee. Teamwork and collaboration are critical to Zensar's mission and success, and our teams work on a diverse and challenging mix of technologies across a broad industry spectrum. These industries include banking and financial services, high-tech and manufacturing, healthcare, insurance, retail, and consumer services. Our employees enjoy flexible work arrangements and a competitive benefits package, including medical, dental, vision, and 401(k), among other benefits. If you are looking for a place to have an immediate impact, to grow and contribute, where we work hard, play hard, and support each other, consider joining team Zensar!
Zensar is seeking a Senior Snowflake Data Engineer in Santa Clara, CA (work from the office all 5 days). This position is open as a full-time role with excellent benefits and growth opportunities, as well as a contract role.
Job Description:
Key Requirements:
Strong hands-on experience in data engineering using Snowflake with proven ability to build and optimize large-scale data pipelines.
Deep understanding of data architecture principles, including ingestion, transformation, storage, and access control.
Solid experience in system design and solution architecture, focusing on scalability, reliability, and maintainability.
Expertise in ETL/ELT pipeline design, including data extraction, transformation, validation, and load processes (see the sketch after this list).
In-depth knowledge of data modeling techniques (dimensional modeling, star, and snowflake schemas).
Skilled in optimizing compute and storage costs across Snowflake environments.
Strong proficiency in Snowflake administration, including database design, schema management, user roles, permissions, and access control policies.
Hands-on experience implementing data lineage, quality, and monitoring frameworks.
Advanced proficiency in SQL for data processing, transformation, and automation.
Experience with reporting and visualization tools such as Power BI and Sigma Computing.
Excellent communication and collaboration skills, with the ability to work independently and drive technical initiatives.
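To make the modeling and pipeline requirements above concrete, here is a minimal sketch, assuming the snowflake-connector-python package and placeholder credentials; the ANALYTICS database, MARTS and staging schemas, and all table and column names are invented for illustration, not any specific Zensar or client environment. It shows a small star schema (one fact table keyed to a customer dimension) and an idempotent ELT merge of staged rows.

# Hypothetical Snowflake star-schema and ELT sketch; placeholder credentials.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",      # placeholder connection details
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MARTS",
)
cur = conn.cursor()

# Dimensional model: a customer dimension plus a fact table (star schema).
cur.execute("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key INTEGER,
        customer_name STRING,
        region STRING
    )
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_id INTEGER,
        customer_key INTEGER,   -- foreign key into dim_customer
        order_date DATE,
        amount NUMBER(12, 2)
    )
""")

# Incremental ELT: merge freshly staged rows into the fact table so the
# pipeline can be re-run safely (idempotent upsert).
cur.execute("""
    MERGE INTO fact_orders AS tgt
    USING staging.orders_raw AS src
      ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET tgt.amount = src.amount
    WHEN NOT MATCHED THEN
      INSERT (order_id, customer_key, order_date, amount)
      VALUES (src.order_id, src.customer_key, src.order_date, src.amount)
""")
cur.close()
conn.close()

Keying the MERGE on a stable business key makes reloads safe to re-run, which is the usual foundation for the cost-conscious, quality-monitored pipelines this role calls for.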
Zensar believes that diversity of backgrounds, thought, experience, and expertise fosters the robust exchange of ideas that enables the highest quality collaboration and work product. Zensar is an equal opportunity employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Zensar is committed to providing veteran employment opportunities to our service men and women. Zensar is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired.
Zensar does not facilitate/sponsor any work authorization for this position.
Candidates who are currently employed by a client or vendor of Zensar may be ineligible for consideration.
Zensar values your privacy. We'll use your data in accordance with our privacy statement located at: *********************************