Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year with extensive growth potential ahead.
At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We're not just any public company: we made history in 2021 by becoming a public benefit corporation (PBC), legally bound to balance the interests of customers, employees, society, and investors.
As a Work Anywhere company, we support your flexibility to work from home or in the office, so you can thrive in your ideal environment.
Join us in transforming the life sciences industry while making a positive impact on customers, employees, and communities.
The Role
This role is responsible for ensuring the reliability, accuracy, and safety of our Veeva AI Agents through rigorous evaluation and systematic validation methodologies. We're looking for experienced candidates with:
1. A meticulous, critical, and curious mindset with a dedication to product quality in a rapidly evolving technological domain
2. Exceptional analytical and systematic problem-solving capabilities
3. Excellent ability to communicate technical findings to both engineering and product management audiences
4. Ability to learn application areas quickly
Thrive in our Work Anywhere environment: We support your flexibility to work remotely or in the office within Canada or the US, ensuring seamless collaboration within your product team's time zone. Join us and be part of a mission-driven organization transforming the life sciences industry.
What You'll Do
Evaluation Strategy & Planning: Define and establish comprehensive evaluation strategies for new AI Agents. Prioritize the integrity and coverage of test data sets to reflect real-world usage and potential failure modes
LLM Output Integrity Assessment: Programmatically and manually evaluate the quality of LLM-generated content against predefined metrics (e.g., factual accuracy, contextual relevance, coherence, and safety standards)
Creating High-Fidelity Datasets: Design, curate, and generate diverse, high-quality test data sets, including challenging prompts and scenarios. Evaluate LLM outputs to proactively identify system biases, unsafe content, hallucinations, and critical edge cases
Automation of Evaluation Pipelines: Develop, implement, and maintain scalable automated evaluations to ensure efficient, continuous validation of agent behavior and prevent regressions as new features and model updates ship (see the sketch after this list)
Root Cause Analysis: Understand model behaviors and assist in tracing and root-cause analysis of identified defects or performance degradations
Reporting & Performance Metrics: Clearly document, track, and communicate performance metrics, validation results, and bug status to the broader development and product teams
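Illustrative only: a minimal pytest-style sketch of the kind of automated evaluation gate described above. The run_agent() client, the eval_cases.json dataset, the keyword-overlap scorer, and the 0.8 threshold are all assumptions for the sake of the example, not Veeva's actual tooling.

```python
# Minimal sketch of an automated regression gate on LLM output quality.
# run_agent(), eval_cases.json, the scorer, and the threshold are assumed.
import json

def run_agent(prompt: str) -> str:
    """Placeholder for the agent under test (hypothetical client)."""
    raise NotImplementedError("wire this to the deployed agent endpoint")

def judge_grounding(answer: str, reference: str) -> float:
    """Toy scorer: fraction of reference keywords that appear in the answer."""
    keywords = set(reference.lower().split())
    hits = sum(1 for word in keywords if word in answer.lower())
    return hits / max(len(keywords), 1)

def test_agent_answers_stay_grounded():
    with open("eval_cases.json") as f:  # assumed prompt/reference dataset
        cases = json.load(f)
    failures = []
    for case in cases:
        score = judge_grounding(run_agent(case["prompt"]), case["reference"])
        if score < 0.8:  # illustrative release threshold
            failures.append((case["prompt"], score))
    assert not failures, f"answers below grounding threshold: {failures}"
```

Run under pytest in CI, a check like this turns the "prevent regressions" bullet into a concrete, repeatable gate on every model or feature update.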
Requirements
Data Integrity & Validation: A strong, specialized understanding of data quality principles, including methods for validating datasets against bias, integrity concerns, and quality standards. Ability to craft diverse and adversarial test data to uncover AI edge cases
Prompt Engineering & Model Expertise: Demonstrated skill in advanced prompt engineering techniques to create evaluation scenarios that test the AI's reasoning, action planning, and adherence to system instructions. Deep knowledge of LLM common failure modes (hallucination, incoherence, jailbreaking)
Automated Evaluation Implementation: 5+ years of experience designing and deploying automated evaluation pipelines to assess complex, agentic AI behaviors. Familiarity with quality metrics such as task success rate, semantic similarity, and sentiment analysis for output measurement
Debugging Agentic Systems: Must be comfortable with the specific challenges of debugging agentic systems, including tracing and interpreting an agent's internal reasoning, tool use, and action sequence to pinpoint failure points
Programming & Frameworks: 5+ years of experience using Python to develop custom evaluation frameworks, writing scripts, and integrating pipelines with CI/CD systems. Familiarity with standard test automation tools (e.g., Pytest, modern web automation tools)
Bachelor's degree in Data Science, Machine Learning, Computer Science, or a related field, with experience in Gen AI / LLMs
High work ethic. Veeva is a hard-working company
High integrity and honesty. Veeva is a PBC and a “do the right thing” company. We expect that from all employees
Applicants must have the unrestricted right to work in the United States or Canada. Veeva will not provide sponsorship at this time
Perks & Benefits
Medical, dental, vision, and basic life insurance
Flexible PTO and company paid holidays
Retirement programs
1% charitable giving program
Compensation
Base pay: $110,000 - $270,000
The salary range listed here has been provided to comply with local regulations and represents a potential base salary range for this role. Please note that actual salaries may fall within, above, or below this range, depending on experience and location. We look at compensation for each individual and base our offer on your unique qualifications, experience, and expected contributions. This position may also be eligible for other types of compensation in addition to base salary, such as a variable bonus and/or stock bonus.
Veeva's headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.
Veeva is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity or expression, religion, national origin or ancestry, age, disability, marital status, pregnancy, protected veteran status, protected genetic information, political affiliation, or any other characteristics protected by local laws, regulations, or ordinances. If you need assistance or accommodation due to a disability or special need when applying for a role or in our recruitment process, please contact us at talent_accommodations@veeva.com.
We design, build, and maintain infrastructure to support agentic workflows for Siri. Our team is in charge of the data generation, introspection, and evaluation frameworks that are key to efficiently developing foundation models and agentic workflows for Siri applications. On this team, you will have the opportunity to work at the intersection of cutting-edge foundation models and products.
Minimum Qualifications
Strong background in computer science: algorithms, data structures and system design
3+ years of experience in large-scale distributed system design, operation, and optimization
Experience with SQL/NoSQL database technologies, data warehouse frameworks like BigQuery/Snowflake/RedShift/Iceberg and data pipeline frameworks like GCP Dataflow/Apache Beam/Spark/Kafka
Experience processing data for ML applications at scale (see the sketch after this list)
Excellent interpersonal skills; able to work independently as well as cross-functionally
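For a concrete sense of the pipeline work described above, a hedged PySpark sketch that aggregates raw interaction logs into ML-ready features. The storage paths, column names, and metrics are illustrative assumptions, not Apple's actual schema.

```python
# Minimal PySpark sketch: filter and aggregate interaction logs into
# per-session features for downstream model training. Paths and columns
# are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()

logs = spark.read.parquet("s3://bucket/interaction_logs/")  # assumed input
features = (
    logs.filter(F.col("event_type") == "agent_turn")
        .groupBy("session_id")
        .agg(
            F.count("*").alias("turns"),
            F.avg("latency_ms").alias("avg_latency_ms"),
        )
)
features.write.mode("overwrite").parquet("s3://bucket/features/")
```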
Preferred Qualifications
Experience fine-tuning and evaluating Large Language Models
Experience with Vector Databases
Experience deploying and serving LLMs
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses - including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
$147.4k-272.1k yearly 4d ago
Senior Applications Consultant - Workday Data Consultant
Capgemini 4.5
San Francisco, CA
Job Description - Senior Applications Consultant - Workday Data Consultant (054374)
Qualifications & Experience:
Certified in Workday HCM
Experience in Workday data conversion
At least one implementation as a data consultant
Ability to work with clients on data conversion requirements and load data into Workday tenants
Flexible to work across delivery landscape including Agile Applications Development, Support, and Deployment
Valid US work authorization (no visa sponsorship required)
6-8 years overall experience (minimum 2 years relevant), Bachelor's degree
SE Level 1 certification; pursuing Level 2
Experience in package configuration, business analysis, architecture knowledge, technical solution design, vendor management
Responsibilities:
Translate business cases into detailed technical designs
Manage operational and technical issues, translating blueprints into requirements and specifications
Lead integration testing and user acceptance testing
Act as stream lead guiding team members
Participate as an active member within technology communities
Capgemini is an Equal Opportunity Employer encouraging diversity and providing accommodations for disabilities.
All qualified applicants will receive consideration without regard to race, national origin, gender identity or expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law.
Physical, mental, or environmental demands may be referenced. Reasonable accommodations will be considered where possible.
$101k-134k yearly est. 5d ago
Senior Workday Data Consultant & Applications Lead
Capgemini 4.5
San Francisco, CA
A leading consulting firm in San Francisco seeks a Senior Applications Consultant specializing in Workday Data Conversion. The ideal candidate will be certified in Workday HCM and have significant experience with data conversion processes. Responsibilities include translating business needs into technical designs, managing issues, and leading testing efforts. Candidates must possess a Bachelor's degree and a minimum of 6 years of experience, with at least 2 in a relevant role. This position requires valid US work authorization.
$101k-134k yearly est. 5d ago
Foundry Data Engineer: ETL Automation & Dashboards
Data Freelance Hub 4.5
San Francisco, CA
A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
$114k-160k yearly est. 2d ago
Data Gov - Unity Catalog Platform Engineer
Capgemini 4.5
Seattle, WA
Job Title: Data Gov - Unity Catalog Platform Engineer (optimize distributed workspaces in Databricks leveraging Unity Catalog)
Hiring Urgency: Immediate Requirement
Company: Capgemini
Employment Type: Full-Time, Hybrid
Summary:
Capgemini is urgently seeking a Data Governance Engineer to lead enterprise-level metadata and data access initiatives. The ideal candidate will have deep expertise in Collibra, Databricks, Unity Catalog, and Privacera, and will drive strategic alignment across technical and business teams. This role is open to relocation and offers the opportunity to shape scalable, secure data ecosystems.
Your Role
Metadata Management
Design and implement enterprise metadata models in Collibra aligned with business goals.
Integrate metadata workflows with Unity Catalog and Synaptica for enhanced discoverability.
Data Access Governance
Implement and govern secure data access using Privacera.
Optimize distributed workspaces in Databricks leveraging Unity Catalog (see the sketch after this list).
Standards & Best Practices
Define standards for efficient use of Databricks environments.
Champion metadata governance and utilization across teams.
Strategic Leadership
Develop transition architecture roadmaps with clear milestones and success metrics.
Align cross-functional stakeholders around a unified data discovery vision.
Foster collaboration and clarity in complex, ambiguous environments.
Innovation & Thought Leadership
Promote innovative solutions in ontology, data access, and discovery.
Serve as both a strategic leader and hands-on contributor.
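To ground the Unity Catalog responsibilities above, a minimal sketch of three-level (catalog.schema.table) setup and SQL GRANTs, as it might be run from a Databricks notebook where a `spark` session is predefined. The catalog, schema, table, and group names are invented for illustration.

```python
# Hedged sketch of Unity Catalog-style access governance from a Databricks
# notebook (where `spark` is provided). Names are placeholders.
spark.sql("CREATE CATALOG IF NOT EXISTS governed")
spark.sql("CREATE SCHEMA IF NOT EXISTS governed.finance")
spark.sql("""
    CREATE TABLE IF NOT EXISTS governed.finance.invoices (
        invoice_id STRING, amount DECIMAL(18, 2), region STRING
    )
""")
# Grant read-only access to an analyst group instead of copying the data,
# which is how Unity Catalog helps eliminate duplicate data copies.
spark.sql("GRANT USE CATALOG ON CATALOG governed TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA governed.finance TO `analysts`")
spark.sql("GRANT SELECT ON TABLE governed.finance.invoices TO `analysts`")
```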
Your skills and experience:
Extensive hands-on experience with Databricks, Collibra, Unity Catalog, and Privacera.
Proven success in implementing distributed Databricks workspaces.
Strong background in metadata modeling, data architecture, and enterprise-scale data discovery.
Familiarity with data governance frameworks and compliance standards.
Skills & Technologies:
Collibra
Data Governance
Metadata Management
Databricks
Unity Catalog
Privacera
Synaptica
Information Technology
Life at Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
$94k-127k yearly est. 1d ago
GenAI Engineer-Data Scientist
Capgemini 4.5
Seattle, WA
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
About the job you're considering
We are seeking a passionate and innovative GenAI Engineer/Data Scientist to join our team. This role involves developing GEN AI solutions and predictive AI models, deploying them in production environments, and driving the integration of AI technologies across our business operations. As a key member of our AI team, you will collaborate with diverse teams to design solutions that deliver tangible business value through AI-driven insights.
Your Role
Familiarity with API architecture and components such as external interfacing, traffic control, runtime execution of business logic, data access, authentication, and deployment.
Key skills include an understanding of URLs and API endpoints, HTTP requests, authentication methods, response types, JSON/REST, parameters and data filtering, error handling, debugging, rate limits, tokens, integration, and documentation (see the sketch after this list).
Develop generative and predictive AI models (including NLP, computer vision, etc.).
Familiarity with cloud platforms (e.g., Azure, AWS, GCP) and big data tools (e.g., Databricks, PySpark) to develop AI solutions.
Familiarity with intelligent autonomous agents for complex tasks and multimodal interactions.
Familiarity with agentic workflows that utilize AI agents to automate tasks and improve operational efficiency.
Deploy AI models into production environments, ensuring scalability, performance, and optimization.
Monitor and troubleshoot deployed models and pipelines for optimal performance.
Design and maintain data pipelines for efficient data collection, processing, and storage (e.g., data lakes, data warehouses).
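As a concrete illustration of the API fundamentals listed above, a minimal Python sketch of an authenticated GET with query parameters, error handling, and a naive backoff on HTTP 429 rate limits. The endpoint, token, and response shape are placeholders.

```python
# Minimal sketch: bearer-token auth, query parameters, error handling,
# and retry-with-backoff on rate limits. Endpoint and JSON shape assumed.
import time
import requests

def fetch_records(base_url: str, token: str, status: str = "active") -> list:
    headers = {"Authorization": f"Bearer {token}"}   # authentication
    params = {"status": status, "limit": 100}        # data filtering
    for attempt in range(3):
        resp = requests.get(f"{base_url}/records", headers=headers,
                            params=params, timeout=10)
        if resp.status_code == 429:                  # rate limited: back off
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()                      # surface other errors
        return resp.json()["items"]                  # assumed response shape
    raise RuntimeError("rate limit not cleared after retries")
```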
Required Qualifications:
Minimum of 1 year of professional experience in AI, application development, machine learning, or a similar role.
Experience in model deployment, MLOps, model monitoring, and managing data/model drift.
Experience with predictive AI (e.g., regression, classification, clustering) and generative AI models (e.g., GPT, Claude LLM, Stable Diffusion).
Bachelor's or greater degree in Machine Learning, AI, or equivalent professional experience.
Capgemini offers a comprehensive, non-negotiable benefits package to all regular, full-time employees. In the U.S. and Canada, available benefits are determined by local policy and eligibility:
Paid time off based on employee grade (A-F), defined by policy: vacation (12-25 days, depending on grade), company-paid holidays, personal days, and sick leave
Medical, dental, and vision coverage (or provincial healthcare coordination in Canada)
Retirement savings plans (e.g., 401(k) in the U.S., RRSP in Canada)
Life and disability insurance
Employee assistance program
Other benefits as provided by local policy and eligibility
$96k-127k yearly est. 2d ago
Technology Lead - Data Platforms
Launch Consulting Group 3.9
Chicago, IL
Be a part of our success story
Launch offers talented and motivated people the opportunity to do the best work of their lives in a dynamic and growing company. Through competitive salaries, outstanding benefits, internal advancement opportunities, and recognized community involvement, you will have the chance to create a career you can be proud of. Your new trajectory starts here at Launch!
The Role
Launch is actively seeking a visionary Technology Leader, Data Platform Engineering to architect and deliver enterprise data platforms while driving transformational outcomes for clients. This role combines deep technical expertise in modern data platforms with strategic consulting capabilities. You will lead implementations using Azure Fabric, Snowflake, and Databricks, leveraging AI-native development tools to accelerate delivery and maintain engineering rigor.
Responsibilities Include
Lead enterprise data platform transformations, architecting scalable solutions on Azure Fabric, Snowflake, and Databricks (see the sketch after this list).
Design modern data architectures supporting real-time analytics, ML workloads, and AI agent integration.
Drive technical pre-sales including solution architecture, proposal development, and executive presentations.
Implement AI-native development workflows using Claude Code, GitHub Copilot, and MCP servers.
Establish engineering practices leveraging AI tools for pipeline development, testing automation, and intelligent data quality monitoring.
Architect solutions using Azure Fabric, Snowflake, and Databricks with governance and compliance frameworks.
Recruit and develop high-performing data engineers and architects.
Mentor teams on effective use of AI development tools while reinforcing engineering fundamentals.
Contribute to business development through active participation in sales cycles and strategic partnership development.
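One small, hedged example of touching a platform named above: pushing an ELT-style transformation into Snowflake with the snowflake-connector-python package. The account, credentials, and table names are placeholders, not Launch's or any client's environment.

```python
# Minimal sketch, assuming snowflake-connector-python is installed.
# Account, credentials, and object names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder account identifier
    user="SVC_DATA_PLATFORM",
    password="***",              # prefer key-pair or SSO auth in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    cur = conn.cursor()
    # ELT-style: push the transformation into the warehouse itself.
    cur.execute("""
        CREATE OR REPLACE TABLE daily_revenue AS
        SELECT order_date, SUM(amount) AS revenue
        FROM raw.orders
        GROUP BY order_date
    """)
finally:
    conn.close()
```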
Qualifications
Must-Have
12+ years in enterprise data platform engineering, with 5+ years in consulting or client-facing delivery roles.
Proven track record leading platform transformations with project budgets exceeding $10M.
Strong executive presence and ability to influence C-suite stakeholders.
Expert-level proficiency in Azure Fabric, Snowflake, and Databricks.
Hands-on experience with AI-native development tools (Claude Code, MCP servers, GitHub Copilot).
Proficiency in modern data stack: dbt Labs, Airflow, Kafka, vector databases, and real-time streaming.
Strong programming skills in Python, SQL, and infrastructure-as-code (Terraform, Bicep).
Nice-to-Have
Certifications: Azure Solutions Architect Expert (AZ-305), SnowPro Advanced, Databricks Data Engineer Professional.
Industry-specific certifications in financial services, healthcare, retail, or manufacturing.
Compensation & Benefits
As an employee at Launch, you will grow your skills and experience through a variety of exciting project work (across industries and technologies) with some of the top companies in the world! Our employees receive full benefits-medical, dental, vision, short-term disability, long-term disability, life insurance, and matched 401k. We also have an uncapped, take-what-you-need PTO policy. The anticipated base wage range for this role is $220,000 - $245,000. Education and experience will be highly considered, and we are happy to discuss your wage expectations in more detail throughout our internal interview process.
$220k-245k yearly 3d ago
Data Scientist II
Pyramid Consulting, Inc. 4.1
Cambridge, MA
Immediate need for a talented Data Scientist II. This is an 11+ month contract opportunity with long-term potential and is located in Cambridge, MA (Onsite). Please review the job description below and contact me ASAP if you are interested.
Job ID:25-96101
Pay Range: $66 - $72/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Leverage bioinformatics, systems biology, statistics, and machine learning methods to analyze high-throughput omics datasets, with a specific focus on novel target and biomarker discovery in Neuroscience (see the sketch after this list).
Lead computational analyses and data integration projects involving genomic, transcriptomic, proteomic, and other multi-omics data.
Provide high-quality data analysis and timely support for target and biomarker discovery projects supporting the organization's growing Neuroscience portfolio.
Keep up to date with the latest bioinformatics analysis methods, software, and databases, integrating new methodologies into existing frameworks to enhance data analysis capabilities.
Work with experimental biologists, functional area experts, and clinical scientists to support drug discovery and development programs at various stages.
Provide computational biology /data science input in research strategy and experimental design, provide bioinformatics input, and assist in interpreting results from both in-vitro and in-vivo studies.
Communicate study results effectively to the project team and wider scientific community through written and verbal means, including proposals for further experiments, presentations at internal and external meetings, and publications in leading journals.
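Purely illustrative (not the client's pipeline): a short scikit-learn sketch of ranking candidate biomarkers from an omics matrix with a cross-validated classifier. The file names, matrix orientation, and model choice are assumptions.

```python
# Illustrative sketch: rank candidate biomarkers from an expression matrix
# (samples x genes) using a cross-validated classifier. Inputs are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

expr = pd.read_csv("expression_matrix.csv", index_col=0)       # samples x genes
labels = pd.read_csv("labels.csv", index_col=0)["disease_status"]

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, expr, labels, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")

clf.fit(expr, labels)
ranked = pd.Series(clf.feature_importances_, index=expr.columns)
print(ranked.sort_values(ascending=False).head(20))  # candidate biomarkers
```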
Key Requirements and Technology Experience:
Must-have skills: bioinformatics/computational biology, multi-omics data analysis, machine learning & statistical modeling, biological data integration, Python and/or R, HPC or cloud computing environments, Git/GitHub
Demonstrated expertise in bioinformatics, computational biology, machine learning, multi-omics data analysis, biological data integration and interpretation.
Extensive and demonstrated experience in the computational analysis of multi-modal and multi-scale (e.g. single cell, spatial) molecular profiles of patient-derived samples.
Proficient in one or more programming languages (e.g., Python, R) and competent with HPC environments and/or cloud-based platforms.
Experience with version control systems, such as Git (e.g., GitHub).
Good working knowledge of public and proprietary bioinformatics databases, resources and tools.
Familiarity with public repositories of DNA, RNA, protein, single-cell and spatial profiling data.
Ability to critically evaluate scientific research and apply novel informatics methods in translational applications.
Strong problem-solving skills, self-motivated, attention to detail, and ability to handle multiple projects.
Proven ability to conduct research individually and collaboratively.
Proven track record of contributions to peer-reviewed publications in the field of bioinformatics or computational biology.
Excellent communication skills (written, presentation, and oral).
PhD in Computational Biology, Bioinformatics, Biostatistics, Computer Science, or a related discipline, with a minimum of 2 years of academic or industry experience; or
MSc in Computational Biology, Bioinformatics, Biostatistics, Computer Science, or a related discipline, with a minimum of 5 years of academic or industry experience.
Experience in analyzing neuroscience datasets and working knowledge of neuroscience, especially neurodegenerative diseases.
In-depth understanding of drug target and biomarker identification in an industry setting
Our client is a leader in the pharmaceutical industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
$66-72 hourly 5d ago
Data Engineer
Pyramid Consulting, Inc. 4.1
Dallas, TX
Immediate need for a talented Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in Dallas, TX (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID:26-00480
Pay Range: $40 - $45/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and optimize end-to-end data pipelines using Python and PySpark (see the sketch after this list)
Build and maintain ETL/ELT workflows to process structured and semi-structured data
Write complex SQL queries for data transformation, validation, and performance optimization
Develop scalable data solutions using Azure services such as Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, and Databricks
Ensure data quality, reliability, and performance across data platforms
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements
Implement best practices for data governance, security, and compliance
Monitor and troubleshoot data pipeline failures and performance issues
Support production deployments and ongoing enhancements
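A hedged sketch of one such pipeline stage: a PySpark step that deduplicates raw order events, applies a simple data-quality gate, and writes to a curated zone in Azure Data Lake. Paths, columns, and the 1% null budget are illustrative assumptions.

```python
# Minimal PySpark ELT stage with a data-quality gate. Storage paths,
# column names, and thresholds are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt").getOrCreate()

raw = spark.read.json("abfss://raw@lake.dfs.core.windows.net/orders/")
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts")))

# Quality gate: fail fast if too many rows lost their key during parsing.
null_ratio = clean.filter(F.col("order_id").isNull()).count() / max(clean.count(), 1)
if null_ratio > 0.01:
    raise ValueError(f"order_id null ratio {null_ratio:.2%} exceeds 1% budget")

clean.write.mode("append").parquet(
    "abfss://curated@lake.dfs.core.windows.net/orders/")
```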
Key Requirements and Technology Experience:
Must-have skills: Data Engineer, Azure, Python, PySpark
Strong proficiency in SQL for querying and data modeling
Hands-on experience with Python for data processing and automation
Solid experience using PySpark for distributed data processing
Experience working with Microsoft Azure data services
Understanding of data warehousing concepts and big data architectures
Experience with batch and/or real-time data processing
Ability to work independently and within cross-functional teams
Experience with Azure Databricks
Knowledge of data modelling techniques (star/snowflake schemas)
Familiarity with CI/CD pipelines and version control tools (Git)
Exposure to data security, access control, and compliance standards
Experience with streaming technologies
Knowledge of DevOps or DataOps practices
Cloud certifications (Azure preferred)
Our client is a leader in the pharmaceutical industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
$40-45 hourly 5d ago
SAP Data Architect
Excelon Solutions 4.5
Austin, TX
Title: SAP Data Architect
Mode: Full-time
Expectations / Deliverables for the Role
Builds the SAP data foundation by defining how SAP systems store, share, and manage trusted enterprise data.
Produces reference data architectures by leveraging expert input from application, analytics, integration, platform, and security teams. These architectures form the basis for new solutions and enterprise data initiatives.
Enables analytics and AI use cases by ensuring data is consistent, governed, and discoverable.
Leverages SAP Business Data Cloud, Datasphere, MDG and related capabilities to unify data and eliminate duplicate data copies.
Defines and maintains common data model catalogs to create a shared understanding of core business data.
Evolves data governance, ownership, metadata, and lineage standards across the enterprise.
Protects core transactional systems by preventing excessive replication and extraction loads.
Technical Proficiency
Strong knowledge of SAP master and transactional data domains.
Hands-on experience with SAP MDG, Business Data Cloud, BW, Datasphere, or similar platforms.
Expertise in data modeling, metadata management, data quality, and data governance practices.
Understanding of data architectures that support analytics, AI, and regulatory requirements.
Experience integrating SAP data with non-SAP analytics and reporting platforms.
Soft Skills
Ability to align data and engineering teams around a shared data vision and drive consensus on data standards and decisions
Strong facilitation skills to resolve data ownership and definition conflicts.
Clear communicator who can explain architecture choices, trade-offs, and cost impacts to stakeholders.
Pragmatic mindset focused on value, reuse, and simplification.
Comfortable challenging designs constructively in ARB reviews
$92k-124k yearly est. 4d ago
Data Scientist
Talent Software Services 3.6
Novato, CA
Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA.
The client's Data Science organization is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions, and develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions.
The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at the data, data-structure, and documentation levels, following international standards and GCP. The DMS Center of Risk Based Quality Management (RBQM) sub-function is responsible for implementing a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management.
The RBQM Data Scientist supports central monitoring and risk-based quality management for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with a strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies.
Primary Responsibilities/Accountabilities:
The RBQM Data Scientist may perform a range of the following responsibilities, depending upon the study's complexity and the study's development stage:
Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies.
Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets.
Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review.
Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient.
Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes.
Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics.
Qualifications:
PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field.
Relevant clinical development experience (programming, RBM/RBQM, Data Management), for example:
PhD: 3+ years
MS: 5+ years
BA/BS: 8+ years
Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets).
Hands-on experience working with clinical trial data.
Proficiency with Microsoft Word, Excel, and PowerPoint.
Technical - Preferred / Strong Plus
Experience with RAVE EDC.
Awareness or working knowledge of CDISC, CDASH, SDTM standards.
Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms.
Preferred:
Knowledge of GCP, ICH, FDA guidance related to clinical trials and risk-based monitoring.
Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs.
Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams.
Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
$99k-138k yearly est. 3d ago
Data Engineer
Zillion Technologies, Inc. 3.9
Saint Louis, MO
We're seeking an experienced Data Engineer to help design and build a cloud-native big data analytics platform on AWS. You'll work in an agile engineering team alongside data scientists and engineers to develop scalable data pipelines, analytics, and visualization capabilities.
Key Highlights:
Build and enhance data pipelines and analytics using Python, R, and AWS services (Glue, Lambda, Redshift, EMR, QuickSight, SageMaker); see the sketch after this list
Design and support big data solutions leveraging Spark, Hadoop, and Redshift
Apply DevOps and Infrastructure as Code practices (Terraform, Ansible, AWS CDK)
Collaborate cross-functionally to align data architecture with business goals
Support security, quality, and operational excellence initiatives
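As one hedged example of the serverless glue described above: an AWS Lambda handler that reacts to a new S3 object and kicks off a Glue job via boto3. The bucket, job name, and argument names are placeholders.

```python
# Minimal Lambda sketch: trigger a Glue job for each newly landed S3 object.
# Job name and argument names are illustrative assumptions.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event records carry the bucket and key of the new object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="curate-raw-data",          # assumed Glue job
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
    return {"status": "started"}
```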
Requirements:
7+ years of data engineering experience
Strong AWS cloud and big data background
Experience with containerization (EKS/ECR), APIs, and Linux
Location: Hybrid in St. Louis, MO area (onsite 2-3 days)
$71k-97k yearly est. 4d ago
Staff Machine Learning Data Engineer
Backflip 3.7
San Francisco, CA
Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper?
Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.”
Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team.
We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in.
If you're excited to define the next generation of hard tech, come build it with us.
The Role
We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD.
You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data.
This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution.
You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.
What You'll Do
Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation.
Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale.
Design scalable data systems for multimodal training (text, geometry, CAD, and more).
Develop and automate data collection, curation, and validation workflows (see the sketch after this list).
Collaborate with MLEs to design and execute experiments that measure and improve model performance.
Build tools and metrics for dataset analysis, monitoring, and quality assurance.
Contribute to model development through insights grounded in data, shaping what, how, and when we train.
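A minimal sketch of the curation-and-filtering work described above, using the Hugging Face datasets library (one of the formats the posting mentions). Field names, filter thresholds, and the exact-match dedup heuristic are illustrative; production pipelines would typically use approximate dedup such as MinHash/LSH.

```python
# Illustrative curation pass: length/license filtering plus naive exact
# deduplication over a text field. All field names are assumptions.
from datasets import load_dataset

ds = load_dataset("parquet", data_files="raw_cad_text/*.parquet", split="train")

def keep(example):
    text = example["description"]
    return 32 <= len(text) <= 8192 and example["license"] == "permissive"

curated = ds.filter(keep)

seen = set()
def unseen(example):
    key = example["description"].strip().lower()
    if key in seen:
        return False
    seen.add(key)
    return True

curated = curated.filter(unseen)          # exact dedup; MinHash at scale
curated.to_parquet("curated_cad_text.parquet")
```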
Who You Are
You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world.
You have deep experience with data engineering for ML, including distributed systems, data extraction, transformation, and loading, and large-scale data processing (e.g. PySpark, Beam, Ray, or similar).
You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.).
You've developed data augmentation, sampling, or curation strategies that improved model performance.
You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence.
You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible.
You care deeply about data quality, reproducibility, and scalability.
You're excited to help shape the future of AI for physical design.
Bonus points if:
You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines.
You have an interest in math, geometry, topology, rendering, or computational geometry.
You've worked in 3D printing, CAD, or computer graphics domains.
Why Backflip
This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world.
You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it.
Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future.
Let's build the tools the future will be made in.
$126k-178k yearly est. 3d ago
ML Engineer: Fraud Detection & Big Data at Scale
Datavisor 4.5
Mountain View, CA
A leading security technology firm in California is seeking a skilled Data Science Engineer. You will harness the power of unsupervised machine learning to detect fraudulent activities across various sectors. Ideal candidates have experience with Java/C++, data structures, and machine learning. The company offers competitive pay, flexible schedules, equity participation, health benefits, a collaborative environment, and unique perks such as catered lunches and game nights.
$125k-177k yearly est. 4d ago
Data Engineer II (Full Time) United States
Cisco Systems, Inc. 4.8
Parkton, NC
Please note this posting is to advertise potential job opportunities. This exact role may not be open today but could open in the near future. When you apply, a Cisco representative may contact you directly if a relevant position opens.
Applications are accepted until further notice.
Meet the Team
At Cisco IT, you will join a collaborative group of builders, innovators, and change agents. Our team brings together expertise in CRM architecture, data governance, sales and partner compensation, and AI and automation.
We believe the best ideas come from diverse voices and bold thinking. You'll have the support and resources to learn, grow, and make a meaningful impact.
Your Impact
As a Data Engineer at Cisco, you will:
Build and support integrated data solutions that power business analytics and machine learning
Ensure data is high-quality, accessible, and easy to use
Enable smarter decisions, optimize marketing investments, and enhance customer experiences
Make complex data understandable and actionable
Collaborate across teams to deliver meaningful solutions for a global community
Minimum Qualifications
Enrolled in or recently graduated from a technical degree or certification program (e.g., Technical Boot Camp, Apprenticeship, Community College, or 4-Year University)
AI literacy and a cloud-first mindset
Familiarity with agile methodologies and data-driven approaches
Experience or familiarity with databases (e.g., MySQL, PostgreSQL, MongoDB) and big data frameworks (e.g., Hadoop, Spark)
Able to legally live and work in the United States without visa support or sponsorship
Preferred Qualifications
Experience with RESTful API design and development for data integration
Experience with business intelligence tools such as Tableau
Understanding of security best practices in data encryption, secure data transfer, and access controls
Knowledge of full-stack development for comprehensive data solutions
Why Cisco?
At Cisco, we're revolutionizing how data and infrastructure connect and protect organizations in the AI era - and beyond. We've been innovating fearlessly for 40 years to create solutions that power how humans and technology work together across the physical and digital worlds. These solutions provide customers with unparalleled security, visibility, and insights across the entire digital footprint.
Fueled by the depth and breadth of our technology, we experiment and create meaningful solutions. Add to that our worldwide network of doers and experts, and you'll see that the opportunities to grow and build are limitless. We work as a team, collaborating with empathy to make really big things happen on a global scale. Because our solutions are everywhere, our impact is everywhere.
We are Cisco, and our power starts with you.
Message to applicants applying to work in the U.S. and/or Canada:
Individual pay is determined by the candidate's hiring location, market conditions, job-related skillset, experience, qualifications, education, certifications, and/or training. The full salary range for certain locations is listed below. For locations not listed below, the recruiter can share more details about compensation for the role in your location during the hiring process.
U.S. employees are offered benefits, subject to Cisco's plan eligibility rules, which include medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, paid parental leave, short and long-term disability coverage, and basic life insurance. Please see the Cisco careers site to discover more benefits and perks. Employees may be eligible to receive grants of Cisco restricted stock units, which vest following continued employment with Cisco for defined periods of time.
U.S. employees are eligible for paid time away as described below, subject to Cisco's policies:
10 paid holidays per full calendar year, plus 1 floating holiday for non-exempt employees
1 paid day off for employee's birthday, paid year-end holiday shutdown, and 4 paid days off for personal wellness determined by Cisco
Non-exempt employees** receive 16 days of paid vacation time per full calendar year, accrued at a rate of 4.92 hours per pay period for full-time employees
Exempt employees participate in Cisco's flexible vacation time off program, which has no defined limit on how much vacation time eligible employees may use (subject to availability and some business limitations)
80 hours of sick time off provided on hire date and each January 1st thereafter, and up to 80 hours of unused sick time carried forward from one calendar year to the next
Additional paid time away may be requested to deal with critical or emergency issues for family members
Optional 10 paid days per full calendar year to volunteer
For non-sales roles, employees are also eligible to earn annual bonuses subject to Cisco's policies.
Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components, subject to the applicable Cisco plan. For quota-based incentive pay, Cisco typically pays as follows (a worked example appears after this list):
0.75% of incentive target for each 1% of revenue attainment up to 50% of quota;
1.5% of incentive target for each 1% of attainment between 50% and 75%;
1% of incentive target for each 1% of attainment between 75% and 100%; and
Once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation.
For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay 0% up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.
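Since the tiered schedule above is easy to misread, here is a short worked example that applies each published rate to a given attainment percentage. The handling above 100% assumes exactly 1% per 1% of attainment, whereas the plan text says "at or above 1%".

```python
# Worked example of the quota-based incentive schedule quoted above.
def incentive_pct(attainment: float) -> float:
    """Percent of incentive target earned at a given % of quota attainment."""
    tiers = [                      # (tier ceiling, rate per 1% attainment)
        (50.0, 0.75),              # up to 50% of quota
        (75.0, 1.5),               # between 50% and 75%
        (100.0, 1.0),              # between 75% and 100%
    ]
    earned, floor = 0.0, 0.0
    for ceiling, rate in tiers:
        earned += rate * (min(attainment, ceiling) - floor)
        floor = ceiling
        if attainment <= ceiling:
            return earned
    # Above 100%: plan says "at or above 1%"; assume exactly 1% here.
    return earned + 1.0 * (attainment - 100.0)

print(incentive_pct(80.0))  # 0.75*50 + 1.5*25 + 1.0*5 = 80.0% of target
```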
The applicable full salary ranges for this position, by specific state, are listed below:
New York City Metro Area:
$123,600.00 - $200,100.00
Non-Metro New York state & Washington state:
$109,900.00 - $181,600.00
* For quota-based sales roles on Cisco's sales plan, the ranges provided in this posting include base pay and sales target incentive compensation combined.
** Employees in Illinois, whether exempt or non-exempt, will participate in a unique time off program to meet local requirements.
$123.6k-200.1k yearly 2d ago
Data Architect
NLB Services 4.3
Neenah, WI
Hi All,
I am hiring for the below-mentioned role.
Role: AWS Data Architect
Job Type - Contract
We are seeking a highly skilled AWS Data Architect to design, build, and optimize cloud-based data platforms that enable scalable analytics and business intelligence. The ideal candidate will have deep expertise in AWS cloud services, data modeling, data lakes, ETL pipelines, and big data ecosystems.
Key Responsibilities
Design and implement end-to-end data architectures on AWS (data lakes, data warehouses, and streaming solutions).
Define data ingestion, transformation, and storage strategies using AWS native services (Glue, Lambda, EMR, S3, Redshift, Athena, etc.); see the sketch after this list.
Architect ETL/ELT pipelines and ensure efficient, secure, and reliable data flow.
Collaborate with data engineers, analysts, and business stakeholders to translate business needs into scalable data solutions.
Establish data governance, security, and compliance frameworks following AWS best practices (IAM, KMS, Lake Formation).
Optimize data systems for performance, cost, and scalability.
Lead data migration projects from on-prem or other clouds to AWS.
Provide technical guidance and mentorship to data engineering teams.
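To make one of the service combinations above concrete: a hedged boto3 sketch that runs an Athena query over an S3 data lake and polls for completion. The database, table, region, and output location are placeholders.

```python
# Minimal Athena-over-S3 sketch with boto3. All names are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "lake_db"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```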
Required Skills & Qualifications
10+ years of experience in data architecture, data engineering, or cloud architecture.
Strong hands-on experience with AWS services:
Storage & Compute: S3, EC2, Lambda, ECS, EKS
Data Processing: Glue, EMR, Kinesis, Step Functions
Lead Technical Recruiter
Next Level Business Services, Inc.
Consulting | Analytics | Staff Augmentation
E-Mail: *******************************
An ISO 27001 and 20000-1 Certified & Minority Business Enterprise
$80k-112k yearly est. 2d ago
Lead Data Engineer
Relativity 4.7
Chicago, IL
Posting Type
Hybrid
Relativity powers the world's most critical legal, compliance, and investigative work. From corporate compliance to human rights, our platform must preserve trust in global investigations while handling petabytes of sensitive evidence. Our team builds the distributed data backbone that powers AI-assisted evidence analysis across billions of documents daily. We are at the forefront of Legal Data Intelligence, building technology that helps organizations Organize Data, Discover Truth, and Act on It.
As a Lead Data Engineer, you are the technical leader for your team: hands-on, design focused, and accountable for elevating engineering quality. You'll drive architectural decisions, guide how systems are built, and mentor others to deliver high-performance, cloud-native data systems. You will work across modern data tooling, including Databricks, Kafka, dbt, and Snowflake, to directly support some of the most mission-critical legal processes worldwide.
Job Description and Requirements
What You'll Do
* Architect, build, and operate distributed data pipelines and services that process massive volumes of structured and unstructured data (see the sketch after this list).
* Design and deliver scalable, secure, and observable data integration and ETL solutions using dbt to ensure end-to-end data integrity and availability.
* Partner with AI/ML engineers and data scientists to build data foundations that accelerate model training, experimentation, and production deployment.
* Drive improvements in data quality, lineage, reliability, and SLAs, shaping engineering standards for your team.
* Contribute reusable patterns, frameworks, and best practices that strengthen our Azure cloud-native data platform.
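For context, a minimal sketch of the Kafka side of such a pipeline: a confluent-kafka consumer that batches document events toward a downstream sink. Broker, topic, and group settings are illustrative, and the write_to_warehouse() helper is hypothetical.

```python
# Minimal consumer sketch: poll a topic, batch records, flush downstream.
# Connection settings and the sink helper are assumptions.
import json
from confluent_kafka import Consumer

def write_to_warehouse(rows):
    """Hypothetical sink; replace with the real warehouse loader."""
    print(f"flushing {len(rows)} records")

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "doc-metadata-etl",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["document-events"])

batch = []
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        batch.append(json.loads(msg.value()))
        if len(batch) >= 500:          # flush in bounded batches
            write_to_warehouse(batch)
            batch.clear()
finally:
    consumer.close()
```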
What You'll Bring
* 6+ years of experience in data engineering, backend engineering, or data architecture with substantial work on distributed systems and cloud-first data platforms.
* Deep understanding of data modeling, ETL/ELT design, and workflow orchestration.
* Proven ability to lead technical decisions within a team: breaking down complex problems, defining trade-offs, and driving alignment.
* Strong communication skills and the ability to explain complex data concepts to diverse audiences.
* A commitment to building systems that create clarity and insight through data, not just moving data from point A to point B.
Preferred Experience
* Expertise with Databricks and/or Azure-based data architectures.
* Background building systems that support ML workflows or AI-driven applications.
* Experience leading modernization or migration of large-scale data platforms.
* Strong understanding of observability and cost-optimization strategies for cloud-based data systems.
Relativity is committed to competitive, fair, and equitable compensation practices.
This position is eligible for total compensation which includes a competitive base salary, an annual performance bonus, and long-term incentives.
The expected salary range for this role is between the following values:
$150,000 and $224,000
The final offered salary will be based on several factors, including but not limited to the candidate's depth of experience, skill set, qualifications, and internal pay equity. Hiring at the top end of the range would not be typical, to allow for future meaningful salary growth in this position.
Suggested Skills:
Documentation, Innovation, Leadership, Problem Solving, Process Improvements, Project Management, Quality Assurance (QA), Risk Management, Technical Knowledge, Troubleshooting
$75k-97k yearly est. 3d ago
Staff Data Engineer
Relativity 4.7
Chicago, IL
Posting Type
Remote
Relativity powers some of the world's most critical legal, compliance, and investigative work. From corporate compliance to human rights, our platform safeguards global investigations while handling petabytes of sensitive evidence. Our teams build the distributed data foundation that drives AI-assisted analysis across billions of documents every day. We are defining the future of Legal Data Intelligence-technology that helps organizations Organize Data, Discover Truth, and Act on It.
As a Staff Data Engineer, you will be a hands-on technical leader responsible for designing, building, and optimizing high-performance data systems operating at massive scale. You'll work across data orchestration tools, modern processing frameworks, and distributed compute to ensure our platform is resilient, scalable, and ready to power the next generation of AI capabilities.
This role blends deep technical execution with strategic leadership. You'll shape architectural direction, provide design oversight across teams, and elevate data engineering standards across the organization. Your work will directly influence some of the world's most sensitive legal processes, from real-time investigations to large-scale litigation.
Job Description and Requirements
What You'll Do
Architect, build, and operate distributed data systems that process massive volumes of structured and unstructured data.
Design and deliver scalable, secure, and observable data integration and ETL pipelines using Databricks, dbt, and modern workflow orchestration tools, ensuring end-to-end data quality and availability.
Partner with AI/ML engineers and data scientists to develop robust data foundations that accelerate model training, experimentation, and production deployment.
Drive innovation in data infrastructure, workflow automation, and platform resiliency to expand analytical and AI capabilities.
Collaborate closely with product, AI/ML, and platform teams to deliver data solutions that support critical business and customer outcomes.
Provide architectural direction and design guidance across projects, ensuring alignment with platform strategy and engineering best practices.
Mentor engineers, contribute to engineering standards, and champion a culture of high performance and reliability.
What You'll Bring
8+ years of experience in data engineering, data architecture, or backend systems development at scale.
Proven ability to design and optimize complex, high-volume data systems and pipelines for performance, scalability, security, and reliability.
Deep expertise in data modeling, ETL/ELT methodologies, and workflow orchestration.
A strong record of technical leadership: guiding architecture, leading major data initiatives, and mentoring engineers.
Excellent communication and collaboration skills, with the ability to articulate complex concepts to technical and non-technical audiences.
A passion for using data to drive insight and clarity, not just to move pipelines.
Preferred Experience
Hands-on experience with modern big-data and streaming technologies (e.g., Spark/Databricks, Kafka) in AWS, Azure, or GCP.
Expertise in orchestration and transformation tools (e.g., Airflow, dbt) and cloud data warehouses such as Snowflake or BigQuery.
Background in building data systems that power machine learning or AI-driven applications.
Experience leading modernization or migration of large-scale data platforms.
Familiarity with observability, operational excellence, and cost-optimization strategies for distributed data systems.
What Success Looks Like
First 3 months: Deliver key architectural enhancements and develop deep fluency in our platform, data domains, and stakeholder ecosystem.
6 months: Establish ownership over a major data system end-to-end, with broad authority to improve performance, scalability, and data quality.
12 months: Lead cross-team initiatives that elevate Relativity's analytics and AI capabilities; mentor others and raise the bar for data engineering across the organization.
Why You'll Love It Here
Impact that matters: Your work enables investigations, litigation, and compliance efforts that affect industries and lives around the world.
Modern technology: Build on a cutting-edge data stack and operate at a scale few companies can match.
Career growth: Opportunities to lead, innovate, and broaden your influence across AI, product, and platform ecosystems.
Inclusive culture: We value diverse perspectives and foster an environment where everyone can thrive.
Competitive rewards: Strong compensation, equity, flexible work options, and ongoing professional development.
Relativity is committed to competitive, fair, and equitable compensation practices.
This position is eligible for total compensation which includes a competitive base salary, an annual performance bonus, and long-term incentives.
The expected salary range for this role is between the following values:
$174,000 and $262,000
The final offered salary will be based on several factors, including but not limited to the candidate's depth of experience, skill set, qualifications, and internal pay equity. Hiring at the top end of the range would not be typical, to allow for future meaningful salary growth in this position.
Suggested Skills:
Algorithms, Automation, Debugging, Distributed Systems, Performance Tuning, Problem Solving, Project Management, Software Development, System Designs, Technical Leadership
Veeva Systems is building the industry cloud for Life Sciences to help companies work in a more efficient and connected way. Learn more about our products, vision and values, and status as a public benefit corporation on our website.
The Role
We are hiring recent university graduates to grow the next generation of Software Engineers through our Engineering Development Program.
We believe in pushing high potential people to achieve excellence. Our program is specifically designed to provide a challenging environment to learn quickly and deliver value early, equipping you with the resources to become an excellent engineer. REQUIREMENTS | We are looking for graduates who meet the following requirements:
Bachelor's degree in computer science or related field from an accredited 4 year university with a 3.0 to 4.0 GPA
Must have taken relevant C.S. classes, including at least one Compilers or Operating Systems class. The Fundamentals are important at Veeva
High work ethic. Veeva is a hard-working company
High integrity and honesty. Veeva is a PBC and a “do the right thing” company. We expect that from all employees
Excellent verbal and written English communication skills. Engineering is not all about the code, it's also about communication
0-2 years of professional software experience. We have other jobs for more experienced hires, but EDP is designed for those just getting going in their careers
Ability and desire to work in office 4 days/week for your first two years. After 2 years, you will have the flexibility to Work Anywhere
OUR TECHNOLOGY | We have a variety of different products and codebases, but in general, we use this tech stack:
System software is Java or Rust
Application logic is Java, Python, TypeScript
Front end is JavaScript, React, TypeScript
Mobile is Swift, Kotlin, React Native
THE PROCESS | Our process is different than most. It is designed to be fast, efficient and respectful. Here are the steps:
You submit your resume and a short cover letter, and take a personality test
Within one week we will notify you via email if we would like to go to the next step or not
The next step is a single 2-hour interview with a member of our tech evaluation team. Part of this is a coding exercise in the language of your choice (Java, JavaScript or Python)
Within one week after this step, we will give you an offer, or let you know that we do not wish to move forward
You will have two weeks to accept our offer or not. If you accept, we will hold a spot for you and expect you to show up on your start date. Accepting an offer and continuing to interview would be an ethical violation in our view
When you join you will be assigned to an engineering manager in your work location. It's important to know you are applying to work as an engineer in a location but not applying for a specific team/product
Compensation
Starting base pay (Cash + RSU): $115,000 in Columbus
Starting bonus of $20,000 and annual stock options which can be quite valuable if Veeva stock does well over the long term
Work Authorization: Qualified candidates must be legally authorized to be employed in the United States. Veeva does not provide sponsorship for employment visa status (e.g., H-1B, OPT, or TN status) for this employment position.
Work Environment: Veeva is a Work Anywhere company. You can choose to work in an office or remotely from home on any given day of the week. Although Veeva is Work from Anywhere, Associate Software Engineers must live within a maximum commuting distance of 45 minutes to 1 hour from their home office and must work in-office 4 days a week.
Veeva is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity or expression, religion, national origin or ancestry, age, disability, marital status, pregnancy, protected veteran status, protected genetic information, political affiliation, or any other characteristics protected by local laws, regulations, or ordinances. If you need assistance or accommodation due to a disability or special need when applying for a role or in our recruitment process, please contact us at talent_accommodations@veeva.com.