
Data Scientist jobs at KBR - 3811 jobs

  • Senior Product Data Scientist - App Safety & Insights

    Google Inc. (4.8 company rating)

    Mountain View, CA

    A leading technology company seeks a Senior Product Data Scientist in Mountain View, CA, to analyze data and provide strategic insights to enhance product decisions. Candidates should have a bachelor's in a quantitative field, with 8 years of experience in analytics, coding skills in Python, R, and SQL, and a passion for problem-solving. This role offers a competitive salary range of $156,000 to $229,000, along with a bonus, equity, and benefits.
    $156k-229k yearly 4d ago

  • Principal Data Scientist, Agentic AI Professional Services

    Amazon (4.7 company rating)

    New York, NY

    AWS is seeking an exceptional Principal Data Scientist to join our Agentic AI ProServe Experience (APEX) team. This role calls for an individual ready to tackle large-scale agentic AI challenges and drive innovative ML/AI practices that transform professional services delivery and address real-world customer needs. Reporting directly to the leader of APEX, this position is essential for revolutionizing AWS Professional Services' delivery capabilities through advanced agent-based AI systems. In addition to building ProServe Agents, you will drive their adoption across the ProServe builder community, enabling transformative customer engagements and partner enablement through AWS tools and services such as Kiro, AWS Transform, Strands, Bedrock, and AgentCore.

    The ideal candidate will possess extensive experience in agentic AI systems, multi-agent architectures, and complex data science environments, developing solutions that advance the state of the art in agent-based professional services delivery. Proficiency with machine learning, large language models, reinforcement learning, and advanced analytics techniques is essential, along with the ability to convey these technical concepts in simple terms to diverse stakeholders. The role demands both deep technical expertise and exceptional communication skills to drive adoption across the ProServe organization and partner ecosystem. (A minimal, generic agent-loop sketch follows this listing.)

    Key job responsibilities

    Scientific Leadership
    - Define and drive the scientific vision and roadmap for ProServe Agents and the agentic AI professional services portfolio
    - Lead research and development initiatives to advance the state of the art in agent-based AI systems for professional services workflows
    - Collaborate with senior scientists across AWS to pioneer novel approaches to agent learning, reasoning, and interaction
    - Publish innovative research at top-tier conferences and influence the broader scientific community
    - Evaluate emerging research and identify opportunities to integrate techniques into the ProServe Agent architecture

    Technical Excellence
    - Architect complex multi-agent systems that can effectively understand, reason about, and execute professional services workflows at enterprise scale
    - Lead the integration of Gen AI and ML-based methods across ProServe delivery practices, spearheading AI-driven agent development and adoption initiatives
    - Establish rigorous evaluation frameworks and metrics for measuring agent effectiveness, safety, and business impact
    - Develop production-ready agentic solutions leveraging AWS tools including Kiro, AWS Transform, Strands, Bedrock, and Bedrock AgentCore

    Customer & Business Impact
    - Translate complex technical concepts into clear business value propositions for ProServe builders, customers, and partners
    - Lead high-visibility technical delivery for strategic customers, demonstrating ProServe Agent capabilities
    - Drive adoption of ProServe Agents across the AWS Professional Services builder community, transforming delivery methodologies
    - Enable AWS partners on agentic delivery mechanisms and best practices
    - Analyze complex enterprise problems and design AI agent solutions that deliver measurable business outcomes
    - Collaborate effectively with ProServe delivery, product, engineering, and design teams to create and launch innovative agent-based solutions
    - Communicate complex agentic AI concepts clearly to diverse audiences across ProServe, customers, and partners

    Leadership & Mentorship
    - Provide thought leadership and technical direction on agentic AI strategies and their applications in professional services
    - Drive innovation by sharing new technical solutions and product ideas across AWS teams
    - Mentor and guide the data science, ML engineering, and ProServe builder communities
    - Foster collaboration with key stakeholders to enhance the AWS ProServe experience

    Basic Qualifications
    - Master's degree in Math, Statistics, Computer Science, or a related science field, or experience in data science, machine learning, or data mining
    - Experience building machine learning models or developing algorithms for business application

    Preferred Qualifications
    - 8+ years in a data scientist or similar role involving data extraction, analysis, statistical modeling, and communication
    - 4+ years of practical machine learning experience
    - 2+ years of experience working with or evaluating AI systems
    - Experience with machine learning and large language model fundamentals, including architecture, training/inference lifecycles, and optimization of model execution, or experience debugging, profiling, and implementing best software engineering practices in large-scale systems
    - Experience with one of the following areas: machine learning technologies, reinforcement learning, deep learning, computer vision, natural language processing (NLP), or related applications
    - Publications at top-tier peer-reviewed conferences or journals
    - Experience with AWS or cloud technologies
    - Experience in technical leadership of development, testing, and implementation of large-scale, complex technology projects
    - Experience driving collaborative projects from conception to delivery, or experience in development or technical support
    - Experience creating and delivering written and oral communications for technical and non-technical audiences

    Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company's reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

    The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and the option for supplemental life plans, EAP, mental health support, a medical advice line, flexible spending accounts, and adoption and surrogacy reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at ******************************* .

    Base salary by location (USD, annually):
    - USA, CA, Mountain View: 217,800.00 - 294,700.00
    - USA, CA, San Diego: 189,400.00 - 256,200.00
    - USA, GA, Atlanta: 189,400.00 - 256,200.00
    - USA, MA, Boston: 189,400.00 - 256,200.00
    - USA, NY, New York: 208,300.00 - 281,800.00
    - USA, TX, Austin: 189,400.00 - 256,200.00
    - USA, TX, Dallas: 189,400.00 - 256,200.00
    - USA, VA, Arlington: 189,400.00 - 256,200.00
    - USA, WA, Seattle: 189,400.00 - 256,200.00
    $92k-140k yearly est. 2d ago
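The posting above centers on agent-based AI systems built with AWS tooling. As a point of reference only, the sketch below shows the generic tool-calling loop that most agentic frameworks reduce to; the `call_model` stub and the toy tools are hypothetical placeholders, not Amazon's implementation or any AWS API.

```python
# Minimal, generic agent loop: a model proposes tool calls until it returns a final answer.
# `call_model` is a hypothetical stub standing in for any LLM API; the tools are toy examples.
from typing import Callable

def lookup_runbook(topic: str) -> str:
    return f"Runbook steps for {topic}: check logs, restart service."  # toy knowledge base

def estimate_effort(task: str) -> str:
    return f"Estimated effort for '{task}': 3 engineer-days."  # toy estimator

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_runbook": lookup_runbook,
    "estimate_effort": estimate_effort,
}

def call_model(history: list) -> dict:
    """Placeholder for an LLM call. A real agent would send `history` to a model
    and parse its response into either a tool call or a final answer."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "tool": "lookup_runbook", "input": "service outage"}
    return {"type": "final", "answer": "Follow the runbook, then file the effort estimate."}

def run_agent(user_request: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        action = call_model(history)
        if action["type"] == "final":
            return action["answer"]
        result = TOOLS[action["tool"]](action["input"])      # dispatch the requested tool
        history.append({"role": "tool", "content": result})  # feed the observation back
    return "Stopped: step budget exhausted."

print(run_agent("Our delivery pipeline is down, what do we do?"))
```

Real systems replace `call_model` with an LLM endpoint and add guardrails, memory, and evaluation around this loop; the control flow itself stays this simple.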
  • Senior Data Scientist - Experimentation

    Apple (4.8 company rating)

    New York, NY

    **Weekly Hours:** 40
    **Role Number:** 200***********

    At Apple, we work every day to create products that enrich people's lives. Our Advertising Platforms group makes it possible for people around the world to easily access informative and imaginative content on their devices while helping publishers and developers promote and monetize their work. Today, our technology and services power advertising in Search Ads in the App Store and Apple News. Our platforms are highly performant, deployed at scale, and setting new standards for enabling effective advertising while protecting user privacy.

    The Ads Experimentation team is seeking a Machine Learning Engineer and Statistician to help drive innovation. This is a hands-on role developing and supporting a state-of-the-art ads experimentation platform, and it requires partnering with cross-functional teams to coordinate the complex interdependencies inherent in application development. A successful candidate has strong technical skills and is eager to create intuitive user experiences, with a keen eye for the details that surprise and delight our customers. While developing an understanding of our product features, you will perform rigorous analyses, measure performance, make data-driven decisions, and develop scalable tools that facilitate quick and accurate decision making. (A toy difference-in-differences sketch follows this listing.)

    **Description**
    Design and build a new generation of experimentation platform for the Ads Platform organization. Apply leading-edge technologies to enable safe and data-driven launches of features that help connect Apple users and advertisers while delivering on Apple's privacy commitment through experimentation. Collaborate with stakeholders and data scientists to design experiments and statistical methods that account for marketplace dynamics and yield reliable treatment effects. Provide expert technical guidance to engineers and scientists on statistical techniques, experiment design, and data engineering. You will join and contribute to a culture that emphasizes observability and understandability, reliability, resiliency, simplicity, reusability, extensibility, scalability, velocity, and productivity. We are one team, nurturing each other's growth and supporting each other in delivering for our customers and Apple.

    **Minimum Qualifications**
    - 8+ years of experience in software and data science or statistics, with an in-depth understanding of SQL and causal inference. Industry experience in the SDLC is preferred.
    - Analysis: conduct rigorous, end-to-end analyses using SQL, Python, and statistical methods to uncover insights and improve treatment effect estimates.
    - Experience with A/B testing infrastructure and methodologies and a deep understanding of the assumptions of randomized controlled trials. Experience in marketplace experimentation is a bonus.
    - Familiarity with causal machine learning tools and technologies. Well versed in observational causal techniques such as regression, propensity score matching, difference-in-differences, regression discontinuity, and instrumental variables.
    - Design and analyze controlled experiments or counterfactual causal inference studies to estimate the incremental and long-term impact of interventions. Experience in measuring advertiser value from campaigns or algorithmic changes is a huge plus.
    - Proficiency in SQL and PySpark or Scala to conduct analyses and build data products (pipelines and dashboards) that automate experiment reporting at scale.
    - Lead the design, development, and maintenance of scalable and reliable data pipelines. Implement robust data quality checks, monitoring systems, and data lineage tracking. Advocate best practices for data engineering.
    - Strategic partnership: work directly with stakeholders, including senior leaders, to identify, scope, and prioritise high-impact questions.

    **Preferred Qualifications**
    - Advanced degree in Computer Science, Statistics, Applied Math, or a related field.
    - Skilled at operating in a cross-functional organization.
    - Ability to understand ambiguous and complex problems, design and execute analytical approaches, and turn analysis into clear and concise takeaways that drive action.
    - Curious business attitude with a proven ability to seek out projects with a sense of ownership.
    - Excellent communication, social, and presentation skills.
    - Desire to work in a fast-paced and challenging work environment.
    - Mentor junior engineers and scientists on the team.

    Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (*********************************************************************************************** ).
    $122k-162k yearly est. 2d ago
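Of the causal-inference techniques named in this listing, difference-in-differences reduces to an interaction term in an ordinary regression. The sketch below runs that estimator on synthetic data with invented column names (`treated`, `post`, `metric`); it only illustrates the method and has nothing to do with Apple's actual platform.

```python
# Difference-in-differences as an interaction term in OLS, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = unit belongs to the treated group
    "post": rng.integers(0, 2, n),      # 1 = observation taken after the launch
})
true_effect = 0.5
df["metric"] = (
    1.0
    + 0.3 * df["treated"]                        # baseline difference between groups
    + 0.2 * df["post"]                           # common time trend
    + true_effect * df["treated"] * df["post"]   # the causal effect we want to recover
    + rng.normal(0, 1, n)
)

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("metric ~ treated * post", data=df).fit()
print(model.params["treated:post"], model.bse["treated:post"])
```

With enough rows the estimated coefficient lands near the simulated effect of 0.5; the same interaction pattern underlies the panel versions used in marketplace experimentation.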
  • Senior Product Data Scientist, Product, App Safety Engineering

    Google Inc. (4.8 company rating)

    Mountain View, CA

    Minimum qualifications: Bachelor's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field, and 8 years of experience using analytics to solve product or business problems, performing statistical analysis, and coding (e.g., Python, R, SQL), or 5 years of experience with an advanced degree.

    Preferred qualifications: Master's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.

    About the job
    Help serve Google's worldwide user base of more than a billion people. Data Scientists provide quantitative support, market understanding, and a strategic perspective to our partners throughout the organization. As a data-loving member of the team, you serve as an analytics expert for your partners, using numbers to help them make better decisions. You will weave stories with meaningful insight from data. You'll make critical recommendations for your fellow Googlers in Engineering and Product Management. You relish tallying up the numbers one minute and communicating your findings to a team leader the next.

    The Platforms and Devices team encompasses Google's various computing software platforms across environments (desktop, mobile, applications), as well as our first-party devices and services that combine the best of Google AI, software, and hardware. Teams across this area research, design, and develop new technologies to make our users' interaction with computing faster and more seamless, building innovative experiences for our users around the world.

    The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process. Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

    Responsibilities
    - Perform analysis utilizing relevant tools (e.g., SQL, R, Python).
    - Help solve problems, narrowing down multiple options into the best approach, and take ownership of open-ended, ambiguous business problems to reach an optimal solution.
    - Build new processes, procedures, methods, tests, and components with foresight to anticipate and address future issues.
    - Report on Key Performance Indicators (KPIs) to support business reviews with the cross-functional/organizational leadership team.
    - Translate analysis results into business insights or product improvement opportunities.
    - Build and prototype analysis and business cases iteratively to provide insights at scale.
    - Develop knowledge of Google data structures and metrics, advocating for changes where needed for product development.
    - Influence across teams to align resources and direction.

    Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire. Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting. To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
    $149k-192k yearly est. 4d ago
  • Senior Data Scientist

    Capgemini (4.5 company rating)

    New York, NY

    Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

    Develop and implement a set of techniques or analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualization software. Apply data mining, data modeling, natural language processing, and machine learning to extract and analyze information from large structured and unstructured datasets. Visualize, interpret, and report data findings. May create dynamic data reports.

    **Job Description - Grade Specific**
    Responsible for helping the company leverage data, working with the team of data scientists and engineers to provide valuable direction and make informed decisions concerning the product, growth, and engagement.

    Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.

    Ref. code: 240304 | Posted on: Nov 21, 2025 | Experience Level: Experienced Professionals | Contract Type: Permanent | Location: New York, NY, US | Brand: Capgemini | Professional Community: Data & AI

    Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
    $86k-114k yearly est. 2d ago
  • Data Scientist II

    Pyramid Consulting, Inc. (4.1 company rating)

    Cambridge, MA

    Immediate need for a talented Data Scientist II. This is an 11+ months contract opportunity with long-term potential and is located in Cambridge, MA (onsite). Please review the job description below and contact me ASAP if you are interested.

    Job ID: 25-96101
    Pay Range: $66 - $72/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).

    Key Responsibilities:
    - Leverage bioinformatics, systems biology, statistics, and machine learning methods to analyze high-throughput omics datasets, with a specific focus on novel target and biomarker discovery in Neuroscience. (A toy dimensionality-reduction sketch follows this listing.)
    - Lead computational analyses and data integration projects involving genomic, transcriptomic, proteomic, and other multi-omics data.
    - Provide high-quality data analysis and timely support for target and biomarker discovery projects supporting the organization's growing Neuroscience portfolio.
    - Keep up to date with the latest bioinformatics analysis methods, software, and databases, integrating new methodologies into existing frameworks to enhance data analysis capabilities.
    - Work with experimental biologists, functional area experts, and clinical scientists to support drug discovery and development programs at various stages.
    - Provide computational biology / data science input into research strategy and experimental design, provide bioinformatics input, and assist in interpreting results from both in-vitro and in-vivo studies.
    - Communicate study results effectively to the project team and wider scientific community through written and verbal means, including proposals for further experiments, presentations at internal and external meetings, and publications in leading journals.

    Key Requirements and Technology Experience:
    - Must-have skills: bioinformatics / computational biology, multi-omics data analysis, machine learning and statistical modeling, biological data integration, Python and/or R, HPC or cloud computing environments, Git / GitHub.
    - Demonstrated expertise in bioinformatics, computational biology, machine learning, multi-omics data analysis, and biological data integration and interpretation.
    - Extensive and demonstrated experience in the computational analysis of multi-modal and multi-scale (e.g., single-cell, spatial) molecular profiles of patient-derived samples.
    - Proficient in one or more programming languages (e.g., Python, R) and competent with HPC environments and/or cloud-based platforms.
    - Experience with version control systems, such as Git (e.g., GitHub).
    - Good working knowledge of public and proprietary bioinformatics databases, resources, and tools.
    - Familiarity with public repositories of DNA, RNA, protein, single-cell, and spatial profiling data.
    - Ability to critically evaluate scientific research and apply novel informatics methods in translational applications.
    - Strong problem-solving skills, self-motivation, attention to detail, and the ability to handle multiple projects.
    - Proven ability to conduct research individually and collaboratively.
    - Proven track record of contributions to peer-reviewed publications in the field of bioinformatics or computational biology.
    - Excellent communication skills (written, presentation, and oral).
    - PhD in Computational Biology, Bioinformatics, Biostatistics, Computer Science, or a related discipline with a minimum of 2 years of academic or industry experience, or MSc in one of these disciplines with a minimum of 5 years of academic or industry experience.
    - Experience analyzing neuroscience datasets and working knowledge of neuroscience, especially neurodegenerative diseases.
    - In-depth understanding of drug target and biomarker identification in an industry setting.

    Our client is a leading pharmaceutical company, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $66-72 hourly 3d ago
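Multi-omics analysis of the kind this role describes usually begins with an exploratory dimensionality-reduction pass over a samples-by-features expression matrix. The sketch below is a generic illustration on random data using scikit-learn; the matrix shape and preprocessing choices are assumptions, not any specific study pipeline.

```python
# Toy example: log-transform and PCA on a synthetic samples x genes expression matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
counts = rng.poisson(lam=5.0, size=(60, 2000))       # 60 samples, 2000 genes (synthetic)

log_expr = np.log1p(counts)                           # variance-stabilizing log transform
scaled = StandardScaler().fit_transform(log_expr)     # z-score each gene across samples

pca = PCA(n_components=10)
scores = pca.fit_transform(scaled)                    # per-sample coordinates in PC space
print(scores.shape, pca.explained_variance_ratio_[:3])
```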
  • GenAI Engineer-Data Scientist

    Capgemini (4.5 company rating)

    Seattle, WA

    Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

    About the job you're considering
    We are seeking a passionate and innovative GenAI Engineer/Data Scientist to join our team. This role involves developing GenAI solutions and predictive AI models, deploying them in production environments, and driving the integration of AI technologies across our business operations. As a key member of our AI team, you will collaborate with diverse teams to design solutions that deliver tangible business value through AI-driven insights.

    Your Role
    - Familiarity with API architecture and components such as external interfacing, traffic control, runtime execution of business logic, data access, authentication, and deployment. Key skills include understanding of URLs and API endpoints, HTTP requests, authentication methods, response types, JSON/REST, parameters and data filtering, error handling, debugging, rate limits, tokens, integration, and documentation.
    - Develop generative and predictive AI models (including NLP, computer vision, etc.).
    - Familiarity with cloud platforms (e.g., Azure, AWS, GCP) and big data tools (e.g., Databricks, PySpark) to develop AI solutions.
    - Familiarity with intelligent autonomous agents for complex tasks and multimodal interactions.
    - Familiarity with agentic workflows that utilize AI agents to automate tasks and improve operational efficiency.
    - Deploy AI models into production environments, ensuring scalability, performance, and optimization.
    - Monitor and troubleshoot deployed models and pipelines for optimal performance.
    - Design and maintain data pipelines for efficient data collection, processing, and storage (e.g., data lakes, data warehouses).

    Required Qualifications:
    - Minimum of 1 year of professional experience in AI, application development, machine learning, or a similar role.
    - Experience in model deployment, MLOps, model monitoring, and managing data/model drift.
    - Experience with predictive AI (e.g., regression, classification, clustering) and generative AI models (e.g., GPT, Claude LLM, Stable Diffusion).
    - Bachelor's or greater degree in Machine Learning, AI, or equivalent professional experience.

    Capgemini offers a comprehensive, non-negotiable benefits package to all regular, full-time employees. In the U.S. and Canada, available benefits are determined by local policy and eligibility:
    - Paid time off based on employee grade (A-F), defined by policy: vacation (12-25 days, depending on grade), company paid holidays, personal days, sick leave
    - Medical, dental, and vision coverage (or provincial healthcare coordination in Canada)
    - Retirement savings plans (e.g., 401(k) in the U.S., RRSP in Canada)
    - Life and disability insurance
    - Employee assistance program
    - Other benefits as provided by local policy and eligibility
    $96k-127k yearly est. 5d ago
  • Data Scientist Gen AI

    EXL (4.5 company rating)

    Atlanta, GA

    Job Title: Data Scientist - GenAI
    Work Experience: 5+ years
    On-site requirement: 4 days per week at the Atlanta office

    We are looking for a highly capable and innovative Data Scientist with experience in Generative AI to join our Data Science Team. You will lead the development and deployment of GenAI solutions, including LLM-based applications, prompt engineering, fine-tuning, embeddings, and retrieval-augmented generation (RAG) for enterprise use cases. The ideal candidate has a strong foundation in machine learning and NLP, with hands-on experience in modern GenAI tools and frameworks such as OpenAI, LangChain, Hugging Face, Vertex AI, Bedrock, or similar.

    Key Responsibilities:
    - Design and build Generative AI solutions using Large Language Models (LLMs) for business problems across domains like customer service, document automation, summarization, and knowledge retrieval.
    - Fine-tune or adapt foundation models using domain-specific data.
    - Implement RAG pipelines, embedding models, and vector databases (e.g., FAISS, Pinecone, ChromaDB). (A minimal retrieval sketch follows this listing.)
    - Collaborate with data engineers, MLOps, and product teams to build end-to-end AI applications and APIs.
    - Develop custom prompts and prompt chains using tools like LangChain, LlamaIndex, PromptFlow, or custom frameworks.
    - Evaluate model performance, mitigate bias, and optimize accuracy, latency, and cost.
    - Stay up to date with the latest trends in LLMs, transformers, and GenAI architecture.

    Required Skills:
    - 5+ years of experience in Data Science / ML, with 1+ year of hands-on work on LLM / GenAI projects.
    - Strong Python programming skills, especially in libraries such as Transformers, LangChain, scikit-learn, PyTorch, or TensorFlow.
    - Experience with OpenAI (GPT-4), Claude, Mistral, LLaMA, or similar models.
    - Knowledge of vector search, embedding models (e.g., BERT, Sentence Transformers), and semantic search techniques.
    - Ability to build scalable AI workflows and deploy them via APIs or web apps (e.g., FastAPI, Streamlit, Flask).
    - Familiarity with cloud platforms (AWS/GCP/Azure) and MLOps best practices.
    - Excellent communication skills with the ability to translate technical solutions into business impact.

    Preferred Qualifications:
    - Experience with prompt tuning, few-shot learning, or LoRA-based fine-tuning.
    - Knowledge of data privacy and security considerations in GenAI applications.
    - Familiarity with enterprise architecture, SDLC, or building GenAI use cases in regulated domains (e.g., finance, insurance, healthcare).

    For more information on benefits and what we offer please visit us at **************************************************
    $67k-91k yearly est. 2d ago
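The RAG responsibilities in this listing boil down to a retrieval step: embed documents, index them in a vector store, and fetch the nearest neighbors for a query before prompting an LLM. The sketch below uses sentence-transformers and FAISS, both of which appear in the posting's ecosystem; the model name and toy documents are placeholders, not EXL's stack.

```python
# Minimal retrieval step of a RAG pipeline: embed, index, search.
# Requires: pip install sentence-transformers faiss-cpu
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 by phone.",
    "Claims must include the original receipt.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")           # example embedding model
doc_vecs = np.asarray(model.encode(docs), dtype="float32")
faiss.normalize_L2(doc_vecs)                              # cosine similarity via inner product

index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = "How long does a refund take?"
q = np.asarray(model.encode([query]), dtype="float32")
faiss.normalize_L2(q)
scores, ids = index.search(q, 2)
context = [docs[i] for i in ids[0]]                       # passages to place in the LLM prompt
print(context, scores[0])
```

A production pipeline swaps the in-memory index for a managed vector database and feeds `context` into a prompt template, but the embed-index-search shape stays the same.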
  • Senior Applications Consultant - Workday Data Consultant

    Capgemini (4.5 company rating)

    San Francisco, CA

    Job Description - Senior Applications Consultant - Workday Data Consultant (054374)

    Qualifications & Experience:
    - Certified in Workday HCM
    - Experience in Workday data conversion
    - At least one implementation as a data consultant
    - Ability to work with clients on data conversion requirements and load data into Workday tenants
    - Flexible to work across the delivery landscape, including Agile applications development, support, and deployment
    - Valid US work authorization (no visa sponsorship required)
    - 6-8 years overall experience (minimum 2 years relevant), Bachelor's degree
    - SE Level 1 certification; pursuing Level 2
    - Experience in package configuration, business analysis, architecture knowledge, technical solution design, vendor management

    Responsibilities:
    - Translate business cases into detailed technical designs
    - Manage operational and technical issues, translating blueprints into requirements and specifications
    - Lead integration testing and user acceptance testing
    - Act as stream lead guiding team members
    - Participate as an active member within technology communities

    Capgemini is an Equal Opportunity Employer encouraging diversity and providing accommodations for disabilities. All qualified applicants will receive consideration without regard to race, national origin, gender identity or expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law. Physical, mental, or environmental demands may be referenced. Reasonable accommodations will be considered where possible.
    $101k-134k yearly est. 3d ago
  • Senior Workday Data Consultant & Applications Lead

    Capgemini (4.5 company rating)

    San Francisco, CA

    A leading consulting firm in San Francisco seeks a Senior Applications Consultant specializing in Workday Data Conversion. The ideal candidate will be certified in Workday HCM and have significant experience with data conversion processes. Responsibilities include translating business needs into technical designs, managing issues, and leading testing efforts. Candidates must possess a Bachelor's degree and a minimum of 6 years of experience, with at least 2 in a relevant role. This position requires valid US work authorization.
    $101k-134k yearly est. 3d ago
  • Data Scientist

    Talent Software Services (3.6 company rating)

    Novato, CA

    Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA.

    The client's Data Science organization is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas of the client. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions. The Data Science team develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions across the client. The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at the data, data structure, and documentation levels, following international standards and GCP. The DMS Center of Risk Based Quality Management (RBQM) sub-function is responsible for the implementation of a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management.

    The RBQM Data Scientist supports central monitoring and risk-based quality management (RBQM) for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with a strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies. (An illustrative KRI calculation follows this listing.)

    Primary Responsibilities/Accountabilities: The RBQM Data Scientist may perform a range of the following responsibilities, depending upon the study's complexity and development stage:
    - Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies.
    - Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets.
    - Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review.
    - Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient.
    - Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes.
    - Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics.

    Qualifications:
    - PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field.
    - Relevant clinical development experience (programming, RBM/RBQM, Data Management): PhD, 3+ years; MS, 5+ years; BA/BS, 8+ years.
    - Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets).
    - Hands-on experience working with clinical trial data.
    - Proficiency with Microsoft Word, Excel, and PowerPoint.

    Technical - Preferred / Strong Plus:
    - Experience with RAVE EDC.
    - Awareness or working knowledge of CDISC, CDASH, and SDTM standards.
    - Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms.
    - Knowledge of GCP, ICH, and FDA guidance related to clinical trials and risk-based monitoring.
    - Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs.
    - Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams.
    - Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
    $99k-138k yearly est. 1d ago
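The KRIs this role runs are implemented as validated SAS programs; since there are no SAS examples on this page, here is the same kind of metric (a per-site adverse-event rate flagged against the study average) sketched in Python on made-up data, purely to illustrate what a KRI computation looks like. The threshold and column names are invented.

```python
# Illustrative KRI: adverse-event rate per site, flagged when far from the study mean.
# Data and the flag threshold are made up; production versions run as validated SAS programs.
import pandas as pd

ae = pd.DataFrame({
    "site": ["S01", "S01", "S02", "S03", "S03", "S03"],
    "subject": ["101", "102", "201", "301", "302", "303"],
    "has_ae": [1, 0, 1, 1, 1, 0],
})

per_site = ae.groupby("site")["has_ae"].agg(subjects="count", ae_rate="mean").reset_index()
mean, sd = per_site["ae_rate"].mean(), per_site["ae_rate"].std(ddof=0)
per_site["z"] = (per_site["ae_rate"] - mean) / sd if sd > 0 else 0.0
per_site["flag"] = per_site["z"].abs() > 1.5          # illustrative threshold, not a QTL
print(per_site)
```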
  • Machine Learning Engineer - Backend/Data Engineer: Agentic Workflows

    Apple Inc. (4.8 company rating)

    Sunnyvale, CA

    We design, build, and maintain infrastructure to support agentic workflows for Siri. Our team is in charge of the data generation, introspection, and evaluation frameworks that are key to efficiently developing foundation models and agentic workflows for Siri applications. On this team you will have the opportunity to work at the intersection of cutting-edge foundation models and products. (A small, generic data-pipeline sketch follows this listing.)

    Minimum Qualifications
    - Strong background in computer science: algorithms, data structures, and system design
    - 3+ years of experience in large-scale distributed system design, operation, and optimization
    - Experience with SQL/NoSQL database technologies, data warehouse frameworks like BigQuery/Snowflake/Redshift/Iceberg, and data pipeline frameworks like GCP Dataflow/Apache Beam/Spark/Kafka
    - Experience processing data for ML applications at scale
    - Excellent interpersonal skills; able to work independently as well as cross-functionally

    Preferred Qualifications
    - Experience fine-tuning and evaluating Large Language Models
    - Experience with vector databases
    - Experience deploying and serving LLMs

    At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits. Note: Apple benefit, compensation, and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.

    Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
    $147.4k-272.1k yearly 2d ago
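Apache Beam is one of the pipeline frameworks this listing names. As a small, generic illustration (not Apple's infrastructure), the sketch below filters malformed records and aggregates counts with Beam's Python SDK, the shape of preprocessing step an ML data pipeline performs at scale. The records and field names are invented.

```python
# Tiny Apache Beam pipeline: drop malformed records, count valid ones per intent.
# Runs locally with the DirectRunner; records and field names are illustrative only.
import json
import apache_beam as beam

raw_records = [
    '{"intent": "set_timer", "ok": true}',
    '{"intent": "play_music", "ok": true}',
    'not valid json',
    '{"intent": "set_timer", "ok": false}',
]

def parse(line):
    try:
        return [json.loads(line)]   # emit zero or one parsed records
    except json.JSONDecodeError:
        return []

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(raw_records)
        | "Parse" >> beam.FlatMap(parse)
        | "KeepValid" >> beam.Filter(lambda r: r.get("ok") is True)
        | "KeyByIntent" >> beam.Map(lambda r: (r["intent"], 1))
        | "CountPerIntent" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```

The same pipeline graph runs unchanged on a distributed runner such as Dataflow or Spark, which is the point of writing it in Beam.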
  • Data Scientist (Active Secret Clearance required)

    Compqsoft (4.0 company rating)

    Huntsville, AL

    Job Title: Data Scientist
    Duration: Long Term
    Clearance: Active Secret

    Under general direction, apply knowledge of mathematics, statistics, modeling, business analysis, and technology to transform high volumes of complex data into analytic solutions. Develop and maintain data visualizations using Tableau. Evaluate the accuracy and quality of data sources as well as the designed models. Evaluate new and existing data sources, processes, and architecture to develop recommendations. May use expertise to design, develop code, test, and debug software in multiple programming languages. May work in one or several areas such as equipment or software design, engineering evaluation, test configuration management procedures, statistical analysis, and modeling.

    Qualifications:
    - Strong Tableau visualization development skills
    - Strong Oracle data analysis skills and ability to write/read SQL
    - Experience in data analytics, predictive analytics, mathematics, statistics, or a similar field
    - Experience developing predictive or statistical models such as regressions, decision trees, or neural networks, developing simulations, or performing optimizations using ILOG CPLEX, AnyLogic, or a similar software package
    - Demonstrated skill and experience in analyzing data using Excel or other methods
    - Experience using SQL or another programming language to write complex queries, perform data analysis, and extract and manipulate data
    - Experience in data preparation, manipulation, and cleansing for large datasets

    Education and years of experience: MA/MS and 10+ years of experience, or BA/BS with 12+ years of experience.

    About Us: CompQsoft Inc. was established in 1997, with headquarters in Houston, TX and an office in Leesburg, VA. CompQsoft offers a range of comprehensive Cyber Security, Infrastructure, Cloud solutions, ERP implementation, Business Intelligence, Application development, Ecommerce applications, and Management consulting services. CompQsoft is a certified CMMI Level 3 practitioner for Development and Services, and is ISO 9001:2015, ISO 27001:2013, and ISO 20000-1:2011 certified. CompQsoft is a fast-growing company with a strategy and methodology that is strongly focused on the success of our clients, predominantly the Federal government. CompQsoft provides equal opportunity in all aspects of employment and in the working environment to all employees and applicants. CompQsoft does not take any non-merit factors like race, color, religion, sex (gender), mental/physical disability, or age into account for purposes of recruitment, hiring, and development.
    $61k-82k yearly est. 5d ago
  • Foundry Data Engineer: ETL Automation & Dashboards

    Data Freelance Hub (4.5 company rating)

    San Francisco, CA

    A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
    $114k-160k yearly est. 5d ago
  • Data Gov -Unity Catalog Platform Engineer

    Capgemini (4.5 company rating)

    Seattle, WA

    Job Title: Data Gov - Unity Catalog Platform Engineer (optimize distributed workspaces in Databricks leveraging Unity Catalog)
    Hiring Urgency: Immediate requirement
    Company: Capgemini
    Employment Type: Full-time, hybrid

    Summary: Capgemini is urgently seeking a Data Governance / Unity Catalog Platform Engineer to lead enterprise-level metadata and data access initiatives. The ideal candidate will have deep expertise in Collibra, Databricks, Unity Catalog, and Privacera, and will drive strategic alignment across technical and business teams. This role is open to relocation and offers the opportunity to shape scalable, secure data ecosystems.

    Your Role
    - Metadata Management: Design and implement enterprise metadata models in Collibra aligned with business goals. Integrate metadata workflows with Unity Catalog and Synaptica for enhanced discoverability.
    - Data Access Governance: Implement and govern secure data access using Privacera. Optimize distributed workspaces in Databricks leveraging Unity Catalog.
    - Standards & Best Practices: Define standards for efficient use of Databricks environments. Champion metadata governance and utilization across teams.
    - Strategic Leadership: Develop transition architecture roadmaps with clear milestones and success metrics. Align cross-functional stakeholders around a unified data discovery vision. Foster collaboration and clarity in complex, ambiguous environments.
    - Innovation & Thought Leadership: Promote innovative solutions in ontology, data access, and discovery. Serve as both a strategic leader and hands-on contributor.

    Your skills and experience:
    - Extensive hands-on experience with Databricks, Collibra, Unity Catalog, and Privacera.
    - Proven success in implementing distributed Databricks workspaces.
    - Strong background in metadata modeling, data architecture, and enterprise-scale data discovery.
    - Familiarity with data governance frameworks and compliance standards.

    Skills & Technologies: Collibra Data Governance, Metadata Management, Databricks, Unity Catalog, Privacera, Synaptica, Information Technology

    Life at Capgemini: Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer: flexible work; healthcare including dental, vision, mental health, and well-being programs; financial well-being programs such as 401(k) and the Employee Share Ownership Plan; paid time off and paid holidays; paid parental leave; family building benefits like adoption assistance, surrogacy, and cryopreservation; social well-being benefits like subsidized back-up child/elder care and tutoring; mentoring, coaching and learning programs; Employee Resource Groups; and disaster relief.
    $94k-127k yearly est. 4d ago
  • Sr Data Engineer-ETL

    Infovision Inc. (4.4 company rating)

    Denver, CO

    Job Title: Sr. Data Engineer - ETL
    Duration: Long-term
    Main Skill: Over 10+ years of experience in the software development industry. We need data engineering experience: building ETLs using Spark and SQL, real-time and batch pipelines using Kafka/Firehose, experience building pipelines with Databricks/Snowflake, and experience ingesting multiple data formats such as JSON/Parquet/Delta. (A Structured Streaming sketch follows this listing.)

    Job Description: About You
    - You have a BS or MS in Computer Science or a similar relevant field
    - You work well in a collaborative, team-based environment
    - You are an experienced engineer with 3+ years of experience
    - You have a passion for big data structures
    - You possess strong organizational and analytical skills related to working with structured and unstructured data operations
    - You have experience implementing and maintaining high-performance / high-availability data structures
    - You are most comfortable operating within cloud-based ecosystems
    - You enjoy leading projects and mentoring other team members

    Specific Skills:
    - Over 10 years of experience in the software development industry
    - Experience or knowledge of relational SQL and NoSQL databases
    - High proficiency in Python, PySpark, SQL and/or Scala
    - Experience in designing and implementing ETL processes
    - Experience in managing data pipelines for analytics and operational use
    - Strong understanding of in-memory processing and data formats (Avro, Parquet, JSON, etc.)
    - Experience or knowledge of AWS cloud services: EC2, MSK, S3, RDS, SNS, SQS
    - Experience or knowledge of stream-processing systems, e.g., Storm, Spark Structured Streaming, Kafka consumers
    - Experience or knowledge of data pipeline and workflow management tools, e.g., Apache Airflow, AWS Data Pipeline
    - Experience or knowledge of big data tools, e.g., Hadoop, Spark, Kafka
    - Experience or knowledge of software engineering tools/practices, e.g., GitHub, VSCode, CI/CD
    - Experience or knowledge in data observability and monitoring
    - Hands-on experience in designing and maintaining data schema life cycles
    - Bonus: experience in tools like Databricks, Snowflake, and Thoughtspot
    $71k-94k yearly est. 4d ago
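The real-time pipelines this listing asks for (Kafka plus Spark) usually take the Structured Streaming shape below: read from a Kafka topic, parse JSON against a schema, and write to a file sink with checkpointing. The broker address, topic name, schema, and paths are placeholders, and the Kafka connector package must be supplied at submit time.

```python
# Structured Streaming skeleton: Kafka source -> JSON parse -> Parquet sink.
# Broker, topic, schema, and paths are placeholders; run via spark-submit with the Kafka connector.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "orders")                      # placeholder topic
       .load())

orders = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("o"))
          .select("o.*"))

query = (orders.writeStream
         .format("parquet")
         .option("path", "/data/orders")                   # placeholder sink path
         .option("checkpointLocation", "/chk/orders")      # placeholder checkpoint path
         .outputMode("append")
         .start())
query.awaitTermination()
```

Swapping the sink format to "delta" gives the Databricks/Delta variant of the same pipeline; the read/parse/write structure does not change.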
  • Data Engineer

    Pyramid Consulting, Inc. (4.1 company rating)

    Dallas, TX

    Immediate need for a talented Data Engineer. This is a 6+ months contract opportunity with long-term potential and is located in Dallas, TX (hybrid). Please review the job description below and contact me ASAP if you are interested.

    Job ID: 26-00480
    Pay Range: $40 - $45/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).

    Key Responsibilities:
    - Design, develop, and optimize end-to-end data pipelines using Python and PySpark (a minimal batch ETL sketch follows this listing)
    - Build and maintain ETL/ELT workflows to process structured and semi-structured data
    - Write complex SQL queries for data transformation, validation, and performance optimization
    - Develop scalable data solutions using Azure services such as Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, and Databricks
    - Ensure data quality, reliability, and performance across data platforms
    - Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements
    - Implement best practices for data governance, security, and compliance
    - Monitor and troubleshoot data pipeline failures and performance issues
    - Support production deployments and ongoing enhancements

    Key Requirements and Technology Experience:
    - Must-have skills: Data Engineer, Azure, Python, PySpark
    - Strong proficiency in SQL for querying and data modeling
    - Hands-on experience with Python for data processing and automation
    - Solid experience using PySpark for distributed data processing
    - Experience working with Microsoft Azure data services
    - Understanding of data warehousing concepts and big data architectures
    - Experience with batch and/or real-time data processing
    - Ability to work independently and within cross-functional teams
    - Experience with Azure Databricks
    - Knowledge of data modeling techniques (star/snowflake schemas)
    - Familiarity with CI/CD pipelines and version control tools (Git)
    - Exposure to data security, access control, and compliance standards
    - Experience with streaming technologies
    - Knowledge of DevOps or DataOps practices
    - Cloud certifications (Azure preferred)

    Our client is a leading pharmaceutical company, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $40-45 hourly 3d ago
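For the batch side of the responsibilities above (PySpark ETL with data-quality checks), here is a compact, generic sketch. The storage paths and column names are placeholders; in an Azure setup like the one described they would typically point at ADLS or Databricks locations.

```python
# Batch ETL sketch: read semi-structured JSON, validate, transform, write partitioned Parquet.
# Paths and column names are placeholders for whatever the real pipeline uses.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-etl").getOrCreate()

raw = spark.read.json("/landing/claims/*.json")            # placeholder source path

# Basic data-quality gate: required fields present and amounts non-negative.
valid = raw.filter(F.col("claim_id").isNotNull() & (F.col("amount") >= 0))
rejected = raw.subtract(valid)                             # distinct rows failing the checks
rejected.write.mode("append").json("/quarantine/claims")   # keep bad rows for inspection

curated = (valid
           .withColumn("claim_date", F.to_date("claim_ts"))
           .withColumn("amount_usd", F.round("amount", 2))
           .dropDuplicates(["claim_id"]))

(curated.write
 .mode("overwrite")
 .partitionBy("claim_date")
 .parquet("/curated/claims"))                              # placeholder sink path
```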
  • ML Engineer: Fraud Detection & Big Data at Scale

    Datavisor (4.5 company rating)

    Mountain View, CA

    A leading security technology firm in California is seeking a skilled Data Science Engineer. You will harness the power of unsupervised machine learning to detect fraudulent activities across various sectors. Ideal candidates have experience with Java/C++, data structures, and machine learning. The company offers competitive pay, flexible schedules, equity participation, health benefits, a collaborative environment, and unique perks such as catered lunches and game nights.
    $125k-177k yearly est. 2d ago
  • Staff Machine Learning Data Engineer

    Backflip (3.7 company rating)

    San Francisco, CA

    Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper? Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.” Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team. We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in. If you're excited to define the next generation of hard tech, come build it with us.

    The Role
    We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD. You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data. This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution. You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.

    What You'll Do
    - Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation.
    - Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale. (A small dataset-curation sketch follows this listing.)
    - Design scalable data systems for multimodal training (text, geometry, CAD, and more).
    - Develop and automate data collection, curation, and validation workflows.
    - Collaborate with MLEs to design and execute experiments that measure and improve model performance.
    - Build tools and metrics for dataset analysis, monitoring, and quality assurance.
    - Contribute to model development through insights grounded in data, shaping what, how, and when we train.

    Who You Are
    - You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world.
    - You have deep experience with data engineering for ML, including distributed systems, data extraction, transformation, and loading, and large-scale data processing (e.g., PySpark, Beam, Ray, or similar).
    - You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.).
    - You've developed data augmentation, sampling, or curation strategies that improved model performance.
    - You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence.
    - You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible.
    - You care deeply about data quality, reproducibility, and scalability.
    - You're excited to help shape the future of AI for physical design.

    Bonus points if:
    - You are comfortable working with a variety of complex data formats, e.g., for 3D geometry kernels or rendering engines.
    - You have an interest in math, geometry, topology, rendering, or computational geometry.
    - You've worked in 3D printing, CAD, or computer graphics domains.

    Why Backflip
    This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world. You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it. Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future. Let's build the tools the future will be made in.
    $126k-178k yearly est. 1d ago
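In practice, the curation and filtering work described above often looks like the sketch below: load a Parquet shard with the Hugging Face `datasets` library (one of the formats named in the listing), drop low-quality and duplicate records, and write the filtered split back out. The file names and the length heuristic are invented placeholders, not Backflip's pipeline.

```python
# Data-curation sketch: load, filter, dedupe, and re-export a training shard.
# File names and the quality heuristic are placeholders.
from datasets import load_dataset

ds = load_dataset("parquet", data_files={"train": "shard-000.parquet"})["train"]

# Quality filter: keep records with a non-trivial text description.
ds = ds.filter(lambda r: r["description"] is not None and len(r["description"]) > 32)

# Cheap exact-duplicate removal keyed on the description text (single-process only).
seen = set()
def is_new(r):
    key = r["description"]
    if key in seen:
        return False
    seen.add(key)
    return True

ds = ds.filter(is_new)
ds.to_parquet("shard-000.curated.parquet")
print(ds)
```

Production curation adds fuzzy deduplication, format validation for the geometry/CAD modalities, and sampling weights, but it is built from the same load/filter/write primitives.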
  • Data Engineer

    Zillion Technologies, Inc. (3.9 company rating)

    Saint Louis, MO

    We're seeking an experienced Data Engineer to help design and build a cloud-native big data analytics platform on AWS. You'll work in an agile engineering team alongside data scientists and engineers to develop scalable data pipelines, analytics, and visualization capabilities.

    Key Highlights:
    - Build and enhance data pipelines and analytics using Python, R, and AWS services (Glue, Lambda, Redshift, EMR, QuickSight, SageMaker)
    - Design and support big data solutions leveraging Spark, Hadoop, and Redshift
    - Apply DevOps and Infrastructure as Code practices (Terraform, Ansible, AWS CDK)
    - Collaborate cross-functionally to align data architecture with business goals
    - Support security, quality, and operational excellence initiatives

    Requirements:
    - 7+ years of data engineering experience
    - Strong AWS cloud and big data background
    - Experience with containerization (EKS/ECR), APIs, and Linux

    Location: Hybrid in the St. Louis, MO area (onsite 2-3 days)
    $71k-97k yearly est. 2d ago
