
Data Scientist jobs at Pacific Northwest National Laboratory

- 4121 jobs
  • Senior Agentic AI Data Scientist

    PRI Technology (4.1 company rating)

    Bethlehem, PA

    We need HANDS-ON engineering leaders, not architects. MUST BE VERY SEASONED DATA SCIENCE ENGINEERS WHO ARE WILLING TO DO A SHORT ONLINE TEST. Can sit in Hudson Yards or Bethlehem, PA; 2-3 days onsite. Hybrid role: candidates must be able to work onsite in Hudson Yards, NY or Bethlehem, PA. I will not entertain out-of-state candidates.

    We're looking for a very senior Data Scientist - Agentic AI with strong hands-on experience in AI/ML, LLMs, and intelligent automation. This role will focus on building, deploying, and scaling agentic AI systems and enterprise-level generative AI solutions that transform business operations and customer experiences. You'll work on high-visibility projects alongside senior leadership, translating cutting-edge AI research into real-world impact.

    Key Responsibilities: Design and deploy agentic AI solutions to automate complex workflows. Operationalize LLMs and generative AI to process unstructured data (contracts, claims, medical records, etc.). Build autonomous agents and reasoning systems integrated into enterprise platforms. Partner with engineering and AIOps teams to move models from prototype to production. Translate research in reinforcement learning and reasoning into business-ready AI applications. Mentor junior data scientists and establish best practices for scalable AI development.

    What We're Looking For: PhD (2+ yrs) or Master's (10+ yrs) in Statistics, Computer Science, Engineering, or Applied Mathematics. 5+ years of hands-on AI/ML development experience. Strong programming skills in Python, PyTorch, TensorFlow, and LangGraph. Proven background in machine learning, optimization, and statistical modeling. Excellent communication, leadership, and cross-functional collaboration skills.
    $98k-138k yearly est. 2d ago
  • Data Scientist - Full Time Only

    Programmers.Io (3.8 company rating)

    Charlotte, NC

    The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers.

    Responsibilities: Analyze raw data: assessing quality, cleansing, and structuring for downstream processing. Design accurate and scalable prediction algorithms. Collaborate with the engineering team to bring analytical prototypes to production. Generate actionable insights for business improvements.

    Qualifications: Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.). At least 1-2 years of experience in quantitative analytics or data modeling. Deep understanding of predictive modeling, machine learning, clustering and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau).
    $69k-98k yearly est. 4d ago
  • Applied Data Scientist/ Data Science Engineer

    Programmers.Io (3.8 company rating)

    Austin, TX

    Role: Applied Data Scientist / Data Science Engineer
    Years of experience: 8+
    Job type: Full-time

    Job Responsibilities: You will be part of a team that innovates and collaborates with internal stakeholders to deliver world-class solutions with a customer-first mentality. This group is passionate about the data science field and is motivated to find opportunity in, and develop solutions for, evolving challenges. You will: Solve business and customer issues utilizing AI/ML (mandatory). Build prototypes and scalable AI/ML solutions that will be integrated into software products. Collaborate with software engineers, business stakeholders, and product owners in an Agile environment. Have complete ownership of model outcomes and drive continuous improvement.

    Essential Requirements: Strong coding skills in Python and SQL (mandatory). Machine learning knowledge (deep learning, information retrieval (RAG), GenAI, classification, forecasting, regression, etc. on large datasets) with experience in ML model deployment. Ability to work with internal stakeholders to translate business questions into quantitative problem statements. Ability to effectively communicate data science progress to non-technical internal stakeholders. Ability to lead a team of data scientists is a plus. Experience with Big Data technologies and/or software development is a plus.
    $74k-108k yearly est. 3d ago
  • Data Scientist

    Spectraforce (4.5 company rating)

    Newark, NJ

    Job Title: Data Scientist Level 1
    Duration: 12 months RTH

    As a Senior Data Scientist on the GRI Data Science team, you will partner with our diverse team of Engineers, Economists, Computer Scientists, Mathematicians, Physicists, Statisticians, and Actuaries tasked with mining our industry-leading internal data to develop new analytics capabilities for our businesses. The role requires a rare combination of sophisticated analytical expertise, business acumen, strategic mindset, client relationship skills, problem-solving, and a passion for generating business impact. This is an exciting opportunity to be a part of a strategic initiative that is evolving and growing over time! In addition to applied experience, you will bring excellent problem-solving, communication, and teamwork skills, along with agile ways of working, strong business insight, an inclusive leadership demeanor, and a continuous learning focus to all that you do.

    Here is what you can expect in a typical day:
    • Hands-on development of sophisticated data science solutions within the portfolio and technical requirements defined by the Director of Data Science.
    • Perform hands-on data analysis, model development, model training, model testing, and model deployment.
    • Continuously research new methods for problem solution, including new algorithms, modeling techniques, and data analytics techniques.
    • Write production-level code and partner with machine learning engineers to push development code into production.
    • Partner with machine learning engineers to productionize machine learning models, with data engineers to build data pipelines, and with software engineers to integrate solutions with business platforms.
    • Work closely with the business and the data science lead to recommend and develop models for customer engagement and wellness use cases.
    • Manage external vendors in the execution of the data science development process.

    The skills and expertise you bring:
    • Advanced degree (Master's, Ph.D.) in Mathematics, Statistics, Engineering, Econometrics, Physics, Computer Science, Actuarial Science, Data Science, or a comparable quantitative discipline.
    • Experience working on complex problems in which analysis of situations or data requires in-depth evaluation of various factors; exercises judgment within broadly defined practices and policies in selecting methods, techniques, and evaluation criteria for obtaining results.
    • Knowledge of the business concepts, tools, and processes needed for making sound decisions in the context of the company's business; able to create and test hypotheses for customer engagement and wellness programs.
    • Experience in research, designing experiments (e.g., A/B testing), and working with claims and customer experience data. A behavioral science background is preferred but not required.
    • Ability to learn creative skills and knowledge on an ongoing basis through self-initiative and solving challenges.
    • Excellent problem-solving, communication, and collaboration skills.

    Applied experience with several of the following:
    • Data Acquisition and Transformation: Acquiring data from disparate data sources using APIs, SQL, and NoSQL; transforming data using SQL, NoSQL, and Python; visualizing data using a diverse tool set including, but not limited to, Python and R.
    • Database Management Systems: Knowledge of how databases are structured and function in order to use them efficiently, which may include multiple data environments, cloud/AWS, primary and foreign key relationships, table design, database schemas, etc. Knowledge of how to work with (not build) relational stores (SQL skills - core/initial proficiency), unstructured stores (NoSQL), and graph/ontology stores (graph DB).
    • Model Deployment: Understanding of the MDLC (Model Development Life Cycle); CI/CD/CT pipelines (using tools like Jenkins, CloudBees, Harness, etc.); A/B testing; pipeline frameworks like MLflow, AWS SageMaker Pipelines, etc.; and model and data versioning.
    • Statistics and Computing: Exceptional understanding of Calculus, Multivariable Calculus, Linear Algebra, Differential Equations, Probability, Statistics, Applied Probability, Applied Statistics, Computer Science (programming methodologies), and Cloud. Knowledge of statistical techniques such as descriptive, inferential, and Bayesian statistics, time series analysis, etc., to extract business insights, and of experimentation to solve business problems.
    • Data Wrangling: Preparing data for further analysis; redefining and mapping raw data to generate insights; processing large datasets (structured and unstructured).
    • Machine Learning: Understanding of machine learning theory, including the mathematics underlying machine learning algorithms; expertise in applying that theory to building, training, testing, and monitoring machine learning models; understanding and expertise in NLP (natural language processing).
    • Programming Languages: Python, R, SQL, Java or Scala, Cypher
    $78k-111k yearly est. 2d ago
  • Senior Data Scientist Agentic AI

    PRI Technology (4.1 company rating)

    New York, NY

    My name is Bill Stevens, and I have a new three-month-plus contract-to-hire Senior Data Scientist Agentic AI opportunity available for a major firm with offices located in Midtown Manhattan on the West Side and Holmdel, New Jersey that could be of interest to you. Please review my specification below; I am available at any time to speak with you, so please feel free to call me.

    The work week schedule will be hybrid: three days a week in either of the firm's offices and two days remote. The onsite work site will be determined by the candidate. The ideal candidate should also possess a green card or hold U.S. citizenship. No visa entanglements and no H-1B holding company submittals.

    The firm's Data & AI team spearheads a culture of intelligence and automation across the enterprise, creating business value from advanced data and AI solutions. Their team includes data scientists, engineers, analysts, and product leaders working together to deliver AI-driven products that power growth, improve risk management, and elevate customer experience. The firm created the Data Science Lab (DSL) in response to emerging technologies, evolving consumer needs, and rapid advances in AI. The DSL expedites the transition to data-driven decision making and fosters innovation by rapidly testing, scaling, and operationalizing state-of-the-art AI.

    We are seeking a Senior Data Scientist Engineer, Agentic AI who is an experienced individual contributor with deep expertise in AI/ML and a track record of turning advanced research into practical, impactful enterprise solutions. This role focuses on building, deploying, and scaling agentic AI systems, large language models, and intelligent automation solutions that reshape how the firm operates, serves customers, and drives growth. You'll collaborate directly with senior executives on high-visibility projects to bring next-generation AI to life across the firm's products and services.

    Key Responsibilities: Design and deploy agentic AI solutions to automate complex business workflows, enhance decision-making, and improve customer and employee experiences. Operationalize cutting-edge LLMs and generative AI to process and understand unstructured data such as contracts, claims, medical records, and customer interactions. Build autonomous agents and multi-step reasoning systems that integrate with the firm's core platforms to deliver measurable business impact. Partner with data engineers and AIOps teams to ensure AI models are production-ready, scalable, and robust, from prototype to enterprise deployment. Translate research in agentic AI, reinforcement learning, and reasoning into practical solutions that support underwriting, claims automation, customer servicing, and risk assessment. Collaborate with product owners, engineers, and business leaders to define use cases, design solutions, and measure ROI. Contribute to the Data Science Lab by establishing repeatable frameworks for developing, testing, and deploying agentic AI solutions. Mentor junior data scientists and contribute to the standardization of AI/ML practices, tools, and frameworks across the firm.

    You are: Passionate about pushing the frontier of AI while applying it to solve real-world business problems. Excited by the potential of agentic AI, autonomous systems, and LLM-based solutions to transform industries. A hands-on builder who thrives on seeing AI solutions move from proof-of-concept to real-world deployment. Comfortable working in multi-disciplinary teams and engaging with senior business leaders to align AI solutions with enterprise goals.

    You have: PhD with 2+ years of experience, or a Master's degree with 4+ years of experience, in Statistics, Computer Science, Engineering, Applied Mathematics, or a related field. 3+ years of hands-on AI modeling/development experience. Strong theoretical foundations in probability and statistics. Strong programming skills in Python, including PyTorch, TensorFlow, and LangGraph. Solid background in machine learning algorithms, optimization, and statistical modeling. Excellent communication skills and the ability to collaborate cross-functionally with Product, Engineering, and other disciplines at both the leadership and hands-on level. Excellent analytical and problem-solving abilities with superb attention to detail. Proven experience providing technical leadership and mentoring to data scientists, and strong management skills with the ability to monitor and track performance for enterprise success.

    This position pays $150.00 per hour on a W-2 hourly basis or $175.00 per hour on a Corp basis. The Corp rate is for independent contractors only and not third-party firms. No visa entanglements and no H-1B holding companies. The interview process will include an initial phone or virtual interview screening. Please let me know your interest in this position and your availability to interview and start, along with a copy of your recent resume, or feel free to call me at any time with any questions.

    Regards,
    Bill Stevens
    Senior Technical Recruiter
    PRI Technology
    Denville, New Jersey 07834
    **************
    ******************************
    $107k-150k yearly est. 4d ago
  • Data Scientist

    Unisys (4.6 company rating)

    Reston, VA

    • Collect, clean, and preprocess large datasets from multiple sources.
    • Apply statistical analysis and machine learning techniques to solve business problems.
    • Build predictive models and algorithms to optimize processes and improve outcomes.
    • Develop dashboards and visualizations to communicate insights effectively.
    • Collaborate with cross-functional teams (Product, Engineering, Risk, Marketing) to identify opportunities for leveraging data.
    • Ensure data integrity, security, and compliance with organizational standards.
    • Stay current with emerging technologies and best practices in data science and AI.

    Required Qualifications
    • Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
    • Strong proficiency in Python, R, and SQL, and experience with data manipulation libraries (e.g., Pandas, NumPy).
    • Hands-on experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
    • Solid understanding of statistical modeling, hypothesis testing, and data visualization.
    • Experience with big data platforms (e.g., Spark, Hadoop) and cloud environments (AWS, Azure, GCP).
    • Excellent problem-solving skills and ability to communicate complex concepts clearly.

    Preferred Qualifications
    • Experience in risk modeling, financial services, or product analytics.
    • Knowledge of MLOps and deploying models in production.
    • Familiarity with data governance and compliance frameworks.

    Soft Skills
    • Strong analytical thinking and attention to detail.
    • Ability to work independently and in a team environment.
    • Effective communication and stakeholder management skills.

    #LI-CGTS #TS-0455
    $71k-96k yearly est. 5d ago
  • Machine Learning Engineer / Data Scientist / GenAI

    Amtex Systems Inc. (4.0 company rating)

    New York, NY

    NYC, NY / Hybrid | 12+ Months Project

    Leveraging Llama to extract cybersecurity insights from unstructured data in their ticketing system. Must have strong experience with: Llama, Python, Hadoop, MCP, and Machine Learning (ML). They need a strong developer using Llama and Hadoop (this is where the data sits), with experience with MCP. They have various ways to pull the data out of their tickets but want someone who can come in, make recommendations on the best way to do it, and then get it done. They have tight timelines.

    Thanks and Regards!
    Lavkesh Dwivedi
    ************************
    Amtex System Inc.
    28 Liberty Street, 6th Floor | New York, NY 10005
    ************
    ********************
    $78k-104k yearly est. 3d ago
  • Data Scientist Specialist

    Vista Applied Solutions Group Inc. (4.0 company rating)

    McLean, VA

    Job Title: Data Scientist Specialist
    Interview: 45-minute first round via MS Teams (video mandatory); 120-minute second round on-site

    Call notes: Flexibility in hands-on experience; 10 years of experience not required. ML and Gen AI, with 2 years of Gen AI experience and Gen AI development. This role sits between software engineer and data scientist; a data engineer background is also a fit. BS/MS in AI or Data Science preferred; a PhD is not needed.
    $70k-95k yearly est. 3d ago
  • Data Scientist

    Marlabs LLC (4.1 company rating)

    Palo Alto, CA

    Client: AT&T

    Primarily looking at: Databricks, Azure, MongoDB, SQL indexing, and Python OOP for modularizing notebooks into a Python library. Lastly, LLM fine-tuning using LoRA and QLoRA.

    Job Description: Competent Data Scientist who is independent and results-driven, and is capable of taking business requirements and building out the technologies to generate statistically sound analysis and production-grade ML models. DS skills with GenAI and LLM knowledge. Experience building H2O models (XGBoost, logistic regression, neural networks, random forest). Experience with MongoDB and NoSQL datasets. Experience in the Hadoop ecosystem, Databricks, and PySpark. Expertise in Python/Spark and their related libraries and frameworks. Experience in building and training ML pipelines and the effort involved in ML model deployment. Experience in other ML concepts: real-time distributed model inferencing pipelines, Champion/Challenger framework, and A/B testing. Unix/Linux expertise; comfortable with the Linux operating system and shell scripting. Familiar with DS/ML production implementation. Excellent problem-solving skills, with attention to detail and a focus on quality and timely delivery of assigned tasks. Prior knowledge of Azure cloud and Databricks will be a big plus.
    $98k-139k yearly est. 1d ago
  • Data Scientist

    Synechron (4.4 company rating)

    Dallas, TX

    We are

    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our challenge

    We are seeking a talented and analytical Data Scientist to join our team. The ideal candidate will leverage advanced data analysis, statistical modeling, and machine learning techniques to drive insights, optimize loan processes, improve risk assessment, and enhance customer experiences in the mortgage and lending domains.

    Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within Dallas, TX is $110k-$120k/year & benefits (see below).

    The Role

    Responsibilities: Analyze large volumes of loan and mortgage data to identify key trends, patterns, and risk factors. Develop and implement predictive models for credit scoring, risk segmentation, loan default prediction, and fraud detection. Collaborate with product teams, underwriters, and risk managers to understand business requirements and translate them into analytical solutions. Build data pipelines and automate data ingestion, cleaning, and processing workflows related to loan and mortgage portfolios. Conduct feature engineering to improve model accuracy and robustness. Monitor model performance over time and recalibrate models as needed based on changing market conditions. Create dashboards and reports to communicate insights and support decision-making processes. Ensure data quality, integrity, and compliance with regulatory standards. Stay updated on industry trends, emerging techniques, and regulatory changes affecting mortgage and lending projects.

    Requirements: Strong knowledge of mortgage products, the loan lifecycle, credit risk, and underwriting processes. Experience with Kafka, Hadoop, Hive, or other big data tools. Familiarity with containerization (Docker) and orchestration (Kubernetes). Understanding of data security, privacy, and compliance standards. Knowledge of streaming data processing and real-time analytics.

    We offer: A highly competitive compensation and benefits package. A multinational organization with 58 offices in 21 countries and the possibility to work abroad. 10 days of paid annual leave (plus sick leave and national holidays). Maternity & paternity leave plans. A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region). Retirement savings plans. A higher education certification policy. Commuter benefits (varies by region). Extensive training opportunities, focused on skills, substantive knowledge, and personal development. On-demand Udemy for Business for all Synechron employees with free access to more than 5,000 curated courses. Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups. Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms. A flat and approachable organization. A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

    Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $110k-120k yearly 5d ago
  • Senior Data Scientist

    Launch Consulting Group (3.9 company rating)

    Chicago, IL

    Be a part of our success story. At Launch, we offer talented and driven professionals the opportunity to deliver high-impact work in a dynamic, fast-growing environment. We invest in our people through competitive compensation, exceptional benefits, internal growth pathways, and meaningful community involvement. For senior professionals ready to shape the future of AI and Data Science, your next big opportunity starts here.

    The Role: As a Senior Data Scientist, you will lead the design and development of advanced machine learning and AI solutions that solve complex business challenges. You'll apply a deep technical foundation in data science, spanning statistics, modeling, software engineering, and domain expertise, while helping set technical direction, mentor team members, and influence decision-making across the organization. You will work closely with stakeholders and cross-functional teams to define use cases, architect scalable solutions, and drive innovation through GenAI, LLMs, NLP, computer vision, and other cutting-edge techniques.

    Key Responsibilities: Lead advanced Data Science initiatives from ideation to deployment, ensuring alignment with business strategy and measurable outcomes. Develop, implement, and optimize AI/ML models across a range of domains, including predictive analytics, GenAI, LLMs, time series, deep learning, and NLP. Conduct rigorous exploratory data analysis and feature engineering to uncover insights and support robust model development. Architect data preprocessing pipelines that ensure clean, high-quality, and well-structured data for training and evaluation. Apply experimental design best practices (e.g., A/B testing, cross-validation, randomization) to ensure statistical rigor and reduce model bias/overfitting. Own the deployment and monitoring of ML solutions on modern cloud platforms (Databricks, AWS SageMaker, Azure ML, Snowflake, etc.). Serve as a technical advisor and thought leader, guiding internal stakeholders on AI capabilities, data strategy, and industry best practices. Mentor and support the development of junior and mid-level data scientists through code reviews, pair programming, and knowledge sharing. Communicate effectively across technical and non-technical audiences, including executive leadership, to influence and drive adoption of data-driven solutions.

    Required Qualifications: 7+ years of hands-on experience in Data Science, with a strong track record of delivering end-to-end AI/ML solutions in production environments. Deep expertise in Python, including libraries such as scikit-learn, pandas, NumPy, TensorFlow, PyTorch, spaCy, and/or Hugging Face. Strong understanding of statistical modeling, regression, classification, clustering, and deep learning architectures. Demonstrated experience applying GenAI, LLM, NLP, or computer vision techniques to solve real-world business problems. Proven ability to write advanced SQL and work with complex data structures across relational and non-relational databases. Experience with data pipeline orchestration and preprocessing of structured and unstructured data at scale. Hands-on experience with cloud data science platforms such as Databricks, AWS SageMaker, Azure ML, Snowflake Snowpark, or Palantir Foundry. Strong stakeholder management skills and the ability to thrive in a high-paced, constantly evolving environment. Excellent verbal and written communication skills, including the ability to present complex technical concepts clearly and persuasively.

    Preferred Qualifications: Experience with project delivery frameworks and tools such as Azure DevOps, JIRA, or Agile methodologies. Experience working with Electronic Medical Record systems and healthcare data file types (e.g., HL7, FHIR, CCD). Certifications related to Epic, FHIR, or other health IT platforms are a plus. Familiarity with data architecture tools (e.g., Miro, Visio, LucidChart) for technical design documentation. Knowledge of modern cloud-based data solutions such as Azure Synapse, Snowflake, Redshift, or BigQuery.

    Benefits & Compensation: As an employee at Launch, you will grow your skills and experience through a variety of exciting project work (across industries and technologies) with some of the top companies in the world! Our employees receive full benefits: medical, dental, vision, short-term disability, long-term disability, life insurance, and a matched 401k. We also have an uncapped, take-what-you-need PTO policy. The anticipated wage range for this role is $140,000-180,000. Education and experience will be highly considered, and we are happy to discuss your wage expectations in more detail throughout our internal interview process.
    $140k-180k yearly 1d ago
  • Senior Data Scientist

    Matlen Silver (3.7 company rating)

    Miami, FL

    RESPONSIBILITIES: Consider implementation constraints, both business and technical, and define the approach across increasingly complex/large analytics use cases. Autonomously define the master table for increasingly complex engagements with a good understanding of all trade-offs in model building. Contribute to feature validation. Independently deliver one or multiple workstreams over the full duration of a project. Have an excellent understanding of leading-edge methodologies, with emerging spikes in one or more methodologies (e.g., deep learning) or domains, and apply these rigorously to a range of problems. Guide junior colleagues in the application of analytics approaches. Create a series of outputs/plots that build upon each other to guide problem solving with business teams; suggest outputs across all workstreams. Write production code optimized for efficiency and memory. Conduct code reviews with junior colleagues. Lead the overall model validation/QA approach across the engagement, with an excellent understanding of risks and how to mitigate them. Raise potential ethical issues related to analytics to leadership.

    EXPERIENCE: Strong understanding of agile methodologies and experience as a Sr. Data Scientist on a cross-functional agile team preferred. Proficiency applying data science techniques to large-scale data sets, e.g., machine learning to solve clustering, classification, regression, and anomaly detection problem statements, and optimization or simulation techniques to solve prescriptive problems. Proven ability to derive new insights by merging and transforming internal and external data sets in business contexts. Experience supporting the deployment, monitoring, maintenance, and enhancement of models preferred. Experience with cloud database technologies (e.g., AWS) and developing solutions on cloud computing services and infrastructure in the data and analytics space.

    REQUIRED QUALIFICATIONS: 4+ years of directly related experience in a data science environment. Bachelor's degree required; MS or PhD preferred in data science, computer science, engineering, mathematics, statistics, or related fields. Python or R required; SQL preferred.

    PREFERRED QUALIFICATIONS: Preference will be given to candidates who have the following: Demonstrated ability to efficiently learn and solve new business domains and problems. Nice-to-have experience with popular open-source or commercial optimization libraries (e.g., Gurobi, CPLEX). Knowledge of dashboarding tools (e.g., Dash, Shiny, Tableau). Nice-to-have knowledge of MLOps infrastructure (e.g., Databricks, MLflow) and of containerization and managing production pipelines and microservices (e.g., Docker, Kubernetes).
    $65k-90k yearly est. 4d ago
  • Senior Data Scientist

    Unisys 4.6company rating

    McLean, VA jobs

    We are seeking a highly experienced **Principal Gen AI Scientist** with a strong focus on **Generative AI (GenAI)** to lead the design and development of cutting-edge AI Agents, Agentic Workflows and Gen AI Applications that solve complex business problems. This role requires advanced proficiency in Prompt Engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, Gen AI Patterns, Evaluation Frameworks, Guardrails, data curation, and AWS cloud deployments. You will serve as a hands-on Gen AI (data) scientist and critical thought leader, working alongside full stack developers, UX designers, product managers and data engineers to shape and implement enterprise-grade Gen AI solutions. Key Responsibilities: * Architect and implement scalable AI Agents, Agentic Workflows and GenAI applications to address diverse and complex business use cases. * Develop, fine-tune, and optimize lightweight LLMs; lead the evaluation and adaptation of models such as Claude (Anthropic), Azure OpenAI, and open-source alternatives. * Design and deploy Retrieval-Augmented Generation (RAG) and Graph RAG systems using vector databases and knowledge bases. * Curate enterprise data using connectors integrated with AWS Bedrock's Knowledge Base/Elastic * Implement solutions leveraging MCP (Model Context Protocol) and A2A (Agent-to-Agent) communication. * Build and maintain Jupyter-based notebooks using platforms like SageMaker and MLFlow/Kubeflow on Kubernetes (EKS). * Collaborate with cross-functional teams of UI and microservice engineers, designers, and data engineers to build full-stack Gen AI experiences. * Integrate GenAI solutions with enterprise platforms via API-based methods and GenAI standardized patterns. * Establish and enforce validation procedures with Evaluation Frameworks, bias mitigation, safety protocols, and guardrails for production-ready deployment. 
* Design & build robust ingestion pipelines that extract, chunk, enrich, and anonymize data from PDFs, video, and audio sources for use in LLM-powered workflows, leveraging best practices like semantic chunking and privacy controls * Orchestrate multimodal pipelines using scalable frameworks (e.g., Apache Spark, PySpark) for automated ETL/ELT workflows appropriate for unstructured media * Implement embeddings: map media content to vector representations using embedding models, and integrate with vector stores (AWS Knowledge Base/Elastic/Mongo Atlas) to support RAG architectures **Required Qualifications:** * 10+ years of experience in AI/ML, with 3+ years in applied GenAI or LLM-based solutions. * Deep expertise in prompt engineering, fine-tuning, RAG, Graph RAG, vector databases (e.g., AWS Knowledge Base/Elastic), and multi-modal models. * Proven experience with cloud-native AI development (AWS SageMaker, Bedrock, MLFlow on EKS). * Strong programming skills in Python and ML libraries (Transformers, LangChain, etc.). * Deep understanding of Gen AI system patterns and architectural best practices, Evaluation Frameworks. * Demonstrated ability to work in cross-functional agile teams. * Need GitHub code repository link for each candidate. Please thoroughly vet the candidates. **Preferred Qualifications:** * Published contributions or patents in AI/ML/LLM domains. * Hands-on experience with enterprise AI governance and ethical deployment frameworks. * Familiarity with CI/CD practices for MLOps and scalable inference APIs. #LI-CGTS #TS-2942
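The chunk-embed-store-retrieve workflow this posting describes can be sketched in plain Python. This is an illustrative stand-in only: the fixed-size overlapping chunker and the toy bag-of-words "embedding" below are assumptions for demonstration, where a production system would use semantic chunking and a real embedding model behind a managed vector store such as AWS Bedrock Knowledge Bases.

```python
import math
from collections import Counter

def chunk_text(text, chunk_size=40, overlap=10):
    """Split text into overlapping word windows (stand-in for semantic chunking)."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + chunk_size]
        if window:
            chunks.append(" ".join(window))
        if start + chunk_size >= len(words):
            break
    return chunks

def embed(text):
    """Toy bag-of-words vector; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, store, k=2):
    """Return the k stored chunks most similar to the query (the 'R' in RAG)."""
    q = embed(query)
    return sorted(store, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

The retrieved chunks would then be injected into the LLM prompt as context; swapping `embed` for a model-backed embedding and `store` for a vector database preserves the same shape.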
    $77k-104k yearly est. 1d ago
  • Senior Data Scientist

    Infocepts 3.7company rating

    McLean, VA jobs

    Purpose: As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution. Key Result Areas and Activities: Design, build, and deploy AI/ML solutions using modern cloud and data platforms. Lead data science projects across industries, ensuring alignment with business goals. Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases. Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark. Communicate findings effectively through data visualizations and stakeholder presentations. Drive adoption of MLOps practices for scalable and reliable model deployment. Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing. Roles & Responsibilities Essential Skills 5+ years of experience in applied AI and machine/deep learning. Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib. Proficiency in Python and SQL for model development and data manipulation. Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI. Familiarity with cloud platforms: AWS, Azure, and GCP. Solid foundation in advanced statistical methods and probabilistic analysis. Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management. Experience with visualization tools like Tableau, Power BI, or similar. Familiarity with ML workflow orchestration and MLOps practices. Desirable Skills Experience with LLMs (Large Language Models) and agentic AI workflows. 
Familiarity with modern data platforms like Snowflake. Exposure to real-time data processing in cloud-native environments. Contributions to open-source AI projects or publications in data science communities. Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field. Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus. Qualities: Able to consult, write, and present persuasively Able to work in a self-organized and cross-functional team Able to iterate based on new information, peer reviews, and feedback Able to work seamlessly with clients across multiple geographies Research focused mindset Excellent analytical, presentation, reporting, documentation and interactive skills "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
    $78k-108k yearly est. 5d ago
  • Senior Data Scientist

    Zillion Technologies, Inc. 3.9company rating

    McLean, VA jobs

    Locals Only # In-Person Interview Job Title: Data Scientist Specialist We are seeking a highly experienced Principal Gen AI Scientist with a strong focus on Generative AI (GenAI) to lead the design and development of cutting-edge AI Agents, Agentic Workflows and Gen AI Applications that solve complex business problems. This role requires advanced proficiency in Prompt Engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, Gen AI Patterns, Evaluation Frameworks, Guardrails, data curation, and AWS cloud deployments. You will serve as a hands-on Gen AI (data) scientist and critical thought leader, working alongside full stack developers, UX designers, product managers and data engineers to shape and implement enterprise-grade Gen AI solutions. Responsibilities: Architect and implement scalable AI Agents, Agentic Workflows and GenAI applications to address diverse and complex business use cases. Develop, fine-tune, and optimize lightweight LLMs; lead the evaluation and adaptation of models such as Claude (Anthropic), Azure OpenAI, and open-source alternatives. Design and deploy Retrieval-Augmented Generation (RAG) and Graph RAG systems using vector databases and knowledge bases. Curate enterprise data using connectors integrated with AWS Bedrock's Knowledge Base/Elastic. Implement solutions leveraging MCP (Model Context Protocol) and A2A (Agent-to-Agent) communication. Build and maintain Jupyter-based notebooks using platforms like AWS SageMaker and MLFlow/Kubeflow on Kubernetes (EKS). Collaborate with cross-functional teams of UI and microservice engineers, designers, and data engineers to build full-stack Gen AI experiences. Integrate GenAI solutions with enterprise platforms via API-based methods and GenAI standardized patterns. Establish and enforce validation procedures with Evaluation Frameworks, bias mitigation, safety protocols, and guardrails for production-ready deployment. 
Design & build robust ingestion pipelines that extract, chunk, enrich, and anonymize data from PDFs, video, and audio sources for use in LLM-powered workflows, leveraging best practices like semantic chunking and privacy controls. Orchestrate multimodal pipelines using scalable frameworks (e.g., Apache Spark, PySpark) for automated ETL/ELT workflows appropriate for unstructured media. Implement embeddings: map media content to vector representations using embedding models, and integrate with vector stores (AWS Knowledge Base/Elastic/Mongo Atlas) to support RAG architectures. Qualifications: Experience in AI/ML, with applied GenAI or LLM-based solutions. Deep expertise in prompt engineering, fine-tuning, RAG, Graph RAG, vector databases (e.g., AWS Knowledge Base/Elastic), and multi-modal models. Proven experience with cloud-native AI development (AWS SageMaker, Amazon Bedrock, MLFlow on EKS). Strong programming skills in Python and ML libraries (Transformers, LangChain, etc.). Deep understanding of Gen AI system patterns and architectural best practices, Evaluation Frameworks. Demonstrated ability to work in cross-functional agile teams.
    $77k-110k yearly est. 1d ago
  • Senior Data Governance Consultant (Informatica)

    Paradigm Technology 4.2company rating

    Plano, TX jobs

    Senior Data Governance Consultant (Informatica) About Paradigm - Intelligence Amplified Paradigm is a strategic consulting firm that turns vision into tangible results. For over 30 years, we've helped Fortune 500 and high-growth organizations accelerate business outcomes across data, cloud, and AI. From strategy through execution, we empower clients to make smarter decisions, move faster, and maximize return on their technology investments. What sets us apart isn't just what we do, it's how we do it. Driven by a clear mission and values rooted in integrity, excellence, and collaboration, we deliver work that creates lasting impact. At Paradigm, your ideas are heard, your growth is prioritized, and your contributions make a difference. Summary: We are seeking a Senior Data Governance Consultant to lead and enhance data governance capabilities across a financial services organization. The Senior Data Governance Consultant will collaborate closely with business, risk, compliance, technology, and data management teams to define data standards, strengthen data controls, and drive a culture of data accountability and stewardship. The ideal candidate will have deep experience in developing and implementing data governance frameworks, data policies, and control mechanisms that ensure compliance, consistency, and trust in enterprise data assets. Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred. This position is remote, with occasional travel to Plano, TX. Responsibilities: Data Governance Frameworks: Design, implement, and enhance data governance frameworks aligned with regulatory expectations (e.g., BCBS 239, GDPR, CCPA, DORA) and internal control standards Policy & Standards Development: Develop, maintain, and operationalize data policies, standards, and procedures that govern data quality, metadata management, data lineage, and data ownership Control Design & Implementation: Define and embed 
data control frameworks across data lifecycle processes to ensure data integrity, accuracy, completeness, and timeliness Risk & Compliance Alignment: Work with risk and compliance teams to identify data-related risks and ensure appropriate mitigation and monitoring controls are in place Stakeholder Engagement: Partner with data owners, stewards, and business leaders to promote governance practices and drive adoption of governance tools and processes Data Quality Management: Define and monitor data quality metrics and KPIs, establishing escalation and remediation procedures for data quality issues Metadata & Lineage: Support metadata and data lineage initiatives to increase transparency and enable traceability across systems and processes Reporting & Governance Committees: Prepare materials and reporting for data governance forums, risk committees, and senior management updates Change Management & Training: Develop communication and training materials to embed governance culture and ensure consistent understanding across the organization Required Qualifications: 7+ years of experience in data governance, data management, or data risk roles within financial services (banking, insurance, or asset management preferred) Strong knowledge of data policy development, data standards, and control frameworks Proven experience aligning data governance initiatives with regulatory and compliance requirements Familiarity with Informatica data governance and metadata tools Excellent communication skills with the ability to influence senior stakeholders and translate technical concepts into business language Deep understanding of data management principles (DAMA-DMBOK, DCAM, or equivalent frameworks) Bachelor's or Master's Degree in Information Management, Data Science, Computer Science, Business, or related field Preferred Qualifications: Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred 
Experience with data risk management or data control testing Knowledge of financial regulatory frameworks (e.g., Basel, MiFID II, Solvency II, BCBS 239) Certifications such as Informatica, CDMP, or DCAM Background in consulting or large-scale data transformation programs Key Competencies: Strategic and analytical thinking Strong governance and control mindset Excellent stakeholder and relationship management Ability to drive organizational change and embed governance culture Attention to detail with a pragmatic approach Why Join Paradigm? At Paradigm, integrity drives innovation. You'll collaborate with curious, dedicated teammates, solving complex problems and unlocking immense data value for leading organizations. If you seek a place where your voice is heard, growth is supported, and your work creates lasting business value, you belong at Paradigm. Learn more at ******************** Policy Disclosure: Paradigm maintains a strict drug-free workplace policy. All offers of employment are contingent upon successfully passing a standard 5-panel drug screen. Please note that a positive test result for any prohibited substance, including marijuana, will result in disqualification from employment, regardless of state laws permitting its use. This policy applies consistently across all positions and locations.
    $76k-107k yearly est. 2d ago
  • Sr Data Platform Engineer

    The Judge Group 4.7company rating

    Elk Grove, CA jobs

    Hybrid role, 3x a week in office in Elk Grove, CA; no remote option. This is a direct hire opportunity. We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing. Responsibilities: Build and maintain robust data pipelines (structured, semi‑structured, unstructured). Implement ETL workflows with Spark, Delta Lake, and cloud‑native tools. Support big data platforms (Databricks, Snowflake, GCP) in production. Troubleshoot and optimize SQL queries, Spark jobs, and workloads. Ensure governance, security, and compliance across data systems. Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform. Collaborate cross‑functionally to translate business needs into technical solutions. Qualifications: 7+ years in data engineering with production pipeline experience. Expertise in the Spark ecosystem, Databricks, Snowflake, GCP. Strong skills in PySpark, Python, SQL. Experience with RAG systems, semantic search, and LLM integration. Familiarity with Kafka, Pub/Sub, vector databases. Proven ability to optimize ETL jobs and troubleshoot production issues. Agile team experience and excellent communication skills. Certifications in Databricks, Snowflake, GCP, or Azure. Exposure to Airflow, BI tools (Power BI, Looker Studio).
    $108k-153k yearly est. 3d ago
  • Sr. Cloud Data Engineer

    Charter Global 4.0company rating

    Malvern, PA jobs

    Job Title: Sr. Cloud Data Engineer Duration: 12+ month contract Description: Responsibilities: Maintain and optimize AWS-based data pipelines to ensure timely and reliable data delivery. Develop and troubleshoot workflows using AWS Glue, PySpark, Step Functions, and DynamoDB. Collaborate on code management and CI/CD processes using Bitbucket, GitHub, and Bamboo. Participate in code reviews and repository management to uphold coding standards. Provide technical guidance and mentorship to junior engineers and assist in team coordination. Qualifications: 9-10 years of experience in data engineering with strong hands-on AWS expertise. Proficient in AWS Glue, PySpark, Step Functions, and DynamoDB. Skilled in managing code repositories and CI/CD pipelines (Bitbucket, GitHub, Bamboo). Experience in team coordination or mentoring roles. Familiarity with Wealth Asset Management, especially personal portfolio performance, is a plus.
    $76k-105k yearly est. 4d ago
  • Senior Snowflake Data Engineer

    Zensar Technologies 4.3company rating

    Santa Clara, CA jobs

    About the job Why Zensar? We're a bunch of hardworking, fun-loving, people-oriented technology enthusiasts. We love what we do, and we're passionate about helping our clients thrive in an increasingly complex digital world. Zensar is an organization focused on building relationships, with our clients and with each other-and happiness is at the core of everything we do. In fact, we're so into happiness that we've created a Global Happiness Council, and we send out a Happiness Survey to our employees each year. We've learned that employee happiness requires more than a competitive paycheck, and our employee value proposition-grow, own, achieve, learn (GOAL)-lays out the core opportunities we seek to foster for every employee. Teamwork and collaboration are critical to Zensar's mission and success, and our teams work on a diverse and challenging mix of technologies across a broad industry spectrum. These industries include banking and financial services, high-tech and manufacturing, healthcare, insurance, retail, and consumer services. Our employees enjoy flexible work arrangements and a competitive benefits package, including medical, dental, vision, 401(k), among other benefits. If you are looking for a place to have an immediate impact, to grow and contribute, where we work hard, play hard, and support each other, consider joining team Zensar! Zensar is seeking a Senior Snowflake Data Engineer in Santa Clara, CA (work from office all 5 days). This position is open for full time with excellent benefits and growth opportunities, as well as a contract role. Job Description: Key Requirements: Strong hands-on experience in data engineering using Snowflake and Databricks, with proven ability to build and optimize large-scale data pipelines. Deep understanding of data architecture principles, including ingestion, transformation, storage, and access control. Solid experience in system design and solution architecture, focusing on scalability, reliability, and maintainability. 
Expertise in ETL/ELT pipeline design, including data extraction, transformation, validation, and load processes. In-depth knowledge of data modeling techniques (dimensional modeling, star, and snowflake schemas). Skilled in optimizing compute and storage costs across Snowflake and Databricks environments. Strong proficiency in administration, including database design, schema management, user roles, permissions, and access control policies. Hands-on experience implementing data lineage, quality, and monitoring frameworks. Advanced proficiency in SQL for data processing, transformation, and automation. Experience with reporting and visualization tools such as Power BI and Sigma Computing. Excellent communication and collaboration skills, with the ability to work independently and drive technical initiatives. Zensar believes that diversity of backgrounds, thought, experience, and expertise fosters the robust exchange of ideas that enables the highest quality collaboration and work product. Zensar is an equal opportunity employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Zensar is committed to providing veteran employment opportunities to our service men and women. Zensar is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired. Zensar does not facilitate/sponsor any work authorization for this position. Candidates who are currently employed by a client or vendor of Zensar may be ineligible for consideration. Zensar values your privacy. 
We'll use your data in accordance with our privacy statement located at: *********************************
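The dimensional modeling techniques this posting asks for (star and snowflake schemas) come down to one fact table of measures keyed to dimension tables of attributes. Below is a minimal, hypothetical star schema sketched with Python's built-in `sqlite3` module; the table and column names are illustrative, not from the posting.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables hold descriptive attributes.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)")
cur.execute("CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER)")

# The fact table holds foreign keys into the dimensions plus additive measures.
cur.execute("""CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id INTEGER REFERENCES dim_date(date_id),
    amount REAL)""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)", [(1, 2024, 1), (2, 2024, 2)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 100.0), (2, 1, 50.0), (1, 2, 75.0)])

# The typical star-schema query: join the fact to its dimensions and aggregate.
cur.execute("""
    SELECT d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d ON f.date_id = d.date_id
    GROUP BY d.month ORDER BY d.month""")
monthly = cur.fetchall()  # [(1, 150.0), (2, 75.0)]
```

A snowflake schema would further normalize the dimensions (e.g., splitting `category` into its own table); the join-and-aggregate query shape stays the same.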
    $109k-150k yearly est. 2d ago
  • Senior Data Engineer

    Zillion Technologies, Inc. 3.9company rating

    McLean, VA jobs

    The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing. Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role. Experience in Angular and DevOps is a nice to have for this role. Must Have Qualifications: PySpark/Python based microservices, AWS EKS, Postgres SQL Database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube and Fortify. Responsibilities: Development of microservices based on Python, PySpark, AWS EKS, AWS Postgres for a data-oriented modernization project. New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Current System: Informatica, SAS, AutoSys, DB2 Write automated tests using Behave/Cucumber, based on the new microservices-based architecture Promote top code quality and solve issues related to performance tuning and scalability. Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, and experience with SonarQube and Fortify. Able to communicate and engage with business teams, analyze the current business requirements (BRS documents), and create necessary data mappings. Strong skills and experience in reporting application development and data analysis preferred Knowledge of Agile methodologies and technical documentation.
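The automated-testing pattern this role describes (Pytest unit tests over data transformations) usually isolates pure transformation logic so it can be asserted on small in-memory inputs. The `normalize_records` function below is a hypothetical example of such logic; in this stack the same pattern would apply to the PySpark transformations being tested.

```python
def normalize_records(records):
    """Hypothetical transformation: trim and title-case names, drop rows missing an id."""
    return [
        {"id": r["id"], "name": r["name"].strip().title()}
        for r in records
        if r.get("id") is not None
    ]

# Pytest-style unit tests: plain functions with bare asserts,
# discovered and run automatically by the `pytest` command.
def test_normalizes_names():
    out = normalize_records([{"id": 1, "name": "  ada lovelace "}])
    assert out == [{"id": 1, "name": "Ada Lovelace"}]

def test_drops_rows_without_id():
    out = normalize_records([{"id": None, "name": "x"}, {"id": 2, "name": "y"}])
    assert [r["id"] for r in out] == [2]
```

Behave/Cucumber tests layer on top of the same functions, phrasing each scenario in Given/When/Then steps that call into this logic.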
    $77k-109k yearly est. 4d ago
