Data Scientist
Senior data scientist job in Garner, NC
Accentuate Staffing is working with a client that is hiring an experienced Data Scientist to work on predictive analytics and join their data and AI team. This role combines advanced machine learning research with strategic business analytics to create scalable predictive solutions that drive efficiency and smarter decision-making across the enterprise.
The ideal candidate will bring a blend of technical expertise in machine learning, cloud platforms, and data engineering, alongside strong business acumen. You'll build and refine predictive models that help the company forecast sales, understand demand patterns, and make smarter operational decisions - from production planning to staffing and supply chain management.
Responsibilities:
Design and implement advanced predictive and machine learning models to support sales forecasting, demand planning, and strategic decision-making.
Build and maintain scalable ETL/ELT data pipelines that integrate structured and unstructured data from multiple business sources.
Experiment with AI techniques such as NLP, computer vision, and generative models to explore innovative applications across the organization.
Partner with business and IT teams to define analytics requirements, operationalize models, and integrate outputs into dashboards and reporting platforms.
Develop and manage self-service analytics dashboards using Power BI, SAP Analytics Cloud, or similar tools to deliver actionable insights.
Ensure data integrity, quality, and governance across predictive systems.
Qualifications:
Degree in Data Science, Computer Science, Statistics, or a related field.
Experience in predictive analytics, data science, or AI engineering within a business setting.
Proficiency in Python, R, SQL, and experience with cloud-based ML platforms such as Azure ML, AWS, or GCP.
Hands-on experience with data pipeline technologies (Azure Data Factory, Spark, Hadoop) and business intelligence tools (Power BI, Tableau, or SAP Analytics Cloud).
Strong understanding of machine learning model lifecycle management, from design through deployment and monitoring.
Exceptional communication and stakeholder engagement skills, with the ability to translate technical work into business value.
Data Engineer
Senior data scientist job in Raleigh, NC
Mercor is hiring a Data Engineer on behalf of a leading AI lab. In this role, you'll **design resilient ETL/ELT pipelines and data contracts** to ensure datasets are analytics- and ML-ready. You'll validate, enrich, and serve data with strong schema and versioning discipline, building the backbone that powers AI research and production systems. This position is ideal for candidates who love working with data pipelines, distributed processing, and ensuring data quality at scale.
* * *

### **You're a great fit if you:**

- Have a background in **computer science, data engineering, or information systems**.
- Are proficient in **Python, pandas, and SQL**.
- Have hands-on experience with **databases** like PostgreSQL or SQLite.
- Understand distributed data processing with **Spark or DuckDB**.
- Are experienced in orchestrating workflows with **Airflow** or similar tools.
- Work comfortably with common formats like **JSON, CSV, and Parquet**.
- Care about **schema design, data contracts, and version control** with Git.
- Are passionate about building pipelines that enable **reliable analytics and ML workflows**.

* * *

### **Primary Goal of This Role**

To design, validate, and maintain scalable ETL/ELT pipelines and data contracts that produce clean, reliable, and reproducible datasets for analytics and machine learning systems. (A toy sketch of one such pipeline step appears after this listing.)

* * *

### **What You'll Do**

- Build and maintain **ETL/ELT pipelines** with a focus on scalability and resilience.
- Validate and enrich datasets to ensure they're **analytics- and ML-ready**.
- Manage **schemas, versioning, and data contracts** to maintain consistency.
- Work with **PostgreSQL/SQLite, Spark/DuckDB, and Airflow** to manage workflows.
- Optimize pipelines for performance and reliability using **Python and pandas**.
- Collaborate with researchers and engineers to ensure data pipelines align with product and research needs.

* * *

### **Why This Role Is Exciting**

- You'll create the **data backbone** that powers cutting-edge AI research and applications.
- You'll work with modern **data infrastructure and orchestration tools**.
- You'll ensure **reproducibility and reliability** in high-stakes data workflows.
- You'll operate at the **intersection of data engineering, AI, and scalable systems**.

* * *

### **Pay & Work Structure**

- You'll be classified as an hourly contractor to Mercor.
- Paid weekly via Stripe Connect, based on hours logged.
- Part-time (20-30 hrs/week) with flexible hours; work from anywhere, on your schedule.
- Weekly Bonus of **$500-$1000 USD** per 5 tasks.
- Remote and flexible working style.
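As a rough illustration of the schema-and-contract discipline described above, here is a minimal Python/pandas sketch of a validated pipeline step; the column names, dtypes, and file path are hypothetical, and a real pipeline would add enrichment, versioning, and orchestration (e.g., Airflow).

```python
# Minimal sketch (illustrative only): enforce a simple "data contract" --
# required columns and dtypes -- before writing an analytics-/ML-ready Parquet file.
# All names here are hypothetical; to_parquet requires pyarrow or fastparquet.
import pandas as pd

CONTRACT = {"event_id": "int64", "user_id": "int64", "amount": "float64"}

def validate_and_write(df: pd.DataFrame, out_path: str) -> None:
    missing = (set(CONTRACT) | {"ts"}) - set(df.columns)
    if missing:
        raise ValueError(f"Contract violation, missing columns: {missing}")
    df = df.astype(CONTRACT)             # coerce dtypes named in the contract
    df["ts"] = pd.to_datetime(df["ts"])  # normalize the timestamp column
    df.to_parquet(out_path, index=False)

if __name__ == "__main__":
    raw = pd.DataFrame({
        "event_id": [1, 2],
        "user_id": [10, 11],
        "amount": ["3.5", "7.0"],         # arrives as strings; the contract coerces it
        "ts": ["2024-01-01", "2024-01-02"],
    })
    validate_and_write(raw, "events.parquet")
```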
Senior Data Scientist, Statistical Genetics
Senior data scientist job in Raleigh, NC
A pioneering life sciences research organization is looking for a Data Scientist or Senior Data Scientist specializing in statistical genetics and computational biology. The mission centers on leveraging advanced computational approaches and in vivo model systems to deepen insights into the biological and genetic mechanisms controlling human aging. The work enables innovative interventions that support longer, healthier lives. The environment offers a strong emphasis on fundamental discovery science, integration of academic and industry partnerships, and a dynamic therapeutic R&D pipeline.
Key Responsibilities:
Apply and advance statistical and computational methods for interpreting biobank-scale and large cohort datasets, encompassing high-dimensional phenotypes and longitudinal health data.
Design and lead investigations to understand the genetic architecture of age-related diseases and trait trajectories across populations.
Merge multiple biological datasets (e.g., clinical, genetic, multi-omics) to build new hypotheses for novel treatments targeting aging-associated conditions.
Develop reusable software tools and workflows streamlining analysis and data processing in diverse cohort research projects.
Communicate results and collaborate closely with interdisciplinary teams spanning multiple scientific domains, both internally and with external research partners.
Requirements
Ph.D. or equivalent advanced training in genetics, statistical genetics, computational biology, bioinformatics, or a related discipline.
Demonstrated expertise in creating and deploying computational or statistical methodologies for large, complex biological/phenotypic datasets.
Proficiency with modern genetic analysis tools, including GWAS, burden tests, fine-mapping, LD score regression, QTL mapping (eQTL/pQTL), colocalization, polygenic risk scoring, and Mendelian randomization, as well as specific techniques for studies involving diverse ancestry groups (a toy polygenic-risk-score sketch follows the requirements list).
Hands-on experience analyzing large-scale clinical and molecular data, such as genomics, imaging, multi-omics, and longitudinal datasets.
Knowledge of or experience with substantial human cohort research (e.g., UK Biobank, FinnGen, All of Us, or similar repositories).
Advanced coding skills in Python and/or R, with a portfolio of developed tools, libraries, or pipelines accessible to other scientists.
Excellent communication and teamwork abilities, with a strong history of collaborating across varied scientific specialties.
Onsite work required a minimum of 4 days per week.
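Purely as an illustration of one of the techniques named in the requirements above, here is a toy additive polygenic risk score in pandas. The variant IDs, effect sizes, and dosages are fabricated, and real PRS pipelines add quality control, LD clumping or pruning, and ancestry-aware adjustment.

```python
# Toy sketch (illustrative only): the simplest additive polygenic risk score --
# a weighted sum of per-variant allele dosages (0/1/2) by GWAS effect sizes.
import pandas as pd

# Hypothetical GWAS summary statistics: one effect size (beta) per variant.
betas = pd.Series({"rs1": 0.12, "rs2": -0.05, "rs3": 0.30})

# Hypothetical genotype dosages: rows are individuals, columns are variants.
dosages = pd.DataFrame(
    {"rs1": [0, 1, 2], "rs2": [2, 1, 0], "rs3": [1, 1, 2]},
    index=["sample_A", "sample_B", "sample_C"],
)

prs = dosages.mul(betas, axis=1).sum(axis=1)  # one score per individual
print(prs)
```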
"}}],"is Mobile":false,"iframe":"true","job Type":"Full time","apply Name":"Apply Now","zsoid":"688258619","FontFamily":"Verdana, Geneva, sans\-serif","job OtherDetails":[{"field Label":"Industry","uitype":2,"value":"Science & Technology"},{"field Label":"City","uitype":1,"value":"Raleigh"},{"field Label":"State\/Province","uitype":1,"value":"North Carolina"},{"field Label":"Zip\/Postal Code","uitype":1,"value":"27709"}],"header Name":"Senior Data Scientist, Statistical Genetics","widget Id":"**********00072311","is JobBoard":"false","user Id":"**********00248003","attach Arr":[],"custom Template":"3","is CandidateLoginEnabled":false,"job Id":"**********15404062","FontSize":"12","google IndexUrl":"https:\/\/callieregroup.zohorecruit.com\/recruit\/ViewJob.na?digest=u6RCQRHkFYC6J5ac54igRZ41KjdHuFUrHtGNB3loNr8\-&embedsource=Google","location":"Raleigh","embedsource":"CareerSite","indeed CallBackUrl":"https:\/\/recruit.zoho.com\/recruit\/JBApplyAuth.do","logo Id":"1gvk5a9fa9ad9912143c8885985d92ce2ae5f"}
Senior Data Scientist
Senior data scientist job in Raleigh, NC
We are seeking a highly skilled and experienced Senior Data Scientist with deep expertise in machine learning model development and production deployment, particularly in predictive and prescriptive analytics. The ideal candidate will have a strong technical background in Python, PySpark, TensorFlow/PyTorch, advanced ML algorithms (XGBoost, LightGBM, structural time series), and experience building scalable AI/ML architectures using platforms like Databricks and cloud services. This role also requires strong collaboration and mentoring capabilities to translate ambiguous business challenges into end-to-end data science solutions, work effectively with cross-functional stakeholders, and guide junior team members through Agile development cycles.
This position is 4 days in office, 1 day remote per week, based at our corporate headquarters in Raleigh, North Carolina (North Hills)
Key Responsibilities
Model Development & Technical Expertise
Collaborate with cross-functional teams (IT Product, Merchandising, Supply Chain, and Finance) to design and deploy machine learning solutions for demand forecasting, pricing, labor optimization, and supply chain efficiency.
Hands-on expertise in training models such as structural time series models and boosting algorithms (e.g., XGBoost, LightGBM) on gigabyte- to petabyte-scale data using PySpark or similar frameworks (a small illustrative sketch follows this list).
Hands-on expertise and knowledge of MLOps best practices and ML engineering to design and build robust batch and inference ML architectures.
Establish best practices for model development, validation, and monitoring to ensure accuracy, scalability, and impact.
Develop and tune machine learning models using Python, PySpark, TensorFlow, and PyTorch.
Create robust, scalable AI/ML frameworks and architectures.
Familiarity with GenAI and agentic solutions, Retrieval-Augmented Generation (RAG), and model distillation is a plus.
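As a hypothetical sketch of the boosting work listed above (not the company's actual pipeline), the snippet below fits an XGBoost regressor to synthetic demand-like features; at the data scale mentioned, training and scoring would typically be distributed via PySpark or a similar framework.

```python
# Illustrative only: gradient-boosted regression on synthetic demand-forecasting
# features. Feature meanings and the target are fabricated for the example.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.integers(0, 7, n),     # day of week
    rng.integers(0, 52, n),    # week of year
    rng.normal(100, 15, n),    # trailing 4-week average demand
])
y = 0.8 * X[:, 2] + 5 * (X[:, 0] >= 5) + rng.normal(0, 5, n)  # synthetic demand

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```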
Collaboration & Communication
Promote data-driven culture through analytical storytelling and stakeholder engagement.
Work closely with stakeholders to understand business challenges, translate ambiguous business requirements into data science solutions, and own the end-to-end solutioning.
Collaborate with cross-functional teams to ensure successful integration of models into business processes.
Mentor and provide guidance to junior team members.
Monitoring & Visualization
Rapidly prototype and test hypotheses to validate model approaches.
Build automated workflows for model monitoring and performance evaluation.
Create dashboards using tools like Databricks and Palantir to visualize key model metrics such as model drift and SHAP values (a minimal drift-check sketch follows this list).
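One common way to quantify the model drift such dashboards track is the population stability index (PSI); the sketch below is illustrative only, with synthetic score distributions and the usual rule-of-thumb threshold.

```python
# Illustrative only: PSI compares how a feature's or score's distribution has
# shifted between a baseline (e.g., training) sample and a production sample.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # scores at training time
current = rng.normal(0.3, 1.1, 10_000)    # production scores (shifted)
print(f"PSI = {psi(baseline, current):.3f}")  # rule of thumb: > 0.2 warrants investigation
```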
Agile Execution
Operate effectively in Agile environments, contributing to iterative development cycles and sprint planning.
Adapt quickly to changing priorities while maintaining high standards of quality and innovation.
Education & Experience:
5+ years of experience in Data Science, Machine Learning, AI/GenAI.
Master's degree in Computer Science, Statistics, Operations Research, or a similar field.
Preferred:
Experience in Retail Industry
AI/ML certifications in Vertex AI/Databricks/AWS.
1+ years of experience as team lead.
California Residents click below for Privacy Notice:
***************************************************
Senior Data Scientist
Senior data scientist job in Raleigh, NC
Labcorp is hiring a Senior Data Scientist. This person will produce innovative solutions driven by exploratory data analysis from complex and high-dimensional datasets. Apply knowledge of statistics, data modeling, data science, artificial intelligence, software engineering / architecture to recognize patterns, identify opportunities and make valuable discoveries. Use a flexible, analytical approach to design, develop, deploy, and evaluate predictive models. Generate and test hypotheses.
Produce innovative solutions driven by exploratory data analysis from complex and high-dimensional datasets, with an expanded focus on Generative AI applications. Apply knowledge of statistics, data modeling, data sciences, and artificial intelligence to recognize patterns, identify opportunities, and make valuable discoveries. Use a flexible, analytical approach to design, develop, and evaluate both traditional predictive models and generative AI systems. Generate and test hypotheses across various modeling paradigms.
Applicants who live within 35 miles of either the Burlington, NC or Durham, NC location will follow a hybrid schedule. This schedule includes a minimum of three in office days per week at an assigned location, either Burlington or Durham, supporting both collaboration and flexibility.
RESPONSIBILITIES
Interpret data and present insights through rich and intuitive visualizations that tell compelling stories.
Develop novel ways of integrating, mining, and visualizing diverse, high-dimensional, and poorly curated data sets.
Explore and implement generative AI techniques where appropriate to enhance data analysis capabilities and create new solutions.
Develop and deliver presentations to communicate technical ideas and analytical findings to non-technical partners and senior leadership.
Build underlying software infrastructure to better manage, integrate, and mine data, incorporating both traditional and generative AI approaches.
Work closely with engineering teams and participate in the full development cycle from product inception and research to production deployment.
Write production quality code while implementing both established methods and innovative AI solutions.
REQUIREMENTS
Knowledge, Skills & Abilities:
Experience in artificial intelligence and statistical learning.
Experience with statistical methodologies and machine learning techniques such as: neural networks, graphical models, ensemble methods and natural language processing.
Experience with multiple deep learning techniques such as CNN, LSTM, RNN, etc., in addition to standard machine learning approaches such as those found in scikit-learn.
Mastery of evaluation techniques for supervised and unsupervised methods. Knows how to evaluate the quality of data and determine gaps in data or assumptions.
Proficiency with Python. Can develop meaningful Python code using object-oriented and functional programming. Writes tests for code. Can debug errors quickly.
Strong data visualization skills.
Familiarity with one or more machine learning libraries or frameworks such as: PyTorch, Tensorflow.
Experience with relational and non-structured databases is highly desirable.
Experience using cloud technologies such as AWS with tools such as S3, Lambda, Athena, API Gateway, SageMaker.
Strong foundation in data analysis and statistical learning.
Must be able to provide evidence of relevant research expertise in the form of presentations, software, technical publications, and/or knowledge of applications.
Experience with statistical methodologies and machine learning techniques including neural networks, graphical models, ensemble methods, and natural language processing.
Knowledge of generative AI approaches such as large language models, diffusion models, or GANs is desirable.
Proficiency with Python and R.
Strong data visualization skills.
Familiarity with machine learning libraries such as PyTorch, TensorFlow, and scikit-learn.
Programming experience in Java, Python, or Perl with knowledge of relational and non-structured databases.
Technical proficiency and demonstrated success in scientific creativity, collaboration, and independent thought.
Ability to translate research concepts into practical solutions and prototypes.
Comfortable working with both technical and non-technical staff.
Strong project management skills with the ability to measure success metrics.
Leadership experience in technical discussions with senior stakeholders.
Must have ability to communicate effectively.
Preferences:
Candidates in the Raleigh area are preferred so they can work onsite when needed.
Candidates with a healthcare industry background are preferred.
Physical Demands and Environmental Conditions:
Regularly works with a computer for approximately 6-8 hours a day.
Must be able to read and understand scientific and complex directions.
Technical Proficiencies:
Languages: Python, R, SQL, Excel, Java, JavaScript, Spark (PySpark)
Packages: Scikit-learn, Pandas, NumPy, SciPy, TensorFlow, PyTorch, SpaCy, Hugging Face Transformers, Snorkel, H2O, Spark MLlib, Matplotlib, Seaborn, Statsmodels
Cloud: AWS (S3, Athena, Glue, EC2, SageMaker, Lambda, etc.), or equivalent cloud platforms
Technologies: Git, Jira, Docker
Techniques: Both traditional ML (Random Forest, XGBoost, clustering, etc.) and generative approaches (transformer models, diffusion models, GANs) as appropriate for the problem at hand, Machine learning and deep learning fundamentals, Natural language processing and understanding, Computer vision and image analysis, Exploratory data analysis and feature engineering, A/B testing and experimental design, Time series forecasting and anomaly detection, MLOps and model deployment practices, Ethical AI and responsible model development, Experience in Clinical data preferred
EDUCATION
Advanced degree is required in Computer Science, Engineering, Statistics, Math, or a related field.
Must be able to provide evidence of relevant research expertise in the form of presentations, software, technical publications, and/or knowledge of applications.
Master's degree with at least 4 years' experience, or Ph.D. with 2 years of experience, in a data science setting.
Benefits: Employees regularly scheduled to work 20 or more hours per week are eligible for comprehensive benefits including: Medical, Dental, Vision, Life, STD/LTD, 401(k), Paid Time Off (PTO) or Flexible Time Off (FTO), Tuition Reimbursement and Employee Stock Purchase Plan. Casual, PRN & Part Time employees regularly scheduled to work less than 20 hours are eligible to participate in the 401(k) Plan only. For more detailed information, please click here.
Labcorp is proud to be an Equal Opportunity Employer:
Labcorp strives for inclusion and belonging in the workforce and does not tolerate harassment or discrimination of any kind. We make employment decisions based on the needs of our business and the qualifications and merit of the individual. Qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), family or parental status, marital, civil union or domestic partnership status, sexual orientation, gender identity, gender expression, personal appearance, age, veteran status, disability, genetic information, or any other legally protected characteristic. Additionally, all qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law.
We encourage all to apply
If you are an individual with a disability who needs assistance using our online tools to search and apply for jobs, or needs an accommodation, please visit our accessibility site or contact us at Labcorp Accessibility. For more information about how we collect and store your personal data, please see our Privacy Statement.
Data Scientist, Product Analytics
Senior data scientist job in Raleigh, NC
As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.

**Product leadership:** You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.

**Analytics:** You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.

**Communication and influence:** You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
**Required Skills:**
Data Scientist, Product Analytics Responsibilities:
1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions
**Minimum Qualifications:**
6. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
7. A minimum of 6 years of work experience in analytics (minimum of 4 years with a Ph.D.)
8. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent practical experience
9. Experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), and/or statistical/mathematical software (e.g. R)
**Preferred Qualifications:**
10. Master's or Ph.D. Degree in a quantitative field
**Public Compensation:**
$173,000/year to $242,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Data Scientist, NLP & Language Models
Senior data scientist job in Raleigh, NC
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
Datavant is looking for an enthusiastic and meticulous Data Scientist to join our growing team, which builds machine learning models for use across Datavant in multiple verticals and for multiple customer types.
As part of the Data Science team, you will play a crucial role in developing new product features and automating existing internal processes to drive innovation across Datavant. You will work with tens of millions of patients' worth of healthcare data to develop models, contributing to the entirety of the model development lifecycle from ideation and research to deployment and monitoring. You will collaborate with an experienced team of Data Scientists and Machine Learning Engineers along with application Engineers and Product Managers across the company to achieve Datavant's AI-enabled future.
**You Will:**
+ Play a key role in the success of our products by developing models for NLP (and other) tasks.
+ Perform error analysis, data cleaning, and other related tasks to improve models.
+ Collaborate with your team by making recommendations for the development roadmap of a capability.
+ Work with other data scientists and engineers to optimize machine learning models and insert them into end-to-end pipelines.
+ Understand product use-cases and define key performance metrics for models according to business requirements.
+ Set up systems for long-term improvement of models and data quality (e.g. active learning, continuous learning systems, etc.).
**What You Will Bring to the Table:**
+ Advanced degree in computer science, data science, statistics, or a related field, or equivalent work experience.
+ 4+ years of experience with data science and machine learning in an industry setting.
+ 4+ years experience with Python.
+ Experience designing and building NLP models for tasks such as classification, named-entity recognition, and dependency parsing (a minimal NER sketch follows this list).
+ Proficiency with standard data analysis toolkits such as SQL, Numpy, Pandas, etc.
+ Proficiency with deep learning frameworks like PyTorch (preferred) or TensorFlow.
+ Demonstrated ability to drive results in a team environment and contribute to team decision-making in the face of ambiguity.
+ Strong time management skills and demonstrable experience of prioritising work to meet tight deadlines.
+ Initiative and ability to independently explore and research novel topics and concepts as they arise.
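For illustration only, here is a minimal named-entity recognition pass with spaCy, one of the NLP tasks named above; the example sentence is invented, and production models at a health-data company would be custom-trained and evaluated against business-defined metrics.

```python
# Illustrative only: off-the-shelf NER with spaCy. Requires the small English
# model to be downloaded first:  python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Patient seen at Duke University Hospital in Durham on March 3, 2024.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. ORG, GPE, DATE spans
```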
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$136,000-$170,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement here (************************************************** . Know Your Rights (*********************************************************************** , explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it here, (************************************************************** Id=**********48790029&layout Id=**********48795462) by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request, you can find instructions for locating it here (******************************************************************************************************* . Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy (**************************************** .
Senior Data Scientist
Senior data scientist job in Raleigh, NC
**_What Data Science contributes to Cardinal Health_**
The Data & Analytics Function oversees the analytics life-cycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data and solve complex business problems on large data sets, integrating multiple systems.
At Cardinal Health's Artificial Intelligence Center of Excellence (AI CoE), we are pushing the boundaries of healthcare with cutting-edge Data Science and Artificial Intelligence (AI). Our mission is to leverage the power of data to create innovative solutions that improve patient outcomes, streamline operations, and enhance the overall healthcare experience.
We are seeking a highly motivated and experienced Senior Data Scientist to join our team as a thought leader and architect of our AI strategy. You will play a critical role in fulfilling our vision through delivery of impactful solutions that drive real-world change.
**_Responsibilities_**
+ Lead the Development of Innovative AI solutions: Be responsible for designing, implementing, and scaling sophisticated AI solutions that address key business challenges within the healthcare industry by leveraging your expertise in areas such as Machine Learning, Generative AI, and RAG Technologies.
+ Develop advanced ML models for forecasting, classification, risk prediction, and other critical applications.
+ Explore and leverage the latest Generative AI (GenAI) technologies, including Large Language Models (LLMs), for applications like summarization, generation, classification and extraction.
+ Build robust Retrieval Augmented Generation (RAG) systems to integrate LLMs with vast repositories of healthcare and business data, ensuring accurate and relevant outputs (a toy retrieval sketch follows this list).
+ Shape Our AI Strategy: Work closely with key stakeholders across the organization to understand their needs and translate them into actionable AI-driven or AI-powered solutions.
+ Act as a champion for AI within Cardinal Health, influencing the direction of our technology roadmap and ensuring alignment with our overall business objectives.
+ Guide and mentor a team of Data Scientists and ML Engineers by providing technical guidance, mentorship, and support to a team of skilled and geographically distributed data scientists, while fostering a collaborative and innovative environment that encourages continuous learning and growth.
+ Embrace an AI-Driven Culture: Foster a culture of data-driven decision-making, promoting the use of AI insights to drive business outcomes and improve customer experience and patient care.
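As a toy sketch of the retrieval step in a RAG system (not Cardinal Health's implementation), the snippet below ranks documents against a query using TF-IDF cosine similarity as a stand-in for an embedding model plus vector database, then assembles a grounded prompt; the documents, query, and prompt wording are invented.

```python
# Toy sketch (illustrative only): retrieve the best-matching context and build a
# grounded prompt. TF-IDF stands in for embeddings + a vector database; the
# resulting prompt would then be sent to an LLM.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Formulary policy for oncology infusion products.",
    "Cold-chain handling requirements for specialty pharmaceuticals.",
    "Returns and credits process for expired inventory.",
]
query = "How should expired inventory be returned?"

vec = TfidfVectorizer().fit(docs + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
top_doc = docs[int(np.argmax(scores))]        # best-matching context

prompt = f"Answer using only this context:\n{top_doc}\n\nQuestion: {query}"
print(prompt)
```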
**_Qualifications_**
+ 8-12 years of experience with a minimum of 4 years of experience in data science, with a strong track record of success in developing and deploying complex AI/ML solutions, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ GenAI Proficiency: Deep understanding of Generative AI concepts, including LLMs, RAG technologies, embedding models, prompting techniques, and vector databases, along with evaluating retrievals from RAGs and GenAI models without ground truth
+ Experience building production-ready Generative AI applications involving RAG, LLMs, vector databases, and embedding models.
+ Extensive knowledge of healthcare data, including clinical data, patient demographics, and claims data. Understanding of HIPAA and other relevant regulations, preferred.
+ Experience working with cloud platforms like Google Cloud Platform (GCP) for data processing, model training, evaluation, monitoring, deployment and support preferred.
+ Proven ability to lead data science projects, mentor colleagues, and effectively communicate complex technical concepts to both technical and non-technical audiences preferred.
+ Proficiency in Python, statistical programming languages, machine learning libraries (Scikit-learn, TensorFlow, PyTorch), cloud platforms, and data engineering tools preferred.
+ Experience in Cloud Functions, VertexAI, MLFlow, Storage Buckets, IAM Principles and Service Accounts preferred.
+ Experience in building end-to-end ML pipelines, from data ingestion and feature engineering to model training, deployment, and scaling preferred.
+ Experience in building and implementing CI/CD pipelines for ML models and other solutions, ensuring seamless integration and deployment in production environments preferred.
+ Familiarity with RESTful API design and implementation, including building robust APIs to integrate your ML models and GenAI solutions with existing systems preferred.
+ Working understanding of software engineering patterns, solutions architecture, information architecture, and security architecture with an emphasis on ML/GenAI implementations preferred.
+ Experience working in Agile development environments, including Scrum or Kanban, and a strong understanding of Agile principles and practices preferred.
+ Familiarity with DevSecOps principles and practices, incorporating coding standards and security considerations into all stages of the development lifecycle preferred.
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $121,600 - $173,700
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before pay day with my FlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 11/05/2025
*If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Data Scientist IV
Senior data scientist job in Raleigh, NC
Data Scientist - Onsite Contract
Clearance Requirements: Eligible for sensitive clearance; must pass drug screening, criminal history, and credit check.
We are seeking an experienced Data Scientist to drive actionable insights through advanced analytics and data modeling. This role involves working onsite in Raleigh, NC, to analyze complex datasets, identify business trends, and develop data-driven solutions that optimize organizational performance. The ideal candidate will have deep expertise in data analysis, modeling, and visualization, and the ability to translate findings into clear business recommendations.
Key Responsibilities:
* Data Analysis & Visualization:
* Develop rapid analysis scripts, visualization prototypes, and interactive dashboards
* Prepare analysis reports, presentations, and project documentation
* Ensure deliverables meet business needs with minimal training required
* Modeling & Optimization:
* Design and implement mathematical models and optimization solutions
* Develop source code, scripting, and translational documentation for analytics projects
* Present proposals and project results to stakeholders
Qualifications & Skills:
* Minimum 13 years of relevant experience with a degree (or 17 years without a degree) in Data Science, Statistics, Experimental Psychology, or a related field
* Proficiency in Python, R, and SQL
* Strong analytical and statistical modeling skills
* Excellent written and verbal communication skills for diverse stakeholders
* Proven ability to work across industries and adapt quickly to complex business environments
* Familiarity with big data technologies and database development is a plus
* Must be able to work onsite in Raleigh, NC
Additional Requirements:
* Pass client-mandated clearance and drug screening
* Adherence to business casual dress code
* All overtime must be pre-approved in writing
About Seneca Resources
At Seneca Resources, we are more than just a staffing and consulting firm, we are a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we provide opportunities that help professionals grow their careers while making an impact.
When you work with Seneca, you're choosing a company that invests in your success, celebrates your achievements, and connects you to meaningful work with leading organizations nationwide. We take the time to understand your goals and match you with roles that align with your skills and career path. Our consultants and contractors enjoy competitive pay, comprehensive health, dental, and vision coverage, 401(k) retirement plans, and the support of a dedicated team who will advocate for you every step of the way.
* Seneca Resources is proud to be an Equal Opportunity Employer, committed to fostering a diverse and inclusive workplace where all qualified individuals are encouraged to apply.
Data Scientist II
Senior data scientist job in Raleigh, NC
As a Senior Data Scientist II, you will leverage your advanced analytical skills to extract insights from complex datasets. Your expertise will drive data-driven decision-making and contribute to the development of innovative solutions. You will collaborate with cross-functional teams to enhance business strategies and drive growth through actionable data analysis.
· Leading the development of advanced AI and machine learning models to solve complex business problem
· Working closely with other data scientists and engineers to design, develop, and deploy AI solutions
· Collaborating with cross-functional teams to ensure AI solutions are aligned with business goals and customer needs
· Building models, performing analytics, and creating AI features
· Mentoring junior data scientists and providing guidance on AI and machine learning best practices
· Working with product leaders to apply data science solutions
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************.To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************
Skills and Requirements
3-5+ years of professional experience (POST GRAD) as a delivery-focused Data Scientist or ML Engineer
Understand how LLMs integrate into complex systems
Deep understanding of RAG
Strong in Python
Experience creating AI Agents and Agentic Workflows
Master's and/or Ph.D.
Data Scientist II- Machine Learning (Ultrasonic/NDT)
Senior data scientist job in Raleigh, NC
Full-time Description
Company: Predictant LLC
Predictant is an innovative startup focused on using AI to accurately measure tension in large industrial bolts (currently focused on the Wind Turbine industry). Using our patented method and patent-pending hardware, coupled with advanced AI models, Predictant reduces the maintenance and inspection time on large structures by up to 90% vs. traditional methods. Predictant has recently obtained critical Wind Energy certifications and is now demonstrating the technology to a select list of industry-leading OEMs and maintenance companies while preparing for strong growth next year. Predictant is part of the Invica Group, a London-based company that maintains a worldwide portfolio of companies focused on sustainability.
This role applies specifically to the development of the AI model built from the ultrasonic bi-wave data received from more than half a million (and growing) bolt measurements. This role reports to the Director of Product Development, who created the patents for the technology and oversees all product, hardware, and software development. Applicants should expect a fast-paced, hard-working environment and the rewards of working with a small and growing company.
Data Scientist II identifies trends, patterns, and anomalies in datasets by performing extensive statistical and data analysis to develop insights. Performs data mining, cleaning, and aggregation processes to prepare data, conduct analysis, and develop databases. The individual should effectively utilize (and maintain) multiple structured and non-structured databases to design, develop, and implement the most valuable data-driven solutions for the organization.
Due to the central role of Machine Learning (ML) in Predictant's mission, it is essential for a Data Scientist II to possess a solid comprehension of ML models, including classification, regression, and clustering. The person should be able to apply ML and Artificial Intelligence (AI) to tackle both current and novel issues in the domains of Non-destructive Testing (NDT). In addition, proficiency in feature engineering is a must for this role, as feature engineering is a crucial step in the ML pipelines, and it heavily contributes to the success of ML-based applications for NDT.
The individual in this role will be responsible for rapidly testing new ideas and concepts, debugging and optimizing existing models. This role involves collaborating with team members and discussing ideas and innovative solutions for improving existing processes. Additionally, the person should be an excellent team player, always seeking to assist team members in achieving their objectives of developing exceptional data products.
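The sketch below is a hypothetical illustration of the feature-engineering-plus-ML pattern this description refers to: simple time- and frequency-domain features are extracted from a synthetic echo-like waveform and fed to a scikit-learn regressor. The waveform model, features, and tension values are fabricated for the example and are not Predictant's method.

```python
# Illustrative only: hand-crafted features from a synthetic "echo" signal used to
# predict a fabricated tension value. Real NDT pipelines involve far richer
# signal processing and validated physical models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 1e-4, 2048)                   # 100 microseconds of samples

def make_waveform(tension: float) -> np.ndarray:
    delay = 5e-5 + 1e-9 * tension                # echo arrives later under load (toy physics)
    echo = np.exp(-((t - delay) ** 2) / 1e-11)
    return echo + 0.05 * rng.normal(size=t.size)

def features(sig: np.ndarray) -> list:
    spectrum = np.abs(np.fft.rfft(sig))
    return [sig.max(), float(np.argmax(sig)), spectrum.max(), float(np.argmax(spectrum))]

tensions = rng.uniform(1e3, 5e3, 300)            # fabricated bolt tensions
X = np.array([features(make_waveform(k)) for k in tensions])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, tensions)
print(model.predict(X[:3]), tensions[:3])        # sanity check on training samples
```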
Requirements
What You'll Do
Analyze large-scale ultrasonic and sensor datasets to identify trends and anomalies
Perform feature engineering, preprocessing, and EDA
Develop, validate, and optimize supervised & unsupervised ML/DL models
Build deep learning pipelines using Python and modern DL frameworks (TensorFlow/Keras)
Collaborate with engineering and product teams on new ideas, experiments, and model improvements
Communicate insights to both technical and non-technical stakeholders
What We're Looking For
2-4 years of experience in Data Science or ML Engineering
Bachelor's or Master's in Engineering, Data Science, CS, Math, Physics, or similar
Strong Python skills and deep-learning experience for Computer Vision
Experience with data preprocessing, feature engineering, and model development
Familiarity with sensor/ultrasonic data, signal processing, or audio/time-series analysis (preferred)
Experience with Azure (pipelines, databases) is a plus
Strong communication, attention to detail, and ability to work in a fast-paced startup environment
Must be able to work onsite in Raleigh 5 days/week
Preferred
Local candidates in the Raleigh-Durham/RTP area
U.S. citizens, permanent residents, or current H-1B holders who do not require new visa sponsorship
Data Scientist, Data and Analytics CoE
Senior data scientist job in Raleigh, NC
Cognizant is one of the world's leading professional services companies. We help our clients modernize technology, reinvent processes, and transform experiences so they can stay ahead in our constantly evolving world. Cognizant is looking to expand our team and your skills are needed. Are you interested? If so, please apply in order to be considered. We look forward to reviewing your application!
The Data Scientist creates a shared understanding of business performance to drive data-driven decisions and accountability across business channels.
**Main Responsibilities:**
- Enables Proactive Insights: Surfaces forward-looking analytics using advanced data modeling and engineering techniques.
- Establishes COE Strategy: Develops MSS Data Science COE strategies, best practices, and scalable frameworks.
- Optimizes Tools & Processes: Streamlines DS workflows, prioritizes initiatives, and enables cross-LOB alignment and scaling.
- Leverages Technical Expertise: Utilizes PySpark/SQL, Fabric Engineering, and Lakehouse architecture for data model development (a small illustrative PySpark sketch follows this list).
- Strengthens the Data Ecosystem: Troubleshoots to ensure performance, reliability, and accessibility.
- PySpark/SQL
- Fabric Engineering & Lakehouse Design
- Data model development
- Data ecosystem troubleshooting & solutioning
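As a small, generic illustration of the PySpark/Lakehouse-style work listed above (paths, columns, and table layout are hypothetical), a daily aggregation from a raw table to a curated table might look like this:

```python
# Illustrative only: read a raw Parquet table, aggregate to a daily grain, and
# write a curated table. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("coe-sketch").getOrCreate()

orders = spark.read.parquet("/lake/bronze/orders")
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "channel")
    .agg(F.sum("amount").alias("revenue"),
         F.countDistinct("customer_id").alias("customers"))
)
daily.write.mode("overwrite").partitionBy("order_date").parquet("/lake/silver/daily_revenue")
```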
**Salary and Other Compensation:**
The annual salary for this position is between $100,000-$114,400 depending on experience and other qualifications of the successful candidate.
This position is also eligible for Cognizant's discretionary annual incentive program, based on performance and subject to the terms of Cognizant's applicable plans.
**Benefits:** Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
+ Medical/Dental/Vision/Life Insurance
+ Paid holidays plus Paid Time Off
+ 401(k) plan and contributions
+ Long-term/Short-term Disability
+ Paid Parental Leave
+ Employee Stock Purchase Plan
**Disclaimer:** The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.
**LA County (only):** Qualified applicants with arrest and/or conviction records will be considered for employment.
Cognizant will only consider applicants for this position who are legally authorized to work in the United States without requiring company sponsorship now or at any time in the future.
Cognizant is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.
Slalom Flex (Project Based) - Java Data Engineer
Senior data scientist job in Raleigh, NC
About the Role: We are seeking a highly skilled and motivated Data Engineer to join our team as an individual contributor. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that support our data-driven initiatives. You will work closely with cross-functional teams to ensure data availability, quality, and performance across the organization.
About Us
Slalom is a purpose-led, global business and technology consulting company. From strategy to implementation, our approach is fiercely human. In six countries and 43 markets, we deeply understand our customers-and their customers-to deliver practical, end-to-end solutions that drive meaningful impact. Backed by close partnerships with over 400 leading technology providers, our 10,000+ strong team helps people and organizations dream bigger, move faster, and build better tomorrows for all. We're honored to be consistently recognized as a great place to work, including being one of Fortune's 100 Best Companies to Work For seven years running. Learn more at slalom.com.
Key Responsibilities:
* Design, develop, and maintain robust data pipelines using Java and Python.
* Build and optimize data workflows on AWS using services such as EMR, Glue, Lambda, and NoSQL databases.
* Leverage open-source frameworks to enhance data processing capabilities and performance.
* Collaborate with data scientists, analysts, and other engineers to deliver high-quality data solutions.
* Participate in Agile development practices, including sprint planning, stand-ups, and retrospectives.
* Ensure data integrity, security, and compliance with internal and external standards.
Required Qualifications:
* 5+ years of hands-on experience in software development using Java and Python (Spring Boot).
* 1+ years of experience working with AWS services including EMR, Glue, Lambda, and NoSQL databases.
* 3+ years of experience working with open-source data processing frameworks (e.g., Apache Spark, Kafka, Airflow).
* 2+ years of experience in Agile software development environments.
* Strong problem-solving skills and the ability to work independently in a fast-paced environment.
* Excellent communication and collaboration skills.
Preferred Qualifications:
* Experience with CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
* Familiarity with data governance and data quality best practices.
* Exposure to data lake and data warehouse architectures.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration
for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements.
Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the
selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
Sr. Data Engineer
Senior data scientist job in Raleigh, NC
Piper Companies is seeking a Senior Data Engineer to join a leading organization in the banking industry. The Senior Data Engineer will have strong experience with cloud technologies (GCP or AWS), programming in Java, Scala, or Kotlin, and designing scalable data solutions.
Responsibilities of the Senior Data Engineer:
* Design and develop robust data platforms using the Google Cloud stack (DataProc/Spark, BigQuery, GCS, etc.)
* Lead the development and enhancement of microservices in Java/Kotlin to support data-driven use cases
* Architect hybrid solutions for on-prem and cloud environments, leveraging Terraform for infrastructure management
Qualifications of the Senior Data Engineer:
* 8+ years of experience in data engineering or software development
* Strong proficiency in SQL and one or more of the following: Java, Scala, Kotlin
* Hands-on experience with GCP or AWS cloud platforms
Compensation for the Senior Data Engineer includes:
* Salary range: $140,000 - $185,000 depending on experience
* Comprehensive benefits package including medical, dental, vision, 401(k), and PTO
* Hybrid work flexibility - 3 days a week onsite
This job opens for applications on 10/17/2025. Applications for this job will be accepted for at least 30 days from the posting date.
Keywords: Senior Data Engineer, GCP, AWS, SQL, Java, Scala, Kotlin, Terraform, BigQuery, DataProc, Spark, Google Cloud Platform, cloud data architecture, microservices, hybrid cloud, infrastructure as code, banking technology, financial services, data platform engineering, cloud-native development, on-prem integration.
#LI-JN1
#HYBRID
Senior Data Engineer
Senior data scientist job in Raleigh, NC
Job Title: Senior Data Engineer
Workplace: Hybrid - due to in-office requirements, candidates must be local to either Raleigh, NC or Charlottesville, VA. Relocation assistance is not available.
Clearance Required: Not required, but candidates must be eligible for a clearance.
Position Overview:
Elder Research, Inc. (ERI) is seeking to hire a Senior Data Engineer with strong engineering skills who will provide technical support across multiple project teams by leading, designing, and implementing the software and data architectures necessary to deliver analytics to our clients, as well as providing consulting and training support to client teams in the areas of architecture, data engineering, ML engineering and/or related areas. The ideal candidate will have a strong command of Python for data analysis and engineering tasks, a demonstrated ability to create reports and visualizations using tools like R, Python, SQL, or Power BI, and deep expertise in Microsoft Azure environments. The candidate will play a key role in collaborating with cross-functional teams, including software developers, cloud engineers, architects, business leaders, and power users, to deliver innovative data solutions to our clients.
This role requires a consultative mindset, excellent communication skills, and a thorough understanding of the Software Development Life Cycle (SDLC). Candidates should have 7-12 years of relevant experience, as well as experience in client-facing or consultative roles. The role will be based out of Raleigh, NC or Charlottesville, VA and will require 2-4 days of business travel to our customer site every 6 weeks.
Key Responsibilities:
Data Engineering & Analysis:
* Develop, optimize, and maintain scalable data pipelines and systems in Azure environments.
* Analyze large, complex datasets to extract insights and support business decision-making.
* Create detailed and visually appealing reports and dashboards using R, Python, SQL, and Power BI (see the illustrative sketch below).
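As a hedged illustration of the Azure-based pipeline and reporting work above, here is a minimal Python sketch that pulls a CSV extract from Azure Blob Storage and aggregates it into a summary that could feed a Power BI dataset. The connection string, container, blob, and column names are hypothetical assumptions, not details from the posting.

```python
import io
import os

import pandas as pd
from azure.storage.blob import BlobServiceClient

# Hypothetical storage account, container, and blob names, for illustration only.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client(container="curated", blob="sales/monthly_extract.csv")

# Download the extract and load it into a DataFrame.
df = pd.read_csv(io.BytesIO(blob.download_blob().readall()))

# A simple aggregation of the kind a Power BI dashboard might consume.
summary = (
    df.groupby("region", as_index=False)["revenue"]
    .sum()
    .sort_values("revenue", ascending=False)
)
print(summary.head())
```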
Collaboration & Consulting:
* Work closely with software developers, cloud engineers, architects, business leaders, and power users to understand requirements and deliver tailored solutions.
* Act as a subject-matter expert in data engineering and provide guidance on best practices.
* Translate complex technical concepts into actionable business insights for stakeholders.
Azure Expertise:
* Leverage Azure services such as Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, and Azure Blob Storage for data solutions.
* Ensure data architecture is aligned with industry standards and optimized for performance in cloud environments.
SDLC Proficiency:
* Follow and advocate for SDLC best practices in data engineering projects.
* Collaborate with software development teams to ensure seamless integration of data solutions into applications.
Required Qualifications:
* Experience: 7-12 years in data engineering, analytics, or related fields, with a focus on Azure environments.
* Education: Master's degree in Computer Science, Data Science, Engineering, or a related field.
Technical Skills:
* Programming: Advanced expertise in Python; experience with R is a plus.
* Data Tools: Proficient in SQL, Power BI, and Azure-native data tools.
* Azure Knowledge: Strong understanding of Azure services, including data integration, storage, and analytics solutions.
* SDLC Knowledge: Proven track record of delivering data solutions following SDLC methodologies.
* Consultative Skills: Strong client-facing experience with excellent communication and presentation abilities.
* Due to customer requirements, candidates must be U.S. citizens or permanent residents of the United States.
Preferred Skills and Qualifications:
* Certifications in Azure (e.g., Azure Data Engineer, Azure Solutions Architect).
* Familiarity with Azure Functions, Event Grid, and Logic Apps.
* Hands-on experience with machine learning frameworks and big data processing tools (e.g., Spark, Hadoop).
* Familiarity with CI/CD pipelines and DevOps practices for data engineering workflows.
Why apply to this position at Elder Research?
* Competitive Salary and Benefits
* Important Work / Make a Difference supporting U.S. national security.
* Job Stability: Elder Research is not a typical government contractor; we hire you for a career, not just a contract.
* People-Focused Culture: we prioritize work-life balance and provide a supportive, positive, and collaborative work environment as well as opportunities for professional growth and advancement.
* Company Stock Ownership: all employees are provided with shares of the company each year based on company value and profits.
AI Data Engineer - Data and Knowledge Graphs
Senior data scientist job in Raleigh, NC
Job Description
DPR is looking for an experienced AI Data Engineer to join our Data and AI team and work closely with the Data Platform, BI, and Enterprise Architecture teams to influence the technical direction of DPR's AI initiatives. You will work closely with cross-functional teams, including business stakeholders, data engineers, and technical leads, to ensure alignment between business needs and data architecture and to define data models for specific focus areas.
Responsibilities
Integrate the semantic layer to serve as an AI-ready knowledge base, enabling applications such as advanced analytics, prompt engineering for large language models, and intelligent data discovery while ensuring seamless connectivity and holistic data understanding across the enterprise.
Develop standards, guidelines, and best practices for knowledge representation, semantic modeling, and data standardization across DPR to ensure a clear and consistent approach within the enterprise semantic layer.
Establish and refine operational processes for semantic model development, including intake mechanisms for new requirements (e.g., from AI prompt engineering initiatives) and backlog management, ensuring efficient and iterative delivery.
Partner closely with analytics engineers and data architects to deeply understand the underlying data models in Snowflake and develop a profound understanding of our business domains and data entities. Provide strategic guidance on how structured data can be seamlessly transformed, optimized, and semantically enriched for advanced AI consumption and traditional BI/Analytics tools (see the illustrative sketch after these responsibilities).
Lead the effort to establish and maintain comprehensive documentation for all aspects of the semantic layer, which includes defining and standardizing key business metrics, documenting ontological definitions, relationships, usage guidelines, and metadata for all semantic models, ensuring clarity, consistency, and ease of understanding for all data users.
Evaluate and monitor the performance, quality, and usability of semantic systems, ensuring they meet organizational objectives, external standards, and the demands of AI applications.
Act as a thought leader, constantly evaluating emerging trends in knowledge graphs, semantic AI, prompt engineering, and related technologies to strategically enhance DPR's capabilities in knowledge representation and data understanding.
Rapidly prototype high-priority solutions in cloud platforms, demonstrating their feasibility and business value.
Participate in all phases of the project lifecycle and lead data architecture initiatives.
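To make the semantic-layer responsibilities above more concrete, the following minimal Python/rdflib sketch defines a tiny, hypothetical ontology fragment and annotates each concept with the Snowflake table assumed to back it. The namespace, classes, and table names are illustrative assumptions, not DPR's actual semantic models.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical enterprise namespace; a real ontology would be centrally governed.
DPR = Namespace("https://example.com/dpr/ontology#")

g = Graph()
g.bind("dpr", DPR)

# A small concept hierarchy: a Project and its relationship to a CostCode.
g.add((DPR.Project, RDF.type, RDFS.Class))
g.add((DPR.CostCode, RDF.type, RDFS.Class))
g.add((DPR.hasCostCode, RDF.type, RDF.Property))
g.add((DPR.hasCostCode, RDFS.domain, DPR.Project))
g.add((DPR.hasCostCode, RDFS.range, DPR.CostCode))

# Annotate each concept with the (hypothetical) Snowflake table it is derived from.
g.add((DPR.Project, DPR.backedByTable, Literal("ANALYTICS.CORE.DIM_PROJECT")))
g.add((DPR.CostCode, DPR.backedByTable, Literal("ANALYTICS.CORE.DIM_COST_CODE")))

print(g.serialize(format="turtle"))
```

A semantic layer built along these lines can be queried by BI tools and used as grounding context for LLM prompt engineering, which is the connection the responsibilities above describe.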
Qualifications
Proven expertise in data analysis, data modeling, and data engineering with a focus on cloud-native data platforms.
5+ years of hands-on experience in semantic modeling, ontology engineering, knowledge graphs, or related AI data preparation.
3+ years of experience developing data solutions specifically for AI/ML applications leveraging structured data.
3+ years of experience with data warehousing concepts, dimensional modeling, and data governance principles as they relate to structuring data for semantic enrichment.
Proficiency in SQL and experience working with cloud-based data warehouses, preferably Snowflake.
Strong analytical and problem-solving skills with keen attention to detail and the ability to translate complex business concepts into logical ontologies and knowledge graph structures.
Strong proficiency in SQL, Python, and PySpark.
Familiarity with agile methodologies and experience working closely with cross-functional teams to manage technical backlogs.
Skilled in orchestrating and automating data pipelines within a DevOps framework.
Strong communicator with the ability to present ideas clearly and influence stakeholders, with a passion for enabling data-driven transformation.
DPR Construction is a forward-thinking, self-performing general contractor specializing in technically complex and sustainable projects for the advanced technology, life sciences, healthcare, higher education and commercial markets. Founded in 1990, DPR is a great story of entrepreneurial success as a private, employee-owned company that has grown into a multi-billion-dollar family of companies with offices around the world.
Working at DPR, you'll have the chance to try new things, explore unique paths and shape your future. Here, we build opportunity together by harnessing our talents, enabling curiosity and pursuing our collective ambition to make the best ideas happen. We are proud to be recognized as a great place to work by our talented teammates and leading news organizations like U.S. News and World Report, Forbes, Fast Company and Newsweek.
Explore our open opportunities at ********************
Qlik Data Engineer
Senior data scientist job in Raleigh, NC
Akkodis is seeking a Qlik Data Engineer for a contract role with a client in Raleigh, NC (Remote). You will design and automate scalable data ingestion pipelines and implement optimized data models for efficient storage and retrieval. Proficiency in Qlik platforms and strong SQL expertise is essential for success in this role.
Rate Range: $49/hour to $53/hour. The rate may be negotiable based on experience, education, geographic location, and other factors.
Qlik Data Engineer job responsibilities include:
* Design and develop scalable ETL/ELT pipelines using Qlik tools (Qlik Replicate, Qlik Compose) for batch and real-time data processing.
* Automate data ingestion and application reloads using Qlik Automate or scripting (e.g., Python) to improve efficiency.
* Implement and optimize data models in Snowflake schema for efficient storage and retrieval.
* Monitor and troubleshoot data integration processes, ensuring performance and resolving bottlenecks.
* Collaborate with cross-functional teams to gather requirements and deliver actionable data solutions.
* Ensure data quality and governance by implementing validation frameworks and compliance measures (see the illustrative validation sketch below).
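As a minimal example of the validation framework mentioned above, the sketch below uses the Snowflake Python connector to compare a staging row count against the modeled target table after a load. The account details, table names, and alerting behavior are hypothetical; the actual Qlik Replicate/Compose configuration is outside the scope of this sketch.

```python
import os

import snowflake.connector

# Hypothetical credentials and object names, supplied via environment variables.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
)

def row_count(table: str) -> int:
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]
    finally:
        cur.close()

# Compare the replicated staging table against the modeled target table.
source_rows = row_count("STAGING.ORDERS_RAW")
target_rows = row_count("MARTS.FACT_ORDERS")

if source_rows != target_rows:
    # In practice this would raise an alert or fail the pipeline run.
    print(f"Row count mismatch: source={source_rows}, target={target_rows}")
else:
    print(f"Validation passed: {target_rows} rows")
```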
Required Qualifications:
* Bachelor's degree in Computer Science, Information Technology, or a related field.
* 3-7 years of experience in data engineering and Qlik platform development.
* Proven expertise in Qlik tools (Qlik Replicate, Qlik Compose, Qlik Sense) and strong proficiency in SQL for data integration and transformation.
* Hands-on experience with Snowflake and AWS cloud services, along with knowledge of ETL/ELT processes and data modeling techniques.
If you are interested in this role, then please click APPLY NOW. For other opportunities available at Akkodis, or any questions, feel free to contact me at ****************************.
Pay Details: $49.00 to $53.00 per hour
Benefit offerings available for our associates include medical, dental, vision, life insurance, short-term disability, additional voluntary benefits, EAP program, commuter benefits and a 401K plan. Our benefit offerings provide employees the flexibility to choose the type of coverage that meets their individual needs. In addition, our associates may be eligible for paid leave including Paid Sick Leave or any other paid leave required by Federal, State, or local law, as well as Holiday pay where applicable.
Equal Opportunity Employer/Veterans/Disabled
Military connected talent encouraged to apply
To read our Candidate Privacy Information Statement, which explains how we will use your information, please navigate to *************************************************
The Company will consider qualified applicants with arrest and conviction records in accordance with federal, state, and local laws and/or security clearance requirements, including, as applicable:
* The California Fair Chance Act
* Los Angeles City Fair Chance Ordinance
* Los Angeles County Fair Chance Ordinance for Employers
* San Francisco Fair Chance Ordinance
Massachusetts Candidates Only: It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Sr. Data Engineer
Senior data scientist job in Raleigh, NC
Who We Are:
Bandwidth, a prior “Best of EC” award winner, is a global software company that helps enterprises deliver exceptional experiences through voice, messaging, and emergency services. Reaching 65+ countries and over 90 percent of the global economy, we're the only provider offering an owned communications cloud that delivers advanced automation, AI integrations, global reach, and premium human support. Bandwidth is trusted for mission-critical communications by the Global 2000, hyperscalers, and SaaS builders!
At Bandwidth, your music matters when you are part of the BAND. We celebrate differences and encourage BANDmates to be their authentic selves. #jointheband
What We Are Looking For:
This role focuses on designing, building, and optimizing scalable data platforms that enable data ingestion, transformation, and governance for the organization. The Sr. Data Engineer will work closely with cross-functional teams to support both traditional data engineering pipelines and modern data mesh initiatives, balancing infrastructure development, data-ops, and enablement. The role requires a strong understanding of data architecture in Snowflake, along with proficiency in Python, AWS services, and dev-ops practices, to drive efficient data solutions for enterprise-wide data consumption.
What You'll Do:
Develop and Maintain Scalable Data Platforms: Lead the design and development of scalable, secure, and efficient data platforms using Snowflake and other cloud technologies, ensuring alignment with the organization's data mesh and platform enablement goals.
Build and Optimize Data Pipelines: Architect, build, and optimize ETL/ELT data pipelines to support various business units, balancing batch and streaming solutions. Leverage tools like AWS DMS, RDS, S3, and Kafka to ensure smooth data flows.
Data Enablement & Ops: Collaborate with cross-functional teams (data engineers, data scientists, analysts, and architects) to create data-as-a-product offerings, ensuring self-service enablement, observability, and data quality.
Automation & CI/CD: Develop, manage, and continuously improve automated data pipelines using Python (Prefect) and cloud-native services (AWS). Ensure CI/CD pipelines are set up for efficient deployment and monitoring (see the illustrative Prefect sketch at the end of this section).
Snowflake Expertise: Apply deep knowledge of Snowflake SQL to build efficient and performant data models, optimizing storage and query performance.
Cloud Data Integration: Use AWS services and data integration tools (DMS, Prefect) to ingest and manage diverse data sets across the organization.
Data Governance & Compliance: Work closely with solution architects to implement data governance frameworks, ensuring adherence to enterprise data standards and regulatory compliance.
Stakeholder Collaboration: Partner with product owners, business stakeholders, and data teams to translate business requirements into scalable data solutions, aligning roadmaps with organizational objectives.
Agile Practices & Leadership: Participate in Agile ceremonies, mentor junior engineers, and help drive a culture of continuous improvement across the data engineering team.
Metrics and Observability: Implement monitoring and metrics for data pipeline health, performance, and data quality, driving insights to improve product decision-making.
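As a hedged sketch of the Prefect-based automation described in this section, the flow below wires together extract, transform, and load tasks. The task bodies are placeholders (synthetic data, a print in place of a Snowflake load), and the names are illustrative assumptions rather than Bandwidth's actual pipelines.

```python
import pandas as pd
from prefect import flow, task


@task(retries=2)
def extract_orders() -> pd.DataFrame:
    # Placeholder for an extract step, e.g. a file landed in S3 by DMS.
    return pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 25.5, 7.25]})


@task
def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Simple enrichment; a real pipeline would also apply data-contract validation.
    df["amount_usd_cents"] = (df["amount"] * 100).astype(int)
    return df


@task
def load(df: pd.DataFrame) -> None:
    # Placeholder for a Snowflake load (e.g. via an external stage or write_pandas).
    print(f"Loaded {len(df)} rows")


@flow(name="orders-pipeline")
def orders_pipeline() -> None:
    load(transform(extract_orders()))


if __name__ == "__main__":
    orders_pipeline()
```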
What You Need:
Education:
Bachelor's degree (within an engineering or computer science discipline strongly preferred) or equivalent work experience
Experience:
8+ years of experience working in software architecture in the data engineering domain.
Demonstrated track record of operating in highly collaborative, flexible, and productive cross-organization teams.
Proven ability to perform in high-visibility, high-growth environments.
Action- and results-oriented; able to communicate strategies and tactics and identify obstacles.
Experience in providing effective leadership to developers/engineers through the creation of Agile backlog assets and ongoing day-to-day guidance, resulting in development outcomes steeped in productivity and fun.
Knowledge:
Programming Languages, e.g. Python, Java
SQL Databases, e.g. MySQL, PostgreSQL, SQL Server, MariaDB
NoSQL Databases, e.g. MongoDB, Cassandra
Data Warehousing Solutions, e.g. Snowflake
Data Processing, e.g. Flink
Stream Processing, e.g. Kafka, MSK
Data modeling, ETL/ELT processes, and data quality frameworks.
Other data engineering tools, e.g. Kafka Connect, DMS, Talend, Prefect
Cloud Platforms, e.g. AWS
Containerization/Orchestration, e.g. Kubernetes (k8s)
CI/CD and version control tools, e.g. ArgoCD, GitHub, GitHub Actions
Agile methodologies and frameworks, e.g. Scrum, Kanban
Collaboration tools, e.g. JIRA, Monday
Skills:
Demonstrated skills in developing products and/or solutions with a focus on agile principles working with internal delivery teams.
Strong written and verbal communication.
The Whole Person Promise:
At Bandwidth, we're pretty proud of our corporate culture, which is rooted in our “Whole Person Promise.” We promise all employees that they can have meaningful work AND a full life, and we provide a work environment geared toward enriching your body, mind, and spirit. How do we do that? Well…
100% company-paid Medical, Vision, & Dental coverage for you and your family with low deductibles and low out-of-pocket expenses.
All new hires receive four weeks of PTO.
PTO Embargo. When you take time off (of any kind!) you're embargoed from working. Bandmates and managers are not allowed to interrupt your PTO - not even with email.
Additional PTO can be earned throughout the year through volunteer hours and Bandwidth challenges.
“Mahalo moments” program grants additional time off for life's most important moments like graduations, buying a first home, getting married, wedding anniversaries (every five years), and the birth of a grandchild.
90-Minute Workout Lunches and unlimited meetings with our very own nutritionist.
Are you excited about the position and its responsibilities, but not sure if you're 100% qualified? Do you feel you can work to help us crush the mission? If you answered 'yes' to both of these questions, we encourage you to apply! You won't want to miss the opportunity to be a part of the BAND.
Applicant Privacy Notice
Senior Data Scientist
Senior data scientist job in Raleigh, NC
**What Data Science contributes to Cardinal Health**
The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze, and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytics products, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data and solve complex business problems on large data sets that integrate multiple systems.
This role will support the Major Rugby business unit, a legacy supplier of multi-source, generic pharmaceuticals for over 60 years. Major Rugby provides over 1,000 high-quality, Rx, OTC and vitamin, mineral and supplement products to the acute, retail, government and consumer markets. This role will focus on leveraging advanced analytics, machine learning, and optimization techniques to solve complex challenges related to demand forecasting, inventory optimization, logistics efficiency and risk mitigation. Our goal is to uncover insights and drive meaningful deliverables to improve decision making and business outcomes.
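As a hedged illustration of the demand-forecasting work described above, the sketch below fits a Holt-Winters model to a toy monthly demand series using statsmodels; the data, seasonal period, and forecast horizon are made up for the example and are not Cardinal Health figures.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Toy monthly demand series (units shipped); real inputs would come from sales history.
index = pd.date_range("2022-01-01", periods=36, freq="MS")
demand = pd.Series(
    [120 + i + (10 if i % 12 in (10, 11) else 0) for i in range(36)],
    index=index,
)

# Additive trend and seasonality with a 12-month cycle.
model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=12
).fit()

# Forecast the next quarter of demand.
print(model.forecast(3))
```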
**Responsibilities:**
+ Leads the design, development, and deployment of advanced analytics and machine learning models to solve complex business problems
+ Collaborates cross-functionally with product, engineering, operations, and business teams to identify opportunities for data-driven decision-making
+ Translates business requirements into analytical solutions and delivers insights that drive strategic initiatives
+ Develops and maintains scalable data science solutions, ensuring reproducibility, performance, and maintainability
+ Evaluates and implements new tools, frameworks, and methodologies to enhance the data science toolkit
+ Drives experimentation and A/B testing strategies to optimize business outcomes
+ Mentors junior data scientists and contributes to the development of a high-performing analytics team
+ Ensures data quality, governance, and compliance with organizational and regulatory standards
+ Stays current with industry trends, emerging technologies, and best practices in data science and AI
+ Contributes to the development of internal knowledge bases, documentation, and training materials
**Qualifications:**
+ 8-12 years of experience in data science, analytics, or a related field (preferred)
+ Advanced degree (Master's or Ph.D.) in Data Science, Computer Science, Engineering, Operations Research, Statistics, or a related discipline preferred
+ Strong programming skills in Python and SQL
+ Proficiency in data visualization tools such as Tableau or Looker, with a proven ability to translate complex data into clear, actionable business insights
+ Deep understanding of machine learning, statistical modeling, predictive analytics, and optimization techniques
+ Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is highly desirable
+ Excellent communication and storytelling skills, with the ability to influence stakeholders and present findings to both technical and non-technical audiences
+ Experience in Supervised and Unsupervised Machine Learning, including Classification, Forecasting, Anomaly Detection, Pattern Detection, and Text Mining, using a variety of techniques such as Decision Trees, Time Series Analysis, Bagging and Boosting algorithms, Neural Networks, Deep Learning, and Natural Language Processing (NLP).
+ Experience with PyTorch or other deep learning frameworks
+ Strong understanding of RESTful APIs and/or data streaming is a big plus
+ Experience with modern version control (GitHub, Bitbucket) required
+ Hands-on experience with containerization (Docker, Kubernetes, etc.)
+ Experience with product discovery and design thinking
+ Experience with Gen AI
+ Experience with supply chain analytics is preferred
**Anticipated salary range:** $123,400 - $176,300
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before pay day with my FlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 12/02/2025. If interested in this opportunity, please submit an application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
#LI-Remote
#LI-AP4
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Data Scientist
Senior data scientist job in Raleigh, NC
We are seeking a motivated Data Scientist with strong technical skills in machine learning and a passion for continuous learning. The ideal candidate will have foundational experience in building predictive and prescriptive models using Python, PySpark, and modern ML frameworks (TensorFlow, PyTorch, XGBoost, LightGBM). This role requires the ability to write production-ready code, develop automated model monitoring workflows, create visualization dashboards, and work collaboratively in Agile environments with cross-functional teams.
This position is 4 days in office, 1 day remote per week, based at our corporate headquarters in Raleigh, North Carolina (North Hills).
Key Responsibilities:
Model Development & Technical Expertise
Design and implement predictive and prescriptive models for regression, classification, and optimization problems.
Apply advanced techniques such as structural time series modeling and boosting algorithms (e.g., XGBoost, LightGBM).
Develop and tune machine learning models using Python, PySpark, TensorFlow, and PyTorch.
Write efficient, production-ready code and frameworks optimized for scalability and deployment.
Build automated workflows for model monitoring and performance evaluation.
Create dashboards using tools like Databricks and Palantir to visualize key model metrics such as model drift and Shapley values (see the illustrative sketch below).
Rapidly prototype and test hypotheses to validate model approaches.
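To ground the boosting and model-monitoring items above, here is a minimal sketch that trains an XGBoost classifier on synthetic data and computes the SHAP values and holdout accuracy a monitoring dashboard might surface. The data, features, and hyperparameters are illustrative assumptions only.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for real features.
X, y = make_classification(n_samples=2000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Per-feature SHAP values, the raw material for drift and explainability dashboards.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
mean_abs_shap = np.abs(shap_values).mean(axis=0)
print("Mean |SHAP| per feature:", np.round(mean_abs_shap, 3))

# Simple holdout accuracy as one monitored performance metric.
print("Holdout accuracy:", model.score(X_test, y_test))
```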
Collaboration & Communication
Be a team player with a strong desire to learn new technologies and methodologies.
Agile Execution
Operate effectively in Agile environments, contributing to iterative development cycles and sprint planning.
Adapt quickly to changing priorities while maintaining high standards of quality and innovation.
Education & Experience:
1+ years of experience in Data Science, Machine Learning, AI/GenAI.
Master's degree in Computer Science, Statistics, Operations Research, Mathematics, or a similar field.
Familiarity with GenAI and agentic solutions, Retrieval-Augmented Generation (RAG), and model distillation is a plus.
Preferred:
Experience in Retail Industry
AI/ML certifications in Vertex AI/Databricks/AWS.
California Residents click below for Privacy Notice:
***************************************************