Senior Data Scientist
Senior data scientist job in Plainfield, NJ
Data Scientist - Pharmaceutical Analytics (PhD)
1-Year Contract - Hybrid - Plainfield, NJ
We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.
What You'll Do
Apply machine learning & advanced analytics to pharma commercial data
Deliver insights on market dynamics, physician prescribing, and patient behavior
Partner with R&D, medical affairs, and commercial teams to guide strategy
Build predictive models for sales effectiveness, adherence, and market forecasting
What We're Looking For
PhD in Data Science, Statistics, Computer Science, Bioinformatics, or related field
5+ years of pharma or healthcare analytics experience
Strong skills in enterprise-class software stacks and cloud computing
Deep knowledge of pharma market dynamics & healthcare systems
Excellent communication skills to translate data into strategy
Data Scientist
Senior data scientist job in Parsippany-Troy Hills, NJ
***This role is hybrid three days per week onsite in Parsippany, NJ. LOCAL CANDIDATES ONLY. No relocation***
Data Scientist
• Summary: Provide analytics, telemetry, and ML/GenAI-driven insights to measure SDLC health, prioritize improvements, validate pilot outcomes, and implement AI-driven development lifecycle capabilities.
• Responsibilities:
o Define metrics and instrumentation for SDLC/CI pipelines, incidents, and delivery KPIs.
o Build dashboards, anomaly detection, and data models; implement GenAI solutions (e.g., code suggestion, PR summarization, automated test generation) to improve developer workflows.
o Design experiments and validate AI-driven features during the pilot.
o Collaborate with engineering and SRE teams to operationalize models and ensure observability and data governance.
• Required skills:
o 3+ years of applied data science/ML in production; hands-on experience with GenAI/LLMs applied to developer workflows or DevOps automation.
o Strong Python (pandas, scikit-learn), ML frameworks, SQL, and data visualization (Tableau/Power BI).
o Experience with observability/telemetry data (logs/metrics/traces) and A/B experiment design.
• Preferred:
o Experience with model deployment, MLOps, prompt engineering, and cloud data platforms (AWS/GCP/Azure).
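The anomaly-detection duty above can be sketched with pandas, which the posting itself names. This is a minimal, hypothetical example (the column names, sample durations, and threshold are illustrative, not from any real pipeline) that flags unusually slow CI runs using a robust median/MAD cutoff rather than a full ML model:

```python
import pandas as pd

# Hypothetical CI run durations in seconds; values are illustrative only.
runs = pd.DataFrame({
    "run_id": range(10),
    "duration_s": [300, 310, 295, 305, 290, 1200, 300, 308, 298, 1150],
})

# Robust threshold: flag runs far above the median, scaled by the
# median absolute deviation (MAD) so the outliers themselves
# don't inflate the cutoff the way a mean/std threshold would.
med = runs["duration_s"].median()
mad = (runs["duration_s"] - med).abs().median()
runs["anomaly"] = runs["duration_s"] > med + 5 * mad

slow = runs.loc[runs["anomaly"], "duration_s"].tolist()  # the two ~20-minute runs
```

In practice such a check would feed the dashboards and alerting the role describes; an IsolationForest or seasonal model could replace the MAD rule once enough telemetry accumulates.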
Biostatistician
Senior data scientist job in Rahway, NJ
Join a Global Leader in Workforce Solutions - Net2Source Inc.
Who We Are
Net2Source Inc. isn't just another staffing company - we're a powerhouse of innovation, connecting top talent with the right opportunities. Recognized for 300% growth in the past three years, we operate in 34 countries with a global team of 5,500+. Our mission? To bridge the talent gap with precision: Right Talent. Right Time. Right Place. Right Price.
Title: Statistical Scientist
Duration: 12 Months (Start Date: First Week of January)
Location: Rahway, NJ (Onsite/Hybrid Only)
Rate: $65/hr on W2
Position Summary
We are seeking an experienced Statistical Scientist to support analytical method qualification, validation, and experimental design for Client's scientific and regulatory programs. The successful candidate will work closely with scientists to develop statistically sound protocols, contribute to method robustness and validation studies, and prepare reporting documentation for regulatory readiness. This position requires deep expertise in statistical methodologies and hands-on programming skills in SAS, R, and JMP.
Key Responsibilities
• Collaborate with scientists to design experiments, develop study protocols, and establish acceptance criteria for analytical applications.
• Support analytical method qualification and validation through statistical protocol development, analysis, and reporting.
• Write memos and technical reports summarizing statistical analyses for internal and regulatory audiences.
• Assist scientists in assessing protocol deviations and resolving investigation-related issues.
• Coordinate with the Quality Audit group to finalize statistical reports for BLA (Biologics License Application) readiness.
• Apply statistical modeling approaches to evaluate assay robustness and method reliability.
• Support data integrity and ensure compliance with internal processes and regulatory expectations.
Qualifications
Education (Required):
• Ph.D. in Statistics, Biostatistics, Applied Statistics, or related discipline with 3+ years of relevant experience, or
• M.S. in Statistics/Applied Statistics with 6+ years of relevant experience.
(BS/BA candidates will not be considered.)
Required Skills:
• Proficiency in SAS, R, and JMP.
• Demonstrated experience in analytical method qualification and validation, including protocol writing and statistical execution.
• Strong background in experimental design for analytical applications.
• Ability to interpret and communicate statistical results clearly in both written and verbal formats.
Preferred / Nice-to-Have:
• Experience with mixed-effect modeling.
• Experience with Bayesian analysis.
• Proven ability to write statistical software/code to automate routine analyses.
• Experience presenting complex statistical concepts to leadership.
• Experience in predictive stability analysis.
Why Work With Us?
At Net2Source, we believe in more than just jobs - we build careers. We champion leadership at all levels, celebrate diverse perspectives, and empower our people to make an impact. Enjoy a collaborative environment where your ideas matter, and your professional growth is our priority.
Our Commitment to Inclusion & Equity
Net2Source is an equal opportunity employer dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all.
Awards & Recognition
• America's Most Honored Businesses (Top 10%)
• Fastest-Growing Staffing Firm by Staffing Industry Analysts
• INC 5000 List for Eight Consecutive Years
• Top 100 by Dallas Business Journal
• Spirit of Alliance Award by Agile1
Ready to Level Up Your Career?
Click Apply Now and let's make it happen.
SENIOR DATA ENGINEER DEVELOPER/ARCHITECT
Senior data scientist job in Red Bank, NJ
MUST BE WILLING TO GO ONSITE ON AN AS-NEEDED BASIS / OPEN TO 2 DAYS A WEEK
MUST LIVE IN THE TRI-STATE AREA / NO RELOCATION AVAILABLE
YOU MUST BE ACTIVELY LOCAL TO THE BRIDGEWATER, NJ LOCATION
YOU MUST BE A US CITIZEN OR GREEN CARD HOLDER; NO OTHER WORK STATUS IS AN OPTION
MUST HAVE STRONG SKILLS IN DATABRICKS, PYTHON, AND ORACLE
MUST HAVE STRONG EXPERIENCE WITH AZURE AND AIRFLOW
MUST HAVE PYTHON, DATA WAREHOUSING, AND ORACLE
Bachelor's Degree in Computer Science, Data Science, Software Engineering, Information Systems, or related quantitative field
7 plus years of experience working as a Data Engineer, ETL Engineer, Data/ETL Architect or similar roles
MUST HAVE SOLID SQL Server, Fabric, Synapse
DO NOT WANT TO SEE AWS OR GCP BACKGROUNDS
DATABRICKS CERTIFICATE IS A HUGE PLUS
Solid continuous experience in Python.
Years of continuous experience with Airflow.
Years of experience with Azure Data Factory (ADF) or similar.
Years of experience working with relational databases: Oracle, SQL Server, PostgreSQL, or similar.
Years of experience working with NoSQL databases: MongoDB, Cosmos DB, DocumentDB, or similar.
Years of experience writing SQL code.
Years of experience in Kimball Dimensional Modeling (star schema).
Years of experience in columnar databases: Snowflake, Azure Synapse, or similar.
Data Analytics Engineer
Senior data scientist job in Somerset, NJ
Client: manufacturing company
Type: direct hire
Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets.
This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams.
This role is on-site five days per week in Somerset, NJ.
Key Responsibilities
Power BI Reporting & Administration
Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets
Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions
Develop and maintain data models to ensure accuracy, consistency, and reliability
Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance
Optimize Power BI solutions for performance, scalability, and ease of use
ETL & Data Warehousing
Design and maintain data warehouse structures, including schema and database layouts
Develop and support ETL processes to ensure timely and accurate data ingestion
Integrate data from multiple systems while ensuring quality, consistency, and completeness
Work closely with database administrators to optimize data warehouse performance
Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed
Training & Documentation
Create and maintain technical documentation, including specifications, mappings, models, and architectural designs
Document data warehouse processes for reference, troubleshooting, and ongoing maintenance
Manage data definitions, lineage documentation, and data cataloging for all enterprise data models
Project Management
Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team
Collaborate with key business stakeholders to ensure departmental reporting needs are met
Record meeting notes in Confluence and document project updates in Jira
Data Governance
Implement and enforce data governance policies to ensure data quality, compliance, and security
Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness
Routine IT Functions
Resolve Help Desk tickets related to reporting, dashboards, and BI tools
Support general software and hardware installations when needed
Other Responsibilities
Manage email and phone communication professionally and promptly
Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel
Perform additional assigned duties as needed
Qualifications
Required
Minimum of 3 years of relevant experience
Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience
Experience with cloud-based BI environments (Azure, AWS, etc.)
Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica)
Proficiency in SQL for data extraction, manipulation, and transformation
Strong knowledge of DAX
Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake)
Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools
Strong analytical, problem-solving, and documentation skills
Excellent written and verbal communication abilities
High attention to detail and strong self-review practices
Effective time management and organizational skills; ability to prioritize workload
Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
Data Engineer
Senior data scientist job in Hamilton, NJ
Key Responsibilities:
Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
Manage FTP/SFTP file transfers between internal systems and external vendors.
Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.
Required Skills & Experience:
10+ years of experience in data engineering or production support within financial services or trading environments.
Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
Strong Python and SQL programming skills.
Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
Experience with Git, CI/CD pipelines, and Azure DevOps.
Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
Excellent communication, problem-solving, and stakeholder management skills.
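The data-quality and completeness duties listed above can be sketched as a small validation pass over an incoming feed. This is a hypothetical example: the field names (ticker, px_last, asof), sample values, and expected universe are illustrative and not Bloomberg's actual schema.

```python
import pandas as pd

# Hypothetical market data feed after ingestion; one missing ticker,
# one null price, and one stale as-of date are planted for the check.
feed = pd.DataFrame({
    "ticker": ["AAPL", "MSFT", "IBM", None],
    "px_last": [189.5, 410.2, None, 160.1],
    "asof": pd.to_datetime(["2024-01-02", "2024-01-02", "2024-01-02", "2023-12-29"]),
})

expected_tickers = {"AAPL", "MSFT", "IBM", "GS"}   # illustrative universe
business_date = pd.Timestamp("2024-01-02")

# Completeness, quality, and timeliness checks for downstream systems.
issues = {
    "missing_rows": sorted(expected_tickers - set(feed["ticker"].dropna())),
    "null_prices": int(feed["px_last"].isna().sum()),
    "stale_rows": int((feed["asof"] != business_date).sum()),
}
```

A real pipeline would raise an incident or quarantine the file when `issues` is non-empty, before the data reaches trading or reporting systems.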
AWS Data engineer with Databricks || USC Only || W2 Only
Senior data scientist job in Princeton, NJ
AWS Data Engineer with Databricks
Princeton, NJ - Hybrid - Need Locals or Nearby
Duration: Long Term
This position is available only to U.S. citizens.
Key Responsibilities
Design and implement ETL/ELT pipelines with Databricks, Apache Spark, AWS Glue, S3, Redshift, and EMR for processing large-scale structured and unstructured data.
Optimize data flows, monitor performance, and troubleshoot issues to maintain reliability and scalability.
Collaborate on data modeling, governance, security, and integration with tools like Airflow or Step Functions.
Document processes and mentor junior team members on best practices.
Required Qualifications
Bachelor's degree in Computer Science, Engineering, or related field.
5+ years of data engineering experience, with strong proficiency in Databricks, Spark, Python, SQL, and AWS services (S3, Glue, Redshift, Lambda).
Familiarity with big data tools like Kafka, Hadoop, and data warehousing concepts.
Data Engineer
Senior data scientist job in Jersey City, NJ
Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.
Role: Data Engineer
Location: Merrimack, NH/Smithfield, RI/Jersey City, NJ
Duration: Full-Time/W2
Job Description:
Must-Haves:
Python for running ETL batch jobs
Heavy SQL for data analysis, validation and querying
AWS and the ability to move the data through the data stages and into their target databases.
The Postgres database is the target, so that is required.
Nice to haves:
Snowflake
Java for API development is a nice to have (will teach this)
Experience in asset management for domain knowledge.
Production support debugging and processing of vendor data
The Expertise and Skills You Bring
A proven foundation in data engineering - bachelor's degree preferred, 10+ years' experience
Extensive experience with ETL technologies
Design and develop ETL reporting and analytics solutions.
Knowledge of Data Warehousing methodologies and concepts - preferred
Advanced data manipulation languages and frameworks (Java, Python, JSON) - required
RDBMS experience (Snowflake, PostgreSQL) - required
Knowledge of cloud platforms and services (AWS - IAM, EC2, S3, Lambda, RDS) - required
Designing and developing low- to moderate-complexity data integration solutions - required
Experience with DevOps, Continuous Integration and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) will be preferred
Expert in SQL and stored procedures on relational databases
Strong debugging, analysis, and production support skills
Application Development based on JIRA stories (Agile environment)
Demonstrable experience with ETL tools (Informatica, Snaplogic)
Experience in working with Python in an AWS environment
Create, update, and maintain technical documentation for software-based projects and products.
Solving production issues.
Interact effectively with business partners to understand business requirements and assist in generation of technical requirements.
Participate in architecture, technical design, and product implementation discussions.
Working Knowledge of Unix/Linux operating systems and shell scripting
Experience developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipelines, including software configuration management, test automation, version control, and static code analysis.
Excellent interpersonal and communication skills
Ability to work with global Agile teams
Proven ability to deal with ambiguity and work in fast paced environment
Ability to mentor junior data engineers.
The Value You Deliver
The associate will help the team design and build best-in-class data solutions using a very diversified tech stack.
Strong experience of working in large teams and proven technical leadership capabilities
Knowledge of enterprise-level implementations like data warehouses and automated solutions.
Ability to negotiate, influence and work with business peers and management.
Ability to develop and drive a strategy as per the needs of the team
Good to have: Full-Stack Programming knowledge, hands-on test case/plan preparation within Jira
Senior Data Architect
Senior data scientist job in Edison, NJ
Act as an Enterprise Architect, supporting architecture reviews, design decisions, and strategic planning.
Design and implement scalable data warehouse and analytics solutions on AWS and Snowflake.
Develop and optimize SQL, ETL/ELT pipelines, and data models to support reporting and analytics.
Collaborate with cross-functional teams (data engineering, application development, infrastructure) to align on architecture best practices and ensure consistency across solutions.
Evaluate and recommend technologies, tools, and frameworks to improve data processing efficiency and reliability.
Provide guidance and mentorship to data engineering teams, enforcing data governance, quality, and security standards.
Troubleshoot complex data and performance issues and propose long-term architectural solutions.
Support capacity planning, cost optimization, and environment management within AWS/Snowflake ecosystems.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
Azure Data Engineer
Senior data scientist job in Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Senior Data Engineer (Snowflake)
Senior data scientist job in Parsippany-Troy Hills, NJ
Senior Data Engineer (Snowflake & Python)
1-Year Contract | $60/hour + Benefit Options
Hybrid: On-site a few days per month (local candidates only)
Work Authorization Requirement
You must be authorized to work for any employer as a W2 employee. This is required for this role.
This position is W-2 only - no C2C, no third-party submissions, and no sponsorship will be considered.
Overview
We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake.
Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement.
What You'll Do
Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake
Participate across the full software development lifecycle - planning, requirements, development, testing, and QA
Partner closely with engineering and data teams to identify and implement optimal technical solutions
Build and maintain high-performance, scalable data pipelines and data warehouse architectures
Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
Manage deliverables and priorities effectively in a fast-moving environment
Contribute to data governance practices including metadata management and data lineage
Support analytics and reporting use cases leveraging advanced SQL and analytical functions
Required Skills & Experience
8+ years of experience designing and developing data solutions in an enterprise environment
5+ years of hands-on Snowflake experience
Strong hands-on development skills with SQL and Python
Proven experience designing and developing data warehouses in Snowflake
Ability to diagnose, optimize, and tune SQL queries
Experience with Azure data frameworks (e.g., Azure Data Factory)
Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
Solid understanding of metadata management and data lineage
Hands-on experience with SQL analytical functions
Working knowledge of shell scripting and JavaScript
Experience using Git, Confluence, and Jira
Strong problem-solving and troubleshooting skills
Collaborative mindset with excellent communication skills
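As a small illustration of the SQL analytical (window) functions mentioned in the requirements above, here is the pandas equivalent of a per-customer running total, i.e. SUM(amount) OVER (PARTITION BY customer ORDER BY row order). The table and column names are hypothetical:

```python
import pandas as pd

# Hypothetical orders table; names and values are illustrative.
orders = pd.DataFrame({
    "customer": ["a", "a", "b", "a", "b"],
    "amount": [10, 20, 5, 30, 15],
})

# Windowed running total per customer, analogous to the SQL:
#   SUM(amount) OVER (PARTITION BY customer ORDER BY <row order>)
orders["running_total"] = orders.groupby("customer")["amount"].cumsum()
```

In Snowflake itself the same result would come from the window-function form directly; the pandas version is just the sketch a reviewer might ask for in an interview.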
Nice to Have
Experience supporting Pharma industry data
Exposure to Omni-channel data environments
Why This Opportunity
$60/hour W2 on a long-term 1-year contract
Benefit options available
Hybrid structure with limited on-site requirement
High-impact role supporting enterprise data initiatives
Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
Senior Data Engineer
Senior data scientist job in New Providence, NJ
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
Job Description
Experienced Data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
Work in tandem with our engineering team to identify and implement the most optimal solutions
Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
Able to manage deliverables in fast paced environments
Areas of Expertise
At least 10 years of experience designing and developing data solutions in an enterprise environment
At least 5 years' experience on the Snowflake platform
Strong hands-on SQL and Python development
Experience designing and developing data warehouses in Snowflake
A minimum of three years' experience developing production-ready data ingestion and processing pipelines using Spark and Scala
Strong hands-on experience with orchestration tools, e.g., Airflow, Informatica, Automic
Good understanding of metadata and data lineage
Hands-on knowledge of SQL analytical functions
Strong knowledge of and hands-on experience with shell scripting and JavaScript
Able to demonstrate experience with software engineering practices, including CI/CD, automated testing, and performance engineering.
Good understanding of and exposure to Git, Confluence, and Jira
Good problem-solving and troubleshooting skills.
Team player with a collaborative approach and excellent communication skills
Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
Senior Data Engineer - Master Data Management (MDM)
Senior data scientist job in Iselin, NJ
We are seeking a highly skilled and experienced Senior Data Engineer specializing in Master Data Management (MDM) to join our data team. The ideal candidate will have a strong background in designing, implementing, and managing end-to-end MDM solutions, preferably within the financial sector. You will be responsible for architecting robust data platforms, evaluating MDM tools, and aligning data strategies to meet business needs.
Additional Information
The base salary for this role will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within Iselin, NJ is $140K - $150K/year & benefits (see below).
The Role
Responsibilities:
Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains.
Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization.
Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met.
Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM, etc.) and provide objective recommendations aligned with business requirements.
Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively.
Implement data integration pipelines leveraging modern data engineering tools and practices.
Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer.
Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies.
Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services.
Ensure compliance with data governance, data privacy, and security standards.
Support CI/CD pipelines for continuous integration and deployment of data solutions.
Requirements:
12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry.
Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options.
Strong functional knowledge of reference data sources and domain-specific data standards.
Expertise in Python, PySpark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer.
Familiarity with CI/CD practices, tools, and automation pipelines.
Ability to work collaboratively across teams to deliver complex data solutions.
Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
Preferred Qualifications:
Familiarity with financial data models and regulatory requirements.
Experience with Azure cloud platforms
Knowledge of data governance, data quality frameworks, and metadata management.
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Data Engineer
Senior data scientist job in Jersey City, NJ
ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES
Skillset: Data Engineer
Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
Nice to Haves: Java, Spark, React Js
Interview Process: 2 rounds; the 2nd will be onsite
You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you.
As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.
Job responsibilities:
• Supports review of controls to ensure sufficient protection of enterprise data.
• Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
• Updates logical or physical data models based on new use cases.
• Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
• Adds to team culture of diversity, opportunity, inclusion, and respect.
• Develop enterprise data models; design, develop, and maintain large-scale data processing pipelines and infrastructure; lead code reviews and provide mentoring through the process; drive data quality; ensure data accessibility to analysts and data scientists; ensure compliance with data governance requirements; and ensure data engineering practices align with business goals.
Required qualifications, capabilities, and skills
• Formal training or certification on data engineering concepts and 2+ years applied experience
• Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
• Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
• Extensive experience with AWS and with the design, implementation, and maintenance of data pipelines using Python and PySpark.
• Proficient in Python and PySpark, able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
• Proven experience in performance tuning to ensure jobs run at optimal levels without performance bottlenecks.
• Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
• Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, DynamoDB, MongoDB, or similar
• Advanced proficiency in a cloud data warehouse such as Snowflake or AWS Redshift
• Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions or similar
• Proficiency in Unix scripting; data structures; data serialization formats such as JSON, Avro, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or streaming; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools.
Preferred qualifications, capabilities, and skills
• Knowledge of data governance and security best practices.
• Experience in carrying out data analysis to support business insights.
• Strong Python and Spark skills
Data Engineer
Senior data scientist job in Newark, NJ
NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.
Role Description
This is a full-time, hybrid, Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.
Key Responsibilities
Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
Data Integration: Integrate and transform data using industry-standard tools. Experience required with:
AWS Services: AWS Glue, Data Pipeline, Redshift, and S3.
Azure Services: Azure Data Factory, Synapse Analytics, and Blob Storage.
Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.
Required Skills and Experience
Experience: 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
Integration: Experience integrating data via RESTful / GraphQL APIs.
Programming: Proficient in Python for ETL automation and SQL for database management.
Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
Authorization: Must have valid work authorization in the United States.
Salary Range: $65,000- $80,000 per year
Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.
Equal Opportunity Employer NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
Applied AIML Data Scientist - Senior Associate
Senior data scientist job in Jersey City, NJ
The Risk Management & Compliance Technology Machine Learning team at JPMorgan Chase focuses on solving challenging business problems such as Anti-Money Laundering and Surveillance through data science and machine learning techniques across Risk, Compliance, Conduct and Operational Risk. As an Applied AI/ML Senior Associate on the team, you will have the opportunity to study complex business problems and apply advanced algorithms to develop, test, and evaluate AI/ML applications or models for those problems.
You will work with the firm's rich data pool from both internal and external sources using Python/Spark via AWS and other systems. You are also expected to derive business insights from technical results and be able to present them to a non-technical audience.
Job responsibilities
Proactively develop understanding of key business problems and processes
Execute tasks throughout a model development process including data wrangling/analysis, model training, testing, and selection.
Generate structured and meaningful insights from data analysis and modeling exercises and present them in a format appropriate to the audience.
Collaborate with other data scientists and machine learning engineers to deploy machine learning solutions.
Carry out ad-hoc and periodic analysis as required by the business stakeholder, model risk function, and other groups.
Required qualifications, capabilities, and skills
At least 2 years of relevant experience after an advanced degree (MS, PhD) in a quantitative field (e.g., Data Science, Computer Science, Applied Mathematics, Statistics, Econometrics)
Practical expertise and work experience with ML projects, both supervised and unsupervised.
Proficient programming skills in Python, R, or other equivalent languages.
Demonstrated experience working with large and complicated datasets.
Experience with broad range of analytical toolkits, such as SQL, Spark, Scikit-Learn, and XGBoost.
Excellent problem solving, communication (verbal and written), and teamwork skills.
Preferred qualifications, capabilities, and skills
Experience with graph analytics and neural networks (TensorFlow, Keras).
Experience working with engineering teams to operationalize machine learning models.
Familiarity with the financial services industry.
Senior Data Scientist
Senior data scientist job in Trenton, NJ
**_What Data Science contributes to Cardinal Health_**
The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets and integrating multiple systems.
At Cardinal Health's Artificial Intelligence Center of Excellence (AI CoE), we are pushing the boundaries of healthcare with cutting-edge Data Science and Artificial Intelligence (AI). Our mission is to leverage the power of data to create innovative solutions that improve patient outcomes, streamline operations, and enhance the overall healthcare experience.
We are seeking a highly motivated and experienced Senior Data Scientist to join our team as a thought leader and architect of our AI strategy. You will play a critical role in fulfilling our vision through delivery of impactful solutions that drive real-world change.
**_Responsibilities_**
+ Lead the Development of Innovative AI solutions: Be responsible for designing, implementing, and scaling sophisticated AI solutions that address key business challenges within the healthcare industry by leveraging your expertise in areas such as Machine Learning, Generative AI, and RAG Technologies.
+ Develop advanced ML models for forecasting, classification, risk prediction, and other critical applications.
+ Explore and leverage the latest Generative AI (GenAI) technologies, including Large Language Models (LLMs), for applications like summarization, generation, classification and extraction.
+ Build robust Retrieval Augmented Generation (RAG) systems to integrate LLMs with vast repositories of healthcare and business data, ensuring accurate and relevant outputs.
+ Shape Our AI Strategy: Work closely with key stakeholders across the organization to understand their needs and translate them into actionable AI-driven or AI-powered solutions.
+ Act as a champion for AI within Cardinal Health, influencing the direction of our technology roadmap and ensuring alignment with our overall business objectives.
+ Guide and mentor a team of Data Scientists and ML Engineers: Provide technical guidance, mentorship, and support to a skilled and geographically distributed team, while fostering a collaborative and innovative environment that encourages continuous learning and growth.
+ Embrace an AI-Driven Culture: Foster a culture of data-driven decision-making, promoting the use of AI insights to drive business outcomes and improve customer experience and patient care.
**_Qualifications_**
+ 8-12 years of experience, with a minimum of 4 years in data science and a strong track record of success in developing and deploying complex AI/ML solutions, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ GenAI Proficiency: Deep understanding of Generative AI concepts, including LLMs, RAG technologies, embedding models, prompting techniques, and vector databases, along with evaluating retrievals from RAGs and GenAI models without ground truth
+ Experience building production-ready Generative AI applications involving RAG, LLMs, vector databases, and embedding models.
+ Extensive knowledge of healthcare data, including clinical data, patient demographics, and claims data. Understanding of HIPAA and other relevant regulations, preferred.
+ Experience working with cloud platforms like Google Cloud Platform (GCP) for data processing, model training, evaluation, monitoring, deployment and support preferred.
+ Proven ability to lead data science projects, mentor colleagues, and effectively communicate complex technical concepts to both technical and non-technical audiences preferred.
+ Proficiency in Python, statistical programming languages, machine learning libraries (Scikit-learn, TensorFlow, PyTorch), cloud platforms, and data engineering tools preferred.
+ Experience with Cloud Functions, Vertex AI, MLflow, Storage Buckets, IAM principles, and Service Accounts preferred.
+ Experience in building end-to-end ML pipelines, from data ingestion and feature engineering to model training, deployment, and scaling preferred.
+ Experience in building and implementing CI/CD pipelines for ML models and other solutions, ensuring seamless integration and deployment in production environments preferred.
+ Familiarity with RESTful API design and implementation, including building robust APIs to integrate your ML models and GenAI solutions with existing systems preferred.
+ Working understanding of software engineering patterns, solutions architecture, information architecture, and security architecture with an emphasis on ML/GenAI implementations preferred.
+ Experience working in Agile development environments, including Scrum or Kanban, and a strong understanding of Agile principles and practices preferred.
+ Familiarity with DevSecOps principles and practices, incorporating coding standards and security considerations into all stages of the development lifecycle preferred.
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $121,600 - $173,700
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before pay day with my FlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 11/05/2025
*If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Sr Data Engineer Python Serverside
Senior data scientist job in White House Station, NJ
This is a direct hire full-time position, with a hybrid on-site 2 days a week format.
YOU MUST BE A US CITIZEN OR GREEN CARD HOLDER; NO OTHER STATUS TO WORK IN THE US WILL BE PERMITTED
YOU MUST LIVE LOCAL TO THE AREA AND BE ABLE TO DRIVE ONSITE A MINIMUM OF TWO DAYS A WEEK
THE TECH STACK WILL BE:
7 years demonstrated server-side development proficiency
Programming Languages: Python (NumPy, Pandas, Oracle PL/SQL). Other non-interpreted languages like Java, C++, Rust, etc. are a plus. Must be proficient in the intermediate-advanced level of the language (concurrency, memory management, etc.)
Design patterns: typical GOF patterns (Factory, Facade, Singleton, etc.)
Data structures: maps, lists, arrays, etc.
SCM: solid Git proficiency, MS Azure DevOps (CI/CD)
Senior Data Engineer - MDM
Senior data scientist job in Iselin, NJ
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our challenge:
We are seeking a highly skilled and experienced Senior Data Engineer specializing in Master Data Management (MDM) to join our data team. The ideal candidate will have a strong background in designing, implementing, and managing end-to-end MDM solutions, preferably within the financial sector. You will be responsible for architecting robust data platforms, evaluating MDM tools, and aligning data strategies to meet business needs.
Additional Information
The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role if filled within Iselin, NJ is $135K to $150K/year & benefits (see below).
Key Responsibilities:
Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains.
Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization.
Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met.
Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM, etc.) and provide objective recommendations aligned with business requirements.
Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively.
Implement data integration pipelines leveraging modern data engineering tools and practices.
Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer.
Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies.
Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services.
Ensure compliance with data governance, data privacy, and security standards.
Support CI/CD pipelines for continuous integration and deployment of data solutions.
Qualifications:
12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry.
Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options.
Strong functional knowledge of reference data sources and domain-specific data standards.
Expertise in Python, PySpark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer.
Familiarity with CI/CD practices, tools, and automation pipelines.
Ability to work collaboratively across teams to deliver complex data solutions.
Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
Preferred Skills:
Familiarity with financial data models and regulatory requirements.
Experience with Azure cloud platforms.
Knowledge of data governance, data quality frameworks, and metadata management.
We offer:
A highly competitive compensation and benefits package
A multinational organization with 58 offices in 21 countries and the possibility to work abroad
10 days of paid annual leave (plus sick leave and national holidays)
Maternity & Paternity leave plans
A comprehensive insurance plan including: medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
Retirement savings plans
A higher education certification policy
Commuter benefits (varies by region)
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms
A flat and approachable organization
A truly diverse, fun-loving and global work culture
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Senior Data Scientist
Senior data scientist job in Trenton, NJ
**What Data Science contributes to Cardinal Health**
The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytics products, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets and integrating multiple systems.
This role will support the Major Rugby business unit, a legacy supplier of multi-source, generic pharmaceuticals for over 60 years. Major Rugby provides over 1,000 high-quality, Rx, OTC and vitamin, mineral and supplement products to the acute, retail, government and consumer markets. This role will focus on leveraging advanced analytics, machine learning, and optimization techniques to solve complex challenges related to demand forecasting, inventory optimization, logistics efficiency and risk mitigation. Our goal is to uncover insights and drive meaningful deliverables to improve decision making and business outcomes.
**Responsibilities:**
+ Leads the design, development, and deployment of advanced analytics and machine learning models to solve complex business problems
+ Collaborates cross-functionally with product, engineering, operations, and business teams to identify opportunities for data-driven decision-making
+ Translates business requirements into analytical solutions and delivers insights that drive strategic initiatives
+ Develops and maintains scalable data science solutions, ensuring reproducibility, performance, and maintainability
+ Evaluates and implements new tools, frameworks, and methodologies to enhance the data science toolkit
+ Drives experimentation and A/B testing strategies to optimize business outcomes
+ Mentors junior data scientists and contributes to the development of a high-performing analytics team
+ Ensures data quality, governance, and compliance with organizational and regulatory standards
+ Stays current with industry trends, emerging technologies, and best practices in data science and AI
+ Contributes to the development of internal knowledge bases, documentation, and training materials
**Qualifications:**
+ 8-12 years of experience in data science, analytics, or a related field (preferred)
+ Advanced degree (Master's or Ph.D.) in Data Science, Computer Science, Engineering, Operations Research, Statistics, or a related discipline preferred
+ Strong programming skills in Python and SQL
+ Proficiency in data visualization tools such as Tableau or Looker, with a proven ability to translate complex data into clear, actionable business insights
+ Deep understanding of machine learning, statistical modeling, predictive analytics, and optimization techniques
+ Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is highly desirable
+ Excellent communication and storytelling skills, with the ability to influence stakeholders and present findings to both technical and non-technical audiences
+ Experience in Supervised and Unsupervised Machine Learning, including Classification, Forecasting, Anomaly Detection, Pattern Detection, and Text Mining, using a variety of techniques such as Decision Trees, Time Series Analysis, Bagging and Boosting algorithms, Neural Networks, Deep Learning, and Natural Language Processing (NLP).
+ Experience with PyTorch or other deep learning frameworks
+ Strong understanding of RESTful APIs and/or data streaming is a big plus
+ Required: experience with modern version control (GitHub, Bitbucket)
+ Hands-on experience with containerization (Docker, Kubernetes, etc.)
+ Experience with product discovery and design thinking
+ Experience with Gen AI
+ Experience with supply chain analytics is preferred
**Anticipated salary range:** $123,400 - $176,300
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before pay day with my FlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 12/02/2025 *If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
\#LI-Remote
\#LI-AP4
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************