Lead Data Scientist
Data scientist job in Columbus, OH
Candidates MUST go on-site at one of the following locations:
Columbus, OH
Cincinnati, OH
Cleveland, OH
Indianapolis, IN
Hagerstown, MD
Chicago, IL
Detroit, MI
Minnetonka, MN
Houston, TX
Charlotte, NC
Akron, OH
Experience:
· Master's degree and 5+ years of related work experience using statistics and machine learning to solve complex business problems; experience conducting statistical analysis with advanced statistical software, scripting languages, and packages; experience with big data analysis tools and techniques; and experience building and deploying predictive models, web scraping, and scalable data pipelines
· Expert understanding of statistical methods and skills such as Bayesian network inference, linear and non-linear regression, and hierarchical/mixed (multi-level) models
Python, R, or SAS, plus SQL and some form of lending experience (e.g., HELOC, mortgage) are most important (see the illustrative sketch after this section)
Excellent communication skills
If a candidate has credit card experience (e.g., Discover or Bread Financial), they are an A+ fit!
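For illustration only (not part of the posting's requirements): a minimal, hypothetical sketch of the kind of lending-related predictive modeling described above, fitting a default-risk classifier on synthetic data with scikit-learn. All column names and figures are invented.

    # Hypothetical sketch: default-risk model on synthetic lending data (illustrative only)
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 5_000
    # Invented features: credit utilization, loan-to-value ratio, months on book
    X = np.column_stack([
        rng.uniform(0, 1, n),          # utilization
        rng.uniform(0.2, 1.0, n),      # LTV
        rng.integers(1, 120, n),       # months on book
    ])
    # Synthetic default flag loosely driven by utilization and LTV
    p = 1 / (1 + np.exp(-(3 * X[:, 0] + 2 * X[:, 1] - 3)))
    y = rng.binomial(1, p)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))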
Education:
Master's degree or PhD in computer science, statistics, economics or related fields
Responsibilities:
· Prioritizes analytical projects based on business value and technological readiness
Performs large-scale experimentation and builds data-driven models to answer business questions
Conducts research on cutting-edge techniques and tools in machine learning/deep learning/artificial intelligence
Evangelizes best practices to analytics and product teams
Acts as the go-to resource for machine learning across a range of business needs
Owns the entire model development process, from identifying business requirements through data sourcing, model fitting, presenting results, and production scoring
Provides leadership, coaching, and mentoring to team members and develops the team to work with all areas of the organization
Works with stakeholders to ensure that business needs are clearly understood and that services meet those needs
Anticipates and analyzes trends in technology while assessing the emerging technology's impact(s)
Coaches individuals through change and serves as a role model
Skills:
· Up-to-date knowledge of machine learning and data analytics tools and techniques
Strong knowledge in predictive modeling methodology
Experienced at leveraging both structured and unstructured data sources
Willingness and ability to learn new technologies on the job
Demonstrated ability to communicate complex results to technical and non-technical audiences
Strategic, intellectually curious thinker with focus on outcomes
Professional image with the ability to form relationships across functions
Ability to train more junior analysts regarding day-to-day activities, as necessary
Proven ability to lead cross-functional teams
Strong experience with Cloud Machine Learning technologies (e.g., AWS SageMaker)
Strong experience with machine learning environments (e.g., TensorFlow, scikit-learn, caret)
Demonstrated Expertise with at least one Data Science environment (R/RStudio, Python, SAS) and at least one database architecture (SQL, NoSQL)
Financial Services background preferred
Data Scientist with Hands-On development experience with R, SQL & Python
Data scientist job in Columbus, OH
*Per the client, no C2C candidates!*
Central Point Partners is currently interviewing candidates in the Columbus, OH area for a large client.
Only GC (Green Card) holders and USC (U.S. citizen) candidates will be considered.
This position is hybrid (4 days onsite)! Only candidates who are local to Columbus, OH will be considered.
Summary:
Our client is seeking a passionate, data-savvy Senior Data Scientist to join the Enterprise Analytics team to fuel our mission of growth through data-driven insights and opportunity discovery. This dynamic role uses a consultative approach with the business segments to dive into our customer, product, channel, and digital data to uncover opportunities for consumer experience optimization and customer value delivery. You will also enable stakeholders with actionable, intuitive performance insights that provide the business with direction for growth. The ideal candidate will have a robust mix of technical and communication skills, with a passion for optimization, data storytelling, and data visualization. You will collaborate with a centralized team of data scientists as well as teams across the organization including Product, Marketing, Data, Finance, and senior leadership. This is an exciting opportunity to be a key influencer to the company's strategic decisions and to learn and grow with our Analytics team.
Notes from the manager
The skills that will be critical will be Python or R and a firm understanding of SQL, along with a foundational understanding of what data is needed to perform studies now and in the future. For a high-level summary that should help describe what this person will be asked to do alongside their peers:
I would say this person will balance analysis with development, knowing when to jump in and knowing when to step back to lend their expertise.
Feature & Functional Design
Data scientists are embedded in the teams designing the feature. Their main job here is to define the data tracking needed to evaluate the business case: things like event logging, Adobe tagging, third-party data ingestion, and any other tracking requirements. They are also meant to consult and outline if/when the business should be bringing data into the bank, and will help connect the business with CDAO and IT warehousing and data engineering partners should new data need to be brought forward.
Feature Engineering & Development
The same data scientists stay involved as the feature moves into execution. They support all necessary functions (Amigo, QA, etc.) to ensure data tracking is in place when the feature goes live. They also begin preparing to support launch evaluation and measurement against experimentation design or business case success criteria.
Feature Rollout & Performance Evaluation
Owns tracking the rollout, running A/B tests, and conducting impact analysis for all features in which they have been involved during the Feature & Functional Design and Feature Engineering & Development stages. They provide an unbiased view of how the feature performs against the original business case, along with making objective recommendations that provide direction for the business. They will roll off once the feature has matured through business case/experiment design and evaluation.
In addition to supporting feature rollouts…
Data scientists on the team are also encouraged to pursue self-driven initiatives during periods when they are not actively supporting other projects. These initiatives may include designing experiments, conducting exploratory analyses, developing predictive models, or identifying new opportunities for impact.
For more information about this opportunity, please contact Bill Hart at ************ AND email your resume to **********************************!
Senior Data Analytics Engineer
Data scientist job in Columbus, OH
We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is an expert (10/10) in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization.
You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.
Key Responsibilities
Data Engineering & Pipeline Development
Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services.
Build end-to-end solutions to move large-scale big data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).
Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
Implement best practices for data quality, validation, auditing, and error-handling within pipelines.
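As a hedged illustration of the ingestion-and-validation pattern described in the bullets above (not part of the posting), here is a minimal PySpark sketch that reads raw CSV from S3, applies simple quality filters, and writes curated Parquet. The bucket names, paths, and columns are hypothetical.

    # Minimal, hypothetical PySpark ingestion/validation sketch (paths and columns invented)
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest_orders_example").getOrCreate()

    raw = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("s3://example-raw-bucket/orders/"))          # hypothetical source

    # Basic quality rules: drop rows missing a key or with non-positive amounts
    curated = (raw
               .filter(F.col("order_id").isNotNull())
               .filter(F.col("amount") > 0)
               .dropDuplicates(["order_id"]))

    # Track rejected rows for auditing
    rejected_count = raw.count() - curated.count()
    print(f"Rejected {rejected_count} rows during validation")

    curated.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")  # hypothetical target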
Analytics Solution Design
Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.
Cross-Functional Collaboration
Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
Participate in Daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
Provide mentorship and guidance to junior engineers and analysts as needed.
Engineering (Supporting Skills)
Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application-layer needs.
Implement scripts, utilities, and micro-services as needed to support analytics workloads.
Required Qualifications
5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
Expert-level proficiency (10/10) in:
Python
PySpark
Strong working knowledge of:
SQL and other programming languages
Demonstrated experience designing and delivering big-data ingestion and transformation solutions through AWS.
Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
Strong problem-solving skills and the ability to work independently in a fast-paced environment.
Preferred Qualifications
Experience with BI/Visualization tools such as Tableau
Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
Familiarity with machine learning workflows or MLOps frameworks.
Knowledge of metadata management, data governance, and data lineage tools.
Senior Data Engineer.
Data scientist job in Columbus, OH
Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in Columbus, OH (Remote). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-95277
Pay Range: $70 - $71 /hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Work with Marketing data partners to build data pipelines that automate data feeds from the partners to internal systems on Snowflake (see the sketch after this list).
Work with Data Analysts to understand their data needs and prepare datasets for analytics.
Work with Data Scientists to build the infrastructure to deploy models, monitor their performance, and build the necessary audit infrastructure.
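For illustration only, under assumed names: a minimal Python sketch using the Snowflake connector to copy a partner feed staged in S3 into a Snowflake table. The connection parameters, stage, and table are hypothetical.

    # Hypothetical sketch: load a partner feed from an external stage into Snowflake
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",       # hypothetical credentials/identifiers
        user="example_user",
        password="example_password",
        warehouse="ANALYTICS_WH",
        database="MARKETING",
        schema="RAW",
    )

    try:
        cur = conn.cursor()
        # Assumes an external stage (e.g., over S3) and the target table already exist
        cur.execute("""
            COPY INTO RAW.PARTNER_FEED
            FROM @PARTNER_STAGE/daily/
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
            ON_ERROR = 'ABORT_STATEMENT'
        """)
        print("Rows loaded:", cur.rowcount)
    finally:
        conn.close()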
Key Requirements and Technology Experience:
Key skills: Snowflake, Python, and AWS
Experience with building data pipelines, data pipeline infrastructure, and related tools and environments used in analytics and data science (e.g., Python, Unix)
Experience in developing analytic workloads with AWS Services, S3, Simple Queue Service (SQS), Simple Notification Service (SNS), Lambda, EC2, ECR and Secrets Manager.
Strong proficiency in Python, SQL, Linux/Unix shell scripting, GitHub Actions or Docker, Terraform or CloudFormation, and Snowflake.
Order of Importance: Terraform, Docker, GitHub Actions OR Jenkins
Experience with orchestration tools such as Prefect, DBT, or Airflow.
Experience automating data ingestion, processing, and reporting/monitoring.
Experience with other relevant tools used in data engineering (e.g., SQL, GIT, etc.)
Ability to set up environments (Dev, QA, and Prod) using GitHub repositories and GitHub rules/methodologies, and to maintain them (via SQL coding and proper versioning)
Our client is a leader in the insurance industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, colour, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Data Engineer
Data scientist job in Columbus, OH
We're seeking a skilled Data Engineer based in Columbus, OH, to support a high-impact data initiative. The ideal candidate will have hands-on experience with Python, Databricks, SQL, and version control systems, and be comfortable building and maintaining robust, scalable data solutions.
Key Responsibilities
Design, implement, and optimize data pipelines and workflows within Databricks.
Develop and maintain data models and SQL queries for efficient ETL processes.
Partner with cross-functional teams to define data requirements and deliver business-ready solutions.
Use version control systems to manage code and ensure collaborative development practices.
Validate and maintain data quality, accuracy, and integrity through testing and monitoring.
Required Skills
Proficiency in Python for data engineering and automation.
Strong, practical experience with Databricks and distributed data processing.
Advanced SQL skills for data manipulation and analysis.
Experience with Git or similar version control tools.
Strong analytical mindset and attention to detail.
Preferred Qualifications
Experience with cloud platforms (AWS, Azure, or GCP).
Familiarity with enterprise data lake architectures and best practices.
Excellent communication skills and the ability to work independently or in team environments.
Senior Data Engineer(only W2)
Data scientist job in Columbus, OH
Bachelor's Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, or Java.
Proficiency with Azure data services, such as Azure Data Lake, Azure Data Factory and Databricks.
Expertise using Cloud Security (e.g., Active Directory, network security groups, and encryption services).
Proficient in Python for developing and maintaining data solutions.
Experience with optimizing or managing technology costs.
Ability to build and maintain a data architecture supporting both real-time and batch processing.
Ability to implement industry standard programming techniques by mastering advanced fundamental concepts, practices, and procedures, and having the ability to analyze and solve problems in existing systems.
Expertise with unit testing, integration testing and performance/stress testing.
Database management skills and understanding of legacy and contemporary data modeling and system architecture.
Demonstrated leadership skills, team spirit, and the ability to work cooperatively and creatively across an organization
Experience on teams leveraging Lean or Agile frameworks.
Senior Data Engineer
Data scientist job in Columbus, OH
Responsible for understanding, preparing, processing, and analyzing data to make it valuable and useful for operations decision support.
Accountabilities in this role include:
Partnering with Business Analysis and Analytics teams.
Demonstrating problem-solving ability for effective and timely resolution of system issues, including production outages.
Developing and supporting standard processes to harvest data from various sources and perform data blending to develop advanced data sets, analytical cubes, and data exploration.
Utilizing queries, data exploration and transformation, and basic statistical methods.
Creating Python scripts.
Developing Microsoft SQL Server Integration Services Workflows.
Building Microsoft SQL Server Analysis Services Tabular Models.
Focusing on SQL database work with a blend of strong technical and communication skills.
Demonstrating ability to learn and navigate in large complex environments.
Exhibiting Excel acumen to develop complex spreadsheets, formulas, create macros, and understand VBA code within the modules.
Required Skills:
Experience with MS SQL
Proficiency in Python
Desired Skills:
Experience with SharePoint
Advanced Excel Skills (formulas, VBA, Power Pivot, Pivot Table)
Data Engineer
Data scientist job in Dublin, OH
The Data Engineer is a technical leader and hands-on developer responsible for designing, building, and optimizing data pipelines and infrastructure to support analytics and reporting. This role will serve as the lead developer on strategic data initiatives, ensuring scalable, high-performance solutions are delivered effectively and efficiently.
The ideal candidate is self-directed, thrives in a fast-paced project environment, and is comfortable making technical decisions and architectural recommendations. The ideal candidate has prior experience with modern data platforms, most notably Databricks and the “lakehouse” architecture. They will work closely with cross-functional teams, including business stakeholders, data analysts, and engineering teams, to develop data solutions that align with enterprise strategies and business goals.
Experience in the financial industry is a plus, particularly in designing secure and compliant data solutions.
Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data (a minimal sketch follows this list).
Optimize data storage, retrieval, and processing for performance, security, and cost-efficiency.
Ensure data integrity and governance by implementing robust validation, monitoring, and compliance processes.
Consume and analyze data from the data pipeline to infer, predict, and recommend actionable insights that inform operational and strategic decision-making and produce better results.
Empower departments and internal consumers with metrics and business intelligence to operate and direct our business, better serving our end customers.
Determine technical and behavioral requirements, identify candidate strategies as solutions, and select solutions based on resource constraints.
Work with the business, process owners, and IT team members to design solutions for data and advanced analytics solutions.
Perform data modeling and prepare data in databases for analysis and reporting through various analytics tools.
Play a technical specialist role in championing data as a corporate asset.
Provide technical expertise in collaborating with project and other IT teams, internal and external to the company.
Contribute to and maintain system data standards.
Research and recommend innovative and, where possible, automated approaches for system data administration tasks. Identify approaches that leverage our resources and provide economies of scale.
Engineer systems that balance and meet performance, scalability, recoverability (including backup design), maintainability, security, and high-availability requirements and objectives.
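For illustration under assumed names (not part of the posting): a minimal PySpark/Delta Lake sketch of the kind of lakehouse upsert pipeline described above, merging a day's raw records into a curated Delta table on Databricks. Table names, paths, and columns are hypothetical, and the Delta Lake library is assumed to be available.

    # Hypothetical sketch: merge daily raw records into a curated Delta table (Databricks/Delta assumed)
    from pyspark.sql import SparkSession, functions as F
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("curate_contracts_example").getOrCreate()

    updates = (spark.read.json("s3://example-raw/contracts/2024-01-01/")  # hypothetical landing path
               .dropDuplicates(["contract_id"])
               .withColumn("load_ts", F.current_timestamp()))

    target = DeltaTable.forName(spark, "curated.contracts")               # hypothetical curated table

    (target.alias("t")
           .merge(updates.alias("u"), "t.contract_id = u.contract_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())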
Skills:
Databricks and related - SQL, Python, PySpark, Delta Live Tables, Data pipelines, AWS S3 object storage, Parquet/Columnar file formats, AWS Glue.
Systems Analysis - The application of systems analysis techniques and procedures, including consulting with users, to determine hardware, software, platform, or system functional specifications.
Time Management - Managing one's own time and the time of others.
Active Listening - Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
Critical Thinking - Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
Active Learning - Understanding the implications of new information for both current and future problem-solving and decision-making.
Writing - Communicating effectively in writing as appropriate for the needs of the audience.
Speaking - Talking to others to convey information effectively.
Instructing - Teaching others how to do something.
Service Orientation - Actively looking for ways to help people.
Complex Problem Solving - Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
Troubleshooting - Determining causes of operating errors and deciding what to do about it.
Judgment and Decision Making - Considering the relative costs and benefits of potential actions to choose the most appropriate one.
Experience and Education:
High School Diploma (or GED or High School Equivalence Certificate).
Associate degree or equivalent training and certification.
5+ years of experience in data engineering including SQL, data warehousing, cloud-based data platforms.
Databricks experience.
2+ years Project Lead or Supervisory experience preferred.
Must be legally authorized to work in the United States. We are unable to sponsor or take over sponsorship at this time.
Data Engineer (Databricks)
Data scientist job in Columbus, OH
ComResource is searching for a highly skilled Data Engineer with a background in SQL and Databricks who can handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and research new uses for data acquisition.
Requirements:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Recommend different ways to constantly improve data reliability and quality.
Qualifications:
5+ years data quality engineering
Experience with Cloud-based systems, preferably Azure
Databricks and SQL Server testing
Experience with ML tools and LLMs
Test automation frameworks
Python and SQL for data quality checks
Data profiling and anomaly detection
Documentation and quality metrics
Healthcare data validation experience preferred
Test automation and quality process development
Plus:
Azure Databricks
Azure Cognitive Services integration
Databricks foundation model integration
Claude API implementation a plus
Python and NLP frameworks (spaCy, Hugging Face, NLTK)
Junior Data Engineer
Data scientist job in Columbus, OH
Contract-to-Hire
Columbus, OH (Hybrid)
Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes.
Job Responsibilities
Automate key processes and enhance data quality
Improve injection processes and enhance machine learning capabilities
Manage substitutions and allocations to streamline product ordering
Work on logistics-related data engineering tasks
Build and maintain ML models for predictive analytics
Interface with various customer systems
Collaborate on integrating AI models into customer service
Qualifications
Bachelor's degree in related field
0-2 years of relevant experience
Proficiency in SQL and Python
Understanding of GCP/BigQuery (or any cloud experience, basic certifications a plus).
Knowledge of data science concepts.
Business acumen and understanding (corporate experience or internship preferred).
Familiarity with Tableau
Strong analytical skills
Aptitude for collaboration and knowledge sharing
Ability to present confidently in front of leaders
Why Should You Apply?
You will be part of custom technical training and professional development through our Elevate Program!
Start your career with a Fortune 15 company!
Access to cutting-edge technologies
Opportunity for career growth
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Senior Data Engineer
Data scientist job in Cincinnati, OH
Key Responsibilities
Experience in administration and configuration of API gateways (e.g., Apigee/Kong); apply cloud computing skills to deploy upgrades and fixes
Design, develop, and implement integrations based on user feedback.
Troubleshoot production issues and coordinate with the development team to streamline code deployment.
Implement automation tools and frameworks (CI/CD pipelines).
Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of products.
Collaborate with team members to improve the company's engineering tools, systems and procedures, and data security.
Deliver quality customer service and resolve end-user issues in a timely manner
Draft architectural diagrams, interface specifications and other design documents
Participate in the development and communication of data strategy and roadmaps across the technology organization to support project portfolio and business strategy
Innovate, develop, and drive the development and communication of data strategy and roadmaps across the technology organization to support project portfolio
Drive the development and communication of enterprise standards for data domains and data solutions, focusing on simplified integration and streamlined operational and analytical uses
Drive digital innovation by leveraging innovative new technologies and approaches to renovate, extend, and transform the existing core data assets, including SQL-based, NoSQL-based, and Cloud-based data platforms
Define high-level migration plans to address the gaps between the current and future state, typically in sync with the budgeting or other capital planning processes
Lead the analysis of the technology environment to detect critical deficiencies and recommend solutions for improvement
Mentor team members in data principles, patterns, processes and practices
Promote the reuse of data assets, including the management of the data catalog for reference
Draft and review architectural diagrams, interface specifications and other design documents
Note to Vendors
Top 3 skills: Azure Databricks, Python, and Spark
Soft skills needed: problem solving, attention to detail, and the ability to work independently and as part of an agile team
Team details (size, dynamics, locations): 10 team members, working independently but doing peer programming throughout the day.
Senior Data Engineer
Data scientist job in Cincinnati, OH
Data Engineer III
About the Role
We're looking for a Data Engineer III to play a key role in a large-scale data migration initiative within Client's commercial lending, underwriting, and reporting areas. This is a hands-on engineering role that blends technical depth with business analysis, focused on transforming legacy data systems into modern, scalable pipelines.
What You'll Do
Analyze legacy SQL, DataStage, and SAS code to extract business logic and identify key data dependencies.
Document current data usage and evaluate the downstream impact of migrations.
Design, build, and maintain data pipelines and management systems to support modernization goals.
Collaborate with business and technology teams to translate requirements into technical solutions.
Improve data quality, reliability, and performance across multiple environments.
Develop backend solutions using Python, Java, or J2EE, and integrate with tools like DataStage and dbt.
What You Bring
5+ years of experience with relational and non-relational databases (SQL, Snowflake, DB2, MongoDB).
Strong background in legacy system analysis (SQL, DataStage, SAS).
Experience with Python or Java for backend development.
Proven ability to build and maintain ETL pipelines and automate data processes.
Exposure to AWS, Azure, or GCP.
Excellent communication and stakeholder engagement skills.
Financial domain experience, especially commercial lending or regulatory reporting, is a big plus.
Familiarity with Agile methodologies preferred.
Junior Data Scientist
Data scientist job in Cincinnati, OH
The Medpace Analytics and Business Intelligence team is growing rapidly and is focused on building a data driven culture across the enterprise. The BI team uses data and insights to drive increased strategic and operational efficiencies across the organization. As a Junior Data Scientist, you will hold a highly visible analytical role that requires interaction and partnership with leadership across the Medpace organization.
What's in this for you?
* Work in a collaborative, fast paced, entrepreneurial, and innovative workplace;
* Gain experience and exposure to advanced BI concepts from visualization to data warehousing;
* Grow business knowledge by working with leadership across all aspects of Medpace's business.
Responsibilities
* Data Collection & Cleaning: Gather, clean, and preprocess large, raw datasets;
* Analysis & Modeling: Perform statistical analysis, build & validate machine learning models, and test hypotheses (e.g., A/B testing; see the sketch after this list);
* Algorithm Development: Create algorithms to manage and interpret information, often automating processes;
* Insight Generation: Discover trends, patterns, and insights to inform business strategy;
* Visualization & Communication: Present complex findings visually (dashboards, charts) and verbally to technical and non-technical teams;
* Collaboration: Work with engineering, product, and business teams to implement solutions;
* Model Monitoring: Deploy and maintain models, iterating for continuous improvement.
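As an illustration of the hypothesis-testing work mentioned above (not part of the posting): a minimal sketch of a two-group A/B comparison on synthetic conversion counts, using a chi-square test from SciPy. The sample sizes and rates are invented.

    # Hypothetical sketch: A/B test on synthetic conversion counts (illustrative only)
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(42)
    n_a, n_b = 10_000, 10_000                      # invented sample sizes
    conv_a = rng.binomial(n_a, 0.030)              # control converts at ~3.0%
    conv_b = rng.binomial(n_b, 0.033)              # variant converts at ~3.3%

    table = [[conv_a, n_a - conv_a],
             [conv_b, n_b - conv_b]]
    chi2, p_value, dof, expected = chi2_contingency(table)

    print(f"Control rate:  {conv_a / n_a:.4f}")
    print(f"Variant rate:  {conv_b / n_b:.4f}")
    print(f"p-value:       {p_value:.4f}")        # a small p-value suggests a real difference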
Qualifications
* Bachelor's Degree in Business, Life Science, Computer Science, or Related Degree;
* 0-3 years of experience in business intelligence or analytics; Python, R, and SQL heavily preferred;
* Strong analytical and communication skills;
* Excellent organization skills and the ability to multitask while efficiently completing high quality work.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Data Scientist
Data scientist job in Delaware, OH
The Data Scientist will be responsible for creating product reliability models and advanced analytics that drive strategic decisions about product improvements. This role will collaborate closely with cross-functional teams, including Engineering, Product Management, Quality, Services, and IT, to develop and deploy data-driven solutions that address complex customer challenges. You should understand the impacts of environmental and other field conditions as they relate to product reliability. In this role, you should be able to apply mathematical and statistical methods to predict future field performance by building product reliability models using software tools.
PRINCIPAL DUTIES & RESPONSIBILITIES:
* Analyze product reliability requirements
* Create predictive models for field performance and product reliability
* Assist in design of experiments and analysis to understand impacts of different design decisions and test results
* Correlate predictive models with test results and field data
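For illustration only (not from the posting): a minimal sketch of fitting a Weibull reliability model to synthetic failure-time data with SciPy and estimating survival at a target service life. The failure times and parameters are invented.

    # Hypothetical sketch: Weibull reliability model on synthetic failure-time data
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(7)
    # Invented field failure times (hours): Weibull with shape ~1.8, scale ~20,000 h
    failure_hours = 20_000 * rng.weibull(1.8, size=500)

    # Fit a two-parameter Weibull (location fixed at 0)
    shape, loc, scale = weibull_min.fit(failure_hours, floc=0)

    # Reliability (survival probability) at a 10,000-hour service target
    t = 10_000
    reliability_at_t = weibull_min.sf(t, shape, loc=loc, scale=scale)

    print(f"Estimated shape: {shape:.2f}, scale: {scale:,.0f} h")
    print(f"Estimated reliability at {t:,} h: {reliability_at_t:.3f}")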
REQUIREMENTS:
* Bachelor's Degree in Math, Statistics, Data Science, Computer Science, Reliability Engineering, or equivalent experience
* 1-5 years of experience
* Basic knowledge in power/electrical engineering; AC and DC power
* Basic knowledge in large- and small-scale cooling systems
* LabVIEW, Python, or other coding and modeling language
* Minitab or other statistical software tools
* Power BI or other data visualization tools
* Travel 15%
Principal Data Scientist: Product to Market (P2M) Optimization
Data scientist job in Groveport, OH
About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials.
This simple idea, that we all deserve to belong on our own terms, is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team.
About the Role
Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization.
The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels.
What You'll Do
* Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges.
* Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals.
* Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment.
* Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility.
* Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly.
* Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights.
* Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success.
* Quantify business benefits from deployed solutions using rigorous statistical and causal inference methods, ensuring that model outcomes translate into measurable value
* Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP.
* Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality.
Who You Are
* Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline.
* 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains.
* Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes.
* Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation.
* Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib (a small optimization sketch follows this list).
* Hands-on experience with enterprise platforms such as Databricks and cloud environments
* Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions.
* Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences.
* Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations.
* Experience managing code quality, CI/CD pipelines, and GitHub-based workflows.
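For illustration only, and not drawn from the posting: a small mixed-integer programming sketch with OR-Tools in the flavor of the allocation problems named above (pricing/inventory/assortment). The products, margins, and capacity figures are invented.

    # Hypothetical sketch: tiny mixed-integer allocation model with OR-Tools (figures invented)
    from ortools.linear_solver import pywraplp

    solver = pywraplp.Solver.CreateSolver("SCIP")

    # Decision variables: units of two products to allocate to a store
    x = solver.IntVar(0, 500, "units_product_a")
    y = solver.IntVar(0, 500, "units_product_b")

    # Constraints: shelf capacity and a minimum presence for product B
    solver.Add(2 * x + 3 * y <= 900)   # shelf space used (invented coefficients)
    solver.Add(y >= 50)                # merchandising floor for product B

    # Objective: maximize expected margin
    solver.Maximize(4.0 * x + 6.5 * y)

    status = solver.Solve()
    if status == pywraplp.Solver.OPTIMAL:
        print("Product A units:", x.solution_value())
        print("Product B units:", y.solution_value())
        print("Expected margin:", solver.Objective().Value())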
Preferred Qualifications
* Experience shaping and executing multi-year analytics strategies in retail or supply chain domains.
* Proven ability to balance long-term innovation with short-term deliverables.
* Background in agile product development and stakeholder alignment for enterprise-scale initiatives.
Benefits at Gap Inc.
* Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees.
* One of the most competitive Paid Time Off plans in the industry.*
* Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.*
* Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.*
* Employee stock purchase plan.*
* Medical, dental, vision and life insurance.*
* See more of the benefits we offer.
* For eligible employees
Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
Data Scientist Lead, Vice President
Data scientist job in Columbus, OH
Job ID: 210686904 | Schedule: Full time | Shift: Day
Join a powerhouse team at the forefront of Home Lending Data & Analytics, where we partner with Product, Marketing, and Sales to solve the most urgent and complex business challenges. Our team thrives in a fast-paced, matrixed environment, driving transformative impact through bold analytics and innovative data science solutions. We are relentless in our pursuit of actionable insights, seamlessly engaging with stakeholders and redefining what's possible through strategic collaboration and visionary problem solving. If you're ready to shape the future of home lending with breakthrough ideas and data-driven leadership, this is the team for you.
We are seeking a senior Data Scientist Lead to join our Home Lending Data & Analytics team, supporting Originations Product Team. This strategic role requires a visionary leader with a consulting background who excels at translating complex data into actionable business insights. The ideal candidate will be a recognized thought leader, demonstrating exceptional critical thinking and problem-solving skills, and a proven ability to craft and deliver compelling data-driven stories that influence decision-making at all levels. Success in this role requires not only technical expertise, but also the ability to inspire others, drive innovation, and communicate insights in a way that resonates with both technical and non-technical audiences.
Key Responsibilities:
* Identify, quantify, and solve obstacles to business goals using advanced business analysis and data science skillsets.
* Recognize and communicate meaningful trends and patterns in data, delivering clear, compelling narratives to drive business decisions.
* Serve as a data expert and consultant to the predictive modeling team, identifying and validating data sources.
* Advise business and technology partners on data-driven opportunities to increase efficiency and improve customer experience.
* Proactively interface with, and gather information from, other areas of the business (Operations, Technology, Finance, Marketing).
* Extract and analyze data from various sources and technologies using complex SQL queries.
* Summarize discoveries with solid data support and quick turnaround, tailoring messages for technical and non-technical audiences.
* Influence upward and downward: mentor junior team members and interface with business leaders to drive strategic initiatives.
* Foster a culture of innovation, attention to detail, and results within the team.
Qualifications:
* 6+ years of experience in business strategy, analytics, or data science.
* 2+ years of experience in business management consulting.
* Strong experience with SQL (query/procedure writing).
* Proficiency in at least one versatile, cross-technology tool/language: Python, SAS, R, or Alteryx.
* Demonstrated ability to craft compelling stories from data and present insights that influence decision-making.
* Clear and succinct written and verbal communication skills, able to frame and present messages for different audiences.
* Critical and analytical thinking, with the ability to maintain detail focus and retain big picture perspective.
* Strong Microsoft Excel skills.
* Ability to work independently, manage shifting priorities and projects, and thrive in a fast-paced, competitive environment.
* Excellent interpersonal skills to work effectively with a variety of individuals, departments, and organizations.
* Experience mentoring or leading teams is highly desirable.
Preferred Background:
* Experience in Mortgage Banking or Financial Services industry preferred.
* Previous experience in consulting, with exposure to a variety of industries and business challenges.
* Track record of successful stakeholder engagement and change management.
* Recognized as a thought leader, with experience driving strategic initiatives and innovation.
Senior Data Scientist, Navista
Data scientist job in Columbus, OH
At Navista, our mission is to empower community oncology practices to deliver patient-centered cancer care. Navista, a Cardinal Health company, is an oncology practice alliance co-created with oncologists and practice leaders that offers advanced support services and technology to help practices remain independent and thrive. True to our name, our experienced team is passionate about helping oncology practices navigate the future.
We are seeking an innovative and highly skilled **Senior Data Scientist** with specialized expertise in Generative AI (GenAI), Large Language Models (LLMs), and Agentic Systems to join the Navista - Data & Advanced Analytics team supporting the growth of our Navista Application Suite and the Integrated Oncology Network (IoN). In this critical role, you will be at the forefront of designing, developing, and deploying advanced AI solutions that leverage the power of generative models and intelligent agents to transform our products and operations. You will be responsible for pushing the boundaries of what's possible, from foundational research to production-ready applications, working with diverse datasets and complex problem spaces, particularly within the oncology domain.
The ideal candidate will possess a deep theoretical understanding and practical experience in building, fine-tuning, and deploying LLMs, as well as architecting and implementing agentic frameworks. You will play a key role in shaping our AI strategy, mentoring junior team members, and collaborating with cross-functional engineering and product teams to bring groundbreaking AI capabilities to life, including developing predictive models from complex, often unstructured, oncology data.
**_Responsibilities_**
+ **Research & Development:** Lead the research, design, and development of novel Generative AI models and algorithms, including but not limited to LLMs, diffusion models, GANs, and VAEs, to address complex business challenges.
+ **LLM Expertise:** Architect, fine-tune, and deploy Large Language Models for various applications such as natural language understanding, generation, summarization, question-answering, and code generation, with a focus on extracting insights from unstructured clinical and research data (a minimal retrieval-augmented prompting sketch follows this list).
+ **Agentic Systems Design:** Design and implement intelligent agentic systems capable of autonomous decision-making, planning, reasoning, and interaction within complex environments, leveraging LLMs as core components.
+ **Predictive Modeling:** Develop and deploy advanced predictive models and capabilities using both structured and unstructured data, particularly within the oncology space, to forecast outcomes, identify trends, and support clinical or commercial decision-making.
+ **Prompt Engineering & Optimization:** Develop advanced prompt engineering strategies and techniques to maximize the performance and reliability of LLM-based applications.
+ **Data Strategy for GenAI:** Work with data engineers to define and implement data collection, preprocessing, and augmentation strategies specifically tailored for training and fine-tuning generative models and LLMs, including techniques for handling and enriching unstructured oncology data (e.g., clinical notes, pathology reports).
+ **Model Evaluation & Deployment:** Develop robust evaluation metrics and methodologies for generative models, agentic systems, and predictive models. Oversee the deployment, monitoring, and continuous improvement of these models in production environments.
+ **Collaboration & Leadership:** Collaborate closely with machine learning engineers, software engineers, and product managers to integrate AI solutions into our products. Provide technical leadership and mentorship to junior data scientists.
+ **Innovation & Thought Leadership:** Stay abreast of the latest advancements in GenAI, LLMs, and agentic AI research. Proactively identify new opportunities and technologies that can enhance our capabilities and competitive advantage.
+ **Ethical AI:** Ensure the responsible and ethical development and deployment of AI systems, addressing potential biases, fairness, and transparency concerns.
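As an illustrative sketch only (not the team's actual stack): a toy retrieval-augmented prompting skeleton that uses TF-IDF retrieval over a few invented note snippets and assembles a grounded prompt for an LLM. The documents, query, and any model call are hypothetical, and the final generation step is left as a stub because the provider/API is not specified here.

    # Hypothetical RAG-style sketch: TF-IDF retrieval + grounded prompt assembly (toy data)
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented, de-identified note snippets standing in for a document store
    documents = [
        "Patient reports grade 2 fatigue after cycle 3; dose held per protocol.",
        "Pathology: invasive ductal carcinoma, ER positive, HER2 negative.",
        "Follow-up imaging shows stable disease; continue current regimen.",
    ]

    query = "What did the most recent imaging show?"

    vectorizer = TfidfVectorizer().fit(documents)
    doc_vectors = vectorizer.transform(documents)
    query_vector = vectorizer.transform([query])

    # Rank documents by cosine similarity and keep the top match as context
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    top_context = documents[scores.argmax()]

    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {top_context}\n"
        f"Question: {query}\n"
        "Answer:"
    )
    print(prompt)
    # An actual LLM call would be made here; the client/API depends on the
    # deployment and is intentionally omitted from this sketch.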
**_Qualifications_**
+ 8-12 years of experience as a Data Scientist or Machine Learning Engineer, with a significant focus on deep learning and natural language processing, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ Proven hands-on experience with Generative AI models (e.g., Transformers, GANs, VAEs, Diffusion Models) and their applications.
+ Extensive experience working with Large Language Models (LLMs), including fine-tuning, prompt engineering, RAG (Retrieval Augmented Generation), and understanding various architectures (e.g., GPT, Llama, BERT, T5).
+ Demonstrated experience in designing, building, and deploying agentic systems or multi-agent systems, including concepts like planning, reasoning, and tool use.
+ Strong experience working with unstructured data, particularly in the oncology domain (e.g., clinical notes, pathology reports, genomic data, imaging reports), and extracting meaningful features for analysis.
+ Demonstrated ability to create and deploy predictive capabilities and models from complex datasets, including those with unstructured components.
+ Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
+ Experience with relevant libraries and tools (e.g., Hugging Face Transformers, LangChain, LlamaIndex).
+ Strong understanding of machine learning fundamentals, statistical modeling, and experimental design.
+ Experience with at least one cloud platform (e.g., GCP, Azure) for training and deploying large-scale AI models.
+ Excellent problem-solving skills, with the ability to tackle complex, ambiguous problems and drive solutions.
+ Strong communication and presentation skills, capable of explaining complex concepts to technical and non-technical audiences.
+ Experience in the healthcare or life sciences industry, specifically with oncology data and research, highly preferred
+ Experience with MLOps practices for deploying and managing large-scale AI models, highly preferred
+ Familiarity with distributed computing frameworks (e.g., Spark, Dask), highly preferred
+ Experience contributing to open-source AI projects, highly preferred
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $123,400 - $176,300
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before payday with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 02/15/2026. If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
#LI-Remote
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Senior Data Engineer
Data scientist job in Columbus, OH
Our direct client has a long-term contract need for a Sr. Data Engineer.
Candidate Requirements:
Candidates must be local to Columbus, Ohio
Candidates must be willing and able to work the following:
Hybrid schedule (3 days in office & 2 days WFH)
The team is responsible for the implementation of the new Contract Management System (FIS Asset Finance) as well as the integration into the overall environment and the migration of data from the legacy contract management system to the new system.
Candidate will be focused on the delivery of data migration topics to ensure that high quality data is migrated from the legacy systems to the new systems. This may involve data mapping, SQL development and other technical activities to support Data Migration objectives.
Must Have Experience:
Strong C# and SQL Server design and development skills, including analysis and design. IMPORTANT: MUST HAVE!
Strong technical analysis skills
Strong collaboration skills to work effectively with cross-functional teams
Exceptional ability to structure, illustrate, and communicate complex concepts clearly and effectively to diverse audiences, ensuring understanding and actionable insights.
Demonstrated adaptability and problem-solving skills to navigate challenges and uncertainties in a fast-paced environment.
Strong prioritization and time management skills to balance multiple projects and deadlines in a dynamic environment.
In-depth knowledge of Agile methodologies and practices, with the ability to adapt and implement Agile principles in testing and delivery processes.
Nice to have:
ETL design and development; data mapping skills and experience; experience executing/driving technical design and implementation topics
Junior Data Engineer
Data scientist job in Cincinnati, OH
Agility Partners is seeking a qualified Junior Data Engineer to fill an open position with one of our clients. This is an exciting opportunity for an early‑career professional to build real‑world data skills in a supportive, fast‑paced environment. You'll work closely with senior engineers and analysts, primarily using SQL to query, clean, and prepare data that powers reporting and analytics.
Responsibilities
Assist with writing and optimizing SQL queries to support reporting and ad‑hoc data requests
Help create basic database objects (tables, views) and maintain data dictionaries and documentation
Support routine data quality checks (duplicates, nulls, referential integrity) and simple SQL‑based transformations (see the sketch after this list)
Participate in loading data into staging tables and preparing datasets for downstream use
Troubleshoot query issues (e.g., incorrect results, slow performance) with guidance from senior engineers
Collaborate with analytics teams to validate results and ensure datasets meet business needs
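For illustration only (using invented table and column names): a small, self-contained Python/SQLite sketch of the duplicate and null checks described above; the queries translate directly to most SQL dialects.

    # Hypothetical sketch: basic SQL data-quality checks on a toy table (SQLite, invented data)
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
        INSERT INTO orders VALUES
            (1, 100, 25.00),
            (2, 101, 40.00),
            (2, 101, 40.00),   -- duplicate row
            (3, NULL, 15.50);  -- missing customer_id
    """)

    # Duplicate check: order_ids that appear more than once
    dupes = conn.execute("""
        SELECT order_id, COUNT(*) AS n
        FROM orders
        GROUP BY order_id
        HAVING COUNT(*) > 1
    """).fetchall()

    # Null check: rows missing a required key
    null_count = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL"
    ).fetchone()[0]

    print("Duplicate order_ids:", dupes)
    print("Rows with NULL customer_id:", null_count)
    conn.close()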
The Ideal Candidate
Foundational SQL proficiency: comfortable writing basic queries and joins; curiosity to learn window functions and indexing
Understanding of relational database concepts (keys, normalization vs. denormalization)
0-2 years of professional experience (or internship/capstone/bootcamp projects); recent grads welcome
Detail‑oriented, coachable, and comfortable asking questions and working through feedback
Able to document work clearly and communicate findings to non‑technical stakeholders
Bonus (not required): exposure to Excel/Google Sheets or a BI tool (Power BI/Tableau), and interest in learning simple ETL concepts
Reasons to Love It
Learn directly from experienced data engineers while working on meaningful, production datasets
Clear growth path from SQL fundamentals to broader data engineering skills over time
Supportive team culture that values curiosity, reliability, and steady skill development
Data Engineer (W2 Contract only)
Data scientist job in Cincinnati, OH
Role: Data Engineer III
Contract
Handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and research new uses for data acquisition. The role requires knowing and understanding the ins and outs of the industry, such as data mining practices, algorithms, and how data can be used.
Must Have Skills:
5+ years of DataStage experience
5+ years of ETL experience
5+ years of SQL experience
5+ years of Unix/Linux scripting experience
At least 5 years of experience working with relational and non-relational databases (e.g., SQL, Snowflake, DB2, MongoDB).
Business Intelligence - Data Engineering
Cloud Snowflake Database
Primary Responsibilities:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
Install/update disaster recovery procedures.
Recommend different ways to constantly improve data reliability and quality.
Qualifications:
Technical Degree or related work experience
Experience with non-relational & relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, etc.)
Experience programming and/or architecting a back-end language (Java, J2EE, etc.)