The Program Integrity Data Scientist II is responsible for developing, implementing, managing, and deploying in-depth analyses that meet the information needs associated with payment accuracy, anomaly detection, and Fraud, Waste, and Abuse (FWA).
Essential Functions:
Build concepts as algorithms that identify claims for pre- or post-pay intervention based on probability of fraud, waste, and abuse. Algorithms are implemented into production workflows for action: medical record request and audit, downcode adjustment, denial and remittance communication, etc.
Analyze and quantify claim payment issues and provide recommendations to mitigate identified program integrity risks.
Identify trends and patterns using standard corporate processes, tools, reports, and databases, as well as leveraging other processes and data sources.
Conduct outcome analyses to determine impact and effectiveness of corporate program and payment integrity initiatives.
Collaborate on the examination and explanation of complex data relationships to answer questions identified either within the department or by other departments as it relates to payment accuracy, anomaly detection, and FWA.
Monitor and explain anomalies related to trends associated with the potential for Fraud, Waste, and Abuse across the corporate enterprise.
Collaborate with the Legal Department, generating data and analyses to support Legal proceedings.
Develop hypothesis tests and extrapolations on statistically valid samples to establish outlier behavior patterns and potential recoupment (a minimal sketch follows this list).
Create, maintain, and communicate an analytical plan for each project.
Mine and analyze large structured and unstructured datasets.
Employ a wide range of data sources to develop algorithms for predicting risk and understanding drivers, detecting outliers, etc.
Develop visualizations that demonstrate the efficacy of developed algorithms.
Provide statistical validation and analysis of outcomes associated with clinical programs and interventions.
Collaborate with other teams to integrate with existing solutions.
Communicate results and ideas to key stakeholders.
Prepare code for operationalization of end-to-end model pipeline and deliverable for business consumption.
Perform any other job related duties as requested.
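To make the extrapolation item above concrete, here is a minimal sketch, not CareSource's actual methodology, of projecting overpayment recoupment from an audited claim sample. The sample data, universe size, and 90% confidence level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative audited sample: overpayment found on each sampled claim ($).
# All figures are invented; a real audit draws from a statistically valid
# design (e.g., a simple random sample of the claim universe).
rng = np.random.default_rng(42)
sample_overpayments = rng.gamma(shape=2.0, scale=150.0, size=100)  # 100 audited claims
universe_size = 25_000  # total claims in the sampling frame

mean = sample_overpayments.mean()
sem = stats.sem(sample_overpayments)

# Point estimate of the total overpayment across the universe.
point_estimate = universe_size * mean

# Two-sided 90% CI on the mean; audit programs often demand the lower bound.
lo, hi = stats.t.interval(0.90, df=len(sample_overpayments) - 1, loc=mean, scale=sem)
print(f"point estimate: ${point_estimate:,.0f}")
print(f"recoupment at 90% lower bound: ${universe_size * lo:,.0f}")
```

In practice the sample design, confidence level, and choice of bound follow the program's audit standards; the sketch only shows the shape of the calculation.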
Education and Experience:
Bachelor's degree in Data Science, Mathematics, Statistics, Engineering, Computer Science, or a related field required
Equivalent years of relevant work experience may be accepted in lieu of required education
Three (3) years of data analysis and/or analytic programming experience required
Experience with cloud services (such as Azure, AWS, or GCP) and the modern data stack (such as Databricks or Snowflake) preferred
Healthcare experience required
Competencies, Knowledge and Skills:
Proficient in SQL and at least one of the following programming languages: Python, R, or RAT-STATS
Familiarity with SAS is preferred
Beginner-level knowledge of developing reports or dashboards in Power BI or other business intelligence applications preferred
Ability to perform advanced statistical analyses and techniques including t-tests, ANOVAs, z-tests, statistical extrapolations, non-parametric significance testing, and sampling methodologies (a toy sketch follows this list)
Working knowledge of predictive modeling and machine learning algorithms such as generalized linear models, non-linear supervised learning models, clustering, decision trees, dimensionality reduction and natural language processing
Proficient in feature engineering techniques and exploratory data analysis
Familiarity with optimization techniques and artificial intelligence methods
Ability to analyze large quantities of information and identify patterns, irregularities, and deficiencies
Knowledge of healthcare coding and billing processes, including CPT4, HCPCS, ICD-9, DRG and Revenue Codes preferred
Proficient with MS Office (Excel, PowerPoint, Word, Access)
Demonstrated critical thinking, verbal communication, presentation and written communication skills
Ability to work independently and within a cross-functional team environment
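As a toy illustration of the statistical testing listed above, this sketch compares one provider's billed amounts against a peer group with a parametric and a non-parametric test. The data are simulated and the scenario is invented; it is not this role's actual workflow.

```python
import numpy as np
from scipy import stats

# Hypothetical question: does a flagged provider bill more than its peers?
rng = np.random.default_rng(0)
peer_billed = rng.normal(loc=120.0, scale=30.0, size=500)     # peer claims ($)
provider_billed = rng.normal(loc=150.0, scale=35.0, size=60)  # flagged provider ($)

# Parametric check: Welch's t-test (does not assume equal variances).
t_stat, t_p = stats.ttest_ind(provider_billed, peer_billed, equal_var=False)

# Non-parametric check: Mann-Whitney U, robust to skewed billing distributions.
u_stat, u_p = stats.mannwhitneyu(provider_billed, peer_billed, alternative="greater")

print(f"Welch t-test:   t={t_stat:.2f}, p={t_p:.4f}")
print(f"Mann-Whitney U: U={u_stat:.0f}, p={u_p:.4f}")
```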
Licensure and Certification:
Working Conditions:
General office environment; may be required to sit or stand for extended periods of time
Up to 15% (occasional) travel to attend meetings, trainings, and conferences may be required
Compensation Range:
$83,000.00 - $132,800.00
CareSource takes into consideration a combination of a candidate's education, training, and experience as well as the position's scope and complexity, the discretion and latitude required for the role, and other external and internal data when establishing a salary level. In addition to base compensation, you may qualify for a bonus tied to company and individual performance. We are highly invested in every employee's total well-being and offer a substantial and comprehensive total rewards package.
Compensation Type:
Salary
Competencies:
- Fostering a Collaborative Workplace Culture
- Cultivate Partnerships
- Develop Self and Others
- Drive Execution
- Influence Others
- Pursue Personal Excellence
- Understand the Business
This job description is not all-inclusive. CareSource reserves the right to amend this job description at any time. CareSource is an Equal Opportunity Employer. We are dedicated to fostering an environment of belonging that welcomes and supports individuals of all backgrounds.
#LI-GB1
Data Scientist
Procter & Gamble
Data scientist job in Cincinnati, OH
Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), including Generative and Agentic AI, we want you on our team at P&G.
As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will be expected to constructively research, design, and customize algorithms tailored to various problems and data types. Utilizing your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will directly contribute to the development of scalable Data Science algorithms. Your work will also integrate advanced techniques from Generative and Agentic AI to create more dynamic and responsive models, enhancing our analytical capabilities. You will collaborate with Data and AI Engineering teams to productionize these solutions, applying exploratory data analysis, feature engineering, and model building within cloud environments on massive datasets to deliver accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.
Key Responsibilities:
+ Algorithm Design & Development: Directly contribute to the design and development of scalable Data Science algorithms.
+ Collaboration: Work closely with Data and Software Engineering teams to effectively productionize algorithms.
+ Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building (a minimal sketch follows this list).
+ Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
+ Expertise Development: Become a known expert in one or multiple Data Science techniques and methodologies.
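As a hedged illustration of the Data Analysis responsibility above, the sketch below chains feature engineering and model building into a single scikit-learn pipeline so the same transforms apply at training time and at scoring time. The data frame, columns, and model choice are invented for the example and are not P&G's stack.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy frame standing in for a large business dataset (all values invented).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "region": rng.choice(["NA", "EU", "APAC"], size=300),
    "spend": rng.gamma(2.0, 50.0, size=300),
    "units": rng.poisson(20, size=300),
})
y = (df["spend"] * 0.01 + rng.normal(size=300) > 1.0).astype(int)

# Feature engineering + model in one pipeline.
prep = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
    ("num", StandardScaler(), ["spend", "units"]),
])
model = Pipeline([("prep", prep), ("clf", GradientBoostingClassifier())])
print("CV accuracy:", cross_val_score(model, df, y, cv=5).mean())
```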
Job Qualifications
Required Qualifications:
+ Education: Currently pursuing or have graduated with a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.), or possess equivalent work experience.
+ Technical Skills: Proficient in programming languages such as Python and familiar with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas. Demonstrated ability to develop and test code within cloud environments.
+ Communication: Strong written and verbal communication skills, with the ability to influence others to take action.
Preferred Qualifications:
+ Analytic Methodologies: Experience applying analytic methodologies such as Machine Learning, Optimization, Simulation, and Generative and Agentic AI to real-world problems.
+ Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
+ Data Handling & Cloud: Experience with large datasets and developing in cloud computing platforms such as GCP or Azure.
+ DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.
Immigration Sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G, along with other work authorization FAQs, please click HERE (*******************************************************).
Procter & Gamble participates in E-Verify as required by law.
Qualified individuals will not be disadvantaged based on being unemployed.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Job Schedule
Full time
Job Number
R000135859
Job Segmentation
Entry Level
Starting Pay / Salary Range
$85,000.00 - $115,000.00 / year
AI Data Scientist
Medpace
Data scientist job in Cincinnati, OH
We are currently seeking an experienced Data Scientist to join our AI team who will support and lead data flow, advanced analytical needs, and AI tools across Medpace. The AI team utilizes analytical principles and techniques to identify, collate, and analyze many data sources, and works with teams across Medpace to support efficiency and business gains for pharmaceutical development. The AI Data Scientist will support various projects across the company to bring data sources together in a consistent manner, work with the business to identify the value of AI, identify appropriate solutions, and work with IT to ensure they are developed and built into the relevant systems. The team is seeking an experienced candidate to contribute new skills to our team, support team growth, and foster AI development.
The AI Team is a highly collaborative team with members in both the Cincinnati and London offices. This team supports many teams across the business including clinical operations, medical, labs, business development and business operations. The AI Team also works side-by-side with data engineering, business analytics and software engineering to architect innovative data storage and access solutions for optimal data utilization strategies. If you are an individual with experience in informatics, data science, or computer science, please review the following career opportunity.
Responsibilities
* Explore and work with different data sources, collating them into usable knowledge;
* Work with different business teams across the company with a variety of different business needs to identify potential areas that AI can support;
* Manage AI opportunities from discovery research through PoC to production with the business teams, and support task definition for IT developers;
* Try out different AI tools to substantiate the potential of their use with the business team;
* Translate results into compelling visualizations which illustrate the overall benefits of the use of AI and identify with the business team the overall value of its use;
* Develop and map database architecture of methodological and clinical data systems;
* Convert business tasks into meaningful developer Jira tasks for sprints;
* Support departmental process improvement initiatives that can include AI; and
* Participate in training and development of more junior team members.
Qualifications
* Master's degree or higher in informatics, computer science/engineering, health information, statistics, or related field required;
* 2 or more years of experience as a Data Scientist or in a closely related role;
* Experience applying machine learning to pharmaceutical or clinical data (or translatable artificial intelligence [AI] techniques from other industries);
* Advanced computer programming skills (preferred language: Python);
* Analytical thinker with great attention to detail;
* Ability to prioritize multiple projects and tasks within tight timelines; and
* Excellent written and verbal communication skills.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Data Analytics Intern - Reinsurance
Ameritas
Data scientist job in Cincinnati, OH
This internship is an opportunity to gain experience in analytics and automation on Ameritas' reinsurance team. The intern will gain practical experience with analysis, programming, testing, and implementation of metrics and data visualization tools.
Position Location:
This is a hybrid role (Cincinnati, OH) working partially in-office and partially from home.
Position Start Date:
This internship will begin in May 2026.
What you do:
Gain practical experience with analysis, programming, testing, and implementation of metrics and data visualization tools.
This internship will provide exposure to the following functions and duties:
Business Systems Administration:
Assist with identifying solutions to gather and present data
Support partners in their ability to leverage metric and metric solutions to measure and improve performance as well as drive decision making
Support simple to complex metric gathering
Operations Analysis:
Support the development and continuous improvement of operational business metrics that help define and measure the performance of all processes associated with Reinsurance Operations.
Assess and improve operational processes in the Reinsurance Operations space.
Investigate defects or errors in the technology.
What you bring:
Must be enrolled in a college program at least half-time (as defined by your institution) for the entire duration of the internship, studying computer science, data analytics, or a related field.
Ability to commit to a 12-month internship working full-time during the summer and part-time during the school year.
Full-time hours: 30-40 hours per week
Part-time hours: 15-20 hours per week
Experience in Excel, Python, SQL, and/or Power Query/Power Pivot is desired.
Strong analytical, problem solving, and troubleshooting skills
Ability to work independently and within a team to build relationships and interact effectively
Positive, self-motivated individual with a high level of enthusiasm and willingness to take on new challenges, responsibilities, and assignments.
Self-starter with a real drive for impact and a naturally curious demeanor
Excellent verbal and written communications skills
What we offer:
At Ameritas, our mission is Fulfilling Life. We do that in many ways, but especially by helping people invest in themselves by offering trusted financial products and advice. Because we believe everyone should be happy, healthy, and financially secure, we work hard to provide trusted financial products and valued guidance, including individual life and disability insurance, employee benefits, retirement planning, investments, and wealth management services.
Our company motto is “Fulfilling Life” and we take that job seriously, with a heavy commitment to volunteering and community support, employee well-being, and providing quality products and services that help our customers create a brighter future.
Being a mutual-based organization, the return on our investment goes back into the company, to benefit our members/customers.
We're committed to professional development, opportunity, inclusion, and diversity. Team building and collaboration are also priorities.
An Equal Opportunity Employer
Ameritas has a reputation as a company that cares, and because everyone should feel safe bringing their authentic, whole self to work, we're committed to an inclusive culture and diverse workplace, enriched by our individual differences. We are an Equal Opportunity/Affirmative Action Employer that hires based on qualifications, positive attitude, and exemplary work ethic, regardless of sex, race, color, national origin, religion, age, disability, veteran status, genetic information, marital status, sexual orientation, gender identity or any other characteristic protected by law.
Data Scientist - Clinical and Operational Analytics
Venesco LLC
Data scientist job in Dayton, OH
Job Description
Description:
Develop and deploy machine learning models to support clinical and operational decision-making. Work with large datasets to extract insights and support predictive analytics for human performance and health.
Requirements:
Mandatory Qualifications:
• Bachelor's degree in a quantitative field (e.g., Computer Science, Applied Math).
• 3+ years of experience in predictive analytics.
• Proficiency in Python, NumPy, Pandas, Matplotlib, and Scikit-learn.
• Ability to explain and implement ML algorithms from scratch.
• Signed NDA and HIPAA training required upon start.
Desired Qualifications:
• Experience with dashboard development and pretrained language models.
• Experience with dimensionality reduction and deep learning libraries (TensorFlow, PyTorch).
• Familiarity with human biology and performance.
Key Tasks and Responsibilities:
• Develop and tune unsupervised tree-based clustering models.
• Implement decision trees, k-NN, and optimized list sorting algorithms.
• Generate and minimize distance matrices using vectorized code (see the sketch after this list).
• Collaborate with software engineers and maintain HIPAA compliance.
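As a small illustration of the vectorized distance-matrix task above, this sketch computes a full pairwise Euclidean distance matrix with NumPy broadcasting instead of a double Python loop; the array shape is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 8))  # toy (n_samples, n_features) data

# ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed for all pairs at once.
sq_norms = (X ** 2).sum(axis=1)
sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
np.maximum(sq_dists, 0.0, out=sq_dists)  # clamp tiny negatives from rounding
D = np.sqrt(sq_dists)

# "Minimize": find each row's nearest neighbor, ignoring the zero diagonal.
np.fill_diagonal(D, np.inf)
nearest = D.argmin(axis=1)
print(D.shape, nearest[:5])
```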
Data Scientist
Core4Ce Careers
Data scientist job in Dayton, OH
We are seeking a highly skilled Data Scientist / Machine Learning Engineer to develop advanced analytics and machine learning solutions that drive meaningful insights for our customers. In this role, you will design and test algorithms, build data-driven experiments, and collaborate closely with SMEs and developers to transform data into actionable intelligence. This position is ideal for someone who excels at both innovative research and practical implementation.
Key Responsibilities:
Algorithm Development: Develop machine learning, data mining, statistical, and graph-based algorithms to analyze complex data sets and uncover meaningful patterns.
Model Evaluation: Test, validate, and down-select algorithms to determine the best-performing models for customer requirements.
Experimental Design & Data Generation: Design experiments and create synthetic or simulated data when training/example data sets are limited or unavailable (a minimal sketch follows this list).
Data Visualization & Reporting: Produce clear reports, dashboards, and visualizations that communicate data insights to customers and stakeholders in an intuitive manner.
Automation & SME Collaboration: Work with subject matter experts to convert manual analytic workflows into efficient, automated analytics solutions.
Cross-Functional Development: Collaborate with software developers to ensure algorithms are properly implemented, optimized, and integrated into production systems.
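To illustrate the Experimental Design & Data Generation responsibility above, here is a minimal sketch that generates a labeled synthetic data set and uses it to exercise a model end to end. The class balance, noise level, and model are assumptions chosen for the example only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for scarce training data: rare positives plus label noise.
X, y = make_classification(
    n_samples=2_000, n_features=20, n_informative=6,
    weights=[0.9, 0.1],   # simulate a rare positive class
    flip_y=0.02,          # inject label noise
    random_state=0,
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("F1 on held-out synthetic data:", round(f1_score(y_te, clf.predict(X_te)), 3))
```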
*This position is designed to be flexible, with responsibilities evolving to meet business needs and enable individual growth.
Required Qualifications:
Active TS-SCI security clearance with the ability to obtain a CI poly.
OPIR Experience
Modeling and Simulation Experience
Experience designing, training, and validating machine learning models and statistical algorithms.
Proficiency with Python, R, or similar languages used for analytics and model development.
Hands-on experience with data visualization tools (e.g., Tableau, Power BI, matplotlib, seaborn).
Strong understanding of experimental design and data generation strategies.
Ability to communicate complex analytic concepts to both technical and non-technical audiences.
Demonstrated ability to work collaboratively across multidisciplinary teams.
Degree in Mathematics/Statistics, Computer Science, or a relevant domain field.
MA/MS degree with 13+ years of relevant experience, OR
BA/BS degree with 15+ years of relevant experience in a discipline aligned with the position's responsibilities.
Why Work for Us?
Core4ce is a team of innovators, self-starters, and critical thinkers, driven by a shared mission to strengthen national security and advance warfighting outcomes.
We offer:
401(k) with 100% company match on the first 6% deferred, with immediate vesting
Comprehensive medical, dental, and vision coverage-employee portion paid 100% by Core4ce
Unlimited access to training and certifications, with no pre-set cap on eligible professional development
Tuition assistance for job-related degrees and courses
Paid parental leave, PTO that grows with tenure, and generous holiday schedules
Got a big idea? At Core4ce, The Forge gives every employee the chance to propose bold innovations and help bring them to life with internal backing.
Join us to build a career that matters, supported by a company that invests in you.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), national origin, disability, veteran status, age, genetic information, or other legally protected status.
Data Scientist
Vt Group
Data scientist job in Beavercreek, OH
VTG is seeking a Data Scientist to support our Team in Beavercreek, OH. The Data Scientist will design, prototype, and implement a data management and application development pipeline in support of national defense data science and data architecture prototyping tasks. This role will also include gathering and organizing data, conducting data analytics, and developing data analytic and AI/ML based applications. This is an onsite role due to its classification level.
What will you do?
The Data Scientist will work with a team of DevOps engineers, software developers, data engineers, and system operators to identify data needs and prototype a range of novel solutions. This Data Scientist will be involved at all levels of the data life cycle, from onboard management of data, through its use in application development, and back to application integration and the gathering of test data.
Leverage third-party tools to architect and prototype a modern data management and application development pipeline in a local and/or a cloud environment
Perform data analytics of simulated and real-world data
Integrate structured and unstructured data from disparate data sources (a toy sketch follows this section)
Develop applications and models supporting various users
Provide technical input to program managers and government representatives
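As a toy illustration of the structured/unstructured integration duty above, the sketch below joins tabular records with JSON event logs in pandas; every identifier and value is invented.

```python
import json
import pandas as pd

# Structured records (CSV-like) and unstructured JSON event logs.
records = pd.DataFrame({
    "asset_id": [101, 102, 103],
    "platform": ["sim", "sim", "field"],
})
raw_logs = [
    '{"asset_id": 101, "event": "detect", "score": 0.91}',
    '{"asset_id": 103, "event": "detect", "score": 0.42}',
]
logs = pd.DataFrame([json.loads(line) for line in raw_logs])

# Left join keeps every asset, with NaN where no events were logged.
merged = records.merge(logs, on="asset_id", how="left")
print(merged)
```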
Do you have what it takes?
Requirements:
* Bachelor's Degree, majoring in Computer Science, Data Science, Information Systems, or a related field
* 4+ years of experience as a Data Scientist, including experience in statistical modeling and machine learning based on the analysis of large data sets
* Experience with data storage and management tools (S3, SQL, MongoDB, Hbase, Apache Atlas, Kafka, etc.)
* Programming experience in Python, R, or similar data manipulation languages and associated libraries (e.g. pandas, numpy, polars, dask)
* Experience with data science and analytics toolsets (e.g. JupyterHub / Jupyter Notebooks, Apache Spark, MATLAB)
* Knowledge of data modeling principles
* Experience in knowledge extraction and insights from data in various forms, both structured and unstructured
* Cloud development experience, preferably in AWS
* Excellent verbal and written communication skills
* US Citizen with current TOP SECRET/SCI Eligible Clearance or ability to obtain a TOP SECRET/SCI clearance
* Successful completion of background check
Desired Qualifications/Skills/Abilities:
* Master's Degree or higher in Computer Science, Data Science, or Information Systems
* Experience establishing data pipelines in cloud platforms, such as AWS, Azure, or Google Cloud
* Data visualization experience and associated tools/libraries (e.g. pyplot, seaborn)
* Experience using Git for version control and issue tracking
* Experience with artifact repositories (e.g. Artifactory)
* Experience with CI/CD pipelines (e.g. Jenkins, Gitlab pipelines)
* Experience with AI/ML development tools and libraries (e.g. Sagemaker, ML Studio, Tensorflow, Keras, scikit-learn)
* Experience leading teams and projects
* Programming experience in C++ and Java
* Experience with Linux systems
Travel Requirements:
* Minimal travel, mostly in the Beavercreek, OH area
Make an Impact on the Future of Our Annuity Products
We're looking for an experienced actuary who enjoys both hands-on analytical work and shaping product strategy. In this role, you'll lead pricing and inforce management projects for our individual annuity products, especially fixed indexed annuities (FIAs).
Your first major focus will be designing and pricing GLWB riders on FIAs. From there, you'll have the opportunity to influence product innovation, improve modeling processes, and collaborate across the company to strengthen our overall product suite.
If you like having real ownership, enjoy partnering with smart colleagues across multiple functions, and want to help build the next generation of annuity products, this role is a great fit.
What Will You Work On
Lead pricing and design for FIAs, including GLWB riders.
Support product filings and regulatory submissions.
Prepare pricing results, documentation, and presentations for senior leaders.
Help drive new product development, potentially including a RILA product.
Assess the impact of reinsurance or M&A activity.
Perform inforce management and analyze product experience trends.
Enhance modeling frameworks and refine rate-setting processes.
Monitor industry trends.
Build a strong understanding of our current and past product offerings.
Leadership & Collaboration
Work closely with partners in IT, investments, hedging, marketing, distribution, capital management, valuation, risk, compliance, and legal.
Mentor junior actuaries and help build deeper expertise in annuity pricing.
Drive projects forward with independence and confidence.
Option to manage 1-2 direct reports while still spending most of your time as a hands-on individual contributor.
Data Services Intern
Reynolds and Reynolds Company
Data scientist job in Dayton, OH
":"As the Data Services Intern, you will work with customer lists and customers over the phone to validate and audit applications. You will also assist and lead an audit project on one of our databases to ensure it is up-to-date and organized. Additional team projects will be assigned by management.
If you have a strong attention to detail and organization skills, this is the position for you!","job_category":"Internships and Part-time","job_state":"OH","job_title":"Data Services Intern","date":"2026-01-12","zip":"45430","position_type":"Part-Time","salary_max":"0","salary_min":"0","requirements":"Strong attention to detail and organization skills~^~Analytical and communication skills~^~Open to any major or college level~^~Proficient in Microsoft Word and Excel","training":"Training will be on the job","benefits":"We strive to offer an environment that provides our associates with the right balance between work and family.
We offer a comprehensive benefits package including: - Free on-site fitness center with a cardiovascular fitness area, weight room, daily fitness classes - Health Center staffed by a Nurse Practitioner and on-site prescription delivery - Company sports leagues including softball, volleyball, soccer, and many more - World-class cafeteria with complimentary breakfast and lunch options - Convenience store with a full-service coffee bar, snacks, and beverages - Associate discounts for cell phones, cars, computers, entertainment, and much more Reynolds and Reynolds promotes a healthy lifestyle by providing a non-smoking environment.
Reynolds and Reynolds is an equal opportunity employer.
","
Data Engineer
Total Quality Logistics, Inc.
Data scientist job in Cincinnati, OH
About the role: As a Data Engineer with TQL, you will be supporting the FP&A department by developing scalable reporting solutions in Microsoft Fabric. This role will focus on migrating data from on-premises systems to the cloud, building and optimizing SQL views and pipelines, and creating governed Power BI datasets and semantic models.
What's in it for you:
* $85,000-$125,000 base salary + performance bonuses
* Advancement opportunities with aggressive and structured career paths
* A culture of continuous education and technical training with reimbursements available
* Comprehensive benefits package
* Health, dental and vision coverage
* 401(k) with company match
* Perks including employee discounts, financial wellness planning, tuition reimbursement and more
What you'll be doing:
* Migrate FP&A datasets from on-premises to Microsoft Fabric/Lakehouse
* Build and maintain SQL pipelines, transformations, and views that support reporting needs
* Ensure performance, scalability, and reliability through automation, monitoring, and CI/CD best practices
* Design, publish, and manage Power BI certified datasets, semantic models, and reports/dashboards
* Apply best practices in DAX, modeling, and governance to enable accurate, self-service reporting
* Partner with Finance stakeholders to translate reporting requirements into technical deliverables
* Implement processes to ensure accuracy, consistency, and reconciliation across financial and operational systems
* Maintain documentation of data models, business logic, and reporting standards
* Troubleshoot and resolve issues impacting reporting accuracy or performance
* Collaborate with Data Governance and Quality teams to align with enterprise standards and metadata frameworks
What you need:
* Bachelor's degree in Computer Science, Information Systems, Data Engineering, or related field
* 3+ years of experience in BI/data engineering or analytics engineering
* Advanced SQL skills with proven experience in building and optimizing large-scale datasets
* Strong Power BI expertise (datasets, DAX, performance tuning, semantic models)
* Hands-on experience with Microsoft Fabric and Lakehouse/cloud data platforms preferred
* Knowledge of financial reporting concepts and ability to work with FP&A stakeholders
* Strong problem-solving skills and ability to bridge Finance and IT needs
Where you'll be: 4289 Ivy Pointe Boulevard, Cincinnati, Ohio 45245
Employment visa sponsorship is unavailable for this position. Applicants requiring employment visa sponsorship now or in the future (e.g., F-1 STEM OPT, H-1B, TN, J1 etc.) will not be considered.
About Us
Total Quality Logistics (TQL) is one of the largest freight brokerage firms in the nation. TQL connects customers with truckload freight that needs to be moved with quality carriers who have the capacity to move it.
As a company that operates 24/7/365, TQL supports work-life balance with sales support teams that assist with accounting, after-hours calls, and specific needs. At TQL, the opportunities are endless, which means there is room for career advancement and the ability to write your own paycheck.
What's your worth? Our open and transparent communication from management creates a successful work environment and custom career path for our employees. TQL is an industry leader in logistics with unlimited potential. Be a part of something big.
Total Quality Logistics is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, genetic information, disability or protected veteran status.
If you are unable to apply online due to a disability, contact recruiting at ******************
Senior Data Engineer (P358)
84.51°
Data scientist job in Cincinnati, OH
84.51° is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.
Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Kroger Plus loyalty card program to fuel a more customer-centric journey using 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.
Join us at 84.51°!
__________________________________________________________
PLEASE NOTE:
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with the Kroger Family of Companies (e.g., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
Senior Data Engineer, AI Enablement (P358) Summary
As a Senior Data Engineer on our AI Enablement team, you will cultivate strategies and solutions to ingest, store and distribute our big data. This role is on our enablement team that builds solutions for monitoring, registering, and tracking our machine learning and AI solutions across 84.51° and develops monitoring and observability pipelines for our internal AI tooling. Our engineers use PySpark, Python, SQL, GitHub Actions, and Databricks/Azure to develop scalable data solutions.
Responsibilities
Take ownership of features and drive them to completion through all phases of the entire 84.51° SDLC. This includes external-facing and internal applications as well as process improvement activities such as:
Lead design of Python and PySpark based solutions (a minimal sketch follows this list)
Perform development of cloud based (Azure) ETL solutions
Build and configure cloud infrastructure for all stages of the data development lifecycle
Execute unit and integration testing
Develop robust data QA processes
Collaborate with senior resources to ensure consistent development practices
Provide mentoring to junior resources
Build visualizations in Databricks Apps, Databricks Dashboards and PowerBI
Bring new perspectives to problems and be driven to improve yourself and the way things are done
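A minimal sketch of the Python/PySpark work described above; the paths, column names, and aggregation logic are invented for illustration and are not 84.51°'s actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read raw events from a hypothetical landing zone, standardize, aggregate,
# and write a curated table: the basic shape of a cloud ETL step.
raw = spark.read.json("/mnt/raw/events/")
curated = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("household_id").isNotNull())
       .groupBy("household_id", F.to_date("event_ts").alias("event_date"))
       .agg(F.count("*").alias("events"), F.sum("amount").alias("total_amount"))
)
curated.write.mode("overwrite").partitionBy("event_date").parquet("/mnt/curated/daily_events/")
```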
Qualifications, Skills, and Experience
Bachelor's degree in Computer Science, Management Information Systems, Mathematics, Business Analytics or another technically strong program.
3+ years of proven professional data engineering experience
Strong understanding of Agile Principles (Scrum)
3+ years of proven experience developing with Python and PySpark
Full understanding of ETL concepts and data warehousing concepts
Experience with CI/CD frameworks (GitHub Actions a plus)
Experience with visualization techniques and tools like Databricks dashboards or PowerBI a plus
Languages/Tech stack:
Python
PySpark
Terraform
Databricks
GitHub Actions
Azure
AKS experience a plus
Dashboard experience a plus
WebApp experience a plus
#LI-SSS
Pay Transparency and Benefits
The stated salary range represents the entire span applicable across all geographic markets from lowest to highest. Actual salary offers will be determined by multiple factors including but not limited to geographic location, relevant experience, knowledge, skills, other job-related qualifications, and alignment with market data and cost of labor. In addition to salary, this position is also eligible for variable compensation.
Below is a list of some of the benefits we offer our associates:
Health: Medical: with competitive plan designs and support for self-care, wellness and mental health. Dental: with in-network and out-of-network benefit. Vision: with in-network and out-of-network benefit.
Wealth: 401(k) with Roth option and matching contribution. Health Savings Account with matching contribution (requires participation in qualifying medical plan). AD&D and supplemental insurance options to help ensure additional protection for you.
Happiness: Paid time off with flexibility to meet your life needs, including 5 weeks of vacation time, 7 health and wellness days, 3 floating holidays, as well as 6 company-paid holidays per year. Paid leave for maternity, paternity and family care instances.
Pay Range$97,000-$166,750 USD
Data Analysis Engineer
Absolics Inc.
Data scientist job in Covington, KY
DUTIES/RESPONSIBILITIES
* Execute Quality Data Analysis
* Administer Root Cause Analysis
* Data Collection and Management
* Data Analysis and Interpretation
REQUIREMENTS
* Excellent analytical and problem-solving skills
* Solid understanding of quality management principles, root cause analysis, and corrective action processes
* Must be legally permitted to work in the United States
* Proficiency in using quality management software and tools, as well as Microsoft Office applications
* Problem-solving mindset and the ability to work well under pressure to meet deadlines
* Strong analytical skills and attention to detail, with the ability to interpret data and trends to drive informed decisions
EDUCATION
* Bachelor's degree related to Quality or a similar industry
EXPERIENCE
* 5+ years of Quality experience
* Experience with Machine Learning
* Experience with Big Data Analysis & Automation
* Experience with Yield Management System from SK Hynix/Samsung Semiconductor
Cloud Data Engineer
Radiancetech
Data scientist job in Beavercreek, OH
Radiance Technologies, a rapidly growing employee-owned company supporting the Department of Defense, is searching for a Cloud Data Engineer to join our team. We are looking for a self-starter with excellent people skills to work with our customers. Employee ownership, generous 401K and profit sharing, competitive salaries, pleasant work environments, and stimulating assignments combine to make Radiance Technologies a great place to work and succeed.
We are seeking a Cloud Data Engineer to design, implement, and manage seamless data and object storage solutions across on-premises, cloud, inter-region, and inter-cloud platforms. The Cloud Data Engineer will also be responsible for designing, implementing, and managing databases in our cloud environment, ensuring high levels of data availability and assisting in the development of data models.
This role focuses on enabling secure, scalable, and resilient architectures that support structured and unstructured data, high-velocity pipelines, and multi-cloud ecosystems. The ideal candidate will have expertise in cloud-native data engineering, object storage platforms, and data transformation pipelines, with a strong understanding of managing data and storage across hybrid environments. This role requires close collaboration with platform, software, and analytics teams to ensure data and storage solutions align with mission-critical needs.
Candidates should have an in-depth understanding of database structure principles and experience with cloud platforms such as AWS, Google Cloud, or Microsoft Azure. Database experience should include relational (Oracle, PostgreSQL, etc.), graph databases (Neo4j, ArangoDB, etc.), search and analytics engines like ElasticSearch, and other NoSQL databases (MongoDB, Cassandra, etc.).
Key Responsibilities:
Seamless Data and Object Storage Management:
Design, implement, and manage databases and object storage solutions across on-premises, cloud, inter-region, and inter-cloud platforms.
Enable seamless data and object storage movement and synchronization between environments, ensuring high availability and minimal latency.
Engineer and optimize object storage platforms (e.g., AWS S3, MinIO, Ceph) for durability, performance, lifecycle management, and secure access (a minimal sketch follows this section).
Data and Storage Architecture Optimization:
Build and maintain scalable data pipelines for structured and unstructured data, supporting ingestion, curation, metadata enrichment, and analytics workflows.
Modernize storage architectures, including migration from legacy systems (e.g., NFS/file shares) to object-based architectures.
Implement data tiering, record/object-level authorization, and secure access controls for both databases and object storage.
Collaboration and Integration:
Work closely with development teams to optimize database and object storage usage, integrating data services with containerized platforms (e.g., Kubernetes, OpenShift) and CI/CD pipelines.
Partner with platform engineers, DevSecOps teams, and mission users to align data and storage solutions with operational needs.
Governance and Security:
Implement and enforce data governance policies, tagging, metadata schemas, and access controls for both databases and object storage.
Develop and manage backup and restore procedures for databases and object storage, ensuring disaster recovery readiness.
Monitor system performance and resolve database and object storage performance and capacity issues.
Documentation and Troubleshooting:
Document architectures, data flows, object storage configurations, and operational procedures to reduce tribal knowledge.
Troubleshoot complex data, object storage, performance, and access issues across environments (DEV/INT/PROD).
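A minimal sketch of the object-storage lifecycle work referenced above, using boto3 against an S3-compatible endpoint; the endpoint, bucket, prefix, and tiering rules are placeholder assumptions, not a specific program's policy.

```python
import boto3

# S3-compatible client (AWS S3, MinIO, Ceph RGW); endpoint is hypothetical.
s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")

# Lifecycle rule: tier cold raw data to cheaper storage, then expire it.
s3.put_bucket_lifecycle_configuration(
    Bucket="mission-data",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```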
Required Qualifications:
Bachelor's degree in Computer Science, Engineering, Data Science, or a related STEM field (Master's preferred).
5+ years of relevant work experience in data engineering, database administration, or object storage management.
U.S. Citizenship with the ability to obtain and maintain a DoD TS/SCI Security Clearance.
Proven experience managing databases and object storage solutions across on-premises, cloud, inter-region, and inter-cloud platforms.
Expertise in cloud platforms such as AWS, Google Cloud, or Microsoft Azure.
Hands-on experience with object storage technologies (e.g., AWS S3, MinIO, Ceph) and data transformation pipelines (e.g., Apache Airflow, Argo Workflows, Python).
Strong proficiency in Python, SQL, and modern data engineering frameworks.
Familiarity with both relational (e.g., Oracle, PostgreSQL) and non-relational databases (e.g., MongoDB, Cassandra).
Experience with data tiering, record/object-level authorization, and secure data handling.
Ability to handle multiple projects and deadlines in a fast-paced environment.
Desired Qualifications:
Active TS/SCI clearance.
Security+CE certification.
Experience supporting Intelligence Community (IC) mission programs.
Knowledge of data catalogs, search/indexing, and discovery tools.
Experience with streaming data (e.g., Kafka) and event-driven architectures.
Familiarity with DevSecOps, security scanning, and accreditation processes.
Understanding of analytics, machine learning (ML), or exploitation workflows consuming large datasets.
Prior experience modernizing or migrating legacy data systems.
EOE/Minorities/Females/Vet/Disabled
Data Engineer
Tata Consultancy Services
Data scientist job in Cincinnati, OH
* 2+ years of proven professional data development experience
* 2+ years developing with SQL
* 4+ years Python development
* 3+ years Java, Spring Framework development
* Object-Oriented Programming
* 3+ years Distributed Data Processing (PySpark, Snowpark)
* Proficient CI/CD practices
* Automated data pipeline orchestration
* Data observability - Logging, Monitoring, and Alerting
* Databricks and/or Snowflake
* API development
* Data quality checks
* Cloud Technologies (Azure preferred)
Roles & Responsibilities:
* Develop distributed data processing data pipeline solutions
* Orchestrate multi-step data transformation pipelines
* Perform unit, integration, and regression testing on packaged code
* Build transformation logic and code in an Object Oriented Programming style
* Enhance CI/CD pipelines in the path to production
* Create data quality checks for ingested and post-processed data (a minimal sketch follows this list)
* Ensure data observability via alerting and monitoring of automated pipeline solutions
* Maintain and enhance existing applications
* Build cloud resources via infrastructure as code
* Provide mentoring to junior team members
* Participate in retrospective reviews
* Participate in the estimation process for new work and releases
* Bring new perspectives to problems
* Be driven to improve yourself and the way things are done
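To give the data-quality item above a concrete shape, here is a minimal pandas sketch of a post-ingest check that fails loudly before bad data reaches downstream consumers; the column names and thresholds are invented.

```python
import pandas as pd

def check_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of invariant violations found in an ingested batch."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        failures.append("negative amounts")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
    return failures

batch = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [10.0, -5.0, 7.5],
    "customer_id": ["a", None, "c"],
})
problems = check_batch(batch)
if problems:
    raise ValueError("data quality check failed: " + "; ".join(problems))
```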
TCS Employee Benefits Summary:
* Discretionary Annual Incentive.
* Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
* Family Support: Maternal & Parental Leaves.
* Insurance Options: Auto & Home Insurance, Identity Theft Protection.
* Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
* Time Off: Vacation, Time Off, Sick Leave & Holidays.
* Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
#LI-RJ2
Salary Range: $100,000-$120,000 a year
Senior Data Engineer
Apidel Technologies
Data scientist job in Blue Ash, OH
Job Description
The Engineer is responsible for staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks. The Engineer has overall responsibility in the technical design process. Leads and participates in the application technical design process and completes estimates and work plans for design, development, implementation, and rollout tasks. The Engineer also communicates with the appropriate teams to ensure that assignments are delivered with the highest quality and in accordance with standards. The Engineer strives to continuously improve the software delivery processes and practices. Role model and demonstrate the company's core values of respect, honesty, integrity, diversity, inclusion and safety of others.
Current tools and technologies include:
Databricks and Netezza
Key Responsibilities
Lead and participate in the design and implementation of large and/or architecturally significant applications.
Champion company standards and best practices. Work to continuously improve software delivery processes and practices.
Build partnerships across the application, business and infrastructure teams.
Set up the new customer data platform, migrating from Netezza to Databricks.
Complete estimates and work plans independently as appropriate for design, development, implementation and rollout tasks.
Communicate with the appropriate teams to ensure that assignments are managed appropriately and that completed assignments are of the highest quality.
Support and maintain applications utilizing required tools and technologies.
May direct the day-to-day work activities of other team members.
Must be able to perform the essential functions of this position with or without reasonable accommodation.
Work quickly with the team to implement the new platform.
Be onsite with development team when necessary.
Behaviors/Skills:
Puts the Customer First - Anticipates customer needs, champions for the customer, acts with customers in mind, exceeds customers' expectations, gains customers' trust and respect.
Communicates effectively and candidly - Communicates clearly and directly, approachable, relates well to others, engages people and helps them understand change, provides and seeks feedback, articulates clearly, actively listens.
Achieves results through teamwork - Is open to diverse ideas, works inclusively and collaboratively, holds self and others accountable, involves others to accomplish individual and team goals
Note to Vendors
Length of Contract: 9 months
Top skills: Databricks, Netezza
Soft Skills Needed: collaborating well with others, working in a team dynamic
Project person will be supporting: staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks that will replace Netezza
Team details (i.e., size, dynamics, locations): most of the team is located in Cincinnati, working onsite at the BTD
Work Location (in office, hybrid, remote): onsite at the BTD when necessary, approximately 2-3 days a week
Is travel required: No
Max Rate if applicable: best market rate
Required Working Hours: 8-5 EST
Interview process and when will it start: starting with one interview; process may change
Prescreening Details: standard questions; scores will carry over
When do you want this person to start: looking to hire quickly; the team is looking to move fast
Senior Data Engineer
General Electric Credit Union
Data scientist job in Cincinnati, OH
General Electric Credit Union is a not-for-profit, member-owned full service financial institution headquartered in Cincinnati with branches in Ohio and Kentucky. At GECU, we pride ourselves on maintaining quality service, being an employee-friendly workplace, and developing our team members while teaching you the skills to lead you to career advancement opportunities.
Overview: The Senior Data Engineer will play a key role in developing and optimizing GECU's data infrastructure to support the organization's data-driven initiatives. The Senior Data Engineer will design, build, and maintain scalable data pipelines and systems, working with the data and development team.
Essential Responsibilities:
Design, implement, and maintain robust, scalable, and high-performance data pipelines and ETL processes to collect, process, and store large volumes of structured and unstructured data.
Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
Develop and maintain data warehouse and data lake solutions, ensuring data quality, integrity, and reliability.
Optimize data pipelines and ETL processes for performance, efficiency, and cost-effectiveness, utilizing best practices and technologies.
Implement data governance and security measures to ensure compliance with regulatory requirements and data privacy standards.
Troubleshoot and resolve issues related to data processing, data quality, and system performance in a timely manner.
Evaluate and recommend new technologies, tools, and frameworks to enhance the organization's data infrastructure and capabilities.
Document technical specifications, data lineage, and system architecture to facilitate knowledge sharing and collaboration.
Collaborate with other key data employees to maintain and publish data definitions and data catalogue.
Stay up to date with industry trends and emerging technologies in data engineering and analytics.
Education and Experience:
High school diploma, or GED required; Bachelor's degree in Computer Science, Engineering, or related field; Master's degree is a plus.
Minimum 6 years' experience in Data Engineering; working with data warehousing concepts, database technologies (e.g., SQL, NoSQL), and distributed computing architectures.
Experience with Snowflake Data Warehouse preferred
Knowledge, Skills, and Abilities:
Strong programming skills in languages such as Python, Java, Scala, or SQL, with experience in data manipulation, transformation, and analysis.
Knowledge of cloud platforms such as AWS, Azure, or Google Cloud Platform.
Extensive knowledge of data modeling, schema design, and optimization techniques for relational and non-relational databases.
Excellent problem-solving and troubleshooting skills, with the ability to diagnose and resolve complex technical issues.
Strong communication and collaboration skills, with the ability to work effectively in a team environment and interact with stakeholders at all levels.
Ability to perform independently and competently to accomplish necessary deliverables accurately and on time.
Ability to assist and mentor Junior Data Engineers
At GECU, we want to support your wellbeing by offering a wide range of benefits:
Health, Dental and Vision insurance
Life and Disability insurance options
Paid Time Off begins accruing upon hire, plus a paid day off for your birthday
401(k) Retirement plan with up to a 10% match of your base gross compensation
Tuition reimbursement opportunities & professional development
Volunteer opportunities -and earn additional PTO hours!
On-site clinics for Vaccines and Mammograms
And many more!
Come join GECU and be part of a culture of respect, understanding, and mutual recognition. We believe forming bonds and connecting with each other only strengthens the service we provide to our members in our mission of improving the quality of financial lives!
General Electric Credit Union is an Equal Opportunity Employer
Actuarial Analyst II (Medicare)
CareSource
Data scientist job in Dayton, OH
The Actuarial Analyst II (Medicare) is responsible for providing timely, accurate, and actionable data, analysis and recommendations supporting the department, area, and corporate key initiatives.
Essential Functions:
Develops actionable recommendations to management relative to all analysis
Calculates profitability reports on appropriate business segments
Leads regular and ad-hoc analysis identifying opportunities to improve efficiency and affordability
Leads regular and ad-hoc analysis/studies to evaluate assumptions used in other processes
Participates in cross functional teams to address key issues facing the organization
Prepares monthly calculation of the company's IBNR reserves (a toy sketch follows this list)
Evaluates government reimbursement process to determine areas of risk and opportunity
Participates in initiatives to improve operating results
Initiates key analysis to improve overall MLR results
Performs other duties as assigned
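For context on the IBNR item above, here is a toy chain-ladder sketch, one common textbook approach to IBNR estimation and not necessarily the method used at CareSource; all triangle figures are invented.

```python
import numpy as np

# Rows: incident periods. Columns: cumulative paid claims by development age.
tri = np.array([
    [100.0, 150.0, 165.0],
    [110.0, 160.0, np.nan],
    [120.0, np.nan, np.nan],
])

# Volume-weighted age-to-age development factors from the observed cells.
factors = []
for j in range(tri.shape[1] - 1):
    has_next = ~np.isnan(tri[:, j + 1])
    factors.append(tri[has_next, j + 1].sum() / tri[has_next, j].sum())

# Roll each row forward to ultimate with the remaining factors.
ultimates = []
for row in tri:
    last = np.flatnonzero(~np.isnan(row)).max()
    value = row[last]
    for f in factors[last:]:
        value *= f
    ultimates.append(value)

paid_to_date = tri[np.arange(3), [2, 1, 0]].sum()  # latest diagonal
print(f"estimated IBNR: {sum(ultimates) - paid_to_date:.1f}")
```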
Preferred Experience:
Experience with Medicare Advantage bids
Experience with data compilation to support bids
Experience with desk review audit responses
Experience developing and using pricing models related to Medicare Advantage
Experience with Medicare Risk Adjustment
Experience with building large Excel models
Proficient with relational databases and coding languages (SQL)
Experience with DSNP and/or FIDE/HIDE-SNPs
Education and Experience:
Bachelor's degree in actuarial science, mathematics, economics or a related field is required
Minimum of one (1) year of decision support or financial analysis experience is preferred
Intermediate database (SAS or Access) experience is required
Managed Care or healthcare experience is preferred
Competencies, Knowledge and Skills:
Intermediate interpersonal/verbal communication skills and a high level of professionalism
Intermediate written communication skills
Intermediate ability to work independently and within a team environment
Intermediate ability to work in a fast-paced environment with changing priorities
Intermediate ability to remain calm under pressure
Intermediate proficiency level with Microsoft Excel
Intermediate proficiency with the Microsoft Office suite, including Word, PowerPoint, and Outlook
Intermediate ability to effectively present information and respond to questions from peers and management
Intermediate ability to analyze and interpret financial data in order to coordinate the preparation of financial records
Intermediate knowledge of SQL, SAS, R, or other data manipulation software
Effective problem-solving skills with attention to detail
Effective listening and critical thinking skills
Licensure and Certification:
A combination of years of industry experience and exams passed under the Society of Actuaries Exam Syllabus (excluding exam SRM earned via transition credit) totaling five (5) or more is required
Active pursuit of an actuarial designation (ASA) is required
Working Conditions:
General office environment; may be required to sit or stand for extended periods of time
Compensation Range:
$72,200.00 - $115,500.00
CareSource takes into consideration a combination of a candidate's education, training, and experience, as well as the position's scope and complexity, the discretion and latitude required for the role, and other external and internal data when establishing a salary level. In addition to base compensation, you may qualify for a bonus tied to company and individual performance. We are highly invested in every employee's total well-being and offer a substantial and comprehensive total rewards package.
Compensation Type:
Salary
Competencies:
Fostering a Collaborative Workplace Culture
Cultivate Partnerships
Develop Self and Others
Drive Execution
Influence Others
Pursue Personal Excellence
Understand the Business
This job description is not all-inclusive. CareSource reserves the right to amend it at any time. CareSource is an Equal Opportunity Employer. We are dedicated to fostering an environment of belonging that welcomes and supports individuals of all backgrounds.
$72.2k-115.5k yearly 2d ago
Data Scientist - Clinical and Operational Analytics
Venesco LLC
Data scientist job in Dayton, OH
Requirements
Mandatory Qualifications:
• Bachelor's degree in a quantitative field (e.g., Computer Science, Applied Math).
• 3+ years of experience in predictive analytics.
• Proficiency in Python, NumPy, Pandas, Matplotlib, and Scikit-learn.
• Ability to explain and implement ML algorithms from scratch.
• Signed NDA and HIPAA training required upon start.
Desired Qualifications:
• Experience with dashboard development and pretrained language models.
• Experience with dimensionality reduction and deep learning libraries (TensorFlow, PyTorch).
• Familiarity with human biology and performance.
Key Tasks and Responsibilities:
• Develop and tune unsupervised tree-based clustering models.
• Implement decision trees, k-NN, and optimized list sorting algorithms.
• Generate and minimize distance matrices using vectorized code (see the sketch after this list).
• Collaborate with software engineers and maintain HIPAA compliance.
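As a hedged sketch of the vectorized distance-matrix and k-NN tasks above (synthetic data; not Venesco's actual code), the following NumPy snippet builds a pairwise Euclidean distance matrix without Python-level loops and reads nearest neighbors off it:

    # Vectorized pairwise Euclidean distance matrix and a simple k-NN lookup.
    # Synthetic data; illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))  # 200 points, 8 features

    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed without explicit loops.
    sq_norms = (X ** 2).sum(axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    dists = np.sqrt(np.maximum(sq_dists, 0.0))  # clamp tiny negative rounding

    # k nearest neighbors of point 0 (index 0 itself excluded).
    k = 5
    neighbors = np.argsort(dists[0])[1:k + 1]
    print("5 nearest neighbors of point 0:", neighbors)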
$69k-95k yearly est. 60d+ ago
Senior Data Engineer
Medpace 4.5
Data scientist job in Cincinnati, OH
Our corporate activities are growing rapidly, and we are currently seeking a full-time, office-based Senior Data Engineer to join our Information Technology team. This position will work on a team to accomplish tasks and projects that are instrumental to the company's success. If you want an exciting career where you apply your expertise and can continue to develop and grow, then this is the opportunity for you.
Responsibilities
* Utilize skills in development areas including data warehousing, business intelligence, and databases (Snowflake, ANSI SQL, SQL Server, T-SQL);
* Support programming/software development using Extract, Transform, and Load (ETL) and Extract, Load, and Transform (ELT) tools (dbt, Azure Data Factory, SSIS), as sketched after this list;
* Design, develop, enhance and support business intelligence systems primarily using Microsoft Power BI;
* Collect, analyze and document user requirements;
* Participate in software validation process through development, review, and/or execution of test plan/cases/scripts;
* Create software applications by following software development lifecycle process, which includes requirements gathering, design, development, testing, release, and maintenance;
* Communicate with team members regarding projects, development, tools, and procedures; and
* Provide end-user support including setup, installation, and maintenance for applications released.
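As a hedged, minimal sketch of an ELT-style flow in the spirit of the tools above (table names are assumptions, and SQLite stands in for a real warehouse; production stacks here would use dbt, Azure Data Factory, or SSIS against Snowflake or SQL Server):

    # Minimal ELT sketch (illustrative only): land raw records, then
    # transform in SQL, mirroring the extract-load-transform pattern.
    import sqlite3  # stand-in for a real warehouse such as Snowflake

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
    conn.executemany(
        "INSERT INTO raw_orders VALUES (?, ?, ?)",
        [(1, 99.0, "shipped"), (2, 25.0, "cancelled"), (3, 42.0, "shipped")],
    )

    # Transform step (what a dbt model would express as a SELECT):
    conn.execute("""
        CREATE TABLE orders_clean AS
        SELECT id, amount
        FROM raw_orders
        WHERE status != 'cancelled'
    """)
    print(conn.execute("SELECT * FROM orders_clean").fetchall())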
Qualifications
* Bachelor's Degree in Computer Science, Data Science, or a related field;
* 5+ years of experience in Data Engineering;
* Knowledge of developing dimensional data models and awareness of the advantages and limitations of Star Schema and Snowflake schema designs (see the sketch after this list);
* Solid ETL development and reporting knowledge based on an intricate understanding of business processes and measures;
* Knowledge of Snowflake cloud data warehouse, Fivetran data integration and dbt transformations is preferred;
* Knowledge of Python is preferred;
* Knowledge of REST APIs;
* Basic knowledge of SQL Server databases is required;
* Knowledge of C#, Azure development is a bonus; and
* Excellent analytical, written and oral communication skills.
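On the Star vs. Snowflake point above: in a star schema a central fact table joins directly to denormalized dimensions, so typical reports need only one level of joins; a snowflake design normalizes the dimensions further, saving storage at the cost of extra joins. A minimal pandas sketch with hypothetical tables (not Medpace's actual model):

    # Star-schema sketch: one fact table joined directly to denormalized
    # dimensions (hypothetical names; illustrative only). A snowflake schema
    # would split dim_product into product -> category tables instead.
    import pandas as pd

    fact_sales = pd.DataFrame({
        "date_key": [1, 1, 2],
        "product_key": [10, 11, 10],
        "amount": [99.0, 25.0, 42.0],
    })
    dim_date = pd.DataFrame({"date_key": [1, 2], "month": ["2024-01", "2024-02"]})
    dim_product = pd.DataFrame({
        "product_key": [10, 11],
        "product_name": ["Widget", "Gadget"],
        "category": ["Hardware", "Hardware"],  # denormalized in a star schema
    })

    # One level of joins answers a typical BI question: sales by month/category.
    report = (
        fact_sales
        .merge(dim_date, on="date_key")
        .merge(dim_product, on="product_key")
        .groupby(["month", "category"], as_index=False)["amount"].sum()
    )
    print(report)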
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas, including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral, and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
$80k-111k yearly est. 8d ago
Data Engineer
Tata Consulting Services 4.3
Data scientist job in Blue Ash, OH
* Proven experience as a Software Developer, with a strong focus on building scalable and efficient Python applications.
* Experience in developing Spark Structured Streaming applications is highly desirable.
* 7+ years of professional software development experience.
* Strong analytical and problem-solving skills, with the ability to debug and optimize Spark jobs running on Databricks.
* Ability to work closely with cross-functional teams to deliver high-quality streaming solutions.
Technical Skills:
* Strong expertise in Python, PySpark, and Spark Structured Streaming (see the sketch after this list).
* Experience with Databricks and Azure.
* Familiarity with Delta Lake and Terraform scripting.
* Proficiency in working with varied data file formats (Avro, JSON, CSV) for ingestion and transformation.
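As an illustrative sketch of that streaming stack (paths, schema, and checkpoint locations are assumptions, and the Delta sink presumes Delta Lake is available, as on Databricks), a minimal PySpark Structured Streaming job might look like:

    # Minimal PySpark Structured Streaming sketch (illustrative only; paths,
    # schema, and checkpoint locations are assumptions).
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    schema = StructType([
        StructField("event_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    # Ingest newly arriving JSON files from a landing folder.
    events = (
        spark.readStream
        .schema(schema)
        .json("/landing/events/")
    )

    # Simple transformation, then a continuous append to a Delta table.
    query = (
        events.filter(col("amount") > 0)
        .writeStream
        .format("delta")
        .option("checkpointLocation", "/chk/events/")
        .outputMode("append")
        .start("/tables/events_clean/")
    )
    query.awaitTermination()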
Software Development:
* Proficiency in Object-Oriented Programming (OOP) concepts and software design principles.
* Ability to write clean, maintainable, and scalable Python code.
GitHub Actions:
* Experience in setting up and managing CI/CD pipelines using GitHub Actions to ensure smooth and automated deployment processes.
Agile Methodology:
* Experience working in an Agile/Scrum environment, with a focus on iterative development, continuous feedback, and delivery.
Nice to Haves:
* Python unit testing and mocking.
* Unity Catalog.
* Databricks Asset Bundles.
TCS Employee Benefits Summary:
* Discretionary Annual Incentive.
* Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
* Family Support: Maternal & Parental Leaves.
* Insurance Options: Auto & Home Insurance, Identity Theft Protection.
* Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
* Time Off: Vacation, Sick Leave & Holidays.
* Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Salary Range: $100,000-$120,000 a year
How much does a data scientist earn in Cincinnati, OH?
Data scientists in Cincinnati, OH typically earn between $59,000 and $109,000 annually. This compares to the national data scientist salary range of $75,000 to $148,000.
Average data scientist salary in Cincinnati, OH
$80,000
What are the biggest employers of Data Scientists in Cincinnati, OH?
The biggest employers of Data Scientists in Cincinnati, OH are: