The Program Integrity Data Scientist II is responsible for developing, implementing, managing, and deploying in-depth analyses that meet the information needs associated with payment accuracy, anomaly detection, and Fraud, Waste, and Abuse (FWA).
Essential Functions:
Build concepts as algorithms that identify claims for pre- or post-pay intervention based on probability of fraud, waste, and abuse. Algorithms are implemented into production workflows for action: medical record request and audit, downcode adjustment, denial and remittance communication, etc.
Analyze and quantify claim payment issues and provide recommendations to mitigate identified program integrity risks.
Identify trends and patterns using standard corporate processes, tools, reports, and databases, as well as leveraging other processes and data sources.
Conduct outcome analyses to determine impact and effectiveness of corporate program and payment integrity initiatives.
Collaborate on the examination and explanation of complex data relationships to answer questions identified either within the department or by other departments as they relate to payment accuracy, anomaly detection, and FWA.
Monitor and explain anomalies in trends associated with potential Fraud, Waste, and Abuse across the corporate enterprise.
Collaborate with the Legal Department, generating data and analyses to support Legal proceedings.
Develop hypothesis tests and extrapolations on statistically valid samples to establish outlier behavior patterns and potential recoupment (see the sketch after this list).
Create, maintain, and communicate an analytical plan for each project.
Mine and analyze large structured and unstructured datasets.
Employ a wide range of data sources to develop algorithms for predicting risk and understanding drivers, detecting outliers, etc.
Develop visualizations that demonstrate the efficacy of developed algorithms.
Provide statistical validation and analysis of outcomes associated with clinical programs and interventions.
Collaborate with other teams to integrate with existing solutions.
Communicate results and ideas to key stakeholders.
Prepare code for operationalization of end-to-end model pipeline and deliverable for business consumption.
Perform any other job-related duties as requested.
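To make the extrapolation duty above concrete, here is a minimal sketch (not CareSource's actual methodology; the population size, sample, and distribution are hypothetical) of projecting overpayment recoupment from a statistically valid random sample of audited claims, using a conservative one-sided lower confidence bound:

```python
import numpy as np
from scipy import stats

# Hypothetical audit results: overpayment found on each of 200 randomly
# sampled claims, drawn from a population of 50,000 paid claims.
rng = np.random.default_rng(42)
sample_overpayments = rng.gamma(shape=2.0, scale=40.0, size=200)  # dollars
population_size = 50_000
n = len(sample_overpayments)

mean = sample_overpayments.mean()
sem = stats.sem(sample_overpayments)

# Point estimate of total overpayment in the population.
point_estimate = population_size * mean

# One-sided 90% lower confidence bound, a conservative figure often used
# when projecting recoupment from a statistically valid sample.
t_crit = stats.t.ppf(0.90, df=n - 1)
lower_bound = population_size * (mean - t_crit * sem)

print(f"Projected overpayment: ${point_estimate:,.0f}")
print(f"90% lower confidence bound: ${lower_bound:,.0f}")
```

In practice the lower bound, not the point estimate, is typically the demanded recoupment amount, which is why the one-sided interval appears here.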
Education and Experience:
Bachelor's degree in Data Science, Mathematics, Statistics, Engineering, Computer Science, or a related field required
Equivalent years of relevant work experience may be accepted in lieu of required education
Three (3) years data analysis and/or analytic programming required
Experience with cloud services (such as Azure, AWS or GCP) and the modern data stack (such as Databricks or Snowflake) preferred
Healthcare experience required
Competencies, Knowledge and Skills:
Proficient in SQL and at least one of the following programming languages: Python, R, or RAT-STATS
Familiarity with SAS is preferred
Beginner-level knowledge of developing reports or dashboards in Power BI or other business intelligence applications preferred
Ability to perform advanced statistical analyses and techniques including t-tests, ANOVAs, z-tests, statistical extrapolations, non-parametric significance testing, and sampling methodologies
Working knowledge of predictive modeling and machine learning algorithms such as generalized linear models, non-linear supervised learning models, clustering, decision trees, dimensionality reduction, and natural language processing (see the sketch after this list)
Proficient in feature engineering techniques and exploratory data analysis
Familiarity with optimization techniques and artificial intelligence methods
Ability to analyze large quantities of information and identify patterns, irregularities, and deficiencies
Knowledge of healthcare coding and billing processes, including CPT-4, HCPCS, ICD-9, DRG, and Revenue Codes preferred
Proficient with MS Office (Excel, PowerPoint, Word, Access)
Demonstrated critical thinking, verbal communication, presentation and written communication skills
Ability to work independently and within a cross-functional team environment
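As an illustration of the unsupervised outlier-detection competency above, here is a minimal sketch, using entirely synthetic provider-level features, that scores providers for anomalous billing with scikit-learn's IsolationForest and flags the most unusual for review:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-provider features aggregated from paid claims.
rng = np.random.default_rng(0)
claims = pd.DataFrame({
    "avg_paid_amount": rng.normal(120, 30, 1_000),
    "claims_per_member": rng.normal(4, 1, 1_000),
    "pct_high_level_em_codes": rng.beta(2, 8, 1_000),
})
features = ["avg_paid_amount", "claims_per_member", "pct_high_level_em_codes"]

# Unsupervised outlier scoring: lower scores mean more anomalous providers.
model = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
model.fit(claims[features])
claims["anomaly_score"] = model.decision_function(claims[features])

# Flag the most anomalous providers for medical record request and audit.
flagged = claims.nsmallest(20, "anomaly_score")
print(flagged.head())
```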
Licensure and Certification:
Working Conditions:
General office environment; may be required to sit or stand for extended periods of time
Up to 15% (occasional) travel to attend meetings, trainings, and conferences may be required
Compensation Range:
$83,000.00 - $132,800.00
CareSource takes into consideration a combination of a candidate's education, training, and experience as well as the position's scope and complexity, the discretion and latitude required for the role, and other external and internal data when establishing a salary level. In addition to base compensation, you may qualify for a bonus tied to company and individual performance. We are highly invested in every employee's total well-being and offer a substantial and comprehensive total rewards package.
Compensation Type:
Salary
Competencies:
- Fostering a Collaborative Workplace Culture
- Cultivate Partnerships
- Develop Self and Others
- Drive Execution
- Influence Others
- Pursue Personal Excellence
- Understand the Business
This list is not all-inclusive. CareSource reserves the right to amend this job description at any time. CareSource is an Equal Opportunity Employer. We are dedicated to fostering an environment of belonging that welcomes and supports individuals of all backgrounds.
Data Scientist
Procter & Gamble
Data engineer job in Cincinnati, OH
Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), including Generative and Agentic AI, we want you on our team at P&G.
As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will be expected to constructively research, design, and customize algorithms tailored to various problems and data types. Utilizing your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will directly contribute to the development of scalable Data Science algorithms. Your work will also integrate advanced techniques from Generative and Agentic AI to create more dynamic and responsive models, enhancing our analytical capabilities. You will collaborate with Data and AI Engineering teams to productionize these solutions, applying exploratory data analysis, feature engineering, and model building within cloud environments on massive datasets to deliver accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.
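As a flavor of the Operations Research side of the role described above, here is a minimal sketch (a hypothetical two-product production-mix problem, not a P&G model) of solving a small linear program with SciPy:

```python
from scipy.optimize import linprog

# Hypothetical production-mix problem: maximize profit of two SKUs subject
# to line-hours and packaging capacity. linprog minimizes, so negate profit.
c = [-40, -30]                      # profit per unit of SKU A and SKU B
A_ub = [[2, 1],                     # line-hours consumed per unit
        [1, 1]]                     # packaging slots consumed per unit
b_ub = [100, 80]                    # available line-hours and slots

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal mix:", res.x, "profit:", -res.fun)
```

Real supply-chain formulations add many more variables and constraints, but the modeling pattern (decision variables, objective, constraints) is the same.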
Key Responsibilities:
+ Algorithm Design & Development: Directly contribute to the design and development of scalable Data Science algorithms.
+ Collaboration: Work closely with Data and Software Engineering teams to effectively productionize algorithms.
+ Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building.
+ Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
+ Expertise Development: Become a known expert in one or multiple Data Science techniques and methodologies.
Job Qualifications
Required Qualifications:
+ Education: Currently pursuing or holding a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.), or equivalent work experience.
+ Technical Skills: Proficient in programming languages such as Python and familiar with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas. Demonstrated ability to develop and test code within cloud environments.
+ Communication: Strong written and verbal communication skills, with the ability to influence others to take action.
Preferred Qualifications:
+ Analytic Methodologies: Experience applying analytic methodologies such as Machine Learning, Optimization, Simulation, and Generative and Agentic AI to real-world problems.
+ Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
+ Data Handling & Cloud: Experience with large datasets and developing in cloud computing platforms such as GCP or Azure.
+ DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.
Immigration sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G, along with other work authorization FAQs, please click HERE (*******************************************************).
Procter & Gamble participates in e-verify as required by law.
Qualified individuals will not be disadvantaged based on being unemployed.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Job Schedule
Full time
Job Number
R000135859
Job Segmentation
Entry Level
Starting Pay / Salary Range
$85,000.00 - $115,000.00 / year
Data Scientist (P3860)
84.51°
Data engineer job in Cincinnati, OH
84.51° is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.
Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Kroger Plus loyalty card program to fuel a more customer-centric journey using 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.
Join us at 84.51°!
Kroger Precision Marketing (KPM), powered by 84.51°, is the retail media arm of Kroger, helping brands build stronger connections with customers through data-driven insights and media activation.
By combining Kroger's loyalty data with the power of 84.51° data science and strategic media expertise, we provide brands with meaningful insights that fuel personalization and lead to impactful activations within the advertising ecosystem. Our comprehensive analytics bring customer stories and journeys to life, delivering measurable business impact for our brand partners while enhancing the customer experience.
As a Data Scientist, you will support the AI Platform within KPM. In this role, you'll be focused on foundational data science development to address strategic use cases across KPM's commercial portfolio. You will collaborate with fellow data scientists, product managers, engineers, and AI enablement teams to prototype new methodologies, explore emerging technologies, and build scalable solutions that deliver measurable business value.
To succeed in this role, you should be eager to learn and approach challenges with curiosity. You'll need experience or familiarity with tools like Python, PySpark, SQL, Power BI or other similar statistical software to develop analytical solutions. Additionally, you'll need familiarity with or a willingness to learn aspects of machine learning, statistical analysis, and modern AI frameworks. You should enjoy working in a team environment and be open to collaborating with data scientists, product managers, and business partners to support shared goals.
Responsibilities
Assist with prototyping solutions and contributing to rapid decision-making in uncertain contexts.
Collaborate closely with cross-functional teams, including product, engineering, and business stakeholders, throughout discovery, prototyping, and scaling.
Bring strong technical acumen and a passion for turning new methodologies and capabilities into solutions for business problems.
Support project execution by managing tasks effectively to ensure timely, high-quality delivery.
Interpret results and contribute to analysis and reporting for stakeholders.
Contribute to building scalable, high-quality code and pipelines with mentorship from senior teammates (see the sketch after this list).
Identify gaps and opportunities in existing science, measurement, and data solutions, bringing forward ideas for team discussion.
Continuously challenge and improve 84.51°'s analytical capabilities and products.
Share knowledge, contribute to best practices, and build efficiencies as part of a collaborative data science community.
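As a minimal illustration of the kind of pipeline code this role contributes to, here is a hedged sketch, with made-up household transaction data and column names, of a PySpark aggregation that produces a curated per-household spend table:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("household-metrics").getOrCreate()

# Hypothetical transaction data: one row per basket item.
tx = spark.createDataFrame(
    [("h1", "snacks", 3.99), ("h1", "produce", 2.49), ("h2", "snacks", 5.29)],
    ["household_id", "category", "spend"],
)

# Per-household spend by category: the kind of curated aggregate a
# downstream model or Power BI report might consume.
agg = (tx.groupBy("household_id", "category")
         .agg(F.sum("spend").alias("total_spend"),
              F.count("*").alias("items")))
agg.show()
```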
Qualifications, Skills & Experience
A bachelor's degree or higher in mathematics, statistics, computer science, data science, economics, or related discipline.
Demonstrated proficiency with components of our technology stack, such as Azure, Python, Spark, GitHub, Power BI, and Snowflake.
Experience working with databases, analyzing data, and presenting findings through academic projects, internships, or entry-level roles.
Strong analytical, problem-solving, and communication skills
Package building and code optimization experience or a strong desire to learn.
Ability to approach unstructured problems and develop clear, practical solutions with support from senior team members.
Ability to balance technical learning with business priorities to drive value.
Good organizational and time management skills, with the ability to handle multiple assignments.
Ability to collaborate effectively with cross-functional teams, including business stakeholders, product managers, and engineers
Natural curiosity and willingness to experiment and learn from mistakes.
Pay Transparency and Benefits
The stated salary range represents the entire span applicable across all geographic markets from lowest to highest. Actual salary offers will be determined by multiple factors including but not limited to geographic location, relevant experience, knowledge, skills, other job-related qualifications, and alignment with market data and cost of labor. In addition to salary, this position is also eligible for variable compensation.
Below is a list of some of the benefits we offer our associates:
Health: Medical, with competitive plan designs and support for self-care, wellness, and mental health; Dental, with in-network and out-of-network benefits; Vision, with in-network and out-of-network benefits.
Wealth: 401(k) with Roth option and matching contribution. Health Savings Account with matching contribution (requires participation in qualifying medical plan). AD&D and supplemental insurance options to help ensure additional protection for you.
Happiness: Paid time off with flexibility to meet your life needs, including 5 weeks of vacation time, 7 health and wellness days, 3 floating holidays, as well as 6 company-paid holidays per year. Paid leave for maternity, paternity and family care instances.
Pay Range: $73,000-$125,350 USD
ETL Architect
Scadea Solutions
Data engineer job in Cincinnati, OH
Job title: ETL Architect
Duration: 18 months
Years of experience: 7-10
Interview type: Phone screen to hire
Required skills:
• Experience with DataStage and ETL design
Technical:
• Requirement gathering; converting business requirements to technical specs to profile
• Worked hands-on in a minimum of 2 projects with DataStage
• Understand the process of developing an ETL design that supports multiple DataStage developers
• Be able to create an ETL design framework and related specifications for use by ETL developers
• Define standards and best practices for DataStage ETL to be followed by all DataStage developers
• Understanding of Data Warehouse and Data Mart concepts and implementation experience
• Be able to review code produced to ensure conformance with the developed ETL framework and design for reuse
• Preferably experienced user-level competency in IBM's metadata products, DataStage and the InfoSphere product line
• Be able to design ETL for Oracle, SQL Server, or any database
• Good analytical skills and process design
• Ensuring compliance with quality standards and delivery timelines
Qualifications
Bachelor's degree
Additional Information
Job Description:
Performs highly complex application programming/systems development and support. Performs highly complex configuration of business rules and technical parameters of software products. Reviews business requirements and develops application design documentation. Builds technical components (Maximo objects, TRM Rules, Java extensions, etc.) based on detailed design.
Performs unit testing of components along with completing necessary documentation. Supports product test, user acceptance test, etc., as a member of the fix-it team. Employs consistent measurement techniques. Includes testing in project plans and establishes controls to require adherence to test plans. Manages the interrelationships among various projects or work objectives.
AI Data Scientist
Medpace
Data engineer job in Cincinnati, OH
We are currently seeking an experienced data scientist to join our AI team who will support and lead data flow, advanced analytical needs and AI tools across Medpace. The AI team utilizes analytical principles and techniques to identify, collate and analyze many data sources and works with teams across Medpace to support efficiency and business gains for pharmaceutical development. The AI Data Scientist will support various projects across the company to bring data sources together in a consistent manner, work with the business to identify the value of AI, identify appropriate solutions and work with IT to ensure they are developed and built into the relevant systems. The team is seeking an experienced candidate to contribute new skills to our team, support team growth and foster AI development.
The AI Team is a highly collaborative team with members in both the Cincinnati and London offices. This team supports many teams across the business, including clinical operations, medical, labs, business development, and business operations. The AI Team also works side-by-side with data engineering, business analytics, and software engineering to architect innovative data storage and access solutions for optimal data utilization strategies. If you are an individual with experience in informatics, data science, or computer science, please review the following career opportunity.
Responsibilities
* Explore and work with different data sources to collate into knowledge;
* Work with different business teams across the company with a variety of different business needs to identify potential areas that AI can support;
* Manage the process of taking AI opportunities from discovery research to PoC to production with the business teams, and support tasks for IT developers;
* Try out different AI tools to substantiate their potential with the business team;
* Translate results into compelling visualizations which illustrate the overall benefits of the use of AI and identify with the business team the overall value of its use;
* Develop and map database architecture of methodological and clinical data systems;
* Convert business tasks into meaningful developer Jira tasks for sprints;
* Support departmental process improvement initiatives that can include AI; and
* Participate in training and development of more junior team members.
Qualifications
* Master's degree or higher in informatics, computer science/engineering, health information, statistics, or related field required;
* 2 or more years of experience as a Data Scientist or closely related;
* Experience applying machine learning to pharmaceutical or clinical data (or translatable artificial intelligence [AI] techniques from other industries);
* Advanced computer programming skills (preferred language: Python);
* Analytical thinker with great attention to detail;
* Ability to prioritize multiple projects and tasks within tight timelines; and
* Excellent written and verbal communication skills.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas, including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral, and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Data Engineer III
Stratacuity
Data engineer job in Cincinnati, OH
Job Title: Data Engineer
Pay Range: $68/hr-$72/hr
Contract Length: 12 months
About the Role
I'm looking for a Data Engineer to support a Data Science Enablement team focused on powering enterprise ML/AI workflows. This role centers on accelerating the migration of existing R Shiny dashboards to Streamlit, backed by Snowflake, and supporting analytics and risk teams with reliable, production-ready data.
You'll play a key part in transitioning legacy dashboards and data assets to a modern, governed, high‑performance environment. The target is to fully retire the legacy platform by the end of 2026, and you'll help drive that transformation by building scalable Streamlit applications and curated datasets that serve data science and credit risk users.
This position is ideal for someone who enjoys moving quickly for users, collaborating with data scientists and ML engineers, and working with modern data tooling. If you've supported model development, reporting, or deployment workflows before - and want to deepen your experience in ML/AI enablement - this role offers strong growth potential.
What You'll Do
* Migrate prioritized R Shiny dashboards to Streamlit, improving performance and delivery speed
* Build reusable application patterns (state management, caching, charts, tables, exports), as in the sketch after this list
* Develop and maintain production‑ready datasets in Snowflake
* Collaborate with data science and ML engineering partners to align dashboards with model pipelines and monitoring workflows
* Apply governance, security, and performance best practices across apps and datasets
* Contribute to lightweight testing, CI/CD workflows, and operational telemetry
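A minimal sketch of the Streamlit-on-Snowflake pattern referenced above, with hypothetical connection parameters, table, and column names; the caching decorator keeps widget-driven reruns from re-querying the warehouse:

```python
import streamlit as st
import pandas as pd
import snowflake.connector  # assumes snowflake-connector-python is installed

# Reusable caching pattern: query results are memoized for 10 minutes so
# reruns triggered by widget changes don't re-hit the warehouse.
@st.cache_data(ttl=600)
def load_portfolio(segment: str) -> pd.DataFrame:
    conn = snowflake.connector.connect(
        account="...", user="...", password="...",  # placeholder credentials
        warehouse="ANALYTICS_WH", database="RISK", schema="CURATED",
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT * FROM loan_portfolio WHERE segment = %s", (segment,))
        return cur.fetch_pandas_all()
    finally:
        conn.close()

segment = st.selectbox("Segment", ["auto", "card", "mortgage"])
df = load_portfolio(segment)
st.dataframe(df)
st.download_button("Export CSV", df.to_csv(index=False), "portfolio.csv")
```

The same function wraps state, caching, tables, and exports, which is exactly the kind of reusable building block a dashboard migration accumulates.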
Required Technical Skills
* Streamlit
* Snowflake
* Python
* R / R Shiny familiarity
* SQL
Nice‑to‑Have Technical Skills
* dbt
* DB2
* ETL / orchestration tools
* AWS SageMaker
* Snowpark
* CI/CD tools (e.g., GitHub Actions)
EEO Employer
Apex Systems is an equal opportunity employer. We do not discriminate or allow discrimination on the basis of race, color, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related medical conditions), age, sexual orientation, gender identity, national origin, ancestry, citizenship, genetic information, registered domestic partner status, marital status, disability, status as a crime victim, protected veteran status, political affiliation, union membership, or any other characteristic protected by law. Apex will consider qualified applicants with criminal histories in a manner consistent with the requirements of applicable law. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation in using our website for a search or application, please contact our Employee Services Department at [email protected] or ************.
Apex Systems is a world-class IT services company that serves thousands of clients across the globe. When you join Apex, you become part of a team that values innovation, collaboration, and continuous learning. We offer quality career resources, training, certifications, development opportunities, and a comprehensive benefits package. Our commitment to excellence is reflected in many awards, including ClearlyRated's Best of Staffing in Talent Satisfaction in the United States and Great Place to Work in the United Kingdom and Mexico. Apex uses a virtual recruiter as part of the application process. Click here for more details.
Apex Benefits Overview: Apex offers a range of supplemental benefits, including medical, dental, vision, life, disability, and other insurance plans that offer an optional layer of financial protection. We offer an ESPP (employee stock purchase program) and a 401K program which allows you to contribute typically within 30 days of starting, with a company match after 12 months of tenure. Apex also offers a HSA (Health Savings Account on the HDHP plan), a SupportLinc Employee Assistance Program (EAP) with up to 8 free counseling sessions, a corporate discount savings program and other discounts. In terms of professional development, Apex hosts an on-demand training program, provides access to certification prep and a library of technical and leadership courses/books/seminars once you have 6+ months of tenure, and certification discounts and other perks to associations that include CompTIA and IIBA. Apex has a dedicated customer service team for our Consultants that can address questions around benefits and other resources, as well as a certified Career Coach. You can access a full list of our benefits, programs, support teams and resources within our 'Welcome Packet' as well, which an Apex team member can provide.
Employee Type:
Contract
Location:
Cincinnati, OH, US
Job Type:
Date Posted:
January 7, 2026
Pay Range:
$68 - $71 per hour
Data Scientist - Clinical and Operational Analytics
Venesco LLC
Data engineer job in Dayton, OH
Requirements
Mandatory Qualifications:
• Bachelor's degree in a quantitative field (e.g., Computer Science, Applied Math).
• 3+ years of experience in predictive analytics.
• Proficiency in Python, NumPy, Pandas, Matplotlib, and Scikit-learn.
• Ability to explain and implement ML algorithms from scratch.
• Signed NDA and HIPAA training required upon start.
Desired Qualifications:
• Experience with dashboard development and pretrained language models.
• Experience with dimensionality reduction and deep learning libraries (TensorFlow, PyTorch).
• Familiarity with human biology and performance.
Key Tasks and Responsibilities:
• Develop and tune unsupervised tree-based clustering models.
• Implement decision trees, k-NN, and optimized list sorting algorithms.
• Generate and minimize distance matrices using vectorized code (see the sketch after this list).
• Collaborate with software engineers and maintain HIPAA compliance.
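A minimal sketch of the vectorized distance-matrix task mentioned above, using only NumPy; the data shape and neighbor count are hypothetical:

```python
import numpy as np

def pairwise_sq_distances(X: np.ndarray) -> np.ndarray:
    """Squared Euclidean distance matrix without Python loops, using the
    identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b."""
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.maximum(d2, 0.0)  # clip tiny negatives from rounding error

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))
D2 = pairwise_sq_distances(X)

# k-NN directly from the distance matrix: indices of the 5 nearest
# neighbors per row (column 0 is the point itself at distance 0).
knn = np.argsort(D2, axis=1)[:, 1:6]
```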
Data Scientist
Core4Ce Careers
Data engineer job in Dayton, OH
We are seeking a highly skilled Data Scientist / Machine Learning Engineer to develop advanced analytics and machine learning solutions that drive meaningful insights for our customers. In this role, you will design and test algorithms, build data-driven experiments, and collaborate closely with SMEs and developers to transform data into actionable intelligence. This position is ideal for someone who excels at both innovative research and practical implementation.
Key Responsibilities:
Algorithm Development: Develop machine learning, data mining, statistical, and graph-based algorithms to analyze complex data sets and uncover meaningful patterns.
Model Evaluation: Test, validate, and down-select algorithms to determine the best-performing models for customer requirements.
Experimental Design & Data Generation: Design experiments and create synthetic or simulated data when training/example data sets are limited or unavailable (see the sketch after this list).
Data Visualization & Reporting: Produce clear reports, dashboards, and visualizations that communicate data insights to customers and stakeholders in an intuitive manner.
Automation & SME Collaboration: Work with subject matter experts to convert manual analytic workflows into efficient, automated analytics solutions.
Cross-Functional Development: Collaborate with software developers to ensure algorithms are properly implemented, optimized, and integrated into production systems.
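A minimal sketch of the experimental-design responsibility above: generating synthetic labeled data under a simple assumed signal and validating that a model recovers it (all numbers and the generative assumption are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# When labeled training data is unavailable, simulate it from a simple
# generative assumption, then check that a model recovers the signal.
rng = np.random.default_rng(7)
n = 2_000
X = rng.normal(size=(n, 6))
# Hypothetical ground truth: class depends on two features plus noise.
logits = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```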
*This position is designed to be flexible, with responsibilities evolving to meet business needs and enable individual growth.
Required Qualifications:
Active TS/SCI security clearance with the ability to obtain a CI poly.
OPIR Experience
Modeling and Simulation Experience
Experience designing, training, and validating machine learning models and statistical algorithms.
Proficiency with Python, R, or similar languages used for analytics and model development.
Hands-on experience with data visualization tools (e.g., Tableau, Power BI, matplotlib, seaborn).
Strong understanding of experimental design and data generation strategies.
Ability to communicate complex analytic concepts to both technical and non-technical audiences.
Demonstrated ability to work collaboratively across multidisciplinary teams.
Degree in Mathematics/Statistics, Computer Science, or a relevant domain field.
MA/MS degree with 13+ years of relevant experience, OR
BA/BS degree with 15+ years of relevant experience in a discipline aligned with the position's responsibilities.
Why Work for Us?
Core4ce is a team of innovators, self-starters, and critical thinkers, driven by a shared mission to strengthen national security and advance warfighting outcomes.
We offer:
401(k) with 100% company match on the first 6% deferred, with immediate vesting
Comprehensive medical, dental, and vision coverage; employee portion paid 100% by Core4ce
Unlimited access to training and certifications, with no pre-set cap on eligible professional development
Tuition assistance for job-related degrees and courses
Paid parental leave, PTO that grows with tenure, and generous holiday schedules
Got a big idea? At Core4ce, The Forge gives every employee the chance to propose bold innovations and help bring them to life with internal backing.
Join us to build a career that matters, supported by a company that invests in you.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), national origin, disability, veteran status, age, genetic information, or other legally protected status.
Data Engineer
Total Quality Logistics, Inc.
Data engineer job in Cincinnati, OH
Country: USA | State: Ohio | City: Cincinnati
Descriptions & requirements
About the role: As a Data Engineer with TQL, you will support the FP&A department by developing scalable reporting solutions in Microsoft Fabric. This role will focus on migrating data from on-premises systems to the cloud, building and optimizing SQL views and pipelines, and creating governed Power BI datasets and semantic models.
What's in it for you:
* $85,000-$125,000 base salary + performance bonuses
* Advancement opportunities with aggressive and structured career paths
* A culture of continuous education and technical training with reimbursements available
* Comprehensive benefits package
* Health, dental and vision coverage
* 401(k) with company match
* Perks including employee discounts, financial wellness planning, tuition reimbursement and more
What you'll be doing:
* Migrate FP&A datasets from on-premises to Microsoft Fabric/Lakehouse
* Build and maintain SQL pipelines, transformations, and views that support reporting needs
* Ensure performance, scalability, and reliability through automation, monitoring, and CI/CD best practices
* Design, publish, and manage Power BI certified datasets, semantic models, and reports/dashboards
* Apply best practices in DAX, modeling, and governance to enable accurate, self-service reporting
* Partner with Finance stakeholders to translate reporting requirements into technical deliverables
* Implement processes to ensure accuracy, consistency, and reconciliation across financial and operational systems
* Maintain documentation of data models, business logic, and reporting standards
* Troubleshoot and resolve issues impacting reporting accuracy or performance
* Collaborate with Data Governance and Quality teams to align with enterprise standards and metadata frameworks
What you need:
* Bachelor's degree in Computer Science, Information Systems, Data Engineering, or related field
* 3+ years of experience in BI/data engineering or analytics engineering
* Advanced SQL skills with proven experience in building and optimizing large-scale datasets
* Strong Power BI expertise (datasets, DAX, performance tuning, semantic models)
* Hands-on experience with Microsoft Fabric and Lakehouse/cloud data platforms preferred
* Knowledge of financial reporting concepts and ability to work with FP&A stakeholders
* Strong problem-solving skills and ability to bridge Finance and IT needs
Where you'll be: 4289 Ivy Pointe Boulevard, Cincinnati, Ohio 45245
Employment visa sponsorship is unavailable for this position. Applicants requiring employment visa sponsorship now or in the future (e.g., F-1 STEM OPT, H-1B, TN, J1 etc.) will not be considered.
About Us
Total Quality Logistics (TQL) is one of the largest freight brokerage firms in the nation. TQL connects customers with truckload freight that needs to be moved with quality carriers who have the capacity to move it.
As a company that operates 24/7/365, TQL manages work-life balance with sales support teams that assist with accounting, and after hours calls and specific needs. At TQL, the opportunities are endless which means that there is room for career advancement and the ability to write your own paycheck.
What's your worth? Our open and transparent communication from management creates a successful work environment and custom career path for our employees. TQL is an industry-leader in the logistics industry with unlimited potential. Be a part of something big.
Total Quality Logistics is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, genetic information, disability or protected veteran status.
If you are unable to apply online due to a disability, contact recruiting at ******************
Data Engineer
Tata Consulting Services
Data engineer job in Blue Ash, OH
* Proven experience as a Software Developer, with a strong focus on building scalable and efficient Python applications.
* Experience in developing Spark Structured Streaming applications is highly desirable.
* Minimum of 7+ years of professional software development experience.
* Strong analytical and problem-solving skills, with the ability to debug and optimize Spark jobs running on Databricks.
* Ability to work closely with cross-functional teams to deliver high-quality streaming solutions.
Technical Skills:
* Strong expertise in Python, PySpark, and Spark Structured Streaming (see the sketch after this list).
* Experience with Databricks and Azure.
* Familiarity with Delta Lake and Terraform scripting.
* Proficiency in working with varied data file formats (Avro, JSON, CSV) for ingestion and transformation.
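A minimal sketch of a Spark Structured Streaming job of the kind described above, reading newline-delimited JSON files and writing Delta output with checkpointing for exactly-once file processing; all paths are hypothetical placeholders, and the Delta format assumes the delta-spark package is on the cluster:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("json-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read JSON files as they land, transform, and filter bad records.
stream = (spark.readStream.schema(schema).json("/landing/events/")
            .withColumn("ingested_at", F.current_timestamp())
            .filter(F.col("amount") > 0))

# Checkpointing gives restartable, exactly-once file processing.
query = (stream.writeStream
           .format("delta")                # Delta Lake, per the posting
           .option("checkpointLocation", "/chk/events/")
           .outputMode("append")
           .start("/curated/events/"))
query.awaitTermination()
```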
Software Development:
* Proficiency in Object-Oriented Programming (OOP) concepts and software design principles.
* Ability to write clean, maintainable, and scalable Python code.
GitHub Actions:
* Experience in setting up and managing CI/CD pipelines using GitHub Actions to ensure smooth and automated deployment processes.
Agile Methodology:
* Experience working in an Agile/Scrum environment, with a focus on iterative development, continuous feedback, and delivery.
Nice to Haves:
* Python Unit Testing.
* Unity Catalog.
* Databricks Asset Bundles.
* Unit Testing/Mocking
TCS Employee Benefits Summary:
* Discretionary Annual Incentive.
* Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
* Family Support: Maternal & Parental Leaves.
* Insurance Options: Auto & Home Insurance, Identity Theft Protection.
* Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
* Time Off: Vacation, Time Off, Sick Leave & Holidays.
* Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Salary Range: $100,000-$120,000 a year
Data Analysis Engineer
Absolics Inc.
Data engineer job in Covington, KY
DUTIES/RESPONSIBILITIES
* Execute quality data analysis
* Administer root cause analysis
* Data collection and management
* Data analysis and interpretation
REQUIREMENTS
* Excellent analytical and problem-solving skills
* Solid understanding of quality management principles, root cause analysis, and corrective action processes
QUALIFICATIONS
* Must be legally permitted to work in the United States
* Proficiency in using quality management software and tools, as well as Microsoft Office applications
* Problem-solving mindset and the ability to work well under pressure to meet deadlines
* Strong analytical skills and attention to detail, with the ability to interpret data and trends to drive informed decisions
EDUCATION
* Bachelor's degree related to Quality or a similar industry
EXPERIENCE
* 5+ years of Quality experience
* Experience with Machine Learning
* Experience with Big Data Analysis & Automation
* Experience with Yield Management System from SK Hynix/Samsung Semiconductor
Cloud Data Engineer
Radiancetech
Data engineer job in Beavercreek, OH
Radiance Technologies, a rapidly growing employee-owned company supporting the Department of Defense, is searching for a Cloud Database Engineer to join our team. We are looking for a self-starter with excellent people skills to work with our customers. Employee ownership, generous 401K and profit sharing, competitive salaries, pleasant work environments, and stimulating assignments combine to make Radiance Technologies a great place to work and succeed.
We are seeking a Cloud Data Engineer to design, implement, and manage seamless data and object storage solutions across on-premises, cloud, inter-region, and inter-cloud platforms. The Cloud Data Engineer will also be responsible for designing, implementing, and managing databases in our cloud environment, ensuring high levels of data availability and assisting in the development of data models.
This role focuses on enabling secure, scalable, and resilient architectures that support structured and unstructured data, high-velocity pipelines, and multi-cloud ecosystems. The ideal candidate will have expertise in cloud-native data engineering, object storage platforms, and data transformation pipelines, with a strong understanding of managing data and storage across hybrid environments. This role requires close collaboration with platform, software, and analytics teams to ensure data and storage solutions align with mission-critical needs.
Candidates should have an in-depth understanding of database structure principles and experience with cloud platforms such as AWS, Google Cloud, or Microsoft Azure. Database experience should include relational databases (Oracle, PostgreSQL, etc.), graph databases (Neo4j, ArangoDB, etc.), search and analytics engines like ElasticSearch, and other NoSQL databases (MongoDB, Cassandra, etc.).
Key Responsibilities:
Seamless Data and Object Storage Management:
Design, implement, and manage databases and object storage solutions across on-premises, cloud, inter-region, and inter-cloud platforms.
Enable seamless data and object storage movement and synchronization between environments, ensuring high availability and minimal latency.
Engineer and optimize object storage platforms (e.g., AWS S3, MinIO, Ceph) for durability, performance, lifecycle management, and secure access (see the sketch after this list).
Data and Storage Architecture Optimization:
Build and maintain scalable data pipelines for structured and unstructured data, supporting ingestion, curation, metadata enrichment, and analytics workflows.
Modernize storage architectures, including migration from legacy systems (e.g., NFS/file shares) to object-based architectures.
Implement data tiering, record/object-level authorization, and secure access controls for both databases and object storage.
Collaboration and Integration:
Work closely with development teams to optimize database and object storage usage, integrating data services with containerized platforms (e.g., Kubernetes, OpenShift) and CI/CD pipelines.
Partner with platform engineers, DevSecOps teams, and mission users to align data and storage solutions with operational needs.
Governance and Security:
Implement and enforce data governance policies, tagging, metadata schemas, and access controls for both databases and object storage.
Develop and manage backup and restore procedures for databases and object storage, ensuring disaster recovery readiness.
Monitor system performance and resolve database and object storage performance and capacity issues.
Documentation and Troubleshooting:
Document architectures, data flows, object storage configurations, and operational procedures to reduce tribal knowledge.
Troubleshoot complex data, object storage, performance, and access issues across environments (DEV/INT/PROD).
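A minimal sketch of the lifecycle-management responsibility above, using boto3 against a hypothetical S3 bucket; the prefix and tiering thresholds are illustrative, not a mandated policy, and AWS credentials are assumed to be configured in the environment:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle policy: tier raw objects to infrequent access after 30 days,
# to Glacier after 180, and clean up abandoned multipart uploads.
s3.put_bucket_lifecycle_configuration(
    Bucket="mission-data-raw",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "raw-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": "raw/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)
```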
Required Qualifications:
Bachelor's degree in Computer Science, Engineering, Data Science, or a related STEM field (Master's preferred).
5+ years of relevant work experience in data engineering, database administration, or object storage management.
U.S. citizenship with the ability to obtain and maintain a DoD TS/SCI security clearance.
Proven experience managing databases and object storage solutions across on-premises, cloud, inter-region, and inter-cloud platforms.
Expertise in cloud platforms such as AWS, Google Cloud, or Microsoft Azure.
Hands-on experience with object storage technologies (e.g., AWS S3, MinIO, Ceph) and data transformation pipelines (e.g., Apache Airflow, Argo Workflows, Python).
Strong proficiency in Python, SQL, and modern data engineering frameworks.
Familiarity with both relational (e.g., Oracle, PostgreSQL) and non-relational databases (e.g., MongoDB, Cassandra).
Experience with data tiering, record/object-level authorization, and secure data handling.
Ability to handle multiple projects and deadlines in a fast-paced environment.
Desired Qualifications:
Active TS/SCI clearance.
Security+CE certification.
Experience supporting Intelligence Community (IC) mission programs.
Knowledge of data catalogs, search/indexing, and discovery tools.
Experience with streaming data (e.g., Kafka) and event-driven architectures.
Familiarity with DevSecOps, security scanning, and accreditation processes.
Understanding of analytics, machine learning (ML), or exploitation workflows consuming large datasets.
Prior experience modernizing or migrating legacy data systems.
EOE/Minorities/Females/Vet/Disabled
Senior Data Engineer
Apidel Technologies
Data engineer job in Blue Ash, OH
Job Description
The Engineer is responsible for staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks. The Engineer has overall responsibility in the technical design process, leading and participating in application technical design and completing estimates and work plans for design, development, implementation, and rollout tasks. The Engineer also communicates with the appropriate teams to ensure that assignments are delivered with the highest quality and in accordance with standards. The Engineer strives to continuously improve the software delivery processes and practices, and role models the company's core values of respect, honesty, integrity, diversity, inclusion, and safety of others.
Current tools and technologies include:
Databricks and Netezza
Key Responsibilities
Lead and participate in the design and implementation of large and/or architecturally significant applications.
Champion company standards and best practices. Work to continuously improve software delivery processes and practices.
Build partnerships across the application, business and infrastructure teams.
Set up the new customer data platform, migrating from Netezza to Databricks.
Complete estimates and work plans independently as appropriate for design, development, implementation and rollout tasks.
Communicate with the appropriate teams to ensure that assignments are managed appropriately and that completed assignments are of the highest quality.
Support and maintain applications utilizing required tools and technologies.
May direct the day-to-day work activities of other team members.
Must be able to perform the essential functions of this position with or without reasonable accommodation.
Work quickly with the team to implement new platform.
Be onsite with development team when necessary.
Behaviors/Skills:
Puts the Customer First - Anticipates customer needs, champions for the customer, acts with customers in mind, exceeds customers' expectations, gains customers' trust and respect.
Communicates effectively and candidly - Communicates clearly and directly, approachable, relates well to others, engages people and helps them understand change, provides and seeks feedback, articulates clearly, actively listens.
Achieves results through teamwork - Is open to diverse ideas, works inclusively and collaboratively, holds self and others accountable, involves others to accomplish individual and team goals.
Note to Vendors
Length of Contract: 9 months
Top skills: Databricks, Netezza
Soft Skills Needed: collaborating well with others, working in a team dynamic
Project person will be supporting: staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks that will replace Netezza
Team details (i.e., size, dynamics, locations): most of the team is located in Cincinnati, working onsite at the BTD
Work Location (in office, hybrid, remote): onsite at BTD when necessary, approximately 2-3 days a week
Is travel required: No
Max Rate (if applicable): best market rate
Required Working Hours: 8-5 EST
Interview process and when will it start: starting with one interview; process may change
Prescreening Details: standard questions; scores will carry over
When do you want this person to start: looking to hire quickly; the team is looking to move fast
Data Engineer (Technology Solutions Analyst IV) (Vacancy)
City of Columbus, OH
Data engineer job in Franklin, OH
Definition
* Data Integration & ETL Development: Design, build, and maintain ETL/ELT pipelines using tools like Talend, Informatica, ADF, SSIS, or similar.
* SQL & Data Querying: Strong SQL skills for data validation, profiling, troubleshooting, and optimizing large datasets.
* Database Systems: Experience with relational databases (SQL Server, PostgreSQL, Oracle), including schemas, normalization, indexing, and performance.
* API & File-Based Integration: Knowledge of REST/SOAP APIs, JSON, XML, flat files, and batch transfers; able to integrate with external systems securely.
* Data Mapping & Transformation Logic: Develop source-to-target mappings, define transformation rules, and implement business logic in pipelines.
* Data Modeling: Familiarity with dimensional, relational, and operational models to support analytics and operations.
* Data Quality & Validation: Implement validation rules, quality checks, error handling, auditing, and reconciliation in data pipelines.
* Programming & Scripting: Proficiency in Python, Java, or Scala to write, optimize, and debug code for data processing and ETL workflows.
* Automation & Scheduling: Use job scheduling and orchestration tools like Airflow, Control-M, Cron, or platform-native schedulers (see the sketch after this list).
* Cloud & Storage Concepts: Understanding of cloud storage, compute, and integration on Azure, AWS, or GCP (optional but valuable).
* Version Control & SDLC: Familiarity with Git and structured development processes, including code reviews, deployments, and documentation.
* Technical Requirement Gathering: Translate business requirements into technical specifications for data pipelines and integrations.
* Testing & Troubleshooting: Validate ETL processes, analyze outputs, resolve issues, and support repeatable UAT procedures.
* Documentation: Produce design documents, data flows, dictionaries, integration specs, and operational procedures.
* Analytical Problem-Solving: Analyze data flows and system behaviors to identify root causes and recommend fixes.
* Performance Optimization: Optimize queries, transformations, pipeline logic, and system configurations for throughput and reliability.
* Collaboration & Communication: Work with BAs, analysts, SMEs, application leads, and engineers to deliver accurate integrations.
* Tooling & Productivity: Use Smartsheet, Jira, SharePoint, or similar tools for task tracking and documentation.
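A minimal sketch of the scheduling and validation items above, assuming Airflow 2.x; the DAG name, schedule, and row counts are hypothetical placeholders for real reconciliation queries:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_load(**context):
    # Hypothetical reconciliation check: source and target row counts match.
    source_rows, target_rows = 10_000, 10_000  # stand-ins for real queries
    if source_rows != target_rows:
        raise ValueError(f"Reconciliation failed: {source_rows} != {target_rows}")

with DAG(
    dag_id="nightly_permit_etl",      # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",             # nightly at 2 AM (Airflow 2.4+ kwarg)
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    validate = PythonOperator(task_id="validate_load",
                              python_callable=validate_load)
```

Failing the task on a reconciliation mismatch lets the scheduler's retry and alerting machinery handle error escalation, rather than burying checks in ad hoc scripts.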
Under direction, is responsible for researching, designing, developing, and improving enterprise data and/or application solutions; performs related duties as required.
Examples of Work
(Any one position may not include all of the duties listed, nor do the examples cover all of the duties that may be performed.)
Develops data and/or applications solutions, which may include utilization of supportive software, data management platforms, database management systems, server applications, and/or web-based development systems;
Confers with departmental or divisional personnel to analyze current operational procedures, document potential problems, and identify specific data input and output requirements for software system development;
Analyzes requirements of data and/or application solutions, such as licensing requirements, data requirements, peripheral equipment, maintenance support, server requirements, access on mobile devices, or other system interfaces and requirements to determine feasibility of system designs within time and budget constraints;
Reviews the outline of the new or revised business system process(es) and makes recommendations for improvements by defining the hardware and software requirements of the proposed system(s) or process(es); evaluates integration requirements for data and applications and supportive databases; develops solutions for integration and/or extraction of data for cross-platform dependencies among application systems; prepares cost estimates and project timelines for implementation;
Consults with technical staff to evaluate business processes, needs, expectations, and functional requirements and translates into technical requirements; designs and proposes solutions to streamline or enhance business functions/processes;
Develops solutions for citywide data integration and management; creates synthesized data models, transformations and visualizations for deployment of analytics to meet the business or operational needs of the City;
Researches third-party software systems for feasibility of design and compatibility and adaptability with existing architecture and business processes; reviews proposed hardware and software solutions and recommends selection to management for approval;
Formulates, designs, configures, and/or modifies software systems using scientific analysis to predict and measure outcome and consequences of design;
Develops and directs software system testing procedures, programming, and documentation;
Serves as senior consultant for database design, implementation, and administration, including security, backup, recovery, and maintenance; utilizes administrative rights within software applications to mass import updates to an application system;
Advises departmental or divisional personnel in their technology needs with regard to data corruption, security issues, computer viruses, and hardware and software redundancy;
Consults with staff members to evaluate interface between hardware and software and operational and performance requirements of software system; analyzes system growth and evaluates processing efficiency;
Plans and prepares technical reports, memoranda, and instructional manuals as documentation of system development;
Makes recommendations to management regarding the planning, development, and coordination of software system development projects;
Performs engineering cost/benefit analysis to verify potential effectiveness of new products; conducts technical research for building new designs, developing business cases, selling ideas to management, and gaining commitment for new system enhancements;
Mentors business systems analysts and senior programmer analysts in their work and individual projects as requested by management;
Participates in appropriate professional activities to stay abreast of existing and emerging technologies.
Minimum Qualifications
Possession of a bachelor's degree and four (4) years of experience in systems analysis, database management, applications development, or software design. Substitution(s): Valid possession of one (1) of the following certifications may be substituted for the required education: Microsoft Certified Solutions Developer (all tracks), Microsoft Certified Solutions Expert (Data Platform or Business Intelligence), or Geographic Information Systems Professional (GISP). Additional experience as specified above may substitute for the educational requirement on a year-for-year basis. Possession of a master's degree may be substituted for one (1) year of the required experience.
Test/Job Contact Information
Recruitment #: 25-0585-V7
Employment Type: Full-Time (Regular)
Should you have questions regarding this vacancy, please contact:
Kimberly Hetterscheidt
Department of Technology
Division of Information Services
1111 E. Broad St.
Columbus, Ohio 43205
P: **************
E:***************************
The City of Columbus is an Equal Opportunity Employer
Senior Data Engineer
General Electric Credit Union
Data engineer job in Cincinnati, OH
General Electric Credit Union is a not-for-profit, member-owned, full-service financial institution headquartered in Cincinnati with branches in Ohio and Kentucky. At GECU, we pride ourselves on maintaining quality service, being an employee-friendly workplace, and developing our team members, teaching the skills that lead to career advancement opportunities.
Overview: The Senior Data Engineer will play a key role in developing and optimizing GECU's data infrastructure to support the organization's data-driven initiatives. The Senior Data Engineer will design, build, and maintain scalable data pipelines and systems, working with the data and development teams.
Essential Responsibilities:
Design, implement, and maintain robust, scalable, and high-performance data pipelines and ETL processes to collect, process, and store large volumes of structured and unstructured data (a minimal sketch follows this list).
Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
Develop and maintain data warehouse and data lake solutions, ensuring data quality, integrity, and reliability.
Optimize data pipelines and ETL processes for performance, efficiency, and cost-effectiveness, utilizing best practices and technologies.
Implement data governance and security measures to ensure compliance with regulatory requirements and data privacy standards.
Troubleshoot and resolve issues related to data processing, data quality, and system performance in a timely manner.
Evaluate and recommend new technologies, tools, and frameworks to enhance the organization's data infrastructure and capabilities.
Document technical specifications, data lineage, and system architecture to facilitate knowledge sharing and collaboration.
Collaborate with other key data employees to maintain and publish data definitions and data catalogue.
Stay up to date with industry trends and emerging technologies in data engineering and analytics.
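As a loose sketch of the pipeline work described above (the file paths, column names, and Parquet target are assumptions for illustration, not GECU systems), an extract-transform-load step with a simple data quality gate might look like this in Python:

    # Minimal ETL sketch with a fail-fast data quality gate.
    import pandas as pd

    def extract(path: str) -> pd.DataFrame:
        # Extract: read a raw CSV drop from an upstream system.
        return pd.read_csv(path)

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        # Transform: normalize types and drop records failing basic checks.
        df["posted_at"] = pd.to_datetime(df["posted_at"], errors="coerce")
        return df.dropna(subset=["transaction_id", "posted_at"])

    def quality_check(df: pd.DataFrame) -> None:
        # Fail fast if the load would violate a simple integrity rule.
        if df["transaction_id"].duplicated().any():
            raise ValueError("duplicate transaction_id rows detected")

    def load(df: pd.DataFrame, target: str) -> None:
        # Load: a real pipeline might write to Snowflake or a data lake;
        # a local Parquet file (via pyarrow) stands in here.
        df.to_parquet(target, index=False)

A production version would add orchestration, retries, and the logging and monitoring hooks the responsibilities above call for.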
Education and Experience:
High school diploma or GED required; Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree is a plus.
Minimum of 6 years' experience in data engineering, working with data warehousing concepts, database technologies (e.g., SQL, NoSQL), and distributed computing architectures.
Experience with Snowflake Data Warehouse preferred
Knowledge, Skills, and Abilities:
Strong programming skills in languages such as Python, Java, Scala, or SQL, with experience in data manipulation, transformation, and analysis.
Knowledge of cloud platforms such as AWS, Azure, or Google Cloud Platform.
Extensive knowledge of data modeling, schema design, and optimization techniques for relational and non-relational databases.
Excellent problem-solving and troubleshooting skills, with the ability to diagnose and resolve complex technical issues.
Strong communication and collaboration skills, with the ability to work effectively in a team environment and interact with stakeholders at all levels.
Ability to perform independently and competently to accomplish necessary deliverables accurately and on time.
Ability to assist and mentor junior data engineers
At GECU, we want to support your wellbeing by offering a wide range of benefits:
Health, Dental and Vision insurance
Life and Disability insurance options
Paid Time Off that starts accruing once hired, plus your birthday off, paid
401k Retirement plan with up to a 10% match of your base gross compensation
Tuition reimbursement opportunities & professional development
Volunteer opportunities, and earn additional PTO hours!
On-site clinics for Vaccines and Mammograms
And many more!
Come join GECU: we cultivate a culture of respect, understanding, and mutual recognition. We believe forming bonds and connecting with each other only strengthens the service we provide to our members in our mission of improving the quality of their financial lives!
General Electric Credit Union is an Equal Opportunity Employer
$77k-101k yearly est. 60d+ ago
Software Development Engineer in Test IV
CareSource 4.9
Data engineer job in Dayton, OH
The Software Development Engineer in Test IV (SDET IV) is responsible for leading the design, development, and execution of sophisticated automated test solutions, tools, and frameworks. This senior role requires a strong foundation in software development, test automation, and quality assurance practices, with a proven ability to work in Agile/SCRUM environments. The SDET IV also mentors junior team members and drives the overall testing strategy for the organization, ensuring the delivery of high-quality software products.
Essential Functions:
Lead the design, development, and maintenance of advanced automation frameworks and tools using C#, .NET, Python, and JavaScript to support testing for various applications.
Architect and implement AI-driven testing solutions and methodologies to enhance testing efficiency and coverage.
Develop and maintain automation scripts using UFT, UiPath Studio, and other tools to test Facets, web apps, and UiPath RPA solutions.
Create, manage, and execute comprehensive test cases using OpenText Application Lifecycle Management (ALM), Azure DevOps, and UiPath Orchestrator, ensuring high accuracy and efficiency.
Build and integrate automated test cases into Azure DevOps CI/CD pipelines for robust regression and continuous testing coverage.
Utilize SQL for backend validation, data-driven testing, and analysis of relational databases (a hedged sketch follows this function list).
Manage build environments and automation tasks using Microsoft Team Foundation Server (TFS) and Azure DevOps.
Maintain source code version control using Git, Microsoft Team Foundation Server (TFS), Azure DevOps, Subversion (SVN), and support Agile/SCRUM practices throughout the SDLC.
Identify, log, and track software defects, providing detailed documentation and collaborating with configuration development teams for resolution.
Participate in code reviews and provide constructive feedback on testability, quality, and best practices.
Research, evaluate, and implement new testing technologies and quality assurance methodologies to drive innovation.
Support and manage virtualization environments with VMware vSphere.
Mentor and lead junior and mid-level team members, fostering a culture of knowledge sharing and continuous improvement.
Drive the overall testing strategy and best practices across the organization, ensuring alignment with business goals and objectives.
Perform any other job related duties as requested.
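To ground the SQL backend-validation duty above in something concrete, here is a hedged, data-driven sketch using Python and pytest (the posting also emphasizes C#/.NET; the claims table, statuses, and expected counts are invented, with in-memory SQLite standing in for the real database):

    # Data-driven backend validation: each case asserts a row count.
    import sqlite3
    import pytest

    @pytest.fixture()
    def conn():
        c = sqlite3.connect(":memory:")
        c.execute("CREATE TABLE claims (id INTEGER, status TEXT)")
        c.executemany("INSERT INTO claims VALUES (?, ?)",
                      [(1, "PAID"), (2, "DENIED"), (3, "PAID")])
        yield c
        c.close()

    @pytest.mark.parametrize("status,expected", [("PAID", 2), ("DENIED", 1)])
    def test_claim_counts(conn, status, expected):
        row = conn.execute("SELECT COUNT(*) FROM claims WHERE status = ?",
                           (status,)).fetchone()
        assert row[0] == expected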
Education and Experience:
Bachelor's degree in Computer Science, Engineering, or a related field required
Equivalent years of relevant work experience may be accepted in lieu of required education
Seven (7) years of experience as a Software Development Engineer in Test (SDET) or in a similar role required
Proven experience in software testing methodologies and practices required
Experience in developing automated test scripts and frameworks required
Competencies, Knowledge and Skills:
Exposure to Facets/GuidingCare or an equivalent system is preferred.
High level of proficiency in C#, .NET Framework, Python, JavaScript, and scripting languages such as VBScript.
Familiarity with UI test automation tools (e.g., UFT), RPA tools (e.g., UiPath), and test management platforms (e.g., OpenText ALM, Azure DevOps).
Familiarity with Azure DevOps, including API integration and pipeline configuration for automated testing.
Strong understanding of Agile/SCRUM methodologies and the full software development lifecycle.
Knowledge of XML, HTML, SQL, and data validation techniques.
Strong understanding of test case design, defect tracking, and requirement traceability documentation.
Excellent problem-solving and debugging skills, with the ability to analyze and troubleshoot complex systems.
Familiarity with test automation frameworks (e.g., Selenium, NUnit) and CI/CD processes.
Understanding of API testing and experience with tools such as Postman and SoapUI.
Expert-level proficiency in .NET technologies, including ASP.NET, .NET Core, and related frameworks, with a focus on architectural design and implementation.
Extensive experience in leading the development and testing of complex applications built on the .NET platform, particularly using C#.
Deep knowledge of .NET testing tools and frameworks (e.g., NUnit, MSTest), with the ability to drive the adoption of innovative testing methodologies across the organization.
Responsible for establishing best practices and standards for .NET development and testing, ensuring alignment with organizational goals.
Expert in UiPath for Robotic Process Automation (RPA) solutions, with a focus on architecting scalable and efficient automation frameworks.
Ability to lead the design, development, and testing of end-to-end RPA workflows using UiPath, ensuring alignment with organizational goals.
Deep knowledge of RPA best practices and methodologies, including governance, compliance, and performance optimization.
Ability to drive strategic initiatives to integrate RPA solutions across the organization, mentoring team members in advanced RPA techniques and fostering a culture of continuous improvement.
Expert in generative AI technologies and large language models (e.g., GPT-4), with a strong focus on designing and deploying end-to-end AI architectures to solve complex testing challenges.
Proficient in advanced Python scripting and utilizing AI frameworks to optimize testing processes and expand test coverage comprehensively.
Drives the implementation of AI-driven automation strategies across the organization, collaborating with cross-functional teams to maximize business value.
Leads initiatives to research and integrate cutting-edge AI technologies into testing methodologies, fostering innovation within the team.
Deep understanding of AI and machine learning algorithms, with the ability to mentor others in best practices and advanced techniques.
Licensure and Certification:
None
Working Conditions:
General office environment; may be required to sit or stand for extended periods of time
Travel is not typically required
Compensation Range:
$94,100.00 - $164,800.00
CareSource takes into consideration a combination of a candidate's education, training, and experience as well as the position's scope and complexity, the discretion and latitude required for the role, and other external and internal data when establishing a salary level. In addition to base compensation, you may qualify for a bonus tied to company and individual performance. We are highly invested in every employee's total well-being and offer a substantial and comprehensive total rewards package.
Compensation Type:
Salary
Competencies:
Fostering a Collaborative Workplace Culture
Cultivate Partnerships
Develop Self and Others
Drive Execution
Influence Others
Pursue Personal Excellence
Understand the Business
This list is not all-inclusive. CareSource reserves the right to amend this job description at any time. CareSource is an Equal Opportunity Employer. We are dedicated to fostering an environment of belonging that welcomes and supports individuals of all backgrounds.
#LI-GB1
$94.1k-164.8k yearly 4d ago
ETL Architect
Scadea Solutions
Data engineer job in Cincinnati, OH
Job title: ETL Architect
DURATION: 18 months
YEARS OF EXPERIENCE: 7-10
INTERVIEW TYPE: Phone Screen to Hire
REQUIRED SKILLS
• Experience with DataStage and ETL design
Technical
• Requirement gathering, converting business requirements to technical specs to profile
• Hands-on work on a minimum of two (2) DataStage projects
• Understand the process of developing an ETL design that supports multiple DataStage developers
• Be able to create an ETL design framework and related specifications for use by ETL developers (a conceptual sketch follows this list)
• Define standards and best practices for DataStage ETL to be followed by all DataStage developers
• Understanding of data warehouse and data mart concepts, with implementation experience
• Be able to review produced code to ensure conformance with the developed ETL framework and design for reuse
• Preferably, experienced user-level competency in IBM's metadata product and the DataStage and InfoSphere product line
• Be able to design ETL for Oracle, SQL Server, or any database
• Good analytical and process design skills
• Ensuring compliance with quality standards and delivery timelines.
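DataStage jobs are built graphically, so no snippet can reproduce one; purely to illustrate what an "ETL design framework for use by ETL developers" means, this Python sketch shows the underlying idea: a base class that fixes the extract/transform/load contract and orchestration so every developer's job follows the same standard:

    # Conceptual only: a template enforcing one ETL contract for all jobs.
    from abc import ABC, abstractmethod

    class EtlJob(ABC):
        @abstractmethod
        def extract(self) -> list[dict]: ...

        @abstractmethod
        def transform(self, rows: list[dict]) -> list[dict]: ...

        @abstractmethod
        def load(self, rows: list[dict]) -> None: ...

        def run(self) -> None:
            # Standardized orchestration every job inherits unchanged.
            self.load(self.transform(self.extract()))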
Qualifications
Bachelors
Additional Information
Job Description:
Performs highly complex application programming/systems development and support. Performs highly complex configuration of business rules and technical parameters of software products. Reviews business requirements and develops application design documentation. Builds technical components (Maximo objects, TRM rules, Java extensions, etc.) based on detailed design.
Performs unit testing of components along with completing the necessary documentation. Supports product test, user acceptance test, etc., as a member of the fix-it team. Employs consistent measurement techniques. Includes testing in project plans and establishes controls to require adherence to test plans. Manages the interrelationships among various projects or work objectives.
$86k-113k yearly est. 60d+ ago
Junior Data Scientist
Medpace 4.5
Data engineer job in Cincinnati, OH
The Medpace Analytics and Business Intelligence team is growing rapidly and is focused on building a data driven culture across the enterprise. The BI team uses data and insights to drive increased strategic and operational efficiencies across the organization. As a Junior Data Scientist, you will hold a highly visible analytical role that requires interaction and partnership with leadership across the Medpace organization.
What's in this for you?
* Work in a collaborative, fast paced, entrepreneurial, and innovative workplace;
* Gain experience and exposure to advanced BI concepts from visualization to data warehousing;
* Grow business knowledge by working with leadership across all aspects of Medpace's business.
Responsibilities
* Data Collection & Cleaning: Gather, clean, and preprocess large, raw datasets;
* Analysis & Modeling: Perform statistical analysis, build & validate machine learning models, and test hypotheses (e.g., A/B testing; a brief sketch follows this list);
* Algorithm Development: Create algorithms to manage and interpret information, often automating processes;
* Insight Generation: Discover trends, patterns, and insights to inform business strategy;
* Visualization & Communication: Present complex findings visually (dashboards, charts) and verbally to technical and non-technical teams;
* Collaboration: Work with engineering, product, and business teams to implement solutions;
* Model Monitoring: Deploy and maintain models, iterating for continuous improvement.
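As a small illustration of the hypothesis-testing bullet above (the samples below are synthetic placeholders, not Medpace data), a two-sample Welch's t-test on an A/B experiment might look like:

    # Synthetic A/B comparison with a two-sample Welch's t-test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=10.0, scale=2.0, size=200)  # control
    group_b = rng.normal(loc=9.5, scale=2.0, size=200)   # treatment

    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value suggests the two variants' means genuinely differ.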
Qualifications
* Bachelor's Degree in Business, Life Science, Computer Science, or Related Degree;
* 0-3 years of experience in business intelligence or analytics; Python, R, and SQL heavily preferred;
* Strong analytical and communication skills;
* Excellent organization skills and the ability to multitask while efficiently completing high quality work.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas, including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral, and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
$69k-98k yearly est. 10d ago
Data Scientist - Clinical and Operational Analytics
Venesco LLC
Data engineer job in Dayton, OH
Job Description:
Develop and deploy machine learning models to support clinical and operational decision-making. Work with large datasets to extract insights and support predictive analytics for human performance and health.
Requirements:
Mandatory Qualifications:
• Bachelor's degree in a quantitative field (e.g., Computer Science, Applied Math).
• 3+ years of experience in predictive analytics.
• Proficiency in Python, NumPy, Pandas, Matplotlib, and Scikit-learn.
• Ability to explain and implement ML algorithms from scratch.
• Signed NDA and HIPAA training required upon start.
Desired Qualifications:
• Experience with dashboard development and pretrained language models.
• Experience with dimensionality reduction and deep learning libraries (TensorFlow, PyTorch).
• Familiarity with human biology and performance.
Key Tasks and Responsibilities:
• Develop and tune unsupervised tree-based clustering models.
• Implement decision trees, k-NN, and optimized list sorting algorithms.
• Generate and minimize distance matrices using vectorized code (see the sketch after this list).
• Collaborate with software engineers and maintain HIPAA compliance.
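As a brief sketch of the vectorized distance-matrix task above (the points are random placeholders), pairwise Euclidean distances can be generated with NumPy broadcasting instead of Python loops:

    # Pairwise Euclidean distance matrix via broadcasting, no loops.
    import numpy as np

    X = np.random.default_rng(0).random((5, 3))  # 5 points in R^3

    # (n, 1, d) - (1, n, d) broadcasts to an (n, n, d) difference tensor.
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))     # (n, n) distance matrix

    assert np.allclose(np.diag(dist), 0.0)       # self-distances are zero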
$69k-95k yearly est. 7d ago
Data Engineer
Tata Consulting Services 4.3
Data engineer job in Cincinnati, OH
* 2+ years of proven professional data development experience
* 2+ years developing with SQL
* 4+ years Python development
* 3+ years Java, Spring Framework development
* Object-Oriented Programming
* 3+ years Distributed Data Processing (PySpark, Snowpark)
* Proficient in CI/CD practices
* Automated data pipeline orchestration
* Data observability - Logging, Monitoring, and Alerting
* Databricks and/or Snowflake
* API development
* Data quality checks
* Cloud Technologies (Azure preferred)
Roles & Responsibilities:
* Develop distributed data processing data pipeline solutions
* Orchestrate multi-step data transformation pipelines
* Perform unit, integration, and regression testing on packaged code
* Build transformation logic and code in an Object Oriented Programming style
* Enhance CI/CD pipelines in the path to production
* Create data quality checks for ingested and post-processed data (see the sketch after this list)
* Ensure data observability via alerting and monitoring of automated pipeline solutions
* Maintain and enhance existing applications
* Build cloud resources via infrastructure as code
* Provide mentoring to junior team members
* Participate in retrospective reviews
* Participate in the estimation process for new work and releases
* Bring new perspectives to problems
* Be driven to improve yourself and the way things are done
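As a hedged sketch of the data-quality-check responsibility above (the DataFrame, column names, and gating policy are assumptions for illustration, not TCS specifics), a post-ingestion null-key check in PySpark might look like:

    # Post-ingestion check: count rows with a null key column.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq-check").getOrCreate()
    df = spark.createDataFrame([(1, "ok"), (2, None), (None, "ok")],
                               ["id", "status"])

    bad = df.filter(F.col("id").isNull()).count()
    print(f"{bad} row(s) failed the null-key check")
    # In a real pipeline, a non-zero count would fail or alert the run.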
TCS Employee Benefits Summary:
* Discretionary Annual Incentive.
* Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
* Family Support: Maternal & Parental Leaves.
* Insurance Options: Auto & Home Insurance, Identity Theft Protection.
* Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
* Time Off: Vacation, Time Off, Sick Leave & Holidays.
* Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
#LI-RJ2
Salary Range-$100,000-$120,000 a year
How much does a data engineer earn in Cincinnati, OH?
The average data engineer in Cincinnati, OH earns between $66,000 and $115,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Cincinnati, OH
$87,000
What are the biggest employers of Data Engineers in Cincinnati, OH?
The biggest employers of Data Engineers in Cincinnati, OH are: