Sr Data Engineer
Data engineer job in Cincinnati, OH
Must Have
PowerBI
Snowflake
SQL
Nice To Have
Python and/or SAS
Notes:
Data Engineer with a few years of experience in SQL, dbt, Snowflake, and PowerBI. Onsite in Cincinnati.
Primary Responsibilities:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
Install/update disaster recovery procedures.
Recommend different ways to constantly improve data reliability and quality.
Qualifications:
Technical Degree or related work experience
Experience with non-relational & relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, etc.)
Experience programming and/or architecting in a back-end language (Java, J2EE, etc.)
Senior Data Engineer
Data engineer job in Cincinnati, OH
Key Responsibilities
Experience in administration and configuration of API gateways (e.g., Apigee, Kong)
Apply cloud computing skills to deploy upgrades and fixes
Design, develop, and implement integrations based on user feedback.
Troubleshoot production issues and coordinate with the development team to streamline code deployment.
Implement automation tools and frameworks (CI/CD pipelines).
Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of products.
Collaborate with team members to improve the company's engineering tools, systems and procedures, and data security.
Deliver quality customer service and resolve end-user issues in a timely manner
Draft architectural diagrams, interface specifications and other design documents
Participate in the development and communication of data strategy and roadmaps across the technology organization to support project portfolio and business strategy
Drive the development and communication of enterprise standards for data domains and data solutions, focusing on simplified integration and streamlined operational and analytical uses
Drive digital innovation by leveraging innovative new technologies and approaches to renovate, extend, and transform the existing core data assets, including SQL-based, NoSQL-based, and Cloud-based data platforms
Define high-level migration plans to address the gaps between the current and future state, typically in sync with the budgeting or other capital planning processes
Lead the analysis of the technology environment to detect critical deficiencies and recommend solutions for improvement
Mentor team members in data principles, patterns, processes and practices
Promote the reuse of data assets, including the management of the data catalog for reference
Draft and review architectural diagrams, interface specifications and other design documents
Note to Vendors
Top 3 skills: Azure Databricks, Python, and Spark
Soft Skills Needed: problem solving, attention to detail, and the ability to work independently and as part of an agile team
Team details (size, dynamics, locations): 10 team members, working independently but pair programming throughout the day.
Data Architect
Data engineer job in Cincinnati, OH
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
REMOTE MOSTLY WITH 1 DAY/MO ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavily used production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing).
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
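To make the SQL proficiency bullet concrete, here is a minimal sketch of the query optimization/indexing skill it asks for, using Python's built-in sqlite3 as a stand-in for the client's database; the `claims` table and column names are invented for illustration, not taken from the posting.

```python
import sqlite3

# Hypothetical claims table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, member_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO claims (member_id, amount) VALUES (?, ?)",
    [(i % 500, float(i)) for i in range(5000)],
)

# Without an index, filtering on member_id forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM claims WHERE member_id = 42"
).fetchall()

# An index on the filter column lets the planner seek directly to matching rows.
conn.execute("CREATE INDEX idx_claims_member ON claims (member_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM claims WHERE member_id = 42"
).fetchall()

print(plan_before[0][-1])  # e.g. "SCAN claims"
print(plan_after[0][-1])   # e.g. "SEARCH claims USING INDEX idx_claims_member (member_id=?)"
```

The same scan-vs-seek reasoning carries over to Azure SQL or Synapse, though the plan output format differs by engine.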
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms outside the scope of this project.
Senior Data Engineer
Data engineer job in Cincinnati, OH
Data Engineer III
About the Role
We're looking for a Data Engineer III to play a key role in a large-scale data migration initiative within Client's commercial lending, underwriting, and reporting areas. This is a hands-on engineering role that blends technical depth with business analysis, focused on transforming legacy data systems into modern, scalable pipelines.
What You'll Do
Analyze legacy SQL, DataStage, and SAS code to extract business logic and identify key data dependencies.
Document current data usage and evaluate the downstream impact of migrations.
Design, build, and maintain data pipelines and management systems to support modernization goals.
Collaborate with business and technology teams to translate requirements into technical solutions.
Improve data quality, reliability, and performance across multiple environments.
Develop backend solutions using Python, Java, or J2EE, and integrate with tools like DataStage and dbt.
What You Bring
5+ years of experience with relational and non-relational databases (SQL, Snowflake, DB2, MongoDB).
Strong background in legacy system analysis (SQL, DataStage, SAS).
Experience with Python or Java for backend development.
Proven ability to build and maintain ETL pipelines and automate data processes.
Exposure to AWS, Azure, or GCP.
Excellent communication and stakeholder engagement skills.
Financial domain experience (especially commercial lending or regulatory reporting) is a big plus.
Familiarity with Agile methodologies preferred.
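As a sketch of the ETL pipeline work this role describes, here is a minimal extract-transform-load flow in plain Python; sqlite3 stands in for a warehouse like Snowflake, and the loan-record fields are invented for illustration.

```python
import csv
import io
import sqlite3

# Invented commercial-lending records, for illustration only.
RAW_CSV = """loan_id,borrower,principal
L-001,Acme Corp,250000
L-002,Globex,1200000
L-003,Acme Corp,75000
"""

def extract(text):
    """Read raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Apply business logic: cast principal to int and flag large loans."""
    out = []
    for r in rows:
        principal = int(r["principal"])
        out.append((r["loan_id"], r["borrower"], principal, principal >= 1_000_000))
    return out

def load(conn, records):
    """Write transformed records into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS loans "
        "(loan_id TEXT, borrower TEXT, principal INTEGER, is_large INTEGER)"
    )
    conn.executemany("INSERT INTO loans VALUES (?, ?, ?, ?)", records)

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(RAW_CSV)))
large = conn.execute("SELECT loan_id FROM loans WHERE is_large = 1").fetchall()
print(large)  # [('L-002',)]
```

In practice tools like DataStage or dbt would own the orchestration and SQL transformations; the extract/transform/load separation shown here is the shape they all share.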
Databricks Developer
Data engineer job in Cincinnati, OH
Onsite 5 days a week
Long-term contract role
Role Description
A Databricks developer to design, develop, and maintain Databricks pipelines
Required Skills
• 5+ years of managing and administering the Databricks environment, including clusters, workspaces, and notebooks, to ensure optimal performance, reliability, and scalability.
• Write clean, modular Python code for data processing, orchestration, automation, and integration with internal and external systems.
• Implement and tune Spark jobs for performance, reliability, and cost-efficiency, including partitioning, caching, and cluster configuration.
• Admin capabilities to set up infrastructure in a cost-effective way.
• Configure and optimize Databricks clusters and resources based on workload requirements and best practices.
• Monitor system performance, resource utilization, and availability using Databricks monitoring and logging tools, preferably Dynatrace.
Junior Data Engineer
Data engineer job in Cincinnati, OH
Agility Partners is seeking a qualified Junior Data Engineer to fill an open position with one of our clients. This is an exciting opportunity for an early‑career professional to build real‑world data skills in a supportive, fast‑paced environment. You'll work closely with senior engineers and analysts, primarily using SQL to query, clean, and prepare data that powers reporting and analytics.
Responsibilities
Assist with writing and optimizing SQL queries to support reporting and ad‑hoc data requests
Help create basic database objects (tables, views) and maintain data dictionaries and documentation
Support routine data quality checks (duplicates, nulls, referential integrity) and simple SQL‑based transformations
Participate in loading data into staging tables and preparing datasets for downstream use
Troubleshoot query issues (e.g., incorrect results, slow performance) with guidance from senior engineers
Collaborate with analytics teams to validate results and ensure datasets meet business needs
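The data quality checks mentioned above (duplicates, nulls, referential integrity) are each a short SQL query. A minimal sketch using Python's built-in sqlite3, with invented toy tables and column names:

```python
import sqlite3

# Toy tables for illustration; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (10, 1, 99.0), (10, 1, 99.0), (11, 3, 25.0), (12, 2, NULL);
""")

# Duplicate check: order_ids that appear more than once.
dupes = conn.execute(
    "SELECT order_id FROM orders GROUP BY order_id HAVING COUNT(*) > 1"
).fetchall()

# Null check: orders missing a total.
nulls = conn.execute("SELECT order_id FROM orders WHERE total IS NULL").fetchall()

# Referential integrity: orders pointing at a customer that does not exist.
orphans = conn.execute("""
    SELECT o.order_id FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.customer_id
    WHERE c.customer_id IS NULL
""").fetchall()

print(dupes, nulls, orphans)  # [(10,)] [(12,)] [(11,)]
```

The same three patterns (GROUP BY/HAVING, IS NULL, anti-join) work unchanged in most relational databases.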
The Ideal Candidate
Foundational SQL proficiency: comfortable writing basic queries and joins; curiosity to learn window functions and indexing
Understanding of relational database concepts (keys, normalization vs. denormalization)
0-2 years of professional experience (or internship/capstone/bootcamp projects); recent grads welcome
Detail‑oriented, coachable, and comfortable asking questions and working through feedback
Able to document work clearly and communicate findings to non‑technical stakeholders
Bonus (not required): exposure to Excel/Google Sheets or a BI tool (Power BI/Tableau), and interest in learning simple ETL concepts
Reasons to Love It
Learn directly from experienced data engineers while working on meaningful, production datasets
Clear growth path from SQL fundamentals to broader data engineering skills over time
Supportive team culture that values curiosity, reliability, and steady skill development
Data Architect
Data engineer job in Blue Ash, OH
Since 2006, CoStrategix has defined and implemented digital transformation initiatives, data and analytics capabilities, and digital commerce solutions for Fortune 500 and mid-market customers.
CoStrategix provides thought leadership, strategy, and comprehensive end-to-end technology execution to help organizations transform and stay competitive in today's digital world. As a Gixer (employee) at CoStrategix, you will have broad exposure to diverse industries and technologies.
You will work on leading-edge digital projects in areas of Data Engineering, Data Governance, Data Strategy, AI, and Cloud. Gixers operate at the leading edge of technologies, and our projects require compelling human interfaces and modern data platforms.
This role is based at our culture hub in Blue Ash, Ohio.
About this role:
As a Data Architect at CoStrategix, you will define, orchestrate, and implement modern data platforms and architectures. This role is about understanding the current state of data ecosystems, mapping existing data flows and structures, creating an architectural blueprint, and then implementing data strategies and governance frameworks in rapid cycles.
In this role, you will provide the following:
Strategic & Consultative Responsibilities
Act as a trusted data advisor to client stakeholders, clearly communicating trade-offs, guiding decision-making, and influencing the adoption of modern data practices.
Lead stakeholder interviews and working sessions to elicit requirements, clarify use cases, and align on priorities, scope, and success metrics.
Create phased data roadmaps with clear milestones, dependencies, and value outcomes (e.g., time-to-insight, cost reduction, risk reduction) and track progress against them.
Provide architectural input into scoping and pricing of data engagements; ensure solutions balance value, risk, and cost, and support delivery teams in staying aligned to scope and architectural guardrails.
Work closely with sales and account teams to understand customer objectives and translate them into practical, scalable data architecture and solution designs.
Participate in pre-sales engagements, discovery workshops, proposal development, client presentations, and proof-of-concept activities to showcase solution feasibility and value.
Data Governance, Quality & Operating Model
Bring consultative competencies around data governance and data quality, helping clients define guiding principles, policies, and operating models.
Lead development of comprehensive data strategies for clients that align with their business priorities.
Design processes for metadata management, lineage tracking, and master data management (MDM), including stewardship roles and workflows.
Establish and maintain data quality standards, metrics, and monitoring processes to ensure accurate, complete, and timely data across critical domains.
Develop semantic models and curated datasets, and guide adoption of data cataloging and data literacy programs.
Enterprise & Solution Architecture
Design and maintain conceptual, logical, and physical data architectures to support enterprise, analytical, and operational systems.
Assess and recommend data platforms, cloud services, and emerging technologies to meet business needs, while collaborating with Cloud, DevOps, and Security Architects to ensure architectural alignment.
Partner with Data Analysts, BI Developers, and Data Scientists to ensure data architectures enable analytics, visualization, and AI/ML initiatives.
Define non-functional requirements (performance, scalability, resilience, cost, security, and compliance) for data solutions, and ensure they are addressed in the architecture and design.
Maintain architecture decision records, reference architectures, and reusable patterns; define and promote standards and best practices for data modeling, integration, and consumption across teams.
Implementation, Delivery & Enablement
Lead the implementation of scalable, secure, and high-performing data and transformation frameworks that unify data across platforms and enable real-time, batch, and event-driven use cases.
Define and enforce data design standards, patterns, and best practices during implementation to ensure consistency, maintainability, and performance.
Mentor and coach engineering and analytics teams in data design principles, governance frameworks, and architectural discipline to ensure consistency and quality in delivery.
Qualifications:
Bachelor's Degree in Math, Statistics, Computer Science, Information Technology, or a related field
8+ years of experience in data management and architecture roles
3 to 5 years of leading data strategy, governance, or modernization efforts
3 to 5 years of pre-sales, client solutioning, and/or consulting engagement in Data Management
Experience designing and implementing modern data architectures
Current understanding of best practices regarding data security, governance, and regulatory compliance
Experience in data modeling, data engineering, and analytics platform architecture
Experience with data engineering tooling such as Databricks, Snowflake, Synapse, BigQuery, Kafka, and dbt
Experience with software development, DevOps best practices, and automation methodologies
Excellent leadership and negotiation skills are necessary to work effectively with colleagues at various levels of the organization and across multiple locations
Communicate complex issues crisply and concisely to various levels of management
Coaching and mentoring skills - ability to adapt to all levels of the organization
Strong collaboration skills and excellent verbal and written communication skills
About CoStrategix
We make CoStrategix an awesome place to work, offering a total rewards package that includes comprehensive benefits starting on day one. Benefits include medical, dental, vision, disability, and life insurance, as well as an EAP and 401(k) retirement plan. We are a flexible hybrid workplace committed to a culture of curiosity, collaboration, learning, self-improvement, and, above all, fun. We have been named a finalist for the Cincinnati Business Courier's Best Places to Work Awards for 4 consecutive years.
Do the Right Thing. Always.
At CoStrategix, we are passionate about our core values. Diversity, equity & inclusion (DE&I) are part of our core values. Every Gixer (employee) has an opportunity for success regardless of their race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Creating an environment where everyone, from any background, can do their best work is the right thing to do.
AI/ML Engineer
Data engineer job in Monroe, OH
Machine Learning / AI Engineer
Vaco is partnering with a growing organization near Monroe, Ohio to hire a Machine Learning / AI Engineer for a direct-hire, hybrid role.
This position offers the opportunity to work on impactful, real-world AI solutions within a logistics-focused environment, collaborating closely with cross-functional teams to drive efficiency and innovation.
What You'll Do:
Design, train, and deploy machine learning models supporting demand forecasting, routing optimization, and network efficiency
Integrate AI and ML solutions with existing logistics platforms, sensors, and data pipelines
Partner with cross-functional teams to translate business challenges into scalable, production-ready solutions
Build and maintain data processing, feature engineering, and model evaluation pipelines using modern ML frameworks
Monitor model performance, retrain systems, and drive continuous improvement through feedback loops
Ensure AI systems adhere to enterprise standards for security, scalability, and data governance
Stay current on advancements in machine learning, optimization algorithms, and generative AI relevant to logistics and operations
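As a sketch of the demand-forecasting and model-evaluation work described above, here is a walk-forward evaluation using only the standard library; the demand series, window size, and function names are invented, and a naive moving-average model stands in for the ML frameworks (TensorFlow/PyTorch) the posting names.

```python
def moving_average_forecast(history, window=3):
    """Predict the next value as the mean of the last `window` observations."""
    return sum(history[-window:]) / window

def mape(actuals, forecasts):
    """Mean absolute percentage error, a common demand-forecast metric."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

# Invented weekly demand series for illustration.
demand = [100, 110, 105, 120, 115, 125, 130]

# Walk forward: at each step, forecast the next point from prior history only.
forecasts, actuals = [], []
for i in range(3, len(demand)):
    forecasts.append(moving_average_forecast(demand[:i]))
    actuals.append(demand[i])

score = mape(actuals, forecasts)
print(round(score, 2))  # 8.11
```

A production model would replace the moving average with a learned estimator, but the walk-forward split and error metric are the parts that carry over to the monitoring/retraining loop the role describes.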
Required Skills:
Bachelor's degree in Computer Science, Software Engineering, IT, or equivalent practical experience
5+ years of experience in ML/AI-focused roles with strong foundations in statistical learning
Deep expertise in Generative AI, NLP, and simulation techniques applied to operational or logistics challenges
Proficiency in Python and common ML frameworks (TensorFlow, PyTorch, Scikit-learn)
Experience with data engineering tools and platforms such as SQL, Spark, Databricks, and Airflow
Familiarity with cloud environments (AWS or Azure) and MLOps practices for model deployment and monitoring
Strong ability to analyze large, complex datasets to uncover actionable insights and automation opportunities
Proven problem-solving skills with an emphasis on scalability and real-world application
Excellent communication and collaboration skills
About The Role:
Direct-hire opportunity with a stable, forward-thinking organization
Hybrid work environment offering flexibility
High-impact role focused on applied AI and operational optimization
Desired Skills and Experience:
AI, ML, Machine Learning, Artificial Intelligence, AI/ML, Natural language processing, NLP, Python, AWS, Cloud, Azure, Amazon Web Services, SQL, Spark, Databricks, Airflow, MLOps, TensorFlow, PyTorch, Scikit-learn, Generative AI, Data, AI/ML Engineer, AI Engineer, ML Engineer
Data Scientist
Data engineer job in Cincinnati, OH
Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), including Generative and Agentic AI, we want you on our team at P&G.
As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will be expected to constructively research, design, and customize algorithms tailored to various problems and data types. Utilizing your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will directly contribute to the development of scalable Data Science algorithms. Your work will also integrate advanced techniques from Generative and Agentic AI to create more dynamic and responsive models, enhancing our analytical capabilities. You will collaborate with Data and AI Engineering teams to productionize these solutions, applying exploratory data analysis, feature engineering, and model building within cloud environments on massive datasets to deliver accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.
Key Responsibilities:
+ Algorithm Design & Development: Directly contribute to the design and development of scalable Data Science algorithms.
+ Collaboration: Work closely with Data and Software Engineering teams to effectively productionize algorithms.
+ Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building.
+ Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
+ Expertise Development: Become a known expert in one or multiple Data Science techniques and methodologies.
Job Qualifications
Required Qualifications:
+ Education: Pursuing or has graduated with a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.) or possess equivalent work experience.
+ Technical Skills: Proficient in programming languages such as Python and familiar with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas. Demonstrated ability to develop and test code within cloud environments.
+ Communication: Strong written and verbal communication skills, with the ability to influence others to take action.
Preferred Qualifications:
+ Analytic Methodologies: Experience applying analytic methodologies such as Machine Learning, Optimization, Simulation, and Generative and Agentic AI to real-world problems.
+ Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
+ Data Handling & Cloud: Experience with large datasets and developing in cloud computing platforms such as GCP or Azure.
+ DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.
Immigration Sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G, along with other work authorization FAQs, please click HERE (*******************************************************).
Procter & Gamble participates in E-Verify as required by law.
Qualified individuals will not be disadvantaged based on being unemployed.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Job Schedule
Full time
Job Number
R000135859
Job Segmentation
Entry Level
Starting Pay / Salary Range
$85,000.00 - $115,000.00 / year
Principal Data Scientist
Data engineer job in Cincinnati, OH
Description & Requirements
We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work.)
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments, and their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional Programming experience (e.g. Python, R, etc.).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or BS in a quantitative discipline (e.g., Math, Physics, Engineering, Economics, Computer Science, etc.).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter.
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (standalone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
Background Investigations:
- IRS MBI - Eligibility
#techjobs #VeteransPage
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary
$156,740.00
Maximum Salary
$234,960.00
ETL Architect
Data engineer job in Cincinnati, OH
Job title: ETL Architect
DURATION: 18 months
YEARS OF EXPERIENCE: 7-10
INTERVIEW TYPE: Phone Screen to Hire
REQUIRED SKILLS
• Experience with DataStage and ETL design
Technical
• Requirements gathering: converting business requirements into technical specs and data profiles
• Hands-on work on a minimum of two DataStage projects
• Understand the process of developing an ETL design that supports multiple DataStage developers
• Be able to create an ETL design framework and related specifications for use by ETL developers
• Define standards and best practices for DataStage ETL to be followed by all DataStage developers
• Understanding of data warehouse and data mart concepts, with implementation experience
• Be able to review produced code to ensure conformance with the developed ETL framework and design for reuse
• Preferably experienced user-level competency in IBM's metadata product, DataStage, and the InfoSphere product line
• Be able to design ETL for Oracle, SQL Server, or any database
• Good analytical and process design skills
• Ensuring compliance with quality standards and delivery timelines
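The "ETL design framework for use by ETL developers" idea above can be sketched language-agnostically: a base job fixes the extract-transform-load sequence and the audit standards, and individual developers only fill in the steps. The class and method names below are invented for illustration, not drawn from DataStage/InfoSphere.

```python
from abc import ABC, abstractmethod

class EtlJob(ABC):
    """Framework base class: enforces a fixed extract -> transform -> load
    sequence plus a row-count audit, so every developer's job conforms."""

    @abstractmethod
    def extract(self):
        """Return a list of source rows."""

    @abstractmethod
    def transform(self, rows):
        """Return the transformed rows."""

    def load(self, rows):
        # Default loader just collects rows; real jobs would target a database.
        self.loaded = list(rows)

    def run(self):
        rows = self.extract()
        self.load(self.transform(rows))
        # Audit step mandated by the framework, not reimplemented per developer.
        return {"rows_in": len(rows), "rows_out": len(self.loaded)}

class UppercaseNamesJob(EtlJob):
    """Example of a job a developer would write against the framework."""

    def extract(self):
        return [{"name": "alice"}, {"name": "bob"}]

    def transform(self, rows):
        return [{"name": r["name"].upper()} for r in rows]

audit = UppercaseNamesJob().run()
print(audit)  # {'rows_in': 2, 'rows_out': 2}
```

In DataStage the equivalent would be shared job templates, parameter sets, and naming/audit standards; the point of the sketch is the reuse-and-conformance pattern itself.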
Qualifications
Bachelors
Additional Information
Required Skills:
Job Description:
Performs highly complex application programming/systems development and support. Performs highly complex configuration of business rules and technical parameters of software products. Reviews business requirements and develops application design documentation. Builds technical components (Maximo objects, TRM rules, Java extensions, etc.) based on detailed design.
Performs unit testing of components along with completing necessary documentation. Supports product test, user acceptance test, etc., as a member of the fix-it team. Employs consistent measurement techniques. Includes testing in project plans and establishes controls to require adherence to test plans. Manages the interrelationships among various projects or work objectives.
AI Data Scientist
Data engineer job in Cincinnati, OH
We are currently seeking an experienced data scientist to join our AI team who will support and lead data flow, advanced analytical needs and AI tools across Medpace. The AI team utilizes analytical principles and techniques to identify, collate and analyze many data sources and works with teams across Medpace to support efficiency and business gains for pharmaceutical development. The AI Data Scientist will support various projects across the company to bring data sources together in a consistent manner, work with the business to identify the value of AI, identify appropriate solutions and work with IT to ensure they are developed and built into the relevant systems. The team is seeking an experienced candidate to contribute new skills to our team, support team growth and foster AI development.
The AI Team is a highly collaborative team with members in both the Cincinnati and London offices. This team supports many teams across the business, including clinical operations, medical, labs, business development and business operations. The AI Team also works side-by-side with data engineering, business analytics and software engineering to architect innovative data storage and access solutions for optimal data utilization strategies. If you are an individual with experience in informatics, data science, or computer science, please review the following career opportunity.
Responsibilities
* Explore and work with different data sources to collate into knowledge;
* Work with different business teams across the company with a variety of different business needs to identify potential areas that AI can support;
* Manage the process of working through AI potentials from discovery research to PoC to production with the business teams and supporting tasks for IT developers;
* Evaluate different AI tools to substantiate the potential of their use with the business team;
* Translate results into compelling visualizations which illustrate the overall benefits of the use of AI and identify with the business team the overall value of its use;
* Develop and map database architecture of methodological and clinical data systems;
* Convert business tasks into meaningful developer Jira tasks for sprints;
* Support departmental process improvement initiatives that can include AI; and
* Participate in training and development of more junior team members.
Qualifications
* Master's degree or higher in informatics, computer science/engineering, health information, statistics, or related field required;
* 2 or more years of experience as a Data Scientist or in a closely related role;
* Experience applying machine learning to pharmaceutical or clinical data (or translatable artificial intelligence [AI] techniques from other industries);
* Advanced computer programming skills (preferred language: Python);
* Analytical thinker with great attention to detail;
* Ability to prioritize multiple projects and tasks within tight timelines; and
* Excellent written and verbal communication skills.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas, including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Medpace is headquartered in Cincinnati, Ohio, and employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Senior Data Engineer
Data engineer job in Blue Ash, OH
Job Description
The Engineer is responsible for staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks. The Engineer has overall responsibility in the technical design process, leading and participating in the application technical design process and completing estimates and work plans for design, development, implementation, and rollout tasks. The Engineer also communicates with the appropriate teams to ensure that assignments are delivered with the highest quality and in accordance with standards. The Engineer strives to continuously improve the software delivery processes and practices. Role model and demonstrate the company's core values of respect, honesty, integrity, diversity, inclusion and safety of others.
Current tools and technologies include:
Databricks and Netezza
Key Responsibilities
Lead and participate in the design and implementation of large and/or architecturally significant applications.
Champion company standards and best practices. Work to continuously improve software delivery processes and practices.
Build partnerships across the application, business and infrastructure teams.
Set up the new customer data platform, migrating from Netezza to Databricks.
Complete estimates and work plans independently as appropriate for design, development, implementation and rollout tasks.
Communicate with the appropriate teams to ensure that assignments are managed appropriately and that completed assignments are of the highest quality.
Support and maintain applications utilizing required tools and technologies.
May direct the day-to-day work activities of other team members.
Must be able to perform the essential functions of this position with or without reasonable accommodation.
Work quickly with the team to implement the new platform.
Be onsite with development team when necessary.
Behaviors/Skills:
Puts the Customer First - Anticipates customer needs, champions for the customer, acts with customers in mind, exceeds customers' expectations, gains customers' trust and respect.
Communicates effectively and candidly - Communicates clearly and directly, approachable, relates well to others, engages people and helps them understand change, provides and seeks feedback, articulates clearly, actively listens.
Achieves results through teamwork - Is open to diverse ideas, works inclusively and collaboratively, holds self and others accountable, involves others to accomplish individual and team goals.
Note to Vendors
Length of contract: 9 months
Top skills: Databricks, Netezza
Soft skills needed: collaborating well with others, working in a team dynamic
Project the person will be supporting: staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks that will replace Netezza
Team details (size, dynamics, locations): most of the team is located in Cincinnati, working onsite at the BTD
Work location (in office, hybrid, remote): onsite at the BTD when necessary, approximately 2-3 days a week
Is travel required: No
Max rate (if applicable): best market rate
Required working hours: 8-5 EST
Interview process and when it will start: starting with one interview; process may change
Prescreening details: standard questions; scores will carry over
Desired start: looking to hire quickly; the team is looking to move fast
Senior Data Engineer
Data engineer job in Cincinnati, OH
General Electric Credit Union is a not-for-profit, member-owned full service financial institution headquartered in Cincinnati with branches in Ohio and Kentucky. At GECU, we pride ourselves on maintaining quality service, being an employee-friendly workplace, and developing our team members while teaching you the skills to lead you to career advancement opportunities.
Overview: The Senior Data Engineer will play a key role in developing and optimizing GECU's data infrastructure to support the organization's data-driven initiatives. The Senior Data Engineer will design, build, and maintain scalable data pipelines and systems, working with the data and development teams.
Essential Responsibilities:
Design, implement, and maintain robust, scalable, and high-performance data pipelines and ETL processes to collect, process, and store large volumes of structured and unstructured data.
Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
Develop and maintain data warehouse and data lake solutions, ensuring data quality, integrity, and reliability.
Optimize data pipelines and ETL processes for performance, efficiency, and cost-effectiveness, utilizing best practices and technologies.
Implement data governance and security measures to ensure compliance with regulatory requirements and data privacy standards.
Troubleshoot and resolve issues related to data processing, data quality, and system performance in a timely manner.
Evaluate and recommend new technologies, tools, and frameworks to enhance the organization's data infrastructure and capabilities.
Document technical specifications, data lineage, and system architecture to facilitate knowledge sharing and collaboration.
Collaborate with other key data employees to maintain and publish data definitions and data catalogue.
Stay up to date with industry trends and emerging technologies in data engineering and analytics.
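The data-quality responsibility above is often implemented as a lightweight validation gate inside a pipeline. A minimal, library-free Python sketch, purely illustrative (field names, thresholds, and rules are assumptions, not GECU's actual stack):

```python
# Minimal data-quality gate for a batch of records before loading.
# All field names and rules here are illustrative assumptions.

def validate_batch(records, required_fields=("id", "amount", "updated_at")):
    """Split a batch into clean rows and rejects with reject reasons."""
    clean, rejects = [], []
    seen_ids = set()
    for row in records:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            rejects.append((row, f"missing fields: {missing}"))
        elif row["id"] in seen_ids:
            rejects.append((row, "duplicate id"))  # keeps loads idempotent
        else:
            seen_ids.add(row["id"])
            clean.append(row)
    return clean, rejects

batch = [
    {"id": 1, "amount": 10.0, "updated_at": "2024-01-01"},
    {"id": 1, "amount": 10.0, "updated_at": "2024-01-01"},  # duplicate
    {"id": 2, "amount": None, "updated_at": "2024-01-02"},  # missing value
]
clean, rejects = validate_batch(batch)
```

In a real warehouse pipeline the same pattern would typically run as a staging-table check or a dbt/Snowflake test, with rejects routed to a quarantine table for review.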
Education and Experience:
High school diploma or GED required; Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree is a plus.
Minimum 6 years' experience in Data Engineering; working with data warehousing concepts, database technologies (e.g., SQL, NoSQL), and distributed computing architectures.
Experience with Snowflake Data Warehouse preferred
Knowledge, Skills, and Abilities:
Strong programming skills in languages such as Python, Java, Scala, or SQL, with experience in data manipulation, transformation, and analysis.
Knowledge of cloud platforms such as AWS, Azure, or Google Cloud Platform.
Extensive knowledge of data modeling, schema design, and optimization techniques for relational and non-relational databases.
Excellent problem-solving and troubleshooting skills, with the ability to diagnose and resolve complex technical issues.
Strong communication and collaboration skills, with the ability to work effectively in a team environment and interact with stakeholders at all levels.
Ability to perform independently and competently to accomplish necessary deliverables accurately and on time.
Ability to assist and mentor Junior Data Engineers
At GECU, we want to support your wellbeing by offering a wide range of benefits:
Health, Dental and Vision insurance
Life and Disability insurance options
Paid Time Off starts accruing once hired, plus a paid day off for your birthday
401k Retirement plan with up to a 10% match of your base gross compensation
Tuition reimbursement opportunities & professional development
Volunteer opportunities - and earn additional PTO hours!
On-site clinics for Vaccines and Mammograms
And many more!
Come join GECU as we are a curated culture of respect, understanding, and mutual recognition. We believe forming bonds and connecting with each other only stands to strengthen the service we provide to our members in our mission of improving the Quality of Financial lives!
General Electric Credit Union is an Equal Opportunity Employer
Data Engineer (W2 Contract only)
Data engineer job in Cincinnati, OH
Role: Data Engineer III
Contract
Handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and research new uses for data acquisition. The role requires knowing and understanding the ins and outs of the industry, such as data mining practices, algorithms, and how data can be used.
Must Have Skills:
5+ years of DataStage experience
5+ years of ETL experience
5+ years of SQL experience
5+ years of Unix/Linux scripting experience
At least 5 years of experience working with relational and non-relational databases (e.g., SQL, Snowflake, DB2, MongoDB).
Business Intelligence - Data Engineering
Cloud Snowflake Database
Primary Responsibilities:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
Install/update disaster recovery procedures.
Recommend different ways to constantly improve data reliability and quality.
Qualifications:
Technical Degree or related work experience
Experience with non-relational & relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, etc.)
Experience programming and/or architecting a back end language (Java, J2EE, etc)
Senior Data Engineer
Data engineer job in Cincinnati, OH
The team is seeking a Data Engineer experienced in implementing modern data solutions in Azure, with strong hands-on skills in Databricks, Spark, Python, and cloud-based DataOps practices. The Data Engineer will analyze, design, and develop data products, pipelines, and information architecture deliverables, focusing on data as an enterprise asset. This role also supports cloud infrastructure automation and CI/CD using Terraform, GitHub, and GitHub Actions to deliver scalable, reliable, and secure data solutions.
Requirements:
• 5+ years of experience as a Data Engineer
• Hands-on experience with Azure Databricks, Spark, and Python
• Experience with Delta Live Tables (DLT) or Databricks SQL
• Strong SQL and database background
• Experience with Azure Functions, messaging services, or orchestration tools
• Familiarity with data governance, lineage, or cataloging tools (e.g., Purview, Unity Catalog)
• Experience monitoring and optimizing Databricks clusters or workflows
• Experience working with Azure cloud data services and understanding how they integrate with Databricks and enterprise data platforms
• Experience with Terraform for cloud infrastructure provisioning
• Experience with GitHub and GitHub Actions for version control and CI/CD automation
• Strong understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior)
• Familiarity with SDLC and modern engineering practices
• Ability to balance multiple priorities, work independently, and stay organized
Key Responsibilities
• Analyze, design, and develop enterprise data solutions with a focus on Azure, Databricks, Spark, Python, and SQL
• Develop, optimize, and maintain Spark/PySpark data pipelines, including managing performance issues such as data skew, partitioning, caching, and shuffle optimization
• Build and support Delta Lake tables and data models for analytical and operational use cases
• Apply reusable design patterns, data standards, and architecture guidelines across the enterprise, including collaboration when needed
• Use Terraform to provision and manage cloud and Databricks resources, supporting Infrastructure as Code (IaC) practices
• Implement and maintain CI/CD workflows using GitHub and GitHub Actions for source control, testing, and pipeline deployment
• Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts
• Troubleshoot failures and improve reliability across Databricks jobs, clusters, and data pipelines
• Apply cloud computing skills to deploy fixes, upgrades, and enhancements in Azure environments
• Work closely with engineering teams to enhance tools, systems, development processes, and data security
• Participate in the development and communication of data strategy, standards, and roadmaps
• Draft architectural diagrams, interface specifications, and other design documents
• Promote the reuse of data assets and contribute to enterprise data catalog practices
• Deliver timely and effective support and communication to stakeholders and end users
• Mentor team members on data engineering principles, best practices, and emerging technologies
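The skew-management responsibility above can be sketched without a Spark cluster. A hedged, pure-Python illustration of detecting a hot partition key (the ratio threshold and key values are assumptions; on Databricks the same check would run over per-key counts computed by Spark):

```python
from collections import Counter

def find_skewed_keys(keys, skew_ratio=2.0):
    """Flag join/partition keys whose row count exceeds
    skew_ratio times the mean rows per key."""
    counts = Counter(keys)
    mean = sum(counts.values()) / len(counts)
    return {k: c for k, c in counts.items() if c > skew_ratio * mean}

# Synthetic key column: customer "C1" dominates the data.
keys = ["C1"] * 900 + ["C2"] * 50 + ["C3"] * 50
hot = find_skewed_keys(keys)  # only "C1" exceeds 2x the mean count
```

In Spark the equivalent check is a group-by on the join key; keys flagged this way are then typically handled by salting the key or broadcasting the smaller side of the join to avoid shuffling the skewed partition.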
Senior Java Software Engineer
Data engineer job in Cincinnati, OH
Job Title: Senior Software Engineer - Back End III
Work Type: Onsite
Must Have
Amazon Web Services (AWS)
Application Dev.
Aurora PostgreSQL
Expertise in Java, Spring Boot, RESTful APIs, and microservices.
Nice To Have
10+ years of JAVA/SpringBoot hands-on development experience, good understanding of cloud and event driven architecture, microservices and DDD architectural style.
Key Responsibilities:
Design and develop Spring Boot microservices and RESTful APIs.
Deploy applications using AWS ECS Fargate and/or Kubernetes
Implement event-driven systems using Amazon MSK and RDS (PostgreSQL).
Build Spring Batch jobs and integrate with CloudWatch.
Automate infrastructure with Terraform and secure credentials via Secrets Manager.
Produce technical documentation and manage operational support.
Participate in Agile ceremonies, on-call support and release management.
Required Qualifications:
Bachelor's or Master's in Computer Science or related field.
10+ years in enterprise software development; 3+ years in cloud-native architecture.
Expertise in Java, Spring Boot, RESTful APIs, and microservices.
Hands-on experience with AWS ECS, Lambda, RDS (PostgreSQL), DynamoDB, MSK, CloudWatch, Secrets Manager, and Terraform.
Experience with Kafka, CI/CD, and secure cloud deployment.
Strong communication and leadership skills.
Junior Data Scientist
Data engineer job in Cincinnati, OH
The Medpace Analytics and Business Intelligence team is growing rapidly and is focused on building a data driven culture across the enterprise. The BI team uses data and insights to drive increased strategic and operational efficiencies across the organization. As a Junior Data Scientist, you will hold a highly visible analytical role that requires interaction and partnership with leadership across the Medpace organization.
What's in this for you?
* Work in a collaborative, fast paced, entrepreneurial, and innovative workplace;
* Gain experience and exposure to advanced BI concepts from visualization to data warehousing;
* Grow business knowledge by working with leadership across all aspects of Medpace's business.
Responsibilities
* Data Collection & Cleaning: Gather, clean, and preprocess large, raw datasets;
* Analysis & Modeling: Perform statistical analysis, build & validate machine learning models, and test hypotheses (e.g., A/B testing);
* Algorithm Development: Create algorithms to manage and interpret information, often automating processes;
* Insight Generation: Discover trends, patterns, and insights to inform business strategy;
* Visualization & Communication: Present complex findings visually (dashboards, charts) and verbally to technical and non-technical teams;
* Collaboration: Work with engineering, product, and business teams to implement solutions;
* Model Monitoring: Deploy and maintain models, iterating for continuous improvement.
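The A/B-testing responsibility above is commonly backed by a two-proportion z-test. A minimal, stdlib-only Python sketch (the conversion numbers are hypothetical, not Medpace data):

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in conversion rates,
    a standard A/B-testing building block."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = lambda v: 0.5 * (1 + erf(v / sqrt(2)))  # standard normal CDF
    p_value = 2 * (1 - phi(abs(z)))
    return z, p_value

# Hypothetical experiment: variant A converts 200/1000, variant B 260/1000.
z, p = two_proportion_ztest(200, 1000, 260, 1000)
```

In practice a library such as `scipy.stats` or `statsmodels` would be used for this, but the arithmetic above is the same calculation those packages perform.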
Qualifications
* Bachelor's Degree in Business, Life Science, Computer Science, or Related Degree;
* 0-3 years of experience in business intelligence or analytics; Python, R, and SQL heavily preferred;
* Strong analytical and communication skills;
* Excellent organization skills and the ability to multitask while efficiently completing high quality work.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas, including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Medpace is headquartered in Cincinnati, Ohio, and employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Full Stack Engineer/Software Engineer (only W2, onsite)
Data engineer job in Cincinnati, OH
Must Have
Application Dev. - Software Engineer
C#
Cascading Style Sheets (CSS)
Extensible Markup Language (XML)
GitHub
HTML5
shell scripting
Nice To Have
APIs - Rest and Soap
Java Spring
OpenShift