Data Architect
Data engineer job in Cincinnati, OH
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
MOSTLY REMOTE WITH 1 DAY/MONTH ONSITE IN CINCINNATI; LOCAL CANDIDATES PREFERRED
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavily used production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing; a brief sketch follows this list).
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
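To make the SQL expectation concrete, here is a minimal, hypothetical sketch of the tuning work named above. The server, schema, and object names are invented, not taken from the client's environment:

```python
# Minimal sketch of SQL tuning tasks the posting names: an index to support a
# frequent lookup and a parameterized stored procedure. All object names are
# hypothetical; adapt to the client's actual schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"  # assumed Azure SQL endpoint
    "DATABASE=payments;UID=svc_arch;PWD=...;Encrypt=yes;"
)
cur = conn.cursor()

# Covering index for a common claim lookup path (query optimization + indexing).
cur.execute("""
    CREATE INDEX IX_Claims_PayerId_Status
    ON dbo.Claims (PayerId, Status) INCLUDE (Amount, SubmittedOn);
""")

# Simple stored procedure encapsulating the tuned query.
cur.execute("""
    CREATE OR ALTER PROCEDURE dbo.GetOpenClaims @PayerId INT AS
    BEGIN
        SELECT ClaimId, Amount, SubmittedOn
        FROM dbo.Claims
        WHERE PayerId = @PayerId AND Status = 'OPEN';
    END
""")
conn.commit()
```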
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms, even those outside the scope of this project.
Data Scientist
Data engineer job in Cincinnati, OH
Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), we want you on our team at P&G.
As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will be expected to constructively research, design, and customize algorithms tailored to various problems and data types. Utilizing your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will directly contribute to the development of scalable Data Science algorithms and collaborate with Data and Software Engineering teams to productionize these solutions. Your technical knowledge will empower you to apply exploratory data analysis, feature engineering, and model building on massive datasets, delivering accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.
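For illustration only, the exploratory-analysis-to-model loop described above might look like the following minimal sketch. The dataset, file name, and columns are invented for the example, not P&G systems:

```python
# Illustrative only: the EDA -> feature engineering -> tree-model loop described
# above, on an invented dataset. The file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("shipments.csv")     # hypothetical input file
print(df.describe())                  # quick exploratory pass
print(df.isna().mean())               # missingness per column

# Feature engineering: a 7-row lag of demand per SKU and one-hot regions.
df = df.sort_values(["sku", "week"])
df["demand_lag_7"] = df.groupby("sku")["demand"].shift(7)
df = pd.get_dummies(df.dropna(), columns=["region"])

X = df.drop(columns=["demand", "sku"])
y = df["demand"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```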
Key Responsibilities:
+ Algorithm Design & Development: Directly contribute to the design and development of scalable Data Science algorithms.
+ Collaboration: Work closely with Data and Software Engineering teams to effectively productionize algorithms.
+ Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building.
+ Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
+ Expertise Development: Become a known expert in one or multiple Data Science techniques and methodologies.
Job Qualifications
Required Qualifications:
+ Education: Currently pursuing or holding a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.), or equivalent work experience.
+ Technical Skills: Proficient in programming languages such as Python and familiar with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas.
+ Communication: Strong written and verbal communication skills, with the ability to influence others to take action.
Preferred Qualifications:
+ Analytic Methodologies: Experience applying analytic methodologies such as Machine Learning, Optimization, and Simulation to real-world problems.
+ Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
+ Data Handling: Experience with large datasets and cloud computing platforms such as GCP or Azure.
+ DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.
Compensation for roles at P&G varies depending on a wide array of non-discriminatory factors including but not limited to the specific office location, role, degree/credentials, relevant skill set, and level of relevant experience. At P&G compensation decisions are dependent on the facts and circumstances of each case. Total rewards at P&G include salary + bonus (if applicable) + benefits. Your recruiter may be able to share more about our total rewards offerings and the specific salary range for the relevant location(s) during the hiring process.
We are committed to providing equal opportunities in employment. We value diversity and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Immigration Sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G along with other work authorization FAQs, please click HERE.
Procter & Gamble participates in E-Verify as required by law.
Qualified individuals will not be disadvantaged based on being unemployed.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Job Schedule
Full time
Job Number
R000135859
Job Segmentation
Entry Level
Starting Pay / Salary Range
$85,000.00 - $115,000.00 / year
ETL Architect
Data engineer job in Cincinnati, OH
Job title: ETL Architect
DURATION: 18 months
YEARS OF EXPERIENCE: 7-10
INTERVIEW TYPE: Phone Screen to Hire
REQUIRED SKILLS
• Experience with DataStage and ETL design
Technical
• Requirements gathering: converting business requirements into technical specs and data profiles
• Hands-on work on a minimum of 2 projects with DataStage
• Understands the process of developing an ETL design that supports multiple DataStage developers
• Able to create an ETL design framework and related specifications for use by ETL developers
• Defines standards and best practices for DataStage ETL to be followed by all DataStage developers
• Understanding of data warehouse and data mart concepts, with implementation experience
• Able to review code produced to ensure conformance with the developed ETL framework and design for reuse
• Experienced, user-level competency in IBM's metadata product and the DataStage/InfoSphere product line preferred
• Able to design ETL for Oracle, SQL Server, or any other database
• Good analytical and process design skills
• Ensuring compliance with quality standards and delivery timelines.
Qualifications
Bachelors
Additional Information
Required Skills:
Job Description:
Performs highly complex application programming/systems development and support. Performs highly complex configuration of business rules and technical parameters of software products. Reviews business requirements and develops application design documentation. Builds technical components (Maximo objects, TRM Rules, Java extensions, etc.) based on detailed design.
Performs unit testing of components along with completing necessary documentation. Supports product test, user acceptance test, etc. as a member of the fix-it team. Employs consistent measurement techniques. Includes testing in project plans and establishes controls to require adherence to test plans. Manages the interrelationships among various projects or work objectives.
AI Data Scientist
Data engineer job in Cincinnati, OH
We are currently seeking an experienced data scientist to join our AI team who will support and lead data flow, advanced analytical needs and AI tools across Medpace. The AI team utilizes analytical principles and techniques to identify, collate and analyze many data sources and works with teams across Medpace to support efficiency and business gains for pharmaceutical development. The AI Data Scientist will support various projects across the company to bring data sources together in a consistent manner, work with the business to identify the value of AI, identify appropriate solutions and work with IT to ensure they are developed and built into the relevant systems. The team is seeking an experienced candidate to contribute new skills to our team, support team growth and foster AI development.
The AI Team is a highly collaborative team with members in both the Cincinnati and London offices. This team supports many teams across the business including clinical operations, medical, labs, business development and business operations. The AI Team also works side-by-side with data engineering, business analytics and software engineering to architect innovative data storage and access solutions for optimal data utilization strategies. If you are an individual with experience in informatics, data science, or computer science, please review the following career opportunity.
Responsibilities
* Explore and work with different data sources to collate into knowledge;
* Work with different business teams across the company with a variety of different business needs to identify potential areas that AI can support;
* Manage the process of working through AI potentials from discovery research to PoC to production with the business teams and supporting tasks for IT developers;
* Try out different AI tools to substantiate the potential of their use with the business team;
* Translate results into compelling visualizations which illustrate the overall benefits of the use of AI and identify with the business team the overall value of its use;
* Develop and map database architecture of methodological and clinical data systems;
* Convert business tasks into meaningful developer Jira tasks for sprints;
* Support departmental process improvement initiatives that can include AI; and
* Participate in training and development of more junior team members.
Qualifications
* Master's degree or higher in informatics, computer science/engineering, health information, statistics, or related field required;
* 2 or more years of experience as a Data Scientist or in a closely related role;
* Experience applying machine learning to pharmaceutical or clinical data (or translatable artificial intelligence [AI] techniques from other industries);
* Advanced computer programming skills (preferred language: Python);
* Analytical thinker with great attention to detail;
* Ability to prioritize multiple projects and tasks within tight timelines; and
* Excellent written and verbal communication skills.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Lead Data Engineer (P4031)
Data engineer job in Cincinnati, OH
84.51° is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.
Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Kroger Plus loyalty card program to fuel a more customer-centric journey using 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.
Join us at 84.51°!
__________________________________________________________
Lead Data Engineer (P4031)
Cincinnati / Chicago
SUMMARY:
The Lead Data Engineer serves as both a technical leader and an individual contributor within the data engineering team, embodying a player/coach role. This position is responsible for guiding and mentoring a team of data engineers while actively participating in data engineering projects to deliver results that support organizational objectives. As a technical lead, you will balance team leadership with hands-on contributions to drive innovation and excellence in data engineering. You will cultivate strategies and solutions to ingest, store and distribute our big data. Our developers use Python (PySpark, FastAPI), Databricks, and Azure cloud services in 6-week scrum cycles to develop the products, tools and features.
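As a rough illustration of the Python (PySpark) and Azure work described above, consider this minimal sketch. The storage path and table names are placeholders, not actual 84.51° systems:

```python
# Minimal PySpark sketch: ingest raw files, apply a transform, write a Delta table.
# The storage path and table names are placeholders, not real 84.51° systems.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw = (spark.read.format("json")
       .load("abfss://landing@example.dfs.core.windows.net/transactions/"))

daily = (raw.filter(F.col("amount") > 0)
         .groupBy("household_id", F.to_date("ts").alias("day"))
         .agg(F.sum("amount").alias("spend")))

daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_spend")
```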
RESPONSIBILITIES: Take ownership of features and drive them to completion through all phases of the entire 84.51° SDLC. This includes external-facing and internal applications as well as process improvement activities such as:
Lead design and perform development of Python (PySpark, FastAPI) / Databricks / Azure based solutions
Develop and execute unit and integration testing
Collaborate with senior resources to ensure consistent development practices
Provide mentoring to junior resources
Bring new perspectives to problems and be driven to improve yourself and the way things are done
Manage resourcing across key initiatives in support of domain roadmaps and initiatives
QUALIFICATIONS, SKILLS, AND EXPERIENCE:
Bachelor's degree typically in Computer Science, Management Information Systems, Mathematics, Business Analytics or another technically strong program.
5+ years of professional data development experience
Strong understanding of Agile Principles (Scrum)
5+ years of experience developing Spark-based solutions in a cloud environment
Full understanding of ETL concepts and data warehousing concepts
3+ years of development experience with Python
Experience with Databricks, REST APIs, and Microsoft Azure cloud services
#LI-SSS
Pay Transparency and Benefits
The stated salary range represents the entire span applicable across all geographic markets from lowest to highest. Actual salary offers will be determined by multiple factors including but not limited to geographic location, relevant experience, knowledge, skills, other job-related qualifications, and alignment with market data and cost of labor. In addition to salary, this position is also eligible for variable compensation.
Below is a list of some of the benefits we offer our associates:
Health: Medical: with competitive plan designs and support for self-care, wellness and mental health. Dental: with in-network and out-of-network benefit. Vision: with in-network and out-of-network benefit.
Wealth: 401(k) with Roth option and matching contribution. Health Savings Account with matching contribution (requires participation in qualifying medical plan). AD&D and supplemental insurance options to help ensure additional protection for you.
Happiness: Paid time off with flexibility to meet your life needs, including 5 weeks of vacation time, 7 health and wellness days, 3 floating holidays, as well as 6 company-paid holidays per year. Paid leave for maternity, paternity and family care instances.
Pay Range: $121,000-$201,250 USD
Data Engineer IV
Data engineer job in Cincinnati, OH
Job Title: Data Engineer IV
Pay rate: $70/hr on W2.
U.S. citizen (USC) and green card (GC) holder candidates only.
TOP SKILLS:
Must Have
Python
SQL
Nice To Have
AWS SageMaker
dbt
Snowflake
What You'll Do
Squad: Machine Learning Data Enablement squad in the Data Insights Tribe
Required: In office 4 days a week minimum (Monday-Thursday)
We're hiring a Data Engineer to join our newly launched Machine Learning Data Enablement team at Fifth Third Bank. This team is focused on building high-quality, scalable data pipelines that power machine learning models across the enterprise, deployed in AWS SageMaker.
We're looking for an early-career professional who's excited to grow in a hands-on data engineering role. Ideal candidates will have experience working on machine learning-related projects or have partnered with data science teams to support model development and deployment - and have a strong interest in enabling ML workflows through robust data infrastructure.
You'll work closely with data scientists and ML engineers to deliver curated, production-ready datasets and help shape how machine learning data is delivered across the bank. You should have solid SQL and Python skills, a collaborative mindset, and a strong interest in modern data tooling. Experience with Snowflake, dbt, or cloud data platforms is a strong plus. Familiarity with ML tools like SageMaker or Databricks is helpful but not required - we're happy to help you learn.
This is a hands-on role with high visibility and high impact. You'll be joining a team at the ground level, helping to define how data powers machine learning at scale.
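For a feel of the "curated, production-ready dataset" work described above, here is a small hypothetical sketch. The warehouse connection, tables, and columns are invented, and it assumes the Snowflake stack mentioned only as a plus:

```python
# Hypothetical sketch: build a curated, model-ready dataset from warehouse tables.
# The connection URL, tables, and columns are invented for illustration; assumes
# the snowflake-sqlalchemy dialect is installed.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("snowflake://user:pass@account/db/schema")  # assumed

features = pd.read_sql(
    """
    SELECT c.customer_id,
           COUNT(t.txn_id) AS txn_count_90d,
           AVG(t.amount)   AS avg_amount_90d
    FROM customers c
    JOIN transactions t ON t.customer_id = c.customer_id
    WHERE t.txn_date >= DATEADD(day, -90, CURRENT_DATE)
    GROUP BY c.customer_id
    """,
    engine,
)

# Hand off a clean training file to the downstream SageMaker workflow.
features.dropna().to_parquet("training/customer_features.parquet")
```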
What You'll Get
Competitive base salary
Medical, dental, and vision insurance coverage
Optional life and disability insurance provided
401(k) with a company match and optional profit sharing
Paid vacation time
Paid Bench time
Training allowance offering
You'll be eligible to earn referral bonuses!
Job requirements
Python
SQL
Nice To Have
AWS SageMaker
dbt
Snowflake
Data Engineer Level 2
Data engineer job in Cincinnati, OH
Senior Data Engineer w/ PySpark, Databricks
What you'll do: As a Data Engineer, you will be part of the product development team. We develop end-to-end solutions that run data science as a service (DSaaS) and deliver insights. Our engineers use large-scale data computing frameworks (both on-prem and in the cloud) including (but not limited to) PySpark, Python, Databricks, and SQL to develop products and services in Azure.
Minimum Skills Required:
Bachelor's degree (typically in Computer Science, Management Information Systems, Mathematics, Business Analytics or another technically strong program), plus 2 years of experience
Proven Azure/Databricks development experience
Proven Big Data development experience with Apache Spark (PySpark)
Experience developing with SQL (Postgres, Oracle, SQL Server, or any other relational database)
Exposure to VCS (Git, SVN) and CI/CD processes
Exposure to data pipeline applications such as Kafka or Flume.
Exposure to developing REST APIs in Python (Flask preferred; a minimal sketch follows this list)
Understanding of Agile Principles (Scrum)
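A bare-bones illustration of the Flask-style REST endpoint referenced in the skills list above; the route and response shape are invented for the sketch:

```python
# Bare-bones Flask sketch of an insights-serving REST endpoint.
# The route and response payload are invented for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/insights/<customer_id>", methods=["GET"])
def get_insights(customer_id: str):
    # In a real service this would query the data pipeline's output store.
    return jsonify({"customer_id": customer_id, "segment": "loyal", "score": 0.87})

if __name__ == "__main__":
    app.run(port=8080)
```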
Key Responsibilities
Perform unit and integration testing
Collaborate with architects and lead/senior engineers to ensure consistent development practices
Collaborate with other engineers to solve and bring new perspectives to complex problems
Drive improvements in people, practices, and procedures
Embrace new technologies and an ever-changing environment
Data Engineer
Data engineer job in Cincinnati, OH
Insight Global is looking for a data engineer contractor for one of its top financial clients. Requirements for the role include:
- Bachelor's degree in Computer Science/Information Systems or equivalent combination of education and experience.
- Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources in language that is appropriate for each group.
- Four+ years of relevant IT experience in data engineering or related disciplines.
- Significant experience with at least one major relational database management system (RDBMS).
- Experience working with and supporting Unix/Linux and Windows systems.
- Proficiency in relational database modeling concepts and techniques.
- Solid conceptual understanding of distributed computing principles and scalable data architectures.
- Working knowledge of application and data security concepts, best practices, and common vulnerabilities.
- Experience in one or more of the following disciplines preferred: scalable data platforms and modern data architecture technologies and distributions, metadata management products, commercial ETL tools, data reporting and visualization tools, messaging systems, data warehousing, major version control systems, continuous integration/delivery tools, infrastructure automation and virtualization tools, major cloud platforms (AWS, Azure, GCP), or REST API design and development.
- Previous experience working with offshore teams desired.
- Financial industry experience, especially Regulatory Reporting, is a plus.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send us a request. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy.
Skills and Requirements
4+ years in Business Intelligence - Data Engineering
Experience working within DataStage
Experience working in the dbt transformation framework
Experience working in Cloud Native data platforms, specifically Snowflake
Experience with SQL for interacting with relational databases
Regulatory Reporting experience
Data Engineer III
Data engineer job in Cincinnati, OH
We are seeking an experienced Data Engineer III. The ideal candidate will be responsible for working with business analysts, data engineers, and upstream teams to understand impacts to data sources; take the requirements and update/build ETL data pipelines using DataStage and dbt for ingestion into Financial Crimes applications; and perform testing and ensure data quality of updated data sources.
Job Summary: Handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and also research new uses for data acquisition. Required to know and understand the ins and outs of the industry, such as data mining practices, algorithms, and how data can be used.
Primary Responsibilities:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
Install/update disaster recovery procedures.
Recommend different ways to constantly improve data reliability and quality.
Qualifications
Local candidates are highly preferred. Open to candidates relocating from within the state of Ohio, with no relocation assistance provided.
Technical Degree or related work experience
Experience with non-relational & relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, etc.)
Experience programming and/or architecting in a back-end language (Java, J2EE, etc.)
Business Intelligence - Data Engineering
ETL DataStage Developer
SQL
Strong communication skills, ability to collaborate with members of your team
Data Engineer
Data engineer job in Cincinnati, OH
About AMEND: AMEND is a management consulting firm based in Cincinnati, OH with areas of focus in operations, analytics, and technology. We are focused on strengthening the people, processes, and systems in organizations to generate a holistic transformation. Our three-tiered approach provides a distinct competitive edge and allows us to build strong relationships and create customized solutions for every client. This is an incredible time to step into a growing team where everyone is aligned to a common goal to change lives, transform businesses, and make a positive impact on anything we touch.
Overview:
The Data Engineer consultant role is an incredibly exciting position in the fastest-growing segment of AMEND. You will be working to solve real-world problems by designing cutting-edge analytic solutions while surrounded by a team of world-class talent. You will be entering an environment of explosive growth with ample opportunity for development. We are looking for individuals who can go into a client organization and optimize (or re-design) the company's data architecture; who combine the qualities of a change agent and a technical leader; and who are passionate about transforming companies for the better. We need someone who is a problem solver, a critical thinker, and always wanting to go after new things; you'll never be doing the same thing twice!
Job Tasks:
Create and maintain optimal data pipeline architecture
Assemble large, complex data sets that meet functional / non-functional business requirements
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
Define project requirements by identifying project milestones, phases, and deliverables
Execute project plan, report progress, identify and resolve problems, and recommend further actions
Delegate tasks to appropriate resources as project requirements dictate
Design, develop, and deliver audience training and adoption methods and materials
Qualifications:
Advanced working knowledge of SQL and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases. Databricks and dbt experience is a plus
Experience building and optimizing data pipelines, architectures, and data sets
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Strong analytic skills related to working with structured and unstructured datasets
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
A successful history of manipulating, processing, and extracting value from large, disconnected datasets
Ability to interface with multiple other business functions (internally and externally)
Desire to build analytical competencies in others within the business
Curiosity to ask questions and challenge the status quo
Creativity to devise out-of-the-box solutions
Ability to travel as needed to meet client requirements
What's in it for you?
Competitive pay and bonus
Travel incentive bonus structure
Flexible time off
Investment in your growth and development
Full health, vision, dental, and life benefits
Paid parental leave
3:1 charity match
All this to say - we are looking for talented people who are excited to make an impact on our clients. If this job description isn't a perfect match for your skillset, but you are talented, eager to learn, and passionate about our work, please apply! Our recruiting process is centered around you as an individual and finding the best place for you to thrive at AMEND, whether it be with the specific title on this posting or something different. One recruiting conversation with us has the potential to open you up to our entire network of opportunities, so why not give it a shot? We're looking forward to connecting with you.
*Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of employment Visa at this time.*
Senior Data Engineer
Data engineer job in Blue Ash, OH
Job Description
The Engineer is responsible for staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks. The Engineer has overall responsibility in the technical design process. Leads and participates in the application technical design process and completes estimates and work plans for design, development, implementation, and rollout tasks. The Engineer also communicates with the appropriate teams to ensure that assignments are delivered with the highest quality and in accordance with standards. The Engineer strives to continuously improve the software delivery processes and practices. Role-models and demonstrates the company's core values of respect, honesty, integrity, diversity, inclusion and safety of others.
Current tools and technologies include:
Databricks and Netezza
Key Responsibilities
Lead and participate in the design and implementation of large and/or architecturally significant applications.
Champion company standards and best practices. Work to continuously improve software delivery processes and practices.
Build partnerships across the application, business and infrastructure teams.
Set up the new customer data platform, migrating from Netezza to Databricks.
Complete estimates and work plans independently as appropriate for design, development, implementation and rollout tasks.
Communicate with the appropriate teams to ensure that assignments are managed appropriately and that completed assignments are of the highest quality.
Support and maintain applications utilizing required tools and technologies.
May direct the day-to-day work activities of other team members.
Must be able to perform the essential functions of this position with or without reasonable accommodation.
Work quickly with the team to implement the new platform.
Be onsite with the development team when necessary.
Behaviors/Skills:
Puts the Customer First - Anticipates customer needs, champions for the customer, acts with customers in mind, exceeds customers' expectations, gains customers' trust and respect.
Communicates effectively and candidly - Communicates clearly and directly, approachable, relates well to others, engages people and helps them understand change, provides and seeks feedback, articulates clearly, actively listens.
Achieves results through teamwork - Is open to diverse ideas, works inclusively and collaboratively, holds self and others accountable, involves others to accomplish individual and team goals
Note to Vendors
Length of Contract: 9 months
Top skills: Databricks, Netezza
Soft Skills Needed: collaborating well with others, working in a team dynamic
Project person will be supporting: staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks that will replace Netezza
Team details (i.e. size, dynamics, locations): most of the team is located in Cincinnati, working onsite at the BTD
Work Location (in office, hybrid, remote): Onsite at BTD when necessary, approximately 2-3 days a week
Is travel required: No
Max Rate if applicable: best market rate
Required Working Hours: 8-5 EST
Interview process and when will it start: Starting with one interview; process may change
Prescreening Details: standard questions. Scores will carry over.
When do you want this person to start: Looking to hire quickly; the team is looking to move fast.
Go Anywhere SFTP Data Engineer
Data engineer job in Cincinnati, OH
* Maintain robust data pipelines for ingesting and processing data from various sources, including SFTP servers.
* Implement and manage automated SFTP data transfers, ensuring data security, integrity, and timely delivery.
* Configure and troubleshoot SFTP connections, including handling authentication, key management, and directory structures.
* Develop and maintain scripts or tools for automating SFTP-related tasks, such as file monitoring, error handling, and data validation (a brief Python sketch follows this list of responsibilities).
* Collaborate with external teams and vendors to establish and maintain secure SFTP connections for data exchange.
* Ensure compliance with data security and governance policies related to SFTP transfers.
* Monitor and optimize SFTP performance, addressing any bottlenecks or issues.
* Document SFTP integration processes, configurations, and best practices.
* Responsible for providing monthly SOC controls.
* Experience with Solimar software.
* Responsible for periodic software updates and patching.
* Manage open incidents.
* Responsible for after-hours and weekend on-call duties
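For illustration of the scripting work referenced in the responsibilities above, here is a hypothetical Python/paramiko sketch; the host, account, and paths are invented, not this employer's systems:

```python
# Hypothetical sketch of an automated SFTP pull with key-based auth and a
# simple idempotency check. Host, account, and paths are invented.
import os
import paramiko

HOST, PORT = "sftp.example.com", 22                 # assumed endpoint
KEY_FILE = os.path.expanduser("~/.ssh/ingest_key")  # assumed private key

def pull_new_files(remote_dir: str, local_dir: str) -> list:
    """Download files from an SFTP drop directory, skipping ones already pulled."""
    key = paramiko.RSAKey.from_private_key_file(KEY_FILE)
    transport = paramiko.Transport((HOST, PORT))
    transport.connect(username="svc_ingest", pkey=key)
    sftp = paramiko.SFTPClient.from_transport(transport)
    pulled = []
    try:
        for name in sftp.listdir(remote_dir):
            local_path = os.path.join(local_dir, name)
            if os.path.exists(local_path):
                continue                                  # already transferred
            sftp.get(f"{remote_dir}/{name}", local_path)  # validate/log in real use
            pulled.append(name)
    finally:
        sftp.close()
        transport.close()
    return pulled

if __name__ == "__main__":
    print(pull_new_files("/outbound/claims", "./landing"))
```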
* Minimum of 3-5 years of related work experience
* Experience with Microsoft Software and associated server tools.
* Experience with GoAnywhere managed file transfer (MFT) solution.
* Experience with WinSCP
* Experience with Azure Cloud
* Proven experience in data engineering, with a strong emphasis on data ingestion and pipeline development.
* Demonstrated expertise in working with SFTP for data transfer and integration.
* Proficiency in scripting languages (e.g., Python, Shell) for automating SFTP tasks.
* Familiarity with various SFTP clients, servers, and related security protocols.
* Understanding of data security best practices, including encryption and access control for SFTP.
* Experience with cloud platforms (e.g., AWS, Azure, GCP) and their SFTP integration capabilities is a plus.
* Strong problem-solving and troubleshooting skills related to data transfer and integration issues.
Salary Range- $80,000-$85,000 a year
#LI-SP3
#LI-VX1
T&O Data Source and System Origination Leader
Data engineer job in Olde West Chester, OH
GE Aerospace is seeking a T&O Data Source and System Origination Leader to drive development in business data management and system integration. This role is critical in identifying and addressing gaps in business processes and systems of record, eliminating lean waste, and ensuring seamless data ingestion and cataloging to support operational excellence. The ideal candidate will act as a liaison between cross-functional teams, including Digital Technology (DT), Data Ingestion, and System of Record owners, to ensure requirements are met and updates are delivered on schedule.
Job Description
Roles and Responsibilities
Ownership: Lead initiatives to explore innovative solutions for data management and system integration challenges, driving continuous improvement and operational efficiency.
Burn Down of Business Process/System of Record Gap List: Identify, prioritize, and resolve gaps in business processes and systems of record to enhance data accuracy and accessibility.
Lean Waste Reduction:
* Eliminate motion waste related to manual data input.
* Minimize transportation waste caused by downloading and manually manipulating data.
Digital Technology Liaison: Collaborate with the DT team to ensure alignment on requirements, timelines, and updates.
Data Ingestion Team Liaison:
* Work closely with the Data Ingestion team to ensure new data is successfully integrated into the Data Operating System (DOS).
* Facilitate communication and coordination between teams to address ingestion challenges.
Data Cataloging and Business Process Relationship: Develop and maintain a comprehensive data catalog, ensuring alignment with business processes and driving data accessibility and usability.
Change Management and Break/Fix:
* Manage changes to base data and ingestion processes.
* Lead efforts to address and resolve data-related issues promptly.
Required Qualifications
* Bachelor's degree in Engineering, Data Science, Business Administration, or a related field.
* Minimum of 5 years of experience in data management, system integration, or business process improvement.
* Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job opening.
Desired Characteristics
* Proven experience in lean methodologies and waste reduction strategies.
* Strong project management skills with a track record of delivering results on time and within scope.
* Excellent communication and collaboration skills to act as a liaison between cross-functional teams.
* Familiarity with data ingestion processes, system of record management, and change management principles.
* Experience working with data cataloging tools and understanding their relationship to business processes.
* Demonstrated ability to lead cross-functional teams and drive alignment across diverse stakeholders.
* Strong analytical and problem-solving skills with a focus on continuous improvement.
* Knowledge of GE Aerospace's FLIGHT DECK operating model is a plus.
* Experience working in a fast-paced, dynamic environment with competing priorities.
* Ability to translate complex technical concepts into actionable business strategies.
This role requires access to U.S. export-controlled information. Therefore, employment will be contingent upon the ability to prove that you meet the status of a U.S. Person as one of the following: U.S. lawful permanent resident, U.S. Citizen, have been granted asylee or refugee status (i.e., a protected individual under the Immigration and Naturalization Act, 8 U.S.C. 1324b(a)(3)).
Additional Information
GE Aerospace offers a great work environment, professional development, challenging careers, and competitive compensation. GE Aerospace is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
GE Aerospace will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditioned upon the successful completion of a drug screen (as applicable).
Relocation Assistance Provided: Yes
Junior Data Scientist
Data engineer job in Cincinnati, OH
The Medpace Analytics and Business Intelligence team is growing rapidly and is focused on building a data-driven culture across the enterprise. The BI team uses data and insights to drive increased strategic and operational efficiencies across the organization. As a Business Intelligence Analyst, you will hold a highly visible analytical role that requires interaction and partnership with leadership across the Medpace organization.
What's in this for you?
* Work in a collaborative, fast paced, entrepreneurial, and innovative workplace;
* Gain experience and exposure to advanced BI concepts from visualization to data warehousing;
* Grow business knowledge by working with leadership across all aspects of Medpace's business.
Responsibilities
What's involved?
We are looking for a Junior Business Intelligence Analyst to add additional depth to our growing Analytical team in a variety of areas - from Visualization and Storytelling to SQL, Data Modeling, and Data Warehousing. This role will work in close partnership with leadership, product management, operations, finance, and other technical teams to find opportunities to improve and expand our business.
An ideal candidate in this role will apply great analytical skills, communication skills, and problem-solving skills to continue developing our analytics & BI capabilities. We are looking for team members who thrive in working with complex data sets, conducting deep data analysis, are intensely curious, and enjoy designing and developing long-term solutions.
What you bring to the table - and why we need you!
* Data Visualization skills - Designing and developing key metrics, reports, and dashboards to drive insights and business decisions to improve performance and reduce costs;
* Technical Skills - either experience in, or strong desire to learn fundamental technical skills needed to drive BI initiatives (SQL, DAX, Data Modeling, etc.);
* Communication Skills - Partner with leadership and collaborate with software engineers to implement data architecture and design, to support complex analysis;
* Analytical Skills - Conduct complex analysis and proactively identify key business insights to assist departmental decision making.
Qualifications
* Bachelor's Degree in Business, Life Science, Computer Science, or Related Degree;
* 0-3 years of experience in business intelligence or analytics - Python & R heavily preferred
* Strong analytical and communication skills;
* Excellent organization skills and the ability to multitask while efficiently completing high quality work.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Data Engineer Level 3
Data engineer job in Cincinnati, OH
For over half a decade, Hudson Manpower has been a trusted partner in delivering specialized talent and technology solutions across IT, Energy, and Engineering industries worldwide. We work closely with startups, mid-sized firms, and Fortune 500 clients to support their digital transformation journeys. Our teams are empowered to bring fresh ideas, shape innovative solutions, and drive meaningful impact for our clients. If you're looking to grow in an environment where your expertise is valued and your voice matters, then Hudson Manpower is the place for you. Join us and collaborate with forward-thinking professionals who are passionate about building the future of work.
Core Responsibilities
Design, develop, and optimize scalable ELT/ETL data pipelines leveraging Azure Synapse, Databricks, and PySpark.
Build and manage data lakehouse architectures for high-volume eCommerce, marketing, and behavioral datasets.
Model complex event-level data (clickstream, transactions, campaign interactions) to power dashboards, A/B testing, ML models, and marketing activation.
Implement Delta Lake optimization and Databricks Workflows for performance and reliability (a brief sketch follows this list).
Partner with architects and business analysts to deliver reusable, high-quality data models for BI and self-service analytics.
Ensure data lineage, governance, and compliance using Unity Catalog and tools like Alation.
Collaborate with data scientists to productionize and operationalize datasets for ML and predictive analytics.
Validate and reconcile behavioral data with Adobe Analytics and Customer Journey Analytics (CJA) to ensure accuracy.
Maintain semantic models for Power BI dashboards, supporting self-service and advanced analytical insights.
Actively participate in Agile delivery: sprint planning, backlog refinement, technical documentation, and code reviews.
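As flagged in the Delta Lake bullet above, here is a small hypothetical sketch of that optimization work. It assumes a Databricks cluster and an invented `ecomm.clickstream` table, neither taken from this posting:

```python
# Hypothetical Delta Lake maintenance sketch for an event-level table.
# Assumes a Databricks cluster and an existing `ecomm.clickstream` Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Compact small files and co-locate rows on the columns queries filter by.
spark.sql("OPTIMIZE ecomm.clickstream ZORDER BY (event_date, user_id)")

# Register a toy `updates` view standing in for late-arriving events.
spark.createDataFrame(
    [("e1", "u1", "2024-01-01")], ["event_id", "user_id", "event_date"]
).createOrReplaceTempView("updates")

# Idempotent upsert of late-arriving events into the event table.
spark.sql("""
    MERGE INTO ecomm.clickstream AS tgt
    USING updates AS src
    ON tgt.event_id = src.event_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Clean up files no longer referenced (default 7-day retention window).
spark.sql("VACUUM ecomm.clickstream")
```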
Job requirements
Required Qualifications
7+ years of experience in data engineering, data architecture, or similar roles.
Expertise in Databricks, Azure Synapse, Delta Lake, and Spark-based data processing.
Strong SQL proficiency for data transformation and performance tuning.
Deep experience with modern data architectures (Lakehouse, ELT, streaming).
Proven track record in handling behavioral/eCommerce datasets and analytics use cases.
Hands-on experience with Unity Catalog and metadata governance tools (e.g., Alation).
Experience with Adobe Analytics / CJA data validation preferred.
Familiarity with Power BI, Python, and data orchestration tools (ADF, Airflow).
Strong communication skills with the ability to bridge business needs and engineering solutions.
Experience in Agile environments with proven leadership in technical delivery.
Senior Data Engineer (P358)
Data engineer job in Cincinnati, OH
Senior Data Engineer, AI Enablement (P358)
As a Senior Data Engineer on our AI Enablement team, you will cultivate strategies and solutions to ingest, store and distribute our big data. This role is on our enablement team that builds solutions for monitoring, registering, and tracking our machine learning and AI solutions across 84.51° and develops monitoring and observability pipelines for our internal AI tooling. Our engineers use PySpark, Python, SQL, GitHub Actions, and Databricks/Azure to develop scalable data solutions.
Responsibilities
Take ownership of features and drive them to completion through all phases of the entire 84.51° SDLC. This includes external-facing and internal applications as well as process improvement activities such as:
* Lead design of Python and PySpark based solutions
* Perform development of cloud based (Azure) ETL solutions
* Build and configure cloud infrastructure for all stages of the data development lifecycle
* Execute unit and integration testing
* Develop robust data QA processes
* Collaborate with senior resources to ensure consistent development practices
* Provide mentoring to junior resources
* Build visualizations in Databricks Apps, Databricks Dashboards and PowerBI
* Bring new perspectives to problems and be driven to improve yourself and the way things are done
Qualifications, Skills, and Experience
* Bachelor's degree in Computer Science, Management Information Systems, Mathematics, Business Analytics or another technically strong program.
* 3+ years of professional data engineering experience
* Strong understanding of Agile Principles (Scrum)
* 3+ years of experience developing with Python and PySpark
* Full understanding of ETL concepts and data warehousing concepts
* Experience with CI/CD frameworks (GitHub Actions a plus)
* Experience with visualization techniques and tools like Databricks dashboards or PowerBI a plus
* Languages/Tech stack:
* Python
* PySpark
* Terraform
* Databricks
* GitHub Actions
* Azure
* AKS experience a plus
* Dashboard experience a plus
* WebApp experience a plus
PLEASE NOTE:
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with the Kroger Family of Companies (e.g., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
#LI-SSS
Senior Data Engineer (P3005)
Data engineer job in Cincinnati, OH
Who we are
As a full-stack data science subsidiary of The Kroger Company, we leverage over 10 petabytes of data to personalize the experience for 62 million households. We are seeking a hands-on Senior Data Engineer (G2) to design, build, and operate our analytical lakehouse on a modern data stack. As a senior individual contributor on a hybrid scrum team, you will partner closely with Product, Analytics, and Engineering to deliver scalable, high-quality data products on Databricks and Azure. You will contribute to technical direction, uphold engineering best practices, and remain deeply involved in coding, testing, and production operations, without people management responsibilities.
Key Responsibilities
* Data engineering delivery: Design, develop, and optimize secure, scalable batch and near-real-time pipelines on Databricks (PySpark/SQL) with Delta Lake and Delta Live Tables (DLT). Implement medallion architecture, Unity Catalog governance, and robust data quality checks (expectations/testing). Build performant data models and tables to power analytics, ML, and downstream applications (a minimal DLT sketch follows this list).
* Product collaboration and agile execution: Translate business requirements into data contracts, schemas, and SLAs in partnership with Product and Analytics. Participate in backlog refinement, estimation, sprint planning, and retros in a hybrid onshore/offshore environment. Deliver clear documentation (designs, runbooks, data dictionaries) to enable self-serve and reuse
* Reliability, observability, and operations: Implement monitoring, alerting, lineage, and cost/performance telemetry; troubleshoot and tune Spark jobs and storage. Participate in on-call/incident response rotations and drive post-incident improvements
* CI/CD, and infrastructure as code: Contribute to coding standards, code reviews, and reusable patterns/modules. Build and maintain CI/CD pipelines (GitHub Actions) and manage infrastructure with Terraform (data, compute, secrets, policies)
* Continuous improvement and knowledge sharing: Mentor peers informally, share best practices, and help evaluate/introduce modern tools and patterns
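Here is the minimal, hypothetical Delta Live Tables sketch referenced in the delivery bullet above: a bronze-to-silver medallion step guarded by a data-quality expectation. The landing path, table names, and columns are invented, not taken from this team's pipelines:

```python
# Hypothetical DLT sketch: bronze ingest plus a silver table guarded by an
# expectation. `spark` is injected by the DLT runtime; all names are invented.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw transactions landed from cloud storage (bronze).")
def bronze_transactions():
    return (
        spark.readStream.format("cloudFiles")   # Auto Loader ingest
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/transactions")      # assumed landing path
    )

@dlt.table(comment="Cleaned transactions (silver).")
@dlt.expect_or_drop("valid_household", "household_id IS NOT NULL")
def silver_transactions():
    return (
        dlt.read_stream("bronze_transactions")
        .withColumn("ingested_at", F.current_timestamp())
    )
```

The expectation decorator is what operationalizes the "data quality checks" language: rows failing the predicate are dropped and counted in pipeline metrics rather than silently propagated.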
Required Qualifications
* Experience: 4-6 years in data engineering; 1-2 years operating as a senior/lead individual contributor on delivery-critical projects. Proven track record delivering production-grade pipelines and data products on cloud platforms.
* Core technical skills: Databricks: 2-3+ years with Spark (PySpark/SQL); experience building and operating DLT pipelines and Delta Lake. Azure: Proficiency with ADLS Gen2, Entra ID (Azure AD), Key Vault, Databricks on Azure, and related services. Languages and tools: Expert-level SQL and strong Python; Git/GitHub, unit/integration/data testing, and performance tuning. Infrastructure as code: Hands-on Terraform for data platform resources and policies. Architecture: Solid understanding of medallion and dimensional modeling, data warehousing concepts, and CI/CD best practices
* Collaboration and communication: Excellent communicator with the ability to work across Product, Analytics, Security, and Platform teams in an agile setup
Preferred qualifications
* Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience)
* Azure or Databricks certifications (e.g., Data Engineer Associate/Professional)
* Experience with ELT tools (e.g., Fivetran), Snowflake, and streaming (Event Hubs, Kafka)
* Familiarity with AI-ready data practices and AI developer tools (e.g., GitHub Copilot)
* Exposure to FinOps concepts and cost/performance optimization on Databricks and Azure
The opportunity
* Build core data products that power personalization for millions of customers at enterprise scale
* Work with modern tooling (Databricks, Delta Lake/DLT, Unity Catalog, Terraform, GitHub Actions) in a collaborative, growth-minded culture
* Hybrid work, competitive compensation, comprehensive benefits, and clear paths for advancement
PLEASE NOTE:
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with the Kroger Family of Companies (e.g., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
#LI-SSS
Junior Data Engineer
Data engineer job in Cincinnati, OH
Our corporate activities are growing rapidly, and we are currently seeking a full-time, office-based Junior Data Engineer to join our Information Technology team. This position will work on a team to accomplish tasks and projects that are instrumental to the company's success. If you want an exciting career where you use your previous expertise and can develop and grow your career even further, then this is the opportunity for you.
Responsibilities
* Utilize skills in development areas including data warehousing, business intelligence, and databases (Snowflake, ANSI SQL, SQL Server, T-SQL);
* Support programming/software development using Extract, Transform, and Load (ETL) and Extract, Load, and Transform (ELT) tools (dbt, Azure Data Factory, SSIS; a brief ELT sketch follows this list);
* Design, develop, enhance and support business intelligence systems primarily using Microsoft Power BI;
* Collect, analyze and document user requirements;
* Participate in software validation process through development, review, and/or execution of test plan/cases/scripts;
* Create software applications by following software development lifecycle process, which includes requirements gathering, design, development, testing, release, and maintenance;
* Communicate with team members regarding projects, development, tools, and procedures; and
* Provide end-user support including setup, installation, and maintenance for applications.
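As referenced in the ELT bullet above, here is a brief hypothetical sketch of an ELT step in the spirit of dbt: load raw rows into Snowflake, then transform them in-warehouse with SQL. The credentials, stage, and table names are invented:

```python
# Hypothetical ELT step: load landed files into a raw Snowflake table, then
# transform in-warehouse (the "T" in ELT). All names/credentials are invented.
import snowflake.connector

conn = snowflake.connector.connect(
    user="svc_etl", password="...", account="example-account",  # assumed creds
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Load: copy landed files from a stage into a raw table.
cur.execute("COPY INTO raw_visits FROM @landing_stage/visits/ FILE_FORMAT=(TYPE=CSV)")

# Transform: build a reporting table from the raw data inside the warehouse.
cur.execute("""
    CREATE OR REPLACE TABLE daily_visits AS
    SELECT site_id, visit_date, COUNT(*) AS visits
    FROM raw_visits
    GROUP BY site_id, visit_date
""")
conn.close()
```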
Qualifications
* Bachelor's Degree in Computer Science, Data Science, or a related field;
* Internship experience in Data or Software Engineering;
* Knowledge of developing dimensional data models and awareness of the advantages and limitations of Star Schema and Snowflake schema designs;
* Solid ETL development and reporting knowledge based on an intricate understanding of business processes and measures;
* Knowledge of Snowflake cloud data warehouse, Fivetran data integration and dbt transformations is preferred;
* Knowledge of Python is preferred;
* Knowledge of REST API;
* Basic knowledge of SQL Server databases is required;
* Knowledge of C# and Azure development is a bonus; and
* Excellent analytical, written and oral communication skills.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.