Data Architect
Data engineer job in Cincinnati, OH
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
MOSTLY REMOTE WITH 1 DAY/MO ONSITE IN CINCINNATI; LOCAL CANDIDATES ARE PREFERRED
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavily used production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing); a short illustrative sketch follows this list.
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
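As a minimal illustration of the SQL query-optimization skill named above (assuming only Python's built-in sqlite3 module, with invented table and column names), the sketch below shows how adding an index changes a query plan:

```python
import sqlite3

# Hypothetical claims table standing in for a client database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, payer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO claims (payer_id, amount) VALUES (?, ?)",
    [(i % 50, i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM claims WHERE payer_id = ?"

# Without an index, SQLite falls back to a full table scan.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (7,)).fetchall())

# Add a covering index and re-check: the plan now uses the index.
conn.execute("CREATE INDEX idx_claims_payer ON claims (payer_id, amount)")
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (7,)).fetchall())
```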
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms, even those outside the scope of this project.
Databricks Engineer - 25-03206
Data engineer job in Cincinnati, OH
The Client team is looking for a Data Engineer experienced in implementing data solutions in Azure. The Data Engineer will analyze, design and develop enterprise data and information architecture deliverables, focusing on data as an asset for the enterprise. The Data Engineer will also support the implementation of Infrastructure as Code (IaC) by working with teams to help engineer scalable, reliable, and resilient software running in the cloud.
Accountable for developing and delivering technological responses to targeted business outcomes.
Analyze, design and develop enterprise data and information architecture deliverables, focusing on data as an asset for the enterprise. Understand and follow reusable standards, design patterns, guidelines, and configurations to deliver valuable data and information across the enterprise, including direct collaboration with 84.51, where needed.
2+ years' experience with production automation systems (Ansible Tower, Jenkins, Puppet, or Selenium)
Working knowledge of databases and SQL
Experience with software development methodologies and the SDLC
Candidates must possess a problem-solving attitude and be able to work independently
Must be very organized, able to balance multiple priorities, and self-motivated
Key Responsibilities
Experience in administration and configuration of API gateways (e.g., Apigee or Kong); a short illustrative sketch follows this list
Apply cloud computing skills to deploy upgrades and fixes
Design, develop, and implement integrations based on user feedback.
Troubleshoot production issues and coordinate with the development team to streamline code deployment.
Implement automation tools and frameworks (CI/CD pipelines).
Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of products.
Collaborate with team members to improve the company's engineering tools, systems and procedures, and data security.
Deliver quality customer service and resolve end-user issues in a timely manner
Draft and review architectural diagrams, interface specifications and other design documents
Innovate, develop, and drive the development and communication of data strategy and roadmaps across the technology organization to support the project portfolio and business strategy
Drive the development and communication of enterprise standards for data domains and data solutions, focusing on simplified integration and streamlined operational and analytical uses
Drive digital innovation by leveraging innovative new technologies and approaches to renovate, extend, and transform the existing core data assets, including SQL-based, NoSQL-based, and Cloud-based data platforms
Define high-level migration plans to address the gaps between the current and future state, typically in sync with the budgeting or other capital planning processes
Lead the analysis of the technology environment to detect critical deficiencies and recommend solutions for improvement
Mentor team members in data principles, patterns, processes and practices
Promote the reuse of data assets, including the management of the data catalog for reference
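As a hypothetical sketch of the API gateway administration responsibility above (assuming Kong's Admin API on its default port 8001; the service name and upstream URL are invented), registering and routing a service might look like:

```python
import requests

KONG_ADMIN = "http://localhost:8001"  # Kong's default Admin API address

# Register a hypothetical upstream service with the gateway.
svc = requests.post(
    f"{KONG_ADMIN}/services",
    json={"name": "orders-api", "url": "http://orders.internal:8080"},
)
svc.raise_for_status()

# Expose it on a public path by attaching a route to the service.
route = requests.post(
    f"{KONG_ADMIN}/services/orders-api/routes",
    json={"paths": ["/orders"]},
)
route.raise_for_status()
print(route.json())
```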
Data Scientist
Data engineer job in Cincinnati, OH
Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), we want you on our team at P&G.
As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will be expected to constructively research, design, and customize algorithms tailored to various problems and data types. Utilizing your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will directly contribute to the development of scalable Data Science algorithms and collaborate with Data and Software Engineering teams to productionize these solutions. Your technical knowledge will empower you to apply exploratory data analysis, feature engineering, and model building on massive datasets, delivering accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.
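As a small, hypothetical illustration of the tree-model and feature-engineering work described above (scikit-learn on synthetic data; nothing here is specific to P&G):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large business dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A tree-based model of the kind the role description mentions.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```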
Key Responsibilities:
+ Algorithm Design & Development: Directly contribute to the design and development of scalable Data Science algorithms.
+ Collaboration: Work closely with Data and Software Engineering teams to effectively productionize algorithms.
+ Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building.
+ Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
+ Expertise Development: Become a known expert in one or multiple Data Science techniques and methodologies.
Job Qualifications
Required Qualifications:
+ Education: Currently pursuing or holding a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.), or equivalent work experience.
+ Technical Skills: Proficient in programming languages such as Python and familiar with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas.
+ Communication: Strong written and verbal communication skills, with the ability to influence others to take action.
Preferred Qualifications:
+ Analytic Methodologies: Experience applying analytic methodologies such as Machine Learning, Optimization, and Simulation to real-world problems.
+ Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
+ Data Handling: Experience with large datasets and cloud computing platforms such as GCP or Azure.
+ DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.
Compensation for roles at P&G varies depending on a wide array of non-discriminatory factors including but not limited to the specific office location, role, degree/credentials, relevant skill set, and level of relevant experience. At P&G compensation decisions are dependent on the facts and circumstances of each case. Total rewards at P&G include salary + bonus (if applicable) + benefits. Your recruiter may be able to share more about our total rewards offerings and the specific salary range for the relevant location(s) during the hiring process.
We are committed to providing equal opportunities in employment. We value diversity and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Immigration Sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G along with other work authorization FAQs, please click HERE (*******************************************************).
Procter & Gamble participates in E-Verify as required by law.
Qualified individuals will not be disadvantaged based on being unemployed.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Job Schedule
Full time
Job Number
R000135859
Job Segmentation
Entry Level
Starting Pay / Salary Range
$85,000.00 - $115,000.00 / year
ETL Architect
Data engineer job in Cincinnati, OH
Job title: ETL Architect
DURATION: 18 months
YEARS OF EXPERIENCE: 7-10
INTERVIEW TYPE: Phone Screen to Hire
REQUIRED SKILLS
• Experience with DataStage and ETL design
Technical
• Requirement gathering; converting business requirements to technical specs to profile
• Hands-on work on a minimum of 2 projects with DataStage
• Understand the process of developing an ETL design that supports multiple DataStage developers
• Be able to create an ETL design framework and related specifications for use by ETL developers
• Define DataStage ETL standards and best practices to be followed by all DataStage developers
• Understanding of data warehouse and data mart concepts, plus implementation experience
• Be able to review produced code to ensure conformance with the developed ETL framework and design for reuse
• Preferably, experienced user-level competency in IBM's metadata product and the DataStage and InfoSphere product line
• Be able to design ETL for Oracle, SQL Server, or any database
• Good analytical and process design skills
• Ensuring compliance with quality standards and delivery timelines.
Qualifications
Bachelors
Additional Information
Required Skills:
Job Description:
Performs highly complex application programming/systems development and support. Performs highly complex configuration of business rules and technical parameters of software products. Reviews business requirements and develops application design documentation. Builds technical components (Maximo objects, TRM Rules, Java extensions, etc.) based on detailed design.
Performs unit testing of components along with completing necessary documentation. Supports product test, user acceptance test, etc., as a member of the fix-it team. Employs consistent measurement techniques. Includes testing in project plans and establishes controls to require adherence to test plans. Manages the interrelationships among various projects or work objectives.
Junior Data Scientist
Data engineer job in Cincinnati, OH
The Medpace Analytics and Business Intelligence team is growing rapidly and is focused on building a data-driven culture across the enterprise. The BI team uses data and insights to drive increased strategic and operational efficiencies across the organization. As a Business Intelligence Analyst, you will hold a highly visible analytical role that requires interaction and partnership with leadership across the Medpace organization.
What's in this for you?
* Work in a collaborative, fast paced, entrepreneurial, and innovative workplace;
* Gain experience and exposure to advanced BI concepts from visualization to data warehousing;
* Grow business knowledge by working with leadership across all aspects of Medpace's business.
Responsibilities
What's involved?
We are looking for a Junior Business Intelligence Analyst to add additional depth to our growing Analytical team in a variety of areas - from Visualization and Storytelling to SQL, Data Modeling, and Data Warehousing. This role will work in close partnership with leadership, product management, operations, finance, and other technical teams to find opportunities to improve and expand our business.
An ideal candidate in this role will apply great analytical skills, communication skills, and problem-solving skills to continue developing our analytics & BI capabilities. We are looking for team members who thrive in working with complex data sets, conducting deep data analysis, are intensely curious, and enjoy designing and developing long-term solutions.
What you bring to the table - and why we need you!
* Data Visualization skills - Designing and developing key metrics, reports, and dashboards to drive insights and business decisions to improve performance and reduce costs;
* Technical Skills - either experience in, or strong desire to learn fundamental technical skills needed to drive BI initiatives (SQL, DAX, Data Modeling, etc.);
* Communication Skills - Partner with leadership and collaborate with software engineers to implement data architecture and design, to support complex analysis;
* Analytical Skills - Conduct complex analysis and proactively identify key business insights to assist departmental decision making.
Qualifications
* Bachelor's Degree in Business, Life Science, Computer Science, or Related Degree;
* 0-3 years of experience in business intelligence or analytics - Python & R heavily preferred
* Strong analytical and communication skills;
* Excellent organization skills and the ability to multitask while efficiently completing high quality work.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Cincinnati Perks
* Cincinnati Campus Overview
* Flexible work environment
* Competitive PTO packages, starting at 20+ days
* Competitive compensation and benefits package
* Company-sponsored employee appreciation events
* Employee health and wellness initiatives
* Community involvement with local nonprofit organizations
* Discounts on local sports games, fitness gyms and attractions
* Modern, ecofriendly campus with an on-site fitness center
* Structured career paths with opportunities for professional growth
* Discounted tuition for UC online programs
Awards
* Named a Top Workplace in 2024 by The Cincinnati Enquirer
* Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024
* Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What to Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
Lead Data Engineer (P4031)
Data engineer job in Cincinnati, OH
84.51° is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.
Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Kroger Plus loyalty card program to fuel a more customer-centric journey using 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.
Join us at 84.51°!
__________________________________________________________
Lead Data Engineer (P4031)
Cincinnati / Chicago
SUMMARY:
The Lead Data Engineer serves as both a technical leader and an individual contributor within the data engineering team, embodying a player/coach role. This position is responsible for guiding and mentoring a team of data engineers while actively participating in data engineering projects to deliver results that support organizational objectives. As a technical lead, you will balance team leadership with hands-on contributions to drive innovation and excellence in data engineering. You will cultivate strategies and solutions to ingest, store and distribute our big data. Our developers use Python (PySpark, FastAPI), Databricks, and Azure cloud services in six-week scrum cycles to develop our products, tools, and features.
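For illustration only, a minimal PySpark sketch of the kind of aggregation work this stack implies (assuming a local SparkSession; the column names are invented):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

# Hypothetical transaction data standing in for real retail feeds.
df = spark.createDataFrame(
    [("hh1", "dairy", 3.49), ("hh1", "bakery", 2.99), ("hh2", "dairy", 5.25)],
    ["household_id", "department", "spend"],
)

# A typical shaping step: per-household spend by department.
summary = df.groupBy("household_id", "department").agg(F.sum("spend").alias("total_spend"))
summary.show()
```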
RESPONSIBILITIES: Take ownership of features and drive them to completion through all phases of the entire 84.51° SDLC. This includes external-facing and internal applications as well as process improvement activities such as:
Lead design and perform development of Python (PySpark, FastAPI) / Databricks / Azure based solutions
Develop and execute unit and integration testing
Collaborate with senior resources to ensure consistent development practices
Provide mentoring to junior resources
Bring new perspectives to problems and be driven to improve yourself and the way things are done
Manage resourcing across key initiatives in support of domain roadmaps and initiatives
QUALIFICATIONS, SKILLS, AND EXPERIENCE:
Bachelor's degree typically in Computer Science, Management Information Systems, Mathematics, Business Analytics or another technically strong program.
5+ years of proven professional data development experience
Strong understanding of Agile Principles (Scrum)
5+ years of proven experience developing Spark-based solutions in a cloud environment
Full understanding of ETL concepts and data warehousing concepts
3+ years of development experience with Python
Experience with Databricks, REST APIs, and Microsoft Azure cloud services
#LI-SSS
Pay Transparency and Benefits
The stated salary range represents the entire span applicable across all geographic markets from lowest to highest. Actual salary offers will be determined by multiple factors including but not limited to geographic location, relevant experience, knowledge, skills, other job-related qualifications, and alignment with market data and cost of labor. In addition to salary, this position is also eligible for variable compensation.
Below is a list of some of the benefits we offer our associates:
Health: Medical: with competitive plan designs and support for self-care, wellness and mental health. Dental: with in-network and out-of-network benefit. Vision: with in-network and out-of-network benefit.
Wealth: 401(k) with Roth option and matching contribution. Health Savings Account with matching contribution (requires participation in qualifying medical plan). AD&D and supplemental insurance options to help ensure additional protection for you.
Happiness: Paid time off with flexibility to meet your life needs, including 5 weeks of vacation time, 7 health and wellness days, 3 floating holidays, as well as 6 company-paid holidays per year. Paid leave for maternity, paternity and family care instances.
Pay Range: $121,000-$201,250 USD
Data Engineer IV
Data engineer job in Cincinnati, OH
Job Title: Data Engineer IV
Pay rate: $70/hr on W2.
USC and GC Holder candidates only.
TOP SKILLS:
Must Have
Python
SQL
Nice To Have
AWS Sagemaker
DBT
Snowflake
What You'll Do
Squad: Machine Learning Data Enablement squad in the Data Insights Tribe
Required: In office 4 days a week minimum (Monday-Thursday)
We're hiring a Data Engineer to join our newly launched Machine Learning Data Enablement team at Fifth Third Bank. This team is focused on building high-quality, scalable data pipelines that power machine learning models across the enterprise, deployed in AWS SageMaker.
We're looking for an early-career professional who's excited to grow in a hands-on data engineering role. Ideal candidates will have experience working on machine learning-related projects or have partnered with data science teams to support model development and deployment - and have a strong interest in enabling ML workflows through robust data infrastructure.
You'll work closely with data scientists and ML engineers to deliver curated, production-ready datasets and help shape how machine learning data is delivered across the bank. You should have solid SQL and Python skills, a collaborative mindset, and a strong interest in modern data tooling. Experience with Snowflake, dbt, or cloud data platforms is a strong plus. Familiarity with ML tools like SageMaker or Databricks is helpful but not required - we're happy to help you learn.
This is a hands-on role with high visibility and high impact. You'll be joining a team at the ground level, helping to define how data powers machine learning at scale.
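Purely as a hedged sketch of the curated, production-ready dataset work described above (pandas only; the feature names and rules are invented, not Fifth Third's):

```python
import pandas as pd

# Hypothetical raw account activity, standing in for an upstream extract.
raw = pd.DataFrame({
    "account_id": [1, 1, 2, 2, 2],
    "txn_amount": [120.0, -30.0, 55.5, None, 10.0],
    "txn_date": pd.to_datetime(
        ["2024-01-02", "2024-01-05", "2024-01-03", "2024-01-04", "2024-01-06"]
    ),
})

# Curate: drop incomplete rows, then aggregate into model-ready features.
clean = raw.dropna(subset=["txn_amount"])
features = clean.groupby("account_id").agg(
    txn_count=("txn_amount", "size"),
    total_amount=("txn_amount", "sum"),
    last_txn=("txn_date", "max"),
).reset_index()

# A basic quality gate before handing the table to a model-training job.
assert features["account_id"].is_unique
print(features)
```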
What You'll Get
Competitive base salary
Medical, dental, and vision insurance coverage
Optional life and disability insurance provided
401(k) with a company match and optional profit sharing
Paid vacation time
Paid Bench time
Training allowance offering
You'll be eligible to earn referral bonuses!
Job requirements
Python
SQL
Nice To Have
AWS Sagemaker
DBT
Snowflake
Data Engineer
Data engineer job in Cincinnati, OH
Insight Global is looking for a data engineer contractor for one of their top financial clients. The following are the requirements for the role:
- Bachelor's degree in Computer Science/Information Systems or equivalent combination of education and experience.
- Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources in language that is appropriate for each group.
- Four+ years of relevant IT experience in data engineering or related disciplines.
- Significant experience with at least one major relational database management system (RDBMS).
- Experience working with and supporting Unix/Linux and Windows systems.
- Proficiency in relational database modeling concepts and techniques.
- Solid conceptual understanding of distributed computing principles and scalable data architectures.
- Working knowledge of application and data security concepts, best practices, and common vulnerabilities.
- Experience in one or more of the following disciplines preferred: scalable data platforms and modern data architectures technologies and distributions, metadata management products, commercial ETL tools, data reporting and visualization tools, messaging systems, data warehousing, major version control systems, continuous integration/delivery tools, infrastructure automation and virtualization tools, major cloud platforms (AWS, Azure, GCP), or REST API design and development.
- Previous experience working with offshore teams desired.
- Financial industry experience, especially Regulatory Reporting, is a plus.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************
Skills and Requirements
4+ years in Business Intelligence - Data Engineering
Experience working within Data Stage
Experience working in DBT transformation framework
Experience working in Cloud Native data platforms, specifically Snowflake
Experience with SQL for interacting with relational databases
Regulatory Reporting experience
Data Engineer III
Data engineer job in Cincinnati, OH
We are seeking an experienced Data Engineer III. The ideal candidate will be responsible for working with business analysts, data engineers and upstream teams to understand impacts to data sources. Take the requirements and update/build ETL data pipelines using Datastage and DBT for ingestion into Financial Crimes applications. Perform testing and ensure data quality of updated data sources.
Job Summary: Handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and also research new uses for data acquisition. Required to know and understand the ins and outs of the industry, such as data mining practices, algorithms, and how data can be used.
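As a loose illustration of the dbt side of this pipeline work, a minimal dbt Python model is sketched below (assumes dbt 1.3+ on an adapter such as Databricks, where dbt.ref returns a DataFrame; the model and column names are hypothetical):

```python
# models/curated_alerts.py -- hypothetical dbt Python model
def model(dbt, session):
    dbt.config(materialized="table")

    # Pull an upstream staging model by reference, keeping lineage intact.
    txns = dbt.ref("stg_transactions")

    # Simple curation step: keep only rows flagged for review.
    return txns.filter(txns["review_flag"] == True)
```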
Primary Responsibilities:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
Install/update disaster recovery procedures.
Recommend different ways to constantly improve data reliability and quality.
Qualifications
Locals are highly preferred. Open to the possibility of relocation from within the state of Ohio, with no assistance provided.
Technical Degree or related work experience
Experience with non-relational & relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, etc.)
Experience programming and/or architecting a back-end language (Java, J2EE, etc)
Business Intelligence - Data Engineering
ETL DataStage Developer
SQL
Strong communication skills, ability to collaborate with members of your team
Data Analysis Engineer
Data engineer job in Covington, KY
DUTIES/RESPONSIBILITIES
* Execute Quality Data Analysis
* Administer Root Cause Analysis
* Data Collection and Management
* Data Analysis and Interpretation
* Excellent analytical and problem-solving skills
* Solid understanding of quality management principles, root cause analysis, and corrective action processes
REQUIREMENTS
* Must be legally permitted to work in the United States
* Proficiency in using quality management software and tools, as well as Microsoft Office applications
* Problem-solving mindset and the ability to work well under pressure to meet deadlines
* Strong analytical skills and attention to detail, with the ability to interpret data and trends to drive informed decisions
EDUCATION
* Bachelor's degree related to Quality or a similar industry
EXPERIENCE
* 5+ years of Quality experience
* Experience with Machine Learning
* Experience with Big Data Analysis & Automation
* Experience with Yield Management System from SK Hynix/Samsung Semiconductor
Data Engineer
Data engineer job in Cincinnati, OH
About AMEND: AMEND is a management consulting firm based in Cincinnati, OH with areas of focus in operations, analytics, and technology. We are focused on strengthening the people, processes, and systems in organizations to generate a holistic transformation. Our three-tiered approach provides a distinct competitive edge and allows us to build strong relationships and create customized solutions for every client. This is an incredible time to step into a growing team where everyone is aligned to a common goal to change lives, transform businesses, and make a positive impact on anything we touch.
Overview:
The Data Engineer consultant role is an incredibly exciting position in the fastest-growing segment of AMEND. You will be working to solve real-world problems by designing cutting-edge analytic solutions while surrounded by a team of world-class talent. You will be entering an environment of explosive growth with ample opportunity for development. We are looking for individuals who can go into a client and optimize (or re-design) the company's data architecture; who combine the roles of change agent and technical leader; and who are passionate about transforming companies for the better. We need someone who is a problem solver, a critical thinker, and always eager to go after new things; you'll never be doing the same thing twice!
Job Tasks:
Create and maintain optimal data pipeline architecture
Assemble large, complex data sets that meet functional / non-functional business requirements
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
Define project requirements by identifying project milestones, phases, and deliverables
Execute project plan, report progress, identify and resolve problems, and recommend further actions
Delegate tasks to appropriate resources as project requirements dictate
Design, develop, and deliver audience training and adoption methods and materials
Qualifications:
Advanced working knowledge of SQL, including query authoring, and experience working with a variety of relational databases. Databricks and DBT experience is a plus
Experience building and optimizing data pipelines, architectures, and data sets
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Strong analytic skills related to working with structured and unstructured datasets
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
A successful history of manipulating, processing, and extracting value from large, disconnected datasets
Ability to interface with multiple other business functions (internally and externally)
Desire to build analytical competencies in others within the business
Curiosity to ask questions and challenge the status quo
Creativity to devise out-of-the-box solutions
Ability to travel as needed to meet client requirements
What's in it for you?
Competitive pay and bonus
Travel incentive bonus structure
Flexible time off
Investment in your growth and development
Full health, vision, dental, and life benefits
Paid parental leave
3:1 charity match
All this to say - we are looking for talented people who are excited to make an impact on our clients. If this job description isn't a perfect match for your skillset, but you are talented, eager to learn, and passionate about our work, please apply! Our recruiting process is centered around you as an individual and finding the best place for you to thrive at AMEND, whether it be with the specific title on this posting or something different. One recruiting conversation with us has the potential to open you up to our entire network of opportunities, so why not give it a shot? We're looking forward to connecting with you.
*Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of employment Visa at this time.*
Senior Data Engineer
Data engineer job in Blue Ash, OH
Job Description
The Engineer is responsible for staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks. The Engineer has overall responsibility in the technical design process, leading and participating in the application technical design process and completing estimates and work plans for design, development, implementation, and rollout tasks. The Engineer also communicates with the appropriate teams to ensure that assignments are delivered with the highest quality and in accordance with standards. The Engineer strives to continuously improve the software delivery processes and practices, and role models and demonstrates the company's core values of respect, honesty, integrity, diversity, inclusion and safety of others.
Current tools and technologies include:
Databricks and Netezza
Key Responsibilities
Lead and participate in the design and implementation of large and/or architecturally significant applications.
Champion company standards and best practices. Work to continuously improve software delivery processes and practices.
Build partnerships across the application, business and infrastructure teams.
Set up the new customer data platform, migrating from Netezza to Databricks
Complete estimates and work plans independently as appropriate for design, development, implementation and rollout tasks.
Communicate with the appropriate teams to ensure that assignments are managed appropriately and that completed assignments are of the highest quality.
Support and maintain applications utilizing required tools and technologies.
May direct the day-to-day work activities of other team members.
Must be able to perform the essential functions of this position with or without reasonable accommodation.
Work quickly with the team to implement the new platform.
Be onsite with development team when necessary.
Behaviors/Skills:
Puts the Customer First - Anticipates customer needs, champions the customer, acts with customers in mind, exceeds customers' expectations, gains customers' trust and respect.
Communicates effectively and candidly - Communicates clearly and directly, approachable, relates well to others, engages people and helps them understand change, provides and seeks feedback, articulates clearly, actively listens.
Achieves results through teamwork - Is open to diverse ideas, works inclusively and collaboratively, holds self and others accountable, involves others to accomplish individual and team goals.
Note to Vendors
Length of Contract: 9 months
Top skills: Databricks, Netezza
Soft Skills Needed: collaborating well with others, working in a team dynamic
Project person will be supporting: staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks that will replace Netezza
Team details (i.e., size, dynamics, locations): most of the team is located in Cincinnati, working onsite at the BTD
Work Location (in office, hybrid, remote): onsite at the BTD when necessary, approximately 2-3 days a week
Is travel required: No
Max Rate (if applicable): best market rate
Required Working Hours: 8-5 EST
Interview process and when will it start: starting with one interview; the process may change
Prescreening Details: standard questions. Scores will carry over.
When do you want this person to start: looking to hire quickly; the team is looking to move fast.
Go Anywhere SFTP Data Engineer
Data engineer job in Cincinnati, OH
* Maintain robust data pipelines for ingesting and processing data from various sources, including SFTP servers.
* Implement and manage automated SFTP data transfers, ensuring data security, integrity, and timely delivery.
* Configure and troubleshoot SFTP connections, including handling authentication, key management, and directory structures (a brief illustrative sketch follows this list).
* Develop and maintain scripts or tools for automating SFTP-related tasks, such as file monitoring, error handling, and data validation.
* Collaborate with external teams and vendors to establish and maintain secure SFTP connections for data exchange.
* Ensure compliance with data security and governance policies related to SFTP transfers.
* Monitor and optimize SFTP performance, addressing any bottlenecks or issues.
* Document SFTP integration processes, configurations, and best practices.
* Responsible for providing monthly SOC controls.
* Experience with Solimar software.
* Responsible for periodic software updates and patching.
* Manage open incidents.
* Responsible for after-hours and weekend on-call duties
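To give a flavor of the scripted SFTP automation above, here is a minimal, hypothetical sketch using the widely adopted paramiko library (the host, credentials, and paths are invented):

```python
import paramiko

# Hypothetical connection details for a partner SFTP server.
HOST, USER, KEY_PATH = "sftp.partner.example.com", "svc_transfer", "/etc/keys/id_rsa"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin known hosts in production
client.connect(HOST, username=USER, key_filename=KEY_PATH)

sftp = client.open_sftp()
try:
    # A simple monitoring pass: pull any .csv files from the inbound directory.
    for name in sftp.listdir("/inbound"):
        if name.endswith(".csv"):
            sftp.get(f"/inbound/{name}", f"/data/landing/{name}")
finally:
    sftp.close()
    client.close()
```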
* Minimum of 3-5 years of related work experience
* Experience with Microsoft Software and associated server tools.
* Experience with GoAnywhere managed file transfer (MFT) solution.
* Experience with WinSCP
* Experience with Azure Cloud
* Proven experience in data engineering, with a strong emphasis on data ingestion and pipeline development.
* Demonstrated expertise in working with SFTP for data transfer and integration.
* Proficiency in scripting languages (e.g., Python, Shell) for automating SFTP tasks.
* Familiarity with various SFTP clients, servers, and related security protocols.
* Understanding of data security best practices, including encryption and access control for SFTP.
* Experience with cloud platforms (e.g., AWS, Azure, GCP) and their SFTP integration capabilities is a plus.
* Strong problem-solving and troubleshooting skills related to data transfer and integration issues.
Salary Range- $80,000-$85,000 a year
#LI-SP3
#LI-VX1
Salesforce Data 360 Architect
Data engineer job in Cincinnati, OH
Who You'll Work With In our Salesforce business, we help our clients bring the most impactful customer experiences to life and we do that in a way that makes our clients the hero of their transformation story. We are passionate about and dedicated to building a diverse and inclusive team, recognizing that diverse team members who are celebrated for bringing their authentic selves to their work build solutions that reach more diverse populations in innovative and impactful ways. Our team is comprised of customer strategy experts, Salesforce-certified experts across all Salesforce capabilities, industry experts, organizational and cultural change consultants, and project delivery leaders. As the 3rd largest Salesforce partner globally and in North America, we are committed to growing and developing our Salesforce talent, offering continued growth opportunities, and exposing our people to meaningful work that aligns to their personal and professional goals.
We're looking for individuals who have experience implementing Salesforce Data Cloud or similar platforms and are passionate about customer data. The ideal candidate has a desire for continuous professional growth and can deliver complex, end-to-end Data Cloud implementations from strategy and design, through to data ingestion, segment creation, and activation; all while working alongside both our clients and other delivery disciplines. Our Global Salesforce team is looking to add a passionate Principal or Senior Principal to take on the role of Data Cloud Architect within our Salesforce practice.
What You'll Do:
Responsible for business requirements gathering, architecture design, data ingestion and modeling, identity resolution setup, calculated insight configuration, segment creation and activation, end-user training, and support procedures
Lead technical conversations with both business and technical client teams; translate those outcomes into well-architected solutions that best utilize Salesforce Data Cloud and the wider Salesforce ecosystem
Ability to direct technical teams, both internal and client-side
Provide subject matter expertise as warranted via customer needs and business demands
Build lasting relationships with key client stakeholders and sponsors
Collaborate with digital specialists across disciplines to innovate and build premier solutions
Participate in compiling industry research, thought leadership and proposal materials for business development activities
Experience with scoping client work
Experience with hyperscale data platforms (e.g., Snowflake), robust database modeling, and data governance is a plus.
What You'll Bring:
Have been part of at least one Salesforce Data Cloud implementation
Familiarity with Salesforce's technical architecture: APIs, Standard and Custom Objects, APEX. Proficient with ANSI SQL and supported functions in Salesforce Data Cloud
Strong proficiency in presenting complex business and technical concepts using visualization aids
Ability to conceptualize and craft sophisticated wireframes, workflows, and diagrams
Strong understanding of data management concepts, including data quality, data distribution, data modeling and data governance
Detailed understanding of the fundamentals of digital marketing and complementary Salesforce products that organizations may use to run their business. Experience defining strategy, developing requirements, and implementing practical business solutions.
Experience in delivering projects using Agile-based methodologies
Salesforce Data Cloud certification preferred
Additional Salesforce certifications like Administrator are a plus
Strong interpersonal skills
Bachelor's degree in a related field preferred, but not required
Open to travel (up to 50%)
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this role, we are hiring at the following levels and salary ranges:
East Bay, San Francisco, Silicon Valley:
Principal: $145,000-$225,000
San Diego, Los Angeles, Orange County, Seattle, Boston, Houston, New Jersey, New York City, Washington DC, Westchester:
Principal: $133,000-$206,000
All other locations:
Principal: $122,000-$189,000
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
We are committed to pay transparency and compliance with applicable laws. If you have questions or concerns about the pay range or other compensation information in this posting, please contact us at: ********************.
We will accept applications until December 31, 2025 or until the position is filled.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team or contact ****************************** if you require accommodations during the interview process.
T&O Data Source and System Origination Leader
Data engineer job in Olde West Chester, OH
GE Aerospace is seeking a T&O Data Source and System Origination Leader to drive development in business data management and system integration. This role is critical in identifying and addressing gaps in business processes and systems of record, eliminating lean waste, and ensuring seamless data ingestion and cataloging to support operational excellence. The ideal candidate will act as a liaison between cross-functional teams, including Digital Technology (DT), Data Ingestion, and System of Record owners, to ensure requirements are met and updates are delivered on schedule.
Job Description
Roles and Responsibilities
Ownership: Lead initiatives to explore innovative solutions for data management and system integration challenges, driving continuous improvement and operational efficiency.
Burn Down of Business Process/System of Record Gap List: Identify, prioritize, and resolve gaps in business processes and systems of record to enhance data accuracy and accessibility.
Lean Waste Reduction:
* Eliminate motion waste related to manual data input.
* Minimize transportation waste caused by downloading and manually manipulating data.
Digital Technology Liaison: Collaborate with the DT team to ensure alignment on requirements, timelines, and updates.
Data Ingestion Team Liaison:
* Work closely with the Data Ingestion team to ensure new data is successfully integrated into the Data Operating System (DOS).
* Facilitate communication and coordination between teams to address ingestion challenges.
Data Cataloging and Business Process Relationship: Develop and maintain a comprehensive data catalog, ensuring alignment with business processes and driving data accessibility and usability.
Change Management and Break/Fix:
* Manage changes to base data and ingestion processes.
* Lead efforts to address and resolve data-related issues promptly.
Required Qualifications
* Bachelor's degree in Engineering, Data Science, Business Administration, or a related field.
* Minimum of 5 years of experience in data management, system integration, or business process improvement.
* Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job opening.
Desired Characteristics
* Proven experience in lean methodologies and waste reduction strategies.
* Strong project management skills with a track record of delivering results on time and within scope.
* Excellent communication and collaboration skills to act as a liaison between cross-functional teams.
* Familiarity with data ingestion processes, system of record management, and change management principles.
* Experience working with data cataloging tools and understanding their relationship to business processes.
* Demonstrated ability to lead cross-functional teams and drive alignment across diverse stakeholders.
* Strong analytical and problem-solving skills with a focus on continuous improvement.
* Knowledge of GE Aerospace's FLIGHT DECK operating model is a plus.
* Experience working in a fast-paced, dynamic environment with competing priorities.
* Ability to translate complex technical concepts into actionable business strategies.
This role requires access to U.S. export-controlled information. Therefore, employment will be contingent upon the ability to prove that you meet the status of a U.S. Person as one of the following: U.S. lawful permanent resident, U.S. Citizen, have been granted asylee or refugee status (i.e., a protected individual under the Immigration and Naturalization Act, 8 U.S.C. 1324b(a)(3)).
Additional Information
GE Aerospace offers a great work environment, professional development, challenging careers, and competitive compensation. GE Aerospace is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
GE Aerospace will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditioned upon the successful completion of a drug screen (as applicable).
Relocation Assistance Provided: Yes
Data Engineer III / US Citizen and GC Candidates ONLY / ONSITE Cincinnati
Data engineer job in Cincinnati, OH
Data Engineer III
TOP SKILLS:
Must Have
PowerBI
SQL
Tableau
What You'll Do
Business Intelligence Developer(s):
Proficiency in Power BI: Including data modeling, DAX, Power Query, and report/dashboard design.
SQL Knowledge: For data querying and manipulation.
Data Warehousing: Understanding principles like ETL and data modeling concepts.
JOB DESCRIPTION
GENERAL FUNCTION: A Power BI developer is responsible for designing, developing, and maintaining Business Intelligence (BI) solutions using Power BI. This includes building data models, creating reports and dashboards, and ensuring data accuracy and reliability. They also collaborate with stakeholders, translate business requirements into technical specifications, and manage the overall BI system.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
Data Modeling: Designing and implementing data models within Power BI to efficiently support business requirements and enable effective analysis.
Report and Dashboard Development: Creating interactive and visually appealing reports and dashboards using Power BI Desktop and the Power BI service.
Data Integration: Connecting to and integrating data from various sources, including databases, cloud services, and flat files.
Data Analysis and Visualization: Analyzing data to identify trends, patterns, and insights, and then effectively visualizing this information through charts, graphs, and other visualizations.
Performance Optimization: Optimizing Power BI solutions for performance, ensuring efficient data retrieval and report rendering.
Collaboration and Deployment: Collaborating with stakeholders, including business users and other IT teams, to gather requirements and deploy Power BI solutions.
Data Governance and Compliance: Ensuring data accuracy, security, and compliance with relevant regulations and policies.
DAX Calculations: Developing and utilizing DAX (Data Analysis Expressions) calculations to extend the functionality of Power BI and perform complex data analysis.
SQL Querying: Utilizing SQL for data retrieval and manipulation, often in conjunction with Power BI's data transformation capabilities (a short illustrative sketch follows this list).
Technical Documentation: Creating and maintaining technical documentation for Power BI solutions, including data models, report designs, and data lineage.
Troubleshooting and Support: Providing support and troubleshooting for Power BI solutions, addressing user issues and resolving technical problems.
Staying Updated: Keeping abreast of the latest Power BI features, updates, and best practices.
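As a concrete illustration of the SQL-querying duty above, here is a minimal sketch that pre-aggregates a source table and pulls the result into Python for staging ahead of a Power BI import. The driver name, server, database, and table are hypothetical assumptions.

```python
# Illustrative sketch: shaping a source query for a Power BI dataset.
# Server, database, and table names are hypothetical placeholders.
import pandas as pd
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=reporting-db;DATABASE=SalesDW;Trusted_Connection=yes;"
)

# Pre-aggregate in SQL so the Power BI model stays small and fast;
# the window function ranks products within each month.
QUERY = """
SELECT month_key,
       product_id,
       SUM(sales_amount) AS total_sales,
       RANK() OVER (PARTITION BY month_key
                    ORDER BY SUM(sales_amount) DESC) AS sales_rank
FROM fact_sales
GROUP BY month_key, product_id
"""

conn = pyodbc.connect(CONN_STR)
try:
    df = pd.read_sql(QUERY, conn)
finally:
    conn.close()

# df is now ready to land in a staging table or file that the
# Power BI data model imports via Power Query.
print(df.head())
```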
SUPERVISORY RESPONSIBILITIES: None
MUST HAVE SKILLS
· Power BI, 3-4 years
· Analytical
· SQL, 3-5 years
· Strong Communication Skills
· Able to run meetings
· Problem Solving and Critical Thinking
· Onsite in Cincinnati
NICE TO HAVE SKILLS/Experience
· DataStage
· Dataiku
· AFS system application (AFS Level III or Vision)
MINIMUM KNOWLEDGE, SKILLS, AND ABILITIES REQUIRED:
Bachelor's degree in Computer Science/Information Systems or equivalent combination of education and experience.
Strong communication skills- Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources in language that is appropriate for each group.
Excellent analytical and problem-solving skills when resolving data-related issues or designing new solutions.
Experienced in SDLC and can assist in Project Management. Able to understand release management protocols and manage important tasks for implementation.
Power BI Proficiency: Deep knowledge and experience with Microsoft Power BI, including:
Data Modeling: Designing and building data models within Power BI, understanding relationships, and optimizing performance.
DAX (Data Analysis Expressions): Writing complex DAX formulas to perform calculations, define measures, and create derived columns.
Power Query (M Language): Using Power Query to connect to various data sources, transform, and clean data.
Report & Dashboard Design: Creating visually appealing, user-friendly, and informative reports and dashboards that effectively communicate insights.
Data Visualization: Choosing appropriate chart types, designing layouts, and effectively presenting data through compelling visualizations.
SQL: Strong understanding of SQL for querying databases, extracting data, and data manipulation.
Data Warehousing Concepts: Familiarity with data warehousing principles, star schemas, snowflake schemas, and ETL processes (see the sketch after this list).
Data Sources: Experience connecting to and working with a variety of data sources (e.g., SQL Server, Excel, Azure SQL Database, cloud-based data sources).
Performance Optimization: Ability to identify and resolve performance issues in Power BI reports and data models.
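To ground the data warehousing concepts above, here is a minimal sketch of deriving a star-schema dimension and fact table from a flat extract in Python. The input file and column names are hypothetical assumptions.

```python
# Sketch: deriving a star-schema dimension and fact from a flat extract.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd

flat = pd.read_csv("orders_extract.csv")  # e.g., one row per order line

# Dimension: unique customers with a surrogate key.
dim_customer = (
    flat[["customer_id", "customer_name", "region"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
dim_customer["customer_key"] = dim_customer.index + 1  # surrogate key

# Fact: measures plus a foreign key referencing the dimension.
fact_sales = flat.merge(
    dim_customer[["customer_id", "customer_key"]], on="customer_id"
)[["customer_key", "order_date", "quantity", "sales_amount"]]

print(dim_customer.head(), fact_sales.head(), sep="\n")
```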
What You'll Get
Competitive base salary
Medical, dental, and vision insurance coverage
Optional life and disability insurance provided
401(k) with a company match and optional profit sharing
Paid vacation time
Paid Bench time
Training allowance
You'll be eligible to earn referral bonuses!
Senior Data Engineer (P3005)
Data engineer job in Cincinnati, OH
Who we are
As a full-stack data science subsidiary of The Kroger Company, we leverage over 10 petabytes of data to personalize the experience for 62 million households. We are seeking a hands-on Senior Data Engineer (G2) to design, build, and operate our analytical lakehouse on a modern data stack. As a senior individual contributor on a hybrid scrum team, you will partner closely with Product, Analytics, and Engineering to deliver scalable, high-quality data products on Databricks and Azure. You will contribute to technical direction, uphold engineering best practices, and remain deeply involved in coding, testing, and production operations, without people management responsibilities.
Key Responsibilities
* Data engineering delivery: Design, develop, and optimize secure, scalable batch and near-real-time pipelines on Databricks (PySpark/SQL) with Delta Lake and Delta Live Tables (DLT). Implement medallion architecture, Unity Catalog governance, and robust data quality checks (expectations/testing). Build performant data models and tables to power analytics, ML, and downstream applications (see the DLT sketch after this list).
* Product collaboration and agile execution: Translate business requirements into data contracts, schemas, and SLAs in partnership with Product and Analytics. Participate in backlog refinement, estimation, sprint planning, and retrospectives in a hybrid onshore/offshore environment. Deliver clear documentation (designs, runbooks, data dictionaries) to enable self-service and reuse.
* Reliability, observability, and operations: Implement monitoring, alerting, lineage, and cost/performance telemetry; troubleshoot and tune Spark jobs and storage. Participate in on-call/incident response rotations and drive post-incident improvements.
* CI/CD and infrastructure as code: Contribute to coding standards, code reviews, and reusable patterns/modules. Build and maintain CI/CD pipelines (GitHub Actions) and manage infrastructure with Terraform (data, compute, secrets, policies).
* Continuous improvement and knowledge sharing: Mentor peers informally, share best practices, and help evaluate and introduce modern tools and patterns.
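For illustration, here is a minimal Delta Live Tables (Python) sketch of the bronze-to-silver medallion flow with an expectation-based quality check, as mentioned above. The landing path, table names, and columns are hypothetical; `spark` is provided by the DLT runtime.

```python
# Minimal DLT sketch: bronze -> silver medallion flow with a data quality
# expectation. Source path and column names are hypothetical placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw JSON ingested as-is via Auto Loader.")
def bronze_orders():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/orders")  # placeholder landing path
    )

@dlt.table(comment="Silver: typed, de-duplicated orders.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def silver_orders():
    return (
        dlt.read_stream("bronze_orders")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )
```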
Required Qualifications
* Experience: 4-6 years in data engineering; 1-2 years operating as a senior/lead individual contributor on delivery-critical projects. Proven track record delivering production-grade pipelines and data products on cloud platforms.
* Core technical skills:
- Databricks: 2-3+ years with Spark (PySpark/SQL); experience building and operating DLT pipelines and Delta Lake.
- Azure: Proficiency with ADLS Gen2, Entra ID (Azure AD), Key Vault, Databricks on Azure, and related services.
- Languages and tools: Expert-level SQL and strong Python; Git/GitHub, unit/integration/data testing, and performance tuning (see the tuning sketch after this list).
- Infrastructure as code: Hands-on Terraform for data platform resources and policies.
- Architecture: Solid understanding of medallion and dimensional modeling, data warehousing concepts, and CI/CD best practices.
* Collaboration and communication: Excellent communicator with the ability to work across Product, Analytics, Security, and Platform teams in an agile setup
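As a sketch of the performance-tuning work referenced above, here are two routine Delta Lake maintenance steps plus a common Spark join optimization, assuming a Databricks cluster. The table names are hypothetical placeholders.

```python
# Sketch: routine Delta maintenance and a common Spark tuning step.
# Table names are placeholders; assumes a Databricks cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows for selective queries.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Remove data files no longer referenced (default retention: 7 days).
spark.sql("VACUUM sales.orders")

# Avoid a shuffle-heavy join against a small dimension via a broadcast hint.
orders = spark.table("sales.orders")
dim = spark.table("sales.dim_customer")
joined = orders.join(broadcast(dim), "customer_id")
joined.explain()  # verify BroadcastHashJoin appears in the physical plan
```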
Preferred qualifications
* Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience)
* Azure or Databricks certifications (e.g., Data Engineer Associate/Professional)
* Experience with ELT tools (e.g., Fivetran), Snowflake, and streaming (Event Hubs, Kafka)
* Familiarity with AI-ready data practices and AI developer tools (e.g., GitHub Copilot)
* Exposure to FinOps concepts and cost/performance optimization on Databricks and Azure
The opportunity
* Build core data products that power personalization for millions of customers at enterprise scale
* Work with modern tooling (Databricks, Delta Lake/DLT, Unity Catalog, Terraform, GitHub Actions) in a collaborative, growth-minded culture
* Hybrid work, competitive compensation, comprehensive benefits, and clear paths for advancement
PLEASE NOTE:
Applicants for employment in the US must have work authorization that does not now, or in the future, require sponsorship of a visa for employment authorization in the United States with the Kroger Family of Companies (e.g., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status).
#LI-SSS
Data Engineer Level 3
Data engineer job in Cincinnati, OH
For over half a decade, Hudson Manpower has been a trusted partner in delivering specialized talent and technology solutions across IT, Energy, and Engineering industries worldwide. We work closely with startups, mid-sized firms, and Fortune 500 clients to support their digital transformation journeys. Our teams are empowered to bring fresh ideas, shape innovative solutions, and drive meaningful impact for our clients. If you're looking to grow in an environment where your expertise is valued and your voice matters, then Hudson Manpower is the place for you. Join us and collaborate with forward-thinking professionals who are passionate about building the future of work.
Core Responsibilities
Design, develop, and optimize scalable ELT/ETL data pipelines leveraging Azure Synapse, Databricks, and PySpark.
Build and manage data lakehouse architectures for high-volume eCommerce, marketing, and behavioral datasets.
Model complex event-level data (clickstream, transactions, campaign interactions) to power dashboards, A/B testing, ML models, and marketing activation (see the sessionization sketch after this list).
Implement Delta Lake optimization and Databricks Workflows for performance and reliability.
Partner with architects and business analysts to deliver reusable, high-quality data models for BI and self-service analytics.
Ensure data lineage, governance, and compliance using Unity Catalog and tools like Alation.
Collaborate with data scientists to productionize and operationalize datasets for ML and predictive analytics.
Validate and reconcile behavioral data with Adobe Analytics and Customer Journey Analytics (CJA) to ensure accuracy.
Maintain semantic models for Power BI dashboards, supporting self-service and advanced analytical insights.
Actively participate in Agile delivery: sprint planning, backlog refinement, technical documentation, and code reviews.
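To make the event-level modeling responsibility above concrete, here is a minimal PySpark sketch that sessionizes clickstream events with window functions and writes the result to a Delta table. The paths, table names, and the 30-minute session gap are illustrative assumptions.

```python
# Sketch: sessionizing event-level clickstream data with PySpark window
# functions and writing the result to Delta. Paths, names, and the
# 30-minute (1800 s) gap are illustrative placeholders.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("/mnt/silver/clickstream")  # placeholder

w = Window.partitionBy("user_id").orderBy("event_ts")
sessions = (
    events
    .withColumn("prev_ts", F.lag("event_ts").over(w))
    # Flag a new session when the gap since the previous event exceeds 30 min.
    .withColumn(
        "new_session",
        (F.col("event_ts").cast("long") - F.col("prev_ts").cast("long") > 1800)
        .cast("int"),
    )
    .fillna({"new_session": 1})  # first event per user starts a session
    .withColumn("session_seq", F.sum("new_session").over(w))  # running count
    .withColumn("session_id", F.concat_ws("-", "user_id", "session_seq"))
)

sessions.write.format("delta").mode("overwrite").saveAsTable("gold.sessions")
```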
Job requirements
Required Qualifications
7+ years of experience in data engineering, data architecture, or similar roles.
Expertise in Databricks, Azure Synapse, Delta Lake, and Spark-based data processing.
Strong SQL proficiency for data transformation and performance tuning.
Deep experience with modern data architectures (Lakehouse, ELT, streaming).
Proven track record in handling behavioral/eCommerce datasets and analytics use cases.
Hands-on experience with Unity Catalog and metadata governance tools (e.g., Alation).
Experience with Adobe Analytics / CJA data validation preferred.
Familiarity with Power BI, Python, and data orchestration tools (ADF, Airflow).
Strong communication skills with the ability to bridge business needs and engineering solutions.
Experience in Agile environments with proven leadership in technical delivery.