
Data engineer jobs in Florence, KY

- 470 jobs
  • Sr Data Engineer

    CBTS (4.9 company rating)

    Data engineer job in Cincinnati, OH

    Must have: Power BI, Snowflake, SQL. Nice to have: Python and/or SAS. Notes: Data Engineer with a few years of experience with SQL, dbt, Snowflake, and Power BI. In person in Cincinnati.

    Primary Responsibilities:
    • Design, construct, install, test, and maintain data management systems.
    • Build high-performance algorithms, predictive models, and prototypes.
    • Ensure that all systems meet business/company requirements as well as industry practices.
    • Integrate up-and-coming data management and software engineering technologies into existing data structures.
    • Develop set processes for data mining, data modeling, and data production.
    • Create custom software components and analytics applications.
    • Research new uses for existing data.
    • Employ an array of technological languages and tools to connect systems together.
    • Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
    • Install/update disaster recovery procedures.
    • Recommend ways to continuously improve data reliability and quality.

    Qualifications:
    • Technical degree or related work experience
    • Experience with non-relational and relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, etc.)
    • Experience programming and/or architecting in a back-end language (Java, J2EE, etc.)
    $67k-95k yearly est. 3d ago
  • Senior Data Engineer

    Golden Technology (4.4 company rating)

    Data engineer job in Cincinnati, OH

    The team is seeking a Data Engineer experienced in implementing modern data solutions in Azure, with strong hands-on skills in Databricks, Spark, Python, and cloud-based DataOps practices. The Data Engineer will analyze, design, and develop data products, pipelines, and information architecture deliverables, focusing on data as an enterprise asset. This role also supports cloud infrastructure automation and CI/CD using Terraform, GitHub, and GitHub Actions to deliver scalable, reliable, and secure data solutions.

    Requirements:
    • 5+ years of experience as a Data Engineer
    • Hands-on experience with Azure Databricks, Spark, and Python
    • Experience with Delta Live Tables (DLT) or Databricks SQL
    • Strong SQL and database background
    • Experience with Azure Functions, messaging services, or orchestration tools
    • Familiarity with data governance, lineage, or cataloging tools (e.g., Purview, Unity Catalog)
    • Experience monitoring and optimizing Databricks clusters or workflows
    • Experience working with Azure cloud data services and understanding how they integrate with Databricks and enterprise data platforms
    • Experience with Terraform for cloud infrastructure provisioning
    • Experience with GitHub and GitHub Actions for version control and CI/CD automation
    • Strong understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior)
    • Familiarity with the SDLC and modern engineering practices
    • Ability to balance multiple priorities, work independently, and stay organized

    Key Responsibilities:
    • Analyze, design, and develop enterprise data solutions with a focus on Azure, Databricks, Spark, Python, and SQL
    • Develop, optimize, and maintain Spark/PySpark data pipelines, including managing performance issues such as data skew, partitioning, caching, and shuffle optimization
    • Build and support Delta Lake tables and data models for analytical and operational use cases
    • Apply reusable design patterns, data standards, and architecture guidelines across the enterprise, including collaboration when needed
    • Use Terraform to provision and manage cloud and Databricks resources, supporting Infrastructure as Code (IaC) practices
    • Implement and maintain CI/CD workflows using GitHub and GitHub Actions for source control, testing, and pipeline deployment
    • Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts
    • Troubleshoot failures and improve reliability across Databricks jobs, clusters, and data pipelines
    • Apply cloud computing skills to deploy fixes, upgrades, and enhancements in Azure environments
    • Work closely with engineering teams to enhance tools, systems, development processes, and data security
    • Participate in the development and communication of data strategy, standards, and roadmaps
    • Draft architectural diagrams, interface specifications, and other design documents
    • Promote the reuse of data assets and contribute to enterprise data catalog practices
    • Deliver timely and effective support and communication to stakeholders and end users
    • Mentor team members on data engineering principles, best practices, and emerging technologies
    $74k-100k yearly est. 4d ago
  • Data Architect

    Optech (4.6 company rating)

    Data engineer job in Cincinnati, OH

    THIS IS A W2 (NOT C2C OR REFERRAL-BASED) CONTRACT OPPORTUNITY. MOSTLY REMOTE, WITH 1 DAY/MONTH ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE. RATE: $75-85/HR WITH BENEFITS.

    We are seeking a highly skilled Data Architect to work in a consulting capacity to analyze, redesign, and optimize a medical payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.

    Responsibilities:
    • Design and maintain scalable, secure, and high-performing data architectures.
    • Lead migration and modernization projects in heavily used production systems.
    • Develop and optimize data models, schemas, and integration strategies.
    • Implement data governance, security, and compliance standards.
    • Collaborate with business stakeholders to translate requirements into technical solutions.
    • Ensure data quality, consistency, and accessibility across systems.

    Required Qualifications:
    • Bachelor's degree in Computer Science, Information Systems, or a related field.
    • Proven experience as a Data Architect or in a similar role.
    • Strong proficiency in SQL (query optimization, stored procedures, indexing).
    • Hands-on experience with Azure cloud services for data management and analytics.
    • Knowledge of data modeling, ETL processes, and data warehousing concepts.
    • Familiarity with security best practices and compliance frameworks.

    Preferred Skills:
    • Understanding of Electronic Health Records systems.
    • Understanding of big data technologies and modern data platforms outside the scope of this project.
    $75-85 hourly 1d ago
  • Senior Data Engineer

    Vista Applied Solutions Group Inc. (4.0 company rating)

    Data engineer job in Cincinnati, OH

    Data Engineer III

    About the Role: We're looking for a Data Engineer III to play a key role in a large-scale data migration initiative within the client's commercial lending, underwriting, and reporting areas. This is a hands-on engineering role that blends technical depth with business analysis, focused on transforming legacy data systems into modern, scalable pipelines.

    What You'll Do:
    • Analyze legacy SQL, DataStage, and SAS code to extract business logic and identify key data dependencies.
    • Document current data usage and evaluate the downstream impact of migrations.
    • Design, build, and maintain data pipelines and management systems to support modernization goals.
    • Collaborate with business and technology teams to translate requirements into technical solutions.
    • Improve data quality, reliability, and performance across multiple environments.
    • Develop backend solutions using Python, Java, or J2EE, and integrate with tools like DataStage and dbt.

    What You Bring:
    • 5+ years of experience with relational and non-relational databases (SQL, Snowflake, DB2, MongoDB).
    • Strong background in legacy system analysis (SQL, DataStage, SAS).
    • Experience with Python or Java for backend development.
    • Proven ability to build and maintain ETL pipelines and automate data processes.
    • Exposure to AWS, Azure, or GCP.
    • Excellent communication and stakeholder engagement skills.
    • Financial domain experience, especially commercial lending or regulatory reporting, is a big plus.
    • Familiarity with Agile methodologies preferred.
    $74k-97k yearly est. 1d ago
  • Junior Data Engineer

    Agility Partners (4.6 company rating)

    Data engineer job in Cincinnati, OH

    Agility Partners is seeking a qualified Junior Data Engineer to fill an open position with one of our clients. This is an exciting opportunity for an early-career professional to build real-world data skills in a supportive, fast-paced environment. You'll work closely with senior engineers and analysts, primarily using SQL to query, clean, and prepare data that powers reporting and analytics.

    Responsibilities:
    • Assist with writing and optimizing SQL queries to support reporting and ad-hoc data requests
    • Help create basic database objects (tables, views) and maintain data dictionaries and documentation
    • Support routine data quality checks (duplicates, nulls, referential integrity) and simple SQL-based transformations
    • Participate in loading data into staging tables and preparing datasets for downstream use
    • Troubleshoot query issues (e.g., incorrect results, slow performance) with guidance from senior engineers
    • Collaborate with analytics teams to validate results and ensure datasets meet business needs

    The Ideal Candidate:
    • Foundational SQL proficiency: comfortable writing basic queries and joins; curiosity to learn window functions and indexing
    • Understanding of relational database concepts (keys, normalization vs. denormalization)
    • 0-2 years of professional experience (or internship/capstone/bootcamp projects); recent grads welcome
    • Detail-oriented, coachable, and comfortable asking questions and working through feedback
    • Able to document work clearly and communicate findings to non-technical stakeholders
    • Bonus (not required): exposure to Excel/Google Sheets or a BI tool (Power BI/Tableau), and interest in learning simple ETL concepts

    Reasons to Love It:
    • Learn directly from experienced data engineers while working on meaningful, production datasets
    • Clear growth path from SQL fundamentals to broader data engineering skills over time
    • Supportive team culture that values curiosity, reliability, and steady skill development
    $93k-124k yearly est. 4d ago
  • Data Architect

    Costrategix (3.7 company rating)

    Data engineer job in Blue Ash, OH

    Since 2006, CoStrategix has defined and implemented digital transformation initiatives, data and analytics capabilities, and digital commerce solutions for Fortune 500 and mid-market customers. CoStrategix provides thought leadership, strategy, and comprehensive end-to-end technology execution to help organizations transform and stay competitive in today's digital world. As a Gixer (employee) at CoStrategix, you will have broad exposure to diverse industries and technologies. You will work on leading-edge digital projects in areas of Data Engineering, Data Governance, Data Strategy, AI, and Cloud. Gixers operate at the leading edge of technologies, and our projects require compelling human interfaces and modern data platforms. This role is based at our culture hub in Blue Ash, Ohio.

    About this role: As a Data Architect at CoStrategix, you will define, orchestrate, and implement modern data platforms and architectures. This role is about understanding the current state of data ecosystems, mapping existing data flows and structures, creating an architectural blueprint, and then implementing data strategies and governance frameworks in rapid cycles. In this role, you will provide the following:

    Strategic & Consultative Responsibilities:
    • Act as a trusted data advisor to client stakeholders, clearly communicating trade-offs, guiding decision-making, and influencing the adoption of modern data practices.
    • Lead stakeholder interviews and working sessions to elicit requirements, clarify use cases, and align on priorities, scope, and success metrics.
    • Create phased data roadmaps with clear milestones, dependencies, and value outcomes (e.g., time-to-insight, cost reduction, risk reduction) and track progress against them.
    • Provide architectural input into the scoping and pricing of data engagements; ensure solutions balance value, risk, and cost, and support delivery teams in staying aligned to scope and architectural guardrails.
    • Work closely with sales and account teams to understand customer objectives and translate them into practical, scalable data architecture and solution designs.
    • Participate in pre-sales engagements, discovery workshops, proposal development, client presentations, and proof-of-concept activities to showcase solution feasibility and value.

    Data Governance, Quality & Operating Model:
    • Bring consultative competencies around data governance and data quality, helping clients define guiding principles, policies, and operating models.
    • Lead development of comprehensive data strategies for clients that align with their business priorities.
    • Design processes for metadata management, lineage tracking, and master data management (MDM), including stewardship roles and workflows.
    • Establish and maintain data quality standards, metrics, and monitoring processes to ensure accurate, complete, and timely data across critical domains.
    • Develop semantic models and curated datasets, and guide adoption of data cataloging and data literacy programs.

    Enterprise & Solution Architecture:
    • Design and maintain conceptual, logical, and physical data architectures to support enterprise, analytical, and operational systems.
    • Assess and recommend data platforms, cloud services, and emerging technologies to meet business needs, while collaborating with Cloud, DevOps, and Security Architects to ensure architectural alignment.
    • Partner with Data Analysts, BI Developers, and Data Scientists to ensure data architectures enable analytics, visualization, and AI/ML initiatives.
    • Define non-functional requirements (performance, scalability, resilience, cost, security, and compliance) for data solutions, and ensure they are addressed in the architecture and design.
    • Maintain architecture decision records, reference architectures, and reusable patterns; define and promote standards and best practices for data modeling, integration, and consumption across teams.

    Implementation, Delivery & Enablement:
    • Lead the implementation of scalable, secure, and high-performing data and transformation frameworks that unify data across platforms and enable real-time, batch, and event-driven use cases.
    • Define and enforce data design standards, patterns, and best practices during implementation to ensure consistency, maintainability, and performance.
    • Mentor and coach engineering and analytics teams in data design principles, governance frameworks, and architectural discipline to ensure consistency and quality in delivery.

    Qualifications:
    • Bachelor's degree in Math, Statistics, Computer Science, Information Technology, or a related field
    • 8+ years of experience in data management and architecture roles
    • 3 to 5 years of leading data strategy, governance, or modernization efforts
    • 3 to 5 years of pre-sales, client solutioning, and/or consulting engagement in data management
    • Experience designing and implementing modern data architectures
    • Current understanding of best practices regarding data security, governance, and regulatory compliance
    • Experience in data modeling, data engineering, and analytics platform architecture
    • Experience with data engineering tooling such as Databricks, Snowflake, Synapse, BigQuery, Kafka, and dbt
    • Experience with software development, DevOps best practices, and automation methodologies
    • Excellent leadership and negotiation skills, necessary to work effectively with colleagues at various levels of the organization and across multiple locations
    • Ability to communicate complex issues crisply and concisely to various levels of management
    • Coaching and mentoring skills, with the ability to adapt to all levels of the organization
    • Strong collaboration skills and excellent verbal and written communication skills

    About CoStrategix: We make CoStrategix an awesome place to work, offering a total rewards package that includes comprehensive benefits starting on day one. Benefits include medical, dental, vision, disability, and life insurance, as well as an EAP and 401(k) retirement plan. We are a flexible hybrid workplace committed to a culture of curiosity, collaboration, learning, self-improvement, and, above all, fun. We have been named a finalist for the Cincinnati Business Courier's Best Places to Work Awards for 4 consecutive years.

    Do the Right Thing. Always. At CoStrategix, we are passionate about our core values, and diversity, equity & inclusion (DE&I) are part of them. Every Gixer (employee) has an opportunity for success regardless of their race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Creating an environment where everyone, from any background, can do their best work is the right thing to do.
    $75k-104k yearly est. 1d ago
  • Senior Business Data Architect

    Vernovis (4.0 company rating)

    Data engineer job in Cincinnati, OH

    Job Title: Senior Business Data Architect

    Who we are: Vernovis is a Total Talent Solutions company that specializes in Technology, Cybersecurity, and Finance & Accounting functions. At Vernovis, we help these professionals achieve their career goals, matching them with innovative projects and dynamic direct-hire opportunities in Ohio and across the Midwest.

    Client Overview: Vernovis is partnering with a fast-paced manufacturing company that is looking to hire a Sr. Data Manager. This is a great opportunity for an experienced Snowflake professional to elevate their career, leading a team and designing the architecture of the data warehouse. If interested, please email Wendy Kolkmeyer at ***********************

    What You'll Do:
    • Architect and optimize the enterprise data warehouse using Snowflake.
    • Develop and maintain automated data pipelines with Fivetran or similar ETL tools.
    • Design and enhance dbt data models to support analytics, reporting, and operational decision-making.
    • Oversee and improve Power BI reporting, ensuring data is accurate, accessible, and actionable for business users.
    • Establish and enforce enterprise data governance, standards, policies, and best practices.
    • Collaborate with business leaders to translate requirements into scalable, high-quality data solutions.
    • Enable advanced analytics and AI/ML initiatives through proper data structuring and readiness.
    • Drive cross-functional alignment, communication, and stakeholder engagement.
    • Lead, mentor, and develop members of the data team.
    • Ensure compliance, conduct system audits, and maintain business continuity plans.

    What Experience You'll Have:
    • 7+ years of experience in data architecture, data engineering, or enterprise data management.
    • Expertise in Snowflake, Fivetran (or similar ETL tools), dbt, and Power BI.
    • Strong proficiency in SQL and modern data architecture principles.
    • Proven track record in data governance, modeling, and data quality frameworks.
    • Demonstrated experience leading teams and managing complex data initiatives.
    • Ability to communicate technical concepts clearly and collaborate effectively with business stakeholders.

    What Is Nice to Have:
    • Manufacturing experience

    Vernovis does not accept inquiries from corp-to-corp recruiting companies. Applicants must be currently authorized to work in the United States on a full-time basis and must not violate any immigration or discrimination laws. Vernovis provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
    $90k-124k yearly est. 1d ago
  • AI/ML Engineer

    Vaco By Highspring

    Data engineer job in Monroe, OH

    Machine Learning / AI Engineer

    Vaco is partnering with a growing organization near Monroe, Ohio to hire a Machine Learning / AI Engineer for a direct-hire, hybrid role. This position offers the opportunity to work on impactful, real-world AI solutions within a logistics-focused environment, collaborating closely with cross-functional teams to drive efficiency and innovation.

    What You'll Do:
    • Design, train, and deploy machine learning models supporting demand forecasting, routing optimization, and network efficiency
    • Integrate AI and ML solutions with existing logistics platforms, sensors, and data pipelines
    • Partner with cross-functional teams to translate business challenges into scalable, production-ready solutions
    • Build and maintain data processing, feature engineering, and model evaluation pipelines using modern ML frameworks
    • Monitor model performance, retrain systems, and drive continuous improvement through feedback loops
    • Ensure AI systems adhere to enterprise standards for security, scalability, and data governance
    • Stay current on advancements in machine learning, optimization algorithms, and generative AI relevant to logistics and operations

    Required Skills:
    • Bachelor's degree in Computer Science, Software Engineering, IT, or equivalent practical experience
    • 5+ years of experience in ML/AI-focused roles with strong foundations in statistical learning
    • Deep expertise in generative AI, NLP, and simulation techniques applied to operational or logistics challenges
    • Proficiency in Python and common ML frameworks (TensorFlow, PyTorch, scikit-learn)
    • Experience with data engineering tools and platforms such as SQL, Spark, Databricks, and Airflow
    • Familiarity with cloud environments (AWS or Azure) and MLOps practices for model deployment and monitoring
    • Strong ability to analyze large, complex datasets to uncover actionable insights and automation opportunities
    • Proven problem-solving skills with an emphasis on scalability and real-world application
    • Excellent communication and collaboration skills

    About the Role:
    • Direct-hire opportunity with a stable, forward-thinking organization
    • Hybrid work environment offering flexibility
    • High-impact role focused on applied AI and operational optimization

    Desired Skills and Experience: AI, ML, Machine Learning, Artificial Intelligence, AI/ML, Natural Language Processing, NLP, Python, AWS, Cloud, Azure, Amazon Web Services, SQL, Spark, Databricks, Airflow, MLOps, TensorFlow, PyTorch, Scikit-learn, Generative AI, Data, AI/ML Engineer, AI Engineer, ML Engineer
    $62k-82k yearly est. 3d ago
  • Software Engineer

    Insight Global

    Data engineer job in Cincinnati, OH

    Role: Software Engineer 1
    Pay Rate: $25-33/hr

    Must Haves:
    • 1-3+ years of software development experience
    • Ability to quickly learn and apply new programming languages
    • Proficiency in HTML5, CSS, and GitHub
    • .NET/C# development

    Pluses:
    • Experience with Java, Java Spring, Docker, and Kubernetes
    • Familiarity with REST APIs
    • Knowledge of TypeScript and front-end technologies
    • Knowledge of ATM terminal driving and testing
    • Experience with ATM software for Diebold, NCR, and Hyosung

    Day-to-Day: A large financial organization is seeking a Software Engineer I who will sit onsite in Cincinnati, Ohio. Our client is currently modernizing their ATM channel (both hardware and software), which you will be involved in, as well as supporting a major debit card modernization effort.

    Responsibilities:
    • Participate in the software development life cycle, from requirements gathering to deployment and maintenance.
    • Learn and apply new programming languages and technologies as needed to support the ATM modernization.
    • Design and implement custom user interfaces using HTML5 and CSS.
    • Develop and maintain scripts using Windows Batch, shell scripting, VB, .NET, and C#.
    • Collaborate with team members and external vendors/partners to ensure project success.
    • Manage code repositories and collaborate using GitHub.
    $25-33 hourly 3d ago
  • Data Scientist

    Procter & Gamble (4.8 company rating)

    Data engineer job in Cincinnati, OH

    Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), we want you on our team at P&G.

    As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will be expected to constructively research, design, and customize algorithms tailored to various problems and data types. Utilizing your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will directly contribute to the development of scalable Data Science algorithms and collaborate with Data and Software Engineering teams to productionize these solutions. Your technical knowledge will empower you to apply exploratory data analysis, feature engineering, and model building on massive datasets, delivering accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.

    Key Responsibilities:
    • Algorithm Design & Development: Directly contribute to the design and development of scalable Data Science algorithms.
    • Collaboration: Work closely with Data and Software Engineering teams to effectively productionize algorithms.
    • Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building.
    • Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
    • Expertise Development: Become a known expert in one or multiple Data Science techniques and methodologies.

    Required Qualifications:
    • Education: Pursuing or has graduated with a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.) or equivalent work experience.
    • Technical Skills: Proficient in programming languages such as Python and familiar with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas.
    • Communication: Strong written and verbal communication skills, with the ability to influence others to take action.

    Preferred Qualifications:
    • Analytic Methodologies: Experience applying analytic methodologies such as machine learning, optimization, and simulation to real-world problems.
    • Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
    • Data Handling: Experience with large datasets and cloud computing platforms such as GCP or Azure.
    • DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.

    Compensation for roles at P&G varies depending on a wide array of non-discriminatory factors, including but not limited to the specific office location, role, degree/credentials, relevant skill set, and level of relevant experience. At P&G, compensation decisions are dependent on the facts and circumstances of each case. Total rewards at P&G include salary + bonus (if applicable) + benefits. Your recruiter may be able to share more about our total rewards offerings and the specific salary range for the relevant location(s) during the hiring process.

    We are committed to providing equal opportunities in employment. We value diversity and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Immigration sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G along with other work authorization FAQs, please click HERE (******************************************************* . Procter & Gamble participates in E-Verify as required by law. Qualified individuals will not be disadvantaged based on being unemployed. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

    Job Schedule: Full time
    Job Number: R000135859
    Job Segmentation: Entry Level
    Starting Pay / Salary Range: $85,000.00 - $115,000.00 / year
    $85k-115k yearly 60d+ ago
  • ETL Architect

    Scadea Solutions

    Data engineer job in Cincinnati, OH

    Job Title: ETL Architect
    Duration: 18 months
    Years of Experience: 7-10
    Interview Type: Phone screen to hire

    Required Skills:
    • Experience with DataStage and ETL design
    • Requirement gathering; converting business requirements into technical specs
    • Hands-on work on a minimum of 2 projects with DataStage
    • Understanding of the process of developing an ETL design that supports multiple DataStage developers
    • Ability to create an ETL design framework and related specifications for use by ETL developers
    • Ability to define standards and best practices for DataStage ETL to be followed by all DataStage developers
    • Understanding of data warehouse and data mart concepts, with implementation experience
    • Ability to review code produced to ensure conformance with the developed ETL framework and design for reuse
    • Preferably, experienced user-level competency in IBM's metadata product, DataStage, and the InfoSphere product line
    • Ability to design ETL for Oracle, SQL Server, or any database
    • Good analytical skills and process design
    • Ensuring compliance with quality standards and delivery timelines

    Qualifications: Bachelor's degree

    Additional Information / Job Description:
    • Performs highly complex application programming/systems development and support
    • Performs highly complex configuration of business rules and technical parameters of software products
    • Reviews business requirements and develops application design documentation
    • Builds technical components (Maximo objects, TRM rules, Java extensions, etc.) based on detailed design
    • Performs unit testing of components along with completing necessary documentation
    • Supports product test, user acceptance test, etc., as a member of the fix-it team
    • Employs consistent measurement techniques
    • Includes testing in project plans and establishes controls to require adherence to test plans
    • Manages the interrelationships among various projects or work objectives
    $86k-113k yearly est. 1d ago
  • Principal Data Scientist

    Maximus (4.3 company rating)

    Data engineer job in Cincinnati, OH

    Description & Requirements We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team. You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes. This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work.) This position requires occasional travel to the DC area for client meetings. Essential Duties and Responsibilities: - Make deep dives into the data, pulling out objective insights for business leaders. - Initiate, craft, and lead advanced analyses of operational data. - Provide a strong voice for the importance of data-driven decision making. - Provide expertise to others in data wrangling and analysis. - Convert complex data into visually appealing presentations. - Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners. - Understand the importance of automation and look to implement and initiate automated solutions where appropriate. - Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects. - Utilize various languages for scripting and write SQL queries. 
Serve as the primary point of contact for data and analytical usage across multiple projects. - Guide operational partners on product performance and solution improvement/maturity options. - Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization. - Learn new skills in advanced analytics/AI/ML tools, techniques, and languages. - Mentor more junior data analysts/data scientists as needed. - Apply a strategic approach to lead projects from start to finish; Job-Specific Minimum Requirements: - Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation. - Contribute to and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital. - Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning. - Contribute to the development of mathematically rigorous process improvement procedures. - Maintain current knowledge and evaluation of the AI technology landscape and emerging developments and their applicability for use in production/operational environments. Minimum Requirements - Bachelor's degree in related field required. - 10-12 years of relevant professional experience required. Job-Specific Minimum Requirements: - 10+ years of relevant Software Development + AI / ML / DS experience. - Professional Programming experience (e.g. Python, R, etc.). - Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML. - Experience with API programming. - Experience with Linux. - Experience with Statistics. 
- Experience with Classical Machine Learning. - Experience working as a contributor on a team. Preferred Skills and Qualifications: - Master's or BS in a quantitative discipline (e.g. Math, Physics, Engineering, Economics, Computer Science, etc.). - Experience developing machine learning or signal processing algorithms: - Ability to leverage mathematical principles to model new and novel behaviors. - Ability to leverage statistics to identify true signals from noise or clutter. - Experience working as an individual contributor in AI. - Use of state-of-the-art technology to solve operational problems in AI and Machine Learning. - Strong knowledge of data structures, common computing infrastructures/paradigms (standalone and cloud), and software engineering principles. - Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off-the-shelf solutions. - Ability to build reference implementations of operational AI & Advanced Analytics processing solutions. Background Investigations: - IRS MBI - Eligibility #techjobs #VeteransPage EEO Statement Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics. Pay Transparency Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. 
Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances. Accommodations Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************. Minimum Salary $ 156,740.00 Maximum Salary $ 234,960.00
    $69k-98k yearly est. Easy Apply 1d ago
  • Senior Informatica Data Engineer - Pharmacy & Healthcare Data

    Kroger 4.5company rating

    Data engineer job in Blue Ash, OH

We are seeking a Senior Informatica Data Engineer with extensive experience in designing and supporting Informatica-based ETL solutions within regulated healthcare environments. This role demands hands-on technical proficiency, ownership of deliverables, and close collaboration with pharmacy operations, security, infrastructure, and analytics teams. Accountable for developing and delivering technological responses to targeted business outcomes. Analyze, design and develop enterprise data and information architecture deliverables, focusing on data as an asset for the enterprise. Understand and follow reusable standards, design patterns, guidelines, and configurations to deliver valuable data and information across the enterprise, including direct collaboration with 84.51, where needed. Demonstrate the company's core values of respect, honesty, integrity, diversity, inclusion and safety. Minimum - Bachelor's Degree in computer science, software engineering, or related field - Any experience in a minimum of two of the following technical disciplines: data warehousing, big data management, analytics development, data science, application programming interfaces (APIs), data integration, cloud, servers and storage, and database management - 4+ years of successful and applicable experience building complex data solutions that have been successfully delivered to customers - A 4+ year proven track record of delivering large-scale, high-quality operational or analytical data systems - 4+ years of experience in data development and principles, including end-to-end design patterns - Excellent oral/written communication skills Desired - Any experience with SSAS Tabular models, Power BI, Dataflows and DAX - Any experience building solutions using elastic architectures (preferably Microsoft Azure and Google Cloud Platform) - Any experience with a variety of SQL, NoSQL and Big Data Platforms - Any experience with data science solutions or platforms - Any experience with streaming 
technologies like Kafka, IBM MQ and EventHub - Any experience with Python, Spark and SQL - Any experience with Azure Data Platform stack: Azure Data Lake, Data Factory and Databricks - Direct experience with pharmacy benefit management (PBM) or payer systems. - Familiarity with data governance, data lineage, and audit requirements. - Experience modernizing legacy Informatica solutions or migrating to cloud platforms. - Exposure to compliance frameworks beyond HIPAA (e.g., SOC 2, HITRUST). - Exposure to cloud-based data platforms (e.g., Snowflake). - Proven experience working with HIPAA-regulated healthcare or pharmacy data. Practical knowledge of healthcare file formats: 834, HL7, FHIR, or similar pharmacy/government reporting formats - Utilize enterprise standards for data domains and data solutions, focusing on simplified integration and streamlined operational and analytical uses - Ensure there is clarity between ongoing projects, escalating when necessary, including direct collaboration with 84.51 - Leverage innovative new technologies and approaches to renovate, extend, and transform the existing core data assets, including SQL-based, NoSQL-based, and Cloud-based data platforms - Define high-level migration plans to address the gaps between the current and future state - Contribute to the development of cost/benefit analysis for leadership to shape sound architectural decisions - Analyze technology environments to detect critical deficiencies and recommend solutions for improvement - Promote the reuse of data assets, including the management of the data catalog for reference - Draft architectural diagrams, interface specifications and other design documents - Must be able to perform the essential job functions of this position with or without reasonable accommodation
    $93k-116k yearly est. Auto-Apply 5d ago
  • Junior Data Scientist

    Medpace 4.5company rating

    Data engineer job in Cincinnati, OH

The Medpace Analytics and Business Intelligence team is growing rapidly and is focused on building a data-driven culture across the enterprise. The BI team uses data and insights to drive increased strategic and operational efficiencies across the organization. As a Business Intelligence Analyst, you will hold a highly visible analytical role that requires interaction and partnership with leadership across the Medpace organization. What's in this for you? * Work in a collaborative, fast-paced, entrepreneurial, and innovative workplace; * Gain experience and exposure to advanced BI concepts from visualization to data warehousing; * Grow business knowledge by working with leadership across all aspects of Medpace's business. Responsibilities What's involved? We are looking for a Junior Business Intelligence Analyst to add depth to our growing Analytical team in a variety of areas - from Visualization and Storytelling to SQL, Data Modeling, and Data Warehousing. This role will work in close partnership with leadership, product management, operations, finance, and other technical teams to find opportunities to improve and expand our business. An ideal candidate in this role will apply great analytical skills, communication skills, and problem-solving skills to continue developing our analytics & BI capabilities. We are looking for team members who thrive when working with complex data sets and conducting deep data analysis, are intensely curious, and enjoy designing and developing long-term solutions. What you bring to the table - and why we need you! 
* Data Visualization skills - Designing and developing key metrics, reports, and dashboards to drive insights and business decisions to improve performance and reduce costs; * Technical Skills - either experience in, or a strong desire to learn, the fundamental technical skills needed to drive BI initiatives (SQL, DAX, Data Modeling, etc.); * Communication Skills - Partner with leadership and collaborate with software engineers to implement data architecture and design, to support complex analysis; * Analytical Skills - Conduct complex analysis and proactively identify key business insights to assist departmental decision making. Qualifications * Bachelor's Degree in Business, Life Science, Computer Science, or Related Degree; * 0-3 years of experience in business intelligence or analytics (Python & R heavily preferred); * Strong analytical and communication skills; * Excellent organization skills and the ability to multitask while efficiently completing high-quality work. Medpace Overview Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries. Why Medpace? People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today. The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future. 
Cincinnati Perks * Cincinnati Campus Overview * Flexible work environment * Competitive PTO packages, starting at 20+ days * Competitive compensation and benefits package * Company-sponsored employee appreciation events * Employee health and wellness initiatives * Community involvement with local nonprofit organizations * Discounts on local sports games, fitness gyms and attractions * Modern, eco-friendly campus with an on-site fitness center * Structured career paths with opportunities for professional growth * Discounted tuition for UC online programs Awards * Named a Top Workplace in 2024 by The Cincinnati Enquirer * Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024 * Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility What to Expect Next A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
    $69k-98k yearly est. Auto-Apply 7d ago
  • Data Visualization Engineer- Healthcare

    DBSI Services 3.5company rating

    Data engineer job in Cincinnati, OH

Job Title: Data Visualization Engineer - Healthcare Proven experience developing and delivering in data visualization, reporting, or business intelligence. Proficiency in Looker and LookML, in addition to other business intelligence platforms like Tableau and PowerBI. Experience with dbt and Snowflake. Advanced SQL knowledge, including writing complex queries, optimizing performance, and working with large datasets. Strong analytical and problem-solving skills, with the ability to translate complex data into actionable insights. Excellent communication skills, with the ability to effectively convey technical concepts to non-technical audiences. Oversee the design and development of interactive and engaging data visualizations and reports using tools such as Looker, Tableau, Power BI, or custom visualization libraries. Ensure adherence to best practices, including principles of clarity, accuracy, governance, and effectiveness. Qualification: B.E. Compensation: $50.00 - $55.00 per hour MAKING THE INDUSTRY'S BEST MATCHES DBSI Services is widely recognized as one of the industry's fastest-growing staffing agencies. Thanks to our longstanding experience in various industries, we have the capacity to build meaningful, long-lasting relationships with all our clients. Our success is a result of our commitment to the best people, the best solutions and the best results. Our Story: Founded in 1995 Privately Owned Corporation Managing Partner Business Model Headquartered in New Jersey US Based Engineers Only Collaborative Team Approach Methodology and Process Driven GET HIRED Top-performing engineers are the foundation of our business. Our priority is building strong relationships with each employment candidate we work with. You can trust our professional recruiters to invest the time required to fully understand your skills, explore your professional goals and help you find the right career opportunities.
    $50-55 hourly Auto-Apply 60d+ ago
  • Sr. Data Engineer

    Eliassen Group 4.7company rating

    Data engineer job in Cincinnati, OH

**Cincinnati, OH** **Type:** Contract **Category:** Engineer **Industry:** Financial Services **Reference ID:** JN -122025-104807 **Shortcut:** ********************************** + Description + Recommended Jobs **Description:** **Required:** In office 4 days a week minimum (Monday-Thursday) in Cincinnati, Ohio We are looking to hire an experienced Data Engineer to support a Data Science enablement squad. This squad enables enterprise ML/AI pipelines in AWS SageMaker. In this role, you'll accelerate the migration of critical Cloud Pak for Data (R Shiny) dashboards to Streamlit in Snowflake. We're looking for a hands‑on engineer who's eager to move fast for users and has an interest in ML/AI pipelining. Ideal candidates have partnered with data science teams to support model development, reporting, or deployment, and want to grow in modern data tooling. _Due to client requirements, applicants must be willing and able to work on a w2 basis. For our w2 consultants, we offer a great benefits package that includes Medical, Dental, and Vision benefits, 401k with company matching, and life insurance._ Rate $60-$70/hr W2 **Responsibilities:** + Migrate prioritized R Shiny dashboards to Streamlit in Snowflake and speed up delivery for risk/DS users. + Build app patterns (state, caching, charts, tables/exports) and curate production‑ready Snowflake datasets. + Collaborate with DS/ML engineers to align dashboards with SageMaker pipelines and model monitoring. + Apply governance and performance best practices. + Contribute to lightweight tests/CI/CD and operational telemetry. **Experience Requirements:** Technical Skills - Must Have + Streamlit + Snowflake + Python + R / R Shiny literacy + SQL Technical Skills - Nice to Have + dbt + DB2 + ETL / orchestration tools + AWS SageMaker + Snowpark / CI/CD (e.g., GitHub Actions) _Skills, experience, and other compensable factors will be considered when determining pay rate. 
The pay range provided in this posting reflects a W2 hourly rate; other employment options may be available that may result in pay outside of the provided range._ _W2 employees of Eliassen Group who are regularly scheduled to work 30 or more hours per week are eligible for the following benefits: medical (choice of 3 plans), dental, vision, pre-tax accounts, other voluntary benefits including life and disability insurance, 401(k) with match, and sick time if required by law in the worked-in state/locality._ _Please be advised: If anyone reaches out to you about an open position connected with Eliassen Group, please confirm that they have an Eliassen.com email address and never provide personal or financial information to anyone who is not clearly associated with Eliassen Group. If you have any indication of fraudulent activity, please contact ********************._ _About Eliassen Group:_ _Eliassen Group is a leading strategic consulting company for human-powered solutions. For over 30 years, Eliassen has helped thousands of companies reach further and achieve more with their technology solutions, financial, risk & compliance, and advisory solutions, and clinical solutions. With offices from coast to coast and throughout Europe, Eliassen provides a local community presence, balanced with international reach. Eliassen Group strives to positively impact the lives of their employees, clients, consultants, and the communities in which they operate._ _Eliassen Group is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status._ _Don't miss out on our referral program! If we hire a candidate you refer to us, you can be eligible for a $1,000 referral check!_
    $60-70 hourly 3d ago
  • Palantir Data Engineer / Architect

    Capgemini 4.5company rating

    Data engineer job in Cincinnati, OH

**About the job:** Key Responsibilities Architect and design end-to-end data solutions using Palantir Foundry, including ontology modeling, pipeline development, and operational workflows. Collaborate with business and technical teams to translate requirements into scalable data models and applications. Lead the integration of diverse data sources (structured and unstructured) into Foundry, ensuring data quality, lineage, and governance. Develop and enforce best practices for data modeling, pipeline orchestration, and application deployment within the Palantir ecosystem. Provide technical leadership and mentorship to junior developers and engineers working on Palantir projects. Partner with security and compliance teams to ensure data privacy, access control, and regulatory compliance. Stay current with Palantir platform updates and emerging technologies to continuously improve solution design and performance. Required Qualifications Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field. 3+ years of experience working with Palantir Foundry, including ontology design, code repositories, and pipeline development. Strong understanding of data architecture, ETL/ELT processes, and cloud platforms (e.g., AWS, Azure, GCP). Proficiency in Python, PySpark, and SQL. Experience with data governance, security models, and access control in enterprise environments. Excellent communication skills with the ability to explain complex technical concepts to non-technical stakeholders. Preferred Qualifications Palantir Foundry certification or equivalent hands-on experience. Experience in defense, aerospace, healthcare, or government sectors. Familiarity with DevOps practices, CI/CD pipelines, and agile methodologies. Knowledge of AI/ML integration within Palantir workflows. Familiarity with JavaScript or TypeScript. **Life at Sogeti** - Sogeti supports all aspects of your well-being throughout the changing stages of your life and career. 
For eligible employees, we offer: + Flexible work options + 401(k) with 150% match up to 6% + Employee Share Ownership Plan + Medical, Prescription, Dental & Vision Insurance + Life Insurance + 100% Company-Paid Mobile Phone Plan + 3 Weeks PTO + 7 Paid Holidays + Paid Parental Leave + Adoption, Surrogacy & Cryopreservation Assistance + Subsidized Back-up Child/Elder Care & Tutoring + Career Planning & Coaching + $5,250 Tuition Reimbursement & 20,000+ Online Courses + Employee Resource Groups + Counseling & Support for Physical, Financial, Emotional & Spiritual Well-being + Disaster Relief Programs **About Sogeti** Part of the Capgemini Group, Sogeti makes business value through technology for organizations that need to implement innovation at speed and want a local partner with global scale. With a hands-on culture and close proximity to its clients, Sogeti implements solutions that will help organizations work faster, better, and smarter. By combining its agility and speed of implementation through a DevOps approach, Sogeti delivers innovative solutions in quality engineering, cloud and application development, all driven by AI, data and automation. **Become Your Best** | ************* **Disclaimer** Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law. This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. 
Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodation during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact. Please be aware that Capgemini may capture your image (video or screenshot) during the interview process and that image may be used for verification, including during the hiring and onboarding process. Click the following link for more information on your rights as an Applicant ************************************************************************** Applicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini. Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is [110 to 125k]. 
This role may be eligible for other compensation including variable compensation, bonus, or commission. Full-time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees. Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law. Sogeti is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
    $75k-97k yearly est. 60d+ ago
  • Data Engineer (Local to OH) / W2 Contract

    eTek It Services 4.2company rating

    Data engineer job in Cincinnati, OH

We are seeking a skilled Data Engineer to join our Data Science team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support data analytics, machine learning, and Retrieval-Augmented Generation (RAG) type Large Language Model (LLM) workflows. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with data scientists, analysts, and other stakeholders. Key Responsibilities: Data Pipeline Development: Design, develop, and maintain robust and scalable ETL (Extract, Transform, Load) processes. Ensure data is collected, processed, and stored efficiently and accurately. Data Integration: Integrate data from various sources, including databases, APIs, and third-party data providers. Ensure data consistency and integrity across different systems. RAG Type LLM Workflows: Develop and maintain data pipelines specifically tailored for Retrieval-Augmented Generation (RAG) type Large Language Model (LLM) workflows. Ensure efficient data retrieval and augmentation processes to support LLM training and inference. Collaborate with data scientists to optimize data pipelines for LLM performance and accuracy. Semantic/Ontology Data Layers: Develop and maintain semantic and ontology data layers to enhance data integration and retrieval. Ensure data is semantically enriched to support advanced analytics and machine learning models. Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Provide technical support and guidance on data-related issues. Data Quality and Governance: Implement data quality checks and validation processes to ensure data accuracy and reliability. Adhere to data governance policies and best practices. Performance Optimization: Monitor and optimize the performance of data pipelines and infrastructure. 
Troubleshoot and resolve data-related issues in a timely manner. Support for Analysis: Support short-term ad-hoc analysis by providing quick and reliable data access. Contribute to longer-term goals by developing scalable and maintainable data solutions. Documentation: Maintain comprehensive documentation of data pipelines, processes, and infrastructure. Ensure knowledge transfer and continuity within the team. Technical Requirements: Education and Experience: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3+ years of experience in data engineering or a related role. Technical Skills: Proficiency in Python (mandatory). Experience with other programming languages such as Java or Scala is a plus. Experience with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB). Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. RAG Type LLM Skills: Experience with data pipelines for LLM workflows, including data retrieval and augmentation. Familiarity with natural language processing (NLP) techniques and tools. Understanding of LLM architectures and their data requirements. Semantic/Ontology Data Layers: Familiarity with semantic and ontology data layers and their application in data integration and retrieval. Tools and Frameworks: Experience with ETL tools and frameworks (e.g., Apache NiFi, Airflow, Talend). Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus. Soft Skills: Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced, dynamic environment. Preferred Qualifications: Experience with machine learning and data science workflows. Knowledge of data governance and compliance standards. Certification in cloud platforms or data engineering.
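The RAG workflows this posting describes hinge on a retrieval step that chunks source documents and ranks candidates against a query. As a rough, standard-library-only sketch of that idea (the function names are illustrative, not from the posting; a production pipeline would use embeddings and a vector store rather than word overlap):

```python
# Minimal sketch of the retrieval step in a RAG-style pipeline.
# Hypothetical helpers for illustration only; real systems score
# chunks with vector similarity, not raw word overlap.

def chunk_text(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (stand-in for similarity search)."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]
```

The top-k chunks returned by `retrieve` would then be concatenated into the LLM prompt as augmented context, which is the "augmentation" half of the workflow the posting refers to.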
    $74k-101k yearly est. 60d+ ago
  • Data Warehouse Specialist

    Collabera 4.5company rating

    Data engineer job in Hebron, KY

Established in 1991, Collabera has been a leader in IT staffing for over 22 years and is one of the largest diversity IT staffing firms in the industry. As a half-billion-dollar IT company, with more than 9,000 professionals across 30+ offices, Collabera offers comprehensive, cost-effective IT staffing & IT Services. We provide services to Fortune 500 and mid-size companies to meet their talent needs with high-quality IT resources through Staff Augmentation, Global Talent Management, Value Added Services through CLASS (Competency Leveraged Advanced Staffing & Solutions) Permanent Placement Services and Vendor Management Programs. Collabera recognizes the true potential of human capital and provides people the right opportunities for growth and professional excellence. Collabera offers a full range of benefits to its employees including paid vacations, holidays, personal days, Medical, Dental and Vision insurance, 401K retirement savings plan, Life Insurance, Disability Insurance. Job Description Job Details: Location: Hebron, KY Job Title: Data Warehouse Specialist Duration: 12+ Months (Strong Possibility of Extension) Description: • Detail-oriented, with the ability to anticipate logical next steps and potential pitfalls in the warranty collection process • Proactive and strong data entry skills • High aptitude for learning computer systems • Process-driven approach to data analysis and problem solving • Local focal point for backup to purchase material • Inventory management • Proficient computer skills to use multiple platforms with experience in Microsoft Office (pivot tables) • Ability to communicate clearly and effectively • Customer and team focused • Cross-functional integration skills • Multi-task with multiple managers • Finance or logistics background helpful • Familiar with basic accounting • Safety shoes required (voucher will be provided) 1st shift (M-F, 7:30 am - 4:30 pm); occasional weekends. 
Additional Information If you are interested in this position and would like to set up an Interview, feel free to contact: Ujjwal Mane ************ ****************************
    $68k-86k yearly est. Easy Apply 1d ago
  • Data Engineer

    Robert Half 4.5company rating

    Data engineer job in Cincinnati, OH

    Description

    We are looking for an experienced Data Engineer to take a leading role in optimizing and transforming our data architecture. This position will focus on enhancing performance, scalability, and analytical capabilities within our systems. The ideal candidate will have a strong technical background and the ability to mentor teams while delivering innovative solutions.

    Responsibilities:
    - Redesign the existing PostgreSQL database architecture to support modern analytical models and improve overall system performance.
    - Optimize schemas by balancing normalization with denormalization techniques to achieve faster analytical reads.
    - Implement advanced strategies such as indexing, partitioning, caching, and replication to ensure high-throughput and low-latency data delivery.
    - Develop and maintain scalable data pipelines to guarantee accurate and timely data availability across distributed systems.
    - Collaborate with engineering teams to provide guidance on data modeling, query optimization, and architectural best practices.
    - Monitor and troubleshoot database performance issues, ensuring solutions are implemented effectively.
    - Lead efforts to enhance the reliability and scalability of data infrastructure to support future growth.
    - Serve as a technical mentor to team members, sharing expertise in database architecture and performance tuning.
    - Partner with stakeholders to understand data requirements and deliver solutions that meet business needs.

    Requirements
    - Minimum of 5 years of experience in senior-level data engineering or database architecture roles.
    - Expert-level proficiency in PostgreSQL, including schema design, optimization, and performance tuning.
    - Proven track record of transforming complex systems into efficient analytical data warehouses.
    - Advanced knowledge of database strategies such as indexing, partitioning, caching, and replication.
    - Hands-on experience designing and managing scalable data pipelines in cloud environments.
    - Strong skills in Amazon Web Services (AWS) and other modern data infrastructure tools.
    - Excellent communication skills, with the ability to convey technical concepts to non-technical stakeholders.
    - Ability to mentor and lead teams while driving technical innovation and best practices.

    Technology Doesn't Change the World, People Do. Robert Half is the world's first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great companies. We offer contract, temporary and permanent placement solutions for finance and accounting, technology, marketing and creative, legal, and administrative and customer support roles. Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity - whenever you choose - even on the go. Download the Robert Half app (https://www.roberthalf.com/us/en/mobile-app) and get 1-tap apply, notifications of AI-matched jobs, and much more.

    All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.

    © 2025 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans. By clicking "Apply Now," you're agreeing to Robert Half's Terms of Use (https://www.roberthalf.com/us/en/terms).
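    The indexing and partitioning strategies named in this posting are standard PostgreSQL techniques. As an illustration (not part of the job description, and with table and column names invented for the example), a small Python helper can generate the DDL for a declaratively range-partitioned table with monthly partitions and a cascading index on the partition key:

    ```python
    from datetime import date

    def monthly_partition_ddl(parent: str, column: str, start: date, months: int) -> list[str]:
        """Generate DDL for a PostgreSQL table range-partitioned by month on `column`."""
        stmts = [
            f"CREATE TABLE {parent} (id bigint, {column} date NOT NULL, payload jsonb) "
            f"PARTITION BY RANGE ({column});"
        ]
        year, month = start.year, start.month
        for _ in range(months):
            lo = date(year, month, 1)
            month += 1
            if month > 12:
                year, month = year + 1, 1
            hi = date(year, month, 1)  # exclusive upper bound of the range
            stmts.append(
                f"CREATE TABLE {parent}_{lo:%Y_%m} PARTITION OF {parent} "
                f"FOR VALUES FROM ('{lo}') TO ('{hi}');"
            )
        # An index created on the partitioned parent cascades to every partition.
        stmts.append(f"CREATE INDEX ON {parent} ({column});")
        return stmts

    if __name__ == "__main__":
        for stmt in monthly_partition_ddl("events", "event_date", date(2024, 1, 1), 3):
            print(stmt)
    ```

    Queries that filter on the partition-key column can then skip irrelevant partitions entirely (partition pruning), which is one common route to the "faster analytical reads" the posting describes.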
    $84k-116k yearly est. 32d ago

Learn more about data engineer jobs

How much does a data engineer earn in Florence, KY?

The average data engineer in Florence, KY earns between $62,000 and $109,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Florence, KY

$83,000

What are the biggest employers of Data Engineers in Florence, KY?

The biggest employers of Data Engineers in Florence, KY are:
  1. 84.51
  2. Ernst & Young
  3. ETEK International
  4. Cincinnati Bell
  5. Hudson Manpower
  6. U.S. Bank
  7. Eliassen Group
  8. Golden Tech
  9. Agility
  10. Tata Group