Junior Data Engineer
Data engineer job in Columbus, OH
Contract-to-Hire
Columbus, OH (Hybrid)
Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes.
Job Responsibilities
Automate key processes and enhance data quality
Improve data ingestion processes and enhance machine learning capabilities
Manage substitutions and allocations to streamline product ordering
Work on logistics-related data engineering tasks
Build and maintain ML models for predictive analytics
Interface with various customer systems
Collaborate on integrating AI models into customer service
Qualifications
Bachelor's degree in a related field
0-2 years of relevant experience
Proficiency in SQL and Python
Understanding of GCP/BigQuery (or any cloud experience; basic certifications a plus)
Knowledge of data science concepts
Business acumen and understanding (corporate experience or internship preferred)
Familiarity with Tableau
Strong analytical skills
Aptitude for collaboration and knowledge sharing
Ability to present confidently in front of leaders
Why Should You Apply?
You will receive custom technical training and professional development through our Elevate Program!
Start your career with a Fortune 15 company!
Access to cutting-edge technologies
Opportunity for career growth
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Lead Data Scientist
Data engineer job in Columbus, OH
Candidates MUST work on-site at one of the following locations:
Columbus, OH
Cincinnati, OH
Cleveland, OH
Indianapolis, IN
Hagerstown, MD
Chicago, IL
Detroit, MI
Minnetonka, MN
Houston, TX
Charlotte, NC
Akron, OH
Experience:
Master's degree and 5+ years of related work experience using statistics and machine learning to solve complex business problems; experience conducting statistical analysis with advanced statistical software, scripting languages, and packages; experience with big data analysis tools and techniques; and experience building and deploying predictive models, web scraping, and scalable data pipelines
Expert understanding of statistical methods and skills such as Bayesian network inference, linear and non-linear regression, and hierarchical/mixed (multi-level) modeling (see the sketch after this list)
Python, R, or SAS; SQL; and some form of lending experience (e.g., HELOC, mortgage) are most important
Excellent communication skills
If a candidate has credit card experience (e.g., Discover or Bread Financial), they are an A+ fit!
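As a rough illustration of the hierarchical/mixed-model methods listed above, here is a minimal sketch using statsmodels. The dataset, file name, and column names (default_rate, fico, dti, branch) are hypothetical placeholders, not anything specified by the client.

```python
# Minimal mixed-effects (multi-level) model sketch with statsmodels.
# Hypothetical lending dataset: rows are loans, grouped by branch.
import pandas as pd
import statsmodels.formula.api as smf

loans = pd.read_csv("loans.csv")  # hypothetical file with fico, dti, branch, default_rate

# Random intercept per branch; fixed effects for FICO score and debt-to-income.
model = smf.mixedlm("default_rate ~ fico + dti", data=loans, groups=loans["branch"])
result = model.fit()
print(result.summary())
```

A random intercept per group is the simplest multi-level structure; the same API extends to random slopes via the re_formula argument.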
Education:
Master's degree or PhD in computer science, statistics, economics, or a related field
Responsibilities:
Prioritizes analytical projects based on business value and technological readiness
Performs large-scale experimentation and builds data-driven models to answer business questions
Conducts research on cutting-edge techniques and tools in machine learning/deep learning/artificial intelligence
Evangelizes best practices to analytics and products teams
Acts as the go-to resource for machine learning across a range of business needs
Owns the entire model development process, from identifying the business requirements, data sourcing, model fitting, presenting results, and production scoring
Provides leadership, coaching, and mentoring to team members and develops the team to work with all areas of the organization
Works with stakeholders to ensure that business needs are clearly understood and that services meet those needs
Anticipates and analyzes trends in technology while assessing the emerging technology's impact(s)
Coaches individuals through change and serves as a role model
Skills:
Up-to-date knowledge of machine learning and data analytics tools and techniques
Strong knowledge in predictive modeling methodology
Experienced at leveraging both structured and unstructured data sources
Willingness and ability to learn new technologies on the job
Demonstrated ability to communicate complex results to technical and non-technical audiences
Strategic, intellectually curious thinker with focus on outcomes
Professional image with the ability to form relationships across functions
Ability to train more junior analysts regarding day-to-day activities, as necessary
Proven ability to lead cross-functional teams
Strong experience with Cloud Machine Learning technologies (e.g., AWS Sagemaker)
Strong experience with machine learning environments (e.g., TensorFlow, scikit-learn, caret)
Demonstrated expertise with at least one data science environment (R/RStudio, Python, SAS) and at least one database architecture (SQL, NoSQL)
Financial Services background preferred
Senior Data Engineer (W2 only)
Data engineer job in Columbus, OH
Bachelor's Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, or Java.
Proficiency with Azure data services, such as Azure Data Lake, Azure Data Factory and Databricks.
Expertise using cloud security (e.g., Active Directory, network security groups, and encryption services).
Proficient in Python for developing and maintaining data solutions.
Experience with optimizing or managing technology costs.
Ability to build and maintain a data architecture supporting both real-time and batch processing.
Ability to implement industry standard programming techniques by mastering advanced fundamental concepts, practices, and procedures, and having the ability to analyze and solve problems in existing systems.
Expertise with unit testing, integration testing and performance/stress testing.
Database management skills and understanding of legacy and contemporary data modeling and system architecture.
Demonstrated leadership skills, team spirit, and the ability to work cooperatively and creatively across an organization
Experience on teams leveraging Lean or Agile frameworks.
Senior Data Engineer
Data engineer job in Columbus, OH
Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in Columbus, OH (Remote). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-95277
Pay Range: $70-$71/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Work with marketing data partners to build data pipelines that automate data feeds from the partners to internal systems on Snowflake (a minimal sketch follows this list).
Work with data analysts to understand their data needs and prepare datasets for analytics.
Work with data scientists to build the infrastructure to deploy models, monitor their performance, and build the necessary audit infrastructure.
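One minimal sketch of the partner-feed automation described above, assuming the partner drops files into an S3 bucket registered as an external Snowflake stage; the connection parameters, stage, and table names are hypothetical placeholders.

```python
# Minimal sketch: load a partner data feed staged in S3 into Snowflake.
# All names (warehouse, database, stage, table) are hypothetical.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="MARKETING_WH",
    database="MARKETING",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # PARTNER_STAGE is an external stage pointing at the partner's S3 bucket.
    cur.execute("""
        COPY INTO RAW.PARTNER_FEED
        FROM @PARTNER_STAGE/daily/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    conn.close()
```

In practice a job like this would be scheduled (or triggered by S3 events via SQS/Lambda) rather than run by hand.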
Key Requirements and Technology Experience:
Key skills: Snowflake, Python, and AWS
Experience building data pipelines, data pipeline infrastructure, and related tools and environments used in analytics and data science (e.g., Python, Unix)
Experience developing analytic workloads with AWS services: S3, Simple Queue Service (SQS), Simple Notification Service (SNS), Lambda, EC2, ECR, and Secrets Manager
Strong proficiency in Python, SQL, Linux/Unix shell scripting, GitHub Actions or Docker, Terraform or CloudFormation, and Snowflake.
Order of importance: Terraform, Docker, and GitHub Actions or Jenkins
Experience with orchestration tools such as Prefect, dbt, or Airflow
Experience automating data ingestion, processing, and reporting/monitoring.
Experience with other relevant tools used in data engineering (e.g., SQL, Git)
Ability to set up environments (Dev, QA, and Prod) using GitHub repositories and GitHub rules/methodologies, and to maintain them via SQL coding and proper versioning
Our client is a leader in the insurance industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Data Engineer (Databricks)
Data engineer job in Columbus, OH
ComResource is searching for a highly skilled Data Engineer with a background in SQL and Databricks who can handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and research new uses for data acquisition.
Responsibilities:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Recommend different ways to constantly improve data reliability and quality.
Qualifications:
5+ years data quality engineering
Experience with Cloud-based systems, preferably Azure
Databricks and SQL Server testing
Experience with ML tools and LLMs
Test automation frameworks
Python and SQL for data quality checks (see the sketch after this list)
Data profiling and anomaly detection
Documentation and quality metrics
Healthcare data validation experience preferred
Test automation and quality process development
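A minimal sketch of the kind of Python data quality checks listed above, written in PySpark for a Databricks context. The table and column names (claims, member_id, claim_id, paid_amount) are hypothetical placeholders.

```python
# Minimal sketch of PySpark data quality checks: completeness, uniqueness, range.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.table("claims")  # hypothetical Databricks table

total = df.count()
checks = {
    # Completeness: required keys must not be null.
    "null_member_id": df.filter(F.col("member_id").isNull()).count(),
    # Uniqueness: claim_id should behave like a primary key.
    "duplicate_claim_id": total - df.select("claim_id").distinct().count(),
    # Range: paid amounts should be non-negative.
    "negative_paid_amount": df.filter(F.col("paid_amount") < 0).count(),
}
failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```

Checks like these typically run as a gated step in the pipeline so bad data fails loudly instead of propagating downstream.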
Plus:
Azure Databricks
Azure Cognitive Services integration
Databricks foundation model integration
Claude API implementation a plus
Python and NLP frameworks (spaCy, Hugging Face, NLTK)
Senior Data Engineer
Data engineer job in Columbus, OH
Responsible for understanding, preparing, processing, and analyzing data to make it valuable and useful for operations decision support.
Accountabilities in this role include:
Partnering with Business Analysis and Analytics teams.
Demonstrating problem-solving ability for effective and timely resolution of system issues, including production outages.
Developing and supporting standard processes to harvest data from various sources and perform data blending to develop advanced data sets, analytical cubes, and data exploration.
Utilizing queries, data exploration and transformation, and basic statistical methods.
Creating Python scripts.
Developing Microsoft SQL Server Integration Services Workflows.
Building Microsoft SQL Server Analysis Services Tabular Models.
Focusing on SQL database work with a blend of strong technical and communication skills.
Demonstrating ability to learn and navigate in large complex environments.
Exhibiting Excel acumen to develop complex spreadsheets and formulas, create macros, and understand VBA code within modules.
Required Skills:
Experience with MS SQL
Proficiency in Python
Desired Skills:
Experience with SharePoint
Advanced Excel Skills (formulas, VBA, Power Pivot, Pivot Table)
Senior Data Analytics Engineer
Data engineer job in Columbus, OH
We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization.
You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.
Key Responsibilities
Data Engineering & Pipeline Development
Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services.
Build end-to-end solutions to move large-scale data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS); a minimal sketch follows this list.
Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
Implement best practices for data quality, validation, auditing, and error-handling within pipelines.
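As a minimal sketch of such an ingestion-and-curation pipeline, the following PySpark job reads raw CSV from S3, applies light transformations, and writes partitioned Parquet. Bucket names, paths, and columns are hypothetical placeholders.

```python
# Minimal sketch: ingest raw CSV from S3, curate, write partitioned Parquet.
# Buckets, paths, and schema are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest raw CSV landed by a source system in S3.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-raw-zone/orders/2024/"))

# Light transformation/curation: typing, dedup, and a derived column.
curated = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts")))

# Write curated Parquet partitioned by date for downstream tools (e.g., Athena).
(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-zone/orders/"))
```

On AWS this would typically run on Glue or EMR, with the curated zone crawled or registered so Athena and BI tools can query it.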
Analytics Solution Design
Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.
Cross-Functional Collaboration
Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
Participate in Daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
Provide mentorship and guidance to junior engineers and analysts as needed.
Engineering (Supporting Skills)
Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application layer needs.
Implement scripts, utilities, and micro-services as needed to support analytics workloads.
Required Qualifications
5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
Expert-level proficiency (10/10) in:
Python
PySpark
Strong working knowledge of:
SQL and other programming languages
Demonstrated experience designing and delivering big-data ingestion and transformation solutions on AWS.
Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
Strong problem-solving skills and the ability to work independently in a fast-paced environment.
Preferred Qualifications
Experience with BI/Visualization tools such as Tableau
Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
Familiarity with machine learning workflows or MLOps frameworks.
Knowledge of metadata management, data governance, and data lineage tools.
Data Scientist with Hands On development experience with R, SQL & Python
Data engineer job in Columbus, OH
*Per the client, no C2Cs!*
Central Point Partners is currently interviewing candidates in the Columbus, OH area for a large client.
Only GCs and USCs.
This position is hybrid (4 days onsite)! Only candidates who are local to Columbus, OH will be considered.
Data Scientist with Hands On development experience with R, SQL & Python
Summary:
Our client is seeking a passionate, data-savvy Senior Data Scientist to join the Enterprise Analytics team to fuel our mission of growth through data-driven insights and opportunity discovery. This dynamic role uses a consultative approach with the business segments to dive into our customer, product, channel, and digital data to uncover opportunities for consumer experience optimization and customer value delivery. You will also enable stakeholders with actionable, intuitive performance insights that provide the business with direction for growth. The ideal candidate will have a robust mix of technical and communication skills, with a passion for optimization, data storytelling, and data visualization. You will collaborate with a centralized team of data scientists as well as teams across the organization including Product, Marketing, Data, Finance, and senior leadership. This is an exciting opportunity to be a key influencer to the company's strategic decisions and to learn and grow with our Analytics team.
Notes from the manager
The critical skills will be Python or R and a firm understanding of SQL, along with a foundational understanding of what data is needed to perform studies now and in the future. For a high-level summary that should help describe what this person will be asked to do alongside their peers:
I would say this person will balance analysis with development, knowing when to jump in and knowing when to step back to lend their expertise.
Feature & Functional Design
Data scientists are embedded in the teams designing the feature. Their main job here is to define the data tracking needed to evaluate the business case: things like event logging, Adobe tagging, third-party data ingestion, and any other tracking requirements. They also consult on if and when the business should bring data into the bank, and help connect the business with CDAO and IT warehousing and data engineering partners when new data needs to be brought forward.
Feature Engineering & Development
The same data scientists stay involved as the feature moves into execution. They support all necessary functions (Amigo, QA, etc.) to ensure data tracking is in place when the feature goes live. They also begin preparing to support launch evaluation and measurement against experimentation design or business case success criteria.
Feature Rollout & Performance Evaluation
Owns tracking the rollout, running A/B tests, and conducting impact analysis for all features they supported during the Feature & Functional Design and Feature Engineering & Development stages. They provide an unbiased view of how the feature performs against the original business case and make objective recommendations that give the business direction. They roll off once the feature has matured through business case/experiment design and evaluation.
In addition to supporting feature rollouts…
Data scientists on the team are also encouraged to pursue self-driven initiatives during periods when they are not actively supporting other projects. These initiatives may include designing experiments, conducting exploratory analyses, developing predictive models, or identifying new opportunities for impact.
For more information about this opportunity, please contact Bill Hart at ************ AND email your resume to **********************************!
Data Engineer
Data engineer job in Dublin, OH
The Data Engineer is a technical leader and hands-on developer responsible for designing, building, and optimizing data pipelines and infrastructure to support analytics and reporting. This role will serve as the lead developer on strategic data initiatives, ensuring scalable, high-performance solutions are delivered effectively and efficiently.
The ideal candidate is self-directed, thrives in a fast-paced project environment, and is comfortable making technical decisions and architectural recommendations. The ideal candidate has prior experience in modern data platforms, most notably Databricks and the “lakehouse” architecture. They will work closely with cross-functional teams, including business stakeholders, data analysts, and engineering teams, to develop data solutions that align with enterprise strategies and business goals.
Experience in the financial industry is a plus, particularly in designing secure and compliant data solutions.
Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
Optimize data storage, retrieval, and processing for performance, security, and cost-efficiency.
Ensure data integrity and governance by implementing robust validation, monitoring, and compliance processes.
Consume and analyze data from the data pipeline to infer, predict, and recommend actionable insights that inform operational and strategic decision-making and produce better results.
Empower departments and internal consumers with metrics and business intelligence to operate and direct our business, better serving our end customers.
Determine technical and behavioral requirements, identify strategies as solutions, and select solutions based on resource constraints.
Work with the business, process owners, and IT team members to design solutions for data and advanced analytics solutions.
Perform data modeling and prepare data in databases for analysis and reporting through various analytics tools.
Play a technical specialist role in championing data as a corporate asset.
Provide technical expertise in collaborating with project and other IT teams, internal and external to the company.
Contribute to and maintain system data standards.
Research and recommend innovative and, where possible, automated approaches for system data administration tasks. Identify approaches that leverage our resources and provide economies of scale.
Engineer systems that balance and meet performance, scalability, recoverability (including backup design), maintainability, security, and high-availability requirements and objectives.
Skills:
Databricks and related technologies - SQL, Python, PySpark, Delta Live Tables, data pipelines, AWS S3 object storage, Parquet/columnar file formats, AWS Glue.
Systems Analysis - The application of systems analysis techniques and procedures, including consulting with users, to determine hardware, software, platform, or system functional specifications.
Time Management - Managing one's own time and the time of others.
Active Listening - Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
Critical Thinking - Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
Active Learning - Understanding the implications of new information for both current and future problem-solving and decision-making.
Writing - Communicating effectively in writing as appropriate for the needs of the audience.
Speaking - Talking to others to convey information effectively.
Instructing - Teaching others how to do something.
Service Orientation - Actively looking for ways to help people.
Complex Problem Solving - Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
Troubleshooting - Determining causes of operating errors and deciding what to do about them.
Judgment and Decision Making - Considering the relative costs and benefits of potential actions to choose the most appropriate one.
Experience and Education:
High School Diploma (or GED or High School Equivalence Certificate).
Associate degree or equivalent training and certification.
5+ years of experience in data engineering including SQL, data warehousing, cloud-based data platforms.
Databricks experience.
2+ years Project Lead or Supervisory experience preferred.
Must be legally authorized to work in the United States. We are unable to sponsor or take over sponsorship at this time.
Data Engineer
Data engineer job in Columbus, OH
We're seeking a skilled Data Engineer based in Columbus, OH, to support a high-impact data initiative. The ideal candidate will have hands-on experience with Python, Databricks, SQL, and version control systems, and be comfortable building and maintaining robust, scalable data solutions.
Key Responsibilities
Design, implement, and optimize data pipelines and workflows within Databricks (a minimal sketch follows this list).
Develop and maintain data models and SQL queries for efficient ETL processes.
Partner with cross-functional teams to define data requirements and deliver business-ready solutions.
Use version control systems to manage code and ensure collaborative development practices.
Validate and maintain data quality, accuracy, and integrity through testing and monitoring.
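A minimal sketch of one such Databricks pipeline step, implemented as a Delta Lake upsert. The table names and merge key are hypothetical, and this assumes Delta tables (the default on Databricks).

```python
# Minimal sketch: idempotent upsert of daily events into a Delta table.
# Source/target table names and the merge key are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

updates = (spark.read.table("raw.customer_events")
           .where(F.col("event_date") == F.current_date()))

target = DeltaTable.forName(spark, "analytics.customers")

# MERGE makes the job safe to re-run: matched rows update, new rows insert.
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```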
Required Skills
Proficiency in Python for data engineering and automation.
Strong, practical experience with Databricks and distributed data processing.
Advanced SQL skills for data manipulation and analysis.
Experience with Git or similar version control tools.
Strong analytical mindset and attention to detail.
Preferred Qualifications
Experience with cloud platforms (AWS, Azure, or GCP).
Familiarity with enterprise data lake architectures and best practices.
Excellent communication skills and the ability to work independently or in team environments.
Senior Data Architect
Data engineer job in Marysville, OH
4 days onsite - Marysville, OH
Skillset:
Bachelor's degree in computer science, data science, engineering, or related field
Minimum of 10 years of relevant experience in the design and implementation of data models (Erwin) for enterprise data warehouse initiatives
Experience leading projects involving cloud data lakes, data warehousing, data modeling, and data analysis
Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS), real-time data distribution (Kinesis, Kafka, Dataflow), and modern data warehouse tools (Redshift, Snowflake, Databricks)
Experience with various database platforms, including DB2, MS SQL Server, PostgreSQL, Couchbase, MongoDB, etc.
Understanding of entity-relationship modeling, metadata systems, and data security, quality tools and techniques
Ability to design traditional/relational and modern big-data architecture based on business needs
Experience with business intelligence tools and technologies such as Informatica, Power BI, and Tableau
Exceptional communication and presentation skills
Strong analytical and problem-solving skills
Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders
Ability to guide solution design and architecture to meet business needs.
Cloud Engineer
Data engineer job in Columbus, OH
We're looking for very strong candidates, as the interview process for this team is extremely rigorous and highly selective.
GC/USC/GC-EAD ONLY. Must provide a valid LinkedIn profile and references upon request.
5 days on site in Jersey City
We have an opportunity for an Azure Cloud Developer to lead infrastructure provisioning and CI/CD integration efforts. This role is suited for candidates with proven experience in enterprise cloud operations and a strong grasp of automation, Terraform, and Azure services.
Key Responsibilities
Manage and support CI/CD pipelines across Azure environments.
Use Terraform to provision, configure, and maintain Azure infrastructure.
Execute deployment runbooks and adjust infrastructure as needed.
Work with internal tools (e.g., Jet and Jewels) to manage deployments.
Administer core Azure services: compute, networking, storage, and identity.
Support Azure messaging systems such as Event Grid, Event Hubs, and Service Bus.
Collaborate with cross-functional teams to support deployment readiness.
Required Qualifications
5-8 years of experience with Azure cloud infrastructure and Terraform.
Strong knowledge of Azure services, including compute, networking, storage, and IAM.
Hands-on experience managing CI/CD pipelines in enterprise environments.
Ability to interpret and execute operational runbooks independently.
Familiarity with internal DevOps systems such as Jet and Jewels.
Solid scripting skills in Python or similar languages.
Preferred Qualifications
Experience with enterprise-grade Azure environments and large-scale infrastructure.
Proficiency in Git-based workflows and CI/CD platforms such as Azure DevOps or GitHub Actions.
Understanding of security, governance, and compliance in cloud deployments.
Certifications such as AZ-104, AZ-204, or AZ-400 are preferred.
Software Engineer
Data engineer job in Columbus, OH
hackajob has partnered with a global technology and management consultancy, specializing in driving transformation across the financial services and energy industries, and we're looking for Java & Python Developers!
Role: Software Engineer (Java & Python)
Mission: This role focuses on a large technology implementation with a major transition of a broker/dealer platform. These resources will support ETL development, API development, and conversion planning.
Location: On-site role in Columbus, OH.
Rates:
W2 - $32 per hour
1099 - $42 per hour
Work authorization: This role requires you to be authorized to work in the United States without sponsorship.
Qualifications (4+ years of experience):
Strong experience with Java, Spring Boot, and microservices architecture.
Proficiency in Python for ETL and automation.
Hands-on experience with API development.
Knowledge of data integration, ETL tools, and conversion workflows.
hackajob is a recruitment platform that matches you with relevant roles based on your preferences. To be matched with the roles, you need to create an account with us.
This role requires you to be based in the US.
Principal Data Scientist : Product to Market (P2M) Optimization
Data engineer job in Groveport, OH
About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials.
This simple idea, that we all deserve to belong on our own terms, is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team.
About the Role
Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization.
The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels.
What You'll Do
* Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges.
* Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals.
* Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment.
* Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility.
* Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly.
* Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights.
* Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success.
* Quantify business benefits from deployed solutions using rigorous statistical and causal inference methods, ensuring that model outcomes translate into measurable value.
* Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP.
* Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality.
Who You Are
* Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline.
* 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains.
* Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes.
* Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation.
* Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib (a small OR-Tools sketch follows this list).
* Hands-on experience with enterprise platforms such as Databricks and cloud environments
* Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions.
* Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences.
* Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations.
* Experience managing code quality, CI/CD pipelines, and GitHub-based workflows.
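As a small illustration of the mixed-integer programming skills named above, here is a minimal OR-Tools sketch of a toy assortment choice. The framing and all numbers are hypothetical placeholders, not Gap Inc.'s method.

```python
# Minimal mixed-integer programming sketch with OR-Tools (pywraplp).
# Toy assortment problem: pick products to maximize margin under a space cap.
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")
assert solver is not None, "SCIP backend not available"

margins = [4.0, 2.5, 3.0, 5.5]   # hypothetical expected margin per product
space = [2, 1, 1, 3]             # hypothetical shelf space each product consumes
capacity = 4                     # hypothetical total shelf space available

# x[i] = 1 if product i is carried, 0 otherwise.
x = [solver.IntVar(0, 1, f"x{i}") for i in range(len(margins))]

solver.Add(sum(space[i] * x[i] for i in range(len(x))) <= capacity)
solver.Maximize(sum(margins[i] * x[i] for i in range(len(x))))

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    picked = [i for i in range(len(x)) if x[i].solution_value() > 0.5]
    print("carry products:", picked, "margin:", solver.Objective().Value())
```

Real P2M problems add many more constraints (inventory, pricing ladders, store clusters), but the modeling pattern is the same.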
Preferred Qualifications
* Experience shaping and executing multi-year analytics strategies in retail or supply chain domains.
* Proven ability to balance long-term innovation with short-term deliverables.
* Background in agile product development and stakeholder alignment for enterprise-scale initiatives.
Benefits at Gap Inc.
* Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees.
* One of the most competitive Paid Time Off plans in the industry.*
* Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.*
* Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.*
* Employee stock purchase plan.*
* Medical, dental, vision and life insurance.*
* See more of the benefits we offer.
* For eligible employees
Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
ETL Architect
Data engineer job in Columbus, OH
E*Pro Consulting's service offerings include contingent staff augmentation of IT professionals, permanent recruiting, and temp-to-hire. In addition, our industry expertise and knowledge within the financial services, insurance, telecom, manufacturing, technology, media and entertainment, pharmaceutical, health care, and service industries ensure our services are customized to meet specific needs. For more details please visit our website ******************
Job Description
Title : ETL Architect
Location : Columbus, OH
Type : Fulltime Permanent
Work Status : US Citizen / GC / EAD (GC)
Required Skills:
• Responsible for Architecture, Design and Implementation of Data Integration/ETL, Data Quality, Metadata Management and Data Migration solutions using Informatica tools
• Execute engagements as Data Integration-ETL Architect and define Solution Strategy, Architecture, Design and Implementation approach
• Expertise in implementing Data Integration-ETL solutions which include components such as ETL, Data Migration, Replication, Consolidation, Data Quality, Metadata Management, etc., using Informatica products (e.g., PowerCenter, PowerExchange, IDQ, Metadata Manager)
• Responsible for Detailed ETL design, Data Mapping, Transformation Rules, Interfaces, Database schema, Scheduling, Performance Tuning, etc.
• Lead a team of designers/developers and guide them throughout the implementation life cycle and perform Code review
• Engage client Architects, SMEs and other stakeholders throughout Architecture, Design and implementation lifecycle and recommend effective solutions
• Experience in multiple databases such as Oracle, DB2, SQL Server, Mainframe, etc.
• Experience in industry models such as IIW, IAA, ACORD, HL7, etc., and insurance products (e.g., Guidewire) will be a plus
Additional Information
All your information will be kept confidential according to EEO guidelines.
Data Scientist
Data engineer job in Delaware, OH
The Data Scientist will be responsible for creating product reliability models and advanced analytics that drive strategic decisions about product improvements. This role will collaborate closely with cross-functional teams, including Engineering, Product Management, Quality, Services, and IT, to develop and deploy data-driven solutions that address complex customer challenges. You should understand the impacts of environmental and other field conditions as they relate to product reliability. In this role, you should be able to apply mathematical and statistical methods to predict future field performance by building product reliability models using software tools.
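As a rough illustration of that kind of reliability modeling, here is a minimal sketch that fits a Weibull model with SciPy; the failure times and the 2,000-hour horizon are hypothetical placeholders.

```python
# Minimal sketch: fit a Weibull reliability model to (hypothetical) failure data.
import numpy as np
from scipy import stats

hours_to_failure = np.array([1200., 1500., 1810., 2300., 2650., 3100.])  # hypothetical

# Fit a two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(hours_to_failure, floc=0)

# Predicted reliability (survival probability) at 2,000 hours of field use.
r_2000 = stats.weibull_min.sf(2000, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.0f}, R(2000h)={r_2000:.2%}")
```

A production analysis would also handle censored units (still running at analysis time) and correlate the fitted model against test and field data.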
PRINCIPAL DUTIES & RESPONSIBILITIES:
* Analyze product reliability requirements
* Create predictive models for field performance and product reliability
* Assist in design of experiments and analysis to understand impacts of different design decisions and test results
* Correlate predictive models with test results and field data
REQUIREMENTS:
* Bachelor's Degree in Math, Statistics, Data Science, Computer Science, Reliability Engineering, or equivalent experience
* 1-5 years of experience
* Basic knowledge in power/electrical engineering; AC and DC power
* Basic knowledge in large- and small-scale cooling systems
* LabVIEW, Python, or other coding and modeling language
* Minitab or other statistical software tools
* Power BI or other data visualization tools
* Travel 15%
Senior Data Engineer
Data engineer job in Columbus, OH
Here at Lower, we believe homeownership is the key to building wealth, and we're making it easier and more accessible than ever. As a mission-driven fintech, we simplify the home-buying process through cutting-edge technology and a seamless customer experience.
With tens of billions in funded home loans and top ratings on Trustpilot (4.8), Google (4.9), and Zillow (4.9), we're a leader in the industry. But what truly sets us apart? Our people. Join us and be part of something bigger.
Job Description:
We are seeking a Senior Data Engineer to play a key role in building and optimizing our data infrastructure to support business insights and decision-making. In this role, you will design and enhance denormalized analytics tables in Snowflake, build scalable ETL pipelines, and ensure data from diverse sources is transformed into accurate, reliable, and accessible formats. You will collaborate with business and sales stakeholders to gather requirements, partner with developers to ensure critical data is captured at the application level, and optimize existing frameworks for performance and integrity. This role also includes creating robust testing frameworks and documentation to ensure quality and consistency across data pipelines.
What you'll do:
Data Pipeline Engineering:
Design, develop, and optimize high-performance ETL/ELT pipelines using Python, dbt, and Snowflake.
Build and manage real-time ingestion pipelines leveraging AWS Lambda and CDC systems (a minimal sketch follows).
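A minimal sketch of a CDC-style ingestion step on AWS Lambda, assuming change records arrive via an SQS trigger and are staged to S3 for downstream loading into Snowflake; the bucket, key layout, and record shape are hypothetical.

```python
# Minimal sketch: Lambda handler staging CDC change records to S3.
# Bucket name, key layout, and record shape are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
STAGING_BUCKET = "example-cdc-staging"  # hypothetical

def handler(event, context):
    """Write each change record to S3 staging for the warehouse to pick up."""
    records = event.get("Records", [])
    for record in records:
        change = json.loads(record["body"])  # assumes SQS-delivered JSON
        key = f"cdc/{change['table']}/{change['op']}/{record['messageId']}.json"
        s3.put_object(
            Bucket=STAGING_BUCKET,
            Key=key,
            Body=json.dumps(change).encode("utf-8"),
        )
    return {"processed": len(records)}
```

Keying objects by table and operation keeps the staging area partitioned so downstream COPY/merge jobs stay simple and idempotent.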
Cloud & Infrastructure:
Develop scalable serverless solutions with AWS, adopting event-driven architecture patterns.
Manage containerized applications using Docker and infrastructure as code via GitHub Actions.
Advanced Data Management:
Create sophisticated, multi-layered Snowflake data models optimized for scalability, flexibility, and performance.
Integrate and manage APIs for Salesforce, Braze, and various financial systems, emphasizing robust error handling and reliability.
Quality Assurance & Operations:
Implement robust testing frameworks, data lineage tracking, monitoring, and alerting.
Enhance and manage CI/CD pipelines, drive migration to modern orchestration tools (e.g., Dagster, Airflow), and manage multi-environment deployments.
Who you are:
5+ years of data engineering experience, ideally with cloud-native architectures.
Expert-level Python skills, particularly with pandas, SQLAlchemy, and asynchronous processing.
Advanced SQL and Snowflake expertise, including stored procedures, external stages, performance tuning, and complex query optimization.
Strong proficiency with dbt, including macro development, testing, and automated deployments.
Production-grade pipeline experience, specifically with Lambda, S3, API Gateway, and IAM.
Proven experience with REST APIs, authentication patterns, and handling complex data integrations.
Preferred Experience
Background in financial services or fintech, particularly loan processing, customer onboarding, or compliance.
Experience with real-time streaming platforms like Kafka or Kinesis.
Familiarity with Infrastructure as Code tools (Terraform, CloudFormation).
Knowledge of BI and data visualization tools (Tableau, Looker, Domo).
Container orchestration experience (ECS, Kubernetes).
Understanding of data lake architectures and Delta Lake.
Technical Skills
Programming: Python (expert), SQL (expert), Bash scripting.
Cloud: AWS (Lambda, S3, API Gateway, CloudWatch, IAM).
Data Warehouse: Snowflake, dimensional modeling, query optimization.
ETL/ELT: dbt, pandas, custom Python workflows.
DevOps: GitHub Actions, Docker, automated testing.
APIs: REST integration, authentication, error handling.
Data Formats: JSON, CSV, Parquet, Avro.
Version Control: Git, GitHub workflows.
What Sets You Apart
Systems Thinking: You see the big picture, designing data flows that scale and adapt with the business.
Problem Solver: You quickly diagnose and resolve complex data issues across diverse systems and APIs.
Quality Advocate: You write comprehensive tests, enforce data quality standards, and proactively prevent data issues.
Collaborative: You thrive working alongside analysts, developers, and product teams, ensuring seamless integration and teamwork.
Continuous Learner: You actively seek emerging data technologies and best practices to drive innovation.
Business Impact: You understand how your data engineering decisions directly influence and drive business outcomes.
Benefits & Perks
Competitive salary and comprehensive benefits (healthcare, dental, vision, 401k match)
Hybrid work environment (primarily remote, with two days a week in downtown Columbus, Ohio)
Professional growth opportunities and internal promotion pathways
Collaborative, mission-driven culture recognized as a local and national "best place to work"
If you don't think you meet all of the criteria above but are still interested in the job, please apply. Nobody checks every box, and we're looking for someone excited to join the team.
Lower provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
Privacy Policy
Senior Data Engineer
Data engineer job in Columbus, OH
Our direct client has a long-term contract need for a Sr. Data Engineer.
Candidate Requirements:
Candidates must be local to Columbus, Ohio
Candidates must be willing and able to work the following:
Hybrid schedule (3 days in office & 2 days WFH)
The team is responsible for the implementation of the new Contract Management System (FIS Asset Finance) as well as the integration into the overall environment and the migration of data from the legacy contract management system to the new system.
Candidate will be focused on the delivery of data migration topics to ensure that high quality data is migrated from the legacy systems to the new systems. This may involve data mapping, SQL development and other technical activities to support Data Migration objectives.
Must Have Experience:
Strong C# and SQL Server design and development skills, including analysis and design. IMPORTANT: MUST HAVE!
Strong technical analysis skills
Strong collaboration skills to work effectively with cross-functional teams
Exceptional ability to structure, illustrate, and communicate complex concepts clearly and effectively to diverse audiences, ensuring understanding and actionable insights.
Demonstrated adaptability and problem-solving skills to navigate challenges and uncertainties in a fast-paced environment.
Strong prioritization and time management skills to balance multiple projects and deadlines in a dynamic environment.
In-depth knowledge of Agile methodologies and practices, with the ability to adapt and implement Agile principles in testing and delivery processes.
Nice to have:
ETL design and development; data mapping skills and experience; experience executing/driving technical design and implementation topics
Senior Looker Developer
Data engineer job in Columbus, OH
Responsible for collaborating with business and technical teams to gather requirements and translate them into insightful, scalable dashboards and widgets using Google Looker. Develops and maintains LookML models, ensures data accuracy, and drives impactful data visualization solutions to support customers' data needs.
We are seeking an experienced Senior Business Intelligence Developer to join our growing customer-facing reporting portal team. The ideal candidate will be adept at utilizing BI tools to transform data into meaningful insights that drive business decisions. Your primary role will be to develop, implement, and maintain BI solutions tailored to the financial reporting needs of our customers and internal stakeholders.
Key Responsibilities
Participate in the full lifecycle of BI development, from requirements gathering to deployment and user acceptance testing
Design, develop, and maintain scalable BI solutions focused on financial reporting using enterprise BI tools such as Looker and PowerBI
Monitor reports to ensure data integrity and report functionality are maintained
Provide training and support to end-users on new reports and dashboards
Stay current with the latest trends and technologies in BI and financial reporting
Job Qualifications
Bachelor's degree in Computer Science, Information Technology, Finance, or a related field
Minimum of 5 years of experience as a BI Developer; experience focused on financial reporting is a bonus
Strong proficiency in SQL and experience with large datasets
Extensive experience with BI tools such as Looker, PowerBI, or MicroStrategy
Excellent understanding of data modeling, data warehousing, and ETL processes
Proven ability to translate business needs into technical specifications
Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
Excellent communication and collaboration skills, with the ability to interact effectively with various stakeholders
Java Software Engineer
Data engineer job in Columbus, OH
Title: Java Software Engineer
Hire Type: 12-month contract to start (potential extensions and full-time hire)
Pay Range: $50/hr - $65/hr (contingent on years of experience, skills, and education)
Required Skills & Experience
Strong programming skills in Java
Jenkins experience for automating builds, CI/CD, and pipeline orchestration
Experience working in an AWS environment with some exposure to cloud development
Experience with event-driven architecture
Job Description
Insight Global is looking for a Java Software Engineer to sit in Columbus, Ohio. This candidate will be aligned to a platform automation project within their internal ERP system. Automation efforts will be assigned to internal developers, and this resource will work within the middle tier of their internal system. The current code is written in the .NET Framework, but the new code being developed will be Java based. Candidates will work with various teams and will be specifically aligned to the Billing Portal within the internal system, focusing on the code for transitions from the middle tier to the customer/client-facing tier and back-office functions. Candidates need to have worked in an AWS environment and have some exposure to event-driven architecture (general structure).