Senior data scientist jobs in Lorain, OH - 76 jobs

  • AI & GenAI Data Scientist-Senior Associate

    PwC 4.8 company rating

    Senior data scientist job in Cleveland, OH

    Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Senior Associate.

    At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

    Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

    Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
    * Respond effectively to the diverse perspectives, needs, and feelings of others.
    * Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
    * Use critical thinking to break down complex concepts.
    * Understand the broader objectives of your project or role and how your work fits into the overall strategy.
    * Develop a deeper understanding of the business context and how it is changing.
    * Use reflection to develop self-awareness, enhance strengths and address development areas.
    * Interpret data to inform insights and recommendations.
    * Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

    Minimum Degree Required: Bachelor's Degree. Minimum Year(s) of Experience: 4 year(s).

    Demonstrates thorough-level abilities and/or a proven record of success managing the identification and addressing of client needs:
    * Building GenAI and AI solutions, including but not limited to analytical model development and implementation, prompt engineering, general all-purpose programming (e.g., Python), testing, communication of results, front-end and back-end integration, and iterative development with clients
    * Documenting and analyzing business processes for AI and Generative AI opportunities, including gathering of requirements, creation of initial hypotheses, and development of GenAI and AI solution approaches
    * Collaborating with client teams to understand their business problem and select the appropriate analytical models and approaches for AI and GenAI use cases
    * Designing and solutioning AI/GenAI architectures for clients, specifically for plugin-based solutions (i.e., ChatClient applications with plugins) and custom AI/GenAI application builds
    * Processing unstructured and structured data to be consumed as context for LLMs, including but not limited to embedding of large text corpora, generative development of SQL queries, and building connectors to structured databases
    * Supporting management of daily operations of a global data and analytics team on client engagements, reviewing developed models, providing feedback, and assisting in analysis
    * Directing data engineers and other data scientists to deliver efficient solutions to meet client requirements
    * Leading and contributing to development of proofs of concept, pilots, and production use cases for clients while working in cross-functional teams
    * Structuring, writing, communicating, and facilitating client presentations
    * Directing associates through coaching, providing feedback, and guiding work performance

    Demonstrates thorough abilities and/or a proven record of success learning and performing in functional and technical capacities, including the following areas:
    * Managing AI/GenAI application development teams, including back-end and front-end integrations
    * Using Python (e.g., Pandas, NLTK, Scikit-learn, Keras), common LLM development frameworks (e.g., LangChain, Semantic Kernel), relational storage (SQL), and non-relational storage (NoSQL)
    * Experience in analytical techniques such as Machine Learning, Deep Learning, and Optimization
    * Vectorization and embedding, prompt engineering, and RAG (retrieval-augmented generation) workflow development
    * Understanding of, or hands-on experience with, Azure, AWS, and/or Google Cloud platforms
    * Experience with Git version control; unit, integration, and end-to-end testing; CI/CD; release management; etc.

    Travel Requirements: Up to 80%. Job Posting End Date: Learn more about how we work: ************************** PwC does not intend to hire experienced or entry level job seekers who will need, now or in the future, PwC sponsorship through the H-1B lottery, except as set forth within the following policy: ***********************************

    As PwC is an equal opportunity employer, all qualified applicants will receive consideration for employment at PwC without regard to race; color; religion; national origin; sex (including pregnancy, sexual orientation, and gender identity); age; disability; genetic information (including family medical history); veteran, marital, or citizenship status; or any other status protected by law. For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, the San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for employment in accordance with these laws.

    At PwC, we recognize that conviction records may have a direct, adverse, and negative relationship to responsibilities such as accessing sensitive company or customer information, handling proprietary assets, or collaborating closely with team members. We evaluate these factors thoughtfully to establish a secure and trusted workplace for all.

    Applications will be accepted until the position is filled or the posting is removed, unless otherwise set forth on the following webpage. Please visit this link for information about anticipated application deadlines: ***************************************

    The salary range for this position is $77,000 - $202,000. Actual compensation within the range will be dependent upon the individual's skills, experience, qualifications and location, and applicable employment laws. All hired individuals are eligible for an annual discretionary bonus. PwC offers a wide range of benefits, including medical, dental, vision, 401k, holiday pay, vacation, personal and family sick leave, and more. To view our benefits at a glance, please visit the following link: ***********************************
    $77k-202k yearly Auto-Apply 12d ago
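The RAG workflow named in the posting above (embedding a text corpus, then retrieving context for an LLM) can be sketched in miniature. This is a generic illustration, not PwC tooling: the bag-of-words `embed` function is a toy stand-in for a real embedding model, and the corpus sentences are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(corpus, query, k=1):
    """Return the k corpus passages most similar to the query -- the 'R' in RAG."""
    scored = sorted(corpus, key=lambda doc: cosine(embed(doc), embed(query)), reverse=True)
    return scored[:k]

corpus = [
    "Quarterly tire sales rose in the Midwest region.",
    "The audit guidance covers independence requirements.",
    "Prompt engineering improves LLM output quality.",
]
context = retrieve(corpus, "How do I improve LLM prompts?", k=1)
# In a full pipeline, the retrieved passage would be placed into the LLM prompt as context.
```

In production the toy embedding would be replaced by a learned embedding model and the linear scan by a vector index, but the retrieve-then-augment shape stays the same.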

  • Senior Data Scientist - Supply Chain

    Dealer Tire 4.7 company rating

    Senior data scientist job in Cleveland, OH

    Who We Are: We're Dealer Tire, a family-owned, international distributor of tires and parts established in 1918 in Cleveland, OH. We're laser focused on helping the world's largest and most trusted auto manufacturers grow their tire business; in fact, we've sold more than 60 million tires to date. We're a thriving company, and we're looking for driven individuals to join our team. That's where you come in!

    Base Pay Range: $130,000-$145,000

    As a Senior Data Scientist, your essential job functions will include the following:

    Provide Thought Leadership & Drive Change
    * Deliver data-driven insights through advanced algorithms and statistical models aligned with Supply Chain strategy, maximizing profitability and minimizing costs.
    * Collaborate with leadership to effectively solve problems, assess results, and communicate actionable recommendations.
    * Build relationships with cross-functional teams to drive progress and adoption of Data Science projects.
    * Execute discovery sprints and deep EDA to convert anecdotes into testable hypotheses, KPIs, and use cases across supply chain functions.
    * Define and track operational KPIs with model health to tie analytics to service, cost, safety, and cash.
    * Translate complex analyses into simple narratives and decision memos leaders can act on, linking findings to economics and operational playbooks.

    Advanced Modeling and Analysis
    * Identify and prioritize projects with the most significant returns and develop advanced mathematical and statistical models.
    * Design optimization/ML/simulation models and package them to drive day-to-day operations and decisions with clear recommendations and "what-if" scenarios.
    * Create high-quality features and datasets with lineage and quality checks; stand up lightweight feature stores and reusable "EDA fact packs" to accelerate future work.
    * Develop, maintain, and optimize models in cloud environments, ensuring business teams can leverage insights with actionable recommendations.
    * Collect, cleanse, transform, and curate structured and unstructured data into usable formats for research and analysis.
    * Utilize tools like SQL and data APIs to manage data in various storage environments, ensuring privacy and confidentiality.

    Product and Project Management
    * Lead projects and collaborate with subject matter experts to align analysis outcomes with organizational goals.
    * Partner with MLE/DE to deliver resilient pipelines with alerting and retraining; author runbooks and guardrails so Ops can operate models confidently.
    * Design and productionize multi-agent workflows that read from the data lake or warehouse and trigger actions; implement human-in-the-loop review, role-based guardrails, and observability.
    * Design and deliver scalable, integrated software tools and models that balance predictive power, interpretability, and operational risks.
    * Manage strategic projects, ensuring clear communication, timely delivery, and appropriate ROI for recommended initiatives.
    * Other duties as assigned.

    Basic Position Requirements
    * MS in Computer Science, Operations Research, Applied Mathematics, Machine Learning, or a related field.
    * 4-5 years of experience implementing optimization, ML, or AI models.
    * Effective program management experience with proven results required.
    * Strong data engineering principles; familiar with technologies to store, query, process, and analyze data.
    * Strong programming skills in languages like Python, C/C++, Java, or Go.
    * Proven record of productionizing models in cloud environments, version control, and MLOps.

    Preferred Qualifications
    * Demonstrated in-depth technical capabilities, professional knowledge, and understanding in functional areas of Supply Chain & Optimization: Inventory, Transportation, Sourcing, Distribution, Fulfillment, and Planning.
    * 2+ years of practical experience with Optimization + Simulation (Pyomo/OR-Tools/Gurobi/CPLEX; Monte Carlo/discrete-event) and ML.
    * Ability to simplify complexity, influence cross-functionally, and tell the story behind the math.
    * Displays a creative mindset with problem solving in sophisticated and complex environments. This includes, but is not limited to, identifying both new and existing problems and applying the most appropriate technology to successfully solve the given challenge.

    Competencies Required: Results Orientation, Business Acumen, Agility, Initiative, Influence, Communication, Customer Focus, Relationship Building, Critical Thinking

    Physical Job Requirements: Continuous viewing from and inputting to a computer screen. Sitting for long periods of time. Travel as necessary.

    Drug Policy: Dealer Tire is a drug-free environment. All applicants being considered for employment must pass a pre-employment drug screening before beginning work.

    Why Dealer Tire: An amazing opportunity to join a growing organization, built on the efforts of hard-working, innovative, and team-oriented people. The compensation offered for this position will depend on qualifications, experience, and geographic location. The total compensation package may also include commission, bonus, or profit sharing. We offer a competitive & comprehensive benefit package including: paid time off, medical, dental, vision, and 401k match (50% on the dollar up to 7% of employee contribution). For more information on our benefit offerings, please visit our Dealer Tire Family of Companies Benefits Highlights Booklet.

    EOE Statement: Dealer Tire is an Equal Employment Opportunity (EEO) employer and does not discriminate on the basis of race, color, national origin, religion, gender, age, veteran status, political affiliation, sexual orientation, marital status or disability (in compliance with the Americans with Disabilities Act*), or any other legally protected status, with respect to employment opportunities.

    *ADA Disclosure: Any candidate who feels that they may need an accommodation to complete this application, or any portions of same, based on the impact of a disability should contact Dealer Tire's Human Resources Department to discuss your specific needs. Please feel free to contact us at ADAAAccommodation@dealertire.com or via phone at ************.
    $130k-145k yearly Auto-Apply 60d+ ago
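The Monte Carlo simulation experience the posting above asks for can be illustrated with a toy cycle-service-level check for a single SKU's reorder point. All figures (daily demand, lead time, reorder point) are invented for the example; a real model would calibrate them from historical supply-chain data.

```python
import random

def simulate_service_level(reorder_point, mean_daily_demand, sd, lead_time_days,
                           trials=10_000, seed=42):
    """Estimate, by Monte Carlo, the probability that stock on hand at the
    reorder point covers demand during the replenishment lead time."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        # One lead-time demand realization: sum of (truncated) normal daily demands.
        lead_time_demand = sum(
            max(0.0, rng.gauss(mean_daily_demand, sd)) for _ in range(lead_time_days)
        )
        if lead_time_demand <= reorder_point:
            covered += 1
    return covered / trials

# Toy numbers: ~100 tires/day on average, sd 30, 5-day replenishment lead time.
level = simulate_service_level(reorder_point=600, mean_daily_demand=100,
                               sd=30, lead_time_days=5)
```

Sweeping `reorder_point` over a grid and picking the smallest value that meets a target service level is the usual next step; libraries like SimPy or OR-Tools take over once the model outgrows a toy like this.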
  • Senior Data Scientist

    Sequoia Financial Group LLC 3.9 company rating

    Senior data scientist job in Akron, OH

    Job Description: Sequoia Financial Group is a growing Registered Investment Advisor (RIA), headquartered in Northeast Ohio, offering financial planning and wealth management services. At Sequoia, we exist with a singular purpose: to enrich lives. Our values define how we behave and guide us through the pursuit of our purpose to enrich lives. At Sequoia, our core values are:
    * Integrity. We act in the best interests of others by providing an honest, consistent experience for our clients and team.
    * Passion. We pursue our full potential, seeking to continually enhance and evolve our ability to serve our clients and team.
    * Teamwork. We subordinate our egos to work together for the benefit of our clients.

    Our promise to team members is that you will grow with us. From experienced advisors to new college grads to transitioning principals, every team member will find Sequoia a place to refine their professional mission, move into new opportunities, go deeper, and lead further. We are built to help you build a career here as a long-term contributor in our work to enrich lives for generations. As we expand our Data & AI Office, we seek a hands-on Senior Data Scientist to help shape our data and AI strategy, drive architectural excellence, and enable scalable, secure, and intelligent AI-first processes. This role is pivotal in supporting our enterprise-wide AI initiatives and AI adoption.

    Role Overview: Lead enterprise-wide AI discovery and predictive modeling to identify high-impact business problems and translate them into production-ready solutions. Build Proofs of Concept (POCs) and Minimum Viable Products (MVPs) using AI/LLMs and data science, operationalize predictive models across Client Experience, Operations, Compliance, and Marketing, and drive a culture of experimentation grounded in measurable ROI and responsible AI.

    Key Responsibilities

    Discovery & Problem Framing
    * Conduct user interviews and journey walkthroughs to surface real problems; apply behavioral insights to translate ambiguous needs into quantified problem statements and value hypotheses.
    * Build prioritization frameworks (impact, effort, risk, data readiness, compliance) and size ROI for use cases.
    * Partner with Finance/PMO to track realized value vs. forecast post-launch.

    Rapid Prototyping & MVP Development
    * Design POCs using LLMs, RAG, prompt engineering, and classical ML; evolve them into MVPs with clear success criteria and guardrails.
    * Conduct feature engineering and algorithm selection, and set monitoring plans for drift and bias.
    * Package prototypes with evaluation harnesses; enforce clear "continue/pivot/stop" decision gates.

    Enterprise Prediction & Model Development
    * Build and maintain forecasting and propensity models (churn, next best action, AUM growth, advisor capacity, client lifecycle scoring) serving multiple departments.
    * Standardize feature stores across Salesforce and planning/portfolio platforms (Tamarac, Orion, Addepar, Black Diamond).
    * Define SLAs/SLOs, feature refresh cadence, and rollback criteria.

    Decision Support & Operationalization
    * Convert model outputs into actionable artifacts, such as decision playbooks, scenario calculators, dashboards, and alerting rules.
    * Define workflows, leading indicators, and counter-metrics that drive business actions.
    * Create Tableau/Power BI visualizations with clear narratives for non-technical stakeholders.

    Data Science Culture & Governance
    * Champion MLOps hygiene: versioning, experiment tracking, model cards, and documentation.
    * Host office hours and workshops to build data literacy and ethical AI awareness.
    * Embed responsible-AI principles, RBAC, and human-in-the-loop controls; coordinate with Legal/Compliance on risk and auditability.
    * Rigorously document workflows and processes to establish the AI and Data Office for long-term reliability and resilience.

    Model Portfolio Management
    * Maintain a transparent inventory of models and experiments; deprecate low-value assets.
    * Establish prioritization scoring frameworks and publish quarterly roadmap updates.
    * Track portfolio health: use-case progression, realized value, time-to-decision improvements.

    Cross-Functional Partnership
    * Coordinate with data engineering on operationalization and with analytics on KPI alignment.
    * Collaborate with PMO on experiment time-boxing and capacity management.
    * Work with vendors on architecture, integration, and build-vs-buy decisions.

    Required Qualifications
    * 7-12 years in data science, ML, analytics, or product discovery
    * Python expertise: pandas, scikit-learn, NumPy; Jupyter notebooks and Git
    * Statistical & ML fundamentals: feature engineering, validation, error analysis
    * LLM & AI proficiency: POC development with prompt engineering, RAG, and classical ML
    * Decision-support framing: translate models into actionable workflows; define KPIs and measure business impact
    * Data governance: data dictionary, lineage, RBAC, privacy principles
    * Communication: strong written/verbal skills with technical and non-technical audiences
    * Iterative delivery: comfort with short feedback cycles, A/B testing, and learning from experiments
    $100k-142k yearly est. 1d ago
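The drift-monitoring plans mentioned in the posting above are often built on a population stability index (PSI) comparing a model's score distribution at training time against production. A generic pure-Python sketch, not Sequoia's tooling; the score samples are synthetic:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of scores in [0, 1).
    Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    edges = [i / bins for i in range(bins + 1)]
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        # Share of each sample in this score bucket, floored to avoid log(0).
        e = max(sum(lo <= s < hi for s in expected) / len(expected), 1e-4)
        a = max(sum(lo <= s < hi for s in actual) / len(actual), 1e-4)
        total += (a - e) * math.log(a / e)
    return total

train_scores = [i / 100 for i in range(100)]               # uniform scores at training time
live_scores = [min(0.99, s + 0.3) for s in train_scores]   # production scores shifted upward
drift = psi(train_scores, live_scores)
```

A monitoring job would compute PSI per feature and per score on a schedule and alert (or trigger retraining) when it crosses the chosen threshold.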
  • Senior Data Scientist / AI Engineer

    Insight Global

    Senior data scientist job in Beachwood, OH

    Seeking a Senior Data Scientist with hands-on experience in GenAI agent development and modern AI engineering. The focus is on building production-grade agents, MCP integrations, and enterprise knowledge systems using a cloud and data stack.

    We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************

    Skills and Requirements

    Responsibilities:
    * Build and refine GenAI agents that interact with internal systems
    * Develop MCP tools and secure integrations
    * Create and maintain knowledge bases and retrieval pipelines
    * Implement agent evaluation frameworks and guardrails
    * Work across Azure, Snowflake, and Databricks to support production AI systems

    Key Skills & Requirements:
    * Needs to be a team player and work well with others
    * Hands-on experience building and deploying GenAI agents (including open-source LLMs) in production environments
    * Strong knowledge of MCP, tool integrations, and agent orchestration
    * Proven ability to design and maintain knowledge bases, vector search, and retrieval systems independently
    * Rapid GenAI prototyping ("vibe coding")
    * Cloud expertise: Azure (preferred), AWS, or GCP; secure, governed infrastructure
    * Expert-level SQL skills; experience with Snowflake (flexible)
    * Experience with Databricks for data, ML workflows, and predictive modeling
    * Solid background in traditional ML (classification, clustering, etc.)
    * Ability to build evals, guardrails, and safety layers for agents at scale
    * Experience with MLOps and LLM observability
    * 5+ years of relevant experience; independent and collaborative team player
    * Comfortable with LLMs and working in a team environment
    * Azure, Databricks, and Snowflake expertise
    $79k-109k yearly est. 6d ago
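An agent that "interacts with internal systems" via tools typically loops: the model proposes a tool call, a dispatcher validates and executes it, and the result is fed back into the conversation. A minimal, model-free sketch of the dispatch-and-guardrail layer; the tool name, allowlist, and call shape are invented for illustration and are not the MCP wire format:

```python
TOOLS = {
    # Stub standing in for a real internal-system call.
    "get_balance": lambda account: {"account": account, "balance": 1250.00},
}
ALLOWED = {"get_balance"}  # guardrail: only vetted tools may execute

def dispatch(call):
    """Validate and execute one tool call proposed by an agent.
    `call` is {"tool": name, "args": {...}} -- an assumed shape for this sketch."""
    name = call.get("tool")
    if name not in ALLOWED:
        # Refuse anything outside the allowlist rather than trusting model output.
        return {"error": f"tool '{name}' is not permitted"}
    try:
        return TOOLS[name](**call.get("args", {}))
    except TypeError as exc:
        return {"error": f"bad arguments: {exc}"}

result = dispatch({"tool": "get_balance", "args": {"account": "A-17"}})
blocked = dispatch({"tool": "delete_records", "args": {}})
```

The same validate-before-execute pattern underlies the "evals, guardrails, and safety layers" requirement: every model-proposed action passes through deterministic checks before it touches a production system.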
  • Senior Data Scientist

    Stratacuity

    Senior data scientist job in Cleveland, OH

    The Senior Data Scientist plays a key role in leveraging advanced analytics and machine learning to solve complex business challenges. This position works closely with business partners to deliver actionable insights and drive strategic initiatives through data-driven solutions.

    Key Responsibilities & Required Skills:
    * Analyze large, diverse datasets to identify trends, patterns, and opportunities for business improvement.
    * Develop and implement statistical models, machine learning algorithms, and predictive analytics to address organizational needs.
    * Collaborate with internal teams to define analytical questions, design experiments, and translate business requirements into technical solutions.
    * Present findings and recommendations to stakeholders using clear visualizations and concise communication.
    * Demonstrate proficiency in programming languages such as Python and experience with machine learning frameworks and cloud platforms.
    * Utilize database technologies such as SQL and data visualization tools like Tableau and Power BI to support analysis and reporting.
    * Understand AI and Machine Learning platforms such as Databricks, Snowflake, and Dataiku.

    Qualifications:
    * Bachelor's degree in Data Science, Statistics, Mathematics, Business Analytics, or a related field, or equivalent professional experience.
    * Minimum 7 years of experience in data science, analytics, or artificial intelligence.

    Apex Systems is a world-class IT services company that serves thousands of clients across the globe. When you join Apex, you become part of a team that values innovation, collaboration, and continuous learning. We offer quality career resources, training, certifications, development opportunities, and a comprehensive benefits package. Our commitment to excellence is reflected in many awards, including ClearlyRated's Best of Staffing in Talent Satisfaction in the United States and Great Place to Work in the United Kingdom and Mexico. Apex uses a virtual recruiter as part of the application process.

    Apex Benefits Overview: Apex offers a range of supplemental benefits, including medical, dental, vision, life, disability, and other insurance plans that offer an optional layer of financial protection. We offer an ESPP (employee stock purchase program) and a 401K program which allows you to contribute typically within 30 days of starting, with a company match after 12 months of tenure. Apex also offers an HSA (Health Savings Account on the HDHP plan), a SupportLinc Employee Assistance Program (EAP) with up to 8 free counseling sessions, a corporate discount savings program, and other discounts. In terms of professional development, Apex hosts an on-demand training program, provides access to certification prep and a library of technical and leadership courses/books/seminars once you have 6+ months of tenure, and offers certification discounts and other perks through associations that include CompTIA and IIBA. Apex has a dedicated customer service team for our Consultants that can address questions around benefits and other resources, as well as a certified Career Coach. You can access a full list of our benefits, programs, support teams, and resources within our 'Welcome Packet' as well, which an Apex team member can provide.

    Employee Type: Full-Time
    Location: Cleveland, OH, US
    Date Posted: January 8, 2026
    Pay Range: $105,883 - $136,669
    $105.9k-136.7k yearly 3d ago
  • Data Scientist

    6090-Johnson & Johnson Services Legal Entity

    Senior data scientist job in Brunswick, OH

    At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************

    Job Function: Data Analytics & Computational Sciences
    Job Sub Function: Data Science
    Job Category: Scientific/Technology
    All Job Posting Locations: New Brunswick, New Jersey, United States of America

    Job Description: About Innovative Medicine. Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow. Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way. Learn more at *******************/innovative-medicine

    We are searching for the best talent for Data Scientist.

    Purpose: As a Data Scientist on the Global Finance Data Science Team, you will contribute to a high standard of delivery of data-science-driven predictive financial statements, consumed by senior leadership. You will work in a global team of Data Scientists, Data Engineers, and Machine Learning Engineers to advance the Data Science and AI roadmap for J&J's Global Finance function. You will assist in delivering value-added insights and analytics to our finance and business leaders, reduce manual work through automated reasoning, and enhance user experience.

    You will mainly focus on advancing and broadening the capabilities of our Predictive Financial Statements, plugging results into SAP Analytics Cloud reporting, where most stakeholders go for corporate internal financial statements. Your job will also include aligning finance and business needs, validation of data from different source systems, and data reconciliation. You will mainly be required to create new models and leverage or extend the use of existing models for other financial statements. You will also enhance existing models for accuracy, speed, and cost. You will be involved in data science projects across their lifecycle, from design to production and adoption by end users. This will include creating proofs of concept for new projects, data science model development, data pipeline development, and production deployment. The capabilities developed will include forecasting, descriptive analytics, data visualization, GenAI, and decision support. This role will involve understanding the needs of business stakeholders and advocating the merits of data-driven analytics to provide viable solutions.

    You will be responsible for:
    * Adopting highly successful forecasting processes and technologies and delivering predictive financial statements monthly.
    * Modeling the impact of future events to enhance forecast accuracy.
    * Developing data pipelines for large datasets sourced from financial systems and automating data science processes.
    * Documenting and aligning model changes within the team and with stakeholders.
    * Communicating insights to stakeholders leveraging data visualization tools.
    * Monitoring model performance and continuously improving existing models.
    * Collaborating with finance, commercial leaders, technology teams, and external partners to deliver end-to-end solutions, ensuring compliance and risk management.
    * Advocating for data-driven insights and data science methods across the organization and managing compliance adherence.

    Qualifications / Requirements:
    * Minimum of 2 years of Data Science/AI experience in an industry setting is required, preferably in a Finance or Healthcare setting.
    * Minimum of a Bachelor's degree is required, preferably in Science, Economics, Business Analytics, Data Science, Finance, Computer Science, Engineering, or any other quantitative or STEM discipline.

    Technical Requirements:
    * Proficient in Python/R/Alteryx programming and experienced with Data Science cloud platforms like AWS and Databricks or Domino.
    * Experience using finance data and SAP HANA data tables is a plus.
    * Proficient in interpreting data sources and correlating them to financial metrics.
    * Able to work independently and under time pressure to deliver results, investigating and solving data issues in an explainable way.
    * Skilled in data visualization and dashboarding using Tableau or PowerBI.
    * Knowledgeable in statistical techniques and concepts, such as regression, properties of distributions, and statistical tests.
    * Strong data analytics skills and attention to detail.

    Other: The position will be located in New Brunswick, NJ and may require up to 10% travel.

    Johnson & Johnson is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state or local law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act. Johnson & Johnson is committed to providing an interview process that is inclusive of our applicants' needs. If you are an individual with a disability and would like to request an accommodation, external applicants please contact us via *******************/contact-us/careers ; internal employees contact AskGS to be directed to your accommodation resource.

    #LI-Hybrid #JNJDataScience

    Required Skills: Artificial Intelligence (AI), Python (Programming Language)
    Preferred Skills: Advanced Analytics, Analytical Reasoning, Business Intelligence (BI), Business Writing, Coaching, Collaborating, Communication, Data Analysis, Data Compilation, Data Privacy Standards, Data Savvy, Data Science, Data Visualization, Econometric Models, Execution Focus, Technical Credibility, Technologically Savvy, Workflow Analysis

    The anticipated base pay range for this position is $89,000 to $143,750 USD.

    Additional Description for Pay Transparency: Subject to the terms of their respective plans, employees and/or eligible dependents are eligible to participate in the following Company-sponsored employee benefit programs: medical, dental, vision, life insurance, short- and long-term disability, business accident insurance, and group legal insurance. Subject to the terms of their respective plans, employees are eligible to participate in the Company's consolidated retirement plan (pension) and savings plan (401(k)). This position is eligible to participate in the Company's long-term incentive program. Subject to the terms of their respective policies and date of hire, employees are eligible for the following time-off benefits:
    * Vacation - 120 hours per calendar year
    * Sick time - 40 hours per calendar year; for employees who reside in the State of Washington - 56 hours per calendar year
    * Holiday pay, including Floating Holidays - 13 days per calendar year
    * Work, Personal and Family Time - up to 40 hours per calendar year
    * Parental Leave - 480 hours within one year of the birth/adoption/foster care of a child
    * Condolence Leave - 30 days for an immediate family member; 5 days for an extended family member
    * Caregiver Leave - 10 days
    * Volunteer Leave - 4 days
    * Military Spouse Time-Off - 80 hours
    Additional information can be found through the link below. *********************************************
    $89k-143.8k yearly Auto-Apply 4d ago
  • Senior Data Scientist - Classical / Gen AI

    Capgemini Holding Inc. 4.5 company rating

    Senior data scientist job in Cleveland, OH

    Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world. YOUR TEAM Insights & Data delivers state-of-the-art Data solutions. Our expertise primarily lies in Cloud & Big Data engineering, where we develop robust systems capable of processing extensive and complex datasets, utilizing specialized Cloud Data services across platforms like AWS, Azure, and GCP. We oversee the entire Software Development Life Cycle (SDLC) of these solutions, which involves not only leveraging ETL and other data processing tools but also extensive programming in languages like Python, Scala, or Java, coupled with the adoption of DevOps tools and best practices. The processed data is then made accessible to downstream systems through APIs, outbound interfaces, or is visualized via comprehensive reports and dashboards. Additionally, within our AI Center of Excellence, we undertake Data Science and Machine Learning projects with a focus on cutting-edge areas such as Generative AI, Natural Language Processing (NLP), Anomaly Detection, and Computer Vision. YOUR TASKS * designing, developing, and deploying state-of-the-art data science solutions in areas such as GenAI, Computer Vision, RCA, EDA, and NLP * cooperating with client stakeholders * leading data science projects from conception to deployment, including data collection, preprocessing, model development, and performance evaluation * collaborating with cross-functional teams (including engineers, product managers, and other data scientists) to integrate AI models into our products and services. 
YOUR PROFILE * master's or PhD in Computer Science, Mathematics, Statistics, or a related field * at least 3 years of experience in a data science role * expertise in machine learning and deep learning, particularly with generative models * strong programming skills in Python, including experience with ML frameworks like TensorFlow or PyTorch * good understanding of data structures, data modeling, and software architecture * demonstrated ability to lead projects and work collaboratively in a team environment * good problem-solving skills and the ability to work on complex issues requiring innovative solutions * good communication skills, the ability to explain complex technical concepts to non-technical stakeholders. NICE TO HAVE * publications in journals or conferences in the AI/ML field * experience with cloud platforms (AWS, Azure, GCP) and MLOps tools * contributions to open-source projects related to AI and machine learning. WHAT YOU'LL LOVE ABOUT WORKING HERE * Practical benefits: permanent employment contract from the first day; hybrid, flexible working model; equipment package for home office; private medical care with Medicover; life insurance; Capgemini Helpline; NAIS benefit platform. * Access to 70+ training tracks with certification opportunities; platform with free access to Pluralsight, TED Talks, Coursera, Udemy Business and SAP Learning HUB * Community Hub that will allow you to choose from over 20 professional communities that gather people interested in, among others: Salesforce, Java, Cloud, IoT, Agile, AI. GET TO KNOW US Capgemini is committed to diversity and inclusion, ensuring fairness in all employment practices. We evaluate individuals based on qualifications and performance, not personal characteristics, striving to create a workplace where everyone can succeed and feel valued. Do you want to get to know us better? Check our Instagram - @capgeminipl or visit our Facebook profile - Capgemini Polska. You can also find us on TikTok! 
- @capgeminipl. ABOUT CAPGEMINI Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of nearly 350,000 team members in more than 50 countries. With its strong 55-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. Apply now!
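As one concrete illustration of the Anomaly Detection work the AI Center of Excellence lists above, here is a minimal z-score flagger in Python. This is an editor's sketch only, not Capgemini's actual tooling; the function name and threshold are invented for illustration.

```python
import statistics

# Illustrative sketch (not Capgemini's code): flag points that lie more
# than `threshold` standard deviations from the mean of the series.
def zscore_anomalies(values, threshold=3.0):
    """Return indices of values flagged as anomalous under a z-score rule."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a constant series has no outliers under this rule
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]
```

In practice a production detector would use robust statistics or learned models, but the z-score rule is the usual baseline against which those are compared.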
    $72k-94k yearly est. 51d ago
  • Data Scientist

    Johnson & Johnson 4.7 company rating

    Senior data scientist job in Brunswick, OH

    At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at ******************* Job Function: Data Analytics & Computational Sciences Job Sub Function: Data Science Job Category: Scientific/Technology All Job Posting Locations: New Brunswick, New Jersey, United States of America Job Description: About Innovative Medicine Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow. Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way. Learn more at *******************/innovative-medicine We are searching for the best talent for Data Scientist Purpose: As a Data Scientist on the Global Finance Data Science Team, you will contribute to the high-standard delivery of data-science-driven predictive financial statements consumed by senior leadership. You will work in a global team of Data Scientists, Data Engineers and Machine Learning Engineers to advance the Data Science and AI roadmap for J&J's Global Finance function. You will assist in delivering value-added insights and analytics to our finance and business leaders, reduce manual work through automated reasoning, and enhance the user experience. 
In this role, you will mainly focus on advancing and broadening the capabilities of our Predictive Financial Statements, plugging results into SAP Analytics Cloud reporting - where most stakeholders go for corporate internal financial statements. Your job will also include aligning finance and business needs, validating data from different source systems, and reconciling data. You will mainly be required to create new models and leverage or extend the use of existing models for other financial statements. You will also enhance existing models for accuracy, speed, and cost. You will be involved in data science projects across their lifecycle, from design to production and adoption by end users. This will include creating proofs of concept for new projects, data science model development, data pipeline development, and production deployment. The capabilities developed will include forecasting, descriptive analytics, data visualization, GenAI, and decision support. This role involves understanding the needs of business stakeholders and advocating the merits of data-driven analytics to provide viable solutions. You will be responsible for: Adopting highly successful forecasting processes and technologies and delivering predictive financial statements monthly. Modeling the impact of future events to enhance forecast accuracy. Developing data pipelines for large datasets sourced from financial systems and automating data science processes. Documenting and aligning model changes within the team and with stakeholders. Communicating insights to stakeholders leveraging data visualization tools. Monitoring model performance and continuously improving existing models. Collaborating with finance, commercial leaders, technology teams, and external partners to deliver end-to-end solutions, ensuring compliance and risk management. Advocating for data-driven insights and data science methods across the organization and managing compliance adherence. 
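The monthly forecasting and accuracy-monitoring responsibilities described above can be sketched with a simple baseline. This is an illustrative example only, not J&J's actual models or pipeline; the seasonal-naive rule and MAPE metric are common starting points for monthly financial series.

```python
# Illustrative sketch: a seasonal-naive baseline for a monthly series,
# plus the MAPE metric often used to monitor forecast accuracy.
def seasonal_naive_forecast(history, horizon, season=12):
    """Forecast each future month with the value from one season earlier."""
    if len(history) < season:
        raise ValueError("need at least one full season of history")
    return [history[-season + (h % season)] for h in range(horizon)]

def mape(actual, predicted):
    """Mean absolute percentage error between actuals and forecasts."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)
```

A real predictive-financial-statements workflow would replace the naive rule with fitted models, but would still backtest new models against a baseline like this before promoting them.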
Qualifications / Requirements: Minimum of 2 years of Data Science/AI experience in an industry setting is required, preferably in a Finance or Healthcare setting. Minimum of a Bachelor's degree is required, preferably in Science, Economics, Business Analytics, Data Science, Finance, Computer Science, Engineering or any other quantitative or STEM discipline. Technical Requirements: Proficient in Python/R/Alteryx programming and experienced with Data Science Cloud platforms like AWS and Databricks or Domino. Experience using finance data and SAP HANA data tables is a plus. Proficient in interpreting data sources and correlating them to financial metrics. Able to work independently and under time pressure to deliver results, investigating and solving data issues in an explainable way. Skilled in data visualization and dashboarding using Tableau or Power BI. Knowledgeable in statistical techniques and concepts, such as regression, properties of distributions, and statistical tests. Strong data analytics skills and attention to detail. Other: The position will be located in New Brunswick, NJ and may require up to 10% travel. Johnson & Johnson is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state or local law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act. Johnson & Johnson is committed to providing an interview process that is inclusive of our applicants' needs. If you are an individual with a disability and would like to request an accommodation, external applicants please contact us via *******************/contact-us/careers , internal employees contact AskGS to be directed to your accommodation resource. 
#LI-Hybrid #JNJDataScience Required Skills: Artificial Intelligence (AI), Python (Programming Language) Preferred Skills: Advanced Analytics, Analytical Reasoning, Business Intelligence (BI), Business Writing, Coaching, Collaborating, Communication, Data Analysis, Data Compilation, Data Privacy Standards, Data Savvy, Data Science, Data Visualization, Econometric Models, Execution Focus, Technical Credibility, Technologically Savvy, Workflow Analysis The anticipated base pay range for this position is : The anticipated base pay range for this position is $89,000 to $143,750 USD. Additional Description for Pay Transparency: Subject to the terms of their respective plans, employees and/or eligible dependents are eligible to participate in the following Company sponsored employee benefit programs: medical, dental, vision, life insurance, short- and long-term disability, business accident insurance, and group legal insurance. Subject to the terms of their respective plans, employees are eligible to participate in the Company's consolidated retirement plan (pension) and savings plan (401(k)). This position is eligible to participate in the Company's long-term incentive program. Subject to the terms of their respective policies and date of hire, Employees are eligible for the following time off benefits: Vacation -120 hours per calendar year Sick time - 40 hours per calendar year; for employees who reside in the State of Washington -56 hours per calendar year Holiday pay, including Floating Holidays -13 days per calendar year Work, Personal and Family Time - up to 40 hours per calendar year Parental Leave - 480 hours within one year of the birth/adoption/foster care of a child Condolence Leave - 30 days for an immediate family member: 5 days for an extended family member Caregiver Leave - 10 days Volunteer Leave - 4 days Military Spouse Time-Off - 80 hours Additional information can be found through the link below. *********************************************
    $89k-143.8k yearly Auto-Apply 5d ago
  • Data Technology Lead

    Westfield Group, Insurance

    Senior data scientist job in Westfield Center, OH

    The role is part of the Data, Analytics and Reporting team. This role leads a team of data engineers and data testers to deliver secure, scalable, and high-quality data solutions that support analytics, reporting, and business operations. The Data Technology Lead collaborates with stakeholders to understand technology and data requirements, implements best practices for data governance and testing, and drives innovation in data engineering. The position involves managing modern cloud-based platforms, while fostering a culture of continuous improvement and technical excellence within the IT team. Job Responsibilities * Data Architecture & Engineering: Design, develop, and maintain robust data pipelines and architectures for structured and unstructured data; optimize workflows across Azure Data Lake, Snowflake, and other environments; implement best practices for data modeling and transformation using dbt. * Team Leadership: Lead and mentor data engineers and testers; manage workload distribution; foster collaboration and innovation. * Testing & Quality Assurance: Establish data testing frameworks; ensure data accuracy and reliability; integrate testing into broader QA processes. * Collaboration & Stakeholder Engagement: Partner with analytics, BI, and business teams to deliver data solutions; provide technical guidance. * Vendor & Tool Management: Evaluate and select tools and vendors; negotiate contracts and manage relationships. * Business Continuity: Develop and maintain disaster recovery and business continuity plans for data systems. Job Qualifications * 7+ years of experience in data engineering, with at least 2 years in a data leadership role. * Insurance industry experience required * Bachelor's degree in Computer Science, Information Technology, or a related field and/or commensurate experience. * Master's degree in related field is preferred. * Proficiency in SQL. * Optional skills in Python and experience with modern data frameworks (e.g., Spark). 
* Expertise in Snowflake, Azure Data Lake, dbt, and modern data platforms. * Strong experience in data integration, data warehousing, and data lake architectures. * Experience with Azure DevOps, CI/CD pipelines, and Git for code management * Awareness of Generative AI (GenAI) capabilities to accelerate development and testing processes. * Familiarity with data testing methodologies and tools. * Excellent leadership, communication, and problem-solving skills. Behavioral Competencies * Directs work * Collaborates * Develops talent * Customer focus * Communicates effectively * Ensures accountability * Decision quality * Business insight * Nimble learning * Builds effective teams * Manages complexity Technical Skills * Technical Support * Operating Systems * Workflow Management * Budgeting * Disaster Recovery * Process Improvement * Project Management * IT Strategy & Framework * IT Regulatory Compliance * Stakeholder Management This job description describes the general nature and level of work performed in this role. It is not intended to be an exhaustive list of all duties, skills, responsibilities, knowledge, etc. These may be subject to change and additional functions may be assigned as needed by management.
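The "Testing & Quality Assurance" responsibility above (establishing data testing frameworks and ensuring data accuracy) can be illustrated with a small batch-level check. This is a hypothetical sketch, not Westfield's framework; column names and the failure-message format are invented.

```python
# Hypothetical sketch of a row-level data quality check of the kind a
# pipeline QA framework runs before loading a batch into the warehouse.
def check_rows(rows, required, non_null=()):
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    for i, row in enumerate(rows):
        for col in required:
            if col not in row:
                failures.append(f"row {i}: missing column {col!r}")
        for col in non_null:
            if row.get(col) in (None, ""):
                failures.append(f"row {i}: null value in {col!r}")
    return failures
```

In a dbt-based stack the same intent is typically expressed declaratively (e.g., `not_null` and schema tests), with a gate like this guarding ad-hoc ingestion paths.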
    $92k-127k yearly est. 49d ago
  • Data Technology Lead

    Westfield High School 3.3 company rating

    Senior data scientist job in Westfield Center, OH

    The role is part of the Data, Analytics and Reporting team. This role leads a team of data engineers and data testers to deliver secure, scalable, and high-quality data solutions that support analytics, reporting, and business operations. The Data Technology Lead collaborates with stakeholders to understand technology and data requirements, implements best practices for data governance and testing, and drives innovation in data engineering. The position involves managing modern cloud-based platforms, while fostering a culture of continuous improvement and technical excellence within the IT team. Job Responsibilities * Data Architecture & Engineering: Design, develop, and maintain robust data pipelines and architectures for structured and unstructured data; optimize workflows across Azure Data Lake, Snowflake, and other environments; implement best practices for data modeling and transformation using dbt. * Team Leadership: Lead and mentor data engineers and testers; manage workload distribution; foster collaboration and innovation. * Testing & Quality Assurance: Establish data testing frameworks; ensure data accuracy and reliability; integrate testing into broader QA processes. * Collaboration & Stakeholder Engagement: Partner with analytics, BI, and business teams to deliver data solutions; provide technical guidance. * Vendor & Tool Management: Evaluate and select tools and vendors; negotiate contracts and manage relationships. * Business Continuity: Develop and maintain disaster recovery and business continuity plans for data systems. Job Qualifications * 7+ years of experience in data engineering, with at least 2 years in a data leadership role. * Insurance industry experience required. * Bachelor's degree in Computer Science, Information Technology, or a related field and/or commensurate experience. * Master's degree in related field is preferred. * Proficiency in SQL. * Optional skills in Python and experience with modern data frameworks (e.g., Spark). 
* Expertise in Snowflake, Azure Data Lake, dbt, and modern data platforms. * Strong experience in data integration, data warehousing, and data lake architectures. * Experience with Azure DevOps, CI/CD pipelines, and Git for code management. * Awareness of Generative AI (GenAI) capabilities to accelerate development and testing processes. * Familiarity with data testing methodologies and tools. * Excellent leadership, communication, and problem-solving skills. Behavioral Competencies * Directs work * Collaborates * Develops talent * Customer focus * Communicates effectively * Ensures accountability * Decision quality * Business insight * Nimble learning * Builds effective teams * Manages complexity Technical Skills * Technical Support * Operating Systems * Workflow Management * Budgeting * Disaster Recovery * Process Improvement * Project Management * IT Strategy & Framework * IT Regulatory Compliance * Stakeholder Management This job description describes the general nature and level of work performed in this role. It is not intended to be an exhaustive list of all duties, skills, responsibilities, knowledge, etc. These may be subject to change and additional functions may be assigned as needed by management.
    $76k-95k yearly est. Auto-Apply 50d ago
  • Biostatistician

    Case Western Reserve University 4.0 company rating

    Senior data scientist job in Cleveland, OH

    Salary Grade Case Western Reserve University is committed to providing a transparent estimate of the salary range available for this position at the time of its posting. The salary range is between $52,705 and $66,672, depending on qualifications, experience, department budgets, and industry data. Employees receive more than just a paycheck. University employees enjoy a comprehensive benefits package that includes excellent healthcare, retirement plans, tuition assistance, paid time off, and a winter recess. Job Description POSITION OBJECTIVE The objective of the position is to provide metagenomic sequence processing, statistical design and analysis, data management, high-quality visualization, and software and website development support to data-enhanced and omics-driven biomedical research. The analyst will play a leadership role in all aspects of data analyses, data management and report generation for these research projects. The analyst will be integrally involved in all aspects of research project design, computational implementation, and preparation of research reports and grant applications. This position functions under minimal supervision. ESSENTIAL FUNCTIONS * Metagenomic sequence processing Execute metagenomic data processing workflows using established bioinformatics pipelines. This includes running standard tools for quality control, taxonomic and functional profiling, and downstream analysis following documented procedures. The individual will routinely submit and manage batch jobs on high-performance computing (HPC) systems, monitor job status, troubleshoot errors, and ensure efficient pipeline execution. The role also involves organizing output files, maintaining reproducible workflows, and communicating results to the research team. (15%) * Data Analysis Perform statistical programming and analysis for multiple research studies. Create analytic datasets, summarize data, and create reports. 
Work with other team members to develop and implement analytic approaches including but not limited to dimension reduction, multiple linear regression, testing procedures, clustering methods, and so on. Assess appropriateness of final statistical models. Document methods and results via written memoranda and reports. (20%) * Data Management Design, develop, and maintain appropriate databases, such as microbiome data across body sites. Develop programs to ensure data integrity (e.g., logic and range checks). Perform intermediate statistical programming for quality assurance. Develop efficient data archival systems for use and maintain detailed written documentation of all data programming and data archival systems. (10%) * High-quality visualization Create a wide range of graphical outputs, such as taxonomic summaries, heatmaps, ordination plots, pathway diagrams, and interactive visual dashboards, using tools like R (ggplot2), Python (matplotlib, seaborn), or web-based platforms. The individual will translate complex analytical results into intuitive visual formats for manuscripts, presentations, lab meetings, and collaborator reports. Ensuring visual consistency, scientific accuracy, and adherence to data-visualization best practices is essential. (15%) * Software and website development - Write clean, efficient, and well-documented code in languages such as R and Python; implement modular functions and reproducible analytical workflows; and manage version control using Git. The individual will translate research needs into functional software packages, optimize computational performance, and ensure reliability across computing environments. In addition, the role involves creating and maintaining lightweight websites or interactive dashboards, using HTML/CSS, JavaScript, R Shiny, or similar frameworks to present analytical results, support data visualization, and disseminate methods and documentation. 
Ensuring usability, responsiveness, and clarity in both software tools and web interfaces is essential to support the laboratory's research and collaborative projects. (20%) * Prepare Research Reports - Assist in the writing and preparation of research papers, including internal documentation, academic publication and research presentations at conferences and meetings. (10%) NONESSENTIAL FUNCTIONS * Study Design Work with the Principal Investigator and with other investigators on the projects to develop appropriate methodologies for addressing study hypotheses. Participate in the research design phase of projects and in the preparation of research proposals for funding. This work will be comprehensive and will include developing approaches for data management, assuring the integrity of the data, and methods for implementing data analysis. (5%) * Development of Educational Opportunities Work with other personnel to develop workshops and written materials to assist graduate students to learn about methodological and programming issues involved in working with population-based data, with emphasis on data management and reporting. (5%) * Perform other duties as assigned. CONTACTS Department: Frequent interactions (50%) with the principal investigator to provide status of ongoing projects and summary of completed tasks. Occasional interactions (10%) with members of the department to help with data analysis support. Contact with administrative staff as needed. University: Regular contact (40%) with collaborators in other university departments and centers. Occasional contact (10%) with the CWRU [U]Tech for computing support. Occasional contact with Research Administration, IRB, and Sponsored Projects Accounting. External: Regular contact (40%) with researchers outside of CWRU to discuss and review experimental results. Student: Occasional interactions (10%) with undergraduate/graduate students working in the lab or in the lab of collaborators. 
SUPERVISORY RESPONSIBILITIES NA QUALIFICATIONS Experience: 3 years of work experience in an increasingly responsible position in a biomedical research or social science research environment housed in a university, nonprofit research organization or similar setting. Education/licensing: A Master's degree in computer science, computational biology, biotechnology, biology, bioinformatics, biostatistics, statistics, or related degree is required. REQUIRED SKILLS * Close familiarity and ability to work with genomics, radiomics and other biomedical data * Ability to work semi-independently with direct but limited guidance * Highly organized, with decision-making skills * Ability to work and communicate knowledgeably in collaboration with other team members including statisticians, investigators, clinicians, and programmers * Strong oral and written communication skills, adaptability, and the capacity to work with team members from varied cultural and professional backgrounds * Desirable skills include background/understanding of medical terminology and clinical/translational research * Hands-on experience in Lotus Notes, Web databases or related collaborative task management apps * Ability to use Microsoft Office products such as Word, PowerPoint, Excel and Access * Ability to use Overleaf, GitHub, BioRender, Adobe Illustrator * Ability to use software packages and websites, such as R, Python, R Quarto, GitHub, Bioconductor, and R CRAN * Demonstrated history of successful support, education, and advocacy for all students, aligned with the values, mission, and messaging of the university, while adhering to the staff policy on conflict of commitment and interest * Ability to maintain consistent attendance * Ability to interact with colleagues, supervisors, and customers face to face * Willingness to learn and utilize Artificial Intelligence (AI). 
WORKING CONDITIONS Employee will be expected to obtain and maintain certification of training in human subjects research through an online course and exam and other seminars. The position may require occasional work with investigators at the Cleveland Clinic and/or MetroHealth Medical Center, and might require occasional travel to either of these locations. No travel outside the Cleveland area is anticipated. EEO Statement Case Western Reserve University is an equal opportunity employer. All applicants are protected under federal and state laws and university policy from discrimination based on race, color, religion, sex, sexual orientation, gender identity or expression, national or ethnic origin, protected veteran status, disability, age and genetic information. Reasonable Accommodations Case Western Reserve University complies with the Americans with Disabilities Act regarding reasonable accommodations for applicants with disabilities. Applicants requiring a reasonable accommodation for any part of the application and hiring process should contact the CWRU Office of Equity at ************ to request a reasonable accommodation. Determinations as to granting reasonable accommodations for any applicants will be made on a case-by-case basis.
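The multiple linear regression work listed under Data Analysis above has a compact one-predictor core. This is a generic illustration of that technique, not the lab's pipeline; real analyses would use R or statsmodels rather than hand-rolled least squares.

```python
# Illustrative sketch: ordinary least squares for y ~ a + b*x, the
# simplest case of the regression analyses named in the posting.
def fit_simple_ols(xs, ys):
    """Return (intercept, slope) minimizing squared error."""
    n = len(xs)
    mx = sum(xs) / n                                   # mean of predictor
    my = sum(ys) / n                                   # mean of response
    sxx = sum((x - mx) ** 2 for x in xs)               # predictor variance (scaled)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance (scaled)
    slope = sxy / sxx
    return my - slope * mx, slope
```

Dimension reduction and clustering generalize the same idea: summarize high-dimensional omics data with a small number of fitted parameters before testing hypotheses.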
    $52.7k-66.7k yearly 5d ago
  • Data Engineer

    GD Information Technology

    Senior data scientist job in Fairview Park, OH

    Type of Requisition: Regular Clearance Level Must Currently Possess: None Clearance Level Must Be Able to Obtain: None Public Trust/Other Required: None Job Family: Data Science and Data Engineering Job Qualifications: Skills: Business Functions, Data Science, Data Solutions Certifications: None Experience: 5 + years of related experience US Citizenship Required: No Job Description: Own your opportunity to turn data into measurable outcomes for our customers' most complex challenges. As a Data Engineer at GDIT, you'll power innovation to drive mission impact and grow your expertise to power your career forward. We are a fast-growing AI and Data team within the Global CIO organization at GDIT. We design and deliver enterprise-grade solutions that integrate AI, data, and human-AI collaboration workflows across key business functions (e.g., Growth, Finance, HR, Legal, and Supply Chain). We are looking for a highly skilled Data Engineer (Agentic AI) who shares a passion for delivering data engineering for agentic AI applications and building an AI-first data ecosystem foundational to enterprise AI transformation. 
MEANINGFUL WORK AND PERSONAL IMPACT: Design, build, and operate scalable end-to-end data pipelines and curated data products that support enterprise analytics and agentic AI use cases Integrate data from enterprise systems and external sources, including structured, semi-structured, and unstructured data Deliver reliable data services for agentic AI workflows, including APIs, retrieval/indexing, and governed context delivery for AI agents Implement data quality, observability, and governance best practices across data pipelines and products Optimize performance and cost across storage, compute, orchestration, and serving layers Collaborate with cross-functional teams, including business stakeholders, AI engineers, and software developers, to translate requirements into production solutions WHAT YOU'LL NEED TO SUCCEED Bring your expertise and drive for innovation to GDIT. The Data Engineer must have: Education: Bachelor's degree in Computer Science/Engineering, Data Science, or a related field Experience: 5+ years of experience delivering production-grade data engineering across databases, data integration, data services, and data governance Role requirements: Proficiency in programming languages (Python or Java) and databases (SQL and NoSQL). Strong collaboration and communication skills in cross-functional enterprise environments. Preferred Skills and Abilities: Master's degree in Computer Science/Engineering, Data Science, or a related field. Relevant certifications in Data Engineering, AI, or Cloud. Experience delivering data engineering for agentic AI applications. Experience with retrieval-based AI data foundations (document processing, metadata, embeddings, vector or hybrid search). Familiarity with agent workflows and how agents interact with data services and tools in production. 
Experience with lakehouse architectures and cloud data platforms such as Azure (preferred), OCI (preferred), or AWS. Experience in real-time streaming applications or other high-velocity solutions. Experience leveraging AI tools to improve data engineering productivity and quality in coding, testing, and documentation. Location: This is a hybrid position working at our GDIT facility in Falls Church, VA. Must be comfortable working (3) days a week onsite. US Persons required. GDIT IS YOUR PLACE At GDIT, the mission is our purpose, and our people are at the center of everything we do. Growth: AI-powered career tool that identifies career steps and learning opportunities Support: An internal mobility team focused on helping you achieve your career goals Rewards: Comprehensive benefits and wellness packages, 401K with company match, and competitive pay and paid time off Flexibility: Full-flex work week to own your priorities at work and at home Community: Award-winning culture of innovation and a military-friendly workplace OWN YOUR OPPORTUNITY Explore a career in data science and engineering at GDIT and you'll find endless opportunities to grow alongside colleagues who share your determination for solving complex data challenges. The likely salary range for this position is $119,000 - $161,000. This is not, however, a guarantee of compensation or salary. Rather, salary will be set based on experience, geographic location and possibly contractual requirements and could fall outside of this range. Scheduled Weekly Hours: 40 Travel Required: 10-25% Telecommuting Options: Hybrid Work Location: USA VA Falls Church Additional Work Locations: Total Rewards at GDIT: Our benefits package for all US-based employees includes a variety of medical plan options, some with Health Savings Accounts, dental plan options, a vision plan, and a 401(k) plan offering the ability to contribute both pre- and post-tax dollars up to the IRS annual limits and receive a company match. 
To encourage work/life balance, GDIT offers employees full flex work weeks where possible and a variety of paid time off plans, including vacation, sick and personal time, holidays, paid parental, military, bereavement and jury duty leave. To ensure our employees are able to protect their income, other offerings such as short and long-term disability benefits, life, accidental death and dismemberment, personal accident, critical illness and business travel and accident insurance are provided or available. We regularly review our Total Rewards package to ensure our offerings are competitive and reflect what our employees have told us they value most.We are GDIT. A global technology and professional services company that delivers consulting, technology and mission services to every major agency across the U.S. government, defense and intelligence community. Our 30,000 experts extract the power of technology to create immediate value and deliver solutions at the edge of innovation. We operate across 50 countries worldwide, offering leading capabilities in digital modernization, AI/ML, Cloud, Cyber and application development. Together with our clients, we strive to create a safer, smarter world by harnessing the power of deep expertise and advanced technology.Join our Talent Community to stay up to date on our career opportunities and events at gdit.com/tc. Equal Opportunity Employer / Individuals with Disabilities / Protected Veterans
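The "retrieval-based AI data foundations" bullet above comes down to ranking stored embedding vectors against a query vector. As a rough, hypothetical sketch (toy vectors and invented document names; a real system would get embeddings from a model and store them in a vector or hybrid index):

```python
import math

# Hypothetical sketch of the vector-search piece of a retrieval foundation.
# Document names and embedding values are invented for illustration.
DOCS = {
    "pipeline-runbook": [0.9, 0.1, 0.0],
    "hr-handbook":      [0.0, 0.2, 0.9],
    "etl-style-guide":  [0.8, 0.3, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def top_k(query_vec, k=2):
    """Rank documents by similarity to the query embedding."""
    scored = sorted(DOCS.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

print(top_k([1.0, 0.0, 0.0]))  # ['pipeline-runbook', 'etl-style-guide']
```

In production this brute-force scan would be replaced by an approximate-nearest-neighbor index, but the ranking idea is the same.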
    $119k-161k yearly Auto-Apply 5d ago
  • BI Data Engineer

    Quadax Careers & Culture

    Senior data scientist job in Middleburg Heights, OH

Job Title: Data Engineer

We are seeking a Data Engineer to join our growing Big Data team. In this role, you will design and implement data models, build semantic layers, create data extracts, and develop robust data pipelines to ingest data into our data warehouse. You will collaborate with BI developers, data analysts, and data scientists to support a variety of data initiatives. The ideal candidate is self-directed, highly organized, and comfortable managing the data needs of multiple teams, systems, and products.

This is a hybrid position: 4 days per week onsite in Middleburg Heights, OH, following an initial 3-month/5-day onsite period. Visa sponsorship not available.

Key Responsibilities:
* Design, build, and maintain data pipelines for optimal extraction, transformation, and loading (ETL) from diverse sources into our Snowflake Data Warehouse.
* Perform data analysis, mapping, and validation across multiple sources and formats to produce consolidated data models.
* Assemble large, complex datasets to meet technical requirements for reporting, extraction, and analytics.
* Define and implement semantic layers on top of data models for reporting and analytical purposes.
* Create custom data extracts to support ad-hoc reporting requests.
* Identify and implement process improvements, including automation of manual tasks, pipeline optimization for scalability and performance, and data model enhancements for query efficiency.
* Support Data Science initiatives by creating and automating pipelines for model training datasets.
* Maintain and optimize the Snowflake environment, including monitoring consumption, configuring warehouses, managing role-based security, applying environment changes, and evaluating new functionality.
* Collaborate with stakeholders such as BI Developers, Product Owners, Data Scientists, and Architects to resolve data-related technical issues and support infrastructure needs.
* Ensure compliance with PHI and HIPAA standards and guidelines.
* Perform other duties as assigned.

Education / Experience:
* 3+ years of experience in data engineering, including building and optimizing ETL processes, data pipelines, and datasets.
* Strong understanding of data engineering principles and best practices.
* Hands-on experience working with structured and unstructured datasets.
* Proven ability to analyze datasets to answer business questions and identify improvement opportunities.
* Preferred experience with Snowflake, Microsoft SQL Server, Microsoft Fabric, and Python.
* Experience with C# and/or Java is a plus.
* Strong project management and organizational skills.
* Demonstrated success working with cross-functional teams in dynamic environments.
* Familiarity with Scrum/Agile development methodologies is a plus.
* Medical billing knowledge is a plus.
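The extract-transform-load responsibilities above follow a common three-stage pattern. As a minimal, hypothetical sketch (SQLite stands in for Snowflake, and the claims feed, table, and column names are invented for illustration):

```python
import csv
import io
import sqlite3

# Hypothetical inline feed; a real pipeline would read from files, APIs,
# or source databases. The claims schema is invented for illustration.
RAW_FEED = io.StringIO(
    "claim_id,billed_amount,status\n"
    "C001,125.50,PAID\n"
    "C002,,DENIED\n"      # missing amount -> cleaned to 0.0
    "C003,310.00,paid\n"  # inconsistent casing -> normalized
)

def extract(feed):
    """Extract: read raw rows from a delimited source."""
    return list(csv.DictReader(feed))

def transform(rows):
    """Transform: enforce types and normalize values."""
    return [
        (r["claim_id"], float(r["billed_amount"] or 0.0),
         r["status"].strip().upper())
        for r in rows
    ]

def load(conn, rows):
    """Load: idempotent upsert into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS claims "
        "(claim_id TEXT PRIMARY KEY, billed_amount REAL, status TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO claims VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(RAW_FEED)))
total = conn.execute(
    "SELECT SUM(billed_amount) FROM claims WHERE status = 'PAID'"
).fetchone()[0]
print(total)  # 435.5
```

Keying the upsert on `claim_id` makes reruns idempotent, which is what lets a pipeline like this be safely retried or backfilled.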
    $78k-106k yearly est. 8d ago
  • GTM Data Engineer

    Partssource 4.4company rating

    Senior data scientist job in Cleveland, OH

PartsSource is the leading technology and software platform for managing mission-critical healthcare equipment. Trusted by over 5,000 US hospitals and 15,000 clinical sites, PartsSource empowers providers and service organizations to maximize clinical availability for patient care and automates the procurement of parts, services and training through a unique digital experience. PartsSource team members are deeply committed to our mission of Ensuring Healthcare is Always On, which is foundational to our success and growth. Our vibrant culture is built upon aligned values, shared ownership, mutual respect, and a passion for collaborating to solve complex customer problems.

About the Job Opportunity
The GTM Data Engineer is responsible for building and maintaining a single, trusted customer and revenue data foundation across Marketing, Sales, and Customer Success. This role partners closely with Revenue Operations to ensure all GTM teams operate from a consistent source of truth for pipeline, revenue, retention, and growth. You will own how GTM data is structured, enriched, validated, and made available, eliminating data ambiguity and enabling confident, data-driven decision making.
What You'll Do

GTM Data Modeling & Governance (Technology - Data Engineering: Data Modeling & Architecture, Data Quality & Governance)
* Design and maintain the canonical customer, account, and revenue data model across GTM systems
* Resolve identity across contacts, accounts, users, assets, services, and subscriptions
* Define authoritative objects and metrics for pipeline, bookings, renewals, expansion, and churn
* Ensure historical accuracy, data lineage, and consistent metric definitions

Data Enrichment, Integration & Pipelines (Technology - Data Engineering: ETL & Data Integration, Data Pipeline Development)
* Build and manage data pipelines across CRM, marketing automation, services, and financial systems
* Identify data gaps and implement enrichment strategies to improve completeness and usability
* Merge datasets into unified customer and account views with clear conflict-resolution rules
* Own schema changes, backfills, reprocessing, and validation as systems evolve

Attribution, Revenue Logic & Reporting Enablement (Sales Revenue Operations: Performance Metrics & Reporting, Sales Analytics)
* Implement approved attribution and revenue logic consistently across channels and time periods
* Validate sourced, influenced, and assisted revenue before executive reporting
* Enable trusted funnel, pipeline, retention, and expansion reporting within systems of record
* Reduce reliance on spreadsheets and manual reconciliation

GTM Architecture, CDP & AI Readiness (Technology - Systems & Applications: Systems Integration, Systems Thinking)
* Support a warehouse-centric or composable CDP approach for GTM data
* Partner with GTM leadership to evolve long-term data architecture
* Prepare high-quality, LLM-ready datasets for AI-enabled GTM workflows
* Ensure access controls, privacy, and compliance requirements are met

What You'll Bring

Your Background
* 5+ years in data engineering, analytics engineering, or GTM data roles
* Strong experience with CRM and GTM data models
* Advanced SQL skills and experience with modern data stacks and ETL tools
* Experience supporting attribution, lifecycle, and revenue reporting
* Familiarity with Customer Data Platforms or warehouse-centric CDP approaches
* Ability to work cross-functionally with Marketing, Sales, Customer Success, Finance, and RevOps

Who We Want to Meet
* Act Like an Owner - Accountability & Execution: You take full ownership of GTM data quality and follow through to reliable outcomes.
* Serve with Purpose - Business Impact: You connect data architecture decisions to revenue visibility and GTM effectiveness.
* Adapt to Thrive - Managing Ambiguity: You remain productive amid evolving systems, definitions, and priorities.
* Collaborate to Win - Influence & Communication: You partner effectively with RevOps and GTM teams to align on shared metrics.
* Challenge the Status Quo - Data-Informed Decision Making: You use evidence and clarity to replace assumptions and debates.

Benefits & Perks
* Competitive compensation package with salary, incentives, company ownership/equity, and comprehensive benefits (401k match, health, college debt reduction, and more!)
* Career and professional development through training, coaching and new experiences.
* Hybrid culture with new & beautiful workspaces that balance flexibility, collaboration, and productivity.
* Inclusive and diverse community of passionate professionals learning and growing together.

Interested? We'd love to hear from you! Submit your resume and an optional cover letter explaining why you'd be a great fit.

About PartsSource
Since 2001, PartsSource has evolved into the leading technology and software platform for managing mission-critical equipment, serving over half of the U.S. hospital infrastructure. Our digital systems modernize and automate the procurement of parts, services, technical support, and training for HTM professionals to efficiently and effectively maintain their mission-critical equipment.
PartsSource employs over 700 people nationwide who are committed to supporting healthcare providers and ensuring healthcare is always on. In 2021, Bain Capital invested in the business, further accelerating our growth and positive impact within the healthcare industry.

Read more about us here:
* PartsSource Named to Newsweek's List of the Top 200 America's Most Loved Workplaces for 2024
* PartsSource Named Among the Top 50 Healthcare Technology Companies of 2025
* PartsSource Named Among the Top 25 Healthcare Software Companies of 2025
* PartsSource President and CEO Philip Settimi Named to Top 50 Healthcare Technology CEO List 2025
* WSJ: Bain Capital Private Equity Scoops Up PartsSource

EEO
PartsSource, Inc., and its affiliates and subsidiaries, provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

Legal authorization to work in the U.S. is required.
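The responsibility described above of merging datasets into unified customer views with clear conflict-resolution rules can be sketched roughly as follows (field names, source systems, and the precedence rule are all invented for illustration; real identity resolution would run in the warehouse):

```python
from datetime import date

# Hypothetical sketch: merge per-system account records into one unified
# view. Source precedence and field names are invented for illustration.
SOURCE_PRECEDENCE = {"crm": 0, "marketing": 1, "billing": 2}  # lower wins ties

def merge_account(records):
    """Resolve one account from several system records.

    Conflict rule (assumed for this sketch): prefer the most recently
    updated non-null value; break ties by source precedence.
    """
    ranked = sorted(
        records,
        key=lambda r: (r["updated"], -SOURCE_PRECEDENCE[r["source"]]),
        reverse=True,
    )
    merged = {}
    for rec in ranked:
        for field, value in rec.items():
            if field in ("source", "updated") or value is None:
                continue
            merged.setdefault(field, value)  # first (highest-ranked) wins
    return merged

records = [
    {"source": "billing", "updated": date(2024, 3, 1),
     "name": "Acme Hosp.", "arr": 120_000, "segment": None},
    {"source": "crm", "updated": date(2024, 3, 1),
     "name": "Acme Hospital", "arr": None, "segment": "Enterprise"},
    {"source": "marketing", "updated": date(2023, 11, 5),
     "name": "ACME", "arr": None, "segment": "Mid-Market"},
]

unified = merge_account(records)
print(unified)  # name/segment come from CRM, arr from billing
```

Making the precedence rule explicit and deterministic is the point: every downstream report then resolves the same conflict the same way.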
    $86k-122k yearly est. Auto-Apply 3d ago
  • Senior Data Engineer

    Advance Local 3.6company rating

    Senior data scientist job in Cleveland, OH

**Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position combines deep technical expertise in data engineering with team leadership responsibilities, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with data product teams and business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into the central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations.

The base salary range is $120,000 - $140,000 per year.

**What you'll be doing:**

+ Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake.
+ Partner with platform owners across business units to establish and maintain data integrations from third-party systems into the central data platform.
+ Architect and maintain data infrastructure using IaC, ensuring reproducibility, version control and disaster recovery capabilities.
+ Design and implement API integrations and event-driven data flows to support real-time and batch data requirements.
+ Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities.
+ Partner with the Data Architect and data product teams to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs.
+ Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
+ Support rapid prototyping of new data products in collaboration with data product teams by building flexible, reusable data infrastructure components.
+ Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability.
+ Collaborate with data product teams, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization.
+ Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources.
+ Develop and maintain comprehensive documentation for data engineering processes, systems, architecture, integration patterns, and runbooks.
+ Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact.
+ Stay current with emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.
**Our ideal candidate will have the following:**

+ Bachelor's degree in computer science, engineering, or a related field
+ Minimum of seven years of experience in data engineering with at least two years in a lead or senior technical role
+ Expert proficiency in Snowflake data engineering patterns
+ Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform
+ Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs)
+ Proven ability to work with third-party APIs, webhooks, and data exports
+ Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure
+ Proven ability to design and implement API integrations and event-driven architecture
+ Experience with data modeling, data warehousing, and ETL processes at scale
+ Advanced proficiency in Python and SQL for data pipeline development
+ Experience with data orchestration tools (Airflow, dbt, Snowflake tasks)
+ Strong understanding of data security, access controls, and compliance requirements
+ Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms
+ Excellent problem-solving skills and attention to detail
+ Strong communication and collaboration skills

**Additional Information**

Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.
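A recurring pattern behind pulling data from third-party APIs and data exports is cursor-based pagination. A rough, hypothetical sketch (the response shape and cursor field are invented, and the HTTP call is injected so it can be stubbed):

```python
from typing import Callable, Iterator, Optional

# Hypothetical sketch of cursor-paginated ingestion from a SaaS API.
# `fetch_page` is injected (e.g. a thin wrapper over urllib/requests),
# and the {"items": [...], "next_cursor": ...} shape is invented.
def iter_records(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every record, following the cursor until it is exhausted."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["items"]
        cursor = page.get("next_cursor")
        if not cursor:
            return

# Stubbed two-page API, standing in for a real third-party platform.
PAGES = {
    None: {"items": [{"id": 1}, {"id": 2}], "next_cursor": "p2"},
    "p2": {"items": [{"id": 3}], "next_cursor": None},
}

records = list(iter_records(lambda cursor: PAGES[cursor]))
print([r["id"] for r in records])  # [1, 2, 3]
```

Injecting the fetch function keeps the pagination logic testable without a live endpoint, which is useful when each vendor's API has a slightly different envelope.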
Advance Local Media is one of the largest media groups in the United States, operating leading news and information companies in more than 20 cities and reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ******************** .

Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.

_Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._

_If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._

Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
    $120k-140k yearly 60d+ ago
  • Azure Data Engineer - 6013916

    Accenture 4.7company rating

    Senior data scientist job in Cleveland, OH

Accenture Flex offers you the flexibility of local fixed-duration project-based work powered by Accenture, a leading global professional services company. Accenture is consistently recognized on FORTUNE's 100 Best Companies to Work For and Diversity Inc's Top 50 Companies For Diversity lists. As an Accenture Flex employee, you will apply your skills and experience to help drive business transformation for leading organizations and communities. In addition to delivering innovative solutions for Accenture's clients, you will work with a highly skilled, diverse network of people across Accenture businesses who are using the latest emerging technologies to address today's biggest business challenges. You will receive competitive rewards and access to benefits programs and world-class learning resources. Accenture Flex employees work in their local metro area onsite at the project, significantly reducing and/or eliminating the demands of travel.

Job Description:
Join our dynamic team and embark on a journey where you will be empowered to work independently and grow into an SME. Active participation and contribution in team discussions will be key as you provide solutions to work-related problems. Let's work together to achieve greatness!

Responsibilities:
+ Create new data pipelines leveraging existing data ingestion frameworks and tools
+ Orchestrate data pipelines using the Azure Data Factory service
+ Develop/enhance data transformations based on the requirements to parse, transform and load data into the Enterprise Data Lake, Delta Lake, and Enterprise DWH (Synapse Analytics)
+ Perform unit testing; coordinate integration testing and UAT
+ Create HLD/DD/runbooks for the data pipelines
+ Configure compute and DQ rules; perform maintenance and performance tuning/optimization

Basic Qualifications:
+ Minimum of 3 years of work experience with one or more of the following: Databricks Data Engineering, DLT, Azure Data Factory, SQL, PySpark, Synapse Dedicated SQL Pool, Azure DevOps, Python

Preferred Qualifications:
+ Azure Function Apps
+ Azure Logic Apps
+ Precisely & COSMOS DB
+ Advanced proficiency in PySpark
+ Advanced proficiency in Microsoft Azure Databricks, Azure DevOps, Databricks Delta Live Tables and Azure Data Factory
+ Bachelor's or Associate's degree

Compensation at Accenture varies depending on a wide array of factors, which may include but are not limited to the specific office location, role, skill set, and level of experience. As required by local law, Accenture provides a reasonable range of compensation for roles that may be hired as set forth below. We accept applications on an on-going basis and there is no fixed deadline to apply. Information on benefits is here. (************************************************************

Role Location:
California - $47.69 - $57.69
Cleveland - $47.69 - $57.69
Colorado - $47.69 - $57.69
District of Columbia - $47.69 - $57.69
Illinois - $47.69 - $57.69
Minnesota - $47.69 - $57.69
Maryland - $47.69 - $57.69
Massachusetts - $47.69 - $57.69
New York/New Jersey - $47.69 - $57.69
Washington - $47.69 - $57.69

Requesting an Accommodation
Accenture is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed.
If you are hired by Accenture and require accommodation to perform the essential functions of your role, you will be asked to participate in our reasonable accommodation process. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired. If you would like to be considered for employment opportunities with Accenture and have accommodation needs such as for a disability or religious observance, please call us toll free at ****************, send us an email, or speak with your recruiter.

Equal Employment Opportunity Statement
We believe that no one should be discriminated against because of their differences. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Our rich diversity makes us more innovative, more competitive, and more creative, which helps us better serve our clients and our communities. For details, view a copy of the Accenture Equal Opportunity Statement (********************************************************************************************************************************************

Accenture is an EEO and Affirmative Action Employer of Veterans/Individuals with Disabilities. Accenture is committed to providing veteran employment opportunities to our service men and women.

Other Employment Statements
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States. Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process. Further, at Accenture a criminal conviction history is not an absolute bar to employment. The Company will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. Additionally, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the Company's legal duty to furnish information. California requires additional notifications for applicants and employees. If you are a California resident, live in or plan to work from Los Angeles County upon being hired for this position, please click here for additional important information. Please read Accenture's Recruiting and Hiring Statement for more information on how we process your data during the Recruiting and Hiring process.
    $69k-89k yearly est. 3d ago
  • Senior Data Engineer - N.I.

    Cleveland Clinic 4.7company rating

    Senior data scientist job in Cleveland, OH

Join the Cleveland Clinic team, where you will work alongside passionate caregivers and make a lasting, meaningful impact on patient care. Here, you will receive endless support and appreciation while building a rewarding career with one of the most respected healthcare organizations in the world.

The Senior Data Engineer will extract, transform, and integrate data from diverse clinical systems (including Epic Clarity, third-party APIs, and other internal and external sources) using SQL, Python, and additional programming languages as needed. The role ensures seamless integration of structured, semi-structured, and unstructured data within Snowflake and Databricks environments; familiarity with REDCap databases is considered a plus. This position plays a key role in organizing, managing, and optimizing the organization's data ecosystem, enabling enhanced decision-making across clinical operations and research. You will be responsible for designing, building, and maintaining scalable data pipelines, data warehouses, and data lakes that support clinical and financial reporting.

The role is full time (40 hours per week), Monday through Friday from 8:00 a.m. to 5:00 p.m., with minimal weekend or holiday work and occasional on-call coverage as needed. The caregiver must work onsite at Cleveland Main Campus, with eligibility for a hybrid work arrangement after 90 days.

A caregiver who excels in this role will:
* Analyze, design and coordinate the development of software systems.
* Develop new software and proof and test it to assure production of a quality product.
* Analyze current programs and processes.
* Make recommendations for more cost-effective products and better streamlined workflows.
* Define and implement high-performance software by leveraging a strong understanding of embedded hardware design.
* Provide technical support to other Developers when project support is needed, including the design of relational databases and client-side programming strategies using the latest HTML, CSS and JavaScript frameworks.
* Determine and communicate the implications of system-level decisions on subsystems and components and mitigate issues.
* Translate clinical problems into innovative healthcare solutions.
* Gather business and application-specific requirements.
* Ensure all requirements are met and maintained within the defined project scope.
* Determine whether a particular problem is caused by hardware, operating system software, application programs or network failures, and provide support as necessary.

Minimum qualifications for the ideal future caregiver include:
* Bachelor's Degree in Computer Science, Computer Engineering or a related field and 10 years of software development experience, including healthcare software development
* OR High School Diploma/GED and 15 years of experience
* OR Associate's Degree and 13 years of experience
* ITIL Foundations Certification within six months of hire
* Proven track record of enterprise architecture experience with large-volume, high-availability enterprise applications
* Experience with the Microsoft .NET technology stack and relational database design
* Microsoft C#, iOS Objective-C, VBScript, Visual Basic, ColdFusion, Microsoft T-SQL, JavaScript, CSS3 and/or HTML5 experience
* jQuery, jQuery Mobile, KnockoutJS, UnderscoreJS and/or YUI Compressor experience
* Experience in interfacing with internal and external customers
* SOA development using SOAP and REST; mobile application development and associated libraries experience

Preferred qualifications for the ideal future caregiver include:
* Experience with hospital or medical systems

Physical Requirements:
* Ability to perform work in a stationary position for extended periods.
* Ability to travel throughout the hospital system.
* Ability to operate a computer and other office equipment.
* Ability to communicate and exchange accurate information.

Personal Protective Equipment:
* Follows Standard Precautions using personal protective equipment as required for procedures.

Pay Range
Minimum Annual Salary: $92,620.00
Maximum Annual Salary: $141,265.00

The pay range displayed on this job posting reflects the anticipated range for new hires. A successful candidate's actual compensation will be determined after taking into consideration factors such as the candidate's work history, experience, skill set and education. The pay range displayed does not include any applicable pay practices (e.g., shift differentials, overtime, etc.). The pay range does not include the value of Cleveland Clinic's benefits package (e.g., healthcare, dental and vision benefits, retirement savings account contributions, etc.).
    $92.6k-141.3k yearly 6d ago
  • Data Engineer (Mentor, OH, US, 44060)

    Steris Corporation 4.5company rating

    Senior data scientist job in Mentor, OH

At STERIS, we help our Customers create a healthier and safer world by providing innovative healthcare and life science product and service solutions around the globe.

The BIA Data Engineer designs, implements, and maintains complex data engineering solutions in the Business Intelligence and Analytics team. The role is responsible for design, development, implementation, testing, documentation, and support of analytical and data solutions/projects requiring data aggregation/data pipelines/ETL/ELT from multiple sources into an efficient reporting mechanism, database/data warehouse, using appropriate tools like Informatica, Azure Data Factory, and SSIS. This includes interacting with the business to gather requirements, analysis, and creation of functional and technical specs, testing, training, escalation, and follow-up. Support of the applications includes resolving issues reported by users; issues could be caused by bugs in the application, user errors, or programming errors. The resolution process will include, but is not limited to, investigating known bugs on the software vendor support website, creating tickets or service requests with the software vendor, developing scripts to fix data issues, making program changes, testing fixes, and applying the changes to production. These tasks and activities will be completed with the help and under the guidance of the supervisor. Participation in team and/or project meetings, to schedule work and discuss status, will be required. The position also requires staying abreast of changes in technology, programming languages, and software development tools.

What You'll do as a Data Engineer
* Data Pipeline/ETL (40%): Designs and implements data stores and ETL data flows and data pipelines to connect and prepare operational systems data for analytics and business intelligence (BI) systems.
* Support & Operations (10%): Manages production deployments and automation, monitoring, job control and production support. Works with business users to test programs in Development and Quality. Investigates issues using vendor support website(s).
* Data Modeling/Designing Datasets (10%): Reviews and understands business requirements for development tasks assigned and applies standard data modelling and design techniques based upon a detailed understanding of requirements.
* Data Architecture and Technical Infrastructure (10%): Plans and drives the development of data engineering solutions, ensuring that solutions balance functional and non-functional requirements. Monitors application of data standards and architectures, including security and compliance.
* SDLC Methodology & Project Management (5%): Contributes to technical transitions between development, testing, and production phases of the solutions' lifecycle, and the facilitation of the change control, problem management, and communication processes.
* Data Governance and Data Quality (5%): Identifies and investigates data quality/integrity problems, determines impact, and provides solutions.
* Metadata Management & Documentation (5%): Documents all processes and mappings related to Data Pipelines work and follows development best practices as adopted by the BIA team.
* End-User Support, Education and Enablement (5%): Contributes to training and Data Literacy initiatives within the team and end-user community.
* Innovation, Continuous Improvement & Optimization (5%): Continuously improves and optimizes existing Data Engineering assets/processes.
* Partnership and Community Building (5%): Collaborates with other IT teams, the business community, data scientists, and other architects to meet business requirements. Interacts with DBAs on data designs optimal for data engineering solution performance.
The Experience, Abilities and Skills Needed * Bachelor's Degree and 2-4 years of relevant experience * Development, maintenance, and enhancement of data pipelines (ETL/ELT) and processes, with thorough knowledge of star/snowflake schemas * Developing complex SQL queries and SQL optimization * Development experience must be full life-cycle experience, including business requirements gathering, data sourcing, testing/data reconciliation, and deployment within a Business Intelligence/Data Warehousing architecture * Understanding of data architecture * Knowledge of ETL and data engineering standards and best practices for the design and development of data pipelines and data extract, transform and load processes * Design, build and test data products based on feeds from multiple systems, using a range of different storage technologies, access methods or both * Knowledge of data warehousing concepts, including multi-dimensional models and ETL logic for maintaining star schemas * Good understanding of the concepts and principles of data modeling * Ability to produce, maintain and update relevant data models for specific needs * Can reverse-engineer data models from a live system * SQL programming desirable (e.g., stored procedure development) * Proficient in data analysis, defect identification, and resolution * Strong professional verbal and written communication skills * Ability to work with little supervision and within changing priorities * Ability to analyze requirements and troubleshoot problems What STERIS Offers At STERIS, we invest in our employees and their families for the long term! STERIS wouldn't be where it is today without our incredible people. We share our success together with you by rewarding you for your hard work and achievements. Here is just a brief overview of what we offer: * Competitive Pay * Extensive Paid Time Off and (9) added Holidays. 
* Excellent healthcare, dental, and vision benefits * 401(k) with a company match * Long/short-term disability coverage * Parental Leave * Additional add-on benefits/discounts for programs such as Pet Insurance * Continued training and education programs * Excellent opportunities for advancement in a stable long-term career * #LI-KS1 #LI-Hybrid Pay range for this opportunity is $70,000-95,000. This position is eligible for bonus participation. Minimum pay rates offered will comply with county/city minimums, if higher than the range listed. Pay rates are based on a number of factors, including but not limited to local labor market costs, years of relevant experience, education, professional certifications, foreign language fluency, etc. STERIS offers a comprehensive and competitive benefits portfolio. Click here for a complete list of benefits: STERIS Benefits Open until position is filled. STERIS is a leading global provider of products and services that support patient care with an emphasis on infection prevention. WE HELP OUR CUSTOMERS CREATE A HEALTHIER AND SAFER WORLD by providing innovative healthcare and life sciences products and services around the globe. For more information, visit *************** If you need assistance completing the application process, please call ****************. This contact information is for accommodation inquiries only and cannot be used to check application status. STERIS is an Equal Opportunity Employer. We are committed to equal employment opportunity to ensure that persons are recruited, hired, trained, transferred and promoted in all job groups regardless of race, color, religion, age, disability, national origin, citizenship status, military or veteran status, sex (including pregnancy, childbirth and related medical conditions), sexual orientation, gender identity, genetic information, and any other category protected by federal, state or local law. 
We are committed to this policy not only through our status as a federal government contractor, but also because we are strongly bound by the principle of equal employment opportunity. The full affirmative action program, absent the data metrics required by § 60-741.44(k), shall be available to all employees and applicants for employment for inspection upon request. The program may be obtained at your location's HR Office during normal business hours.
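For candidates newer to the field, the star-schema ETL work this posting describes can be illustrated with a minimal sketch using Python's standard-library sqlite3. The table and column names (`dim_date`, `fact_sales`) are hypothetical examples, not STERIS's actual schema:

```python
import sqlite3

# Hypothetical star schema: one dimension table and one fact table.
# The ETL logic below upserts the dimension row, then inserts the fact
# row with the dimension's surrogate key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT UNIQUE);
CREATE TABLE fact_sales (date_key INTEGER REFERENCES dim_date(date_key),
                         product TEXT, amount REAL);
""")

def load_sale(iso_date, product, amount):
    """Upsert the date dimension, then insert the fact row."""
    cur.execute("INSERT OR IGNORE INTO dim_date (iso_date) VALUES (?)", (iso_date,))
    (date_key,) = cur.execute(
        "SELECT date_key FROM dim_date WHERE iso_date = ?", (iso_date,)).fetchone()
    cur.execute("INSERT INTO fact_sales VALUES (?, ?, ?)", (date_key, product, amount))

load_sale("2024-01-01", "sterilizer", 1200.0)
load_sale("2024-01-01", "washer", 800.0)

# A typical reporting query: join the fact to the dimension and aggregate.
total = cur.execute(
    "SELECT d.iso_date, SUM(f.amount) FROM fact_sales f "
    "JOIN dim_date d ON d.date_key = f.date_key GROUP BY d.iso_date").fetchall()
print(total)  # [('2024-01-01', 2000.0)]
```

The surrogate-key lookup and the join/aggregate query are the same pattern the role applies at scale with tools like Informatica or Azure Data Factory.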
    $70k-95k yearly 60d+ ago
  • Data Engineer

    Rockwool

    Senior data scientist job in Cleveland, OH

Ready to help build a better future for generations to come? In an ever-changing, fast-paced world, we owe it to ourselves and our future generations to live life responsibly. At ROCKWOOL, we work relentlessly to enrich modern living through our innovative stone wool solutions. Join us and make a difference! Your future team You will join our Factory of the Future team within Group R&D - a passionate community of specialists who develop and implement the latest technologies for stone wool production. Our team's mission is to transform the way we operate by building next-generation OT & IT solutions that drive smarter, more efficient, and more sustainable manufacturing. As part of this journey, we are expanding our Data Science and Engineering capabilities, and you could be a key part of it. What you will be doing * Design, build, and automate data pipelines for industrial process analysis * Create systems for efficient storage and access to large-scale operational data * Ensure data quality, reliability, and accessibility for ML model training and deployment * Develop real-time visualizations and dashboards to monitor industrial operations * Engineer data infrastructure for deploying ML solutions both on-premises and in the cloud * Evaluate and validate new technologies in production environments * Collaborate closely with Data Scientists and stakeholders to align technology across all levels of the organization What you bring * 2+ years of experience in data engineering, preferably in a production or industrial setting * Degree in Computer Science, Computer Engineering, or a related technical field (or equivalent practical foundations) * Experience with relational and non-relational databases (SQL, NoSQL, InfluxDB) * Solid programming skills in Python * Strong knowledge of data pipelines and ELT processes * Proficiency in data warehousing and data lake technologies (on-prem & cloud) * Hands-on experience with on-premise data infrastructure * Experience with big 
data frameworks (Kafka, Apache Spark, Flink) * Understanding of IT/OT convergence and data quality management * Familiarity with cloud platforms (AWS, Azure, or GCP) * Experience with DevOps tools * Familiarity with Linux environments Tools and technologies you'll work with * Data storage: PostgreSQL, MS SQL Server, MongoDB, S3/MinIO * Transformation & orchestration: DBT, Apache Airflow, Luigi * Big data: Apache Kafka, Apache Flink, Apache Spark * Cloud platforms: AWS S3, Azure Blob Storage, GCP Cloud Storage * Visualization: Grafana, Apache Superset, Power BI * DevOps: Git, Docker, CI/CD, OpenShift What we offer By joining our team, you become a part of the people-centric work environment of a Danish company. We offer you a competitive salary, a permanent contract after the probation period, a development package, team building events, and an activity-based office in Poznan's city center in the new prestigious office building - Nowy Rynek. The building is recognized as a building without barriers, which means that it is fully adapted to the needs of people with disabilities. Our compensation package on employment contracts includes: * An office-first approach: home office is available up to 1 day per week * Adaptable Hours: start your workday anytime between 7:00 AM and 9:00 AM * Home office subsidy * Private Medical Care * Multikafeteria MyBenefit * Wellbeing program * Extra Day Off for voluntary activities … and while in the office you can also use modern office space with a beautiful view and high-standard furniture, bicycle parking facilities & showers, chill-out rooms with PlayStation, football table, pool table, board games, and a subsidized canteen with delicious food & fruit. Interested? If you recognize yourself in this profile and challenge, we kindly invite you to apply with a CV written in English. Who we are We are the world leader in stone wool solutions. 
Founded in 1937 in Denmark, we transform volcanic rock into safe, sustainable products that help people and communities thrive. We are a global company with more than 12,200 employees, located in 40+ countries with 42 manufacturing facilities… all focused on one common purpose - to release the natural power of stone to enrich modern living. Sustainability is central to our business strategy. ROCKWOOL was one of the first companies to commit to actively contributing to the United Nations Sustainable Development Goals (SDGs) framework and is committed to 11 SDGs, including SDG 14, Life Below Water. Through our partnership with the One Ocean Foundation and in connection with our sponsorship of the ROCKWOOL Denmark SailGP team, we will help raise awareness around ocean health challenges in an effort to accelerate solutions to protect it. Diverse and Inclusive Culture We want all our people to feel valued, respected, included and heard. We employ 79 different nationalities worldwide and are committed to providing equal opportunities to all employees, promoting diversity, and working against all forms of discrimination among ROCKWOOL employees. At ROCKWOOL, you will experience a friendly team environment. Our culture is very important to us. In fact, we refer to our culture as "The ROCKWOOL Way". This is the foundation on which we operate and is based upon our values of ambition, responsibility, integrity and efficiency.
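As a flavor of the data-quality and reliability work this posting describes, here is a minimal sketch of a validation gate for industrial sensor readings before storage. The field names, sensor IDs, and thresholds are illustrative assumptions, not ROCKWOOL's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A single operational-data record; real pipelines would pull these from
# an OT historian, Kafka topic, or time-series database such as InfluxDB.
@dataclass
class Reading:
    sensor_id: str
    timestamp: datetime
    value: float

def validate(readings, low=0.0, high=2000.0):
    """Split a batch into clean rows and rejects; each reject carries a reason."""
    clean, rejects = [], []
    for r in readings:
        if r.timestamp.tzinfo is None:
            rejects.append((r, "naive timestamp"))   # require timezone-aware times
        elif not (low <= r.value <= high):
            rejects.append((r, "out of range"))      # physically implausible value
        else:
            clean.append(r)
    return clean, rejects

batch = [
    Reading("furnace-1", datetime(2024, 5, 1, tzinfo=timezone.utc), 1450.0),
    Reading("furnace-1", datetime(2024, 5, 1), 1460.0),                      # naive
    Reading("furnace-2", datetime(2024, 5, 1, tzinfo=timezone.utc), -5.0),   # invalid
]
clean, rejects = validate(batch)
print(len(clean), [reason for _, reason in rejects])
```

In a production pipeline the rejects would be routed to a quarantine table for investigation rather than silently dropped, so model training only ever sees the clean stream.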
    $78k-106k yearly est. 60d+ ago
  • Junior Data Engineer

    Qode

    Senior data scientist job in Cleveland, OH

PNC Bank is seeking a Junior Data Engineer to support the design, development, and maintenance of scalable data pipelines and data platforms that enable analytics, reporting, and regulatory compliance. This role is ideal for early-career professionals eager to build hands-on experience in enterprise data engineering within the financial services domain. Key Responsibilities * Assist in building and maintaining ETL/ELT pipelines to ingest, transform, and load data from multiple source systems. * Support development of batch and near real-time data processing workflows. * Work with structured and semi-structured data using SQL and Python. * Participate in data validation, reconciliation, and quality checks to ensure accuracy and completeness. * Collaborate with senior data engineers, data analysts, and business stakeholders to understand data requirements. * Help manage data storage solutions such as data warehouses and data lakes. * Assist with documentation of data models, pipelines, and operational processes. * Follow data governance, security, and compliance standards relevant to banking and financial services. * Monitor data pipelines and troubleshoot failures under guidance. * Support deployment and version control using Git and CI/CD practices. Required Qualifications * Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field. * 1-3 years of experience or strong academic/project experience in data engineering or data analytics. * Proficiency in SQL (joins, subqueries, performance tuning basics). * Working knowledge of Python for data processing. * Basic understanding of ETL concepts, data modeling, and data warehousing. * Familiarity with relational databases (Oracle, PostgreSQL, SQL Server, or similar). * Exposure to cloud platforms (AWS, Azure, or GCP) is a plus. * Experience with version control tools such as Git. 
Preferred / Nice-to-Have Skills * Exposure to Big Data technologies (Spark, Hadoop). * Familiarity with cloud data services (AWS S3, Glue, Redshift, Azure Data Factory, Snowflake). * Understanding of banking or financial data, including transactions, risk, or regulatory reporting. * Knowledge of data quality frameworks and basic data governance concepts. * Experience with workflow orchestration tools (Airflow, Control-M). Soft Skills * Strong analytical and problem-solving skills. * Willingness to learn and adapt in a regulated environment. * Good communication and documentation skills. * Ability to work effectively in a team-oriented, Agile environment.
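The data validation and reconciliation work this posting mentions can be sketched in plain Python: compare a source extract against what was loaded into the target and report keys that are missing or whose amounts disagree. The transaction IDs and amounts are made-up examples, not PNC's actual data model:

```python
# Toy source extract and loaded target, keyed by transaction ID.
source = {"txn-1": 100.00, "txn-2": 250.50, "txn-3": 75.25}
target = {"txn-1": 100.00, "txn-2": 250.00}   # txn-2 differs, txn-3 never loaded

def reconcile(source, target, tolerance=0.01):
    """Report keys absent from the target and keys whose values disagree
    beyond the tolerance."""
    missing = sorted(set(source) - set(target))
    mismatched = sorted(
        k for k in source.keys() & target.keys()
        if abs(source[k] - target[k]) > tolerance
    )
    return {"missing": missing, "mismatched": mismatched}

report = reconcile(source, target)
print(report)  # {'missing': ['txn-3'], 'mismatched': ['txn-2']}
```

In practice the same check would run as SQL against the warehouse (an anti-join for missing keys, a join with an amount comparison for mismatches), which is exactly the joins-and-subqueries skill the qualifications list.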
    $78k-106k yearly est. 12d ago

Learn more about senior data scientist jobs

How much does a senior data scientist earn in Lorain, OH?

Senior data scientists in Lorain, OH earn between $68,000 and $125,000 annually. This compares to the national senior data scientist range of $90,000 to $170,000.

Average senior data scientist salary in Lorain, OH

$92,000