Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals.
In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
* Respond effectively to the diverse perspectives, needs, and feelings of others.
* Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
* Use critical thinking to break down complex concepts.
* Understand the broader objectives of your project or role and how your work fits into the overall strategy.
* Develop a deeper understanding of the business context and how it is changing.
* Use reflection to develop self-awareness, enhance strengths and address development areas.
* Interpret data to inform insights and recommendations.
* Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
Minimum Degree Required: Bachelor's Degree
Minimum Year(s) of Experience: 4 year(s)
Demonstrates thorough-level abilities and/or a proven record of success in identifying and addressing client needs:
* Building of GenAI and AI solutions, including but not limited to analytical model development and implementation, prompt engineering, general all-purpose programming (e.g., Python), testing, communication of results, front end and back-end integration, and iterative development with clients
* Documenting and analyzing business processes for AI and Generative AI opportunities, including gathering of requirements, creation of initial hypotheses, and development of GenAI and AI solution approach
* Collaborating with client team to understand their business problem and select the appropriate analytical models and approaches for AI and GenAI use cases
* Designing and solutioning AI/GenAI architectures for clients, specifically for plugin-based solutions (i.e., ChatClient application with plugins) and custom AI/GenAI application builds
* Processing unstructured and structured data to be consumed as context for LLMs, including but not limited to embedding of large text corpus, generative development of SQL queries, building connectors to structured databases
* Supporting management of the daily operations of a global data and analytics team on client engagements, reviewing developed models, providing feedback, and assisting in analysis;
* Directing data engineers and other data scientists to deliver efficient solutions that meet client requirements;
* Leading and contributing to development of proof of concepts, pilots, and production use cases for clients while working in cross-functional teams;
* Structuring, writing, communicating, and facilitating client presentations; and,
* Directing associates through coaching, providing feedback, and guiding work performance.
Demonstrates thorough abilities and/or a proven record of success learning and performing in functional and technical capacities, including the following areas:
* Managing AI/GenAI application development teams including back-end and front-end integrations
* Using Python (e.g., Pandas, NLTK, Scikit-learn, Keras, etc.), common LLM development frameworks (e.g., LangChain, Semantic Kernel), relational storage (SQL), and non-relational storage (NoSQL);
* Experience in analytical techniques such as Machine Learning, Deep Learning and Optimization
* Vectorization and embedding, prompt engineering, and retrieval-augmented generation (RAG) workflow development
* Understanding of or hands-on experience with Azure, AWS, and/or Google Cloud platforms
* Experience with Git Version Control, Unit/Integration/End-to-End Testing, CI/CD, release management, etc.
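The RAG workflow named above can be sketched end to end with a toy retriever. The bag-of-words "embedding" here is a stand-in for a real embedding model, and the corpus, query, and function names are illustrative rather than drawn from any actual PwC stack:

```python
# Toy sketch of the retrieval step in a RAG workflow: embed documents,
# embed the query, rank by cosine similarity, and hand the top hit to an
# LLM as context. Bag-of-words counts stand in for a learned embedding.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in embedding: lowercase bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "Quarterly revenue grew across all regions.",
    "The SQL connector streams rows from the warehouse.",
    "Prompt templates are versioned alongside the code.",
]
context = retrieve("how does the sql connector work", corpus)
print(context[0])  # the SQL connector document ranks first
```

In practice a vector store and a framework such as LangChain or Semantic Kernel would replace the hand-rolled similarity search.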
Travel Requirements
Up to 80%
Job Posting End Date
Learn more about how we work: **************************
PwC does not intend to hire experienced or entry level job seekers who will need, now or in the future, PwC sponsorship through the H-1B lottery, except as set forth within the following policy: ***********************************
As PwC is an equal opportunity employer, all qualified applicants will receive consideration for employment at PwC without regard to race; color; religion; national origin; sex (including pregnancy, sexual orientation, and gender identity); age; disability; genetic information (including family medical history); veteran, marital, or citizenship status; or, any other status protected by law.
For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles' Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for Employment in accordance with these laws. At PwC, we recognize that conviction records may have a direct, adverse, and negative relationship to responsibilities such as accessing sensitive company or customer information, handling proprietary assets, or collaborating closely with team members. We evaluate these factors thoughtfully to establish a secure and trusted workplace for all.
Applications will be accepted until the position is filled or the posting is removed, unless otherwise set forth on the following webpage. Please visit this link for information about anticipated application deadlines: ***************************************
The salary range for this position is: $77,000 - $202,000. Actual compensation within the range will be dependent upon the individual's skills, experience, qualifications and location, and applicable employment laws. All hired individuals are eligible for an annual discretionary bonus. PwC offers a wide range of benefits, including medical, dental, vision, 401k, holiday pay, vacation, personal and family sick leave, and more. To view our benefits at a glance, please visit the following link: ***********************************
Senior Data Scientist, Specialist Senior - SFL Scientific
Deloitte 4.7
Senior data scientist job in Cleveland, OH
Our Deloitte Strategy & Transactions team helps guide clients through their most critical moments and transformational initiatives. From strategy to execution, this team delivers integrated, end-to-end support and advisory services covering valuation modeling, cost optimization, restructuring, business design and transformation, infrastructure and real estate, mergers and acquisitions (M&A), and sustainability. Work alongside clients every step of the way, helping them navigate new challenges, avoid financial pitfalls, and provide practical solutions at every stage of their journey: before, during, and after any major transformational projects or transactions.
SFL Scientific, a Deloitte Business, is a U.S. based, data science consulting firm specializing in building industry-specific, artificial intelligence (AI) technologies.
We are hiring a Senior Data Scientist to collaborate directly with clients to design and develop novel projects and solutions. Join a rapidly growing team of professionals working to build a world-class data science practice focused on solving complex R&D problems.
Recruiting for this role ends on 2/28/2026.
Work You'll Do
As a Senior Data Scientist at SFL Scientific, a Deloitte Business, you will define data strategy, drive technical development, and help us create the next generation of tools, products, and AI services. You will work closely with clients to understand their data sets, strategy, and operational requirements, in order to drive exploratory analysis and design long-term solutions.
Working with a team of interdisciplinary data scientists, engineers, architects, and consultants, our work includes novel areas such as cancer detection, drug discovery, optimizing population health and clinical trials, autonomous systems and edge AI, agentic solutions, and consumer product innovation. Join us to expand your technical career through the lens of consulting and work on novel projects and use cases to expand your data science & AI skills.
+ Guide clients with high autonomy in AI strategy and development, including understanding organizational needs, performing exploratory data analysis, building and validating models, and deploying models into production
+ Lead client initiatives to deliver AI/ML solutions, including providing thought leadership, long-term maintenance, and AI strategy objectives
+ Research and implement novel machine learning approaches, including advancing state of the art training, solution design, network design, and hardware optimization
+ Validate AI models and algorithms via code reviews and unit and integration tests
+ Support prioritization of project performance and model development and ensure AI solutions are delivered to maximize business impact and new initiatives
+ Collaborate with data engineers, data scientists, project managers, and business teams to ensure delivery and presentations align with business objectives
The Team
Our Strategy offering architects bold strategies to achieve business and mission goals, enabling growth, competitive advantage, technology modernization, and continuous digital and AI transformation.
SFL Scientific, a Deloitte Business, is a data science professional services practice focused on strategy, technology, and solving business challenges with Artificial Intelligence (AI). The team has a proven track record serving large, market-leading organizations in the private and public sectors, successfully delivering high-quality, novel and complex projects, and offering deep domain and scientific capabilities. Made up of experienced AI strategists, data scientists, and AI engineers, they serve as trusted advisors to executives, helping them understand and evaluate new and essential areas for AI investment and identify unique opportunities to transform their businesses.
Qualifications:
+ Master's or Ph.D. in a relevant STEM field (Data Science, Computer Science, Engineering, Physics, Mathematics, etc.)
+ 3+ years of experience in AI/ML algorithm development using core data science languages and frameworks (Python, PyTorch, etc.) and data analysis (NLP, time-series analysis, computer vision)
+ 3+ years of experience and a proven track record applying traditional ML and deep learning techniques (CNNs, RNNs, GANs) across real-world projects, including model tuning and performance validation in production environments
+ 3+ years of experience deploying and optimizing ML models using tools like Kubernetes, Docker, TensorRT/Triton, RAPIDS, Kubeflow, and MLflow
+ 3+ years of experience in leveraging cloud environments (AWS, Azure, or GCP) to deploy AI/ML workloads
+ Live within commuting distance to one of Deloitte's consulting offices
+ Ability to travel 10%, on average, based on the work you do and the clients and industries/sectors you serve
+ Limited immigration sponsorship may be available
Preferred:
+ 2+ years of experience working in a client-facing, consulting environment
+ 2+ years of experience leading project/client engagement teams in the execution of complex AI data science solutions
+ 1+ year of experience with LLM/GenAI use cases and developing RAG solutions, tools, and services (e.g., LangChain, LangGraph, MCP)
+ 1+ year of experience with AWS SageMaker or AWS ML Studio
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Deloitte, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $107,600 to $198,400.
You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Information for applicants with a need for accommodation: ************************************************************************************************************
#MonitorDeloitte
#DeloitteJobs
#StrategyConsulting
#DeloitteStrategy
#Strategy26
#SFL26
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
Senior Data Scientist - Supply Chain
Dealer Tire 4.7
Senior data scientist job in Cleveland, OH
Who We Are
We're Dealer Tire, a family-owned, international distributor of tires and parts established in 1918 in Cleveland, OH. We're laser focused on helping the world's largest and most trusted auto manufacturers grow their tire business; in fact, we've sold more than 60 million tires to date. We're a thriving company, and we're looking for driven individuals to join our team. That's where you come in!
Base Pay Range: $130,000-$145,000
As a Senior Data Scientist, your essential job functions will include the following:
Provide Thought Leadership & Drive Change
Deliver data-driven insights through advanced algorithms and statistical models aligned with Supply Chain strategy, maximizing profitability and minimizing costs.
Collaborate with leadership to effectively solve problems, assess results, and communicate actionable recommendations.
Build relationships with cross-functional teams to drive progress and adoption of Data Science projects.
Execute discovery sprints and deep EDA to convert anecdotes into testable hypotheses, KPIs, and use cases across supply chain functions.
Define and track operational KPIs with model health to tie analytics to service, cost, safety, and cash.
Translate complex analyses into simple narratives and decision memos leaders can act on, linking findings to economics and operational playbooks.
Advanced Modeling and Analysis
Identify and prioritize projects with the most significant returns and develop advanced mathematical and statistical models.
Design optimization/ML/simulation models and package them to drive day-to-day operations and decisions with clear recommendations and "what-if" scenarios.
Create high-quality features and datasets with lineage and quality checks; stand up lightweight feature stores and reusable "EDA fact packs" to accelerate future work.
Develop, maintain, and optimize models in cloud environments, ensuring business teams can leverage insights with actionable recommendations.
Collect, cleanse, transform, and curate structured and unstructured data into usable formats for research and analysis.
Utilize tools like SQL and data APIs to manage data in various storage environments, ensuring privacy and confidentiality.
Product and Project Management
Lead projects and collaborate with subject matter experts to align analysis outcomes with organizational goals.
Partner with MLE / DE to deliver resilient pipelines with alerting, and retraining; author runbooks and guardrails so Ops can operate models confidently.
Design and productionize multi-agent workflows that read from the data lake or warehouse and trigger actions; implement human-in-the-loop review, role-based guardrails, and observability.
Design and deliver scalable, integrated software tools and models that balance predictive power, interpretability, and operational risks.
Manage strategic projects, ensuring clear communication, timely delivery, and appropriate ROI for recommended initiatives.
Other Duties as Assigned
Basic Position Requirements
MS in Computer Science, Operations Research, Applied Mathematics, Machine Learning, or a related field.
4-5 years of experience implementing optimization, ML, or AI models.
Effective program management experience with proven results is required.
Strong data engineering principles, familiar with technologies to store, query, process, and analyze data.
Strong programming skills in languages like Python, C/C++, Java, or Go.
Proven record of productionizing models in cloud environments, version control, and MLOps.
Preferred Qualifications
Demonstrate in-depth technical capabilities, professional knowledge and understanding in functional areas of Supply Chain & Optimization - Inventory, Transportation, Sourcing, Distribution, Fulfillment, and Planning.
2+ years of practical experience with Optimization + Simulation (Pyomo/OR-Tools/Gurobi/CPLEX; Monte Carlo/discrete-event) and ML.
Ability to simplify complexity, influence cross-functionally, and tell the story behind the math.
Displays a creative mindset for problem solving in sophisticated and complex environments, including, but not limited to, identifying both new and existing problems and applying the most appropriate technology to solve the given challenge.
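A minimal sketch of the optimization-plus-simulation pairing mentioned above, using a Monte Carlo "what-if" search over stocking levels rather than the named solvers (Pyomo/OR-Tools/Gurobi/CPLEX). All prices, costs, and demand figures are invented for illustration:

```python
# Toy Monte Carlo "what-if" sketch for a supply-chain stocking decision
# (a newsvendor-style problem): simulate uncertain demand, score candidate
# order quantities by expected profit, and pick the best.
import random

PRICE, COST, SALVAGE = 12.0, 7.0, 2.0  # sell, buy, and leftover unit values

def expected_profit(order_qty: int, n_sims: int = 20_000, seed: int = 42) -> float:
    # Fixed seed gives every candidate the same demand draws (common
    # random numbers), which makes the comparison low-variance.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        demand = max(0, round(rng.gauss(mu=100, sigma=20)))  # uncertain demand
        sold = min(order_qty, demand)
        leftover = order_qty - sold
        total += PRICE * sold + SALVAGE * leftover - COST * order_qty
    return total / n_sims

candidates = range(80, 131, 5)
best = max(candidates, key=expected_profit)
print(best, round(expected_profit(best), 2))
```

A real engagement would swap the simulated demand for historical data and the grid search for a proper optimization model.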
Competencies Required
Results Orientation
Business Acumen
Agility
Initiative
Influence
Communication
Customer Focus
Relationship Building
Critical Thinking
Physical Job Requirements
Continuous viewing from and inputting to a computer screen.
Sitting for long periods of time.
Travel as necessary.
Drug Policy
Dealer Tire is a drug-free environment. All applicants being considered for employment must pass a pre-employment drug screening before beginning work.
Why Dealer Tire: An amazing opportunity to join a growing organization, built on the efforts of hard working, innovative, and team-oriented people. The compensation offered for this position will depend on qualifications, experience, and geographic location. The total compensation package may also include commission, bonus or profit sharing. We offer a competitive & comprehensive benefit package including: paid time off, medical, dental, vision, and 401k match (50% on the dollar up to 7% of employee contribution). For more information on our benefit offerings, please visit our Dealer Tire Family of Companies Benefits Highlights Booklet.
EOE Statement: Dealer Tire is an Equal Employment Opportunity (EEO) employer and does not discriminate on the basis of race, color, national origin, religion, gender, age, veteran status, political affiliation, sexual orientation, marital status or disability (in compliance with the Americans with Disabilities Act*), or any other legally protected status, with respect to employment opportunities.
*ADA Disclosure: Any candidate who feels that they may need an accommodation to complete this application, or any portions of same, based on the impact of a disability should contact Dealer Tire's Human Resources Department to discuss your specific needs. Please feel free to contact us at ADAAAccommodation@dealertire.com or via phone at ************.
Senior Data Scientist
Stratacuity
Senior data scientist job in Cleveland, OH
The Senior Data Scientist plays a key role in leveraging advanced analytics and machine learning to solve complex business challenges. This position works closely with business partners to deliver actionable insights and drive strategic initiatives through data-driven solutions.
Key Responsibilities & Required Skills:
* Analyze large, diverse datasets to identify trends, patterns, and opportunities for business improvement.
* Develop and implement statistical models, machine learning algorithms, and predictive analytics to address organizational needs.
* Collaborate with internal teams to define analytical questions, design experiments, and translate business requirements into technical solutions.
* Present findings and recommendations to stakeholders using clear visualizations and concise communication.
* Demonstrate proficiency in programming languages such as Python and experience with machine learning frameworks and cloud platforms.
* Utilize database technologies such as SQL and data visualization tools like Tableau and Power BI to support analysis and reporting.
* Understand AI and machine learning platforms such as Databricks, Snowflake, and Dataiku.
Qualifications:
* Bachelor's degree in Data Science, Statistics, Mathematics, Business Analytics, or a related field, or equivalent professional experience.
* Minimum 7 years of experience in data science, analytics, or artificial intelligence.
Apex Systems is a world-class IT services company that serves thousands of clients across the globe. When you join Apex, you become part of a team that values innovation, collaboration, and continuous learning. We offer quality career resources, training, certifications, development opportunities, and a comprehensive benefits package. Our commitment to excellence is reflected in many awards, including ClearlyRated's Best of Staffing in Talent Satisfaction in the United States and Great Place to Work in the United Kingdom and Mexico. Apex uses a virtual recruiter as part of the application process.
Apex Benefits Overview: Apex offers a range of supplemental benefits, including medical, dental, vision, life, disability, and other insurance plans that offer an optional layer of financial protection. We offer an ESPP (employee stock purchase program) and a 401K program which allows you to contribute typically within 30 days of starting, with a company match after 12 months of tenure. Apex also offers a HSA (Health Savings Account on the HDHP plan), a SupportLinc Employee Assistance Program (EAP) with up to 8 free counseling sessions, a corporate discount savings program and other discounts. In terms of professional development, Apex hosts an on-demand training program, provides access to certification prep and a library of technical and leadership courses/books/seminars once you have 6+ months of tenure, and certification discounts and other perks to associations that include CompTIA and IIBA. Apex has a dedicated customer service team for our Consultants that can address questions around benefits and other resources, as well as a certified Career Coach. You can access a full list of our benefits, programs, support teams and resources within our 'Welcome Packet' as well, which an Apex team member can provide.
Employee Type: Full-Time
Location: Cleveland, OH, US
Job Type:
Date Posted: January 8, 2026
Pay Range: $105,883 - $136,669
Senior Data Scientist / AI Engineer
Insight Global
Senior data scientist job in Beachwood, OH
Seeking a Senior Data Scientist with hands-on experience in GenAI agent development and modern AI engineering. The focus is on building production-grade agents, MCP integrations, and enterprise knowledge systems using cloud and data stack. We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************
Skills and Requirements
Responsibilities:
Build and refine GenAI agents that interact with internal systems
Develop MCP tools and secure integrations
Create and maintain knowledge bases and retrieval pipelines
Implement agent evaluation frameworks and guardrails
Work across Azure, Snowflake, and Databricks to support production AI systems
Key Skills & Requirements:
Must be a team player and work well with others
Hands-on experience building and deploying GenAI agents (including open-source LLMs) in production environments
Strong knowledge of MCP, tool integrations, and agent orchestration
Proven ability to design and maintain knowledge bases, vector search, and retrieval systems independently
Rapid GenAI prototyping (vibe coding)
Cloud expertise: Azure (preferred), AWS, or GCP; secure, governed infrastructure
Expert-level SQL skills; experience with Snowflake (flexible)
Experience with Databricks for data, ML workflows, and predictive modeling
Solid background in traditional ML (classification, clustering, etc.)
Ability to build evals, guardrails, and safety layers for agents at scale
Experience with MLOps and LLM Observability
5+ years of relevant experience; independent and collaborative team player
Comfortable with LLMs and working in a team environment; Azure, Databricks, and Snowflake expertise
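One way to read "build evals, guardrails, and safety layers for agents at scale" is sketched below with a hard-coded stub agent; the tool names, allowlist, and test cases are hypothetical:

```python
# Toy sketch of an agent evaluation harness with a guardrail: run prompts
# through an agent, block responses produced via non-allowlisted tools,
# and score outputs against expected substrings to get a pass rate.
ALLOWED_TOOLS = {"search_kb", "run_sql_readonly"}

def stub_agent(prompt: str) -> dict:
    # Stand-in for a real GenAI agent: returns an answer plus the tool it used.
    if "revenue" in prompt:
        return {"tool": "run_sql_readonly", "answer": "Q3 revenue was up 8%."}
    return {"tool": "delete_table", "answer": "Done."}

def guardrail(result: dict) -> dict:
    # Safety layer: refuse any response produced via a non-allowlisted tool.
    if result["tool"] not in ALLOWED_TOOLS:
        return {"tool": None, "answer": "BLOCKED: tool not permitted."}
    return result

def run_evals(cases: list[tuple[str, str]]) -> float:
    # Each case pairs a prompt with a substring the guarded answer must contain.
    passed = sum(expect in guardrail(stub_agent(p))["answer"] for p, expect in cases)
    return passed / len(cases)

cases = [
    ("summarize revenue trend", "revenue"),
    ("drop the staging table", "BLOCKED"),
]
print(run_evals(cases))  # 1.0: both cases pass
```

At scale, the stub would be replaced by a real LLM-backed agent and the substring checks by richer graders, but the harness shape (agent, guardrail, scored cases) carries over.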
$79k-109k yearly est. 4d ago
Senior Data Scientist Forecasting BI
Johnson & Johnson 4.7
Senior data scientist job in Brunswick, OH
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************
Job Function:
Data Analytics & Computational Sciences
Job Sub Function:
Data Science
Job Category:
Scientific/Technology
All Job Posting Locations:
New Brunswick, New Jersey, United States of America
Job Description:
We are searching for the best talent for Senior Data Scientist: Forecasting, BI to be in New Brunswick, NJ.
About Innovative Medicine
Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow.
Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way.
Learn more at *******************/innovative-medicine
Purpose:
As a Senior Data Scientist on the Global Finance Data Science Team, you will be responsible for delivering data science production processes to a high standard, contributing to products that are consumed and scrutinized by senior leadership. You will work in a global team of Data Scientists, Data Engineers and Machine Learning Engineers to advance the data science/AI roadmap for J&J's Global Finance function. You will help deliver value-added insights and analytics to our finance and business leaders, reduce manual workload through automation, and enhance user experience.
Project focus areas will also include Predictive Sales Forecasting as part of the SAP Analytics Cloud reporting capability, taking it from PoC to production-grade standards. This includes aligning finance and business needs, validating input data from different source systems, reconciling data, and working on other ongoing enhancements and ad-hoc requests from leadership.
You will help in data science projects across their lifecycle: Design/ proof-of-concept (PoC), development, data pipelines and engineering, deployment, adoption by end-users and ongoing enhancements. The capabilities developed will include predictive financial forecasting, Gen AI, descriptive analytics, data visualization and decision support. You will need to ensure that PoC's are put into production-grade with correct amount of automation, data validations and systems/process integration. This role will involve understanding the needs of business stakeholders and advocating the merits of data-driven analytics to provide viable solutions.
You will be responsible for:
Adopting a highly successful forecasting process and technologies to deliver a monthly sales financial forecast for consumption by senior leaders
Assessing BI sources to understand the impact of future events on forecast accuracy, developing judgement on how to adjust forecasts.
Adjusting data engineering pipelines for large datasets from finance/ERP systems and automating data science processes.
Iteratively improving the process and models to solve problems, testing and documenting changes, and aligning changes within the team.
Developing and deploying data science and AI solutions, including data processing, algorithm development, data visualization, and communication of insights to stakeholders.
Researching and implementing statistical and machine learning models, monitoring model performance, and continuously improving existing models.
Collaborating with finance, commercial leaders, technology teams, and external partners to deliver end-to-end solutions, ensuring compliance and risk management.
Advocating for data-driven insights and data science methods across the organization and managing compliance adherence.
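The forecasting work described above can be sketched, at its simplest, as exponential smoothing of a monthly sales series; the figures and smoothing constant are illustrative, not J&J data:

```python
# Toy sketch of a sales forecasting model: simple exponential smoothing
# of a monthly series, producing a one-step-ahead forecast.
def exp_smooth_forecast(series: list[float], alpha: float = 0.3) -> float:
    """One-step-ahead forecast via simple exponential smoothing."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level  # blend new observation into level
    return level

sales = [120.0, 132.0, 118.0, 141.0, 150.0, 147.0]  # illustrative monthly sales
print(round(exp_smooth_forecast(sales), 1))
```

A production-grade pipeline would add seasonality, holdout validation, and the data reconciliation steps the listing describes, but the smoothed-level update is the core of many baseline forecasts.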
Qualifications / Requirements:
Minimum of 5 years of Data Science/ AI experience in an industry setting is required, preferably in a Finance or Healthcare setting.
Minimum of a Bachelor's degree is required, preferably in Science, Economics, Business Analytics, Data Science, Finance, Computer Science, Engineering or any other quantitative or STEM discipline. A Master's degree in a relevant field is preferred.
Technical Requirements
Proficient in Python/R/Alteryx programming and experienced with Data Science Cloud platforms like AWS, Azure, and Domino.
Experienced using finance data and SAP HANA data tables is an advantage.
Proficient in interpreting BI sources and understanding the impact of these on finance metrics.
Able to work independently and under time pressure to deliver a process, investigating and solving data issues in an explainable way.
Skilled in data visualization and dashboarding using Tableau or PowerBI.
Knowledgeable in advanced statistical techniques and concepts, such as regression, properties of distributions, and statistical tests. Proficient in end-to-end production deployment, from concept to production-grade models, and experienced with Gen AI technologies like AWS Bedrock, Azure OpenAI service, and Joule.
Familiar with automated workflow tools like Alteryx and workflow orchestration tools like Airflow, as well as standards and best practices for technical documentation using tools like JIRA and Confluence.
Strong data analytics skills, attention to detail with business/finance logic, and experience working across multiple levels of stakeholders to create context around key business drivers.
Other:
The position will be located in New Brunswick, NJ and may require up to 10% travel.
Johnson & Johnson is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state or local law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Johnson & Johnson is committed to providing an interview process that is inclusive of our applicants' needs. If you are an individual with a disability and would like to request an accommodation, external applicants please contact us via *******************/contact-us/careers . Internal employees contact AskGS to be directed to your accommodation resource.
#JNJDataScience
Required Skills:
Data Science
Preferred Skills:
Alteryx, Healthcare Industry, Microsoft Power Business Intelligence (BI) Data Visualization, Python (Programming Language), R Programming, SAP HANA, Tableau Dashboard Development
The anticipated base pay range for this position is:
The anticipated base pay range for this position is $147,000 to $169,050 USD
Additional Description for Pay Transparency:
Subject to the terms of their respective plans, employees and/or eligible dependents are eligible to participate in the following Company-sponsored employee benefit programs: medical, dental, vision, life insurance, short- and long-term disability, business accident insurance, and group legal insurance. Subject to the terms of their respective plans, employees are eligible to participate in the Company's consolidated retirement plan (pension) and savings plan (401(k)). This position is eligible to participate in the Company's long-term incentive program. Subject to the terms of their respective policies and date of hire, employees are eligible for the following time-off benefits:
Vacation - 120 hours per calendar year
Sick time - 40 hours per calendar year; for employees who reside in the State of Washington - 56 hours per calendar year
Holiday pay, including Floating Holidays - 13 days per calendar year
Work, Personal and Family Time - up to 40 hours per calendar year
Parental Leave - 480 hours within one year of the birth/adoption/foster care of a child
Condolence Leave - 30 days for an immediate family member; 5 days for an extended family member
Caregiver Leave - 10 days
Volunteer Leave - 4 days
Military Spouse Time-Off - 80 hours
Additional information can be found through the link below. *********************************************
Data Scientist
6090-Johnson & Johnson Services Legal Entity
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************
Job Function:
Data Analytics & Computational Sciences
Job Sub Function:
Data Science
Job Category:
Scientific/Technology
All Job Posting Locations:
New Brunswick, New Jersey, United States of America
Job Description:
About Innovative Medicine
Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow.
Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way.
Learn more at *******************/innovative-medicine
We are searching for the best talent for Data Scientist
Purpose:
As a Data Scientist on the Global Finance Data Science Team, you will contribute to the high-standard delivery of data-science-driven predictive financial statements consumed by senior leadership. You will work in a global team of Data Scientists, Data Engineers, and Machine Learning Engineers to advance the Data Science and AI roadmap for J&J's Global Finance function. You will assist in delivering value-added insights and analytics to our finance and business leaders, reduce manual work through automated reasoning, and enhance user experience.
You will focus on advancing and broadening the capabilities of our Predictive Financial Statements, feeding results into SAP Analytics Cloud reporting, where most stakeholders go for corporate internal financial statements. Your job will also include aligning finance and business needs, validating data from different source systems, and reconciling data. You will create new models, leverage or extend existing models for other financial statements, and enhance existing models for accuracy, speed, and cost.
You will be involved in data science projects across their lifecycle, from design to production and adoption by end users. This includes creating proofs of concept for new projects, developing data science models, building data pipelines, and deploying to production. The capabilities developed will include forecasting, descriptive analytics, data visualization, GenAI, and decision support. This role involves understanding the needs of business stakeholders and advocating the merits of data-driven analytics to provide viable solutions.
You will be responsible for:
Adopting highly successful forecasting processes and technologies, and delivering predictive financial statements monthly.
Modeling the impact of future events to enhance forecast accuracy.
Developing data pipelines for large datasets sourced from financial systems and automating data science processes.
Documenting and aligning model changes within the team and stakeholders.
Communicating insights to stakeholders leveraging data visualization tools.
Monitoring model performance and continuously improving existing models.
Collaborating with finance, commercial leaders, technology teams, and external partners to deliver end-to-end solutions, ensuring compliance and risk management.
Advocating for data-driven insights and data science methods across the organization and managing compliance adherence.
Qualifications / Requirements:
Minimum of 2 years of Data Science/AI experience in an industry setting is required, preferably in a Finance or Healthcare setting.
Minimum of a Bachelor's degree is required, preferably in Science, Economics, Business Analytics, Data Science, Finance, Computer Science, Engineering, or another quantitative or STEM discipline.
Technical Requirements:
Proficient in Python/R/Alteryx programming and experienced with Data Science Cloud platforms such as AWS, Databricks, or Domino.
Experience using finance data and SAP HANA data tables is a plus.
Proficient in interpreting data sources and correlating to financial metrics.
Able to work independently and under time pressure to deliver results, investigating and solving data issues in an explainable way.
Skilled in data visualization and dashboarding using Tableau or Power BI.
Knowledgeable in statistical techniques and concepts, such as regression, properties of distributions, and statistical tests.
Strong data analytics skills and attention to detail.
Other:
The position will be located in New Brunswick, NJ and may require up to 10% travel.
#LI-Hybrid #JNJDataScience
Required Skills:
Artificial Intelligence (AI), Python (Programming Language)
Preferred Skills:
Advanced Analytics, Analytical Reasoning, Business Intelligence (BI), Business Writing, Coaching, Collaborating, Communication, Data Analysis, Data Compilation, Data Privacy Standards, Data Savvy, Data Science, Data Visualization, Econometric Models, Execution Focus, Technical Credibility, Technologically Savvy, Workflow Analysis
The anticipated base pay range for this position is:
The anticipated base pay range for this position is $89,000 to $143,750 USD.
Financial Crimes - Senior Data Scientist
Keybank 4.4
Senior data scientist job in Cleveland, OH
ABOUT THE JOB (JOB BRIEF)
Under the manager's supervision, the Senior Data Scientist is primarily responsible for conducting quantitative modeling and analytics of financial crimes. Leveraging both current and emerging technologies and applications, this role covers all key aspects of model development and analytics, i.e., data identification and gathering, methodology/technique selection, performance assessment, documentation, and ongoing monitoring.
ESSENTIAL JOB FUNCTIONS
Under the manager's oversight, perform a broad range of quantitative work, including model development and ad hoc analytics, to address financial crime compliance needs in AML/BSA/OFAC
Research, compile and evaluate large sets of data to assess quality and determine suitability for model building
Develop/maintain internal models and test/configure vendor solutions to ensure conceptually sound design, proper implementation, and acceptable model performance
Document model development process and outcomes properly and support model validation and review
Employ innovative techniques to drive continuous improvements in model effectiveness and efficiency, e.g. reducing false positives
Proactively develop and build technical skills and business knowledge, and effectively collaborate with compliance, technology, and risk partners
REQUIRED QUALIFICATIONS
Master's or Ph.D. degree in statistics, mathematics, economics, computer science, data science, predictive modeling, or another quantitative discipline, and at least 3 years of relevant experience, preferably in AML/BSA, OFAC, or fraud modeling/analytics; 4 years with a bachelor's degree
Solid expertise with both traditional and Machine Learning (ML)/Artificial Intelligence (AI) modeling practice and solutions
Hands-on work experience with statistical coding in SAS and/or Python
Knowledge of and ability to leverage traditional databases, cloud-based computing, and distributed computing
Knowledge of financial crime regulatory requirements, technology, and data analysis best practices
Excellent verbal, written and visual communication skills; ability to translate technical observations to a non-technical audience
Candidates must be located in or willing to relocate to Cleveland, OH or Buffalo, NY
COMPENSATION AND BENEFITS
This position is eligible to earn a base salary in the range of $94,000.00 - $175,000.00 annually. Placement within the pay range may differ based upon various factors, including but not limited to skills, experience and geographic location. Compensation for this role also includes eligibility for incentive compensation which may include production, commission, and/or discretionary incentives.
Please click here for a list of benefits for which this position is eligible.
Key has implemented an approach to employee workspaces which prioritizes in-office presence, while providing flexible options in circumstances where roles can be performed effectively in a mobile environment.
Job Posting Expiration Date: 02/22/2026
KeyCorp is an Equal Opportunity Employer committed to sustaining an inclusive culture. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, genetic information, pregnancy, disability, veteran status or any other characteristic protected by law.
Qualified individuals with disabilities or disabled veterans who are unable or limited in their ability to apply on this site may request reasonable accommodations by emailing HR_**********************.
#LI-Hybrid
Data Technology Lead
Westfield Group, Insurance
Senior data scientist job in Westfield Center, OH
The role is part of the Data, Analytics and Reporting team. This role leads a team of data engineers and data testers to deliver secure, scalable, and high-quality data solutions that support analytics, reporting, and business operations. The Data Technology Lead collaborates with stakeholders to understand technology and data requirements, implements best practices for data governance and testing, and drives innovation in data engineering. The position involves managing modern cloud-based platforms, while fostering a culture of continuous improvement and technical excellence within the IT team.
Job Responsibilities
* Data Architecture & Engineering: Design, develop, and maintain robust data pipelines and architectures for structured and unstructured data; optimize workflows across Azure Data Lake, Snowflake, and other environments; implement best practices for data modeling and transformation using dbt.
* Team Leadership: Lead and mentor data engineers and testers; manage workload distribution; foster collaboration and innovation.
* Testing & Quality Assurance: Establish data testing frameworks; ensure data accuracy and reliability; integrate testing into broader QA processes.
* Collaboration & Stakeholder Engagement: Partner with analytics, BI, and business teams to deliver data solutions; provide technical guidance.
* Vendor & Tool Management: Evaluate and select tools and vendors; negotiate contracts and manage relationships.
* Business Continuity: Develop and maintain disaster recovery and business continuity plans for data systems.
Job Qualifications
* 7+ years of experience in data engineering, with at least 2 years in a data leadership role.
* Insurance industry experience required
* Bachelor's degree in Computer Science, Information Technology, or a related field and/or commensurate experience.
* Master's degree in related field is preferred.
* Proficiency in SQL.
* Optional skills in Python and experience with modern data frameworks (e.g., Spark).
* Expertise in Snowflake, Azure Data Lake, dbt, and modern data platforms.
* Strong experience in data integration, data warehousing, and data lake architectures.
* Experience with Azure DevOps, CI/CD pipelines, and Git for code management
* Awareness of Generative AI (GenAI) capabilities to accelerate development and testing processes.
* Familiarity with data testing methodologies and tools.
* Excellent leadership, communication, and problem-solving skills.
Behavioral Competencies
* Directs work
* Collaborates
* Develops talent
* Customer focus
* Communicates effectively
* Ensures accountability
* Decision quality
* Business insight
* Nimble learning
* Builds effective teams
* Manages complexity
Technical Skills
* Technical Support
* Operating Systems
* Workflow Management
* Budgeting
* Disaster Recovery
* Process Improvement
* Project Management
* IT Strategy & Framework
* IT Regulatory Compliance
* Stakeholder Management
This job description describes the general nature and level of work performed in this role. It is not intended to be an exhaustive list of all duties, skills, responsibilities, knowledge, etc. These may be subject to change and additional functions may be assigned as needed by management.
$92k-127k yearly est. 47d ago
Biostatistician
Case Western Reserve University 4.0
Senior data scientist job in Cleveland, OH
Salary Grade
Case Western Reserve University is committed to providing a transparent estimate of the salary range available for this position at the time of its posting. The salary range is between $52,705 and $66,672, depending on qualifications, experience, department budgets, and industry data.
Employees receive more than just a paycheck. University employees enjoy a comprehensive benefits package that includes excellent healthcare, retirement plans, tuition assistance, paid time off, and a winter recess.
Job Description
POSITION OBJECTIVE
The objective of the position is to provide metagenomic sequence processing, statistical design and analysis, data management, high-quality visualization, and software and website development support to data-enhanced and omics-driven biomedical research. The analyst will play a leadership role in all aspects of data analyses, data management and report generation for these research projects. The analyst will be integrally involved in all aspects of research project design, computational implementation, preparation of research reports and grant applications. This position functions under minimal supervision.
ESSENTIAL FUNCTIONS
* Metagenomic sequence processing - Execute metagenomic data processing workflows using established bioinformatics pipelines. This includes running standard tools for quality control, taxonomic and functional profiling, and downstream analysis following documented procedures. The individual will routinely submit and manage batch jobs on high-performance computing (HPC) systems, monitor job status, troubleshoot errors, and ensure efficient pipeline execution. The role also involves organizing output files, maintaining reproducible workflows, and communicating results to the research team. (15%)
* Data Analysis - Perform statistical programming and analysis for multiple research studies. Create analytic datasets, summarize data, and create reports. Work with other team members to develop and implement analytic approaches, including but not limited to dimension reduction, multiple linear regression, testing procedures, and clustering methods. Assess the appropriateness of final statistical models. Document methods and results via written memoranda and reports. (20%)
* Data Management - Design, develop, and maintain appropriate databases, such as microbiome data across body sites. Develop programs to ensure data integrity (e.g., logic and range checks). Perform intermediate statistical programming for quality assurance. Develop efficient data archival systems and maintain detailed written documentation of all data programming and data archival systems. (10%)
* High-quality visualization - Create a wide range of graphical outputs, such as taxonomic summaries, heatmaps, ordination plots, pathway diagrams, and interactive visual dashboards, using tools like R (ggplot2), Python (matplotlib, seaborn), or web-based platforms. The individual will translate complex analytical results into intuitive visual formats for manuscripts, presentations, lab meetings, and collaborator reports. Ensuring visual consistency, scientific accuracy, and adherence to data-visualization best practices is essential. (15%)
* Software and website development - Write clean, efficient, and well-documented code in languages such as R and Python; implementing modular functions and reproducible analytical workflows; and managing version control using Git. The individual will translate research needs into functional software packages, optimize computational performance, and ensure reliability across computing environments. In addition, the role involves creating and maintaining lightweight websites or interactive dashboards; using HTML/CSS, JavaScript, R Shiny, or similar frameworks to present analytical results, support data visualization, and disseminate methods and documentation. Ensuring usability, responsiveness, and clarity in both software tools and web interfaces is essential to support the laboratory's research and collaborative projects. (20%)
* Prepare Research Reports - Assist in the writing and preparation of research papers, including internal documentation, academic publication and research presentations at conferences and meetings. (10%)
NONESSENTIAL FUNCTIONS
* Study Design - Work with the Principal Investigator and with other investigators on the projects to develop appropriate methodologies for addressing study hypotheses. Participate in the research design phase of projects and in the preparation of research proposals for funding. This work will be comprehensive and will include developing approaches for data management, assuring the integrity of the data, and methods for implementing data analysis. (5%)
* Development of Educational Opportunities - Work with other personnel to develop workshops and written materials to assist graduate students in learning about methodological and programming issues involved in working with population-based data, with emphasis on data management and reporting. (5%)
* Perform other duties as assigned. (
CONTACTS
Department: Frequent interactions (50%) with the principal investigator to provide status of ongoing projects and a summary of completed tasks. Occasional interactions (10%) with members of the department to help with data analysis support. Contact with administrative staff as needed.
University: Regular contact (40%) with collaborators in other university departments and centers. Occasional contact (10%) with the CWRU [U]Tech for computing support. Occasional contact with Research Administration, IRB, and Sponsored Projects Accounting.
External: Regular contact (40%) with researchers outside of CWRU to discuss and review experimental results.
Student: Occasional interactions (10%) with undergraduate/graduate students working in the lab or in the lab of collaborators.
SUPERVISORY RESPONSIBILITIES
NA
QUALIFICATIONS
Experience: 3 years of work experience in an increasingly responsible position in a biomedical research or social science research environment housed in a university, nonprofit research organization, or similar setting.
Education/licensing: A Master's degree in computer science, computational biology, biotechnology, biology, bioinformatics, biostatistics, statistics, or related degree is required.
REQUIRED SKILLS
* Close familiarity and ability to work with genomics, radiomics data and other biomedical data
* Ability to work semi-independently with direct but limited guidance
* Highly organized, with decision-making skills
* Ability to work and communicate knowledgably in collaboration with other team members including statisticians, investigators, clinicians, and programmers
* Strong oral and written communication skills, adaptability, and the capacity to work with team members from varied cultural and professional backgrounds.
* Desirable skills include background/understanding of medical terminology, and clinical/translational research
* Hands-on experience with Lotus Notes, web databases, or related collaborative task management apps
* Ability to use Microsoft Office products such as Word, PowerPoint, Excel, and Access
* Ability to use Overleaf, GitHub, BioRender, and Adobe Illustrator
* Ability to use software packages and websites such as R, Python, R Quarto, GitHub, Bioconductor, and CRAN
* Demonstrated history of successful support, education, and advocacy for all students, aligned with the values, mission, and messaging of the university, while adhering to the staff policy on conflict of commitment and interest.
* Ability to maintain consistent attendance.
* Ability to interact with colleagues, supervisors, and customers face to face.
* Willingness to learn and utilize Artificial Intelligence (AI).
WORKING CONDITIONS
Employee will be expected to obtain and maintain certification of training in human subjects research through an online course and exam and other seminars. The position may require occasional work with investigators at the Cleveland Clinic and/or MetroHealth Medical Center, and might require occasional travel to either of these locations. No travel outside the Cleveland area is anticipated.
EEO Statement
Case Western Reserve University is an equal opportunity employer. All applicants are protected under federal and state laws and university policy from discrimination based on race, color, religion, sex, sexual orientation, gender identity or expression, national or ethnic origin, protected veteran status, disability, age and genetic information.
Reasonable Accommodations
Case Western Reserve University complies with the Americans with Disabilities Act regarding reasonable accommodations for applicants with disabilities. Applicants requiring a reasonable accommodation for any part of the application and hiring process should contact the CWRU Office of Equity at ************ to request a reasonable accommodation. Determinations as to granting reasonable accommodations for any applicants will be made on a case-by-case basis.
GTM Data Engineer
Partssource 4.4
Senior data scientist job in Cleveland, OH
PartsSource is the leading technology and software platform for managing mission-critical healthcare equipment. Trusted by over 5,000 US hospitals and 15,000 clinical sites, PartsSource empowers providers and service organizations to maximize clinical availability for patient care and automates the procurement of parts, services and training through a unique digital experience.
PartsSource team members are deeply committed to our mission of Ensuring Healthcare is Always On, which is foundational to our success and growth. Our vibrant culture is built upon aligned values, shared ownership, mutual respect, and a passion for collaborating to solve complex customer problems.
About the Job Opportunity
The GTM Data Engineer is responsible for building and maintaining a single, trusted customer and revenue data foundation across Marketing, Sales, and Customer Success. This role partners closely with Revenue Operations to ensure all GTM teams operate from a consistent source of truth for pipeline, revenue, retention, and growth. You will own how GTM data is structured, enriched, validated, and made available, eliminating data ambiguity and enabling confident, data-driven decision making.
What You'll Do
GTM Data Modeling & Governance
(Technology - Data Engineering: Data Modeling & Architecture, Data Quality & Governance)
Design and maintain the canonical customer, account, and revenue data model across GTM systems
Resolve identity across contacts, accounts, users, assets, services, and subscriptions
Define authoritative objects and metrics for pipeline, bookings, renewals, expansion, and churn
Ensure historical accuracy, data lineage, and consistent metric definitions
Data Enrichment, Integration & Pipelines
(Technology - Data Engineering: ETL & Data Integration, Data Pipeline Development)
Build and manage data pipelines across CRM, marketing automation, services, and financial systems
Identify data gaps and implement enrichment strategies to improve completeness and usability
Merge datasets into unified customer and account views with clear conflict-resolution rules
Own schema changes, backfills, reprocessing, and validation as systems evolve
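The "clear conflict-resolution rules" mentioned above can be sketched in miniature: merge per-source account records so that the freshest non-empty value wins per field. The field names, source systems, and survivorship rule below are illustrative assumptions, not PartsSource's actual logic.

```python
# Hypothetical sketch: merge CRM and marketing-automation records into one
# unified account view. Rule (assumed): latest non-empty value per field wins.
from datetime import date

def merge_records(records):
    """Merge per-source account records; freshest non-empty value wins per field."""
    merged = {}
    freshness = {}  # field -> updated_at of the value currently kept
    for rec in records:
        updated = rec["updated_at"]
        for field, value in rec.items():
            if field == "updated_at" or value in (None, ""):
                continue  # never let an empty value overwrite a real one
            if field not in merged or updated > freshness[field]:
                merged[field] = value
                freshness[field] = updated
    return merged

crm = {"name": "Acme Health", "segment": None, "updated_at": date(2024, 5, 1)}
marketo = {"name": "ACME Health Inc", "segment": "Enterprise", "updated_at": date(2024, 3, 10)}
unified = merge_records([crm, marketo])
# CRM's newer name wins; Marketo still fills in the missing segment.
```

In practice this logic would live in the warehouse transformation layer (e.g. as SQL window functions) rather than application code, but the survivorship idea is the same.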
Attribution, Revenue Logic & Reporting Enablement
(Sales Revenue Operations: Performance Metrics & Reporting, Sales Analytics)
Implement approved attribution and revenue logic consistently across channels and time periods
Validate sourced, influenced, and assisted revenue before executive reporting
Enable trusted funnel, pipeline, retention, and expansion reporting within systems of record
Reduce reliance on spreadsheets and manual reconciliation
GTM Architecture, CDP & AI Readiness
(Technology - Systems & Applications: Systems Integration, Systems Thinking)
Support a warehouse-centric or composable CDP approach for GTM data
Partner with GTM leadership to evolve long-term data architecture
Prepare high-quality, LLM-ready datasets for AI-enabled GTM workflows
Ensure access controls, privacy, and compliance requirements are met
What You'll Bring
Your Background
5+ years in data engineering, analytics engineering, or GTM data roles
Strong experience with CRM and GTM data models
Advanced SQL skills and experience with modern data stacks and ETL tools
Experience supporting attribution, lifecycle, and revenue reporting
Familiarity with Customer Data Platforms or warehouse-centric CDP approaches
Ability to work cross-functionally with Marketing, Sales, Customer Success, Finance, and RevOps
Who We Want to Meet
Act Like an Owner - Accountability & Execution: You take full ownership of GTM data quality and follow through to reliable outcomes.
Serve with Purpose - Business Impact: You connect data architecture decisions to revenue visibility and GTM effectiveness.
Adapt to Thrive - Managing Ambiguity: You remain productive amid evolving systems, definitions, and priorities.
Collaborate to Win - Influence & Communication: You partner effectively with RevOps and GTM teams to align on shared metrics.
Challenge the Status Quo - Data-Informed Decision Making: You use evidence and clarity to replace assumptions and debates.
Benefits & Perks
Competitive compensation package with salary, incentives, company ownership/equity, and comprehensive benefits (401k match, health, college debt reduction, and more!)
Career and professional development through training, coaching and new experiences.
Hybrid culture with new & beautiful workspaces that balance flexibility, collaboration, and productivity.
Inclusive and diverse community of passionate professionals learning and growing together.
Interested?
We'd love to hear from you! Submit your resume and an optional cover letter explaining why you'd be a great fit.
About PartsSource
Since 2001, PartsSource has evolved into the leading technology and software platform for managing mission-critical equipment, serving over half of the U.S. hospital infrastructure. Our digital systems modernize and automate the procurement of parts, services, technical support, and training for HTM professionals to efficiently and effectively maintain their mission-critical equipment. PartsSource employs over 700 people nationwide who are committed to supporting healthcare providers and ensuring healthcare is always on.
In 2021, Bain Capital invested in the business, further accelerating our growth and positive impact within the healthcare industry.
Read more about us here:
· PartsSource Named to Newsweek's List of the Top 200 America's Most Loved Workplaces for 2024
· PartsSource Named Among the Top 50 Healthcare Technology Companies of 2025
· PartsSource Named Among the Top 25 Healthcare Software Companies of 2025
· PartsSource President and CEO Philip Settimi Named to Top 50 Healthcare Technology CEO List 2025
· WSJ: Bain Capital Private Equity Scoops Up PartsSource
EEO PartsSource, Inc., and its affiliates and subsidiaries, provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
Legal authorization to work in the U.S. is required.
$86k-122k yearly est. Auto-Apply 1d ago
Data Engineer
GD Information Technology
Senior data scientist job in Fairview Park, OH
Type of Requisition:
Regular
Clearance Level Must Currently Possess:
None
Clearance Level Must Be Able to Obtain:
None
Public Trust/Other Required:
None
Job Family:
Data Science and Data Engineering
Job Qualifications:
Skills:
Business Functions, Data Science, Data Solutions
Certifications:
None
Experience:
5+ years of related experience
US Citizenship Required:
No
Job Description:
Own your opportunity to turn data into measurable outcomes for our customers' most complex challenges. As a Data Engineer at GDIT, you'll power innovation to drive mission impact and grow your expertise to power your career forward. We are a fast-growing AI and Data team within the Global CIO organization at GDIT. We design and deliver enterprise-grade solutions that integrate AI, data, and human-AI collaboration workflows across key business functions (e.g., Growth, Finance, HR, Legal, and Supply Chain). We are looking for a highly skilled Data Engineer (Agentic AI) who shares a passion for delivering data engineering for agentic AI applications and building an AI-first data ecosystem foundational to enterprise AI transformation.
MEANINGFUL WORK AND PERSONAL IMPACT:
Design, build, and operate scalable end-to-end data pipelines and curated data products that support enterprise analytics and agentic AI use cases
Integrate data from enterprise systems and external sources, including structured, semi-structured, and unstructured data
Deliver reliable data services for agentic AI workflows, including APIs, retrieval/indexing, and governed context delivery for AI agents
Implement data quality, observability, and governance best practices across data pipelines and products
Optimize performance and cost across storage, compute, orchestration, and serving layers
Collaborate with cross-functional teams, including business stakeholders, AI engineers, and software developers, to translate requirements into production solutions
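The "retrieval/indexing and governed context delivery for AI agents" work above can be sketched, in miniature, as a similarity search over an in-memory index. The 3-dimensional vectors and document names below are toy stand-ins; a production system would use a real embedding model and a vector store.

```python
# Toy illustration of retrieval for agent context delivery: rank documents by
# cosine similarity to a query vector. Vectors and titles are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

INDEX = [
    ("invoice policy", [0.9, 0.1, 0.0]),
    ("travel policy", [0.1, 0.9, 0.0]),
    ("security policy", [0.0, 0.1, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the k most similar documents for the agent's context window."""
    ranked = sorted(INDEX, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

top = retrieve([0.8, 0.2, 0.0])
```

The "governed" part of context delivery would sit around this retrieval step: filtering the index by the requesting agent's entitlements before ranking.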
WHAT YOU'LL NEED TO SUCCEED
Bring your expertise and drive for innovation to GDIT. The Data Engineer must have:
Education: Bachelor's degree in Computer Science/Engineering, Data Science, or a related field
Experience: 5+ years of experience delivering production-grade data engineering across databases, data integration, data services, and data governance
Role requirements: Proficiency in programming languages (Python or Java) and databases (SQL and NoSQL). Strong collaboration and communication skills in cross-functional enterprise environments.
Preferred Skills and Abilities: Master's degree in Computer Science/Engineering, Data Science, or a related field. Relevant certifications in Data Engineering, AI, or Cloud. Experience delivering data engineering for agentic AI applications. Experience with retrieval-based AI data foundations (document processing, metadata, embeddings, vector or hybrid search). Familiarity with agent workflows and how agents interact with data services and tools in production. Experience with lakehouse architectures and cloud data platforms such as Azure (preferred), OCI (preferred), or AWS. Experience in real-time streaming applications or other high-velocity solutions. Experience leveraging AI tools to improve data engineering productivity and quality in coding, testing, and documentation.
Location: This is a hybrid position working at our GDIT facility in Falls Church, VA. Must be comfortable working (3) days a week onsite.
US Persons required
GDIT IS YOUR PLACE
At GDIT, the mission is our purpose, and our people are at the center of everything we do.
Growth: AI-powered career tool that identifies career steps and learning opportunities
Support: An internal mobility team focused on helping you achieve your career goals
Rewards: Comprehensive benefits and wellness packages, 401K with company match, and competitive pay and paid time off
Flexibility: Full-flex work week to own your priorities at work and at home
Community: Award-winning culture of innovation and a military-friendly workplace
OWN YOUR OPPORTUNITY
Explore a career in data science and engineering at GDIT and you'll find endless opportunities to grow alongside colleagues who share your determination for solving complex data challenges.
The likely salary range for this position is $119,000 - $161,000. This is not, however, a guarantee of compensation or salary. Rather, salary will be set based on experience, geographic location and possibly contractual requirements and could fall outside of this range.
Scheduled Weekly Hours:
40
Travel Required:
10-25%
Telecommuting Options:
Hybrid
Work Location:
USA VA Falls Church
Additional Work Locations:
Total Rewards at GDIT:
Our benefits package for all US-based employees includes a variety of medical plan options, some with Health Savings Accounts, dental plan options, a vision plan, and a 401(k) plan offering the ability to contribute both pre and post-tax dollars up to the IRS annual limits and receive a company match. To encourage work/life balance, GDIT offers employees full flex work weeks where possible and a variety of paid time off plans, including vacation, sick and personal time, holidays, paid parental, military, bereavement and jury duty leave. To ensure our employees are able to protect their income, other offerings such as short and long-term disability benefits, life, accidental death and dismemberment, personal accident, critical illness and business travel and accident insurance are provided or available. We regularly review our Total Rewards package to ensure our offerings are competitive and reflect what our employees have told us they value most.
We are GDIT. A global technology and professional services company that delivers consulting, technology and mission services to every major agency across the U.S. government, defense and intelligence community. Our 30,000 experts extract the power of technology to create immediate value and deliver solutions at the edge of innovation. We operate across 50 countries worldwide, offering leading capabilities in digital modernization, AI/ML, Cloud, Cyber and application development. Together with our clients, we strive to create a safer, smarter world by harnessing the power of deep expertise and advanced technology. Join our Talent Community to stay up to date on our career opportunities and events at
gdit.com/tc.
Equal Opportunity Employer / Individuals with Disabilities / Protected Veterans
$119k-161k yearly Auto-Apply 3d ago
Senior Data Engineer
Advance Local 3.6
Senior data scientist job in Cleveland, OH
**Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position combines deep technical expertise in data engineering with team leadership responsibilities, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with data product teams across business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into the central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations.
The base salary range is $120,000 - $140,000 per year.
**What you'll be doing:**
+ Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake.
+ Partner with platform owners across business units to establish and maintain data integrations from third party systems into the central data platform.
+ Architect and maintain data infrastructure using infrastructure as code (IaC), ensuring reproducibility, version control, and disaster recovery capabilities.
+ Design and implement API integrations and event-driven data flows to support real-time and batch data requirements.
+ Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities.
+ Partner with the Data Architect and data product teams to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs.
+ Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
+ Support rapid prototyping of new data products in collaboration with data product teams by building flexible, reusable data infrastructure components.
+ Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability.
+ Collaborate with data product teams, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization.
+ Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources.
+ Develop and maintain comprehensive documentation for data engineering processes and systems, architecture, integration patterns, and runbooks.
+ Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact.
+ Stay current with the emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.
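The data quality validation, monitoring, and alerting responsibility listed above can be sketched as a simple batch health check. The column names ("customer_id", "loaded_at") and thresholds below are assumptions for illustration, not the actual platform schema; in practice these checks would run inside the orchestrator and page on failure.

```python
# Minimal sketch of pipeline data-quality checks: row counts, null keys,
# and freshness. Column names and thresholds are hypothetical.
from datetime import datetime, timedelta

def check_batch(rows, expected_min_rows, max_age_hours=24):
    """Return human-readable alerts; an empty list means the batch looks healthy."""
    alerts = []
    if len(rows) < expected_min_rows:
        alerts.append(f"row count {len(rows)} below minimum {expected_min_rows}")
    if any(r.get("customer_id") is None for r in rows):
        alerts.append("null customer_id detected")
    newest = max((r["loaded_at"] for r in rows), default=None)
    if newest is None or datetime.now() - newest > timedelta(hours=max_age_hours):
        alerts.append("batch is stale")
    return alerts

batch = [
    {"customer_id": 1, "loaded_at": datetime.now()},
    {"customer_id": None, "loaded_at": datetime.now()},
]
alerts = check_batch(batch, expected_min_rows=5)
# Two alerts fire here: the batch is too small and contains a null key.
```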
**Our ideal candidate will have the following:**
+ Bachelor's degree in computer science, engineering, or a related field
+ Minimum of seven years of experience in data engineering with at least two years in a lead or senior technical role
+ Expert proficiency in Snowflake data engineering patterns
+ Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform
+ Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs)
+ Proven ability to work with third party APIs, webhooks, and data exports
+ Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure
+ Proven ability to design and implement API integrations and event-driven architecture
+ Experience with data modeling, data warehousing, and ETL processes at scale
+ Advanced proficiency in Python and SQL for data pipeline development
+ Experience with data orchestration tools (Airflow, dbt, Snowflake tasks)
+ Strong understanding of data security, access controls, and compliance requirements
+ Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms
+ Excellent problem-solving skills and attention to detail
+ Strong communication and collaboration skills
**Additional Information**
Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.
Advance Local Media is one of the largest media groups in the United States, which operates the leading news and information companies in more than 20 cities, reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ******************** .
Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.
_Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._
_If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._
Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
$120k-140k yearly 60d+ ago
BI Data Engineer
Quadax Careers & Culture
Senior data scientist job in Middleburg Heights, OH
Job Title: Data Engineer
We are seeking a Data Engineer to join our growing Big Data team. In this role, you will design and implement data models, build semantic layers, create data extracts, and develop robust data pipelines to ingest data into our data warehouse. You will collaborate with BI developers, data analysts, and data scientists to support a variety of data initiatives. The ideal candidate is self-directed, highly organized, and comfortable managing the data needs of multiple teams, systems, and products. This is a hybrid position: 4 days per week onsite in Middleburg Heights, OH, following an initial 3-month, 5-days-onsite period. Visa sponsorship is not available.
Key Responsibilities:
Design, build, and maintain data pipelines for optimal extraction, transformation, and loading (ETL) from diverse sources into our Snowflake Data Warehouse.
Perform data analysis, mapping, and validation across multiple sources and formats to produce consolidated data models.
Assemble large, complex datasets to meet technical requirements for reporting, extraction, and analytics.
Define and implement semantic layers on top of data models for reporting and analytical purposes.
Create custom data extracts to support ad-hoc reporting requests.
Identify and implement process improvements, including automation of manual tasks, pipeline optimization for scalability and performance, and data model enhancements for query efficiency.
Support Data Science initiatives by creating and automating pipelines for model training datasets.
Maintain and optimize the Snowflake environment, including monitoring consumption, configuring warehouses, managing role-based security, applying environment changes, and evaluating new functionality.
Collaborate with stakeholders such as BI Developers, Product Owners, Data Scientists, and Architects to resolve data-related technical issues and support infrastructure needs.
Ensure compliance with PHI and HIPAA standards and guidelines.
Perform other duties as assigned.
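The semantic-layer responsibility above can be illustrated with a toy sketch in which each business metric is defined once as a SQL expression, so BI reports and ad-hoc extracts share one definition. The metric, table, and column names are invented for the example.

```python
# Hypothetical semantic layer: one place to define each metric's formula,
# from which queries are generated. Names are illustrative only.
SEMANTIC_LAYER = {
    "net_revenue": "SUM(amount_billed - amount_refunded)",
    "claim_count": "COUNT(DISTINCT claim_id)",
}

def metric_query(metric, table, group_by):
    """Build a grouped query from the shared metric definition."""
    expr = SEMANTIC_LAYER[metric]  # single source of truth for the formula
    return (f"SELECT {group_by}, {expr} AS {metric} "
            f"FROM {table} GROUP BY {group_by}")

sql = metric_query("net_revenue", "fct_claims", "payer_id")
```

Tools like dbt's semantic layer or Snowflake views serve the same purpose at scale; the design point is that the formula lives in exactly one place.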
Education / Experience:
3+ years of experience in data engineering, including building and optimizing ETL processes, data pipelines, and datasets.
Strong understanding of data engineering principles and best practices.
Hands-on experience working with structured and unstructured datasets.
Proven ability to analyze datasets to answer business questions and identify improvement opportunities.
Preferred experience with Snowflake, Microsoft SQL Server, Microsoft Fabric, and Python.
Experience with C# and/or Java is a plus.
Strong project management and organizational skills.
Demonstrated success working with cross-functional teams in dynamic environments.
Familiarity with Scrum/Agile development methodologies is a plus.
Medical billing knowledge is a plus.
$78k-106k yearly est. 6d ago
Data Engineer
Rockwool
Senior data scientist job in Cleveland, OH
Ready to help build a better future for generations to come? In an ever-changing, fast paced world, we owe it to ourselves and our future generations to live life responsibly. At ROCKWOOL, we work relentlessly to enrich modern living through our innovative stone wool solutions.
Join us and make a difference!
Your future team
You will join our Factory of the Future team within Group R&D - a passionate community of specialists who develop and implement the latest technologies for stone wool production.
Our team's mission is to transform the way we operate by building next-generation OT & IT solutions that drive smarter, more efficient, and more sustainable manufacturing. As part of this journey, we are expanding our Data Science and Engineering capabilities and you could be a key part of it.
What you will be doing
* Design, build, and automate data pipelines for industrial process analysis
* Create systems for efficient storage and access to large-scale operational data
* Ensure data quality, reliability, and accessibility for ML model training and deployment
* Develop real-time visualizations and dashboards to monitor industrial operations
* Engineer data infrastructure for deploying ML solutions both on-premises and in the cloud
* Evaluate and validate new technologies in production environments
* Collaborate closely with Data Scientists and stakeholders to align technology across all levels of the organization
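As a toy illustration of the industrial pipeline and dashboard work listed above, the sketch below computes tumbling-window averages over sensor readings, the kind of aggregation a real-time visualization would plot. The sensor name and 60-second window are assumptions for the example.

```python
# Tumbling-window averages over (timestamp_seconds, sensor, value) readings.
# A stream processor like Flink or Spark would do this at scale; this is
# the same logic in plain Python for illustration.
from collections import defaultdict

def window_averages(readings, window_s=60):
    """Average each sensor's values per tumbling window of window_s seconds."""
    buckets = defaultdict(list)
    for ts, sensor, value in readings:
        buckets[(ts // window_s, sensor)].append(value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

readings = [
    (5, "furnace_temp", 1500.0),
    (30, "furnace_temp", 1520.0),
    (70, "furnace_temp", 1480.0),
]
avgs = window_averages(readings)
# Window 0 (0-59 s) averages the first two readings; window 1 holds the third.
```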
What you bring
* 2+ years of experience in data engineering, preferably in a production or industrial setting
* Degree in Computer Science, Computer Engineering, or a related technical field (or equivalent practical foundations)
* Experience with relational and non-relational databases (SQL, NoSQL, InfluxDB)
* Solid programming skills in Python
* Strong knowledge of data pipelines and ELT processes
* Proficiency in data warehousing and data lake technologies (on-prem & cloud)
* Hands-on experience with on-premise data infrastructure
* Experience with big data frameworks (Kafka, Apache Spark, Flink)
* Understanding of IT/OT convergence and data quality management
* Familiarity with cloud platforms (AWS, Azure, or GCP)
* Experience with DevOps tools
* Familiarity with Linux environments
Tools and technologies you'll work with
* Data storage: PostgreSQL, MS SQL Server, MongoDB, S3/MinIO
* Transformation & orchestration: DBT, Apache Airflow, Luigi
* Big data: Apache Kafka, Apache Flink, Apache Spark
* Cloud platforms: AWS S3, Azure Blob Storage, GCP Cloud Storage
* Visualization: Grafana, Apache Superset, Power BI
* DevOps: Git, Docker, CI/CD, OpenShift
What we offer
By joining our team, you become a part of the people-centric work environment of a Danish company. We offer you a competitive salary, permanent contract after the probation period, development package, team building events, activity-based office in Poznan's city center in the new prestigious office building - Nowy Rynek. The building is recognized as a building without barriers, which means that it is fully adapted to the needs of people with disabilities.
Our compensation package on employment contracts includes:
* An office-first approach: home office is available up to 1 day per week
* Adaptable Hours: start your workday anytime between 7:00 AM and 9:00 AM
* Home office subsidy
* Private Medical Care
* Multikafeteria MyBenefit
* Wellbeing program
* Extra Day Off for voluntary activities
… and while in the office you can also use modern office space with a beautiful view and high-standard furniture, bicycle parking facilities & showers, chill-out rooms with PlayStation, football table, pool table, board games, and a subsidized canteen with delicious food & fruit.
Interested?
If you recognize yourself in this profile and challenge, we kindly invite you to apply with CV written in English.
Who we are
We are the world leader in stone wool solutions. Founded in 1937 in Denmark, we transform volcanic rock into safe, sustainable products that help people and communities thrive. We are a global company with more than 12,200 employees, located in 40+ countries with 42 manufacturing facilities… all focused on one common purpose - to release the natural power of stone to enrich modern living.
Sustainability is central to our business strategy. ROCKWOOL was one of the first companies to commit to actively contributing to the United Nations Sustainable Development Goals (SDGs) framework and is actively committed to 11 SDGs, including SDG 14, Life Below Water. Through our partnership with the One Ocean Foundation and in connection with our sponsorship of the ROCKWOOL Denmark SailGP team, we will help raise awareness around ocean health challenges in an effort to accelerate solutions to protect it.
Diverse and Inclusive Culture
We want all our people to feel valued, respected, included and heard. We employ 79 different nationalities worldwide and are committed to providing equal opportunities to all employees, promote diversity, and work against all forms of discrimination among ROCKWOOL employees.
At ROCKWOOL, you will experience a friendly team environment. Our culture is very important to us. In fact, we refer to our culture as "The ROCKWOOL Way". This is the foundation in which we operate and is based upon our values of ambition, responsibility, integrity and efficiency.
$78k-106k yearly est. Auto-Apply 60d+ ago
Junior Data Engineer
Qode
Senior data scientist job in Cleveland, OH
PNC Bank is seeking a Junior Data Engineer to support the design, development, and maintenance of scalable data pipelines and data platforms that enable analytics, reporting, and regulatory compliance. This role is ideal for early-career professionals eager to build hands-on experience in enterprise data engineering within the financial services domain.
Key Responsibilities
Assist in building and maintaining ETL/ELT pipelines to ingest, transform, and load data from multiple source systems.
Support development of batch and near real-time data processing workflows.
Work with structured and semi-structured data using SQL and Python.
Participate in data validation, reconciliation, and quality checks to ensure accuracy and completeness.
Collaborate with senior data engineers, data analysts, and business stakeholders to understand data requirements.
Help manage data storage solutions such as data warehouses and data lakes.
Assist with documentation of data models, pipelines, and operational processes.
Follow data governance, security, and compliance standards relevant to banking and financial services.
Monitor data pipelines and troubleshoot failures under guidance.
Support deployment and version control using Git and CI/CD practices.
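The validation and reconciliation duty above might look, in miniature, like the following sketch: compare row counts, keys, and totals between a source system and the warehouse. The "txn_id" and "amount" field names are hypothetical.

```python
# Sketch of source-vs-target reconciliation: flag count mismatches,
# missing keys, and amount drift. Field names are illustrative.
def reconcile(source_rows, target_rows, key="txn_id", amount="amount"):
    """Return a list of discrepancies; empty means source and target agree."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"count mismatch: {len(source_rows)} vs {len(target_rows)}")
    src_keys = {r[key] for r in source_rows}
    tgt_keys = {r[key] for r in target_rows}
    for missing in sorted(src_keys - tgt_keys):
        issues.append(f"missing in target: {missing}")
    src_total = sum(r[amount] for r in source_rows)
    tgt_total = sum(r[amount] for r in target_rows)
    if round(src_total - tgt_total, 2) != 0:
        issues.append(f"amount drift: {src_total} vs {tgt_total}")
    return issues

source = [{"txn_id": 1, "amount": 10.0}, {"txn_id": 2, "amount": 5.5}]
target = [{"txn_id": 1, "amount": 10.0}]
issues = reconcile(source, target)
# One dropped transaction triggers all three checks.
```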
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
1-3 years of experience or strong academic/project experience in data engineering or data analytics.
Proficiency in SQL (joins, subqueries, performance tuning basics).
Working knowledge of Python for data processing.
Basic understanding of ETL concepts, data modeling, and data warehousing.
Familiarity with relational databases (Oracle, PostgreSQL, SQL Server, or similar).
Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.
Experience with version control tools such as Git.
Preferred / Nice-to-Have Skills
Exposure to Big Data technologies (Spark, Hadoop).
Familiarity with cloud data services (AWS S3, Glue, Redshift, Azure Data Factory, Snowflake).
Understanding of banking or financial data, including transactions, risk, or regulatory reporting.
Knowledge of data quality frameworks and basic data governance concepts.
Experience with workflow orchestration tools (Airflow, Control-M).
Soft Skills
Strong analytical and problem-solving skills.
Willingness to learn and adapt in a regulated environment.
Good communication and documentation skills.
Ability to work effectively in a team-oriented, Agile environment.
$78k-106k yearly est. 10d ago
Senior Data Engineer - N.I.
Cleveland Clinic 4.7
Senior data scientist job in Cleveland, OH
Join the Cleveland Clinic team where you will work alongside passionate caregivers and make a lasting, meaningful impact on patient care. Here, you will receive endless support and appreciation while building a rewarding career with one of the most respected healthcare organizations in the world.
The Senior Data Engineer will extract, transform, and integrate data from diverse clinical systems, including Epic Clarity, third-party APIs, and other internal and external sources, using SQL, Python, and additional programming languages as needed. The role ensures seamless integration of structured, semi-structured, and unstructured data within Snowflake and Databricks environments; familiarity with REDCap databases is considered a plus.
This position plays a key role in organizing, managing, and optimizing the organization's data ecosystem, enabling enhanced decision-making across clinical operations and research. You will be responsible for designing, building, and maintaining scalable data pipelines, data warehouses, and data lakes that support clinical and financial reporting.
The role is full time (40 hours per week), Monday through Friday from 8:00 a.m. to 5:00 p.m., with minimal weekend or holiday work and occasional on-call coverage as needed. The caregiver must work onsite at Cleveland Main Campus, with eligibility for a hybrid work arrangement after 90 days.
A caregiver who excels in this role will:
* Analyze, design and coordinate the development of software systems.
* Develop and test new software to assure production of a quality product.
* Analyze current programs and processes.
* Make recommendations for more cost-effective products and better streamlined workflows.
* Define and implement high performance software by leveraging a strong understanding of embedded hardware design.
* Provide technical support to other Developers when project support is needed, including the design of relational databases and client-side programming strategies using the latest HTML, CSS and JavaScript frameworks.
* Determine and communicate the implications of system-level decisions on subsystems and components and mitigate issues.
* Translate clinical problems into innovative healthcare solutions.
* Gather business and application specific requirements.
* Ensure all requirements are met as well as maintained within defined project scope.
* Determine whether a particular problem is caused by hardware, operating system software, application programs, or network failures, and provide support as necessary.
Minimum qualifications for the ideal future caregiver include:
* Bachelor's Degree in Computer Science, Computer Engineering or a related field and 10 years of software development experience, including healthcare software development
* OR High School Diploma/GED and 15 years of experience
* OR Associate's Degree and 13 years of experience
* ITIL Foundations Certification within six months of hire
* Proven track record of enterprise architecture experience with large volume, high availability enterprise applications.
* Experience with the Microsoft .NET technology stack and relational database design
* Microsoft C#, iOS Objective-C, VBScript, Visual Basic, ColdFusion, Microsoft T-SQL, JavaScript, CSS3 and/or HTML5 experience
* jQuery, jQuery Mobile, KnockoutJS, UnderscoreJS and/or YUI Compressor experience
* Experience in interfacing with internal and external customers
* SOA development using SOAP and REST; mobile application development and associated libraries experience
Preferred qualifications for the ideal future caregiver include:
* Experience with hospital or medical systems
Physical Requirements:
* Ability to perform work in a stationary position for extended periods.
* Ability to travel throughout the hospital system.
* Ability to operate a computer and other office equipment.
* Ability to communicate and exchange accurate information.
Personal Protective Equipment:
* Follows Standard Precautions using personal protective equipment as required for procedures.
Pay Range
Minimum Annual Salary: $92,620.00
Maximum Annual Salary: $141,265.00
The pay range displayed on this job posting reflects the anticipated range for new hires. A successful candidate's actual compensation will be determined after taking factors into consideration such as the candidate's work history, experience, skill set and education. The pay range displayed does not include any applicable pay practices (e.g., shift differentials, overtime, etc.). The pay range does not include the value of Cleveland Clinic's benefits package (e.g., healthcare, dental and vision benefits, retirement savings account contributions, etc.).
Lead Data Engineer (Mentor, OH, US, 44060)
Steris Corporation
Senior data scientist job in Mentor, OH
At STERIS, we help our Customers create a healthier and safer world by providing innovative healthcare and life science product and service solutions around the globe. The Lead Data Engineer demonstrates mastery of skills and knowledge and is a mentor, strategist, thought leader, evangelist, and champion who plans and leads data engineering activities.
Responsible for overseeing the design, development, deployment, and maintenance of scalable and robust data solutions. Develops and manages data integration pipelines connecting disparate data sources. Works closely with data architects, data scientists, analysts, and other stakeholders to support business needs in analytical and data solutions/projects. Collaborates with Infrastructure and DBA teams to ensure appropriate infrastructure is in place. Optimizes and streamlines data processing efforts to ensure data quality, security, privacy, on-time delivery and compliance. Provides technical leadership and mentorship, reviews deliverables, and provides feedback to the data engineering team.
What You'll do as a Lead Data Engineer
* Data Architecture and Technical Infrastructure: Defines, plans, designs and supports implementation of enterprise data architectures and the enterprise data platform. Plans and leads data engineering activities for strategic, large, and complex programs. Leads the selection and development of data engineering methods, tools, and techniques.
* SDLC Methodology & Project Management: Plans technical transitions between development, testing, and production phases of solutions' lifecycle, and the facilitation of the change control, problem management, and communication processes.
* Innovation, Continuous Improvement & Optimization: Develops organizational policies, standards, and guidelines for the development and secure operation of data services and products. Ensures adherence to technical strategies and architectures.
* Data Modelling / Designing Datasets: Coordinates the application of analysis, design, and modelling techniques to establish, modify or maintain data structures and their associated components. Manages the iteration, review and maintenance of data requirements and data models.
* Partnership and Community Building: Collaborates with other IT teams, the business community, data scientists and other architects to meet business requirements.
* Data Pipeline/ETL: Sets standards for data modelling and design tools and techniques, advises on their application, and ensures compliance. Defines and implements administration and control activities related to data warehouse planning and development and the establishment of policies and procedures pertaining to its management, security, maintenance, and utilization.
* Support & Operations: Manages the investigation of enterprise data requirements based upon a detailed understanding of information requirements.
* Data Governance and Data Quality: Ensures that data is reliable, secure, and timely. Implement Data privacy and best practices. Defines, designs and implements data quality assessment and improvement methodology, processes and solutions.
* End-User Support, Education and Enablement: Plans, designs, develops and facilitates training and Data Literacy initiatives within the team and End user community.
* Metadata Management & Documentation: Ensures standards and best practices in the documentation of metadata, data engineering processes, and architectures.
The Experience, Abilities and Skills Needed
* Bachelor's Degree and 6+ years of experience
* Development, maintenance, and enhancement of Data Pipelines (ETL/ELT) and processes with thorough knowledge of star/snowflake schemas
* Developing complex SQL queries and SQL optimization
* Development experience must be full life-cycle experience, including business requirements gathering, data sourcing, testing/data reconciliation, and deployment within Business Intelligence/Data Warehousing architecture.
* Designing and implementing data security
* Monitoring and optimizing data storage and data processing
* Delivering Data Solutions using Cloud Technologies
* Advanced SQL skills and experience with relational databases such as Oracle and SQL Server, including database design.
* Significant experience working with cloud Data Warehouse and Data Lake solutions (e.g., Snowflake, Redshift, BigQuery, Azure Data Lake Storage, Amazon S3, etc.)
* Experience working with data ingestion tools such as Informatica IDMC/IICS/PowerCenter, Fivetran, Stitch, or Matillion.
* Working knowledge of Cloud-based solutions (e.g., Azure, AWS and GCP).
* Experience building and deploying machine learning models in production.
* Strong proficiency in object-oriented languages: Python, Java, C++, Scala.
* Strong proficiency in scripting languages like Bash.
* Strong proficiency in data pipeline and workflow management tools (e.g., Airflow, Azkaban).
* Familiarity with big data frameworks such as Apache Hadoop and Apache Spark
* Strong project management and organizational skills.
* Excellent problem-solving, communication, and organizational skills.
* Demonstrated leadership experience and skills
* Ability to communicate effectively and influence technical and business stakeholders at all levels of the organization
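The star/snowflake schemas and complex SQL queries called out above follow a common pattern: a central fact table keyed to denormalized dimension tables, queried by joining and aggregating. A minimal sketch using SQLite (the table names, values, and query are illustrative, not from the posting; a real warehouse would live in Snowflake, Redshift, or similar):

```python
import sqlite3

# Toy star schema: one fact table referencing two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (date_key INTEGER, product_key INTEGER, amount REAL);
INSERT INTO dim_date    VALUES (1, 2024), (2, 2025);
INSERT INTO dim_product VALUES (10, 'devices'), (11, 'services');
INSERT INTO fact_sales  VALUES (1, 10, 100.0), (2, 10, 250.0), (2, 11, 50.0);
""")

# A typical analytical query: aggregate the fact table while slicing by
# attributes pulled in from the dimensions.
rows = conn.execute("""
    SELECT d.year, p.category, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.category
    ORDER BY d.year, p.category
""").fetchall()
print(rows)
# -> [(2024, 'devices', 100.0), (2025, 'devices', 250.0), (2025, 'services', 50.0)]
```

The same join-then-aggregate shape underpins most BI reporting queries; "snowflaking" simply normalizes the dimension tables further (e.g., splitting product category into its own table).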
What STERIS Offers
At STERIS, we invest in our employees and their families for the long term! STERIS wouldn't be where it is today without our incredible people. We share our success with you by rewarding you for your hard work and achievements.
Here is just a brief overview of what we offer:
* Competitive Pay
* Extensive Paid Time Off and nine added holidays.
* Excellent healthcare, dental, and vision benefits
* 401(k) with a company match
* Long/Short term disability coverage
* Parental Leave
* Additional add-on benefits/discounts for programs such as Pet Insurance
* Continued training and education programs
* Excellent opportunities for advancement in a stable long-term career
* #LI-KS1 #LI-Hybrid
Pay range for this opportunity is $105,000-$125,000. This position is eligible for bonus participation.
Minimum pay rates offered will comply with county/city minimums, if higher than range listed. Pay rates are based on a number of factors, including but not limited to local labor market costs, years of relevant experience, education, professional certifications, foreign language fluency, etc.
STERIS offers a comprehensive and competitive benefits portfolio. Click here for a complete list of benefits: STERIS Benefits
Open until position is filled.
STERIS is a leading global provider of products and services that support patient care with an emphasis on infection prevention. WE HELP OUR CUSTOMERS CREATE A HEALTHIER AND SAFER WORLD by providing innovative healthcare and life sciences products and services around the globe. For more information, visit ***************
If you need assistance completing the application process, please call ****************. This contact information is for accommodation inquiries only and cannot be used to check application status.
STERIS is an Equal Opportunity Employer. We are committed to equal employment opportunity to ensure that persons are recruited, hired, trained, transferred and promoted in all job groups regardless of race, color, religion, age, disability, national origin, citizenship status, military or veteran status, sex (including pregnancy, childbirth and related medical conditions), sexual orientation, gender identity, genetic information, and any other category protected by federal, state or local law. We are not only committed to this policy by our status as a federal government contractor, but also we are strongly bound by the principle of equal employment opportunity.
The full affirmative action program, absent the data metrics required by § 60-741.44(k), shall be available to all employees and applicants for employment for inspection upon request. The program may be obtained at your location's HR Office during normal business hours.
Azure Data Engineer - 6013916
Accenture
Senior data scientist job in Cleveland, OH
Accenture Flex offers you the flexibility of local fixed-duration project-based work powered by Accenture, a leading global professional services company. Accenture is consistently recognized on FORTUNE's 100 Best Companies to Work For and Diversity Inc's Top 50 Companies For Diversity lists.
As an Accenture Flex employee, you will apply your skills and experience to help drive business transformation for leading organizations and communities. In addition to delivering innovative solutions for Accenture's clients, you will work with a highly skilled, diverse network of people across Accenture businesses who are using the latest emerging technologies to address today's biggest business challenges.
You will receive competitive rewards and access to benefits programs and world-class learning resources. Accenture Flex employees work in their local metro area onsite at the project, significantly reducing and/or eliminating the demands to travel.
Job Description:
Join our dynamic team and embark on a journey where you will be empowered to perform independently and become an SME. Active participation and contribution in team discussions will be key as you help provide solutions to work-related problems. Let's work together to achieve greatness!
Responsibilities:
* Create new data pipelines leveraging existing data ingestion frameworks, tools
* Orchestrate data pipelines using the Azure Data Factory service.
* Develop/Enhance data transformations based on the requirements to parse, transform and load data into Enterprise Data Lake, Delta Lake, Enterprise DWH (Synapse Analytics)
* Perform unit testing, coordinate integration testing and UAT, and create HLD/DD/runbooks for the data pipelines
* Configure compute and DQ rules; perform maintenance, performance tuning and optimization
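Orchestration tools such as Azure Data Factory run pipeline activities as a dependency graph: each activity starts only after its upstream dependencies succeed. A stand-alone sketch of that ordering idea using only the Python standard library (the activity names are illustrative, not ADF API calls):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each activity maps to the set of activities it
# depends on. ADF expresses the same thing via "dependsOn" in pipeline JSON.
dependencies = {
    "ingest_raw": set(),
    "parse": {"ingest_raw"},
    "transform": {"parse"},
    "load_delta_lake": {"transform"},
    "load_synapse": {"transform"},
}

# Topological order: a valid execution sequence respecting every dependency.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Note that the two load activities are independent of each other, so an orchestrator is free to run them in parallel once `transform` completes; the topological sort only constrains their position relative to their upstreams.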
Qualification
Basic Qualifications:
* Minimum of 3 years of work experience with one or more of the following: Databricks Data Engineering, DLT, Azure Data Factory, SQL, PySpark, Synapse Dedicated SQL Pool, Azure DevOps, Python
Preferred Qualifications:
* Azure Function Apps
* Azure Logic Apps
* Precisely & Cosmos DB
* Advanced proficiency in PySpark.
* Advanced proficiency in Microsoft Azure Databricks, Azure DevOps, Databricks Delta Live Tables and Azure Data Factory.
* Bachelor's or Associate's degree
Compensation at Accenture varies depending on a wide array of factors, which may include but are not limited to the specific office location, role, skill set, and level of experience. As required by local law, Accenture provides a reasonable range of compensation for roles that may be hired as set forth below. We accept applications on an on-going basis and there is no fixed deadline to apply.
Information on benefits is here.
Role Location:
California - $47.69 - $57.69
Cleveland - $47.69 - $57.69
Colorado - $47.69 - $57.69
District of Columbia - $47.69 - $57.69
Illinois - $47.69 - $57.69
Minnesota - $47.69 - $57.69
Maryland - $47.69 - $57.69
Massachusetts - $47.69 - $57.69
New York/New Jersey - $47.69 - $57.69
Washington - $47.69 - $57.69
How much does a senior data scientist earn in Parma, OH?
The average senior data scientist in Parma, OH earns between $68,000 and $125,000 annually. This compares to the national average senior data scientist range of $90,000 to $170,000.
Average senior data scientist salary in Parma, OH
$92,000
What are the biggest employers of Senior Data Scientists in Parma, OH?
The biggest employers of Senior Data Scientists in Parma, OH are: