**Why Mayo Clinic** Mayo Clinic is top-ranked in more specialties than any other care provider according to U.S. News & World Report. As we work together to put the needs of the patient first, we are also dedicated to our employees, investing in competitive compensation and comprehensive benefit plans to take care of you and your family, now and in the future. And with continuing education and advancement opportunities at every turn, you can build a long, successful career with Mayo Clinic.
**Benefits Highlights**
+ Medical: Multiple plan options.
+ Dental: Delta Dental or reimbursement account for flexible coverage.
+ Vision: Affordable plan with national network.
+ Pre-Tax Savings: HSA and FSAs for eligible expenses.
+ Retirement: Competitive retirement package to secure your future.
**Responsibilities**
**Data Scientist - Foundational Model Science**
**Position Summary**
The Data Scientist for Foundational Model Science is the senior technical leader and lead scientist responsible for designing, training, and governing Mayo's multimodal foundational model. This model forms the core intelligence layer used by clinical departments, researchers, agentic workflows, and sovereign AI collaborations. The individual will work as a hands-on architect, model builder, and researcher while acting as a player-coach, guiding strategy and building a future team.
**Key Responsibilities**
**Scientific & Technical Leadership**
+ Design multimodal foundational model architectures integrating signals from imaging, text, waveforms, structured data, graph representations, and temporal embeddings.
+ Develop fusion, alignment, and cross-modal reasoning mechanisms (early fusion, late fusion, token-level fusion, hybrid models); an illustrative sketch follows this list.
+ Define and implement methods for grounded clinical reasoning, retrieval-augmented inference, graph-augmented attention, and chain-of-thought verification.
+ Establish protocols for model lifecycle governance, safe update cycles, drift-aware re-training, and provenance tracking.
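As a purely illustrative aid for the fusion bullet above, the sketch below contrasts early fusion (concatenating modality features before a shared encoder) with late fusion (averaging per-modality predictions). The two-modality setup, module names, and dimensions are assumptions chosen for clarity; they do not describe Mayo's actual architecture.

```python
# Illustrative only: a toy contrast of early vs. late fusion for two modalities.
# Dimensions and module names are assumptions, not the posting's actual design.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate modality features first, then encode them jointly."""
    def __init__(self, img_dim=512, txt_dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, img_feat, txt_feat):
        return self.encoder(torch.cat([img_feat, txt_feat], dim=-1))

class LateFusion(nn.Module):
    """Encode each modality separately, then average the per-modality logits."""
    def __init__(self, img_dim=512, txt_dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.img_head = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))
        self.txt_head = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, img_feat, txt_feat):
        return 0.5 * (self.img_head(img_feat) + self.txt_head(txt_feat))

if __name__ == "__main__":
    img, txt = torch.randn(4, 512), torch.randn(4, 768)
    print(EarlyFusion()(img, txt).shape)  # torch.Size([4, 2])
    print(LateFusion()(img, txt).shape)   # torch.Size([4, 2])
```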
**Hands-On Modeling & Training**
+ Train large-scale deep learning models, including multimodal architectures and domain-specific transformer-based systems, on real clinical datasets.
+ Fine-tune and adapt **large language models (LLMs)** for clinical reasoning, summarization, question answering, agentic behavior, and instruction-following tasks.
+ Build retrieval-augmented pipelines using embeddings, vector stores, graph traversal, and clinically grounded context construction (a minimal sketch follows this list).
+ Develop evaluation methods for reasoning quality, temporal prediction accuracy, multimodal synergy, ablation-based robustness, and counterfactual behavior.
+ Create reference-grounded training datasets, structured reasoning tasks, and multimodal benchmarks to evaluate model performance.
+ Conduct hands-on experimentation with optimization strategies, large-scale distributed training, model quantization, and inference acceleration.
+ Implement uncertainty modeling, selective prediction, abstention mechanisms, and clinically meaningful risk thresholds.
+ Build interpretable reasoning pathways, cross-modal attribution maps, and reference-grounded explanations.
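As a purely illustrative aid for the retrieval-augmented pipeline bullet above, here is a minimal sketch of clinically grounded context construction: embed a query and a small corpus, rank passages by cosine similarity, and assemble a prompt from the top hits. The bag-of-words embedder, toy corpus, and prompt template are assumptions; a production pipeline would use learned embeddings and a vector store.

```python
# Illustrative only: a minimal retrieval-augmented context builder.
import numpy as np

def embed(text: str, vocab: list[str]) -> np.ndarray:
    """Count-vector embedding over a fixed vocabulary (stand-in for a learned encoder)."""
    tokens = text.lower().split()
    return np.array([tokens.count(w) for w in vocab], dtype=float)

def top_k(query: str, docs: list[str], vocab: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query and keep the k best."""
    q = embed(query, vocab)
    def cos(d: str) -> float:
        v = embed(d, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        return float(q @ v / denom) if denom else 0.0
    return sorted(docs, key=cos, reverse=True)[:k]

def build_context(query: str, docs: list[str], vocab: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved passages."""
    passages = "\n".join(f"- {d}" for d in top_k(query, docs, vocab))
    return f"Context:\n{passages}\n\nQuestion: {query}"

if __name__ == "__main__":
    vocab = ["anticoagulation", "imaging", "lab", "dose"]
    corpus = [
        "Anticoagulation dose guidance for atrial fibrillation.",
        "Imaging follow-up schedule after incidental findings.",
        "Lab reference ranges for coagulation panels.",
    ]
    print(build_context("What anticoagulation dose is recommended?", corpus, vocab))
```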
**Cross-functional Collaboration**
+ Work closely with the Representation team to ensure representation-model alignment.
+ Partner with clinical SMEs to encode domain reasoning into reinforcement learning, preference optimization, or rule-guided behaviors.
**Team Leadership**
+ Serve as the future founding technical lead of the Foundational Model Science Program.
+ Mentor scientists and engineers and eventually build a specialty modeling team.
**Qualifications**
**Required**
+ PhD in Machine Learning, Computer Science, Applied Mathematics, or a related discipline, with at least four years of experience in informatics, artificial intelligence, data science, and/or machine learning.
+ Experience with generative modeling, reasoning models, or multimodal foundation models.
+ Expertise in alignment methods (contrastive learning, RLHF/RLCS, preference optimization).
+ Experience with distributed training and large-scale compute.
**Preferred**
+ Experience with clinical or EMR data across multiple modalities.
+ 7+ years of experience training deep learning models, including transformers or multimodal architectures.
+ Experience defining evaluation frameworks for reasoning, multimodal synergy, reliability, or fairness.
+ Publications in multimodal learning, foundation models, or reasoning architectures.
**Exemption Status**
Exempt
**Compensation Detail**
Education, experience, and tenure may be considered along with internal equity when job offers are extended. $165,276 - $247,998 annually.
**Benefits Eligible**
Yes
**Schedule**
Full Time
**Hours/Pay Period**
80
**Schedule Details**
Monday - Friday. Regular day hours.
**Weekend Schedule**
none expected
**International Assignment**
No
**Site Description**
Just as our reputation has spread beyond our Minnesota roots, so have our locations. Today, our employees are located at our three major campuses in Phoenix/Scottsdale, Arizona; Jacksonville, Florida; and Rochester, Minnesota; at Mayo Clinic Health System campuses throughout Midwestern communities; and at our international locations. Each Mayo Clinic location is a special place where our employees thrive in both their work and personal lives. Learn more about what each unique Mayo Clinic campus has to offer, and where your best fit is.
**Equal Opportunity**
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, protected veteran status or disability status. Learn more about "EOE is the Law." Mayo Clinic participates in E-Verify and may provide the Social Security Administration and, if necessary, the Department of Homeland Security with information from each new employee's Form I-9 to confirm work authorization.
**Recruiter**
Jill Squier
**Equal opportunity**
As an Affirmative Action and Equal Opportunity Employer Mayo Clinic is committed to creating an inclusive environment that values the diversity of its employees and does not discriminate against any employee or candidate. Women, minorities, veterans, people from the LGBTQ communities and people with disabilities are strongly encouraged to apply to join our teams. Reasonable accommodations to access job openings or to apply for a job are available.
$71k-112k yearly est. 21d ago
Data Scientist - Research Sovereign AI
Mayo Healthcare 4.0
Data scientist job in Rochester, MN
Data Scientist - Foundational Model Science
The Data Scientist for Foundational Model Science is the senior technical leader and lead scientist responsible for designing, training, and governing Mayo's multimodal foundational model. This model forms the core intelligence layer used by clinical departments, researchers, agentic workflows, and sovereign AI collaborations. The individual will work as a hands-on architect, model builder, and researcher while acting as a player-coach, guiding strategy and building a future team.
Key Responsibilities
Scientific & Technical Leadership
Design multimodal foundational model architectures integrating signals from imaging, text, waveforms, structured data, graph representations, and temporal embeddings.
Develop fusion, alignment, and cross-modal reasoning mechanisms (early fusion, late fusion, token-level fusion, hybrid models).
Define and implement methods for grounded clinical reasoning, retrieval-augmented inference, graph-augmented attention, and chain-of-thought verification.
Establish protocols for model lifecycle governance, safe update cycles, drift-aware re-training, and provenance tracking.
Hands-On Modeling & Training
Train large-scale deep learning models, including multimodal architectures and domain-specific transformer-based systems, on real clinical datasets.
Fine-tune and adapt large language models (LLMs) for clinical reasoning, summarization, question answering, agentic behavior, and instruction-following tasks.
Build retrieval-augmented pipelines using embeddings, vector stores, graph traversal, and clinically grounded context construction.
Develop evaluation methods for reasoning quality, temporal prediction accuracy, multimodal synergy, ablation-based robustness, and counterfactual behavior.
Create reference-grounded training datasets, structured reasoning tasks, and multimodal benchmarks to evaluate model performance.
Conduct hands-on experimentation with optimization strategies, large-scale distributed training, model quantization, and inference acceleration.
Implement uncertainty modeling, selective prediction, abstention mechanisms, and clinically meaningful risk thresholds.
Build interpretable reasoning pathways, cross-modal attribution maps, and reference-grounded explanations.
Cross-functional Collaboration
Work closely with the Representation team to ensure representation-model alignment.
Partner with clinical SMEs to encode domain reasoning into reinforcement learning, preference optimization, or rule-guided behaviors.
Team Leadership
Serve as the future founding technical lead of the Foundational Model Science Program.
Mentor scientists and engineers and eventually build a specialty modeling team.
Required
PhD in Machine Learning, Computer Science, Applied Mathematics, or a related discipline, with at least four years of experience in informatics, artificial intelligence, data science, and/or machine learning.
Experience with generative modeling, reasoning models, or multimodal foundation models.
Expertise in alignment methods (contrastive learning, RLHF/RLCS, preference optimization).
Experience with distributed training and large-scale compute.
Preferred
Experience with clinical or EMR data across multiple modalities.
7+ years of experience training deep learning models, including transformers or multimodal architectures.
Experience defining evaluation frameworks for reasoning, multimodal synergy, reliability, or fairness.
Publications in multimodal learning, foundation models, or reasoning architectures.
$66k-98k yearly est. Auto-Apply 19d ago
Biology Water Sciences Intern - Minnesota
Xcel Energy 4.4
Data scientist job in Welch, MN
Are you looking for an exciting job where you can put your skills and talents to work at a company you can feel proud to be a part of? Do you want a workplace that will challenge you and offer you opportunities to learn and grow? A position at Xcel Energy could be just what you're looking for.
At Xcel Energy, our employees are the driving force behind our success. So we make sure that, here, you can be your best. Doing work that makes a difference for neighbors and communities. Working with a team you can count on to push you. Expanding skills, staying ready for change, and capturing opportunities to grow. All with the support, rewards and recognition you need to thrive - during your internship and beyond.
Xcel Energy is seeking candidates to support our Environmental Services Department in conducting biology and water science related studies at power plants located throughout MN and WI. The paid internship will provide you with practical experience in the energy industry at a company that's committed to excellence, safety and environmental stewardship. The position will involve a variety of field, laboratory and data processing tasks. Considerable time will be spent outdoors working from a boat in a variety of weather conditions, including inclement conditions, with the potential for non-typical work schedules including extended hours, nights and weekends. Through practical field experience and mentoring, successful candidates will gain significant professional and personal skills.
Typical intern responsibilities may include but are not limited to:
Participate in a field crew of two to four people using various field sampling techniques including electrofishing, seines and kick nets to collect fish and aquatic samples
Collect biological data including field identification of fish to species and weight and length data
Process preserved minnow, larval fish and macroinvertebrate samples in a laboratory setting and identify to species
Use water quality monitoring equipment including hand-held turbidity and pH meters, hydrolabs and data loggers
Enter field data into Excel spreadsheets and assist with data and statistical analysis, drafting graphs and tables, and report writing (a brief example follows this list)
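For illustration only, here is a brief example of the kind of summary analysis an intern might run after data entry; the species, lengths, and weights below are made up, and pandas is just one possible tool for this step.

```python
# Illustrative only: summarize made-up field data by species.
import pandas as pd

fish = pd.DataFrame({
    "species": ["walleye", "walleye", "shorthead redhorse", "shorthead redhorse"],
    "length_mm": [412, 388, 305, 331],
    "weight_g": [720, 640, 350, 410],
})

# Sample count plus mean length and weight for each species group.
summary = fish.groupby("species").agg(
    n=("length_mm", "size"),
    mean_length_mm=("length_mm", "mean"),
    mean_weight_g=("weight_g", "mean"),
)
print(summary)
```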
This is for a full-time position (40 hours/week) to start June 1, 2026. This position may have the possibility to extend beyond the internship's initial summer term and work part-time during the school year based on the candidate's successful performance and Xcel Energy's business needs.
Minimum Requirements:
Current student (as of May 2026) enrolled in an accredited college or university and pursuing a B.S. or advanced degree in biology, aquatic biology, fisheries, chemistry, natural resources sciences, sustainability or other science-based program.
Able to commute to the Prairie Island Environmental Laboratory located at 1717 Wakonade Dr., Welch MN and various worksite locations in the Twin Cities Metro and surrounding areas.
Able to work full-time during the summer (40 hours a week)
Preferred Qualifications:
3.0 GPA (out of a 4.0 scale) or higher
Completed undergraduate coursework in aquatic biology, ichthyology, aquatic insects, limnology, hydrology, chemistry or similar related courses.
Familiarity and experience working from a boat and / or in an outdoor setting.
Experience or Proficiency with Microsoft Excel and data analytics tools.
As a leading combination electricity and natural gas energy company, Xcel Energy offers a comprehensive portfolio of energy-related products and services to 3.4 million electricity and 1.9 million natural gas customers across eight Western and Midwestern states. At Xcel Energy, we strive to be the preferred and trusted provider of the energy our customers need. If you're ready to be a part of something big, we invite you to join our team.
All qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Individuals with a disability who need an accommodation to apply please contact us at *************************.
Non-Bargaining. The anticipated starting base pay for this position is $17.40 to $27.90 per hour. This position is eligible for the following benefits: Pension, 401(k) plan, Paid time off (PTO), Holidays.
Benefit plans are subject to change and Xcel Energy has the right to end, suspend, or amend any of its plans, at any time, in whole or in part.
In any materials you submit, you may redact or remove age-identifying information including but not limited to dates of school attendance and graduation. You will not be penalized for redacting or removing this information.
Deadline to Apply: 02/12/26
EEO is the Law | EEO is the Law Supplement | Pay Transparency Nondiscrimination | Equal Opportunity Policy (PDF) | Employee Rights (PDF)
All Xcel Energy employees and contractors share responsibility for protecting the company's information and systems by adhering to cybersecurity policies, standards, and best practices, recognizing that cybersecurity is everyone's responsibility.
ACCESSIBILITY STATEMENT
Xcel Energy endeavors to make *************************** accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact Xcel Energy Talent Acquisition at *************************. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.
$17.4-27.9 hourly Auto-Apply 8d ago
Data Engineer Supply Chain Planning - IT - Hormel Foods (Austin, MN)
Hormel Foods 4.6
Data scientist job in Austin, MN
**DATA ENGINEER SUPPLY CHAIN PLANNING - IT - CORPORATE OFFICE (AUSTIN, MN)** To save time applying, Hormel Foods does not offer sponsorship of job applicants for employment-based visas for this position at this time. **Hormel Foods Corporation** - _Inspired People. Inspired Food._
Hormel Foods Corporation, based in Austin, Minnesota, is a global branded food company with approximately $12 billion in annual revenue across more than 80 countries worldwide. Its brands include _Planters_, _Skippy_, _SPAM_, _Hormel Natural Choice_, _Applegate_, _Justin's_, _Wholly_, _Hormel Black Label_, _Columbus_, _Jennie-O_ and more than 30 other beloved brands. The company is a member of the S&P 500 Index and the S&P 500 Dividend Aristocrats, was named one of the best companies to work for by U.S. News & World Report, one of America's most responsible companies by Newsweek, recognized by TIME magazine as one of the World's Best Companies, and has received numerous other awards and accolades for its corporate responsibility and community service efforts. The company lives by its purpose statement - _Inspired People. Inspired Food._ - to bring some of the world's most trusted and iconic brands to tables across the globe. For more information, visit *******************.
**Summary:**
Join Hormel Foods in transforming supply chain analytics through data engineering excellence. Your work will directly impact operational efficiency, sustainability, and customer satisfaction, while supporting various transformation and modernization initiatives.
We are looking for a Data Engineer specializing in supply chain planning within our Data and Analytics team. This is an exciting opportunity to help grow and modernize analytics at Hormel Foods! Individuals interested in this position will need strong communication skills, communicating up, down and across the organization. You will be responsible for managing simultaneous initiatives that require innovative problem solving.
This role is critical in enabling data-driven decisions across supply planning and demand planning, helping the organization optimize inventory, reduce waste, and improve service levels. It also offers opportunities for growth into data architecture, advanced analytics, and leadership roles within the Data & Analytics team.
You will use tools such as Google Cloud Platform, SQL, Python, Incorta, Oracle Business Intelligence, Tableau, and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics. Additionally, you will engineer reports, dashboards and visualizations using enterprise business intelligence tools (Oracle and Tableau).
**Specific Competencies Include:**
+ Data Structures and Models - Designs and develops the overall database/data warehouse structure based on functional and technical requirements. Designs and develops data collection frameworks for structured and unstructured data.
+ Data Pipelines and ELT - Designs and applies data extraction, loading and transformation techniques in order to connect large data sets from a variety of sources.
+ Data Performance - Troubleshoots and fixes data performance issues that come with querying and combining large volumes of data. Tunes solution performance accordingly during initial development.
+ Visualizations and Dashboards - Gathers requirements, then designs and develops reports and dashboards drawing on multiple sources to meet business needs. Leverages visualizations where possible to speed insight discovery. Walks the business through data definitions.
**Responsibilities:**
+ Works closely with Supply Chain Planning functions, data scientists, BI analysts, and other stakeholders to understand data needs and deliver solutions.
+ Engineers physical and logical data models for dimensions and facts within the staging, warehouse and semantic layer of our enterprise data warehouses and platforms.
+ Engineers and performance tunes SQL, Python, Incorta or Informatica ETLs and pipelines as well as Google BigQuery Dataprocs to move data from a variety of source systems and file types to fit into dimensional data models.
+ Utilizes SQL within Google BigQuery, Informatica ETLs, Incorta pipelines or Oracle SQL Views when necessary to achieve proper metric calculations or derive dimension attributes (an illustrative sketch follows this list).
+ Engineers schedule and orchestration for batch and mini-batch data loads into the enterprise data warehouses and platforms.
+ Provide issue resolution and maintenance for a variety of business unit solutions already existing in the enterprise data warehouses and platforms.
+ Use tools such as SQL, Oracle Business Intelligence, Tableau, Google Cloud Platform, Python, Incorta and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics.
+ Engineers dashboards within the enterprise business intelligence platforms containing reports and visualization that have intelligent user interface design and flow for the business including adequate performance.
+ Ensure data quality, lineage, and governance standards are upheld across all engineered solutions.
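For illustration only, here is a sketch of the kind of dimensional-model metric query this role might run through the BigQuery Python client. The project, dataset, and table names (`fact_shipments`, `dim_product`) and the fill-rate metric are hypothetical assumptions, not Hormel's actual schema, and running it assumes Google Cloud application-default credentials.

```python
# Illustrative only: a hedged dimensional-model query via the BigQuery client.
# Table and project names are hypothetical; credentials are assumed to be set up.
from google.cloud import bigquery

def weekly_fill_rate(project: str = "my-project") -> None:
    client = bigquery.Client(project=project)
    sql = """
        SELECT
          d.product_family,
          DATE_TRUNC(f.ship_date, WEEK) AS ship_week,
          SAFE_DIVIDE(SUM(f.shipped_qty), SUM(f.ordered_qty)) AS fill_rate
        FROM `my-project.supply_chain.fact_shipments` AS f
        JOIN `my-project.supply_chain.dim_product` AS d
          ON f.product_key = d.product_key
        GROUP BY d.product_family, ship_week
        ORDER BY ship_week
    """
    # Stream results row by row; each row exposes the selected fields as attributes.
    for row in client.query(sql).result():
        print(row.product_family, row.ship_week, row.fill_rate)
```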
**Required Qualifications:**
+ A bachelor's degree in Computer Science, MIS, or related area and significant experience with business intelligence, data engineering and data modeling.
+ 5+ years of experience with reading and writing SQL.
+ 5+ years of experience engineering within a data warehouse or related experience with dimensional data modeling.
+ 5+ years of experience designing and developing ETLs/pipelines in Python, Google BigQuery Dataprocs and/or Informatica ETL.
+ 3+ years of experience with data enablement for a Supply Chain Planning platform, preferably o9 solutions.
+ Excellent written and verbal communication skills.
+ Proven ability to gather detailed technical requirements to design and develop data structures supporting business intelligence report solutions from beginning to end.
+ Excellent organizational and time management skills.
+ Tested problem-solving and decision-making skills.
+ A strong pattern of initiative.
+ Highly developed interpersonal and leadership skills.
+ Applicants must not now, or at any time in the future, require employer sponsorship for a work visa.
+ Applicants must be authorized to work in the United States for any employer.
**Preferred Qualifications:**
+ Experience with Supply Chain data, including logistics, inventory, procurement, and manufacturing.
+ Familiarity with ERP systems and supply chain planning tools.
+ 3+ years of experience designing and developing within a business intelligence/reporting tool like Oracle Business Intelligence, Tableau or Google Cloud Platform.
+ Experience with Incorta Data Delivery Platform connecting to Oracle Fusion Cloud applications.
+ Experience working within Google Cloud Platform with services like Dataflow, Datafusion, Pub/Sub, Cloud SQL, Cloud Storage.
+ Experience with Oracle SQL including advanced functions like analytical functions.
+ Experience tuning SQL and ETLs.
+ Experience within a core metadata model (RPD) including the physical, logical and presentation layers for the enterprise business intelligence platform (OBIEE - Oracle Business Intelligence Enterprise Edition).
+ Experience working in Agile environments and familiarity with CI/CD pipelines, version control (Git), and automated testing frameworks is a plus.
**LOCATION:** Austin, MN - Global Headquarters (preferred); Eden Prairie, MN (office location)
**TRAVEL REQUIREMENTS:** Travel may be necessary 10% of the time.
**BENEFITS:** Hormel Foods offers an excellent benefits package. Competitive base salary plus target incentive, discretionary annual merit increase, annual performance review, medical, dental, vision, non-contributory pension, profit sharing, 401(k), stock purchase plan, paid personal time off (PTO), FREE two-year community or technical college tuition for children of employees, relocation assistance and more. On-the-job training, certifications and opportunities to expand skill sets.
At Hormel Foods, base pay is one part of our total compensation package and is determined within a range. The base hiring pay range for this role is between $98,100-$137,300 per year, and your actual base pay within that range will depend upon a variety of factors including, but not limited to, job-related knowledge, skill set, level of experience, and geographic market location.
At Hormel we invite difference and diversity in all aspects. We offer a space of support, understanding, and community. We are committed to the journey! Learn more about our progress here: https://*******************/about/diversity-and-inclusion/
Hormel Foods provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
**Requisition ID** : 32243
Hormel Foods Corporation is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, gender, sexual orientation, gender identity, national origin, disability, or veteran status.
$98.1k-137.3k yearly 20d ago
Sr. Data Engineer
PDS Inc., LLC 3.8
Data scientist job in Rochester, MN
Required Skills & Experience
Microsoft Fabric Expertise: Hands-on experience with Data Warehouse, Data Pipelines & Power BI integration.
Data Modeling: Strong understanding of dimensional/star schemas and semantic model design to support analytics and reporting needs.
ETL/ELT Pipeline Development: Ability to build robust, parameterized, and testable data pipelines for ingestion, transformation, and curation using Fabric Data Pipelines or comparable tools (a minimal pattern is sketched after this list).
Advanced SQL/T-SQL: Proven experience with query optimization, performance tuning, indexing, and complex data transformations.
Programming/Scripting: Proficiency in Python or similar scripting languages for data wrangling, automation, and integration tasks.
Cloud Architecture: Experience designing scalable, secure cloud-based data solutions that align with institutional standards.
Metadata & Governance: Familiarity with metadata management, data lineage, and governance frameworks.
DevOps & IaC: Knowledge of infrastructure-as-code and CI/CD practices for data version control and deployment (e.g., Git, Azure DevOps, Terraform).
Healthcare Data Experience: Experience working with clinical data in a healthcare setting.
Compliance & Security: Understanding of healthcare privacy and security regulations, including HIPAA, PHI handling, and de-identification.
Cross-Functional Collaboration: Demonstrated success partnering with IT, data science, and clinical stakeholders to deliver data-driven solutions.
Communication Skills: Ability to explain technical concepts clearly to both technical and non-technical audiences.
Knowledge Sharing: Commitment to mentoring, documentation, and strengthening internal data engineering capabilities, especially within Microsoft Fabric.
Preferred (Not Required): Data engineering or cloud certifications.
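For illustration only, a minimal pattern for the parameterized, testable transform step mentioned above: a pure function plus a small self-check. The column names, year filter, and identifier-dropping rule are assumptions, not the client's actual curation or de-identification spec.

```python
# Illustrative only: a parameterized, testable transform step (assumed columns).
import pandas as pd

def curate_encounters(raw: pd.DataFrame, min_year: int) -> pd.DataFrame:
    """Keep encounters on/after min_year and drop a direct identifier column."""
    out = raw[raw["encounter_year"] >= min_year].copy()
    return out.drop(columns=["patient_name"], errors="ignore")

def test_curate_encounters() -> None:
    raw = pd.DataFrame({
        "encounter_year": [2019, 2023],
        "patient_name": ["A", "B"],
        "dept": ["cardiology", "oncology"],
    })
    curated = curate_encounters(raw, min_year=2020)
    assert list(curated["dept"]) == ["oncology"]
    assert "patient_name" not in curated.columns

if __name__ == "__main__":
    test_curate_encounters()
    print("transform ok")
```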
Compensation: $60.00-$75.00 hourly, DOE
We look forward to reviewing your application. We encourage everyone to apply - even if you don't check every box for what you are looking for or what is required.
PDSINC, LLC is an Equal Opportunity Employer.
$60-75 hourly 60d+ ago
Senior Data Engineer
Mastech Digital 4.7
Data scientist job in Austin, MN
Job Description
Mastech Digital provides digital and mainstream technology staff as well as Digital Transformation Services for all American Corporations. We are currently seeking a Senior Data Engineer for our client in the Manufacturing domain. We value our professionals, providing comprehensive benefits and the opportunity for growth. This is a Permanent position, and the client is looking for someone to start immediately.
Duration: Full-time
Location: Austin, MN (Onsite)
Salary: $117,000-$164,000/Annually
Role: Senior Data Engineer
Primary Skills: ETL
Role Description: The Senior Data Engineer must have 10+ years of experience, including extensive experience with Supply Chain Management (SCM) and SCM data sets.
Required Experience and Skills:
- Experience in engineering within a data warehouse or related experience with dimensional data modeling.
- Experience designing and developing ETLs/pipelines in Python, Google BigQuery Dataprocs and/or Informatica ETL.
- Experience with Supply Chain data, including logistics, inventory, procurement, and manufacturing.
- Managing a team and handling performance reviews, annual reviews etc.
- Proven ability to gather detailed technical requirements to design and develop data structures supporting business intelligence report solutions from beginning to end.
- Experience with reading and writing SQL.
Key Take Aways and Key Responsibilities for the Project:
- Strong Dimensional Data Modeling experience building DWs is critical.
- Will be overseeing a team of three internal resources (a data analyst and junior data engineers) plus eight external resources.
- Time is split roughly 50% management and 50% development: fixing fires, managing issues, etc.
- The management is not just being a lead; it is direct supervision (e.g., annual reviews). The work is all about Supply Chain data sets: demand planning, sales planning, etc.
- The team is Agile, and this role is responsible for setting up and running the two-week sprints.
Education: Bachelor's degree in Computer Science, Electrical/Electronic Engineering, Information Technology or another related field or Equivalent
Experience: Minimum 10+ years of experience
Relocation: This position will cover relocation expenses
Travel: No
Local Preferred: Yes
Note: Must be able to work on a W2 basis (No C2C)
Recruiter Name: Anshika Pasahan
Recruiter Phone: ************
Benefits:
This is a direct hire position, and the hired applicant will receive our client's benefits package.
Mastech Digital is an Equal Opportunity Employer - All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
$117k-164k yearly 26d ago
Data Scientist
Mayo Clinic 4.8
Data scientist job in Rochester, MN
**Why Mayo Clinic** Mayo Clinic is top-ranked in more specialties than any other care provider according to U.S. News & World Report. As we work together to put the needs of the patient first, we are also dedicated to our employees, investing in competitive compensation and comprehensive benefit plans to take care of you and your family, now and in the future. And with continuing education and advancement opportunities at every turn, you can build a long, successful career with Mayo Clinic.
**Benefits Highlights**
+ Medical: Multiple plan options.
+ Dental: Delta Dental or reimbursement account for flexible coverage.
+ Vision: Affordable plan with national network.
+ Pre-Tax Savings: HSA and FSAs for eligible expenses.
+ Retirement: Competitive retirement package to secure your future.
**Responsibilities**
Join a world-renowned institution where data science directly improves patient lives. At Mayo Clinic, Data Scientists turn complex, heterogeneous data into meaningful insights that enhance clinical care, accelerate scientific discovery, and strengthen the digital future of healthcare.
This role partners closely with Data Science and Informatics faculty and serves as a technical thought leader-shaping the strategy, direction, and impact of data science across the enterprise.
**As a Data Scientist at Mayo Clinic, you will:**
+ **Transform data into insight and insight into action** , spanning the full spectrum from problem formulation and data acquisition to modeling, deployment, and interpretation.
+ Provide **strategic direction for data science and AI** in a specialized domain-such as cancer, surgery, healthcare delivery, marketing, or planning services.
+ Collaborate with enterprise leaders to **advance Mayo Clinic's digital and analytics strategy** .
+ Work side‑by‑side with Informatics and IT teams to build **data‑ and intelligence‑driven systems** that address complex, high‑priority challenges.
+ Recommend best practices for **data collection, integration, and retention** , incorporating technical, operational, and business needs.
+ Support scientific and operational initiatives under the guidance of a senior data scientist or through full independent direction.
+ Design and develop **scripts, tools, and software applications** that enable data extraction, management, and analysis across the organization.
+ Deliver enterprise‑level consultative services, providing analysis and presenting findings to leadership and multidisciplinary stakeholders.
+ In Generative AI, a Data Scientist **will lead and execute Gen AI and agentic approaches**.
**Key Responsibilities**
+ Partner with multidisciplinary teams to create innovative approaches for **data‑driven decision‑making** .
+ Lead exploration of **next‑generation AI/ML approaches** to solve complex analytical problems across diverse domains.
+ Apply deep expertise in data science methods, data types, and scientific challenges to help **shape new products, experiences, and technologies** .
+ Guide and mentor data science staff, ensuring high‑quality analysis, unbiased recommendations, and alignment with strategic priorities.
+ Develop **analytics tools and solutions** that can be used effectively by non‑technical staff.
**Qualifications**
- **PhD** in a domain-relevant field (mathematics, computer science, statistics, physics, engineering, data science, health science, or a related discipline), **plus**
- **At least four years** of experience in data science, machine learning, AI, or informatics
**What You Bring**
+ A blend of **deep technical expertise and strong business acumen** , with a proven ability to lead technical or quantitative teams.
+ Demonstrated success developing **predictive and prescriptive models** using advanced statistical modeling, machine learning, or data mining (a brief illustration follows this list).
+ Experience applying **problem‑solving frameworks** , planning methods, continuous improvement approaches, and project management techniques.
+ Ability to independently manage multiple high‑impact projects in a dynamic environment, staying current with healthcare trends and enterprise priorities.
+ Exceptional interpersonal skills- **presentation, negotiation, persuasion, and written communication** .
+ Strong time‑management skills and the ability to prioritize, organize, and delegate effectively.
+ Expertise in **scientific computing** , data management packages, data modeling, and data exploration tools.
+ A consulting mindset: the ability to identify challenges, recommend solutions, deploy analytics tools, and support non‑technical users.
+ Demonstrated initiative in areas such as training, software development, education, and technical documentation.
+ Proven ability to provide **vision and strategic direction** at the departmental, institutional, or enterprise level.
+ Publications in high-impact journals preferred.
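For illustration only, a minimal predictive-modeling workflow of the kind referenced above, using scikit-learn on a synthetic dataset; the features, labels, and metric choice are assumptions rather than an actual Mayo Clinic workflow.

```python
# Illustrative only: fit a simple risk model on synthetic data and report AUROC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]  # predicted risk scores for the held-out set
print(f"held-out AUROC: {roc_auc_score(y_te, probs):.3f}")
```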
**Exemption Status**
Exempt
**Compensation Detail**
The compensation range is $165,276.80 - $247,998.40 annually (salaried). This vacancy is not eligible for sponsorship, and we will not sponsor or transfer visas for this position.
**Benefits Eligible**
Yes
**Schedule**
Full Time
**Hours/Pay Period**
80
**Schedule Details**
Monday - Friday, 8am - 5pm CST. Some on-site travel may be required for retreats or meetings.
**Weekend Schedule**
As needed.
**International Assignment**
No
**Site Description**
Just as our reputation has spread beyond our Minnesota roots, so have our locations. Today, our employees are located at our three major campuses in Phoenix/Scottsdale, Arizona; Jacksonville, Florida; and Rochester, Minnesota; at Mayo Clinic Health System campuses throughout Midwestern communities; and at our international locations. Each Mayo Clinic location is a special place where our employees thrive in both their work and personal lives. Learn more about what each unique Mayo Clinic campus has to offer, and where your best fit is.
**Equal Opportunity**
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, protected veteran status or disability status. Learn more about "EOE is the Law." Mayo Clinic participates in E-Verify and may provide the Social Security Administration and, if necessary, the Department of Homeland Security with information from each new employee's Form I-9 to confirm work authorization.
**Recruiter**
Briana Priniski
**Equal opportunity**
As an Affirmative Action and Equal Opportunity Employer Mayo Clinic is committed to creating an inclusive environment that values the diversity of its employees and does not discriminate against any employee or candidate. Women, minorities, veterans, people from the LGBTQ communities and people with disabilities are strongly encouraged to apply to join our teams. Reasonable accommodations to access job openings or to apply for a job are available.
$71k-112k yearly est. 35d ago
Data Scientist
Mayo Healthcare 4.0
Data scientist job in Rochester, MN
Join a world-renowned institution where data science directly improves patient lives. At Mayo Clinic, Data Scientists turn complex, heterogeneous data into meaningful insights that enhance clinical care, accelerate scientific discovery, and strengthen the digital future of healthcare.
This role partners closely with Data Science and Informatics faculty and serves as a technical thought leader-shaping the strategy, direction, and impact of data science across the enterprise.
As a Data Scientist at Mayo Clinic, you will:
Transform data into insight and insight into action, spanning the full spectrum from problem formulation and data acquisition to modeling, deployment, and interpretation.
Provide strategic direction for data science and AI in a specialized domain-such as cancer, surgery, healthcare delivery, marketing, or planning services.
Collaborate with enterprise leaders to advance Mayo Clinic's digital and analytics strategy.
Work side‑by‑side with Informatics and IT teams to build data‑ and intelligence‑driven systems that address complex, high‑priority challenges.
Recommend best practices for data collection, integration, and retention, incorporating technical, operational, and business needs.
Support scientific and operational initiatives under the guidance of a senior data scientist or through full independent direction.
Design and develop scripts, tools, and software applications that enable data extraction, management, and analysis across the organization.
Deliver enterprise‑level consultative services, providing analysis and presenting findings to leadership and multidisciplinary stakeholders.
In Generative AI, a Data Scientist will lead and execute Gen AI and agentic approaches.
Key Responsibilities
Partner with multidisciplinary teams to create innovative approaches for data‑driven decision‑making.
Lead exploration of next‑generation AI/ML approaches to solve complex analytical problems across diverse domains.
Apply deep expertise in data science methods, data types, and scientific challenges to help shape new products, experiences, and technologies.
Guide and mentor data science staff, ensuring high‑quality analysis, unbiased recommendations, and alignment with strategic priorities.
Develop analytics tools and solutions that can be used effectively by non‑technical staff.
• PhD in a domain relevant field (mathematics, computer science, statistics, physics, engineering, data science, health science, or a related discipline) Plus
• At least four years of experience in data science, machine learning, AI, or informatics
What You Bring
A blend of deep technical expertise and strong business acumen, with a proven ability to lead technical or quantitative teams.
Demonstrated success developing predictive and prescriptive models using advanced statistical modeling, machine learning, or data mining.
Experience applying problem‑solving frameworks, planning methods, continuous improvement approaches, and project management techniques.
Ability to independently manage multiple high‑impact projects in a dynamic environment, staying current with healthcare trends and enterprise priorities.
Exceptional interpersonal skills-presentation, negotiation, persuasion, and written communication.
Strong time‑management skills and the ability to prioritize, organize, and delegate effectively.
Expertise in scientific computing, data management packages, data modeling, and data exploration tools.
A consulting mindset: the ability to identify challenges, recommend solutions, deploy analytics tools, and support non‑technical users.
Demonstrated initiative in areas such as training, software development, education, and technical documentation.
Proven ability to provide vision and strategic direction at the departmental, institutional, or enterprise level.
Publications in high-impact journals preferred.
$66k-98k yearly est. Auto-Apply 7d ago
Data Engineer Supply Chain Planning - IT - Hormel Foods (Austin, MN)
Hormel Foods Corp 4.6
Data scientist job in Austin, MN
JobID: 32243. Schedule: Full time. Company Name: Hormel Foods Corporation. DATA ENGINEER SUPPLY CHAIN PLANNING - IT - CORPORATE OFFICE (AUSTIN, MN). To save time applying, Hormel Foods does not offer sponsorship of job applicants for employment-based visas for this position at this time.
Hormel Foods Corporation
ABOUT HORMEL FOODS - Inspired People. Inspired Food.
Hormel Foods Corporation, based in Austin, Minnesota, is a global branded food company with approximately $12 billion in annual revenue across more than 80 countries worldwide. Its brands include Planters, Skippy, SPAM, Hormel Natural Choice, Applegate, Justin's, Wholly, Hormel Black Label, Columbus, Jennie-O and more than 30 other beloved brands. The company is a member of the S&P 500 Index and the S&P 500 Dividend Aristocrats, was named one of the best companies to work for by U.S. News & World Report, one of America's most responsible companies by Newsweek, recognized by TIME magazine as one of the World's Best Companies, and has received numerous other awards and accolades for its corporate responsibility and community service efforts. The company lives by its purpose statement - Inspired People. Inspired Food. - to bring some of the world's most trusted and iconic brands to tables across the globe. For more information, visit ********************
Summary:
Join Hormel Foods in transforming supply chain analytics through data engineering excellence. Your work will directly impact operational efficiency, sustainability, and customer satisfaction, while supporting various transformation and modernization initiatives.
We are looking for a Data Engineer specializing in supply chain planning within our Data and Analytics team. This is an exciting opportunity to help grow and modernize analytics at Hormel Foods! Individuals interested in this position will need strong communication skills, communicating up, down and across the organization. You will be responsible for managing simultaneous initiatives that require innovative problem solving.
This role is critical in enabling data-driven decisions across supply planning and demand planning, helping the organization optimize inventory, reduce waste, and improve service levels. It also offers opportunities for growth into data architecture, advanced analytics, and leadership roles within the Data & Analytics team.
You will use tools such as Google Cloud Platform, SQL, Python, Incorta, Oracle Business Intelligence, Tableau, and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics. Additionally, you will engineer reports, dashboards and visualizations using enterprise business intelligence tools (Oracle and Tableau).
Specific Competencies Include:
* Data Structures and Models - Designs and develops the overall database/data warehouse structure based on functional and technical requirements. Designs and develops data collection frameworks for structured and unstructured data.
* Data Pipelines and ELT - Designs and applies data extraction, loading and transformation techniques in order to connect large data sets from a variety of sources.
* Data Performance - Troubleshoots and fixes data performance issues that come with querying and combining large volumes of data. Tunes solution performance accordingly during initial development.
* Visualizations and Dashboards - Gathers requirements, then designs and develops reports and dashboards drawing on multiple sources to meet business needs. Leverages visualizations where possible to speed insight discovery. Walks the business through data definitions.
Responsibilities:
* Works closely with Supply Chain Planning functions, data scientists, BI analysts, and other stakeholders to understand data needs and deliver solutions.
* Engineers physical and logical data models for dimensions and facts within the staging, warehouse and semantic layer of our enterprise data warehouses and platforms.
* Engineers and performance tunes SQL, Python, Incorta or Informatica ETLs and pipelines as well as Google BigQuery Dataprocs to move data from a variety of source systems and file types to fit into dimensional data models.
* Utilizes SQL within Google BigQuery, Informatica ETLs, Incorta pipelines or Oracle SQL Views when necessary to achieve proper metric calculations or derive dimension attributes.
* Engineers schedule and orchestration for batch and mini-batch data loads into the enterprise data warehouses and platforms.
* Provide issue resolution and maintenance for a variety of business unit solutions already existing in the enterprise data warehouses and platforms.
* Use tools such as SQL, Oracle Business Intelligence, Tableau, Google Cloud Platform, Python, Incorta and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics.
* Engineers dashboards within the enterprise business intelligence platforms containing reports and visualization that have intelligent user interface design and flow for the business including adequate performance.
* Ensure data quality, lineage, and governance standards are upheld across all engineered solutions.
Required Qualifications:
* A bachelor's degree in Computer Science, MIS, or related area and significant experience with business intelligence, data engineering and data modeling.
* 5+ years of experience with reading and writing SQL.
* 5+ years of experience engineering within a data warehouse or related experience with dimensional data modeling.
* 5+ years of experience designing and developing ETLs/pipelines in Python, Google BigQuery Dataprocs and/or Informatica ETL.
* 3+ years of experience with data enablement for a Supply Chain Planning platform, preferably o9 solutions.
* Excellent written and verbal communication skills.
* Proven ability to gather detailed technical requirements to design and develop data structures supporting business intelligence report solutions from beginning to end.
* Excellent organizational and time management skills.
* Tested problem-solving and decision-making skills.
* A strong pattern of initiative.
* Highly developed interpersonal and leadership skills.
* Applicants must not now, or at any time in the future, require employer sponsorship for a work visa.
* Applicants must be authorized to work in the United States for any employer.
Preferred Qualifications:
* Experience with Supply Chain data, including logistics, inventory, procurement, and manufacturing.
* Familiarity with ERP systems and supply chain planning tools.
* 3+ years of experience designing and developing within a business intelligence/reporting tool like Oracle Business Intelligence, Tableau or Google Cloud Platform.
* Experience with Incorta Data Delivery Platform connecting to Oracle Fusion Cloud applications.
* Experience working within Google Cloud Platform with services like Dataflow, Datafusion, Pub/Sub, Cloud SQL, Cloud Storage.
* Experience with Oracle SQL including advanced functions like analytical functions.
* Experience tuning SQL and ETLs.
* Experience within a core metadata model (RPD) including the physical, logical and presentation layers for the enterprise business intelligence platform (OBIEE - Oracle Business Intelligence Enterprise Edition).
* Experience working in Agile environments and familiarity with CI/CD pipelines, version control (Git), and automated testing frameworks is a plus.
LOCATION: Austin, MN - Global Headquarters (preferred); Eden Prairie, MN (office location)
TRAVEL REQUIREMENTS: Travel may be necessary 10% of the time.
BENEFITS: Hormel Foods offers an excellent benefits package. Competitive base salary plus target incentive, discretionary annual merit increase, annual performance review, medical, dental, vision, non-contributory pension, profit sharing, 401(k), stock purchase plan, paid personal time off (PTO), FREE two-year community or technical college tuition for children of employees, relocation assistance and more. On-the-job training, certifications and opportunities to expand skill sets.
At Hormel Foods, base pay is one part of our total compensation package and is determined within a range. The base hiring pay range for this role is between $98,100-$137,300 per year, and your actual base pay within that range will depend upon a variety of factors including, but not limited to, job-related knowledge, skill set, level of experience, and geographic market location.
At Hormel we invite difference and diversity in all aspects. We offer a space of support, understanding, and community. We are committed to the journey! Learn more about our progress here: **********************************************************
Hormel Foods provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
$98.1k-137.3k yearly 20d ago
Data Scientist
Mayo Clinic Health System 4.8
Data scientist job in Rochester, MN
Why Mayo Clinic Mayo Clinic is top-ranked in more specialties than any other care provider according to U.S. News & World Report. As we work together to put the needs of the patient first, we are also dedicated to our employees, investing in competitive compensation and comprehensive benefit plans - to take care of you and your family, now and in the future. And with continuing education and advancement opportunities at every turn, you can build a long, successful career with Mayo Clinic.
Benefits Highlights
* Medical: Multiple plan options.
* Dental: Delta Dental or reimbursement account for flexible coverage.
* Vision: Affordable plan with national network.
* Pre-Tax Savings: HSA and FSAs for eligible expenses.
* Retirement: Competitive retirement package to secure your future.
Responsibilities
Join a world-renowned institution where data science directly improves patient lives. At Mayo Clinic, Data Scientists turn complex, heterogeneous data into meaningful insights that enhance clinical care, accelerate scientific discovery, and strengthen the digital future of healthcare.
This role partners closely with Data Science and Informatics faculty and serves as a technical thought leader-shaping the strategy, direction, and impact of data science across the enterprise.
As a Data Scientist at Mayo Clinic, you will:
* Transform data into insight and insight into action, spanning the full spectrum from problem formulation and data acquisition to modeling, deployment, and interpretation.
* Provide strategic direction for data science and AI in a specialized domain-such as cancer, surgery, healthcare delivery, marketing, or planning services.
* Collaborate with enterprise leaders to advance Mayo Clinic's digital and analytics strategy.
* Work side‑by‑side with Informatics and IT teams to build data‑ and intelligence‑driven systems that address complex, high‑priority challenges.
* Recommend best practices for data collection, integration, and retention, incorporating technical, operational, and business needs.
* Support scientific and operational initiatives under the guidance of a senior data scientist or through full independent direction.
* Design and develop scripts, tools, and software applications that enable data extraction, management, and analysis across the organization.
* Deliver enterprise‑level consultative services, providing analysis and presenting findings to leadership and multidisciplinary stakeholders.
* In Generative AI, a Data Scientist will lead and execute Gen AI and agentic approaches
Key Responsibilities
* Partner with multidisciplinary teams to create innovative approaches for data‑driven decision‑making.
* Lead exploration of next‑generation AI/ML approaches to solve complex analytical problems across diverse domains.
* Apply deep expertise in data science methods, data types, and scientific challenges to help shape new products, experiences, and technologies.
* Guide and mentor data science staff, ensuring high‑quality analysis, unbiased recommendations, and alignment with strategic priorities.
* Develop analytics tools and solutions that can be used effectively by non‑technical staff.
Qualifications
* PhD in a domain relevant field (mathematics, computer science, statistics, physics, engineering, data science, health science, or a related discipline) Plus
* At least four years of experience in data science, machine learning, AI, or informatics
What You Bring
* A blend of deep technical expertise and strong business acumen, with a proven ability to lead technical or quantitative teams.
* Demonstrated success developing predictive and prescriptive models using advanced statistical modeling, machine learning, or data mining.
* Experience applying problem‑solving frameworks, planning methods, continuous improvement approaches, and project management techniques.
* Ability to independently manage multiple high‑impact projects in a dynamic environment, staying current with healthcare trends and enterprise priorities.
* Exceptional interpersonal skills-presentation, negotiation, persuasion, and written communication.
* Strong time‑management skills and the ability to prioritize, organize, and delegate effectively.
* Expertise in scientific computing, data management packages, data modeling, and data exploration tools.
* A consulting mindset: the ability to identify challenges, recommend solutions, deploy analytics tools, and support non‑technical users.
* Demonstrated initiative in areas such as training, software development, education, and technical documentation.
* Proven ability to provide vision and strategic direction at the departmental, institutional, or enterprise level.
* Publications in high-impact journals preferred.
Exemption Status
Exempt
Compensation Detail
The compensation range is $165,276.80 - $247,998.40 annually (salaried). This vacancy is not eligible for sponsorship, and we will not sponsor or transfer visas for this position.
Benefits Eligible
Yes
Schedule
Full Time
Hours/Pay Period
80
Schedule Details
Monday - Friday, 8am - 5pm CST. Some on-site travel may be required for retreats or meetings.
Weekend Schedule
As needed.
International Assignment
No
Site Description
Just as our reputation has spread beyond our Minnesota roots, so have our locations. Today, our employees are located at our three major campuses in Phoenix/Scottsdale, Arizona, Jacksonville, Florida, Rochester, Minnesota, and at Mayo Clinic Health System campuses throughout Midwestern communities, and at our international locations. Each Mayo Clinic location is a special place where our employees thrive in both their work and personal lives. Learn more about what each unique Mayo Clinic campus has to offer, and where your best fit is.
Equal Opportunity
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, protected veteran status or disability status. Learn more about the 'EOE is the Law'. Mayo Clinic participates in E-Verify and may provide the Social Security Administration and, if necessary, the Department of Homeland Security with information from each new employee's Form I-9 to confirm work authorization.
Recruiter
Briana Priniski
$71k-112k yearly est. 5d ago
Data Science AI&I Intern - PhD degree level (On-site)
Mayo Healthcare 4.0
Data scientist job in Rochester, MN
Functions as an artificial intelligence and informatics researcher by applying data science, artificial intelligence and informatics, and machine learning expertise to the design, implementation, analysis, interpretation, and reporting of research, clinical, and administrative studies.
Working towards a PhD degree in data science, computer science, mathematics/statistics, informatics, computer engineering, or related field. Research experience in artificial intelligence, machine learning, deep learning, natural language processing, large language models, and data science is highly desired. Must have excellent analytical and problem-solving expertise, along with exceptional written and oral communication skills. Must be highly motivated, willing to learn, and demonstrate initiative in assigned tasks.
Notes: Relocation assistance is not available for this position. Mayo Clinic will not sponsor or transfer a visa for this position, which includes F1 OPT STEM. Must be a U.S. citizen, permanent resident, refugee or asylee.
$37k-50k yearly est. Auto-Apply 7d ago
Data Engineer Supply Chain Planning - IT - Hormel Foods (Austin, MN)
Hormel Foods 4.6
Data scientist job in Austin, MN
DATA ENGINEER SUPPLY CHAIN PLANNING - IT - CORPORATE OFFICE (AUSTIN, MN)
To save you time in applying, please note that Hormel Foods does not offer sponsorship of employment-based visas for this position at this time.
Hormel Foods Corporation
ABOUT HORMEL FOODS -
Inspired People. Inspired Food.™
Hormel Foods Corporation, based in Austin, Minnesota, is a global branded food company with approximately $12 billion in annual revenue across more than 80 countries worldwide. Its brands include Planters, Skippy, SPAM, Hormel Natural Choice, Applegate, Justin's, Wholly, Hormel Black Label, Columbus, Jennie-O, and more than 30 other beloved brands. The company is a member of the S&P 500 Index and the S&P 500 Dividend Aristocrats, was named one of the best companies to work for by U.S. News & World Report, one of America's most responsible companies by Newsweek, recognized by TIME magazine as one of the World's Best Companies, and has received numerous other awards and accolades for its corporate responsibility and community service efforts. The company lives by its purpose statement - Inspired People. Inspired Food.™ - to bring some of the world's most trusted and iconic brands to tables across the globe. For more information, visit ********************
Summary:
Join Hormel Foods in transforming supply chain analytics through data engineering excellence. Your work will directly impact operational efficiency, sustainability, and customer satisfaction, while supporting various transform and modernize initiatives.
We are looking for a Data Engineer specializing in supply chain planning within our Data and Analytics team. This is an exciting opportunity to help grow and modernize analytics at Hormel Foods! Individuals interested in this position will need strong communication skills and the ability to communicate up, down, and across the organization. You will be responsible for managing simultaneous initiatives that require innovative problem-solving.
This role is critical in enabling data-driven decisions across supply planning and demand planning, helping the organization optimize inventory, reduce waste, and improve service levels. It also offers opportunities for growth into data architecture, advanced analytics, and leadership roles within the Data & Analytics team.
You will use tools such as Google Cloud Platform, SQL, Python, Incorta, Oracle Business Intelligence, Tableau, and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics. Additionally, you will engineer reports, dashboards and visualizations using enterprise business intelligence tools (Oracle and Tableau).
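As a purely illustrative sketch of the kind of pipeline engineering this paragraph describes, the example below lands a hypothetical supply-chain extract from Cloud Storage in a BigQuery staging table and then merges it into a fact table, using only the BigQuery Python client. Every project, dataset, table, bucket, and column name is invented for illustration; this is not Hormel's actual environment, schema, or tooling.

```python
# Hypothetical sketch: land a supply-chain extract in BigQuery staging,
# then merge it into a dimensional fact table. All names are illustrative only.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumed project id

staging_table = "example-project.supply_chain_stg.shipments_raw"
fact_table = "example-project.supply_chain_dw.fact_shipments"

# 1) Extract/Load: pull a CSV extract from Cloud Storage into the staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/extracts/shipments_2024-01-01.csv",
    staging_table,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    ),
)
load_job.result()  # wait for the load to finish

# 2) Transform: merge staging rows into the fact table keyed on shipment_id.
merge_sql = f"""
MERGE `{fact_table}` f
USING `{staging_table}` s
ON f.shipment_id = s.shipment_id
WHEN MATCHED THEN
  UPDATE SET f.quantity = s.quantity, f.ship_date = s.ship_date
WHEN NOT MATCHED THEN
  INSERT (shipment_id, item_key, location_key, ship_date, quantity)
  VALUES (s.shipment_id, s.item_key, s.location_key, s.ship_date, s.quantity)
"""
client.query(merge_sql).result()
```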
Specific Competencies Include:
Data Structures and Models - Designs and develops the overall database/data warehouse structure based on functional and technical requirements. Designs and develops data collection frameworks for structured and unstructured data.
Data Pipelines and ELT - Designs and applies data extraction, loading and transformation techniques in order to connect large data sets from a variety of sources.
Data Performance - Troubleshoots and fixes data performance issues that come with querying and combining large volumes of data. Tunes the solution's performance accordingly during initial development.
Visualizations and Dashboards - Gathers requirements, designs and develops reports and dashboards with multiple sources that meet business needs. Leverages visualizations when possible to increase speed to identifying an insight. Walks business through definitions of data.
Responsibilities:
Works closely with Supply Chain Planning functions, data scientists, BI analysts, and other stakeholders to understand data needs and deliver solutions.
Engineers physical and logical data models for dimensions and facts within the staging, warehouse and semantic layer of our enterprise data warehouses and platforms.
Engineers and performance tunes SQL, Python, Incorta or Informatica ETLs and pipelines as well as Google BigQuery Dataprocs to move data from a variety of source systems and file types to fit into dimensional data models.
Utilizes SQL within Google BigQuery, Informatica ETLs, Incorta pipelines or Oracle SQL Views when necessary to achieve proper metric calculations or derive dimension attributes.
Engineers schedule and orchestration for batch and mini-batch data loads into the enterprise data warehouses and platforms.
Provides issue resolution and maintenance for a variety of business unit solutions already existing in the enterprise data warehouses and platforms.
Uses tools such as SQL, Oracle Business Intelligence, Tableau, Google Cloud Platform, Python, Incorta and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics.
Engineers dashboards within the enterprise business intelligence platforms containing reports and visualizations that have intelligent user interface design and flow for the business, including adequate performance.
Ensures data quality, lineage, and governance standards are upheld across all engineered solutions.
Required Qualifications:
A bachelor's degree in Computer Science, MIS, or related area and significant experience with business intelligence, data engineering and data modeling.
5+ years of experience with reading and writing SQL.
5+ years of experience engineering within a data warehouse or related experience with dimensional data modeling.
5+ years of experience designing and developing ETLs/pipelines in Python, Google BigQuery Dataprocs and/or Informatica ETL.
3+ years of experience with data enablement for a Supply Chain Planning platform, preferably o9 solutions.
Excellent written and verbal communication skills.
Proven ability to gather detailed technical requirements to design and develop data structures supporting business intelligence report solutions from beginning to end.
Excellent organizational and time management skills.
Tested problem-solving and decision-making skills.
A strong pattern of initiative.
Highly developed interpersonal and leadership skills.
Applicants must not now, or at any time in the future, require employer sponsorship for a work visa.
Applicants must be authorized to work in the United States for any employer.
Preferred Qualifications:
Experience with Supply Chain data, including logistics, inventory, procurement, and manufacturing.
Familiarity with ERP systems and supply chain planning tools.
3+ years of experience designing and developing within a business intelligence/reporting tool like Oracle Business Intelligence, Tableau or Google Cloud Platform.
Experience with Incorta Data Delivery Platform connecting to Oracle Fusion Cloud applications.
Experience working within Google Cloud Platform with services like Dataflow, Datafusion, Pub/Sub, Cloud SQL, Cloud Storage.
Experience with Oracle SQL including advanced functions like analytical functions.
Experience tuning SQL and ETLs.
Experience within a core metadata model (RPD) including the physical, logical and presentation layers for the enterprise business intelligence platform (OBIEE - Oracle Business Intelligence Enterprise Edition).
Experience working in Agile environments and familiarity with CI/CD pipelines, version control (Git), and automated testing frameworks is a plus.
LOCATION: Austin, MN - Global Headquarters (preferred); Eden Prairie, MN (office location)
TRAVEL REQUIREMENTS: Travel may be necessary 10% of the time.
BENEFITS: Hormel Foods offers an excellent benefits package. Competitive base salary plus target incentive, discretionary annual merit increase, annual performance review, medical, dental, vision, non-contributory pension, profit sharing, 401(k), stock purchase plan, paid personal time off (PTO), FREE two-year community or technical college tuition for children of employees, relocation assistance and more. On-the-job training, certifications and opportunities to expand skill sets.
At Hormel Foods, base pay is one part of our total compensation package and is determined within a range. The base hiring pay range for this role is between $98,100-$137,300 per year, and your actual base pay within that range will depend upon a variety of factors including, but not limited to, job-related knowledge, skill set, level of experience, and geographic market location.
At Hormel we invite difference and diversity in all aspects. We offer a space of support, understanding, and community. We are committed to the journey! Learn more about our progress here: **********************************************************
Hormel Foods provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
$98.1k-137.3k yearly Auto-Apply 20d ago
Data Scientist - Research Sovereign AI
Mayo Clinic Health System 4.8
Data scientist job in Rochester, MN
Why Mayo Clinic Mayo Clinic is top-ranked in more specialties than any other care provider according to U.S. News & World Report. As we work together to put the needs of the patient first, we are also dedicated to our employees, investing in competitive compensation and comprehensive benefit plans - to take care of you and your family, now and in the future. And with continuing education and advancement opportunities at every turn, you can build a long, successful career with Mayo Clinic.
Benefits Highlights
* Medical: Multiple plan options.
* Dental: Delta Dental or reimbursement account for flexible coverage.
* Vision: Affordable plan with national network.
* Pre-Tax Savings: HSA and FSAs for eligible expenses.
* Retirement: Competitive retirement package to secure your future.
Responsibilities
Data Scientist - Foundational Model Science
Position Summary
The Data Scientist for Foundational Model Science is the senior technical leader and lead scientist responsible for designing, training, and governing Mayo's multimodal foundational model. This model forms the core intelligence layer used by clinical departments, researchers, agentic workflows, and sovereign AI collaborations. The individual will work as a hands-on architect, model-builder, and researcher while acting as a player-coach, guiding strategy and building a future team.
Key Responsibilities
Scientific & Technical Leadership
* Design multimodal foundational model architectures integrating signals from imaging, text, waveforms, structured data, graph representations, and temporal embeddings.
* Develop fusion, alignment, and cross-modal reasoning mechanisms (early fusion, late fusion, token-level fusion, hybrid models).
* Define and implement methods for grounded clinical reasoning, retrieval-augmented inference, graph-augmented attention, and chain-of-thought verification.
* Establish protocols for model lifecycle governance, safe update cycles, drift-aware re-training, and provenance tracking.
Hands-On Modeling & Training
* Train large-scale deep learning models, including multimodal architectures and domain-specific transformer-based systems, on real clinical datasets.
* Fine-tune and adapt large language models (LLMs) for clinical reasoning, summarization, question answering, agentic behavior, and instruction-following tasks.
* Build retrieval-augmented pipelines using embeddings, vector stores, graph traversal, and clinically grounded context construction (see the illustrative sketch at the end of this list).
* Develop evaluation methods for reasoning quality, temporal prediction accuracy, multimodal synergy, ablation-based robustness, and counterfactual behavior.
* Create reference-grounded training datasets, structured reasoning tasks, and multimodal benchmarks to evaluate model performance.
* Conduct hands-on experimentation with optimization strategies, large-scale distributed training, model quantization, and inference acceleration.
* Implement uncertainty modeling, selective prediction, abstention mechanisms, and clinically meaningful risk thresholds.
* Build interpretable reasoning pathways, cross-modal attribution maps, and reference-grounded explanations.
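To make the retrieval-augmented pipeline item above concrete, here is a minimal, generic sketch of embedding-based retrieval and grounded context construction. The `embed()` function is a stand-in for any text-embedding model and the tiny in-memory corpus is invented; nothing here reflects Mayo Clinic's actual models, vector stores, or data.

```python
# Minimal, hypothetical retrieval-augmented context builder.
# embed() is a stand-in for a real embedding model; the corpus is illustrative.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash the text into a seed and draw a dense vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

corpus = [
    "Guideline excerpt A on anticoagulation monitoring.",
    "Radiology note template B.",
    "Lab reference ranges C.",
]
corpus_vecs = np.stack([embed(doc) for doc in corpus])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar documents by cosine similarity."""
    q = embed(query)
    sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q))
    top = np.argsort(-sims)[:k]
    return [corpus[i] for i in top]

def build_prompt(question: str) -> str:
    """Construct a grounded prompt: retrieved context followed by the question."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."

print(build_prompt("How should anticoagulation be monitored?"))
```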
Cross-functional Collaboration
* Work closely with the Representation team to ensure representation-model alignment.
* Partner with clinical SMEs to encode domain reasoning into reinforcement learning, preference optimization, or rule-guided behaviors.
Team Leadership
* Serve as the future founding technical lead of the Foundational Model Science Program.
* Mentor scientists and engineers and eventually build a specialty modeling team.
Qualifications
Required
* PhD in Machine Learning, Computer Science, Applied Mathematics, or a related discipline, with at least four years of experience in informatics, artificial intelligence, data science, and/or machine learning.
* Experience with generative modeling, reasoning models, or multimodal foundation models.
* Expertise in alignment methods (contrastive learning, RLHF/RLCS, preference optimization).
* Experience with distributed training and large-scale compute.
Preferred
* Experience with clinical or EMR data across multiple modalities.
* 7+ years of experience training deep learning models, including transformers or multimodal architectures.
* Experience defining evaluation frameworks for reasoning, multimodal synergy, reliability, or fairness.
* Publications in multimodal learning, foundation models, or reasoning architectures.
Exemption Status
Exempt
Compensation Detail
Education, experience, and tenure may be considered along with internal equity when job offers are extended. $165,276 - $247,998 annually.
Benefits Eligible
Yes
Schedule
Full Time
Hours/Pay Period
80
Schedule Details
Monday - Friday. Regular day hours.
Weekend Schedule
None expected.
International Assignment
No
Site Description
Just as our reputation has spread beyond our Minnesota roots, so have our locations. Today, our employees are located at our three major campuses in Phoenix/Scottsdale, Arizona; Jacksonville, Florida; and Rochester, Minnesota; at Mayo Clinic Health System campuses throughout Midwestern communities; and at our international locations. Each Mayo Clinic location is a special place where our employees thrive in both their work and personal lives. Learn more about what each unique Mayo Clinic campus has to offer, and where your best fit is.
Equal Opportunity
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, protected veteran status or disability status. Learn more about the 'EOE is the Law'. Mayo Clinic participates in E-Verify and may provide the Social Security Administration and, if necessary, the Department of Homeland Security with information from each new employee's Form I-9 to confirm work authorization.
Recruiter
Jill Squier
$71k-112k yearly est. 21d ago
Staff Data Engineer - IT (Sales & Marketing) Hormel Foods (Austin, MN, Eden Prairie, MN)
Hormel Foods 4.6
Data scientist job in Austin, MN
**STAFF DATA ENGINEER - IT (Sales & Marketing) - HORMEL FOODS (AUSTIN, MN, EDEN PRAIRIE, MN)**
To save you time in applying, please note that Hormel Foods does not offer sponsorship of employment-based visas for this position at this time.
**Hormel Foods Corporation**
**ABOUT HORMEL FOODS -** **_Inspired People. Inspired Food._**
Hormel Foods Corporation, based in Austin, Minn., is a global branded food company with over $12 billion in annual revenue across more than 80 countries worldwide. Its brands include _SKIPPY_, _SPAM_, _Hormel Natural Choice_, _Applegate_, _Justin's_, _Wholly_, _Hormel Black Label_, _Columbus_ and more than 30 other beloved brands. The company is a member of the S&P 500 Index and the S&P 500 Dividend Aristocrats, was named on the "Global 2000 World's Best Employers" list by Forbes magazine for three straight years, is one of Fortune magazine's most admired companies, has appeared on Corporate Responsibility Magazine's "The 100 Best Corporate Citizens" list for the 12th year in a row, and has received numerous other awards and accolades for its corporate responsibility and community service efforts. The company lives by its purpose statement - _Inspired People. Inspired Food._ - to bring some of the world's most trusted and iconic brands to tables across the globe. For more information, visit ******************* and ***************************.
**Summary:**
We are looking for a Staff Data Engineer within the Sales and Marketing domain as part of our Data and Analytics team. This is an exciting opportunity to help grow and modernize analytics at Hormel Foods! The ideal candidate will have strong communication skills and the ability to collaborate across multiple levels of the organization. You will manage multiple initiatives that require creative problem-solving.
You will use tools such as Google Cloud Platform, SQL, Python, Incorta, Oracle Business Intelligence, and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics. Additionally, you will engineer reports, dashboards and visualizations using enterprise business intelligence tools (Oracle, Power BI and Tableau).
**Specific competencies include:**
+ **Data Structures and Models -** Develops the overall database/data warehouse structure based on functional and technical requirements. Develops data collection frameworks for mainly structured and sometimes unstructured data. (See the illustrative sketch after this list.)
+ **Data Pipelines and ELT -** Applies data extraction, loading and transformation techniques in order to connect medium to large data sets from a variety of sources.
+ **Data Performance -** With minimal guidance, troubleshoots and fixes data performance issues that come with querying and combining medium to large volumes of data. Tests for scenarios affecting performance during initial development.
+ **Visualizations and Dashboards -** Designs and develops reports and dashboards that meet business needs. Leverages visualizations when possible to increase speed to identifying an insight.
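As a hypothetical illustration of the dimensions-and-facts modeling named in the first competency above, the sketch below builds a tiny star schema in pandas: a surrogate-keyed product dimension and a fact table that references it. The table, column names, and sample rows are invented and do not describe Hormel's warehouse.

```python
# Hypothetical star-schema sketch: surrogate-keyed dimension + fact table in pandas.
import pandas as pd

# Raw extract as it might arrive from a source system (illustrative data only).
raw = pd.DataFrame({
    "order_id":   [1001, 1002, 1003],
    "product":    ["Peanut Butter", "Chili", "Peanut Butter"],
    "order_date": ["2024-01-05", "2024-01-06", "2024-01-07"],
    "quantity":   [10, 4, 7],
})

# Product dimension: one row per distinct product, with a surrogate key.
dim_product = (
    raw[["product"]].drop_duplicates().reset_index(drop=True)
       .rename_axis("product_key").reset_index()
)

# Fact table: measures plus the foreign key into the dimension.
fact_orders = (
    raw.merge(dim_product, on="product")
       [["order_id", "product_key", "order_date", "quantity"]]
)

print(dim_product)
print(fact_orders)
```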
**Responsibilities:**
+ Collaborate with Sales & Marketing team members, data scientists, BI analysts, and other stakeholders to understand data needs and deliver solutions.
+ Develop the overall database/data warehouse structure based on functional and technical requirements.
+ Engineer physical and logical data models for dimensions and facts within the staging, warehouse, and semantic layers of enterprise data warehouses and platforms.
+ Performance tune SQL, Python, Incorta, or Informatica ETL pipelines, as well as Google BigQuery Dataprocs to move data from various source systems and file types into dimensional data models.
+ Utilize SQL within Google BigQuery, Informatica ETLs, Incorta pipelines, or Oracle SQL Views to achieve proper metric calculations or derive dimension attributes.
+ Engineer schedule and orchestration for batch and mini-batch data loads into enterprise data warehouses and platforms.
+ Provide issue resolution and maintenance for various business unit solutions existing in enterprise data warehouses and platforms.
+ Use tools such as SQL, Oracle Business Intelligence, Power BI, Tableau, Google Cloud Platform, Python, Incorta, and Informatica ETL to engineer data pipelines and models to enhance enterprise reporting and analytics.
+ Engineer dashboards within enterprise business intelligence platforms containing reports and visualizations with intelligent user interface design and flow for the business, ensuring adequate performance.
**Required Qualifications:**
+ Bachelor's degree in Computer Science, MIS, or related area with experience in business intelligence, data engineering, and data modeling.
+ 4+ years of experience with reading and writing SQL.
+ 3+ years of experience engineering within a data warehouse or related experience with dimensional data modeling.
+ 3+ years of experience designing and developing ETLs/pipelines in Python, Google BigQuery Dataprocs and/or Informatica ETL.
+ 3+ years of experience with data enablement for Point of Sale or Syndicated Consumption platforms, preferably Circana.
+ Ability to gather detailed technical requirements to design and develop data structures supporting business intelligence report solutions from beginning to end.
+ Excellent written and verbal communication skills.
+ Excellent organizational and time management skills.
+ Tested problem-solving and decision-making skills.
+ Strong pattern of initiative.
+ Strong interpersonal skills.
+ Applicants must not now, or at any time in the future, require employer sponsorship for a work visa.
+ Applicants must be authorized to work in the United States for any employer.
**Preferred Qualifications:**
+ Advanced SQL reading and writing skills.
+ Experience developing data pipelines and queries within Google Cloud Platform (Google BigQuery) and/or Oracle Databases.
+ Experience engineering within a data warehouse or related experience with dimensional data modeling.
+ Experience tuning SQL and ETLs.
+ Proven ability to gather detailed technical requirements to design and develop business intelligence report solutions from beginning to end.
+ Experience with Sales/Marketing/Consumption data.
+ Experience with syndicated consumption providers or retailer-focused Point of Sale platforms.
+ Experience working within Google Cloud Platform with services like Dataflow, Datafusion, Pub/Sub, Cloud SQL, Cloud Storage.
+ Experience with Oracle SQL including advanced functions like analytical functions.
+ Experience tuning complex SQL and ETLs.
+ Experience within a core metadata model (RPD) including the physical, logical, and presentation layers for the enterprise business intelligence platform (OBIEE - Oracle Business Intelligence Enterprise Edition).
**LOCATION:** Austin, MN - Global Headquarters (Preferred). Secondary location Eden Prairie, MN
**TRAVEL REQUIREMENTS:** Travel may be necessary 10% of the time.
**BENEFITS:** Hormel Foods offers an excellent benefits package. Competitive base salary plus target incentive, discretionary annual merit increase, annual performance review, medical, dental, vision, non-contributory pension, profit sharing, 401(k) immediate eligible, stock purchase plan, relocation assistance, paid personal time (PTO), FREE two-year community/technical college tuition for children of employees, and more.
At Hormel Foods, base pay is one part of our total compensation package and is determined within a range. The base hiring pay range for this role is between $86,500-$121,200 per year, and your actual base pay within that range will depend upon a variety of factors including, but not limited to, job-related knowledge, skill set, level of experience, and geographic market location.
At Hormel we invite difference and diversity in all aspects. We offer a space of support, understanding, and community. We are committed to the journey! Learn more about our progress here: https://*******************/about/diversity-and-inclusion/
Hormel Foods provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
**Requisition ID** : 32250
Hormel Foods Corporation is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, gender, sexual orientation, gender identity, national origin, disability, or veteran status.
$86.5k-121.2k yearly 20d ago
Statistician
Mayo Healthcare 4.0
Data scientist job in Rochester, MN
The Biostatistician functions as a statistical consultant/project manager by applying statistical programming and related methods such as machine learning and scientific expertise to the design, implementation, analysis, interpretation, and reporting of research, clinical, and administrative studies. The Biostatistician must possess basic expertise in a wide variety of statistical techniques to meet the demands of study teams and investigators. The Biostatistician will assist with directing the progress of the biostatistical aspects of research and clinical studies, in collaboration with consulting staff or a higher-level Biostatistician. The Biostatistician will work with senior-level Biostatisticians to summarize and communicate project findings to study team members, investigators, journal and grant reviewers, committees, and other individuals and entities as needed. The Biostatistician may work independently with limited oversight from higher-level Biostatisticians in some situations.
This vacancy is not eligible for sponsorship; we will not sponsor or transfer visas for this position. Also, Mayo Clinic DOES NOT participate in the F-1 STEM OPT extension program.
Master's degree in Statistics, Biostatistics, or equivalent. Alternatively, a master's degree in Mathematics, Public Health, or Data Science with at least 12 graduate-level semester hours in statistics, biostatistics, or equivalent. Experience in statistical analysis and programming software such as SAS, R, or Python. Applicable skills include organization, documentation, and written and oral communication. A commitment to customer service with an attitude of owning the experience of each customer is required. Other beneficial attributes include logical and systematic thinking, basic knowledge of human physiology and/or medical terminology, and inquisitiveness. A graduate GPA of 3.0 or greater is preferred.
$49k-68k yearly est. Auto-Apply 7d ago
Data Scientist - Research Sovereign AI
Mayo Clinic 4.8
Data scientist job in Rochester, MN
Data Scientist - Foundational Model Science
The Data Scientist for Foundational Model Science is the senior technical leader and lead scientist responsible for designing, training, and governing Mayo's multimodal foundational model. This model forms the core intelligence layer used by clinical departments, researchers, agentic workflows, and sovereign AI collaborations. The individual will work as a hands-on architect, model-builder, and researcher while acting as a player-coach, guiding strategy and building a future team.
Key Responsibilities
Scientific & Technical Leadership
Design multimodal foundational model architectures integrating signals from imaging, text, waveforms, structured data, graph representations, and temporal embeddings.
Develop fusion, alignment, and cross-modal reasoning mechanisms (early fusion, late fusion, token-level fusion, hybrid models).
Define and implement methods for grounded clinical reasoning, retrieval-augmented inference, graph-augmented attention, and chain-of-thought verification.
Establish protocols for model lifecycle governance, safe update cycles, drift-aware re-training, and provenance tracking.
Hands-On Modeling & Training
Train large-scale deep learning models, including multimodal architectures and domain-specific transformer-based systems, on real clinical datasets.
Fine-tune and adapt large language models (LLMs) for clinical reasoning, summarization, question answering, agentic behavior, and instruction-following tasks.
Build retrieval-augmented pipelines using embeddings, vector stores, graph traversal, and clinically grounded context construction.
Develop evaluation methods for reasoning quality, temporal prediction accuracy, multimodal synergy, ablation-based robustness, and counterfactual behavior.
Create reference-grounded training datasets, structured reasoning tasks, and multimodal benchmarks to evaluate model performance.
Conduct hands-on experimentation with optimization strategies, large-scale distributed training, model quantization, and inference acceleration.
Implement uncertainty modeling, selective prediction, abstention mechanisms, and clinically meaningful risk thresholds (see the illustrative sketch following these responsibilities).
Build interpretable reasoning pathways, cross-modal attribution maps, and reference-grounded explanations.
Cross-functional Collaboration
Work closely with the Representation team to ensure representation-model alignment.
Partner with clinical SMEs to encode domain reasoning into reinforcement learning, preference optimization, or rule-guided behaviors.
Team Leadership
Serve as the future founding technical lead of the Foundational Model Science Program.
Mentor scientists and engineers and eventually build a specialty modeling team.
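As a minimal, generic illustration of the selective-prediction and abstention responsibility referenced above, the sketch below abstains whenever a model's top-class probability falls below a risk threshold and then reports coverage and accuracy on the answered subset. The probabilities, labels, and threshold are synthetic; nothing here reflects Mayo Clinic's models or clinical risk thresholds.

```python
# Hypothetical selective-prediction sketch: abstain below a confidence threshold,
# then report coverage and accuracy on the answered subset. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 3
probs = rng.dirichlet(alpha=np.ones(n_classes), size=n_samples)  # fake model outputs
labels = rng.integers(0, n_classes, size=n_samples)              # fake ground truth

threshold = 0.7                        # illustrative risk threshold
confidence = probs.max(axis=1)
predicted = probs.argmax(axis=1)
answered = confidence >= threshold     # abstain on everything below the threshold

coverage = answered.mean()
selective_accuracy = (
    (predicted[answered] == labels[answered]).mean() if answered.any() else float("nan")
)
print(f"coverage={coverage:.2f}, selective accuracy={selective_accuracy:.2f}")
```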
Required
PhD in Machine Learning, Computer Science, Applied Mathematics, or a related discipline, with at least four years of experience in informatics, artificial intelligence, data science, and/or machine learning.
Experience with generative modeling, reasoning models, or multimodal foundation models.
Expertise in alignment methods (contrastive learning, RLHF/RLCS, preference optimization).
Experience with distributed training and large-scale compute.
Preferred
Experience with clinical or EMR data across multiple modalities.
7+ years of experience training deep learning models, including transformers or multimodal architectures.
Experience defining evaluation frameworks for reasoning, multimodal synergy, reliability, or fairness.
Publications in multimodal learning, foundation models, or reasoning architectures.
$71k-112k yearly est. Auto-Apply 19d ago
Staff Data Engineer - IT (Sales & Marketing) Hormel Foods (Austin, MN, Eden Prairie, MN)
Hormel Foods Corp 4.6
Data scientist job in Austin, MN
JobID: 32250. Job Schedule: Full time. Company Name: Hormel Foods Corporation.
STAFF DATA ENGINEER - IT (Sales & Marketing) - HORMEL FOODS (AUSTIN, MN, EDEN PRAIRIE, MN)
To save you time in applying, please note that Hormel Foods does not offer sponsorship of employment-based visas for this position at this time.
Hormel Foods Corporation
ABOUT HORMEL FOODS - Inspired People. Inspired Food.
Hormel Foods Corporation, based in Austin, Minn., is a global branded food company with over $12 billion in annual revenue across more than 80 countries worldwide. Its brands include SKIPPY, SPAM, Hormel Natural Choice, Applegate, Justin's, Wholly, Hormel Black Label, Columbus and more than 30 other beloved brands. The company is a member of the S&P 500 Index and the S&P 500 Dividend Aristocrats, was named on the "Global 2000 World's Best Employers" list by Forbes magazine for three straight years, is one of Fortune magazine's most admired companies, has appeared on Corporate Responsibility Magazine's "The 100 Best Corporate Citizens" list for the 12th year in a row, and has received numerous other awards and accolades for its corporate responsibility and community service efforts. The company lives by its purpose statement - Inspired People. Inspired Food. - to bring some of the world's most trusted and iconic brands to tables across the globe. For more information, visit ******************* and ****************************
Summary:
We are looking for a Staff Data Engineer within the Sales and Marketing domain as part of our Data and Analytics team. This is an exciting opportunity to help grow and modernize analytics at Hormel Foods! The ideal candidate will have strong communication skills and the ability to collaborate across multiple levels of the organization. You will manage multiple initiatives that require creative problem-solving.
You will use tools such as Google Cloud Platform, SQL, Python, Incorta, Oracle Business Intelligence, and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics. Additionally, you will engineer reports, dashboards and visualizations using enterprise business intelligence tools (Oracle, Power BI and Tableau).
Specific competencies include:
* Data Structures and Models - Develops the overall database/data warehouse structure based on functional and technical requirements. Develops data collection frameworks for mainly structured and sometimes unstructured data.
* Data Pipelines and ELT - Applies data extraction, loading and transformation techniques in order to connect medium to large data sets from a variety of sources.
* Data Performance - With minimal guidance, troubleshoots and fixes data performance issues that come with querying and combining medium to large volumes of data. Tests for scenarios affecting performance during initial development.
* Visualizations and Dashboards - Designs and develops reports and dashboards that meet business needs. Leverages visualizations when possible to increase speed to identifying an insight.
Responsibilities:
* Collaborate with Sales & Marketing team members, data scientists, BI analysts, and other stakeholders to understand data needs and deliver solutions.
* Develop the overall database/data warehouse structure based on functional and technical requirements.
* Engineer physical and logical data models for dimensions and facts within the staging, warehouse, and semantic layers of enterprise data warehouses and platforms.
* Performance tune SQL, Python, Incorta, or Informatica ETL pipelines, as well as Google BigQuery Dataprocs to move data from various source systems and file types into dimensional data models.
* Utilize SQL within Google BigQuery, Informatica ETLs, Incorta pipelines, or Oracle SQL Views to achieve proper metric calculations or derive dimension attributes.
* Engineer schedule and orchestration for batch and mini-batch data loads into enterprise data warehouses and platforms.
* Provide issue resolution and maintenance for various business unit solutions existing in enterprise data warehouses and platforms.
* Use tools such as SQL, Oracle Business Intelligence, Power BI, Tableau, Google Cloud Platform, Python, Incorta, and Informatica ETL to engineer data pipelines and models to enhance enterprise reporting and analytics.
* Engineer dashboards within enterprise business intelligence platforms containing reports and visualizations with intelligent user interface design and flow for the business, ensuring adequate performance.
Required Qualifications:
* Bachelor's degree in Computer Science, MIS, or related area with experience in business intelligence, data engineering, and data modeling.
* 4+ years of experience with reading and writing SQL.
* 3+ years of experience engineering within a data warehouse or related experience with dimensional data modeling.
* 3+ years of experience designing and developing ETLs/pipelines in Python, Google BigQuery Dataprocs and/or Informatica ETL.
* 3+ years of experience with data enablement for Point of Sale or Syndicated Consumption platforms, preferably Circana.
* Ability to gather detailed technical requirements to design and develop data structures supporting business intelligence report solutions from beginning to end.
* Excellent written and verbal communication skills.
* Excellent organizational and time management skills.
* Tested problem-solving and decision-making skills.
* Strong pattern of initiative.
* Strong interpersonal skills.
* Applicants must not now, or at any time in the future, require employer sponsorship for a work visa.
* Applicants must be authorized to work in the United States for any employer.
Preferred Qualifications:
* Advanced SQL reading and writing skills.
* Experience developing data pipelines and queries within Google Cloud Platform (Google BigQuery) and/or Oracle Databases.
* Experience engineering within a data warehouse or related experience with dimensional data modeling.
* Experience tuning SQL and ETLs.
* Proven ability to gather detailed technical requirements to design and develop business intelligence report solutions from beginning to end.
* Experience with Sales/Marketing/Consumption data.
* Experience with syndicated consumption providers or retailer-focused Point of Sale platforms.
* Experience working within Google Cloud Platform with services like Dataflow, Datafusion, Pub/Sub, Cloud SQL, Cloud Storage.
* Experience with Oracle SQL including advanced functions like analytical functions.
* Experience tuning complex SQL and ETLs.
* Experience within a core metadata model (RPD) including the physical, logical, and presentation layers for the enterprise business intelligence platform (OBIEE - Oracle Business Intelligence Enterprise Edition).
LOCATION: Austin, MN - Global Headquarters (Preferred). Secondary location Eden Prairie, MN
TRAVEL REQUIREMENTS: Travel may be necessary 10% of the time.
BENEFITS: Hormel Foods offers an excellent benefits package. Competitive base salary plus target incentive, discretionary annual merit increase, annual performance review, medical, dental, vision, non-contributory pension, profit sharing, 401(k) immediate eligible, stock purchase plan, relocation assistance, paid personal time (PTO), FREE two-year community/technical college tuition for children of employees, and more.
At Hormel Foods, base pay is one part of our total compensation package and is determined within a range. The base hiring pay range for this role is between $86,500-$121,200 per year, and your actual base pay within that range will depend upon a variety of factors including, but not limited to, job-related knowledge, skill set, level of experience, and geographic market location.
At Hormel we invite difference and diversity in all aspects. We offer a space of support, understanding, and community. We are committed to the journey! Learn more about our progress here: https://*******************/about/diversity-and-inclusion/
Hormel Foods provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
$86.5k-121.2k yearly 20d ago
Data Integration Analyst
Mayo Healthcare 4.0
Data scientist job in Rochester, MN
Core Requirements
Analyzing data sources and requirements to design efficient data integration solutions
Collaborating with key internal and external stakeholders to identify what data (type and form) is deemed to be of high value
Implementing data integration solutions using tools such as ETL (extract, transform, load) software
Ensuring the accuracy and completeness of data as it is transferred between systems
Writing and maintaining documentation for data integration processes and procedures
Testing data integration solutions to ensure they are working correctly (see the illustrative sketch after this list)
Providing support and troubleshooting for data integration issues
Maintaining and improving existing data integration processes
Collaborating with data analysts and other stakeholders to understand data needs and requirements
Becoming a subject matter expert in the markets we serve by cultivating a detailed knowledge of relevant market trends, economic drivers, the typical economic, user and technical buying influences, needs relative to each buying influence, as well as the competitive landscape and MCP's points of differentiation
Identifying and helping prioritize product roadmap requirements based on market knowledge
Identifying implementation requirements with respect to data, platform configuration, and engagement to anticipate pitfalls, and to ensure delivery teams have adequate awareness and that deal scoping/pricing reflects actual requirements
Documenting solution technical requirements based on delivery team feedback and knowledge of customer needs
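As a generic, hypothetical illustration of the integration, accuracy-checking, and testing duties listed above, the sketch below copies rows from a CSV extract into a SQLite staging table and verifies completeness with a simple row-count reconciliation. The file path, table, and column names (mrn, visit_date) are invented for illustration.

```python
# Hypothetical integration sketch: load a CSV extract into SQLite and
# verify completeness with a row-count reconciliation.
import csv
import sqlite3

def load_and_verify(csv_path: str, db_path: str) -> None:
    # Extract: read the source file (assumed columns: mrn, visit_date).
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    records = [(r["mrn"], r["visit_date"]) for r in rows]

    # Load: full refresh of an illustrative staging table.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS patients_stg (mrn TEXT, visit_date TEXT)")
    conn.execute("DELETE FROM patients_stg")
    conn.executemany("INSERT INTO patients_stg (mrn, visit_date) VALUES (?, ?)", records)
    conn.commit()

    # Verify: row counts must match between the extract and the target table.
    loaded = conn.execute("SELECT COUNT(*) FROM patients_stg").fetchone()[0]
    assert loaded == len(records), f"row count mismatch: {loaded} loaded vs {len(records)} in extract"
    conn.close()
```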
This vacancy is not eligible for sponsorship; we will not sponsor or transfer visas for this position. Also, Mayo Clinic DOES NOT participate in the F-1 STEM OPT extension program.
Experience with complex integrations, web applications, and SaaS configuration
Extensive understanding of the healthcare industry including drivers and value opportunities; ability to tie technical solutions to client or prospect needs
Experience with product and technology demonstrations
Ability to work and collaborate across teams without management direction
Ability to eliminate sales obstacles through creative and adaptive approaches
Bachelor's degree and 4+ years of experience
Highly motivated self-starter with the ability to work proactively with internal teammates and external clients
Excellent communication (oral/written/presentation) skills
Master's degree preferred
Degree(s) in Business, Computer Science, Engineering, or other technology or domain-related field preferred
$41k-56k yearly est. Auto-Apply 7d ago
Data Scientist
Mayo Clinic 4.8
Data scientist job in Rochester, MN
Join a world-renowned institution where data science directly improves patient lives. At Mayo Clinic, Data Scientists turn complex, heterogeneous data into meaningful insights that enhance clinical care, accelerate scientific discovery, and strengthen the digital future of healthcare.
This role partners closely with Data Science and Informatics faculty and serves as a technical thought leader, shaping the strategy, direction, and impact of data science across the enterprise.
As a Data Scientist at Mayo Clinic, you will:
Transform data into insight and insight into action, spanning the full spectrum from problem formulation and data acquisition to modeling, deployment, and interpretation (a minimal illustrative sketch follows this overview).
Provide strategic direction for data science and AI in a specialized domain, such as cancer, surgery, healthcare delivery, marketing, or planning services.
Collaborate with enterprise leaders to advance Mayo Clinic's digital and analytics strategy.
Work side‑by‑side with Informatics and IT teams to build data‑ and intelligence‑driven systems that address complex, high‑priority challenges.
Recommend best practices for data collection, integration, and retention, incorporating technical, operational, and business needs.
Support scientific and operational initiatives under the guidance of a senior data scientist or through full independent direction.
Design and develop scripts, tools, and software applications that enable data extraction, management, and analysis across the organization.
Deliver enterprise‑level consultative services, providing analysis and presenting findings to leadership and multidisciplinary stakeholders.
In Generative AI, a Data Scientist will lead and execute generative AI and agentic approaches.
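To ground the "data to insight to action" spectrum described above in something concrete, here is a minimal, self-contained modeling sketch on synthetic data: acquire data, fit a simple baseline model, and report a held-out validation metric. It is illustrative only; it uses no Mayo Clinic data and does not represent any specific production workflow.

```python
# Minimal end-to-end modeling sketch on synthetic data: acquire -> model -> evaluate.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# "Data acquisition": synthetic stand-in for a curated analytical dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Modeling: a simple, auditable baseline before anything more complex.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
model.fit(X_train, y_train)

# Interpretation/evaluation: report discrimination on held-out data.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```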
Key Responsibilities
Partner with multidisciplinary teams to create innovative approaches for data‑driven decision‑making.
Lead exploration of next‑generation AI/ML approaches to solve complex analytical problems across diverse domains.
Apply deep expertise in data science methods, data types, and scientific challenges to help shape new products, experiences, and technologies.
Guide and mentor data science staff, ensuring high‑quality analysis, unbiased recommendations, and alignment with strategic priorities.
Develop analytics tools and solutions that can be used effectively by non‑technical staff.
Qualifications
PhD in a domain-relevant field (mathematics, computer science, statistics, physics, engineering, data science, health science, or a related discipline), plus
At least four years of experience in data science, machine learning, AI, or informatics.
What You Bring
A blend of deep technical expertise and strong business acumen, with a proven ability to lead technical or quantitative teams.
Demonstrated success developing predictive and prescriptive models using advanced statistical modeling, machine learning, or data mining.
Experience applying problem‑solving frameworks, planning methods, continuous improvement approaches, and project management techniques.
Ability to independently manage multiple high‑impact projects in a dynamic environment, staying current with healthcare trends and enterprise priorities.
Exceptional interpersonal skills: presentation, negotiation, persuasion, and written communication.
Strong time‑management skills and the ability to prioritize, organize, and delegate effectively.
Expertise in scientific computing, data management packages, data modeling, and data exploration tools.
A consulting mindset: the ability to identify challenges, recommend solutions, deploy analytics tools, and support non‑technical users.
Demonstrated initiative in areas such as training, software development, education, and technical documentation.
Proven ability to provide vision and strategic direction at the departmental, institutional, or enterprise level.
Publications in high-impact journals preferred.
$71k-112k yearly est. Auto-Apply 7d ago
Staff Data Engineer - IT (Sales & Marketing) Hormel Foods (Austin, MN, Eden Prairie, MN)
Hormel Foods 4.6
Data scientist job in Austin, MN
STAFF DATA ENGINEER - IT (Sales & Marketing) - HORMEL FOODS (AUSTIN, MN, EDEN PRAIRIE, MN)
To save you time in applying, please note that Hormel Foods does not offer sponsorship of employment-based visas for this position at this time.
Hormel Foods Corporation
ABOUT HORMEL FOODS -
Inspired People. Inspired Food.™
Hormel Foods Corporation, based in Austin, Minn., is a global branded food company with over $12 billion in annual revenue across more than 80 countries worldwide. Its brands include SKIPPY, SPAM, Hormel Natural Choice, Applegate, Justin's, Wholly, Hormel Black Label, Columbus and more than 30 other beloved brands. The company is a member of the S&P 500 Index and the S&P 500 Dividend Aristocrats, was named on the “Global 2000 World's Best Employers” list by Forbes magazine for three straight years, is one of Fortune magazine's most admired companies, has appeared on Corporate Responsibility Magazine's “The 100 Best Corporate Citizens” list for the 12th year in a row, and has received numerous other awards and accolades for its corporate responsibility and community service efforts. The company lives by its purpose statement - Inspired People. Inspired Food.™ - to bring some of the world's most trusted and iconic brands to tables across the globe. For more information, visit ******************* and ****************************
Summary:
We are looking for a Staff Data Engineer within the Sales and Marketing domain as part of our Data and Analytics team. This is an exciting opportunity to help grow and modernize analytics at Hormel Foods! The ideal candidate will have strong communication skills and the ability to collaborate across multiple levels of the organization. You will manage multiple initiatives that require creative problem-solving.
You will use tools such as Google Cloud Platform, SQL, Python, Incorta, Oracle Business Intelligence, and Informatica ETL to engineer data pipelines and data models to enhance enterprise reporting and analytics. Additionally, you will engineer reports, dashboards and visualizations using enterprise business intelligence tools (Oracle, Power BI and Tableau).
Specific competencies include:
Data Structures and Models - Develops the overall database/data warehouse structure based on functional and technical requirements. Develops data collection frameworks for mainly structured and sometimes unstructured data.
Data Pipelines and ELT - Applies data extraction, loading and transformation techniques in order to connect medium to large data sets from a variety of sources.
Data Performance - With minimal guidance, troubleshoots and fixes data performance issues that come with querying and combining medium to large volumes of data. Tests for scenarios affecting performance during initial development.
Visualizations and Dashboards - Designs and develops reports and dashboards that meet business needs. Leverages visualizations when possible to increase speed to identifying an insight.
Responsibilities:
Collaborate with Sales & Marketing team members, data scientists, BI analysts, and other stakeholders to understand data needs and deliver solutions.
Develop the overall database/data warehouse structure based on functional and technical requirements.
Engineer physical and logical data models for dimensions and facts within the staging, warehouse, and semantic layers of enterprise data warehouses and platforms.
Performance tune SQL, Python, Incorta, or Informatica ETL pipelines, as well as Google BigQuery Dataprocs to move data from various source systems and file types into dimensional data models.
Utilize SQL within Google BigQuery, Informatica ETLs, Incorta pipelines, or Oracle SQL Views to achieve proper metric calculations or derive dimension attributes.
Engineer schedule and orchestration for batch and mini-batch data loads into enterprise data warehouses and platforms.
Provide issue resolution and maintenance for various business unit solutions existing in enterprise data warehouses and platforms.
Use tools such as SQL, Oracle Business Intelligence, Power BI, Tableau, Google Cloud Platform, Python, Incorta, and Informatica ETL to engineer data pipelines and models to enhance enterprise reporting and analytics.
Engineer dashboards within enterprise business intelligence platforms containing reports and visualizations with intelligent user interface design and flow for the business, ensuring adequate performance.
Required Qualifications:
Bachelor's degree in Computer Science, MIS, or related area with experience in business intelligence, data engineering, and data modeling.
4+ years of experience with reading and writing SQL.
3+ years of experience engineering within a data warehouse or related experience with dimensional data modeling.
3+ years of experience designing and developing ETLs/pipelines in Python, Google BigQuery Dataprocs and/or Informatica ETL.
3+ years of experience with data enablement for Point of Sale or Syndicated Consumption platforms, preferably Circana.
Ability to gather detailed technical requirements to design and develop data structures supporting business intelligence report solutions from beginning to end.
Excellent written and verbal communication skills.
Excellent organizational and time management skills.
Tested problem-solving and decision-making skills.
Strong pattern of initiative.
Strong interpersonal skills.
Applicants must not now, or at any time in the future, require employer sponsorship for a work visa.
Applicants must be authorized to work in the United States for any employer.
Preferred Qualifications:
Advanced SQL reading and writing skills.
Experience developing data pipelines and queries within Google Cloud Platform (Google BigQuery) and/or Oracle Databases.
Experience engineering within a data warehouse or related experience with dimensional data modeling.
Experience tuning SQL and ETLs.
Proven ability to gather detailed technical requirements to design and develop business intelligence report solutions from beginning to end.
Experience with Sales/Marketing/Consumption data.
Experience with syndicated consumption providers or retailer-focused Point of Sale platforms.
Experience working within Google Cloud Platform with services like Dataflow, Datafusion, Pub/Sub, Cloud SQL, Cloud Storage.
Experience with Oracle SQL including advanced functions like analytical functions.
Experience tuning complex SQL and ETLs.
Experience within a core metadata model (RPD) including the physical, logical, and presentation layers for the enterprise business intelligence platform (OBIEE - Oracle Business Intelligence Enterprise Edition).
LOCATION: Austin, MN - Global Headquarters (Preferred). Secondary location Eden Prairie, MN
TRAVEL REQUIREMENTS: Travel may be necessary 10% of the time.
BENEFITS: Hormel Foods offers an excellent benefits package. Competitive base salary plus target incentive, discretionary annual merit increase, annual performance review, medical, dental, vision, non-contributory pension, profit sharing, 401(k) immediate eligible, stock purchase plan, relocation assistance, paid personal time (PTO), FREE two-year community/technical college tuition for children of employees, and more.
At Hormel Foods, base pay is one part of our total compensation package and is determined within a range. The base hiring pay range for this role is between $86,500-$121,200 per year, and your actual base pay within that range will depend upon a variety of factors including, but not limited to, job-related knowledge, skill set, level of experience, and geographic market location.
At Hormel we invite difference and diversity in all aspects. We offer a space of support, understanding, and community. We are committed to the journey! Learn more about our progress here: https://*******************/about/diversity-and-inclusion/
Hormel Foods provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
How much does a data scientist earn in Rochester, MN?
The average data scientist in Rochester, MN earns between $61,000 and $110,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.
Average data scientist salary in Rochester, MN
$82,000
What are the biggest employers of Data Scientists in Rochester, MN?
The biggest employers of Data Scientists in Rochester, MN are: