Applied Data Scientists
Data engineer job in Sandy, UT
**1. Role Overview**

Mercor is seeking applied data science professionals to support a strategic analytics initiative with a global enterprise. This contract-based opportunity focuses on extracting insights, building statistical models, and informing business decisions through advanced data science techniques. Freelancers will translate complex datasets into actionable outcomes using tools like Python, SQL, and visualization platforms. This short-term engagement emphasizes experimentation, modeling, and stakeholder communication - distinct from production ML engineering.

**2. Key Responsibilities**
● Translate business questions into data science problems and analytical workflows
● Conduct data wrangling, exploratory analysis, and hypothesis testing
● Develop statistical models and predictive tools for decision support
● Create compelling data visualizations and dashboards for business users
● Present findings and recommendations to non-technical stakeholders

**3. Ideal Qualifications**
● 5+ years of applied data science or analytics experience in business settings
● Proficiency in Python or R (pandas, NumPy, Jupyter) and strong SQL skills
● Experience with data visualization tools (e.g., Tableau, Power BI)
● Solid understanding of statistical modeling, experimentation, and A/B testing
● Strong communication skills for translating technical work into strategic insights

**4. More About the Opportunity**
● Remote
● Expected commitment: min. 30 hours/week
● Project duration: ~6 weeks

**5. Compensation & Contract Terms**
● $75-100/hour
● Paid weekly via Stripe Connect
● You'll be classified as an independent contractor

**6. Application Process**
● Submit your resume, followed by a domain-expertise interview and a short form

**7. About Mercor**
● Mercor is a talent marketplace that connects top experts with leading AI labs and research organizations
● Our investors include Benchmark, General Catalyst, Adam D'Angelo, Larry Summers, and Jack Dorsey
● Thousands of professionals across domains like law, creatives, engineering, and research have joined Mercor to work on frontier projects shaping the next era of AI
Engineer I
Data engineer job in Draper, UT
The Manufacturing Engineer will apply engineering principles to support manufacturing workflows, improve efficiency, and ensure quality and reliability across production and inspection operations. This role requires interpreting technical documentation and supporting routing, layout, and workflow optimization.
Responsibilities:
Analyze time, motion, and operational methods to establish standard production rates and identify efficiency improvements.
Review engineering drawings, schematics, and technical documentation; collaborate with engineering or management to define quality and reliability standards.
Verify logs, processing sheets, and specification documents meet quality assurance requirements.
Assist in planning daily work assignments considering performance, machine capability, schedules, and potential delays.
Prepare documentation such as charts, diagrams, workflow maps, routing sheets, and floor layout visuals.
Skills:
Strong analytical, problem-solving, and communication abilities.
High attention to detail and ability to work collaboratively.
Ability to create sketches, engineering drawings, and perform necessary technical computations.
Ability to interpret blueprints, schematics, and computer-generated reports.
Experience with CAD or related engineering software.
Education & Experience:
Bachelor's degree in Engineering required.
0-2 years of related experience.
Data Scientist (Technical Leadership)
Data engineer job in Salt Lake City, UT
We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
**Required Skills:**
Data Scientist (Technical Leadership) Responsibilities:
1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies
3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
5. Build long-term vision and strategy for programs and products
6. Collaborate with executives to define and develop data platforms and instrumentation
7. Effectively communicate insights and recommendations to stakeholders
8. Define success metrics, forecast changes, and set team goals
9. Support developing roadmaps and coordinate analytics efforts across teams
**Minimum Qualifications:**
10. Bachelor's degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research) and 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
11. 8+ years of work experience leading analytics work in IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization and strategy development
12. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
13. Experience communicating complex technical topics in a clear, precise, and actionable manner
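As a flavor of the experimentation and causal-inference methods named in the qualifications above, here is a minimal sketch of a two-sided permutation test on a synthetic metric. All data and names here are invented for illustration; real product experiments would also involve variance reduction, multiple metrics, and sequential-testing corrections.

```python
import random
import statistics

def permutation_test(control, treatment, n_permutations=2000, seed=0):
    """Two-sided permutation test for a difference in means.

    Returns (observed_diff, p_value): the p-value is the fraction of
    random relabelings whose mean difference is at least as extreme
    as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(control) + list(treatment)
    n_treat = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_treat]) - statistics.mean(pooled[n_treat:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_permutations

# Synthetic experiment: the treatment arm's metric is shifted up by 0.5.
data_rng = random.Random(42)
control = [data_rng.gauss(0.0, 1.0) for _ in range(200)]
treatment = [data_rng.gauss(0.5, 1.0) for _ in range(200)]
diff, p = permutation_test(control, treatment)
```

With 200 users per arm and a half-standard-deviation effect, the test comfortably rejects the null; permutation tests are a useful baseline because they make no distributional assumptions about the metric.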
**Preferred Qualifications:**
14. 10+ years of experience communicating the results of analyses to leadership teams to influence the strategy
15. Masters or Ph.D. Degree in a quantitative field
16. Bachelor's Degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
17. 10+ years of experience doing complex quantitative analysis in product analytics
**Public Compensation:**
$206,000/year to $281,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Data Scientist, NLP & Language Models
Data engineer job in Salt Lake City, UT
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
Datavant is looking for an enthusiastic and meticulous Data Scientist to join our growing team, which builds machine learning models for use across Datavant in multiple verticals and for multiple customer types.
As part of the Data Science team, you will play a crucial role in developing new product features and automating existing internal processes to drive innovation across Datavant. You will work with tens of millions of patients' worth of healthcare data to develop models, contributing to the entirety of the model development lifecycle from ideation and research to deployment and monitoring. You will collaborate with an experienced team of Data Scientists and Machine Learning Engineers along with application Engineers and Product Managers across the company to achieve Datavant's AI-enabled future.
**You Will:**
+ Play a key role in the success of our products by developing models for NLP (and other) tasks.
+ Perform error analysis, data cleaning, and other related tasks to improve models.
+ Collaborate with your team by making recommendations for the development roadmap of a capability.
+ Work with other data scientists and engineers to optimize machine learning models and insert them into end-to-end pipelines.
+ Understand product use-cases and define key performance metrics for models according to business requirements.
+ Set up systems for long-term improvement of models and data quality (e.g. active learning, continuous learning systems, etc.).
**What You Will Bring to the Table:**
+ Advanced degree in computer science, data science, statistics, or a related field, or equivalent work experience.
+ 4+ years of experience with data science and machine learning in an industry setting.
+ 4+ years experience with Python.
+ Experience designing and building NLP models for tasks such as classification, named-entity recognition, and dependency parsing.
+ Proficiency with standard data analysis toolkits such as SQL, NumPy, and pandas.
+ Proficiency with deep learning frameworks like PyTorch (preferred) or TensorFlow.
+ Demonstrated ability to drive results in a team environment and contribute to team decision-making in the face of ambiguity.
+ Strong time management skills and demonstrable experience of prioritising work to meet tight deadlines.
+ Initiative and ability to independently explore and research novel topics and concepts as they arise.
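As a toy illustration of the text-classification work described above, a bag-of-words Naive Bayes classifier can be sketched in pure Python. The example documents and labels are invented; production NLP models at this scale would use a deep learning framework such as PyTorch.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Tiny bag-of-words Naive Bayes classifier with Laplace smoothing."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior + sum of smoothed log likelihoods per word.
            score = math.log(self.class_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in words:
                count = self.word_counts[label][word] + 1  # Laplace smoothing
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical two-class corpus (labels and documents are made up).
clf = NaiveBayesTextClassifier().fit(
    ["patient discharged home", "follow up visit scheduled",
     "invoice payment overdue", "billing statement issued"],
    ["clinical", "clinical", "billing", "billing"],
)
pred = clf.predict("discharged after visit")
```

Even this toy model classifies the unseen sentence as "clinical" because two of its words appear only in the clinical training documents; the same intuition underlies far larger classification pipelines.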
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$136,000-$170,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement, and explore the Know Your Rights resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please submit a request under the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request. Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy.
Data Science Returnship
Data engineer job in Salt Lake City, UT
Your work will change lives. Including your own. We are leveraging new technology to create virtuous cycles of learning around datasets to build a next-generation biopharmaceutical company. It's complex biology, decoded.
Recursion is a digital biology company industrializing drug discovery. We are working to solve some of the hardest, most meaningful problems facing human health today. Come join us in our mission to decode biology to radically improve lives, while doing the most impactful work of your life.
Recursion's Returnship Program
Our Returnship program is sponsored by the Women at Recursion Employee Resource Group. The program is aimed at helping those who have taken a hiatus (2+ years) from the STEM industry return to the workforce in a learning environment with support from teams and mentors. This allows our company to tap into an underutilized pool of talent in Utah and to leverage the experience and skills from Returners' previous work and life experiences, while also providing the opportunity to learn and develop experience with new cutting-edge tools and technology. This sixteen-week program will enable you to take ownership of projects that can deeply impact the company's mission to radically decode biology and serve patients, with guidance, support, and mentorship as you re-enter the workforce. Each Returner will be assigned a mentor who will meet with them weekly, and will attend weekly seminars on workplace culture, communication, and technology.
Our returner program lasts 4 months, running from February 2026 through the end of May 2026, with potential for transition into full-time employment depending on performance and availability. This position is mainly based in our Salt Lake City, UT headquarters, with some hybrid working flexibility available.
Data Science & Applied Machine Learning Returnship
Working alongside Recursion's Nomination Workflows team, you will engage with a cross-functional group having expertise in biology, chemistry, data science, and engineering. This team is responsible for the first two stages of Recursion's drug discovery pipeline, where we generate novel hypotheses and identify promising chemical matter. We accomplish this by building and operating a scalable, semi-automated computational system that mines our data for new & interesting starting points for our discovery portfolio. All work in this team directly contributes to improving the quantity, reliability, quality, and confidence of our early-stage program starting points.
Projects on this team include:
Program Advancement: Partner closely with biologists and chemists to advance specific early-stage programs through exploratory data analysis and experiment evaluation.
Method Development: Analyze and improve our semi-automated evaluation systems through exploratory data analysis, method development, and deployment of new data analysis methods or metrics.
Platform Improvements: Maintain, update, and improve our scientific code-bases to harden our systems, improve observability, accelerate development, and enable new capabilities and automations.
The Experience You'll Need
This role is for individuals who have been on a career hiatus of 2+ years from the STEM industry. We are seeking candidates with a strong foundational background and a passion for learning.
A strong foundation in either:
applying probability, statistics, and machine learning to real-world datasets
developing scientific software, workflow orchestration, or data engineering
Familiarity with the Python data stack (e.g., numpy, pandas, scikit-learn).
Prior experience in collaborative software development, including version control tools like git.
The ability to critically review Python code, whether authored by peers or LLM coding agents.
An aptitude for breaking down complex problems into manageable parts and clearly communicating project objectives & progress to a diverse team.
Nice to have:
Experience analyzing biological or high-dimensional datasets.
Familiarity with workflow orchestration systems (e.g., Prefect).
Experience accelerating and improving code development with coding agents.
The Recursion Community
While we offer cutting-edge tools, the secret sauce is our people. Our organizational structure and culture aren't driven by politics or ego; they are designed first and foremost to help you do your best work. We live and work by values that we see as strategic differentiators giving us a competitive advantage, allowing for better and faster work that isn't predicated on burnout and encouraging us to make leaps where others take steps. This is a place where people in every role and at every level make the bold bets that create large leaps forward on a regular basis!
The Perks You'll Enjoy as a Returner Recursionaut
Paid sick pay and additional flexibility as needed.
Complimentary chef-prepared lunches and well-stocked snack bars (Salt Lake City).
One-of-a-kind 100,000 square foot headquarters complete with a 70-foot climbing wall, showers, lockers and bike parking (Salt Lake City).
Weekly Returners Skill Development Classes.
1:1 Weekly Mentorship with a member of your team and a member of the Returnship ERG.
The Values We Hope You Share:
We act boldly with integrity. We are unconstrained in our thinking, take calculated risks, and push boundaries, but never at the expense of ethics, science, or trust.
We care deeply and engage directly. Caring means holding a deep sense of responsibility and respect - showing up, speaking honestly, and taking action.
We learn actively and adapt rapidly. Progress comes from doing. We experiment, test, and refine, embracing iteration over perfection.
We move with urgency because patients are waiting. Speed isn't about rushing but about moving the needle every day.
We take ownership and accountability. Through ownership and accountability, we enable trust and autonomy-leaders take accountability for decisive action, and teams own outcomes together.
We are One Recursion. True cross-functional collaboration is about trust, clarity, humility, and impact. Through sharing, we can be greater than the sum of our individual capabilities.
Our values underpin the employee experience at Recursion. They are the character and personality of the company demonstrated through how we communicate, support one another, spend our time, make decisions, and celebrate collectively.
More About Recursion
Recursion (NASDAQ: RXRX) is a clinical stage TechBio company leading the space by decoding biology to radically improve lives. Enabling its mission is the Recursion OS, a platform built across diverse technologies that continuously generate one of the world's largest proprietary biological and chemical datasets. Recursion leverages sophisticated machine-learning algorithms to distill from its dataset a collection of trillions of searchable relationships across biology and chemistry unconstrained by human bias. By commanding massive experimental scale - up to millions of wet lab experiments weekly - and massive computational scale - owning and operating one of the most powerful supercomputers in the world, Recursion is uniting technology, biology and chemistry to advance the future of medicine.
Recursion is headquartered in Salt Lake City, where it is a founding member of BioHive, the Utah life sciences industry collective. Recursion also has offices in Toronto, Montréal, New York, London, Oxford area, and the San Francisco Bay area. Learn more at ****************** or connect on X (formerly Twitter) and LinkedIn.
Recursion is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other characteristic protected under applicable federal, state, local, or provincial human rights legislation.
Accommodations are available on request for candidates taking part in all aspects of the selection process.
Recruitment & Staffing Agencies: Recursion Pharmaceuticals and its affiliate companies do not accept resumes from any source other than candidates. The submission of resumes by recruitment or staffing agencies to Recursion or its employees is strictly prohibited unless contacted directly by Recursion's internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Recursion, and Recursion will not owe any referral or other fees. Our team will communicate directly with candidates who are not represented by an agent or intermediary unless otherwise agreed to prior to interviewing for the job.
Senior Data Engineer
Data engineer job in Lehi, UT
Our Company Changing the world through digital experiences is what Adobe's all about. We give everyone-from emerging artists to global brands-everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.
We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
The Opportunity
The Senior Data Engineer will have a significant impact on our organization's digital evolution by collaborating with various teams at Adobe. You will work closely with product, analytics, and engineering teams to design, build, and enhance scalable data pipelines and analytic solutions that generate insights into product usage. This position is highly data-engineering oriented, focusing on the collection, integration, and transformation of large and varied data sets from multiple platforms to support business intelligence, product development, and strategic decision-making.
What you'll Do
You will be responsible for developing and maintaining analytics infrastructure, including ETL pipelines and automated data quality tools. The ideal candidate will have a strong background in modern frameworks, cloud platforms, data technologies, SQL, Python, and visualization tools including Power BI or Tableau. In addition, you will apply strong communication skills and a customer-focused attitude to deliver effective solutions.
Your key tasks will be collaborating with multi-functional teams to develop and refine adaptable data pipelines and analytical solutions. You will build connectors to a variety of API sources, enabling the integration of diverse data sources, and apply cloud-based platforms to perform advanced analytics while upholding rigorous standards for data quality and security.
Projects will require you to actively contribute to planning, solution design, and deployment of analytic tools that drive and support business goals. Your efforts will ensure that the data infrastructure efficiently serves the evolving needs of the organization.
What you need to succeed
* Bachelor's degree in Computer Science (or relevant experience) required; Master's preferred
* 8+ years of experience with a BS, or 5+ years of experience with an MS
* 5+ years of experience with Python or equivalent
* Experienced in developing data pipelines using Python, SQL, and Bash on Linux.
* Expertise working with Databricks, Spark SQL, Spark and DBFS systems.
* Proficient in orchestration tools like Airflow or Databricks Workflows.
* Experience deploying and managing services and applications on AWS and Azure Cloud platforms
* Application design and architecture with a drive for delivering data with high throughput and low latency
* Experience implementing and managing CI/CD pipelines using Jenkins or similar automation tools.
* Committed to finding solutions to challenges, with an enjoyment of creative problem solving
* Ability to solve problems collaboratively and build strong relationships
* Experience delivering quality processes and outcomes
* Process-oriented with strong attention to detail
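As a minimal sketch of the pipeline work this role describes, the extract-transform-load pattern can be illustrated with the Python standard library alone. The table name, columns, and sample rows are hypothetical; a real pipeline would run as a task under an orchestrator such as Airflow or Databricks Workflows, against Databricks or cloud storage rather than in-memory SQLite.

```python
import csv
import io
import sqlite3

def run_pipeline(raw_csv, conn):
    """Minimal ETL step: parse CSV, normalize fields, load into SQLite.

    Returns the number of rows loaded after cleaning.
    """
    # Extract: parse the raw CSV into dict rows.
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Transform: normalize product names, coerce types, drop bad rows.
    cleaned = []
    for row in rows:
        try:
            cleaned.append((row["product"].strip().lower(), float(row["revenue"])))
        except (KeyError, ValueError):
            continue  # in production: route to a dead-letter table instead
    # Load: insert the cleaned rows and commit.
    conn.execute("CREATE TABLE IF NOT EXISTS usage (product TEXT, revenue REAL)")
    conn.executemany("INSERT INTO usage VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

# Hypothetical raw extract, including one malformed row to be dropped.
raw = "product,revenue\n Photoshop ,19.99\nIllustrator,22.50\nBadRow,not_a_number\n"
conn = sqlite3.connect(":memory:")
loaded = run_pipeline(raw, conn)
total = conn.execute("SELECT SUM(revenue) FROM usage").fetchone()[0]
```

The design choice worth noting is that bad records are filtered during the transform step rather than allowed to fail the whole load, which is the behavior the "automated data quality tools" responsibility above implies.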
Additional Consideration Given
* Experience with LLMs, vector databases, and embeddings.
* Exposure to graph databases, Elastic Stack, or real-time streaming (Kafka, Kinesis).
* Proficiency in managing and orchestrating containers using Kubernetes.
At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach where ongoing feedback flows freely.
If you're looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer.
Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, or veteran status.
Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $128,600 -- $234,200 annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter can share more about the specific salary range for the job location during the hiring process.
At Adobe, for sales roles starting salaries are expressed as total target compensation (TTC = base + commission), and short-term incentives are in the form of sales commission plans. Non-sales roles starting salaries are expressed as base salary and short-term incentives are in the form of the Annual Incentive Plan (AIP).
In addition, certain roles may be eligible for long-term incentives in the form of a new hire equity award.
State-Specific Notices:
California:
Fair Chance Ordinances
Adobe will consider qualified applicants with arrest or conviction records for employment in accordance with state and local laws and "fair chance" ordinances.
Colorado:
Application Window Notice
If this role is open to hiring in Colorado (as listed on the job posting), the application window will remain open until at least the date and time stated above in Pacific Time, in compliance with Colorado pay transparency regulations. If this role does not have Colorado listed as a hiring location, no specific application window applies, and the posting may close at any time based on hiring needs.
Massachusetts:
Massachusetts Legal Notice
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more.
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call **************.
Data Scientist
Data engineer job in Lehi, UT
Data Scientist | Product Management | Lehi, Utah | Hybrid
RLDatix (RLD) is on a mission to help raise the standard of care…everywhere. Trusted by over 10,000 healthcare organizations around the world, our solutions help improve health and care. Our applications ensure that patients receive the best and safest care while supporting the providers who deliver it.
Joining TeamRLD means being part of a global effort of over 2,000 team members in making a difference in healthcare…every day.
We're searching for a US-based Data Scientist to join our Analytics & Insights team, so that we can deliver predictive models, intelligent data pipelines, and actionable insights to improve patient outcomes and operational efficiency. The Data Scientist will design, implement, and scale advanced statistical and machine learning solutions to help healthcare organizations make better, data-driven decisions.
Develop statistical models and machine learning algorithms to support predictive analytics, forecasting, and decision-making in healthcare operations
Extract, clean, and transform complex datasets using SQL, Python, and modern ETL frameworks in order to deliver reliable, scalable data pipelines
Design and implement experiments and hypothesis tests to validate product impact and optimize customer outcomes
Collaborate with engineers, product managers, and clinical experts to translate business questions into data-driven solutions
Implement monitoring, validation, and alerting tools to ensure data quality, accuracy, and system reliability
What Kind of Things We're Most Interested in You Having
Mid-level experience in data science, machine learning, or a related role, ideally within SaaS or healthcare
Proven success in building and deploying end-to-end data solutions (from pipeline to predictive model to production)
In-depth knowledge of statistical analysis, regression, time-series forecasting, clustering, and experimental design
Applied experience with SQL, Python (or R), and distributed data technologies (e.g., Spark, Hadoop, Kafka, or cloud platforms such as AWS/GCP/Azure)
Demonstrated ability to troubleshoot and debug data workflows while maintaining system scalability and availability
Sincere interest in using data science to improve healthcare outcomes and operational efficiency
A knack for working collaboratively across technical and non-technical teams in a fast-paced, global environment
By enabling flexibility in how we work and prioritizing employee wellness, we empower our team to do and be their best. Our benefits package includes health, dental, vision, life, disability insurance, 401K, paid time off, and paid holidays.
RLDatix is an equal opportunity employer, and our employment decisions are made without regard to race, color, religion, age, gender, national origin, disability, handicap, marital status or any other status or condition protected by Federal and/or State laws.
As part of RLDatix's commitment to the inclusion of all qualified individuals, we ensure that persons with disabilities are provided reasonable accommodation in the job application and interview process. If reasonable accommodation is needed to participate in either step, please don't hesitate to send a note to accessibility@rldatix.com.
Salary offers are based on a wide range of factors including location, relevant skills, training, experience, education, and, where applicable, licensure or certifications obtained. Market and organizational factors are also taken into consideration.
Staff Data Scientist
Data engineer job in Lehi, UT
Responsibilities:
- Design and deploy advanced models for occupancy, runtime, cost forecasting, anomaly detection, and preconditioning to enable comfort-aware, energy-efficient control and maintenance.
- Leverage data-driven insights to enhance the accuracy and reliability of Demand Response (DR), Time-of-Use (TOU) shifting, and Virtual Power Plant (VPP) strategies.
- Collaborate with data engineering to modernize legacy structures into robust, documented, and reusable data products that support machine learning and real-time analytics.
- Partner with product, engineering, and analytics teams to embed intelligence into production systems and shape future data-driven energy experiences.
- Translate complex model outputs into clear, actionable recommendations for both technical and non-technical stakeholders.
Day to Day:
One of Insight Global's clients is looking for a Staff Data Scientist to join their team. This individual will design and deploy predictive models that enable intelligent home energy decisions. They will build advanced models for occupancy, runtime, cost forecasting, and anomaly detection. Other responsibilities include optimizing energy operations by improving the precision of Demand Response, Time-of-Use shifting, and Virtual Power Plant strategies through data-driven insights. They will work closely with data engineering teams to enhance data quality and scalability, transforming legacy systems into robust, reusable products that support machine learning and real-time analytics. Additionally, they will translate complex model outputs into clear, actionable insights for both technical and non-technical stakeholders. At times, they will explore advanced AI applications such as generative models and energy forecasting to push the boundaries of predictive analytics within the energy domain.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************
Skills and Requirements
- 5-8 years of experience
- Proven expertise in predictive modeling, forecasting, and applied machine learning techniques (e.g., regression, gradient boosting, time-series analysis, causal inference).
- Hands-on experience working with large-scale event and sensor data, ideally within energy, IoT, or device-driven ecosystems.
- Strong proficiency in Python (including Pandas, NumPy, scikit-learn, PySpark)
- Experience with distributed computing environments such as Spark, Databricks, or GCP.
- Expertise in energy forecasting, thermal/comfort modeling, or Demand Response optimization
- Deep understanding of energy markets, DER, and Virtual Power Plant (VPP) concepts
- Experience applying LLM or generative AI in analytics and optimization
- Advanced degree (MS/PhD) in a quantitative field
- 7+ years of industry experience with proven technical leadership on high-impact modeling initiatives
Lead Data Scientist
Data engineer job in South Jordan, UT
Strider Technologies is on a mission to deliver strategic intelligence that enables faster, more confident decision-making for organizations around the world. As the leading strategic intelligence company, Strider empowers organizations to secure and advance their technology and innovation. We leverage cutting-edge AI technology and proprietary methodologies to transform publicly available data into critical insights. These insights enable organizations to proactively address and respond to risks associated with state-sponsored intellectual property theft, targeted talent acquisition, and supply chain vulnerabilities.
About the Role
Strider is seeking a Lead Data Scientist to architect, build, and scale AI-driven capabilities that power Strider's core products. You'll own some of the most technically challenging problems Strider faces and you'll be responsible for defining what we build and how we do it. This is a technical leadership role tasked with developing high-quality models and systems, setting technical standards, accelerating delivery through the smart use of AI, and grounding all work in objective, measurable impact.
This role is ideal for a hands-on senior contributor who thrives on complexity, loves building scalable systems, and is eager to influence the technical direction of a high-impact data science team.
You'll prototype fast, iterate intelligently, and help shape how AI elevates Strider's ecosystem.
What You Will Do
* Architect data services that power Strider's current and future products - think entity resolution, knowledge graphs, and large-scale systems that transform unstructured data into structured intelligence.
* Drive the integration and adoption of AI and machine learning that make our data science stack faster, smarter, and more autonomous.
* Own end-to-end delivery of complex technical projects, from scoping to deployment, balancing innovation, velocity, and precision.
* Partner cross-functionally with Engineering, Product, and Intelligence teams to translate ambiguous mission statements into technical solutions.
* Define and track metrics for progress and impact, ensuring that every model matures from experimentation to product, delivering quantifiable value.
* Provide technical leadership through code reviews, architectural guidance, and methodological rigor, helping peers unblock and level up.
* Push the team's technical boundaries - propose new methods, prototype novel architectures, advocate for scalable AI systems, and share learnings that move us all forward.
What You Will Need to Be Successful
* 6+ years of hands-on data science experience, including production-grade model development.
* Deep expertise in Python; experience with a search infrastructure (e.g., Elasticsearch), graph-based methods, or distributed data frameworks (Spark, Ray, etc.) is a strong plus.
* Demonstrated success building AI-augmented workflows, tools, or data services that scale across use cases.
* Proven ability to take a nebulous question, frame it as a technical problem, and deliver a working system that people actually use. You know when to go deep and when to ship.
* Curiosity that borders on obsession - you read papers, explore new methodologies, test new tools, and challenge yourself because you love it.
* Relentless pursuit of excellence.
Why Join This Team
* You'll work at the intersection of AI, data science, and intelligence, developing systems that power Strider's most critical missions.
* You'll play a central role in transitioning Strider from smart people solving hard problems to smart systems solving them at scale.
* You'll have the autonomy to innovate, designing architecture, improving processes, and influencing the evolution of Strider's AI capabilities.
* You'll collaborate with a team of motivated engineers, data scientists, and intelligence analysts committed to pushing technical boundaries and redefining what is possible in data-driven intelligence.
Strider is an equal opportunity employer. We are committed to fostering an inclusive workplace and do not discriminate against employees or applicants based on race, color, religion, gender, national origin, age, disability, genetic information, or any other characteristic protected by applicable law. We comply with all relevant employment laws in the locations where we operate. This commitment applies to all aspects of employment, including recruitment, hiring, promotion, compensation, and professional development.
Data Scientist
Data engineer job in Provo, UT
MeetCute needs a data scientist
MeetCute has a lot of data, and needs to mine that data to #hackdating. As a growing company, we're looking for smart, curious people to answer questions and provide insights that make MeetCute a better place to find a date.
The main technologies you'll be working with are Python and MySQL. Several years of data science experience are preferred.
Requirements
You need to be:
Independent
Smart
Good at communicating
A capable Python programmer
Able to think critically and effectively about user experience
Effective at turning open-ended projects into concrete deliverables
It's also good if you have:
Math or CS degree
Familiarity with IPython notebook (or Jupyter), pandas, Hadoop
How does a MeetCute data scientist #hackdating?
First, and most importantly, is the design and analysis of experiments. We run a substantial number of tests in the process of developing our project, and we want to make sure we come to the right conclusions.
Data scientists are also often called upon to answer questions about how the site currently functions, to inform and guide future product decisions that we make. These questions tend to be open-ended, so the ability to go from a vague question to a concrete answer is essential.
How can you join our team?
Send us your resume and tell us why you're excited to join us
CORP - Data Scientist
Data engineer job in Lehi, UT
A Company is redefining home energy intelligence through data and AI to enable personalized comfort, energy efficiency, and demand-response optimization across millions of connected homes. We are seeking a Staff Data Scientist to design and deploy predictive models that enable intelligent home energy decisions: from forecasting comfort and cost to optimizing EV charging and demand response events.
**Responsibilities:**
+ Develop Predictive Models: Build and deploy advanced models for occupancy, runtime, cost forecasting, anomaly detection, and preconditioning to enable comfort-aware, energy-efficient control and maintenance.
+ Optimize Energy Operations: Use data-driven insights to improve the reliability and precision of Demand Response (DR), Time-of-Use (TOU) shifting, and Virtual Power Plant (VPP) strategies.
+ Advance Data Quality & Scalability: Partner with data engineering to transform legacy data structures into robust, documented, and reusable data products that support ML and real-time analytics.
+ Cross-Functional Collaboration: Work closely with product, engineering, and analytics teams to embed intelligence into production systems and shape future data-driven energy experiences.
+ Communicate Impact: Translate complex model outcomes into actionable insights for both technical and non-technical audiences.
**Experience:**
+ Proven expertise in predictive modeling, forecasting, and applied ML (e.g., regression, gradient boosting, time-series, causal inference).
+ Experience working with large-scale event and sensor data, preferably within energy, IoT, or device-driven ecosystems.
+ Strong proficiency in Python (Pandas, NumPy, scikit-learn, PySpark) and experience with distributed compute environments (Spark, Databricks, GCP).
+ Ability to take models from concept to production in collaboration with engineering partners.
+ Skilled in statistical analysis, feature engineering, and experimental design (e.g., A/B testing).
+ Excellent communication and storytelling skills for complex, data-driven topics.
**Skills:**
+ Experience with energy forecasting, thermal modeling, or Demand Response optimization.
+ Understanding of energy markets, Distributed Energy Resources (DER), and Virtual Power Plant (VPP) concepts.
+ Familiarity with LLM or generative AI applications in analytics and optimization.
+ 5+ years of industry experience, including demonstrated technical leadership on high-impact modeling initiatives.
**Education:**
+ Advanced degree (MS/PhD) in a quantitative field such as Statistics, Computer Science, or Engineering.
**About US Tech Solutions:**
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit *********************** (**********************************).
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Google Cloud Data & AI Engineer
Data engineer job in Salt Lake City, UT
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the base salary pay range for Consultant level is $105,000 to $162,000. For Senior Consultant level, the base salary is $120,000 to $186,000. For Principal level, the base salary is $122,000 to $189,000. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 11/17.
#LI-FB1
Senior Data Engineer
Data engineer job in South Jordan, UT
What you'll do:
Onboard new geospatial datasets (imagery, parcels, extracted polygons) into our systems, and build ETL pipelines to automate these processes
Manage the interface between our AI systems and the input data they need to operate via streaming systems, storage systems, and caching systems
Be aware of and manage 3rd-party provider rate limits, and architect our systems to gracefully deal with these limits while maximizing our throughput
Participate in an agile product development process, where collaboration with stakeholders is a vital step to building what is needed
Challenge and be challenged on a diverse, collaborative, and brilliant team
Write automated test suites to ensure the quality of your code
Contribute to open-source geospatial software
Build solutions that enable new products - typically involving large-scale or intricate geospatial techniques
Build in system quality from the beginning by writing unit & integration tests and integrating with logging, metrics, and observability systems
Requirements
What you bring:
Good-to-expert-level understanding of geospatial systems, concepts, patterns, and software, including both legacy formats and software as well as the newest open-source packages and tools
Professional experience writing production-ready Python code that leverages modern software development best practices (automated testing, CI/CD, observability)
Experience working on a team of developers, maintaining a shared codebase, and having your code reviewed before merging
Strong DB Expertise in an Amazon environment (RDS, Postgres, and DynamoDB)
Strong ETL Experience (especially in extraction and ingestion of 3rd party data)
Nice-to-haves:
Familiarity with machine learning concepts
Familiarity with asynchronous programming
Benefits
Key competencies at Arturo:
Willingness to learn - You have an insatiable desire to continue growing, a fearless approach to the unknown, and love a challenge
Teamwork/Collaboration - You like working with others; you participate actively and enjoy sharing the responsibilities and rewards. You proactively work to strengthen our team. And you definitely have a sense of humor
Critical Thinking - You incorporate analysis, interpretation, inference, explanation, self-regulation, open-mindedness, and problem-solving in everything you do
Drive for Results - You keep looking forward, solve problems and participate in the success of our growing organization
Lead Data Engineer
Data engineer job in Pleasant Grove, UT
About Us
Kenect is on a mission to revolutionize customer communication and engagement for businesses across North America. Founded with a deep understanding of the challenges businesses face in connecting with their customers, Kenect helps companies streamline communication, enhance customer satisfaction, and drive growth through its innovative messaging and reputation platform. Trusted by thousands of businesses, our passionate team is committed to building technology that fosters closer connections and helps businesses thrive in a digital-first world.
About This Role
As the Lead Data Engineer, you will be the architectural and technical leader for Kenect's core data ecosystem, including designing and implementing a robust Customer Data Platform (CDP). This platform will unify batch and real-time data from sources such as DMS, CRMs, social networks, and web traffic to create comprehensive ML-ready 360-degree customer profiles. You will work closely with a Product Manager to align data initiatives with the product roadmap, ensuring the creation of high-value data products prioritized to deliver customer-facing value. In collaboration with our product, engineering, and AI Platform teams, you will engineer the foundational data infrastructure that enables advanced digital initiatives like real-time personalization, predictive modeling, and targeted marketing campaigns.
In this role, you will help to define the architecture, tools, and strategic execution for data pipelines, the data lake, our cloud data warehouse, and integrations with marketing automation tools. Your work will directly support Kenect's mission to transform the customer experience at tens of thousands of dealerships.
What you will be doing
Designing and implementing highly scalable batch and real-time data pipelines using modern tools such as GCP Dataflow (Apache Beam), dbt, and managed Apache Spark (Dataproc).
Architecting and managing a secure, performance-optimized data lake and cloud data warehouse solution in Google Cloud using BigQuery, Cloud Storage (GCS), and open formats like Apache Iceberg.
Building and optimizing high-throughput streaming data pipelines with GCP Pub/Sub and Dataflow to support real-time data ingestion and processing.
Designing and engineering robust feature pipelines to deliver high-quality, low-latency, ML-ready datasets to the AI Platform team for model training and serving.
Developing integrations with Customer Data Platforms (e.g., Twilio Segment, RudderStack) and marketing automation systems to ensure a closed-loop data strategy.
Collaborating with engineers, analysts, and Data Scientists on the AI Platform team to build data solutions that support advanced use cases like real-time personalization, predictive scoring, and segmentation.
Defining strategies for multi-tenant SaaS data solutions to ensure scalability, performance, security, and cost-efficiency.
Leading technical initiatives to adopt new technologies and best practices for high-volume data engineering.
Ensuring data quality, governance, and compliance with industry standards and regulations.
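The profile-unification step described above can be sketched in miniature with plain Python (the event shapes, field names, and timestamps here are invented for illustration; in production this logic would run inside a Dataflow/Beam or Spark job rather than in-process):

```python
from collections import defaultdict
from typing import Any

def merge_events(events: list[dict[str, Any]]) -> dict[str, dict[str, Any]]:
    """Fold raw events from multiple sources into one profile per customer,
    keeping the most recent value for each attribute (last-write-wins)."""
    profiles: dict[str, dict[str, Any]] = defaultdict(dict)
    for event in sorted(events, key=lambda e: e["ts"]):  # replay oldest first
        profile = profiles[event["customer_id"]]
        profile.update(event.get("attributes", {}))
        profile["last_seen"] = event["ts"]
    return dict(profiles)

# Hypothetical events from three of the source systems named above
events = [
    {"customer_id": "c1", "ts": 1, "attributes": {"source": "crm", "email": "a@x.com"}},
    {"customer_id": "c1", "ts": 3, "attributes": {"source": "web", "page": "/pricing"}},
    {"customer_id": "c2", "ts": 2, "attributes": {"source": "dms", "vehicle": "truck"}},
]
profiles = merge_events(events)
```

Last-write-wins is only one possible merge policy; a real CDP would also handle identity resolution and per-attribute precedence rules.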
Skills & Qualifications
8+ years of professional experience in data engineering, with a focus on high-volume batch and streaming data architectures.
Expertise with core GCP data products (e.g., BigQuery, Dataflow, Pub/Sub, Cloud Composer, Dataproc), plus experience integrating with Vertex AI services.
Proficiency in Python and modern data processing tools like GCP Dataflow (Apache Beam), dbt, and Apache Spark.
Hands-on experience with feature engineering, feature store patterns, and designing data pipelines for low-latency ML serving.
Strong understanding of CDPs and tools like Twilio Segment or RudderStack.
Hands-on experience with streaming tools like GCP Pub/Sub, Kafka, or AWS SNS is highly desirable.
Experience with multi-tenant SaaS architecture is a strong plus.
Excellent communication skills and a collaborative approach to solving complex problems, particularly when engaging with platform teams.
What Kenect Offers!
• Health, Dental, Vision, Life & Disability Insurance
• Your birthday is a paid day off
• Onsite gym
• Breakroom full of snacks and drinks
• Convenient location next to freeway entrance/exit
We believe in hiring self-motivated team members who can run alongside us without needing to be “managed” along the way. Yes, we have managers and 1:1s. Yes, we believe in giving open two-way feedback. We also believe in having team members who can run without the daily guidance that some companies prefer.
Kenect is an equal opportunity employer. We are an organization comprised of people of all kinds of backgrounds, and we believe this mix is precisely what makes us strong. All employment decisions at Kenect are based on business needs, job requirements, and individual qualifications without regard to race, color, religion or belief, family or parental status, or any other status protected under federal, state, or local law.
Data Scientist
Data engineer job in Sandy, UT
# Job Description: AI Task Evaluation & Statistical Analysis Specialist
## Role Overview
We're seeking a data-driven analyst to conduct comprehensive failure analysis on AI agent performance across finance-sector tasks. You'll identify patterns, root causes, and systemic issues in our evaluation framework by analyzing task performance across multiple dimensions (task types, file types, criteria, etc.).
## Key Responsibilities
- **Statistical Failure Analysis**: Identify patterns in AI agent failures across task components (prompts, rubrics, templates, file types, tags)
- **Root Cause Analysis**: Determine whether failures stem from task design, rubric clarity, file complexity, or agent limitations
- **Dimension Analysis**: Analyze performance variations across finance sub-domains, file types, and task categories
- **Reporting & Visualization**: Create dashboards and reports highlighting failure clusters, edge cases, and improvement opportunities
- **Quality Framework**: Recommend improvements to task design, rubric structure, and evaluation criteria based on statistical findings
- **Stakeholder Communication**: Present insights to data labeling experts and technical teams
## Required Qualifications
- **Statistical Expertise**: Strong foundation in statistical analysis, hypothesis testing, and pattern recognition
- **Programming**: Proficiency in Python (pandas, scipy, matplotlib/seaborn) or R for data analysis
- **Data Analysis**: Experience with exploratory data analysis and creating actionable insights from complex datasets
- **AI/ML Familiarity**: Understanding of LLM evaluation methods and quality metrics
- **Tools**: Comfortable working with Excel, data visualization tools (Tableau/Looker), and SQL
## Preferred Qualifications
- Experience with AI/ML model evaluation or quality assurance
- Background in finance or willingness to learn finance domain concepts
- Experience with multi-dimensional failure analysis
- Familiarity with benchmark datasets and evaluation frameworks
- 2-4 years of relevant experience
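As a minimal illustration of the kind of hypothesis test this role involves, here is a two-proportion z-test in plain Python comparing failure rates between two hypothetical task types (the counts are made up for the example):

```python
import math

def two_proportion_z(fail_a: int, n_a: int, fail_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test: does group A's failure rate
    differ from group B's? Returns (z statistic, p-value)."""
    p_a, p_b = fail_a / n_a, fail_b / n_b
    pooled = (fail_a + fail_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. 45 failures out of 100 spreadsheet tasks vs 30 of 100 PDF tasks
z, p = two_proportion_z(45, 100, 30, 100)
```

A significant result (p < 0.05 here) would justify digging into why one task type fails more often, per the root-cause analysis responsibility above.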
Data Scientist, Product Analytics
Data engineer job in Salt Lake City, UT
As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
Product leadership: You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.
Analytics: You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
**Required Skills:**
Data Scientist, Product Analytics Responsibilities:
1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions
**Minimum Qualifications:**
6. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
7. A minimum of 6 years of work experience in analytics (minimum of 4 years with a Ph.D.)
8. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent practical experience
9. Experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), and/or statistical/mathematical software (e.g. R)
**Preferred Qualifications:**
10. Master's or Ph.D. Degree in a quantitative field
**Public Compensation:**
$173,000/year to $242,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Data Engineer
Data engineer job in Lehi, UT
Our Company Changing the world through digital experiences is what Adobe's all about. We give everyone-from emerging artists to global brands-everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.
We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Are you ready to have fun with data? As a Data Engineer focused on cloud spend optimization at Adobe, you'll play a key role in transforming massive amounts of cloud usage data into actionable insights. You'll combine strong data engineering fundamentals with analytical curiosity-helping surface patterns, trends, and opportunities to drive more efficient cloud operations across Adobe's platforms and products.
This role sits at the intersection of data engineering, analytics, and AI innovation. You'll build and maintain data pipelines that power our cost insights, partner with analysts and data scientists to interpret results, and experiment with emerging approaches-including AI Agent development-to automate data analysis and accelerate decision-making.
Key Responsibilities
* Design, build, and maintain scalable and reliable data pipelines for cloud spend and utilization analytics.
* Develop data models and transformations that make complex cloud usage data accessible and useful.
* Analyze large datasets to identify trends, anomalies, and optimization opportunities.
* Partner with data scientists and product engineers to translate findings into business and technical actions.
* Contribute to the development of data-driven tools, including early experimentation with AI Agents for insight generation and automation.
* Ensure data quality, integrity, and performance across all stages of the pipeline.
* Document workflows, participate in code reviews, and continuously improve data processes.
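As an illustrative sketch of the spend rollups such pipelines feed (the table schema, service names, and dollar figures are invented; this is not Adobe's actual stack), here is a minimal aggregation using Python's built-in sqlite3:

```python
import sqlite3

# In-memory table of daily cloud spend by service (figures are made up)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (day TEXT, service TEXT, usd REAL)")
conn.executemany(
    "INSERT INTO spend VALUES (?, ?, ?)",
    [
        ("2024-01-01", "compute", 120.0),
        ("2024-01-01", "storage", 30.0),
        ("2024-01-02", "compute", 125.0),
        ("2024-01-02", "storage", 31.0),
        ("2024-01-03", "compute", 410.0),  # suspicious spike worth flagging
        ("2024-01-03", "storage", 29.0),
    ],
)
# Total spend per service, highest first -- the kind of rollup a cost
# dashboard or anomaly report would sit on top of
totals = conn.execute(
    "SELECT service, SUM(usd) FROM spend GROUP BY service ORDER BY 2 DESC"
).fetchall()
```

In practice the same GROUP BY logic would run in a warehouse over billing exports, with dbt models materializing the rollups.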
Qualifications
* BS in Computer Science, Engineering, or a related field with 4+ years of experience in data engineering or data science.
* Strong proficiency in SQL and Python for data wrangling, automation, and analysis.
* Experience with AWS, dbt, and Airflow (or similar modern data stack tools).
* Solid understanding of data modeling, warehousing concepts, and ETL/ELT pipeline design.
* Comfortable with exploratory data analysis and visualization using tools like Pandas, Matplotlib, or Jupyter.
* Curiosity about AI Agent development and how generative AI can transform analytics workflows.
* Analytical mindset with strong attention to detail and problem-solving skills.
* Strong communication skills and a collaborative, growth-oriented attitude.
Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $113,400 -- $206,300 annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter can share more about the specific salary range for the job location during the hiring process.
At Adobe, for sales roles starting salaries are expressed as total target compensation (TTC = base + commission), and short-term incentives are in the form of sales commission plans. Non-sales roles starting salaries are expressed as base salary and short-term incentives are in the form of the Annual Incentive Plan (AIP).
In addition, certain roles may be eligible for long-term incentives in the form of a new hire equity award.
State-Specific Notices:
California:
Fair Chance Ordinances
Adobe will consider qualified applicants with arrest or conviction records for employment in accordance with state and local laws and "fair chance" ordinances.
Colorado:
Application Window Notice
If this role is open to hiring in Colorado (as listed on the job posting), the application window will remain open until at least the date and time stated above in Pacific Time, in compliance with Colorado pay transparency regulations. If this role does not have Colorado listed as a hiring location, no specific application window applies, and the posting may close at any time based on hiring needs.
Massachusetts:
Massachusetts Legal Notice
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more.
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call **************.
CORP - Data Engineer II
Data engineer job in Lehi, UT
Hybrid Schedule (Onsite Monday, Tuesday, Wednesday, Thursday)
About This Role
Company is redefining home energy intelligence through data and AI to enable personalized comfort, energy efficiency, and demand-response optimization across millions of connected homes. We are seeking a Senior Software Engineer, Data to build and scale the pipelines and data products that transform raw device telemetry into reliable, actionable intelligence.
Data Pipeline Development: Design and maintain scalable ETL/ELT pipelines that process time-series signals from thermostats, weather, schedules, and device analytics.
Core Data Products: Build verified HVAC and Energy data tables (e.g., Run & Drift, Thermal Coefficients, Efficiency Drift) to serve as trusted sources for analytics, modeling, and automation.
Modernization & Quality: Refactor legacy Scala/Akka processes into PySpark or Databricks jobs, improving observability, testing, and CI/CD coverage for upstream feeds.
Integration & Streaming: Manage data sourced from Mongo-based telemetry, Kafka or Pub/Sub streams, and cloud storage (GCS) to ensure reliability and consistency.
Model Enablement: Collaborate with data scientists to generate and operationalize features supporting HVAC runtime prediction, anomaly detection, and DR optimization.
Documentation & Governance: Promote best practices for data lineage, schema documentation, and change control to prevent regressions in production systems.
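A first-pass version of the anomaly screening such telemetry features support can be sketched with the standard library alone (the runtime figures and the 2-sigma threshold are illustrative, not a production detector):

```python
import statistics

def flag_anomalies(runtimes: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of daily HVAC runtimes more than `threshold` sample
    standard deviations from the mean -- a crude drift/anomaly screen."""
    mean = statistics.fmean(runtimes)
    stdev = statistics.stdev(runtimes)
    return [i for i, r in enumerate(runtimes) if abs(r - mean) / stdev > threshold]

# One week of hypothetical daily HVAC runtime hours for a single home;
# day 5 runs nearly twice as long as usual
daily_runtime_hours = [5.1, 4.9, 5.3, 5.0, 5.2, 9.8, 5.1]
anomalies = flag_anomalies(daily_runtime_hours)
```

A production pipeline would compute this per device over sliding windows and control for weather and schedule, but the statistical core is the same.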
Required Qualifications
3+ years of data engineering or backend data systems experience
Strong proficiency in Python, SQL, and distributed data frameworks (PySpark, Databricks)
Hands-on experience with GCP, Kafka/Pub-Sub, and data lake architecture
Ability to read and modernize legacy Scala/Akka codebases
Proven track record building production-grade pipelines that deliver analytics-ready datasets
Strong problem-solving skills in ambiguous, under-documented environments
Preferred Qualifications
Experience with ML platforms and feature engineering workflows (e.g., Vertex AI)
AI/ML application experience (LLMs, computer vision, energy forecasting models)
Background in IoT applications, protocols, and telemetry
Familiarity with specialized databases:
Graph databases (e.g., Neo4j)
Vector databases (e.g., Pinecone)
Experience with data orchestration tools (e.g., Airflow)
Background in Demand Response or home energy automation
Experience implementing data quality metrics, observability, and alerting
Track record of significant cost optimization or performance improvements in data systems