Zions Bancorporation's Enterprise Technology and Operations (ETO) team is transforming what it means to work for a financial institution. With a commitment to technology and innovation, we have been providing our community, clients and colleagues the best experience possible for over 150 years. Help us transform our workforce of the future, today.
We are looking for a highly experienced Staff Data Engineer to lead the end-to-end migration of on-premises data warehouse systems to modern cloud-based data platforms (e.g., AWS, GCP, Azure). In this role, you will partner with our Architecture, Data Engineering, and Technology Enablement teams to design and build new data solutions.
Key Responsibilities:
Contribute to the technical strategy, roadmap, and architectural design of software systems.
Demonstrate strong analytical, organizational, and problem-solving skills, and the ability to implement solutions independently.
Develop and maintain scalable applications using data engineering technologies.
Create and manage CI/CD pipelines for automated deployments.
Mentor and guide engineers on technical aspects of cloud and data development.
Coordinate efforts across multiple teams and departments.
Lead and facilitate technical discussions and presentations.
Actively engage in engineering community channels and events.
Act as a technical advisor and reviewer for critical projects.
Demonstrate emotional intelligence and empathy when interacting with others.
Qualifications:
10+ years of data engineering development experience. Bachelor's degree in computer science, computer engineering, or related field. A combination of education and experience may meet requirements.
Deep experience with the design and management of scalable data pipelines and workflows: Built robust ETL/ELT processes using Databricks Workflows, Kafka, and modern ELT tools. Championed CI/CD and automated testing best practices to ensure reliability, maintainability, and rapid deployment.
Demonstrated expertise in distributed systems and advanced programming: Developed production-grade solutions in Python and PySpark, leveraging performance optimization techniques such as partitioning, caching, parallelism, and efficient data formats.
Optimized cloud infrastructure for cost and performance: Led cost-efficient Spark workload strategies on Databricks by right-sizing clusters (e.g., on-demand drivers, spot/preemptible workers), enabling autoscaling and auto-termination, and utilizing job compute, serverless clusters, cluster pools, tagging, and budget controls.
Architected and led scalable data platforms: Designed high-throughput, distributed data systems including data lakes and warehouses. Applied strong OLTP/OLAP modeling principles to support both real-time and analytical workloads.
Deep understanding of cloud platforms (Google Cloud preferred), cloud-native technologies (e.g., Kubernetes, Docker), and infrastructure-as-code (IaC) practices.
Extensive expertise in managing both cloud and on-premises environments, with strong focus on seamless data and file movement between cloud and on-premises systems to ensure efficient and secure data integration.
Data governance and security: Ensured data governance and compliance with industry standards and regulations through robust security practices and techniques.
Demonstrated ability to mentor and coach engineers and collaborate effectively with cross-functional teams and facilitate clear communication across the organization.
Ability to produce comprehensive technical documentation.
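The cluster-tuning levers named in these qualifications (autoscaling, auto-termination, spot workers with on-demand drivers, tagging) map onto fields of the Databricks Clusters API. As a hedged sketch only, an illustrative cluster definition might look like the following; the node type, runtime version, and all values are assumptions for illustration, not a recommended configuration:

```json
{
  "cluster_name": "nightly-etl",
  "spark_version": "14.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "autoscale": { "min_workers": 2, "max_workers": 8 },
  "autotermination_minutes": 30,
  "aws_attributes": {
    "first_on_demand": 1,
    "availability": "SPOT_WITH_FALLBACK"
  },
  "custom_tags": { "team": "data-engineering", "cost_center": "etl" }
}
```

Here `first_on_demand: 1` keeps the driver on an on-demand instance while workers fall back between spot and on-demand capacity, one common way to trade cost against interruption risk; `custom_tags` supports the cost attribution and budget controls the bullet mentions.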
Location:
Zions Technology Center - 7860 South Bingham Junction Blvd, Midvale, UT 84047.
The Zions Technology Center is a 400,000-square-foot technology campus in Midvale, Utah. Located on the former Sharon Steel Mill superfund site, the sustainably built campus is the company's primary technology and operations center. This modern and environmentally friendly technology center enables Zions to compete for the best technology talent in the state while providing team members with an exceptional work environment with features such as:
Electric vehicle charging stations and close proximity to Historic Gardner Village UTA TRAX station.
At least 75% of the building is powered by on-site renewable solar energy.
Access to outdoor recreation, parks, trails, shareable bikes and locker rooms.
Large modern cafe with a healthy and diverse menu.
Healthy indoor environment with ample natural light and fresh air.
LEED-certified sustainable building featuring low-VOC-emitting construction materials.
Benefits:
Medical, Dental and Vision Insurance - START DAY ONE!
Life and Disability Insurance, Paid Parental Leave and Adoption Assistance
Health Savings (HSA), Flexible Spending (FSA) and dependent care accounts
Paid Training, Paid Time Off (PTO) and 11 Paid Federal Holidays
401(k) plan with company match, Profit Sharing, competitive compensation in line with work experience
Mental health benefits including coaching and therapy sessions
Tuition Reimbursement for qualifying employees
Employee Ambassador preferred banking products
Apply now if you have a passion for impactful outcomes, enjoy working collaboratively with co-workers, and want to make a difference for the clients and communities we serve.
$71k-98k yearly est. 1d ago
Data Scientist, Analytics (Technical Leadership)
Meta
Data engineer job in Salt Lake City, UT
We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
**Required Skills:**
Data Scientist, Analytics (Technical Leadership) Responsibilities:
1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies
3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
5. Build long-term vision and strategy for programs and products
6. Collaborate with executives to define and develop data platforms and instrumentation
7. Effectively communicate insights and recommendations to stakeholders
8. Define success metrics, forecast changes, and set team goals
9. Support developing roadmaps and coordinate analytics efforts across teams
**Minimum Qualifications:**
10. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field (e.g. Mathematics, Statistics, Operations Research), or equivalent practical experience
11. 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
12. 8+ years of work experience leading analytics work in IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization and strategy development
13. Experience working effectively with and leading complex analytics projects across multiple stakeholders and cross-functional teams, including Engineering, PM/TPM, Analytics and Finance
14. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
15. Experience communicating complex technical topics in a clear, precise, and actionable manner
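Item 14 above asks for experience with predictive modeling and experimentation. As a generic, hedged sketch (this is not Meta's methodology, and the data are synthetic), the simplest member of the predictive-modeling family is a one-variable ordinary-least-squares fit:

```python
# Minimal sketch of predictive modeling: a one-variable ordinary
# least squares fit in plain Python. Synthetic data only.

def ols_fit(xs, ys):
    """Return (slope, intercept) minimizing total squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form OLS: slope = cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Noise-free data drawn from y = 2x + 1, so the fit recovers it exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
slope, intercept = ols_fit(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

On real product data one would reach for scikit-learn or statsmodels instead; the point here is only the shape of the technique.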
**Preferred Qualifications:**
16. 10+ years of experience communicating the results of analyses to leadership teams to influence the strategy
17. Masters or Ph.D. Degree in a quantitative field
18. Bachelor's Degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
19. 10+ years of experience doing complex quantitative analysis in product analytics
**Public Compensation:**
$210,000/year to $281,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
$210k-281k yearly 60d+ ago
Data Scientist, Privacy
Datavant
Data engineer job in Salt Lake City, UT
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
As part of the Privacy Science team within Privacy Hub you will play a crucial role in ensuring that privacy of patients is safeguarded in the modern world of data sharing. As well as working on real data, you will be involved in exciting research to keep us as industry leaders in this area, and stimulating discussions on re-identification risk. You will be supported in developing/consolidating data analysis and coding skills to become proficient in the analysis of large health-related datasets.
**You Will:**
+ Critically analyze large health datasets using standard and bespoke software libraries
+ Discuss your findings and progress with internal and external stakeholders
+ Produce high quality reports which summarise your findings
+ Contribute to research activities as we explore novel and established sources of re-identification risk
**What You Will Bring to the Table:**
+ Excellent communication skills. Meticulous attention to detail in the production of comprehensive, well-presented reports
+ A good understanding of statistical probability distributions, bias, error and power as well as sampling and resampling methods
+ Seeks to understand real-world data in context rather than considering it in the abstract
+ Familiarity or proficiency with a programmable data analysis language such as R or Python, and the desire to develop deeper expertise in it
+ Application of scientific methods to practical problems through experimental design, exploratory data analysis and hypothesis testing to reach robust conclusions
+ Strong time management skills and demonstrable experience of prioritising work to meet tight deadlines
+ Initiative and ability to independently explore and research novel topics and concepts as they arise, to expand Privacy Hub's knowledge base
+ An appreciation of the need for effective methods in data privacy and security, and an awareness of the relevant legislation
+ Familiarity with Amazon Web Services cloud-based storage and computing facilities
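The statistical expectations above (probability distributions, sampling and resampling methods, hypothesis testing to reach robust conclusions) can be sketched with a two-sample permutation test in plain Python. This is a generic illustration, not Datavant's methodology; the groups and the seed are invented:

```python
# Minimal sketch of a resampling-based hypothesis test: a two-sample
# permutation test on the absolute difference in group means.
import random

def permutation_test(a, b, n_resamples=10_000, seed=0):
    """Approximate p-value for H0: a and b come from the same distribution."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # relabel group membership at random
        pa, pb = pooled[:len(a)], pooled[len(a):]
        stat = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        if stat >= observed:
            hits += 1
    return hits / n_resamples

control = [4.1, 3.9, 4.0, 4.2, 3.8]    # invented example data
treatment = [5.0, 5.2, 4.9, 5.1, 5.3]
p = permutation_test(control, treatment)
print(f"p = {p:.4f}")  # small p: a gap this large is unlikely under H0
```

A permutation test makes no distributional assumptions, which suits messy real-world health data; in practice `scipy.stats.permutation_test` offers the same idea with more options.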
**Bonus Points If You Have:**
+ Experience creating documents using LaTeX
+ Detailed knowledge of one or more types of health information, e.g., genomics, disease, health images
+ Experience working with or supporting public sector organizations, such as federal agencies (e.g., CMS, NIH, VA, CDC), state health departments, or public health research partners. Familiarity with government data environments, procurement processes, or privacy frameworks in regulated settings is highly valued.
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$104,000-$130,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement here (************************************************** . Know Your Rights (*********************************************************************** , explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it here, (************************************************************** Id=**********48790029&layout Id=**********48795462) by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request, you can find instructions for locating it here (******************************************************************************************************* . Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy (**************************************** .
$104k-130k yearly 15d ago
DATA SCIENTIST
Department of The Air Force
Data engineer job in Clearfield, UT
The PALACE Acquire Program offers you a permanent position upon completion of your formal training plan. As a Palace Acquire Intern you will experience both personal and professional growth while dealing effectively and ethically with change, complexity, and problem solving. The program offers a 3-year formal training plan with yearly salary increases. Promotions and salary increases are based upon your successful performance and supervisory approval.
Overview
Accepting applications
Open & closing dates
09/29/2025 to 09/28/2026
Salary $49,960 to $99,314 per year
Total salary varies depending on location of position
Pay scale & grade GS 7 - 9
Locations
Gunter AFB, AL
Few vacancies
Maxwell AFB, AL
Few vacancies
Davis Monthan AFB, AZ
Few vacancies
Edwards AFB, CA
Few vacancies
Los Angeles, CA
Few vacancies
Travis AFB, CA
Few vacancies
Vandenberg AFB, CA
Few vacancies
Air Force Academy, CO
Few vacancies
Buckley AFB, CO
Few vacancies
Cheyenne Mountain AFB, CO
Few vacancies
Peterson AFB, CO
Few vacancies
Schriever AFB, CO
Few vacancies
Joint Base Anacostia-Bolling, DC
Few vacancies
Cape Canaveral AFS, FL
Few vacancies
Eglin AFB, FL
Few vacancies
Hurlburt Field, FL
Few vacancies
MacDill AFB, FL
Few vacancies
Patrick AFB, FL
Few vacancies
Tyndall AFB, FL
Few vacancies
Robins AFB, GA
Few vacancies
Hickam AFB, HI
Few vacancies
Barksdale AFB, LA
Few vacancies
Hanscom AFB, MA
Few vacancies
Natick, MA
Few vacancies
Aberdeen Proving Ground, MD
Few vacancies
Andrews AFB, MD
Few vacancies
White Oak, MD
Few vacancies
Offutt AFB, NE
Few vacancies
Holloman AFB, NM
Few vacancies
Kirtland AFB, NM
Few vacancies
Nellis AFB, NV
Few vacancies
Rome, NY
Few vacancies
Heath, OH
Few vacancies
Wright-Patterson AFB, OH
Few vacancies
Tinker AFB, OK
Few vacancies
Arnold AFB, TN
Few vacancies
Dyess AFB, TX
Few vacancies
Fort Sam Houston, TX
Few vacancies
Goodfellow AFB, TX
Few vacancies
Lackland AFB, TX
Few vacancies
Randolph AFB, TX
Few vacancies
Hill AFB, UT
Few vacancies
Arlington, VA
Few vacancies
Dahlgren, VA
Few vacancies
Langley AFB, VA
Few vacancies
Pentagon, Arlington, VA
Few vacancies
Fairchild AFB, WA
Few vacancies
Warren AFB, WY
Few vacancies
Remote job: No
Telework eligible: No
Travel Required: Occasional travel - You may be expected to travel for this position.
Relocation expenses reimbursed: No
Appointment type: Internships
Work schedule: Full-time
Service: Competitive
Promotion potential
13
Job family (Series)
* 1560 Data Science Series
Supervisory status: No
Security clearance: Secret
Drug test: No
Position sensitivity and risk: Noncritical-Sensitive (NCS)/Moderate Risk
Trust determination process
* Suitability/Fitness
Financial disclosure: No
Bargaining unit status: No
Announcement number: K-26-DHA-12804858-AKK
Control number: 846709300
This job is open to
The public
U.S. Citizens, Nationals or those who owe allegiance to the U.S.
Students
Current students enrolled in an accredited high school, college or graduate institution.
Recent graduates
Individuals who have graduated from an accredited educational institute or certificate program within the last 2 years or 6 years for Veterans.
Clarification from the agency
This public notice is to gather applications that may or may not result in a referral or selection.
Duties
1. Performs developmental assignments in support of projects assigned to higher-level analysts. Performs minor phases of a larger assignment or work of moderate difficulty where procedures are established and a number of specific guidelines exist. Applies the various steps of accepted data science procedures to search for information and perform well-precedented work.
2. Performs general operations and assignments for portions of a project or study consisting of a series of interrelated tasks or problems. The employee applies judgment in the independent application of methods and techniques previously learned. The employee locates and selects the most appropriate guidelines and modifies to address unusual situations.
3. Participates in special initiatives, studies, and projects. Performs special research tasks designed to utilize and enhance knowledge of work processes and techniques. Works with higher graded specialists in planning and conducting special initiatives, studies, and projects. Assists in preparing reports and briefings outlining study findings and recommendations.
4. Prepares correspondence and other documentation. Drafts or prepares a variety of documents to include newsletter items, responses to routine inquiries, reports, letters, and other related documents.
Requirements
Conditions of employment
* Employee must maintain current certifications
* Successful completion of all training and regulatory requirements as identified in the applicable training plan
* Must meet suitability for federal employment
* Direct Deposit: All federal employees are required to have direct deposit
* Please read this Public Notice in its entirety prior to submitting your application for consideration.
* Males must be registered for Selective Service, see ***********
* A security clearance may be required. This position may require a secret, top-secret or special sensitive clearance.
* If authorized, PCS will be paid IAW JTR and AF Regulations. If receiving an authorized PCS, you may be subject to completing/signing a CONUS agreement. More information on PCS requirements, may be found at: *****************************************
* Position may be subject to random drug testing
* U.S. Citizenship Required
* Disclosure of Political Appointments
* Student Loan Repayment may be authorized
* Recruitment Incentive may be authorized for this position
* Total salary varies depending on location of position
* You will be required to serve a one year probationary period
* Grade Point Average - 2.95 or higher out of a possible 4.0
* Mobility - you may be required to relocate during or after completion of your training
* Work may occasionally require travel away from the normal duty station on military or commercial aircraft
Qualifications
BASIC REQUIREMENT OR INDIVIDUAL OCCUPATIONAL REQUIREMENT:
Degree: Mathematics, statistics, computer science, data science or field directly related to the position. The degree must be in a major field of study (at least at the baccalaureate level) that is appropriate for the position.
You may qualify if you meet one of the following:
1. GS-7: You must have completed or will complete a 4-year course of study leading to a bachelor's from an accredited institution AND must have documented Superior Academic Achievement (SAA) at the undergraduate level in the following:
a) Grade Point Average 2.95 or higher out of a possible 4.0 as recorded on your official transcript or as computed based on 4 years of education or as computed based on courses completed during the final 2 years of curriculum; OR 3.45 or higher out of a possible 4.0 based on the average of the required courses completed in your major field or the required courses in your major field completed during the final 2 years of your curriculum.
2. GS-9: You must have completed 2 years of progressively higher-level graduate education leading to a master's degree or equivalent graduate degree:
a) Grade Point Average - 2.95 or higher out of a possible 4.0 as recorded on your official transcript or as computed based on 4 years of education or as computed based on courses completed during the final 2 years of curriculum; OR 3.45 or higher out of a possible 4.0 based on the average of the required courses completed in your major field or the required courses in your major field completed during the final 2 years of your curriculum. If more than 10 percent of total undergraduate credit hours are non-graded, i.e. pass/fail, CLEP, CCAF, DANTES, military credit, etc. you cannot qualify based on GPA.
KNOWLEDGE, SKILLS AND ABILITIES (KSAs): Your qualifications will be evaluated on the basis of your level of knowledge, skills, abilities and/or competencies in the following areas:
1. Professional knowledge of basic principles, concepts, and practices of data science to apply scientific methods and techniques to analyze systems, processes, and/or operational problems and procedures.
2. Knowledge of mathematics and analysis to perform minor phases of a larger assignment and prepare reports, documentation, and correspondence to communicate factual and procedural information clearly.
3. Skill in applying basic principles, concepts, and practices of the occupation sufficient to perform routine to difficult but well-precedented assignments in data science analysis.
4. Ability to analyze, interpret, and apply data science rules and procedures in a variety of situations and recommend solutions to senior analysts.
5. Ability to analyze problems to identify significant factors, gather pertinent data, and recognize solutions.
6. Ability to plan and organize work and confer with co-workers effectively.
PART-TIME OR UNPAID EXPERIENCE: Credit will be given for appropriate unpaid and or part-time work. You must clearly identify the duties and responsibilities in each position held and the total number of hours per week.
VOLUNTEER WORK EXPERIENCE: Refers to paid and unpaid experience, including volunteer work done through National Service Programs (i.e., Peace Corps, AmeriCorps) and other organizations (e.g., professional; philanthropic; religious; spiritual; community; student and social). Volunteer work helps build critical competencies, knowledge and skills that can provide valuable training and experience that translates directly to paid employment. You will receive credit for all qualifying experience, including volunteer experience.
Education
IF USING EDUCATION TO QUALIFY: If the position has a positive degree requirement or education forms the basis for qualifications, you MUST submit transcripts with the application. Official transcripts are not required at the time of application; however, if the position has a positive degree requirement, or you are qualifying based on education alone or in combination with experience, transcripts must be verified prior to appointment. Education must be accredited by an accrediting institution recognized by the U.S. Department of Education. Click here to check accreditation.
FOREIGN EDUCATION: Education completed in foreign colleges or universities may be used to meet the requirements. You must show proof that the education credentials have been deemed at least equivalent to those gained in a conventional U.S. education program. It is your responsibility to provide such evidence when applying.
Additional information
For DHA Positions:
These positions are being filled under Direct-Hire Authority for the Department of Defense for Post-Secondary Students and Recent Graduates. The Secretary of the Air Force has delegated authority by the Office of the Secretary of Defense to directly appoint qualified post-secondary students and recent graduates directly into competitive service positions; these positions may be professional or administrative occupations and are located Air Force-Wide. Positions may be filled as permanent or term with a full-time or part-time work schedule. Pay will vary by geographic location.
* The term "Current post-secondary student" means a person who is currently enrolled in, and in good academic standing at a full-time program at an institution of higher education; and is making satisfactory progress toward receipt of a baccalaureate or graduate degree; and has completed at least one year of the program.
* The term "recent graduate" means a person who was awarded a degree by an institution of higher education not more than two years before the date of the appointment of such person, except in the case of a person who has completed a period of obligated service in a uniform service of more than four years.
Selective Service: Males born after 12-31-59 must be registered or exempt from Selective Service. For additional information, click here.
Direct Deposit: All federal employees are required to have direct deposit.
If you are unable to apply online, view the following link for information regarding Alternate Application. The Vacancy ID is
If you have questions regarding this announcement and have hearing or speech difficulties click here.
Tax Law Impact for PCS: On 22-Dec-2017, Public Law 115-97 - the "Tax Cuts and Jobs Act of 2017" suspended qualified moving expense deductions along with the exclusion for employer reimbursements and payments of moving expenses effective 01-Jan-2018 for tax years 2018 through 2025. The law made taxable certain reimbursements and other payments, including driving mileage, airfare and lodging expenses, en-route travel to the new duty station, and temporary storage of those items. The Federal Travel Regulation Bulletin (FTR) 18-05 issued by General Services Administration (GSA) has authorized agencies to use the Withholding Tax Allowance (WTA) and Relocation Income Tax Allowance (RITA) to pay for "substantially all" of the increased tax liability resulting from the "2018 Tax Cuts and Jobs Act" for certain eligible individuals. For additional information on WTA/RITA allowances and eligibilities please click here. Subsequently, FTR Bulletin 20-04 issued by GSA, provides further information regarding NDAA FY2020, Public Law 116-92, and the expansion of eligibility beyond "transferred" for WTA/RITA allowances. For additional information, please click here.
Candidates should be committed to improving the efficiency of the Federal government, passionate about the ideals of our American republic, and committed to upholding the rule of law and the United States Constitution.
Benefits
A career with the U.S. government provides employees with a comprehensive benefits package. As a federal employee, you and your family will have access to a range of benefits that are designed to make your federal career very rewarding. Learn more about federal benefits.
Review our benefits
Eligibility for benefits depends on the type of position you hold and whether your position is full-time, part-time or intermittent. Contact the hiring agency for more information on the specific benefits offered.
How you will be evaluated
You will be evaluated for this job based on how well you meet the qualifications above.
For DHA Positions:
These positions are being filled under Direct-Hire Authority for the DoD for Post-Secondary Students and Recent Graduates. The Secretary of the Air Force has delegated authority by the Office of the Secretary of Defense to directly appoint qualified students and recent graduates directly into competitive service positions; positions may be professional or administrative occupations and located Air Force-Wide. Positions may be filled as permanent/term with a full-time/part-time work schedule. Pay will vary by geographic location.
* The term "Current post-secondary student" means a person who is currently enrolled and in good academic standing at a full-time program at an institution of higher education; and is progressing toward a baccalaureate or graduate degree; and has completed at least 1 year of the program.
* The term "recent graduate" means a person awarded a degree by an institution of higher education not more than 2 years before the date of the appointment of such person, except in the case of a person who has completed a period of obligated service in a uniform service of more than 4 years.
Your latest resume will be used to determine your qualifications.
Your application package (resume, supporting documents, and responses to the questionnaire) will be used to determine your eligibility, qualifications, and quality ranking for this position. Please follow all instructions carefully. Errors or omissions may affect your rating or consideration for employment.
Your responses to the questionnaire may be compared to the documents you submit. The documents you submit must support your responses to the online questionnaire. If your application contradicts or does not support your questionnaire responses, you will receive a rating of "not qualified" or "insufficient information" and you will not receive further consideration for this job.
Applicants who disqualify themselves will not be evaluated further.
Required Documents
The following documents are required and must be provided with your application for this Public Notice. Applicants who do not submit required documentation to determine eligibility and qualifications will be eliminated from consideration. Other documents may be required based on the eligibility/eligibilities you are claiming. Click here to view the AF Civilian Employment Eligibility Guide and the required documents you must submit to substantiate the eligibilities you are claiming.
* Online Application - Questionnaire
* Resume: Your resume may NOT exceed two pages, and the font size should not be smaller than 10 pts. You will not be considered for this vacancy if your resume is illegible/unreadable. Additional information on resume requirements can be located under "
$50k-99.3k yearly 26d ago
AWS Data Migration Consultant
Slalom 4.6
Data engineer job in Salt Lake City, UT
Candidates can live within commutable distance to any Slalom office in the US. We have a hybrid and flexible environment. Who You'll Work With As a modern technology company, we've never met a technical challenge we didn't like. We enable our clients to learn from their data, create incredible digital experiences, and make the most of new technologies. We blend design, engineering, and analytics expertise to build the future. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
We are seeking an experienced Cloud Data Migration Architect with deep expertise in SQL Server, Oracle, DB2, or a combination of these platforms, to lead the design, migration, and optimization of scalable database solutions in the AWS cloud. This role will focus on modernizing on-premises database systems by architecting high-performance, secure, and reliable AWS-hosted solutions.
As a key technical leader, you will work closely with data engineers, cloud architects, and business stakeholders to define data strategies, lead complex database migrations, build out ETL pipelines, and optimize performance across legacy and cloud-native environments.
What You'll Do
* Design and optimize database solutions on AWS, including Amazon RDS, EC2-hosted instances, and advanced configurations like SQL Server Always On or Oracle RAC (Real Application Clusters).
* Lead and execute cloud database migrations using AWS Database Migration Service (DMS), Schema Conversion Tool (SCT), and custom automation tools.
* Architect high-performance database schemas, indexing strategies, partitioning models, and query optimization techniques.
* Optimize complex SQL queries, stored procedures, functions, and views to ensure performance and scalability in the cloud.
* Implement high-availability and disaster recovery (HA/DR) strategies including Always-On, Failover Clusters, Log Shipping, and Replication, tailored to each RDBMS.
* Ensure security best practices are followed including IAM-based access control, encryption, and compliance with industry standards.
* Collaborate with DevOps teams to implement Infrastructure-as-Code (IaC) using tools like Terraform, CloudFormation, or AWS CDK.
* Monitor performance using tools such as AWS CloudWatch, Performance Insights, Query Store, Dynamic Management Views (DMVs), or Oracle-native tools.
* Work with software engineers and data teams to integrate cloud databases into enterprise applications and analytics platforms.
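The indexing and query-optimization responsibilities above can be sketched in miniature with SQLite's `EXPLAIN QUERY PLAN`: the same before/after check applies to SQL Server, Oracle, or DB2, just with their own plan tools. The table, column names, and data here are invented for illustration only.

```python
import sqlite3

# Build a small table without an index, then compare query plans
# before and after adding one. All names/data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def query_plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = query_plan("SELECT total FROM orders WHERE customer_id = 42")  # full scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = query_plan("SELECT total FROM orders WHERE customer_id = 42")   # index search
```

The same discipline (inspect the plan, add the index, confirm the plan changed) is what an indexing strategy review looks like on any RDBMS.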
What You'll Bring
* 5+ years of experience in database architecture, design, and administration with at least one of the following: SQL Server, Oracle, or DB2.
* Expertise in one or more of the following RDBMS platforms: Microsoft SQL Server, Oracle, DB2.
* Hands-on experience with AWS database services (RDS, EC2-hosted databases).
* Strong understanding of HA/DR solutions and cloud database design patterns.
* Experience with ETL development and data integration, using tools such as SSIS, AWS Glue, or custom solutions.
* Familiarity with AWS networking components (VPCs, security groups) and hybrid cloud connectivity.
* Strong troubleshooting and analytical skills to resolve complex database and performance issues.
* Ability to work independently and lead database modernization initiatives in collaboration with engineering and client stakeholders.
Nice to Have
* AWS certifications such as AWS Certified Database - Specialty or AWS Certified Solutions Architect - Professional.
* Experience with NoSQL databases or hybrid data architectures.
* Knowledge of analytics and big data tools (e.g., Snowflake, Redshift, Athena, Power BI, Tableau).
* Familiarity with containerization (Docker, Kubernetes) and serverless technologies (AWS Lambda, Fargate).
* Experience with DB2 on-premise or cloud-hosted environments.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position, the target base salary pay ranges are as follows:
In Boston, Houston, Los Angeles, Orange County, Seattle, San Diego, Washington DC, New York, and New Jersey, the range for the Consultant level is $105,000-$147,000; for the Senior Consultant level, $120,000-$169,000; and for the Principal level, $133,000-$187,000.
In all other markets, the range for the Consultant level is $96,000-$135,000; for the Senior Consultant level, $110,000-$155,000; and for the Principal level, $122,000-$172,000.
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We will accept applications until 1/31/2026 or until the positions are filled.
$133k-187k yearly 4d ago
Data Engineer
Strider Technologies 3.6
Data engineer job in South Jordan, UT
Strider Technologies delivers strategic intelligence that helps organizations make faster, more confident decisions in an increasingly complex global environment. Using cutting-edge AI and proprietary methodologies, we transform open-source data into actionable insights that help protect technology, talent, and supply chains from nation-state risks.
About the Role
Strider Technologies is building its first internal data platform - and we're looking for a Data Engineer to help design, implement, and scale it. You'll work alongside the Director of Analytics to establish core data infrastructure, pipelines, and integrations that power analytics, business intelligence, and data science across the company.
This is a foundational role: you'll be a key builder and early technical voice in how data is engineered, modeled, and served to business users. As our first Data Engineer, you'll help us set standards for data governance, reliability, and performance, while also mentoring future team members as we grow.
Key Responsibilities
Design, implement, and maintain scalable data pipelines (ETL/ELT) in a modern cloud data warehouse (e.g., Databricks, Snowflake, BigQuery, or Redshift).
Ingest, transform, and integrate internal system data (e.g., CRM, product, finance) and external vendor datasets.
Prepare and process structured and unstructured data to support analytics, business intelligence, and automation use cases.
Model and optimize datasets using Lakehouse or warehouse-based architectures for downstream consumption.
Collaborate closely with analysts and cross-functional partners to build production-grade data assets.
Champion and implement best practices for data quality, observability, and lineage.
Partner with cloud infrastructure teams (AWS, Azure, or GCP) to ensure scalable, secure, and cost-efficient systems.
Explore and deploy AI-enabled automation tools and workflows across the business.
Advocate for productivity-boosting technologies such as ChatGPT, Copilot, Cursor, and dbt-assist.
Help document standards, develop reusable components, and mentor junior data engineers.
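The data-quality and observability practices listed above usually start with simple, explicit validation rules in the pipeline itself. A minimal sketch, with hypothetical field names (`user_id`, `email`) and rules that are not Strider's actual schema:

```python
# Hedged sketch: batch-level data-quality checks that split rows into
# passed/failed with a reason attached, so failures are observable.
def run_quality_checks(rows):
    """Return (passed_rows, failures) for a batch of dicts."""
    passed, failures = [], []
    seen_ids = set()
    for row in rows:
        problems = []
        if row.get("user_id") is None:
            problems.append("null user_id")        # completeness check
        elif row["user_id"] in seen_ids:
            problems.append("duplicate user_id")   # uniqueness check
        if "@" not in (row.get("email") or ""):
            problems.append("malformed email")     # validity check
        if problems:
            failures.append((row, problems))
        else:
            seen_ids.add(row["user_id"])
            passed.append(row)
    return passed, failures

batch = [
    {"user_id": 1, "email": "a@example.com"},
    {"user_id": 1, "email": "b@example.com"},     # duplicate id
    {"user_id": None, "email": "c@example.com"},  # null id
]
good, bad = run_quality_checks(batch)
```

In practice the same idea is expressed as dbt tests or warehouse constraints; the point is that every failure is captured with a reason rather than silently dropped.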
Key Qualifications
5+ years of experience in data engineering or analytics engineering roles.
Strong proficiency with SQL and Python (or Scala).
Hands-on experience with modern cloud data warehouses (Databricks, Snowflake, BigQuery, or Redshift).
Familiarity with tools in the modern data stack (e.g., dbt, Airflow, Fivetran, Dagster).
Proven ability to manage and transform large, complex datasets across structured and unstructured formats.
Solid understanding of data warehousing concepts, dimensional modeling, and Git-based version control.
Experience leveraging AI-assisted development tools and enthusiasm for scaling their use internally.
Comfortable working in fast-paced environments and collaborating across technical and non-technical teams.
Why Join This Team
This is a greenfield opportunity to shape the future of Strider's internal data ecosystem.
You'll work cross-functionally with leaders across the company, gaining visibility and accelerating your career.
Join a mission-driven organization focused on protecting innovation and economic security.
Help build a modern, AI-empowered analytics practice from the ground up.
Enjoy a flexible, hybrid work environment that values autonomy, innovation, and collaboration.
Benefits
Competitive Compensation
Company Equity Options
Flexible PTO
Wellness Reimbursement
US Holidays (Office Closed)
Paid Parental Leave
Comprehensive Medical, Dental, and Vision Insurance
401(k) Plan
Strider is an equal opportunity employer. We are committed to fostering an inclusive workplace and do not discriminate against employees or applicants based on race, color, religion, gender, national origin, age, disability, genetic information, or any other characteristic protected by applicable law. We comply with all relevant employment laws in the locations where we operate. This commitment applies to all aspects of employment, including recruitment, hiring, promotion, compensation, and professional development.
$81k-116k yearly est. Auto-Apply 60d+ ago
Sr Data Engineer, Palantir
The Hertz Corporation 4.3
Data engineer job in Salt Lake City, UT
**A Day in the Life:** We are seeking a talented **Sr Data Engineer, Palantir (experience required)** to join our Strategic Data & Analytics team working on Hertz's strategic applications and initiatives. This role will work in multi-disciplinary teams rapidly building high-value products that directly impact our financial performance and customer experience. You'll build cloud-native, large-scale, employee-facing software using modern technologies including React, Python, Java, AWS, and Palantir Foundry.
The ideal candidate will have strong development skills across the full stack, a growth mindset, and a passion for building software at a sustainable pace in a highly productive engineering culture. Experience with Palantir Foundry is highly preferred but not required - we're looking for engineers who are eager to learn and committed to engineering excellence.
We expect the starting salary to be around $135k, but it will be commensurate with experience.
**What You'll Do:**
Day-to-Day Responsibilities
+ Work in balanced teams consisting of Product Managers, Product Designers, and engineers
+ Test first - We strive for Test-Driven Development (TDD) for all production code
+ CI (Continuous Integration) everything - Automation is core to our development process
+ Architect user-facing interfaces and design functions that help users visualize and interact with their data
+ Contribute to both frontend and backend codebases to enhance and develop projects
+ Build software at a sustainable pace to ensure longevity, reliability, and higher quality output
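The "test first" bullet above describes the red-green rhythm of Test-Driven Development: write the failing tests, then the minimal code that makes them pass. A small sketch using Python's `unittest`; `rental_discount` and its discount tiers are invented for illustration, not a Hertz business rule:

```python
import unittest

# TDD sketch: in practice the three tests below would be written first
# (and fail), then this minimal implementation would make them pass.
def rental_discount(days):
    """Longer rentals earn a larger percentage discount (illustrative rule)."""
    if days >= 28:
        return 0.20
    if days >= 7:
        return 0.10
    return 0.0

class RentalDiscountTest(unittest.TestCase):
    def test_short_rental_gets_no_discount(self):
        self.assertEqual(rental_discount(2), 0.0)

    def test_weekly_rental_gets_ten_percent(self):
        self.assertEqual(rental_discount(7), 0.10)

    def test_monthly_rental_gets_twenty_percent(self):
        self.assertEqual(rental_discount(30), 0.20)

suite = unittest.TestLoader().loadTestsFromTestCase(RentalDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the tests encode the behavior before the code exists, CI can run them on every commit - which is what "CI everything" amounts to day to day.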
Frontend Development
+ Design and develop responsive, intuitive user interfaces using React and modern JavaScript/TypeScript
+ Build reusable component libraries and implement best practices for frontend architecture
+ Generate UX/UI designs (no dedicated UX/UI designers on team) with considerations for usability and efficiency
+ Optimize applications for maximum speed, scalability, and accessibility
+ Develop large-scale, web and mobile software utilizing appropriate technologies for use by our employees
Backend Development
+ Develop and maintain RESTful APIs and backend services using Python or Java
+ Design and implement data models and database schemas
+ Deploy to cloud environments (primarily AWS)
+ Integrate with third-party services and APIs
+ Write clean, maintainable, and well-documented code
Palantir Foundry Development (Highly Preferred)
+ Build custom applications and integrations within the Palantir Foundry platform
+ Develop Ontology-based applications leveraging object types, link types, and actions
+ Create data pipelines and transformations using Python transforms
+ Implement custom widgets and user experiences using the Foundry SDK
+ Design and build functions that assist users to visualize and interact with their data
Product Development & Delivery
+ Research problems and break them into deliverable parts
+ Work with a Lean mindset and deliver value quickly
+ Participate in all stages of the product development and deployment lifecycle
+ Conduct code reviews and provide constructive feedback to team members
+ Work with product managers and stakeholders to define requirements and deliverables
+ Contribute to architectural decisions and technical documentation
**What We're Looking For:**
+ Experience with Palantir Foundry platform, required
+ 5+ years in web front-end or mobile development
+ Bachelor's or Master's degree in Computer Science or other related field, preferred
+ Strong proficiency in React, JavaScript/TypeScript, HTML, and CSS for web front-end development
+ Strong knowledge of one or more Object Oriented Programming or Functional Programming languages such as JavaScript, Typescript, Java, Python, or Kotlin
+ Experience with RESTful API design and development
+ Experience deploying to cloud environments (AWS preferred)
+ Understanding of version control systems, particularly GitHub
+ Experience with relational and/or NoSQL databases
+ Familiarity with modern frontend build tools and package managers (e.g., Webpack, npm, yarn)
+ Experience with React, including React Native for mobile app development, preferred
+ Experience in Android or iOS development, preferred
+ Experience with data visualization libraries (e.g., D3.js, Plotly, Chart.js), preferred
+ Familiarity with CI/CD pipelines and DevOps practices, preferred
+ Experience with Spring framework, preferred
+ Working knowledge of Lean, User Centered Design, and Agile methodologies
+ Strong communication skills and ability to collaborate effectively across teams
+ Growth mindset - Aptitude and willingness to learn new technologies
+ Empathy - Kindness and empathy when building software for end users
+ Pride - Takes pride in engineering excellence and quality craftsmanship
+ Customer obsession - Obsessed with the end user experience of products
+ Strong problem-solving skills and attention to detail
+ Ability to work independently and as part of a balanced, multi-disciplinary team
+ Self-motivated with a passion for continuous learning and improvement
**What You'll Get:**
+ Up to 40% off the base rate of any standard Hertz Rental
+ Paid Time Off
+ Medical, Dental & Vision plan options
+ Retirement programs, including 401(k) employer matching
+ Paid Parental Leave & Adoption Assistance
+ Employee Assistance Program for employees & family
+ Educational Reimbursement & Discounts
+ Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
+ Perks & Discounts -Theme Park Tickets, Gym Discounts & more
The Hertz Corporation operates the Hertz, Dollar Car Rental, and Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, the Caribbean, Latin America, Africa, the Middle East, Asia, Australia, and New Zealand. The Hertz Corporation is one of the largest worldwide airport general-use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.
**US EEO STATEMENT**
At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company.
Individuals are encouraged to apply for positions because of the characteristics that make them unique.
EOE, including disability/veteran
$135k yearly 47d ago
Forward Deployed Data Engineer
Filevine 4.3
Data engineer job in Salt Lake City, UT
Filevine is forging the future of legal work with cloud-based workflow tools. We have a reputation for intuitive, streamlined technology that helps professionals manage their organization and serve their clients better. We're also known for our team of extraordinary and passionate professionals who love working together to help organizations thrive. Our success has catapulted Filevine to the forefront of our field-we are ranked as one of the most innovative and fastest-growing technology companies in the country by both Deloitte and Inc.
Our Mission
Filevine is building the seamless intersection between legal and business by creating a world-class platform to help professionals scale.
Role Summary:
As a Forward Deployed Data Engineer, you will be a customer-facing expert focused on integrating and ingesting data from diverse client sources into our platform. This role demands creativity, resilience, and technical prowess to connect to even the most challenging or unconventional data systems, ensuring seamless data flows that power critical legal workflows. You'll work directly with law firms to build robust ingestion pipelines, enabling them to leverage their existing data within Filevine for better insights and efficiency. This position requires extensive travel (up to 75% or more), including frequent onsite visits to client locations across the country.
Responsibilities
* Design and build custom data ingestion pipelines to connect client data sources-no matter how disparate, legacy, or unconventional-to Filevine.
* Collaborate with clients to assess their data landscapes, identify ingestion opportunities, and implement secure, reliable ETL (Extract, Transform, Load) processes.
* Handle a wide variety of data formats and sources (e.g., databases, APIs, flat files, spreadsheets, legacy systems) with a focus on data quality, validation, and transformation.
* Ensure compliance with data privacy and security standards critical to the legal industry (e.g., handling sensitive client information).
* Troubleshoot and optimize data flows in production environments, resolving issues related to connectivity, volume, or schema mismatches.
* Provide technical guidance and training to clients on data integration best practices during and after implementation.
* Document pipelines, mappings, and custom solutions while feeding client insights back to the product and engineering teams for platform improvements.
* Perform onsite engagements as needed to accelerate integrations and drive adoption.
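The ingestion and "messy data" responsibilities above often come down to normalizing an inconsistent client export before it is loaded. A hedged sketch using only the standard library; the column names, header quirks, and cleanup rules are hypothetical, not a real Filevine schema:

```python
import csv
import io

# Invented "messy" client export: ragged whitespace, inconsistent header
# casing, an empty date, and a row missing its key.
RAW = """ Matter ID ,client name,Opened
 1001 , Smith ,2023-01-15
1002,Doe,
,Empty,2023-02-01
"""

def normalize_header(h):
    # e.g. " Matter ID " -> "matter_id"
    return h.strip().lower().replace(" ", "_")

def ingest(raw_text):
    reader = csv.reader(io.StringIO(raw_text.strip()))
    headers = [normalize_header(h) for h in next(reader)]
    rows = []
    for raw in reader:
        row = dict(zip(headers, (cell.strip() for cell in raw)))
        if not row.get("matter_id"):          # validation: drop rows missing the key
            continue
        row["opened"] = row["opened"] or None  # empty string -> explicit null
        rows.append(row)
    return rows

records = ingest(RAW)
```

Real engagements swap the string literal for a database cursor, API pager, or file feed, but the shape - normalize, validate, make nulls explicit, document what was dropped - stays the same.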
Qualifications
* Proven experience as a Data Engineer, ETL Developer, or in customer-facing integration roles (e.g., solutions engineering or professional services).
* Strong proficiency in building data pipelines using tools like Python, SQL, Apache Spark, Airflow, or similar.
* Expertise in connecting to diverse data sources: relational databases (e.g., MySQL, PostgreSQL, SQL Server), NoSQL, APIs (REST/SOAP), file-based systems, and cloud storage.
* Experience with data transformation, cleansing, and handling unstructured or "messy" data.
* Knowledge of data security best practices, especially in regulated industries (e.g., encryption, access controls, compliance with HIPAA or similar).
* Excellent problem-solving skills with the ability to improvise solutions for non-standard data challenges.
* Strong communication skills to translate technical details for non-technical stakeholders (e.g., firm administrators and attorneys).
* Willingness and ability to travel extensively to client sites nationwide.
* Bachelor's degree in Computer Science, Engineering, Data Science, or equivalent experience.
Nice-to-have
* Experience in legal tech, SaaS integrations, or enterprise data migrations.
* Familiarity with cloud platforms (e.g., AWS, Azure, GCP) for data ingestion and storage.
* Background in handling high-volume or real-time data streams.
* Passion for tackling "impossible" data problems in a high-impact, client-driven environment.
Cool Company Benefits:
* A dynamic, rapidly growing company, focused on helping organizations thrive
* Medical, Dental, & Vision Insurance (for full-time employees)
* Competitive & Fair Pay
* Maternity & paternity leave (for full-time employees)
* Short & long-term disability
* Opportunity to learn from a dedicated leadership team
* Centrally located open office building in Sugar House (onsite employees)
* Top-of-the-line company swag
Privacy Policy Notice
Filevine will handle your personal information according to what's outlined in our Privacy Policy.
Communication about this opportunity, or any open role at Filevine, will only come from representatives with email addresses using "filevine.com". Other addresses reaching out are not affiliated with Filevine and should not be responded to.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
$80k-114k yearly est. 29d ago
Azure AI & Data Engineer
Ultradent Products 4.8
Data engineer job in Salt Lake City, UT
Please note before applying: This position is required to work onsite at our headquarters in South Jordan, UT. We are only considering local candidates for this position! Thank you!
The Ultradent Data & Analytics Team is looking for a passionate, solution-oriented Azure AI & Data Engineer to help implement data pipelines and the supporting data architecture on Azure, expanding the analytical solutions that deepen our collective understanding of business activity, influence innovation, and deliver actionable insights.
The ideal candidate has outstanding business acumen, a restlessness to answer “why?” questions, excellent analytical abilities, strong technical skills, and superior written and verbal communication skills.
The Azure AI & Data Engineer is responsible for building, implementing, automating services for, and supporting Microsoft BI and AI solutions in the cloud. The candidate will design architectures for data integration and processing to provide high-quality datasets and utilize Big Data processing tools to build data and ML pipelines on a modern technology stack. Your area of expertise covers the creation of a complete, end-to-end data pipeline and AI lifecycle: from the import of structured and unstructured raw data, through its processing, transformation, and metadata management, to the implementation of scalable analytics and model deployment platforms.
The right individual will possess experience in all stages of database and ML project work (requirements, logical and physical design, implementation, testing, and deployment). The Azure AI & Data Engineer should have a firm grasp of modern BI implementation and MLOps methodologies and in-depth experience with the Microsoft BI stack (SSIS/Data Factory, Azure Data Warehouse, Azure Analytics/SSAS, and Power BI) and Azure AI platforms.
General & Core Engineering
Thought leadership in technology innovation and transformation.
Maintaining applications, ensuring high availability, performance, and user experience.
Creates and maintains optimal data pipeline architecture on Azure platform. Ensures that system designs adhere to solution architecture design and are traceable to functional as well as non-functional requirements.
Designs relational and non-relational data stores on Azure. Builds analytical tools to provide actionable insights from key business performance metrics.
Leverages Azure Data Factory and Databricks to assemble large, complex data sets that meet end user and business requirements. Designs new solutions and services to improve overall user experience.
Identifies, designs, and implements internal process improvements such as automating manual processes and optimizing data delivery. Defines system design standards to improve and sustain standardization.
Expertise in Azure DevOps concepts and implementations, including git, CI/CD pipelines, IaC, docker containers, and Work Items.
Assists the Manager of Data Administration in coordinating with other staff to ensure data handling meets organizational objectives for data quality, business process management, and risk management.
Provides technical mentoring to other team members on best practices for data engineering and cloud technologies.
Data Architecture and Governance
Designs and Governs Data Models:
Defines and enforces logical and physical data models for both the Azure Data Lake and the Azure Synapse Analytics data warehouse.
Leads the effort to standardize data definitions, metadata management, and data lineage across the Azure platform to ensure consistency and quality.
Architects Scalable Data Ecosystems:
Develops and maintains the multi-layered Data Lake architecture (e.g., Raw, Bronze, Silver, Gold zones) to optimize for ingestion, transformation, and consumption of data by both BI and AI initiatives.
Designs and implements real-time and batch data ingestion strategies, ensuring low latency and high reliability for critical business data.
Evaluates and recommends new Azure services or architectural patterns to continuously improve data security, performance, and cost efficiency of the overall data ecosystem.
Ensures Data Quality and Reliability:
Establishes and monitors data quality checks and validation rules within data pipelines to maintain a high level of trust in the datasets used for reporting and machine learning models.
Implements schema evolution strategies to manage changes in source data and prevent pipeline breaks.
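One common tactic behind the schema-evolution and layered-lake ideas above is to conform incoming records to a declared target schema, so columns the source adds or drops don't break the load. A minimal sketch; the schema, records, and defaulting policy are invented for illustration (a Databricks pipeline would express this with Delta schema options instead):

```python
# Hedged sketch: map records onto a declared target schema so upstream
# schema changes degrade gracefully instead of breaking the pipeline.
TARGET_SCHEMA = {"order_id": int, "amount": float, "region": str}

def conform(record, schema=TARGET_SCHEMA, default=None):
    out = {}
    for col, typ in schema.items():
        if col in record and record[col] is not None:
            out[col] = typ(record[col])   # cast to the declared type
        else:
            out[col] = default            # missing column -> explicit default
    # Columns the source added but the schema doesn't know are ignored here;
    # in production you would log them for review rather than fail the load.
    return out

old_shape = {"order_id": "7", "amount": "19.99", "region": "EMEA"}
new_shape = {"order_id": "8", "amount": "5.00", "channel": "web"}  # region dropped, channel added
a = conform(old_shape)
b = conform(new_shape)
```

The explicit default (rather than a crash) is what keeps Bronze-to-Silver jobs running while the schema change is reviewed.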
AI/ML and Automation
Designs and Implements AI/ML Solutions:
Develops, trains, and deploys machine learning models using Azure Machine Learning (AML) / Azure AI Studio or equivalent platforms.
Builds end-to-end MLOps pipelines (e.g., model registration, automated retraining, deployment via Azure Kubernetes Service/Managed Endpoints) using Azure DevOps and AML.
Hands-on experience implementing data and AI workflow automation using n8n, Microsoft Power Automate, or equivalent tools to connect Azure services, databases, and enterprise applications.
AI Workflow Integration and Automation:
Architects and implements low-code/no-code AI workflow automation using n8n or equivalent tools (e.g., Power Automate, Zapier, Apache Airflow).
Integrates AI services (e.g., Azure Cognitive Services, custom models) with business processes, data pipelines, and third-party systems via REST APIs and webhooks.
Designs and maintains real-time AI inference services and ensures their scalability and performance.
Preferred Technology Experience
Azure PaaS, Data analytics, Data warehousing and Data science.
Azure Spark, ADF, T-SQL Scripting, Stored Procs, Python.
Azure Synapse Analytics, SQL Pool.
Deep knowledge of data ingestion strategies and understanding of the Data Lake and Dimensional data models within Azure.
Experience with the Azure storage technologies (Azure Data Lake, Azure SQL Data Warehouse, Azure SQL Database).
Experience with Azure data movement and transformation capabilities (Azure Data Factory, Data Lake Analytics, Databricks, Stream Analytics).
Comfortable with Microsoft SQL data technologies (SSAS/SSIS/SSRS).
Core Experience and Education
BA / BS preferred; equivalent work experience will be considered.
Minimum of 7 years of successful, results-oriented experience in a diverse range of business and IT functions is ideal.
Hands-on experience with Azure Data Lake, Azure Data Factory, SQL Data Warehouse, Azure Blob Storage, and Azure Storage Explorer.
Hands-on experience with Delta tables and implementing delta logic on data lakes and warehouses.
Hands-on experience implementing multi-layered data lake architecture.
Hands-on experience implementing data warehouse architecture.
Hands-on experience creating Data Factory pipelines for cloud ETL processing (copy activity, custom Azure development, etc.).
Power BI experience a plus.
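The "delta logic" item above boils down to an upsert/merge: update rows whose keys already exist, insert the rest. A minimal sketch using SQLite's upsert syntax (table and column names are made up; Delta Lake's MERGE expresses the same idea at warehouse scale):

```python
import sqlite3

# In-memory database stands in for the warehouse target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Ann'), (2, 'Bob')")

# Incoming delta batch: id 2 changed, id 3 is new.
batch = [(2, "Robert"), (3, "Cara")]
conn.executemany(
    """INSERT INTO dim_customer (id, name) VALUES (?, ?)
       ON CONFLICT(id) DO UPDATE SET name = excluded.name""",
    batch,
)

rows = conn.execute("SELECT id, name FROM dim_customer ORDER BY id").fetchall()
assert rows == [(1, "Ann"), (2, "Robert"), (3, "Cara")]
```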
Ultradent is an Equal Opportunity Employer. We are a global culture where differences and perspectives are sought after, welcomed, and embraced. We consider all qualified applicants fairly, based on their experience, skills, and potential to contribute to our team. Our core values of integrity, care, quality, innovation, and hard work guide us daily. These values, when balanced, shape our workplace culture and ensure that we remain focused on our vision while maintaining a professional and inclusive environment.
VEVRAA Federal Contractor: For more information please contact us at ************************
PWDNET
$86k-118k yearly est. 60d+ ago
Senior Data Engineer
Arturo Intelligence
Data engineer job in South Jordan, UT
What you'll do:
Onboard new geospatial datasets (imagery, parcels, extracted polygons) into our systems and build ETL pipelines to automate these processes
Manage the interface between our AI systems and the input data they need to operate via streaming systems, storage systems, and caching systems
Be aware of and manage third-party provider rate limits, and architect our systems to handle these limits gracefully while maximizing throughput
Participate in an agile product development process, where collaboration with stakeholders is a vital step to building what is needed
Challenge and be challenged on a diverse, collaborative, and brilliant team
Write automated test suites to ensure the quality of your code
Contribute to open-source geospatial software
Build solutions that enable new products, typically involving large-scale or intricate geospatial techniques
Build in system quality from the beginning by writing unit & integration tests and integrating with logging, metrics, and observability systems
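The rate-limit handling described above is often implemented client-side as a token bucket that throttles outgoing calls. A minimal sketch in plain Python (the rate and capacity numbers are placeholders, not any provider's actual limits):

```python
import time

class TokenBucket:
    """Client-side throttle for a third-party provider's rate limit."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        """Consume one token if available; otherwise signal the caller to back off."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5.0, capacity=2)
assert bucket.try_acquire()       # burst up to capacity...
assert bucket.try_acquire()
assert not bucket.try_acquire()   # ...then throttled until tokens refill
```

A real client would sleep or queue the request when `try_acquire` returns False, and would ideally also honor the provider's `Retry-After` responses.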
Requirements
What you bring:
Good-to-expert understanding of geospatial systems, concepts, patterns, and software, covering both legacy formats and tools and the newest open-source packages
Professional experience writing production-ready Python code that leverages modern software development best practices (automated testing, CI/CD, observability)
Experience working on a team of developers, maintaining a shared codebase, and having your code reviewed before merging
Strong DB Expertise in an Amazon environment (RDS, Postgres, and DynamoDB)
Strong ETL Experience (especially in extraction and ingestion of 3rd party data)
Nice-to-haves:
Familiarity with machine learning concepts
Familiarity with asynchronous programming
Benefits
Key competencies at Arturo:
Willingness to learn - You have an insatiable desire to continue growing, a fearless approach to the unknown, and love a challenge
Teamwork/Collaboration - You like working with others; you participate actively and enjoy sharing the responsibilities and rewards. You proactively work to strengthen our team. And you definitely have a sense of humor
Critical Thinking - You incorporate analysis, interpretation, inference, explanation, self-regulation, open-mindedness, and problem-solving in everything you do
Drive for Results - You keep looking forward, solve problems and participate in the success of our growing organization
$71k-98k yearly est. 60d+ ago
Senior Big Data Engineer
Deegit 3.9
Data engineer job in Salt Lake City, UT
Responsibilities
· Participate in the engineering and administration of big data systems.
· Apache Storm/Java development for both data transformation and augmentation.
· Employ best practices to ensure high availability and scalability.
· Install, configure, and maintain big data technologies and systems.
· Maintain documentation and troubleshooting playbooks.
· Respond to and resolve access and performance issues.
· Ability to work non-peak hours when needed.
· Assist in capacity planning.
· Proactively monitor performance.
· Collaborate with engineering teams to optimize data collection.
· Maintain change control and testing processes for all modifications and deployments.
· Conduct research and make recommendations on big data products, services, and standards.
Preferred Qualifications
· Experience with operational analytics and log management using ELK or Splunk.
· Understanding and experience with SQL and MPP databases such as Greenplum.
· Experience with SaltStack or comparable deployment automation tools.
· Working knowledge of Zenoss, New Relic, PagerDuty, or other monitoring technologies.
· Experience with Apache Drill, Apache HBase, Hive, Phoenix, or other SQL-on-Hadoop technologies.
· Experience with batch processing and tools in the Hadoop technology stack (MapReduce, YARN, Pig, Hive, HDFS).
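The batch-processing pattern behind the Hadoop stack is easiest to see in miniature. A sketch of the classic MapReduce word count in plain Python, showing the map, shuffle, and reduce phases that Hadoop distributes across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str):
    """Map: emit a (word, 1) pair for every word in the line."""
    return [(w.lower(), 1) for w in line.split()]

def shuffle(pairs):
    """Shuffle: group all values by key (Hadoop does this across the network)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values per key."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
assert counts == {"big": 2, "data": 2, "pipelines": 2}
```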
Qualifications
Basic Requirements
·
BS in Computer Science, Information Systems, or related technical degree.
·
5 or more years' experience with data transformation pipelines preferably Storm
·
5 or more years' experience in messaging frameworks preferably Kafka.
·
5 or more years' experience with Java programming.
Additional Information
All your information will be kept confidential according to EEO guidelines.
$81k-116k yearly est. 1d ago
Data Engineer
Tata Consulting Services 4.3
Data engineer job in Salt Lake City, UT
Must Have Technical/Functional Skills * Solid understanding of search technology, real-time data pipeline construction, ETL processes, and various programming languages including Java and Python. Proficient with UNIX/Linux, scheduling, and orchestration tools.
* Expert in developing robust Python scripts for automating ETL tasks, data ingestion, cleansing, validation, and workflow orchestration.
* Experienced in implementing RESTful APIs in Python for data exposure and consumption, and skilled in using Python libraries (e.g., Pandas, NumPy, SQLAlchemy) for data manipulation and database connectivity.
* Design and deploy microservices and data applications on Kubernetes clusters, managing deployments, services, and autoscaling.
* Implement and manage CI/CD pipelines that automate the deployment of Docker images to Kubernetes environments.
* Create optimized Docker files to package data applications (Python scripts, ETL services) and their dependencies, ensuring consistent environments.
* Customize deployments using values.yaml in Helm for different environments (dev, QA, production).
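The per-environment values.yaml pattern above can be illustrated as one override file per environment, selected at deploy time. Every key and value here is an assumption for illustration, not taken from any real chart:

```yaml
# values-dev.yaml (assumed keys -- a typical minimal dev override)
replicaCount: 1
image:
  tag: latest
resources:
  limits:
    memory: 512Mi
---
# values-production.yaml (larger footprint, pinned image, autoscaling on)
replicaCount: 3
image:
  tag: "1.4.2"
resources:
  limits:
    memory: 2Gi
autoscaling:
  enabled: true
```

A deploy then picks the right file with Helm's `-f` flag, e.g. `helm upgrade --install etl ./chart -f values-production.yaml`, leaving the chart's defaults untouched.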
Roles & Responsibilities
* Configure and troubleshoot authentication and authorization using Kerberos to secure access to Big Data systems like Hadoop, Spark, or specific databases. Implement security principles and role-based access in data environments.
* Perform basic database administration tasks such as performance monitoring, indexing, and query optimization on engines like SQL Server, PostgreSQL, Oracle, and Greenplum.
* Efficiently extract, transform, and load data between these heterogeneous databases.
* Requires a Bachelor's in Computer Science, Computer Engineering, or a related field and some experience with ADO/Git, ETL, SQL, UNIX/Linux, Docker/Kubernetes, API development, JSON, Kafka, automated testing, BDD, big data distributed systems, programming languages such as Java and Python, and orchestration tools and processes, or other directly related experience.
* Develops, tests, and modifies software to enhance data platform and application efficiency. Provides technical support for issues.
* Actively participates in agile ceremonies, including program increment planning, daily standups, team backlog grooming, iteration retrospectives, team demos, and inspect & adapt sessions.
* Supports test and QA efforts on data projects and coordinates with data operations teams for production deployments.
* Possesses strong analytical, organizational, and problem-solving skills. Demonstrates the ability and desire to quickly learn new technologies and adapt to changing technology and priorities.
* Excellent verbal and written communication skills with both technical and non-technical staff. Capable of working independently, handling multiple concurrent projects, prioritizing effectively, and collaborating effectively within a team environment.
* Skilled at eliciting, gathering, analyzing user requirements, and interpreting, validating, and mapping business requirements to appropriate solutions.
* Able to meet deadlines.
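The cross-database extract-transform-load duties above can be sketched with two in-memory SQLite connections standing in for heterogeneous engines (e.g. Oracle as source, PostgreSQL as target); table names and the transform are illustrative:

```python
import sqlite3

# Two in-memory databases stand in for heterogeneous source/target engines.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
source.execute("CREATE TABLE raw_events (id INTEGER, status TEXT)")
source.executemany("INSERT INTO raw_events VALUES (?, ?)",
                   [(1, " OPEN "), (2, "closed"), (3, None)])
target.execute("CREATE TABLE events (id INTEGER, status TEXT)")

# Extract, transform (trim and normalize case, drop NULL statuses), load.
rows = source.execute("SELECT id, status FROM raw_events").fetchall()
clean = [(i, s.strip().lower()) for i, s in rows if s is not None]
target.executemany("INSERT INTO events VALUES (?, ?)", clean)

assert clean == [(1, "open"), (2, "closed")]
assert target.execute("SELECT COUNT(*) FROM events").fetchone()[0] == 2
```

In production the same shape runs through vendor drivers or an ETL tool, with the transform step carrying the cleansing and validation rules.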
Generic Managerial Skills, If any
Managerial experience is good to have.
Salary Range: $115,000 - $120,000 a year
$115k-120k yearly 14d ago
Engineer, Data
Holley Performance
Data engineer job in Ogden, UT
Job Description
This role focuses on backend development and integrations for building and maintaining enterprise data warehouses and data lakes. The ideal candidate will possess a deep understanding of data architecture, ETL pipelines, and integration technologies, ensuring seamless data flow and accessibility across the organization.
Key Responsibilities:
· Design, develop, and maintain scalable backend systems to support data warehousing and data lake initiatives.
· Build and optimize ETL/ELT processes to extract, transform, and load data from various sources into centralized data repositories.
· Develop and implement integration solutions for seamless data exchange between systems, applications, and platforms.
· Collaborate with data architects, analysts, and other stakeholders to define and implement data models, schemas, and storage solutions.
· Ensure data quality, consistency, and security by implementing best practices and monitoring frameworks.
· Monitor and troubleshoot data pipelines and systems to ensure high availability and performance.
· Stay up-to-date with emerging technologies and trends in data engineering and integration to recommend improvements and innovations.
· Document technical designs, processes, and standards for the team and stakeholders.
Qualifications:
· Bachelor's degree in Computer Science, Engineering, or a related field; equivalent experience considered.
· 5 or more years of proven experience as a Data Engineer or in a similar backend development role.
· Strong proficiency in programming languages such as Python, Java, or Scala.
· Hands-on experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, Talend, Informatica, etc.).
· Extensive knowledge of relational and non-relational databases (e.g., SQL, NoSQL, PostgreSQL, MongoDB).
· Expertise in building and managing enterprise data warehouses (e.g., Snowflake, Amazon Redshift, Google BigQuery) and data lakes (e.g., AWS S3, Azure Data Lake).
· Familiarity with cloud platforms (AWS, Azure, Google Cloud) and their data services.
· Experience with API integrations and data exchange protocols (e.g., REST, SOAP, JSON, XML).
· Solid understanding of data governance, security, and compliance standards.
· Strong analytical and problem-solving skills with attention to detail.
· Excellent communication and collaboration abilities.
Preferred Qualifications:
· Certifications in cloud platforms (AWS Certified Data Analytics, Azure Data Engineer, etc.)
· Experience with big data technologies (e.g., Apache Hadoop, Spark, Kafka).
· Knowledge of data visualization tools (e.g., Tableau, Power BI) for supporting downstream analytics.
· Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes, Jenkins).
Please note: Relocation assistance will not be available for this position.
$71k-98k yearly est. 22d ago
Data Engineer
Calyx Containers
Data engineer job in West Valley City, UT
Calyx Containers | Full-Time
Calyx Containers is a B2B packaging company serving the cannabis (mainly for now), food, and consumer goods industries. We're building technology that transforms how businesses order, track, and manage custom packaging-from instant quotes to production visibility. Our platform integrates sales, manufacturing, and fulfillment into a unified customer experience called Calyx Command. Calyx Command puts our customers in the driver's seat of their packaging through a space-themed consumer-centric experience. We are pioneering the future of the packaging tech stack while making sure our customers have fun doing it.
About the Role
We're looking for a hands-on Data Engineer to own and scale our data infrastructure. You'll build the pipelines and architecture that unify data across our CRM, ERP, and production systems, enabling real-time visibility into customer behavior, order fulfillment, and manufacturing operations. This is a foundational role with significant ownership and growth potential. We've already built base infrastructure for you to expand upon.
What You'll Do
Design and maintain ETL/ELT pipelines connecting CRM, ERP, MES, Sales Intelligence, and internal application databases
Own our Data Lake architecture using a Bronze/Silver/Gold medallion pattern for raw ingestion, cleansed data, and business-ready datasets
Build and operate our Golden Record System for customer identity resolution across external platforms
Implement data quality gates, freshness SLAs, anomaly detection, and automated alerting
Create datasets that power AI features (damage detection, document parsing, demand forecasting, etc.)
Collaborate with product and engineering to expose clean data via REST APIs
Establish observability dashboards for pipeline health and data lineage
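The freshness-SLA gate mentioned above can be as simple as comparing each feed's last load time against its agreed window; anything over the window raises an alert. A sketch in plain Python (feed names and SLA windows are invented):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-feed freshness SLAs.
SLAS = {
    "crm_sync": timedelta(hours=1),
    "erp_orders": timedelta(hours=24),
}

def stale_feeds(last_loaded: dict, now=None) -> list:
    """Return the feeds whose latest load breaches their SLA, sorted by name."""
    now = now or datetime.now(timezone.utc)
    return sorted(f for f, sla in SLAS.items() if now - last_loaded[f] > sla)

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
loads = {
    "crm_sync": now - timedelta(hours=2),    # breached (2h old vs 1h SLA)
    "erp_orders": now - timedelta(hours=3),  # fine (3h old vs 24h SLA)
}
assert stale_feeds(loads, now=now) == ["crm_sync"]
```

In practice the returned list would feed the automated alerting channel rather than an assert.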
Our Tech Stack
Database: PostgreSQL (Neon-backed), Drizzle ORM
Backend: Node.js, Express, TypeScript
AI/ML: LLM APIs for document parsing and analytics
Infrastructure: Replit deployments, structured logging with Pino, distributed tracing
Data Patterns: Medallion architecture (Bronze/Silver/Gold), CDC-style sync jobs, batch and real-time pipelines
What We're Looking For
4+ years of experience in data engineering or a related role
Strong SQL skills and proficiency with TypeScript or Python
Experience with relational databases (PostgreSQL preferred) and data modeling
Familiarity with ERP systems and CRM platforms
Understanding of identity resolution, master data management, and data quality principles
Comfort working across the stack: you'll touch APIs, database schemas, and monitoring
Self-starter who thrives with autonomy in a fast-moving environment
Nice to Have
Experience in manufacturing, packaging, or cannabis industry compliance
Exposure to production/MES systems like LabelTraxx
Background with event-driven architectures or real-time streaming
Familiarity with Drizzle ORM or similar TypeScript-first database tooling
Why Join Us
Ownership: You'll define how we collect, store, and use data across the company
Impact: Your work directly powers customer-facing features and operational decisions
Growth: Opportunity to build and lead a data team as we scale
Modern stack: TypeScript end-to-end, no legacy systems to maintain
Profit Sharing: Ability to participate in the company's profit-sharing plan
MORE ABOUT US:
Our team is composed of bright, hardworking, creative, and highly motivated individuals looking to make an impact on the world. We seek like-minded colleagues who share our values and want to apply their experience, energy, and enthusiasm to help grow and scale a dynamic business in a rapidly expanding industry. The Calyx Containers culture fosters the personal and professional growth in a challenging and rewarding environment. We operate at a fast pace, demand high personal standards, and offer everyone the opportunity to contribute, skill-build, and develop their talents.
Benefits and Perks offered to full time employees:
-Flexible Paid Time Off
-Comprehensive benefits offerings including: Medical (with company-funded HRA), Dental, Vision, Short- and Long-Term Disability Insurance, Life Insurance, Headspace Care Mental Health support...all effective the first day of the month following hire.
-401(k)
-Ability to make an immediate impact
-Monthly team meetings and frequent social events
An ideal Calyx candidate looks like:
-Has experience in the cannabis and/or packaging industry
-Thrives in a fast-paced environment
-Handles ambiguity with a positive attitude
-Rolls up their sleeves to help their team
How success is measured at Calyx:
First 30 days spent getting to know the company and our team!
Ability to make an immediate impact - we're growing quickly and want you to help cultivate that!
Living our core values:
--X-treme Ownership
--Be Quick, But Don't Hurry
--Sustainability Is Multi-Dimensional
--We Are Square: Quality Does Not Cut Corners
--Customer Is The Only Boss
--Earn Success Every Day
--The Biggest Failure Is The Failure To Ask For Help
--Better Together: Cultivate An Inclusive Environment
Calyx Containers is committed to creating a diverse environment and is proud to be an Equal Opportunity Employer. We believe strongly in fair hiring practices and in creating a welcoming environment for all team members. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. Diversity drives innovation; inclusion drives success. We believe a multitude of approaches and ideas enable us to deliver the best results for our workforce, workplace, and customers. We are committed to fostering a culture where all employees can share their passions and ideas so we can tackle the toughest challenges in our industry and pave new paths to limitless possibilities.
Calyx is committed to providing access, equal opportunity and reasonable accommodation for individuals with disabilities in employment and activities. To request reasonable accommodation, please contact **********************. (Please note that applications should not be emailed to this address).
$71k-98k yearly est. 6d ago
Data Engineers
University of Utah 4.0
Data engineer job in Salt Lake City, UT
Announcement Details
Open Date: 12/18/2025
Requisition Number: PRN43881B
Job Title: Data Engineers
Working Title: Healthcare Data Engineer III
Career Progression Track: P00
Track Level: P3 - Career
FLSA Code: Computer Employee
Patient Sensitive Job Code?: No
Standard Hours per Week: 40
Full Time or Part Time?: Full Time
Shift: Day
Work Schedule Summary:
VP Area: U of U Health - Academics
Department: 00943 - HSC Core Resrch Facility Oper
Location: Campus
City: Salt Lake City, UT
Type of Recruitment: External Posting
Pay Rate Range: $95,000.00 - $125,000.00
Close Date: 04/18/2026
Priority Review Date: (Note - Posting may close at any time)
Job Summary
Data Engineers
Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources. Develop robust and scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data faster and easier. Write complex SQL queries to support analytics needs. Evaluate and recommend tools and technologies for data infrastructure and processing. Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements to technical specifications and coded data pipelines. Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, MongoDB, Redis, Hadoop, Spark, Hive, Scala, BigTable, Cassandra, Presto, Storm. Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses.
Work collaboratively with end-users including researchers to recommend strategies for problem analysis, study design, and apply rigorous data science methods. Serve as the front-line support for data-related questions from end-users, and be responsible for standardization, documentation, and maintaining a repertoire of answers and solutions to common problems.
Learn more about the great benefits of working for University of Utah: benefits.utah.edu
The department may choose to hire at any of the below job levels and associated pay rates based on their business need and budget.
Responsibilities
Data Engineer, III
Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources. Develop robust and scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data faster and easier. Write complex SQL queries to support analytics needs. Evaluate and recommend tools and technologies for data infrastructure and processing. Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements to technical specifications and coded data pipelines. Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, MongoDB, Redis, Hadoop, Spark, Hive, Scala, BigTable, Cassandra, Presto, Storm. Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses. Considered highly skilled and proficient in discipline. Conducts complex, important work under minimal supervision and with wide latitude for independent judgment.
Requires a bachelor's (or equivalency) + 6 years or a master's (or equivalency) + 4 years of directly related work experience.
This is a Career-Level position in the General Professional track.
Minimum Qualifications
EQUIVALENCY STATEMENT: 1 year of higher education can be substituted for 1 year of directly related work experience (Example: bachelor's degree = 4 years of directly related work experience).
Data Engineer, III: Requires a bachelor's (or equivalency) + 6 years or a master's (or equivalency) + 4 years of directly related work experience.
Preferences
o Master's degree in biomedical informatics, computer science, information systems, bioengineering or equivalent, with an emphasis on healthcare.
o 4+ years of experience with healthcare data warehousing, electronic health records (ideally, Epic), and data preparation & analysis.
o Knowledge and experience in applying controlled medical terminologies such as SNOMED CT, LOINC, and RxNorm in healthcare.
o Clinical training (e.g., medicine, nursing, pharmacy, etc.).
o Demonstrated expertise in a broad array of data science theories and techniques.
Type Benefited Staff Special Instructions Summary Additional Information
The University of Utah values candidates who have experience working in settings with students from diverse backgrounds and possess a strong commitment to improving access to higher education for historically underrepresented students.
Individuals from historically underrepresented groups, such as minorities, women, qualified persons with disabilities and protected veterans are encouraged to apply. Veterans' preference is extended to qualified applicants, upon request and consistent with University policy and Utah state law. Upon request, reasonable accommodations in the application process will be provided to individuals with disabilities.
The University of Utah is an Affirmative Action/Equal Opportunity employer and does not discriminate based upon race, ethnicity, color, religion, national origin, age, disability, sex, sexual orientation, gender, gender identity, gender expression, pregnancy, pregnancy-related conditions, genetic information, or protected veteran's status. The University does not discriminate on the basis of sex in the education program or activity that it operates, as required by Title IX and 34 CFR part 106. The requirement not to discriminate in education programs or activities extends to admission and employment. Inquiries about the application of Title IX and its regulations may be referred to the Title IX Coordinator, to the Department of Education, Office for Civil Rights, or both.
To request a reasonable accommodation for a disability or if you or someone you know has experienced discrimination or sexual misconduct including sexual harassment, you may contact the Director/Title IX Coordinator in the Office of Equal Opportunity and Affirmative Action:
Director/ Title IX Coordinator
Office of Equal Opportunity and Affirmative Action (OEO/AA)
383 University Street, Level 1 OEO Suite
Salt Lake City, UT 84112
************
************
Online reports may be submitted at oeo.utah.edu
For more information: ***************************************
To inquire about this posting, email: ******************* or call ************.
The University is a participating employer with Utah Retirement Systems ("URS"). Eligible new hires with prior URS service, may elect to enroll in URS if they make the election before they become eligible for retirement (usually the first day of work). Contact Human Resources at ************** for information. Individuals who previously retired and are receiving monthly retirement benefits from URS are subject to URS' post-retirement rules and restrictions. Please contact Utah Retirement Systems at ************** or ************** or University Human Resource Management at ************** if you have questions regarding the post-retirement rules.
This position may require the successful completion of a criminal background check and/or drug screen.
************************************ This report includes statistics about criminal offenses, hate crimes, arrests and referrals for disciplinary action, and Violence Against Women Act offenses. They also provide information about safety and security-related services offered by the University of Utah. A paper copy can be obtained by request at the Department of Public Safety located at 1658 East 500 South.
Posting Specific Questions
Required fields are indicated with an asterisk (*).
* * What is your highest level of completed education?
* None
* High School Diploma or Equivalent
* Associate Degree
* Bachelor's Degree
* Master's Degree
* Doctorate Degree
* * How many years of related work experience do you have?
* Less than 6 years
* 6 years or more, but less than 9 years
* 9 years or more, but less than 12 years
* 12 years or more, but less than 15 years
* 15 years or more
Applicant Documents
Required Documents
* Resume
Optional Documents
* Cover Letter
$95k-125k yearly 33d ago
Data Scientist, Product Analytics
Meta 4.8
Data engineer job in Salt Lake City, UT
As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance, and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
Product leadership: You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.
Analytics: You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
**Required Skills:**
Data Scientist, Product Analytics Responsibilities:
1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions
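The goal-setting and measurement work above often reduces to testing whether a product change actually moved a metric. A hedged sketch of a two-proportion z-test, one standard way to evaluate an A/B experiment (all counts below are made up):

```python
from math import sqrt

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)        # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))     # pooled standard error
    return (p_b - p_a) / se

# Hypothetical experiment: 5.2% vs 6.0% conversion on 10,000 users per arm.
z = two_proportion_z(520, 10_000, 600, 10_000)
assert abs(z) > 1.96  # exceeds the two-sided 5% critical value
```

The same statistic is what experimentation platforms report under the hood; real analyses add guardrail metrics, multiple-testing corrections, and pre-registered sample sizes.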
**Minimum Qualifications:**
6. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
7. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent
8. 4+ years of work experience in analytics, data querying languages such as SQL, scripting languages such as Python, and/or statistical mathematical software such as R (minimum of 2 years with a Ph.D.)
9. 4+ years of experience solving analytical problems using quantitative approaches, understanding ecosystems, user behaviors & long-term product trends, and leading data-driven projects from definition to execution (including defining metrics, experiment design, and communicating actionable insights)
**Preferred Qualifications:**
10. Master's or Ph.D. Degree in a quantitative field
**Public Compensation:**
$147,000/year to $208,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
$147k-208k yearly 60d+ ago
Sr Data Engineer (MFT - IBM Sterling)
The Hertz Corporation 4.3
Data engineer job in Salt Lake City, UT
**A Day in the Life:** The IBM Sterling Senior MFT Engineer will have expertise in Sterling B2B Integrator, Sterling File Gateway, Sterling Secure Proxy, Control Center. This position requires an expert level knowledge of these technologies. You'll provide third-level support for core hardware, software, data and infrastructure components within the B2B integration team. The software focus areas will include software infrastructure components, continuous improvement and automation related to middleware and enterprise data movement within a mission-critical, multi-site enterprise environment.
The ideal candidate will have a passion for technology and the ability to create change and facilitate this transformation. They will have experience designing, developing, and implementing solutions in support of data quality and automation. They will need to understand the requirements of a project, test the performance of the infrastructure, and verify that the requirements have been successfully met.
We expect the starting salary to be around $135k but will be commensurate with experience.
**What You'll Do:**
TECHNICAL LEADERSHIP
+ Communicate with internal and external business users on Sterling Integrator mappings
+ Make changes to existing partner integrations to meet internal and external requirements
+ Design, develop, and implement solutions based on standards and processes that establish consistency across enterprise data, reduce risk, and promote efficiency in support of organizational goals and objectives.
+ Diagnose and troubleshoot complex issues, restore services and perform root cause analysis.
+ Facilitate the review and vetting of these designs with architecture governance bodies, as required.
+ Be aware of all aspects of security related to the Sterling environment and integrations
INNOVATIVE INFRASTRUCTURE & PROBLEM SOLVING
+ Strive for engineering excellence by simplifying, optimizing, and automating processes and workflows.
TEAMWORK & COMMUNICATION
+ Demonstrated team-building and development skills to build high-performing teams
+ Ability to communicate effectively with different levels of seniority within the organization
+ Provide timely updates so that progress against each individual incident can be tracked as required
+ Write and review high quality technical documentation
CONTROL & AUDIT
+ Ensure their workstation and all processes and procedures follow organization standards
CONTINUOUS IMPROVEMENT
+ Encourage and maintain a best-practice-sharing culture, always striving to find ways to improve service and change mindsets.
**What We're Looking For:**
+ Bachelor's degree in Engineering, Statistics, Computer Science or other quantitative fields, required
+ 5+ years of IT experience
+ 4+ years' experience working with Sterling B2B Integrator, Sterling File Gateway, PCM (preferred)
+ 3+ years' experience with scripting to enable automation of standard activities (example: Ansible, Python, Bash, Java)
+ Strong interpersonal and communication skills with Agile/Scrum experience.
+ Strong problem solving and critical thinking skills with a proven record for identifying and diagnosing problems, and solving complex problems with simple, logical solutions with the ability to develop custom setups.
+ Outstanding verbal, written, presentation, facilitation, and interaction skills, including ability to effectively communicate technical and non-technical issues and concepts to multiple organization levels.
+ Travel, transportation, or hospitality experience preferred
+ Experience designing application data models for mobile or web applications preferred
+ Excellent written and verbal communication skills.
+ Flexibility in scheduling which may include nights, weekends, and holidays
**What You'll Get:**
+ Up to 40% off the base rate of any standard Hertz Rental
+ Paid Time Off
+ Medical, Dental & Vision plan options
+ Retirement programs, including 401(k) employer matching
+ Paid Parental Leave & Adoption Assistance
+ Employee Assistance Program for employees & family
+ Educational Reimbursement & Discounts
+ Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
+ Perks & Discounts -Theme Park Tickets, Gym Discounts & more
The Hertz Corporation operates the Hertz, Dollar Car Rental, and Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, the Caribbean, Latin America, Africa, the Middle East, Asia, Australia, and New Zealand. The Hertz Corporation is one of the largest worldwide airport general use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.
**US EEO STATEMENT**
At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company.
Individuals are encouraged to apply for positions because of the characteristics that make them unique.
EOE, including disability/veteran
Google Cloud Data & AI Engineer
Slalom
Data engineer job in Salt Lake City, UT
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL and data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
#LI-FB1
Senior Big Data Engineer
Deegit
Data engineer job in Salt Lake City, UT
Responsibilities
· Participate in the engineering and administration of big data systems.
· Apache Storm/Java development for both data transformation and augmentation.
· Employ best practices to ensure high availability and scalability.
· Install, configure, and maintain big data technologies and systems.
· Maintain documentation and troubleshooting playbooks.
· Respond to and resolve access and performance issues.
· Ability to work non-peak hours when needed.
· Assist in capacity planning.
· Proactively monitor performance.
· Collaborate with engineering teams to optimize data collection.
· Maintain change control and testing processes for all modifications and deployments.
· Conduct research and make recommendations on big data products, services, and standards.
Preferred Qualifications
· Experience with operational analytics and log management using ELK or Splunk.
· Understanding and experience with SQL and MPP databases such as Greenplum.
· Experience with SaltStack or comparable deployment automation tools.
· Working knowledge of Zenoss, NewRelic, PagerDuty or other monitoring technologies.
· Experience with Apache Drill, Apache Hbase, Hive, Phoenix, or other Hadoop with SQL technologies.
· Experience working with batch processing and tools in the Hadoop technology stack (MapReduce, YARN, Pig, Hive, HDFS).
Qualifications
Basic Requirements
· BS in Computer Science, Information Systems, or related technical degree.
· 5 or more years' experience with data transformation pipelines, preferably Storm.
· 5 or more years' experience in messaging frameworks preferably Kafka.
· 5 or more years' experience with Java programming.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Data Engineers
The University of Utah
Data engineer job in Salt Lake City, UT
Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources. Develop robust and scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data faster and more easily. Write complex SQL queries to support analytics needs. Evaluate and recommend tools and technologies for data infrastructure and processing. Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements into technical specifications and coded data pipelines. Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, MongoDB, Redis, Hadoop, Spark, Hive, Scala, BigTable, Cassandra, Presto, and Storm. Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses. Work collaboratively with end users, including researchers, to recommend strategies for problem analysis and study design and to apply rigorous data science methods. Serve as the front-line support for data-related questions from end users, and be responsible for standardization, documentation, and maintaining a repertoire of answers and solutions to common problems. Learn more about the great benefits of working for the University of Utah: benefits.utah.edu. The department may choose to hire at any of the below job levels and associated pay rates based on their business need and budget.
Responsibilities
Data Engineer III: Design, build, implement, and maintain data processing pipelines as described above. Considered highly skilled and proficient in discipline. Conducts complex, important work under minimal supervision and with wide latitude for independent judgment. Requires a bachelor's (or equivalency) + 6 years or a master's (or equivalency) + 4 years of directly related work experience. This is a Career-Level position in the General Professional track.
Minimum Qualifications
EQUIVALENCY STATEMENT: 1 year of higher education can be substituted for 1 year of directly related work experience (example: bachelor's degree = 4 years of directly related work experience). Data Engineer III: Requires a bachelor's (or equivalency) + 6 years or a master's (or equivalency) + 4 years of directly related work experience.