
Data engineer jobs in Mesa, AZ

- 860 jobs
  • Senior Data Engineer

    Addison Group (4.6 company rating)

    Data engineer job in Phoenix, AZ

    Job Title: Sr. Data Engineer
    Job Type: Full Time
    Compensation: $130,000 - $150,000 D.O.E.; eligible for medical, dental, vision, and life insurance coverage, & PTO

    ROLE OVERVIEW
    The Senior Data Engineer is responsible for designing, building, and maintaining scalable data platforms that support analytics, reporting, and advanced data-driven initiatives. This is a hands-on engineering role focused on developing reliable, high-performing data solutions while contributing to architectural standards, data quality, and governance practices. The ideal candidate has strong experience with modern data architectures, data modeling, and pipeline development, and is comfortable collaborating across technical and business teams to deliver trusted, production-ready datasets.

    KEY RESPONSIBILITIES
    - Design and maintain data models across analytical and operational use cases to support reporting and advanced analytics.
    - Build and manage data pipelines that ingest, transform, and deliver structured and unstructured data at scale.
    - Contribute to data governance practices, including data quality controls, metadata management, lineage, and stewardship.
    - Develop and maintain cloud-based data platforms, including data lakes, analytical stores, and curated datasets.
    - Implement and optimize batch and near-real-time data ingestion and transformation processes.
    - Support data migration and modernization efforts while ensuring accuracy, performance, and reliability.
    - Partner with analytics, engineering, and business teams to understand data needs and deliver high-quality solutions.
    - Enable reporting and visualization use cases by providing clean, well-structured datasets for downstream tools.
    - Apply security, privacy, and compliance best practices throughout the data lifecycle.
    - Establish standards for performance tuning, scalability, reliability, and maintainability of data solutions.
    - Implement automation, testing, and deployment practices to improve data pipeline quality and consistency.

    QUALIFICATIONS
    - Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent professional experience.
    - 5+ years of experience in data engineering or related roles.
    - Strong hands-on experience with data modeling, schema design, and pipeline development; cloud-based data platforms and services; and data ingestion, transformation, and optimization techniques.
    - Familiarity with modern data architecture patterns, including lakehouse-style designs and governance frameworks.
    - Experience supporting analytics, reporting, and data science use cases.
    - Proficiency in one or more programming languages commonly used in data engineering (e.g., Python, SQL, or similar).
    - Solid understanding of data structures, performance optimization, and scalable system design.
    - Experience integrating data from APIs and distributed systems.
    - Exposure to CI/CD practices and automated testing for data workflows.
    - Familiarity with streaming or event-driven data processing concepts preferred.
    - Experience working in Agile or iterative delivery environments.
    - Strong communication skills with the ability to document solutions and collaborate across teams.
    $130k-150k yearly 2d ago
  • Data Architect

    Akkodis

    Data engineer job in Phoenix, AZ

    Akkodis is seeking a Data Architect local to Phoenix, AZ who can come onsite 3 days a week. If you are interested, please apply!

    JOB TITLE: Data Architect
    EMPLOYMENT TYPE: 24+ month contract | 3 days/week on site
    PAY: $80 - $96/hr

    Responsibilities
    - ETL design and development for enterprise data solutions.
    - Design and build databases, data warehouses, and strategies for data acquisition, archiving, and recovery.
    - Review new data sources for compliance with standards.
    - Provide technical leadership, set standards, and mentor junior team members.
    - Collaborate with business stakeholders to translate requirements into scalable solutions.
    - Guide teams on Azure data tools (Data Factory, Synapse, Data Lake, Databricks).
    - Establish best practices for database design, data integration, and data governance.
    - Ensure solutions are secure, high-performing, and easy to support.

    Essential Skills & Experience
    - Bachelor's degree in Computer Science, Information Systems, or equivalent experience.
    - 10+ years with Microsoft SQL technologies.
    - 3+ years with cloud-based solutions (Azure preferred).
    - Strong knowledge of ETL, data modeling, and data warehousing.
    - Experience with source control, change/release management, and documentation.
    - Excellent communication and leadership skills.

    Preferred
    - Retail or grocery industry experience.
    - Familiarity with Power BI and MDM principles.

    Work Schedule
    Hybrid: 3 days onsite in Phoenix, AZ; 2 days remote.

    "Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits and 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State or local law; and Holiday pay upon meeting eligibility criteria. Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs which are direct hire to a client."
    $92k-128k yearly est. 3d ago
  • Systems Software Engineer

    Sunbelt Controls (3.3 company rating)

    Data engineer job in Phoenix, AZ

    Now Hiring: Systems Software Engineer II
    📍 Phoenix, Arizona | 💰 $108,000 - $135,000 per year

    🏢 About the Role
    We're looking for an experienced Systems Software Engineer II to join Sunbelt Controls, a leading provider of Building Automation System (BAS) solutions across the Western U.S. In this role, you'll develop and program databases, create custom graphics, and integrate control systems for smart buildings. You'll also support project startups, commissioning, and troubleshooting, working closely with project managers and engineers to deliver high-quality, energy-efficient building automation solutions. If you have a passion for technology, problem-solving, and helping create intelligent building systems, this opportunity is for you.

    ⚙️ What You'll Do
    - Design and program BAS control system databases and graphics for assigned projects.
    - Lead the startup, commissioning, and troubleshooting of control systems.
    - Work with networked systems and diagnose LAN/WAN connectivity issues.
    - Perform pre-functional and functional system testing, including LEED and Title 24 requirements.
    - Manage project documentation, including as-builts and commissioning records.
    - Coordinate with project teams, subcontractors, and clients for smooth execution.
    - Mentor and support junior Systems Software Engineers.

    🧠 What We're Looking For
    - 2-5 years of experience in Building Automation Systems or a related field.
    - Associate's degree in a technical field (Bachelor's in Mechanical or Electrical Engineering preferred).
    - Proficiency in MS Office, Windows, and basic TCP/IP networking.
    - Strong organizational skills and the ability to manage multiple priorities.
    - Excellent communication and customer-service skills.
    - Valid Arizona driver's license.

    💎 Why You'll Love Working With Us
    At Sunbelt Controls, we don't just build smart buildings; we build smart careers. As a 100% employee-owned company (ESOP), we offer a supportive, growth-oriented environment where innovation and teamwork thrive.

    What we offer:
    - Competitive salary: $108K - $135K, based on experience
    - Employee-owned company culture with a family-oriented feel
    - Comprehensive health, dental, and vision coverage
    - Paid time off, holidays, and 401(k)/retirement plan
    - Professional growth, mentorship, and ongoing learning opportunities
    - Veteran-friendly employer & Equal Opportunity workplace

    🌍 About Sunbelt Controls
    Sunbelt Controls is a premier BAS solutions provider serving clients across multiple industries, including data centers, healthcare, education, biotech, and commercial real estate. We specialize in smart building technology, system retrofits, analytics, and energy efficiency, helping clients reduce operational costs and achieve sustainable performance.

    👉 Apply today to join a team that's shaping the future of intelligent buildings.
    #Sunbelt #BuildingAutomation #SystemsEngineer #HVACControls #BASCareers
    $108k-135k yearly 5d ago
  • Data Scientist (Technical Leadership)

    Meta (4.8 company rating)

    Data engineer job in Phoenix, AZ

    We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.

    Responsibilities:
    1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
    2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies
    3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
    4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
    5. Build long-term vision and strategy for programs and products
    6. Collaborate with executives to define and develop data platforms and instrumentation
    7. Effectively communicate insights and recommendations to stakeholders
    8. Define success metrics, forecast changes, and set team goals
    9. Support developing roadmaps and coordinate analytics efforts across teams

    Minimum Qualifications:
    10. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
    11. 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
    12. 8+ years of work experience leading analytics work in an IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization and strategy development
    13. Experience working effectively with and leading complex analytics projects across multiple stakeholders and cross-functional teams, including Engineering, PM/TPM, Analytics and Finance
    14. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
    15. Experience communicating complex technical topics in a clear, precise, and actionable manner

    Preferred Qualifications:
    16. 10+ years of experience communicating the results of analyses to leadership teams to influence strategy
    17. Master's or Ph.D. degree in a quantitative field
    18. Bachelor's degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
    19. 10+ years of experience doing complex quantitative analysis in product analytics

    Public Compensation: $206,000/year to $281,000/year + bonus + equity + benefits

    Industry: Internet

    Equal Opportunity: Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment. Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
    $206k-281k yearly 60d+ ago
  • Data Scientist, NLP

    Datavant

    Data engineer job in Phoenix, AZ

    Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care. By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.

    We are looking for a motivated Data Scientist to help Datavant revolutionize the healthcare industry with AI. This is a critical role where the right candidate will have the ability to work on a wide range of problems in the healthcare industry with an unparalleled amount of data. You'll join a team focused on deep medical document understanding, extracting meaning, intent, and structure from unstructured medical and administrative records. Our mission is to build intelligent systems that can reliably interpret complex, messy, and high-stakes healthcare documentation at scale. This role is a unique blend of applied machine learning, NLP, and product thinking. You'll collaborate closely with cross-functional teams to:
    + Design and develop models to extract entities, detect intents, and understand document structure
    + Tackle challenges like long-context reasoning, layout-aware NLP, and ambiguous inputs
    + Evaluate model performance where ground truth is partial, uncertain, or evolving
    + Shape the roadmap and success metrics for replacing legacy document processing systems with smarter, scalable solutions

    We operate in a high-trust, high-ownership environment where experimentation and shipping value quickly are key. If you're excited by building systems that make healthcare data more usable, accurate, and safe, please reach out.

    Qualifications
    + 3+ years of experience with data science and machine learning in an industry setting, particularly in designing and building NLP models
    + Proficiency with Python
    + Experience with the latest in language models (transformers, LLMs, etc.)
    + Proficiency with standard data analysis toolkits such as SQL, NumPy, Pandas, etc.
    + Proficiency with deep learning frameworks like PyTorch (preferred) or TensorFlow
    + Industry experience shepherding ML/AI projects from ideation to delivery
    + Demonstrated ability to influence company KPIs with AI
    + Demonstrated ability to navigate ambiguity

    Bonus Experience
    + Experience with document layout analysis (using vision, NLP, or both)
    + Experience with Spark/PySpark
    + Experience with Databricks
    + Experience in the healthcare industry

    Responsibilities
    + Play a key role in the success of our products by developing models for document understanding tasks
    + Perform error analysis, data cleaning, and other related tasks to improve models
    + Collaborate with your team by making recommendations for the development roadmap of a capability
    + Work with other data scientists and engineers to optimize machine learning models and insert them into end-to-end pipelines
    + Understand product use-cases and define key performance metrics for models according to business requirements
    + Set up systems for long-term improvement of models and data quality (e.g. active learning, continuous learning systems, etc.)

    After 3 Months, You Will...
    + Have a strong grasp of technologies upon which our platform is built
    + Be fully integrated into ongoing model development efforts with your team

    After 1 Year, You Will...
    + Be independent in reading literature and doing research to develop models for new and existing products
    + Have ownership over models internally, communicating with product managers, customer success managers, and engineers to make the model and the encompassing product succeed
    + Be a subject matter expert on Datavant's models and a source from which other teams can seek information and recommendations

    We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.

    At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services. The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job. The estimated total cash compensation range for this role is: $136,000-$170,000 USD

    To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion. This job is not eligible for employment sponsorship.

    Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement. Know Your Rights: explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.

    At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.

    Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request. Requests for reasonable accommodations will be reviewed on a case-by-case basis. For more information about how we collect and use your data, please review our Privacy Policy.
    $136k-170k yearly 11d ago
  • Staff Data Science Consultant - Retail Planning Solutions

    Blue Yonder

    Data engineer job in Scottsdale, AZ

    Title: Staff Data Science Consultant - Retail Planning Solutions
    Travel: Ability to travel across North America up to 50%

    Overview:
    Blue Yonder's Professional Services team is seeking a Staff Data Science Consultant (DSC) to join our Retail Planning Solutions group. In this role, you'll partner with customers to unlock the full value of their business through data-driven insights and intelligent decision support. You'll serve as a trusted advisor by recommending software configuration options, implementing optimized solutions, and ensuring our customers achieve maximum value from their Blue Yonder investments.

    What you'll do:
    * Algorithm Development: Design, develop, and test algorithms, models, and solution approaches to address business issues with minimal supervision.
    * Prototype Development: Create prototypes and proofs of concept for new features to demonstrate feasibility and effectiveness.
    * Integration and Implementation: Embed models and algorithms into product solutions, ensuring operational aspects and client usability are prioritized.
    * Mentorship: Provide guidance and mentorship to junior data scientists and data analysts within the team.
    * Quality Assurance: Develop high-quality code and tests, adhering to clean code principles and Blue Yonder's standards.
    * Customer Collaboration: Engage with clients to understand their needs, present findings, and implement tailored solutions that drive business value.
    * Continuous Improvement: Stay abreast of advancements in technology and methodologies to continuously improve the team's capabilities.

    Technical Environment:
    * Advanced machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn)
    * Big data processing tools (e.g., Hadoop, Spark) and cloud computing platforms (AWS, Azure, Google Cloud)
    * Proficiency in programming languages such as Python and R
    * Experience with CI/CD pipelines for efficient deployment of machine learning models
    * Familiarity with SQL and data visualization tools (e.g., Tableau, Power BI)

    What we are looking for:
    * Advanced degree (PhD or Master's) in Computer Science, Data Science, Operations Research, Statistics, or a related discipline
    * 8+ years of experience applying data science to solve real-world challenges, with deep expertise in machine learning and operations research in a professional work setting
    * Proficiency in Python or R, with hands-on experience using modern frameworks such as TensorFlow or PyTorch
    * Proven track record of leading data-driven projects and mentoring teams to deliver impactful business outcomes
    * Exceptional ability to translate complex technical concepts into insights that inspire understanding and action across diverse audiences
    * Recognized thought leadership through publications, patents, or conference contributions is a strong plus
    * Experience in Retail Supply Chain Management or related domains is highly desirable

    Comparable title: Staff Data Science Consultant I - Consulting

    The annual salary range for this position is USD $105,000 - $129,000. The salary range information provided reflects the anticipated base salary range for this position based on current national data. Minimums and maximums may vary based on location. Individual salary will be commensurate with skills, experience, certifications or licenses and other relevant factors. In addition, this role will be eligible to participate in either the annual performance bonus or commission program, determined by the nature of the position.

    At Blue Yonder, we care about the wellbeing of our employees and those most important to them. This is reflected in our robust benefits package and options that includes:
    * Comprehensive Medical, Dental and Vision
    * 401K with Matching
    * Flexible Time Off
    * Corporate Fitness Program
    * A variety of voluntary benefits such as Legal Plans, Accident and Hospital Indemnity, Pet Insurance and much more

    At Blue Yonder, we are committed to a workplace that genuinely fosters inclusion and belonging in which everyone can share their unique voices and talents in a safe space. We continue to be guided by our core values and are proud of our diverse culture as an equal opportunity employer. We understand that your career search may look different than others, and embrace the professional, personal, educational, and volunteer opportunities through which people gain experience.

    Our Values: If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success, and the success of our customers.

    All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
    $105k-129k yearly Auto-Apply 6d ago
  • Data Scientist

    Athletics (Baseball Operations)

    Data engineer job in Mesa, AZ

    Title: Data Scientist
    Department: Baseball Operations
    Reporting to: Assistant GM, Baseball Development & Technology
    Job Classification: Full-time, Exempt
    Location (City, State): Mesa, AZ

    About the A's:
    The A's are a baseball team founded in 1901. They have a rich history, having won nine World Series championships and 15 American League pennants. The A's are known for pioneering the "Moneyball" approach to team-building, which focuses on using statistical analysis to identify undervalued players. In addition to their success on the field, the A's also have a positive and dynamic work culture. They have twice been recognized by Front Office Sports as one of the Best Employers in Sports. The A's are defined by their core pillars of being Dynamic, Innovative, and Inclusive. Working for the A's offers the opportunity to be part of an innovative organization that values its employees and strives to create a positive work environment.

    Description:
    The A's are hiring a full-time Data Scientist for the Baseball Operations Department. The Data Scientist will construct statistical models that inform decision-making in all facets of Baseball Operations. This position requires strong experience in statistics, data analytics, and computer science. This position is primarily based out of Mesa, AZ.

    Responsibilities:
    - Design, build, and maintain predictive models to support player evaluation, acquisition, development, and performance optimization.
    - Collaborate with Baseball Analytics staff to integrate analytical findings into decision-making tools and ensure seamless implementation.
    - Analyze and synthesize large-scale data, creating actionable insights for stakeholders within Baseball Operations.
    - Research and implement advanced statistical methods, including time series modeling, spatial statistics, boosting models, and Bayesian regression, to stay on the cutting edge of sabermetric research.
    - Develop and maintain robust data modeling pipelines and workflows in cloud environments to ensure scalability and reliability of analytical outputs.
    - Produce clear, concise written reports and compelling data visualizations to communicate insights effectively across diverse audiences.
    - Stay current with advancements in data science, statistical methodologies, and player evaluation techniques to identify and propose new opportunities for organizational improvement.
    - Mentor team members within the Baseball Operations department, fostering a collaborative and innovative research environment.

    Requirements:
    - PhD in Mathematics, Statistics, Computer Science, or a related quantitative field.
    - Proficiency in SQL, R, Python, or other similar programming languages.
    - Strong understanding of modern statistical and machine learning methods, including experience with predictive modeling techniques.
    - Proven experience productionizing machine learning models in cloud environments.
    - Ability to communicate complex analytical concepts effectively to both technical and non-technical audiences.
    - Demonstrated ability to independently design, implement, and present rigorous quantitative research.
    - Passion for sabermetric research and baseball analytics with a deep understanding of player evaluation methodologies.
    - Strong interpersonal and mentoring skills with a demonstrated ability to work collaboratively in a team-oriented environment.

    Preferred Qualifications:
    - Expertise in time series modeling, spatial statistics, boosting models, and Bayesian regression.
    - Previous experience in sports analytics, ideally baseball, is a plus.
    - Familiarity with integrating biomechanical data into analytical frameworks.

    Diversity Statement:
    Diversity, Equity, and Inclusion are in our organizational DNA. Our commitment to these values is unwavering, on and off the field. Together, we continue to build an inclusive, innovative, and dynamic culture that encourages, supports, and celebrates belonging and amplifies diverse voices. Combining a collaborative and innovative work environment with talented and diverse team members, we've created a workforce in which every team member has the tools to reach their full potential.

    Equal Opportunity Consideration:
    We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, age, disability, gender identity, marital or veteran status, or any other protected class.
    $74k-107k yearly est. Auto-Apply 51d ago
  • Data Engineer

    Runbuggy Inc.

    Data engineer job in Tempe, AZ

    About Us: RunBuggy is the most technically advanced automotive logistics platform on the market. Period. Backed by Porsche Ventures and Hearst Ventures, RunBuggy is transforming the way cars move. Our cutting-edge technology is trusted by some of the largest OEMs, captive finance companies, and automotive lenders in the world to streamline vehicle transportation at scale. RunBuggy's end-to-end platform connects car shippers and haulers in real time - eliminating the friction of traditional load boards and costly custom software. For shippers, RunBuggy integrates directly into existing management systems, reducing transportation costs and accelerating delivery timelines. For transporters, we offer a smarter, more profitable way to find, accept, and manage loads - all from a single app. Since launching in 2019, RunBuggy has grown to over 150 team members, facilitated the movement of hundreds of thousands of vehicles, and attracted tens of thousands of transporters across the U.S. We're not just building a better logistics platform - we're redefining the future of automotive transportation. About the Role: Are you passionate about building the backbone of data-driven innovation? As a Data Engineer on our Data Science team, you'll architect and develop scalable data pipelines and systems that empower our Data Science department to unlock powerful insights. You'll work side-by-side with talented colleagues across engineering, business, and analytics, ensuring our data infrastructure is robust, secure, and ready for tomorrow's challenges. In this pivotal role, you'll lead engineering projects, champion best practices in data quality and security, and optimize workflows to keep our organization ahead in a fast-paced environment. Your expertise in cloud platforms, ETL processes, and cutting-edge data engineering tools will be key to enabling seamless data capture, processing, and analysis. 
If you thrive on collaboration, innovation, and making a real impact with data, we want to hear from you! This role can be performed in Phoenix, AZ, or in the Bay Area of CA. What You Will Be Doing: Design, develop, and maintain scalable data pipelines and systems. Independently create and own new data capture/ETL pipelines for the entire stack and ensure data quality. Collaborate with data scientists, engineers, business leaders, and other stakeholders to understand data requirements and provide the necessary infrastructure. Create and contribute to frameworks that improve the effectiveness of logging data, triage issues, and resolution. Define and manage Service Level Agreements (SLAs) for all data sets in allocated areas of ownership. Lead data engineering projects and determine the appropriate tools and libraries for each task. Implement data security and privacy best practices. Create and maintain technical documentation for data engineering processes. Work with cloud-based data storage and processing solutions (for example, Docker and Kubernetes). Build out and support a DAG orchestration cluster framework. Migrate workflows from batch processes to the DAG cluster via concurrent data flows. Data pipeline maintenance, including debugging code, monitoring, and incident response. Collaborate with engineering to enforce data collection and data contracts for APIs, databases, etc. Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts. Other duties as assigned. Requirements What You Bring to the Team by Way of Skills and Experience: Bachelor's degree in Computer Science, Engineering, or a related field required; master's degree preferred. 5+ years of experience in data engineering. Proficiency in Python and experience with data engineering libraries (e.g., Pandas). Experience with ETL processes and tools. Strong knowledge of relational and non-relational databases. Experience with cloud platforms (e.g., AWS, GCP, Azure). 
Excellent communication skills. Ability to work independently and lead projects. Experience with data warehousing solutions. Familiarity with data visualization tools (e.g., Tableau). Experience with building and managing DAG clusters (e.g., Airflow, Prefect). Ability to work with the following: JavaScript, Node.js, AngularJS, Java, and Java Spring Boot. Knowledge of machine learning and data science workflows. Ability to handle a variety of duties in a fast-paced environment. Excellent organizational skills, along with professionalism and diplomacy with internal and external customers/vendors. Ability to prioritize tasks and manage time. Ability to work under tight deadlines. Travel Requirements: Employees based in Phoenix are expected to travel to the Bay Area (Northern California) approximately every other month. Employees based outside of Phoenix are expected to travel to Phoenix with similar frequency. Whenever possible, advanced notice of required travel will be provided. What is in it for You and Why you Should Apply: Market competitive pay based on education, experience, and location. Highly competitive medical, dental, vision, Life w/ AD&D, Short-Term Disability insurance, Long-Term Disability insurance, pet insurance, identity theft protection, and a 401(k) retirement savings plan. Employee wellness program. Employee rewards, discounts, and recognition programs. Generous company-paid holidays (12 per year), vacation, and sick time. Paid paternity/maternity leave. Monthly connectivity/home office stipend if working from home 5 days a week. A supportive and positive space for you to grow and expand your career. Pay Range Disclosure: The advertised range represents the expected pay range for this position at the time of posting based on education, experience, skills, location, and other factors. To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. 
The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. RunBuggy is an equal-opportunity employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination, harassment, and retaliation on the basis of race, color, religion, sex (including gender identity and sexual orientation), pregnancy, parental status, national origin, age, disability, genetic information, or any other status protected under federal, state, or local law. Salary Description The expected pay range is $140k to $170k/yr. DOE.
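The RunBuggy posting above describes migrating batch workflows onto a DAG orchestration cluster via concurrent data flows. As a minimal sketch of the core idea (dependency-ordered scheduling with independent steps grouped for concurrent execution), here is a toy stand-in using Python's standard library; all step names are hypothetical, and a real deployment would use an orchestrator such as Airflow or Prefect rather than this loop:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: a nightly batch job decomposed into a DAG.
# Each key maps to the set of steps it depends on.
dag = {
    "extract_orders": set(),
    "extract_drivers": set(),
    "transform_loads": {"extract_orders", "extract_drivers"},
    "load_warehouse": {"transform_loads"},
}

def schedule(dag):
    """Yield batches of steps whose dependencies are met; each batch can run concurrently."""
    ts = TopologicalSorter(dag)
    ts.prepare()
    while ts.is_active():
        ready = sorted(ts.get_ready())  # independent steps, eligible to run in parallel
        yield ready
        ts.done(*ready)

batches = list(schedule(dag))
print(batches)
```

Both extract steps land in the first batch because neither depends on the other, which is exactly the concurrency a DAG scheduler recovers from a previously serial batch job.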
    $140k-170k yearly 25d ago
  • Big Data Consultant

    Career Guidant

    Data engineer job in Phoenix, AZ

Career Guidant is an internationally acclaimed, trusted, multi-faceted organisation offering Information Technology Custom Learning Services for Enterprises, Lateral Staffing Solutions, Information Technology Development & Consulting, Infrastructure & Facility Management Services, and Technical Content Development as core competencies. Our experienced professionals bring a wealth of industry knowledge to each client and operate in a manner that produces superior quality and outstanding results. Career Guidant's proven and tested methodologies ensure client satisfaction as the primary objective. We are committed to our core values of Client Satisfaction, Professionalism, Teamwork, Respect, and Integrity. Career Guidant, with its large network of delivery centres, support offices, and partners across India, Asia Pacific, the Middle East, the Far East, Europe, and the USA, is committed to rendering the best service to each client to ensure their operations continue to run smoothly. Our Mission: "To build customer satisfaction, and strive to provide the complete Information Technology solution you need to stay ahead of your competition." Contact us if you have any queries about our services. Job Description • At least 5 years of design and development experience in Big Data, Java, or data warehousing related technologies • At least 3 years of hands-on design and development experience with Big Data technologies - Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting • Should be a strong communicator and able to work independently with minimum involvement from client SMEs • Should be able to work in a team in a diverse/multiple-stakeholder environment Qualifications • Bachelor's degree or foreign equivalent required. Will also consider one year of relevant work experience in lieu of every year of education. Additional Information Note: No OPT or H1 candidates for this position. Client: Infosys
    $73k-100k yearly est. 11h ago
  • Data Scientist

    Isolved HCM

    Data engineer job in Phoenix, AZ

    Summary/objective We are seeking a highly skilled Data Scientist to focus on building and deploying predictive models that identify customer churn risk and upsell opportunities. This role will play a key part in driving revenue growth and retention strategies by leveraging advanced machine learning, statistical modeling, and large-scale data capabilities within Databricks. Why Join Us? Be at the forefront of using Databricks AI/ML capabilities to solve real-world business challenges. Directly influence customer retention and revenue growth through applied data science. Work in a collaborative environment where experimentation and innovation are encouraged. Core Job Duties: Model Development * Design, develop, and deploy predictive models for customer churn and upsell propensity using Databricks ML capabilities. * Evaluate and compare algorithms (e.g., logistic regression, gradient boosting, random forest, deep learning) to optimize predictive performance. * Incorporate feature engineering pipelines that leverage customer behavior, transaction history, and product usage data. Data Engineering & Pipeline Ownership * Build and maintain scalable data pipelines in Databricks (using PySpark, Delta Lake, and MLflow) to enable reliable model training and scoring. * Collaborate with data engineers to ensure proper data ingestion, transformation, and governance. Experimentation & Validation * Conduct A/B tests and back testing to validate model effectiveness. * Apply techniques for model monitoring, drift detection, and retraining in production. Business Impact & Storytelling * Translate complex analytical outputs into clear recommendations for business stakeholders. * Partner with Product and Customer Success teams to design strategies that reduce churn, increase upsell and improve customer retention KPIs. Minimum Qualifications: * Master's or PhD in Data Science, Statistics, Computer Science, or related field (or equivalent industry experience). 
* 3+ years of experience building predictive models in a production environment. * Strong proficiency in Python (pandas, scikit-learn, PySpark) and SQL. * Demonstrated expertise using Databricks for: * Data manipulation and distributed processing with PySpark. * Building and managing models with MLflow. * Leveraging Delta Lake for efficient data storage and retrieval. * Implementing scalable ML pipelines within Databricks' ML Runtime. * Experience with feature engineering for behavioral and transactional datasets. * Strong understanding of customer lifecycle analytics, including churn modeling and upsell/recommendation systems. * Ability to communicate results and influence decision-making across technical and non-technical teams. Preferred Qualifications: * Experience with cloud platforms (Azure Databricks, AWS, or GCP). * Familiarity with Unity Catalog for data governance and security. * Knowledge of deep learning frameworks (TensorFlow, PyTorch) within Databricks. * Exposure to MLOps best practices (CI/CD for ML, model versioning, monitoring). * Background in SaaS, subscription-based businesses, or customer analytics. Physical Demands Prolonged periods of sitting at a desk and working on a computer. Must be able to lift up to 15 pounds. Travel Required: Limited Work Authorization: Employees must be legally authorized to work in the United States. FLSA Classification: Exempt Location: Any Effective Date: 9/16/2025 About isolved isolved is a provider of human capital management (HCM) solutions that help organizations recruit, retain and elevate their workforce. More than 195,000 employers and 8 million employees rely on isolved's software and services to streamline human resource (HR) operations and deliver employee experiences that matter. 
isolved People Cloud is a unified yet modular HCM platform with built-in artificial intelligence (AI) and analytics that connects HR, payroll, benefits, and workforce and talent management into a single solution that drives better business outcomes. Through the Sidekick Advantage, isolved also provides expert guidance, embedded services and an engaged community that empowers People Heroes to grow their companies and careers. Learn more at ******************* isolved is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. isolved is a progressive and open-minded meritocracy. If you are smart and good at what you do, come as you are. Visit ************************** for more information regarding our incredible culture and focus on our employee experience. Visit ************************* for a comprehensive list of our employee total rewards offerings.
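The isolved posting above centers on feature engineering over customer behavior and transaction history for churn models. A minimal illustration of the recency/frequency features such pipelines typically derive is sketched below in pure Python; the customer data, date threshold, and `at_risk` rule are all invented for illustration, and a real churn model would be trained (e.g., in Databricks with MLflow) rather than rely on a fixed cutoff:

```python
from datetime import date

# Hypothetical customer activity history: customer_id -> list of activity dates.
activity = {
    "c1": [date(2025, 1, 5), date(2025, 6, 1), date(2025, 9, 1)],
    "c2": [date(2024, 2, 1)],
}

def churn_features(dates, today=date(2025, 9, 15)):
    """Derive simple recency/frequency features of the kind fed to churn models."""
    recency_days = (today - max(dates)).days  # days since last activity
    frequency = len(dates)                    # total activity events
    return {
        "recency_days": recency_days,
        "frequency": frequency,
        "at_risk": recency_days > 180,        # toy flag, not a trained model
    }

features = {cid: churn_features(dates) for cid, dates in activity.items()}
```

In practice the boolean flag would be replaced by a propensity score from a trained classifier; the point here is only the shape of the feature pipeline.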
    $74k-107k yearly est. 4d ago
  • Data Engineer

LRS 4.3 company rating

    Data engineer job in Peoria, AZ

Job Description Looking for more than just another assignment? We're looking for you! This isn't just another assignment, but a real opportunity and a challenge for the right person. LRS Consulting Services is seeking a Data Engineer for a 6-month contract-to-hire opportunity with our client in Central Illinois! LRS Consulting Services has been delivering the highest quality consultants to our clients since 1979. We have built a solid reputation for dealing with our clients and our consultants with honesty, integrity, and respect. We work hard every day to maintain that reputation, and we are very interested in candidates who can help us. If you're that candidate, this opportunity is made for you! As a Data Engineer, you will: Support and help implement a Data and Analytics strategy. Help to increase timely, transparent accessibility to enterprise data via appropriate analytics and reporting resources. Help expand, maintain, and administer the client's primary analytics environment, including data model and dictionary, ELT processes, connector development, and more. Develop database models/data integration solutions and support existing data solutions/platforms. Essential Duties and Responsibilities: Writes and maintains core data extraction programs in COBOL to enable core data to enrich the analytics environment. Supports ongoing enhancements to core interfaces, such as implementation of API-based core calls as appropriate. Works with stakeholders to determine new internal/external data sources (First Data/PSCU/CO-OP/Claritas/Temenos, etc.) for the analytics environment; builds/partners with external resources to build appropriate data connectors for new applications. Works with IS Programming, Information Security, IT Network Communications, and product areas to develop, test, and implement extraction and ingestion workflows/pipelines. 
Maintains data extraction standards and documentation for the analytics environment in partnership with IT Web Engineering and other system owners. Writes and maintains conformation, transformation, and data modeling coding/standards for use within the analytics environment to continuously expand, improve, and document the warehouse and reporting layers. Performs regular monitoring of the analytics environment and systems, in partnership with IT Network Communications and Information Security. Recommends workflows for ongoing data backups to balance availability and integrity for the analytics environment. Works closely with data stewards and Information Security to establish and implement controls to manage the risk of vulnerabilities to confidentiality, integrity, or availability of data or analytics outputs from the analytics environment. Serves as a member of the Data Governance team with an emphasis on metadata management and data documentation. Supports ongoing enterprise-wide change management and communications for enhancements to the technical capabilities of enterprise analytics environment. Performs other duties as assigned. * Required: 3+ years of experience with COBOL or Python programming 3+ years of experience with SQL, writing SQL Queries. Advanced working SQL knowledge and experience working with relational databases, data governance/data modeling, query authoring (SQL) as well as working familiarity with a variety of databases Experience in database administration is desired Experience in metadata management is desired Experience working with ETL in a data warehouse, data lake, or data mart environment. 
Experience with data modeling (logical/physical, relational) Strong analytic skills related to working with structured and/or unstructured datasets Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement Candidate must be able to effectively communicate in English (written & verbal) Candidate must have permanent authorization to work in the USA for any employer The base range for this contract position is $35.00 - $80.00 per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. LRS is an equal opportunity employer. Applicants for employment will receive consideration without unlawful discrimination based on race, color, religion, creed, national origin, sex, age, disability, marital status, gender identity, domestic partner status, sexual orientation, genetic information, citizenship status or protected veteran status.
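The LRS role above leans heavily on SQL query authoring for data quality checks and root cause analysis. A small sketch of the kind of null-rate profiling query involved is shown below, using Python's built-in sqlite3 as a stand-in for the warehouse; the table and columns are invented for illustration:

```python
import sqlite3

# In-memory toy warehouse table (hypothetical schema), of the kind a
# Data Engineer might profile when triaging a data quality issue.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members (id INTEGER, email TEXT, branch TEXT)")
con.executemany("INSERT INTO members VALUES (?, ?, ?)", [
    (1, "a@x.com", "Peoria"),
    (2, None, "Peoria"),
    (3, "c@x.com", None),
])

# Count nulls per column -- a common first step in root cause analysis,
# since a sudden spike in nulls usually points at an upstream extract.
row = con.execute("""
    SELECT COUNT(*) AS total,
           SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END) AS null_email,
           SUM(CASE WHEN branch IS NULL THEN 1 ELSE 0 END) AS null_branch
    FROM members
""").fetchone()
print(row)  # (3, 1, 1)
```

The same `SUM(CASE WHEN ... )` pattern ports directly to most relational warehouses, which is why it shows up so often in data quality monitoring jobs.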
    $35-80 hourly 60d+ ago
  • Data Governance Engineer

Tata Consulting Services 4.3 company rating

    Data engineer job in Phoenix, AZ

Job Title : Data Governance Engineer Experience Required - 6+ Years Must Have Technical/Functional Skills * Understanding of Data Management and Data Governance concepts (metadata, lineage, data quality, etc.) and prior experience. * 2 - 5 years of Data Quality Management experience. * Intermediate competency in SQL & Python or a related programming language. * Strong familiarity with data architecture and/or data modeling concepts * 2 - 5 years of experience with Agile or SAFe project methodologies Roles & Responsibilities * Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others. * Identify data quality issues, perform root-cause analysis of data quality issues, and drive remediation of audit and regulatory feedback. * Develop a deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for the business. * Responsible for holistic platform data quality monitoring, including but not limited to critical data elements. * Collaborate with and influence product managers to ensure all new use cases are managed according to policies. * Influence and contribute to strategic improvements to data assessment processes and analytical tools. * Responsible for monitoring data quality issues, communicating issues, and driving resolution. * Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams. * Subject matter expertise on multiple platforms. * Responsible for partnering with the Data Steward Manager in developing and managing the data compliance roadmap. Generic Managerial Skills, If any * Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. 
Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions. * Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals, accountable to bring in ideas, information, suggestions, and expertise from others outside & inside the immediate team. * Communication: Influences and holds others accountable and has ability to convince others. Identifies the specific data governance requirements and is able to communicate clearly and in a compelling way. Salary Range - $100,000 to $120,000 per year TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing. #LI-JS2
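The data quality monitoring responsibilities in the TCS posting above can be pictured as declarative rules evaluated against records, with failures routed into issue triage. A minimal sketch in pure Python follows; the rule names and fields are hypothetical and not TCS's actual controls:

```python
# Illustrative data quality rules for critical data elements (hypothetical).
# Each rule maps a name to a predicate that a conforming record must satisfy.
rules = {
    "ssn_present": lambda r: bool(r.get("ssn")),
    "balance_non_negative": lambda r: r.get("balance", 0) >= 0,
}

records = [
    {"id": 1, "ssn": "xxx-xx-1234", "balance": 100.0},
    {"id": 2, "ssn": "", "balance": -5.0},
]

def evaluate(records, rules):
    """Return rule_name -> list of failing record ids, for monitoring and triage."""
    failures = {name: [] for name in rules}
    for record in records:
        for name, check in rules.items():
            if not check(record):
                failures[name].append(record["id"])
    return failures

failures = evaluate(records, rules)
```

Production platforms express the same idea as versioned rule definitions with lineage back to the critical data element, so a failing rule immediately identifies both the record and the policy it violates.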
    $100k-120k yearly 9d ago
  • Data Scientist Lead, Vice President

JPMorgan Chase & Co 4.8 company rating

    Data engineer job in Tempe, AZ

JobID: 210686904 JobSchedule: Full time JobShift: Day. Join a powerhouse team at the forefront of Home Lending Data & Analytics, where we partner with Product, Marketing, and Sales to solve the most urgent and complex business challenges. Our team thrives in a fast-paced, matrixed environment, driving transformative impact through bold analytics and innovative data science solutions. We are relentless in our pursuit of actionable insights, seamlessly engaging with stakeholders and redefining what's possible through strategic collaboration and visionary problem solving. If you're ready to shape the future of home lending with breakthrough ideas and data-driven leadership, this is the team for you. We are seeking a senior Data Scientist Lead to join our Home Lending Data & Analytics team, supporting the Originations Product Team. This strategic role requires a visionary leader with a consulting background who excels at translating complex data into actionable business insights. The ideal candidate will be a recognized thought leader, demonstrating exceptional critical thinking and problem-solving skills, and a proven ability to craft and deliver compelling data-driven stories that influence decision-making at all levels. Success in this role requires not only technical expertise, but also the ability to inspire others, drive innovation, and communicate insights in a way that resonates with both technical and non-technical audiences. Key Responsibilities: * Identify, quantify, and solve obstacles to business goals using advanced business analysis and data science skillsets. * Recognize and communicate meaningful trends and patterns in data, delivering clear, compelling narratives to drive business decisions. * Serve as a data expert and consultant to the predictive modeling team, identifying and validating data sources. * Advise business and technology partners on data-driven opportunities to increase efficiency and improve customer experience. 
* Proactively interface with, and gather information from, other areas of the business (Operations, Technology, Finance, Marketing). * Extract and analyze data from various sources and technologies using complex SQL queries. * Summarize discoveries with solid data support and quick turnaround, tailoring messages for technical and non-technical audiences. * Influence upward and downward; mentor junior team members and interface with business leaders to drive strategic initiatives. * Foster a culture of innovation, attention to detail, and results within the team. Qualifications: * 6+ years of experience in business strategy, analytics, or data science. * 2+ years of experience in business management consulting. * Strong experience with SQL (query/procedure writing). * Proficiency in at least one versatile, cross-technology tool/language: Python, SAS, R, or Alteryx. * Demonstrated ability to craft compelling stories from data and present insights that influence decision-making. * Clear and succinct written and verbal communication skills, able to frame and present messages for different audiences. * Critical and analytical thinking, with the ability to maintain detail focus and retain big picture perspective. * Strong Microsoft Excel skills. * Ability to work independently, manage shifting priorities and projects, and thrive in a fast-paced, competitive environment. * Excellent interpersonal skills to work effectively with a variety of individuals, departments, and organizations. * Experience mentoring or leading teams is highly desirable. Preferred Background: * Experience in Mortgage Banking or Financial Services industry preferred. * Previous experience in consulting, with exposure to a variety of industries and business challenges. * Track record of successful stakeholder engagement and change management. * Recognized as a thought leader, with experience driving strategic initiatives and innovation.
    $99k-128k yearly est. Auto-Apply 3d ago
  • Slalom Flex (Project Based) - Data Engineer

Slalom 4.6 company rating

    Data engineer job in Phoenix, AZ

    Data Engineer Who You'll Work With At Slalom, personal connection meets global scale. Our vision is to enable a world in which everyone loves their work and life. We help organizations of all kinds redefine what's possible, give shape to the future-and get there. What You'll Do * Perform manual data ingestions * Monitor and resolve data pipelines * Maintain quality control of operations What You'll Bring * 3+ years of data engineering experience specifically using Azure. * Hands on experience with FiveTran, Azure Data Factory, Azure Blob Storage, Azure SQL Server. * Hands on experience with SQL and data validation. * Hands on experience monitoring and troubleshooting data pipelines in production. * Understanding or hands-on experience with reporting dashboards such as Tableau or Power BI. * Strong communication. * Consulting experience preferred. About Us Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all. Compensation and Benefits Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. 
We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance. Slalom is committed to fair and equitable compensation practices. For this position, the pay range is $55/HR to $80/HR. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time. We will accept applicants until December 18th, 2025. We are committed to pay transparency and compliance with applicable laws. If you have questions or concerns about the pay range or other compensation information in this posting, please contact us at: ********************. EEO and Accommodations Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team or contact ****************************** if you require accommodations during the interview process.
    $80 hourly Easy Apply 9d ago
  • Data Scientist

    V15P1Talonnn

    Data engineer job in Arizona City, AZ

Java Full Stack Developer (Job Code: J2EE) 3 to 10 years of experience developing web based applications in Java/J2EE technologies Knowledge of RDBMS and NoSQL data stores and polyglot persistence (Oracle, MongoDB etc.) Knowledge of event sourcing and distributed message systems (Kafka, RabbitMQ) AngularJS, React, Backbone or other client-side MVC experience Experience with JavaScript build tools and dependency management (npm, bower, grunt, gulp) Experience creating responsive designs (Bootstrap, mobile, etc.) Experience with unit and automation testing (Jasmine, Protractor, JUnit) Expert knowledge of build tools and dependency management (gradle, maven) Knowledge of Domain Driven Design concepts and microservices Participate in software design and development using a modern Java and web technology stack. Should be proficient in Spring Boot and Angular Sound understanding of Microservices architecture Good understanding of event driven architecture Experience building Web Services (REST/SOAP) Experience in writing JUnit tests Good to have experience in TDD Expert in developing highly responsive web applications using Angular 4 or above Good Knowledge of HTML/HTML5/CSS, JavaScript/AJAX, and XML Good understanding of SQL and relational databases and NoSQL databases Familiarity with design patterns and should be able to design small to medium complexity modules independently Experience with Agile or similar development methodologies Experience with a versioning system (e.g., CVS/SVN/Git) Experience with agile development methodologies including TDD, Scrum and Kanban Strong verbal communications, cross-group collaboration skills, analytical, structured and strategic thinking. 
Great interpersonal skills, cultural awareness, belief in teamwork Collaborating with product owners, stakeholders and potentially globally distributed teams Work cross-functionally in an Agile environment Excellent problem-solving, organizational and analytical skills Qualification : BE / B.Tech / MCA / ME / M.Tech ****************************
    $73k-105k yearly est. Auto-Apply 60d+ ago
  • Sr. Big Data Engineer

Ktek Resourcing 4.1 company rating

    Data engineer job in Phoenix, AZ

K-Tek's core business is temporary staffing, permanent placement, and volume hiring. Since inception, our staffing solutions practice has grown multi-fold, with global offices. We know what works best for our clients and what doesn't. This is the key differentiator, and this is how we edge over the competition. Job Description Hi, please review and let me know if you are interested. Good salary + benefits + relocation + travel expenses - all provided. ONLY FULL-TIME. Locations: Phoenix, AZ; Washington; Atlanta. Must-haves: Java (expert developer/engineer), Hadoop (expert level), strong communication skills (involves interacting with clients). Mode of interview: Online test, phone rounds 1 and 2, and a final interview. Position 1: Big Data Sr. Software Engineer (10+ years) Project: Development/Coding/Technical (multiple roles) Must-haves: Java (expert developer/engineer), Hadoop (expert level), strong communication skills (involves interacting with clients). Overall 8-10 yrs of experience At least 6 years of Java expertise/development + coding Must have a minimum of 3 yrs of Hadoop, Hive, Big Data experience Description: Very strong server-side Java experience, especially in open source, data-intensive, distributed environments Experience in the implementation role of high-end software products in the telecom/financial/healthcare/hospitality domains Should have worked on open source products; contribution towards them would be an added advantage Implemented and in-depth knowledge of various Java, J2EE, and EAI patterns Implemented complex projects dealing with considerable data sizes (GB/PB) and high complexity Well aware of various architectural concepts (multi-tenancy, SOA, SCA, etc.) and NFRs (performance, scalability, monitoring, etc.) 
Good understanding of algorithms, data structures, and performance optimization techniques Knowledge of database principles, SQL, and experience working with large databases beyond just data access Exposure to the complete SDLC and PDLC Capable of working as an individual contributor and within a team Self-starter & resourceful personality with the ability to manage pressure situations Should have experience/knowledge of working with batch processing/real-time systems using various Open Source technologies like Solr, Hadoop, NoSQL DBs, Storm, Kafka, etc. ROLE & RESPONSIBILITIES : Implementation of various solutions arising out of large data processing (GBs/PBs) over various NoSQL, Hadoop, and MPP-based products Active participation in the various Architecture and design calls with Big Data customers Working with Sr. Architects and providing implementation details to offshore teams Conducting sessions/writing whitepapers/case studies pertaining to Big Data Responsible for timely and quality deliveries Fulfill organization responsibilities - sharing knowledge and experience with other groups in the organization, conducting various technical sessions and trainings Additional Information All your information will be kept confidential according to EEO guidelines.
    $90k-128k yearly est. 11h ago
  • Hadoop Developer

    Charter Global 4.0company rating

    Data engineer job in Phoenix, AZ

    Senior Hadoop Tools Developer · Computer Science degree or related; a master's degree is an advantage · 5-10+ years total experience in development, mainly around Java and related technologies in the Java stack (e.g. Spring) · 2-3+ years of in-depth knowledge and experience with the Hadoop ecosystem (M/R, Hive - master level) · Spark and HBase experience is a plus · 3+ years of experience working in Linux/Unix · Good understanding of and experience with performance tuning for complex software projects, mainly around large scale and low latency · Experience leading design and architecture is an advantage · NoSQL experience, mainly around MongoDB, is an advantage · Web development experience, mainly around Angular, is an advantage · Hadoop/Java certifications are an advantage · Excellent communication skills · Ability to work in a fast-paced, team-oriented environment Qualifications 5-10+ years total experience in development, mainly around Java and related technologies in the Java stack (e.g. Spring) 2-3+ years of in-depth knowledge and experience with the Hadoop ecosystem (M/R, Hive - master level)
    $78k-98k yearly est. 10h ago
  • Data Engineer

    American Express 4.8company rating

    Data engineer job in Phoenix, AZ

    At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. **How will you make an impact in this role?** The Infrastructure Data & Analytics team unifies FinOps, Data Science and Business Intelligence to enable Technology cost transparency, infrastructure performance optimization and commercial efficiency for the enterprise through consistent, high-quality data and predictive analytics. This team within Global Infrastructure aims to establish and reinforce a culture of effective metrics, data-driven business processes, architecture simplification, and cost awareness. Metric-driven cost optimization, workload-specific forecasting and robust contract management are among the tools and practices required to drive accountability for delivering business solutions that derive maximum value. 
The result will provide a solid foundation for decision-making around cost, quality and speed. We are seeking an experienced Data Engineer for our team who can design, develop and enhance our data model and architecture. This team sets the foundation for how infrastructure utilization, consumption and asset inventory data is ingested, validated, and presented in our suite of reporting capabilities. The Data Engineer will develop and refine processes to ensure data quality (data accuracy, completeness, timeliness, consistency, uniqueness and relevance) is meeting the highest standards for our users. They will support teams in data quality testing and validation processes to identify where data pipelines are not fit for purpose or need improvement. This individual will be responsible for implementing our target data architecture, leveraging best practices for data management, automating data feeds and transformations, and maintaining excellent documentation. Let's build on what you know. The role will require a unique blend of strong DataOps technical and design skills to translate business decisions into data requirements. This individual will build a deep understanding of the infrastructure data we use in order to work across the ID&A team and key stakeholders to surface the appropriate data to tell a data story. This includes implementing and maintaining a data architecture that follows data management best practices for ensuring data ingestion, transformation, storage and analytics are handled according to their specific purpose using the appropriate tools: ingestion captures raw data without applying business logic, transformation processes data discretely for auditability, and analytical queries retrieve structured outputs without relying on upstream processes. They will be responsible for building and automating data pipelines to maximize data availability and efficiency, as well as migrating the data model and transformations to our target architecture. 
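
The separation of concerns described above (raw ingestion without business logic, discrete transformation for auditability, and analytics that read only curated outputs) can be sketched as layered functions; this is a minimal Python illustration in which the record fields and layer names are hypothetical, not from the posting:

```python
import json

RAW = []       # landing zone: raw payloads kept as-is, no business logic
CURATED = []   # transformed layer: validated, typed records

def ingest(payload: str) -> None:
    """Ingestion: capture the raw payload untouched, for auditability."""
    RAW.append(payload)

def transform() -> None:
    """Transformation: parse and validate each record discretely."""
    for payload in RAW:
        record = json.loads(payload)
        if "cost_usd" in record and record["cost_usd"] >= 0:
            CURATED.append({"asset": record["asset"],
                            "cost_usd": float(record["cost_usd"])})

def query_total_cost() -> float:
    """Analytics: read only the curated layer, never the raw feed."""
    return sum(r["cost_usd"] for r in CURATED)

ingest('{"asset": "vm-01", "cost_usd": 12.5}')
ingest('{"asset": "vm-02", "cost_usd": -1}')  # fails validation, stays in RAW only
transform()
```

Because each layer has a single purpose, a bad record can be traced back to the raw feed without re-running upstream systems, and analytical queries never depend on ingestion logic.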
This individual will bring passion for data-driven decisions, enterprise solutions, and collaboration to the role, transforming platform data into actionable insights by utilizing data engineering and data visualization best practices. **Key responsibilities include:** + Data Architecture: Perform all technical aspects of data architecture and database management for ID&A, including developing data pipelines, new database structures and APIs as applicable + Data Design: Translate logical data architectures into physical data designs, ensuring alignment with data modeling best practices and standards + Data Process and Monitoring: Ensure proper data ingestion, validation, testing, and monitoring for ID&A data lake + Data Quality Testing: Develop and provide subject matter expertise on data analysis, testing and Quality Assurance (QA) methodologies and processes + Support database and data platform administration for initiatives building, enhancing, or maintaining databases, data warehouses and data pipelines + Data Migration: Design and support migration to a technology-agnostic data model that decouples data architecture from specific tools or platform used for storage, processing, or access + Data Integrity: Ensure accuracy, completeness, and data quality, independent of upstream or downstream systems; collaborate with data owners to validate and refine data sources where applicable + Agile Methodologies: Function as a senior member of an agile feature team and manage data assets as per the enterprise standards, guidelines and policies + Collaboration: Partner closely with business intelligence team to capture and define data requirements for new and enhanced data visualizations + Prioritization: Work with product teams to prioritize new features for ongoing sprints and manage backlog + Continuous Improvement: Monitor performance and make recommendations for areas of opportunity/improvement for automation and tool usage + Compliance: Understand and abide by SDLC 
standards and policies + Liaison: Act as point of contact for data-related inquiries and data access requests + Innovation: Leverage the evolving technical landscape as needed, including AI, Big Data, Machine Learning and other technologies to deliver meaningful business insights **Minimum Requirements:** + 4+ years of DataOps engineering experience in implementing pipeline orchestration, data quality monitoring, governance, security processes, and self-service data access + Experience managing databases, ETL/ELT pipelines, data lake architectures, and real-time processing + Proficiency in API development and stream processing frameworks + Hands-on coding experience in Python + Hands-on expertise with design and development across one or more database management systems (e.g. SQL Server, PostgreSQL, Oracle) + Testing and Troubleshooting: Ability to test, troubleshoot, and debug data processes + Strong analytical skills with a proven ability to understand and document business data requirements in complete, accurate, extensible and flexible logical data models, and data visualization tools (e.g. 
Apptio BI, PowerBI) + Ability to write efficient SQL queries to extract and manipulate data from relational databases, data warehouses and batch processing systems + Experience in data quality and QA testing methodologies + Fluent in data risk, management, and compliance terminology and best practices + Proven track record for managing large, complex ecosystems with multiple stakeholders + Self-starter who is able to problem-solve effectively, organize and document processes, and prioritize features with limited guidance + An enterprise mindset that connects the dots across various requirements and the broader operations/infrastructure data architecture landscape + Excellent collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-team communication + Understanding of complex software delivery including build, test, deployment, and operations; conversant in AI, Data Science, and Business Intelligence concepts and technology stack + Exposure to distributed (multi-tiered) systems, algorithms, IT asset management, cloud services, and relational databases + Foundational Public Cloud (AWS, Google, Microsoft) certification; advanced Public Cloud certifications a plus + Experience working in technology business management, technology infrastructure or data visualization teams a plus + Experience with design and coding across multiple platforms and languages a plus + Bachelor's Degree in computer science, computer science engineering, data engineering, or related field required; advanced degree preferred Salary Range: $78,000.00 to $124,750.00 annually + bonus + benefits The above represents the expected salary range for this job requisition. Ultimately, in determining your pay, we'll consider your location, experience, and other job-related factors. We back you with benefits that support your holistic well-being so you can be and deliver your best. 
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: + Competitive base salaries + Bonus incentives + 6% Company Match on retirement savings plan + Free financial coaching and financial well-being support + Comprehensive medical, dental, vision, life insurance, and disability benefits + Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need + 20 weeks paid parental leave for all parents, regardless of gender, offered for pregnancy, adoption or surrogacy + Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) + Free and confidential counseling support through our Healthy Minds program + Career development and training opportunities For a full list of Team Amex benefits, visit our Colleague Benefits Site . American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. American Express will consider for employment all qualified applicants, including those with arrest or conviction records, in accordance with the requirements of applicable state and local laws, including, but not limited to, the California Fair Chance Act, the Los Angeles County Fair Chance Ordinance for Employers, and the City of Los Angeles' Fair Chance Initiative for Hiring Ordinance. For positions covered by federal and/or state banking regulations, American Express will comply with such regulations as it relates to the consideration of applicants with criminal convictions. We back our colleagues with the support they need to thrive, professionally and personally. 
That's why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. Depending on role and business needs, colleagues will either work onsite, in a hybrid model (combination of in-office and virtual days) or fully virtually. US Job Seekers - Click to view the "Know Your Rights" poster. If the link does not work, you may access the poster by copying and pasting the following URL in a new browser window: *************************** Employment eligibility to work with American Express in the United States is required as the company will not pursue visa sponsorship for these positions. **Job:** Technologies **Primary Location:** US-Arizona-Phoenix **Other Locations:** US-New York-New York **Schedule** Full-time **Req ID:** 25022269
    $78k-124.8k yearly 15d ago
  • Data Scientist, Product Analytics

    Meta 4.8company rating

    Data engineer job in Phoenix, AZ

    As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond. Product leadership: You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem. Analytics: You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them. Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner. **Required Skills:** Data Scientist, Product Analytics Responsibilities: 1. 
Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches 2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses 3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends 4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations 5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions **Minimum Qualifications:** 6. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience 7. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent 8. 4+ years of work experience in analytics, data querying languages such as SQL, scripting languages such as Python, and/or statistical mathematical software such as R (minimum of 2 years with a Ph.D.) 9. 4+ years of experience solving analytical problems using quantitative approaches, understanding ecosystems, user behaviors & long-term product trends, and leading data-driven projects from definition to execution (including defining metrics, experiment design, and communicating actionable insights) **Preferred Qualifications:** 10. Master's or Ph.D. degree in a quantitative field **Public Compensation:** $145,000/year to $204,000/year + bonus + equity + benefits **Industry:** Internet **Equal Opportunity:** Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. 
We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment. Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
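
The experiment-measurement work this role describes, comparing a key product metric between two variants, can be sketched with a two-proportion z-test; this is a minimal standard-library Python illustration with hypothetical sample counts, not Meta's internal tooling:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z-test for a difference in conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, computed via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5% baseline conversion vs 6% in the test group
z, p_value = two_proportion_ztest(500, 10_000, 600, 10_000)
# z ≈ 3.1, p_value ≈ 0.002: the observed lift is unlikely to be noise
```

In practice this is only the final step; defining the metric, sizing the experiment, and checking for novelty or network effects are the harder parts of the job.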
    $145k-204k yearly 60d+ ago

Learn more about data engineer jobs

How much does a data engineer earn in Mesa, AZ?

The average data engineer in Mesa, AZ earns between $69,000 and $129,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Mesa, AZ

$94,000

What are the biggest employers of Data Engineers in Mesa, AZ?

The biggest employers of Data Engineers in Mesa, AZ are:
  1. PF Carrus LLC
  2. Bank of America
  3. Agap Technologies
  4. Rise Family