Data Scientist
Data engineer job in Phoenix, AZ
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations (see the sketch after this list)
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
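Below is a minimal sketch of the item-level pricing workflow this posting describes: fit a demand model on sales data, then pick the price that maximizes expected revenue per item and store. The data file, column names, features, and the gradient-boosting choice are illustrative assumptions, not the employer's actual model.

```python
# Hedged sketch: demand model + grid search for a recommended price.
# All file, column, and feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("sales_extract.csv")  # hypothetical weekly sales extract
features = ["price", "item_cost", "store_traffic", "competitor_price"]
model = GradientBoostingRegressor().fit(df[features], df["units_sold"])

def recommend_price(row: pd.Series, grid: np.ndarray) -> float:
    """Score a grid of candidate prices, return the revenue-maximizing one."""
    candidates = pd.DataFrame([row[features]] * len(grid))
    candidates["price"] = grid
    demand = model.predict(candidates[features])
    return float(grid[np.argmax(grid * demand)])

grid = np.linspace(1.0, 20.0, 50)  # hypothetical candidate price range
print(recommend_price(df.iloc[0], grid))
```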
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments
Senior Data Engineer
Data engineer job in Phoenix, AZ
Job Title: Sr. Data Engineer
Job Type: Full Time
Compensation: $130,000 - $150,000 D.O.E.
This position is eligible for medical, dental, vision, and life insurance coverage, as well as PTO.
ROLE OVERVIEW
The Senior Data Engineer is responsible for designing, building, and maintaining scalable data platforms that support analytics, reporting, and advanced data-driven initiatives. This is a hands-on engineering role focused on developing reliable, high-performing data solutions while contributing to architectural standards, data quality, and governance practices.
The ideal candidate has strong experience with modern data architectures, data modeling, and pipeline development, and is comfortable collaborating across technical and business teams to deliver trusted, production-ready datasets.
KEY RESPONSIBILITIES
Design and maintain data models across analytical and operational use cases to support reporting and advanced analytics.
Build and manage data pipelines that ingest, transform, and deliver structured and unstructured data at scale.
Contribute to data governance practices, including data quality controls, metadata management, lineage, and stewardship.
Develop and maintain cloud-based data platforms, including data lakes, analytical stores, and curated datasets.
Implement and optimize batch and near-real-time data ingestion and transformation processes.
Support data migration and modernization efforts while ensuring accuracy, performance, and reliability.
Partner with analytics, engineering, and business teams to understand data needs and deliver high-quality solutions.
Enable reporting and visualization use cases by providing clean, well-structured datasets for downstream tools.
Apply security, privacy, and compliance best practices throughout the data lifecycle.
Establish standards for performance tuning, scalability, reliability, and maintainability of data solutions.
Implement automation, testing, and deployment practices to improve data pipeline quality and consistency.
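As a concrete illustration of the automation-and-testing item above, here is a minimal pytest-style check for a pipeline transform. The transform and its columns are hypothetical stand-ins, not part of this posting.

```python
# Sketch of an automated data-quality test for a pipeline transform.
import pandas as pd

def dedupe_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Example transform: keep the latest record per order_id."""
    return (raw.sort_values("updated_at")
               .drop_duplicates("order_id", keep="last"))

def test_dedupe_orders_keeps_latest():
    raw = pd.DataFrame({
        "order_id": [1, 1, 2],
        "updated_at": ["2024-01-01", "2024-01-02", "2024-01-01"],
        "amount": [10.0, 12.0, 5.0],
    })
    out = dedupe_orders(raw)
    assert out["order_id"].is_unique  # no duplicate keys downstream
    assert out.loc[out.order_id == 1, "amount"].item() == 12.0  # latest kept
```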
QUALIFICATIONS
Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent professional experience.
5+ years of experience in data engineering or related roles.
Strong hands-on experience with:
Data modeling, schema design, and pipeline development
Cloud-based data platforms and services
Data ingestion, transformation, and optimization techniques
Familiarity with modern data architecture patterns, including lakehouse-style designs and governance frameworks.
Experience supporting analytics, reporting, and data science use cases.
Proficiency in one or more programming languages commonly used in data engineering (e.g., Python, SQL, or similar).
Solid understanding of data structures, performance optimization, and scalable system design.
Experience integrating data from APIs and distributed systems.
Exposure to CI/CD practices and automated testing for data workflows.
Familiarity with streaming or event-driven data processing concepts preferred.
Experience working in Agile or iterative delivery environments.
Strong communication skills with the ability to document solutions and collaborate across teams.
Sr. Big Data Engineer
Data engineer job in Scottsdale, AZ
Sr. Big Data Developer
Scottsdale, AZ
Must have:
10-12 years of experience
Strong experience in Scala, Spark, Hive SQL, Hadoop, and Kafka
Proficiency in Hive and SQL optimization.
Understanding of distributed systems and big data architecture.
Knowledge of streaming frameworks (Spark Streaming, Kafka Streams) - see the sketch after this list
Good to have - Aerospike experience
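A minimal Structured Streaming sketch of the Spark-plus-Kafka pattern this posting calls for. The role is Scala-centric, but the PySpark API shown here is structurally identical to the Scala one; the broker address, topic, and output paths are hypothetical.

```python
# Read a Kafka topic with Spark Structured Streaming and land it as
# Parquet. Requires the spark-sql-kafka connector package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
          .option("subscribe", "events")                     # hypothetical topic
          .load()
          .select(col("key").cast("string"), col("value").cast("string")))

# Checkpointing makes the job restartable from where it left off.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/events")
         .option("checkpointLocation", "/chk/events")
         .start())
query.awaitTermination()
```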
Data Governance Engineer
Data engineer job in Phoenix, AZ
Job Title: Data Governance Engineer
Phoenix, AZ - Complete Onsite
Full-Time Permanent
Experience Required - 6+ Years
Must Have Technical/Functional Skills
Understanding of, and prior experience with, Data Management and Data Governance concepts (metadata, lineage, data quality, etc.).
2 - 5 years of Data Quality Management experience.
Intermediate competency in SQL & Python or related programming language.
Strong familiarity with data architecture and/or data modeling concepts
2 - 5 years of experience with Agile or SAFe project methodologies
Roles & Responsibilities
Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.
Identify data quality issues, perform root-cause analysis of those issues, and drive remediation of audit and regulatory feedback.
Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business.
Responsible for holistic platform data quality monitoring, including but not limited to critical data elements (see the sketch after this list).
Collaborate with and influence product managers to ensure all new use cases are managed according to policies.
Influence and contribute to strategic improvements to data assessment processes and analytical tools.
Responsible for monitoring data quality issues, communicating issues, and driving resolution.
Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.
Subject matter expertise on multiple platforms.
Partner with the Data Steward Manager in developing and managing the data compliance roadmap.
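One concrete form the critical-data-element (CDE) monitoring above could take: compute null rates for designated CDEs and flag threshold breaches. The table, columns, and thresholds below are hypothetical assumptions, not this employer's actual controls.

```python
# Hedged sketch of CDE null-rate monitoring with pandas.
import pandas as pd

CDE_THRESHOLDS = {"customer_id": 0.0, "ssn_hash": 0.001, "open_date": 0.01}

def check_cde_null_rates(df: pd.DataFrame) -> list[str]:
    """Return a message for each CDE whose null rate exceeds its threshold."""
    issues = []
    for col_name, max_null_rate in CDE_THRESHOLDS.items():
        rate = df[col_name].isna().mean()
        if rate > max_null_rate:
            issues.append(f"{col_name}: null rate {rate:.4f} exceeds {max_null_rate}")
    return issues

snapshot = pd.read_parquet("accounts_snapshot.parquet")  # hypothetical extract
for issue in check_cde_null_rates(snapshot):
    print("DQ ALERT:", issue)  # in practice, route to an issue tracker
```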
Generic Managerial Skills, If any
Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions.
Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals; accountable for bringing in ideas, information, suggestions, and expertise from others inside and outside the immediate team.
Communication: Influences and holds others accountable and has ability to convince others. Identifies the specific data governance requirements and is able to communicate clearly and in a compelling way.
Data Engineer
Data engineer job in Phoenix, AZ
Hi,
We have a job opportunity for a Data Engineer Analyst role.
Data Analyst / Data Engineer
Expectations: Our project is data analysis heavy, and we are looking for someone who can grasp business functionality and translate that into working technical solutions.
Job location: Phoenix, Arizona.
Type - Hybrid model (3 days a week in office)
Job Description: Data Analyst / Data Engineer (6+ years of relevant experience with the required skill set)
Summary:
We are seeking a Data Analyst Engineer with a minimum of 6 years in data engineering, data analysis, and data design. The ideal candidate will have strong hands-on expertise in Python and relational databases such as Postgres, SQL Server, or MySQL, along with a good understanding of data modeling theory and normalization forms.
Required Skills:
6+ years of experience in data engineering, data analysis, and data design
Describe your approach to data analysis in your previous/current role, and the methods or techniques you used to extract insights from large datasets
Good proficiency in Python
Do you have any formal training or education in data modeling? If so, please provide details about the course, program, or certification you completed, including when you received it.
Strong experience with relational databases: Postgres, SQL Server, or MySQL.
What are the essential factors that contribute to a project's success, and how do you plan to leverage your skills and expertise to ensure our project meets its objectives?
Expertise in writing complex SQL queries and optimizing database performance
Solid understanding of data modeling theory and normalization forms.
Good communicator with the ability to articulate business problems and translate them into technical solutions.
Key Responsibilities:
Analyze complex datasets to derive actionable insights and support business decisions.
Model data solutions for high performance and reliability.
Work extensively with Python for data processing and automation.
Develop and optimize SQL queries for Postgres, SQL Server, or MySQL databases (see the sketch after this list).
Ensure data integrity, security, and compliance across all data solutions.
Collaborate with cross-functional teams to understand data requirements and deliver solutions.
Communicate effectively with stakeholders and articulate business problems to drive technical solutions.
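A small sketch of the Postgres query work described above, using psycopg2 with a parameterized query. The connection details, table, and columns are hypothetical.

```python
# Parameterized Postgres query via psycopg2; the driver handles
# quoting, and the database can reuse the query plan across calls.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="analytics", user="etl")
with conn, conn.cursor() as cur:  # conn context manager commits on success
    cur.execute(
        """
        SELECT region, date_trunc('month', order_ts) AS month, SUM(amount)
        FROM orders
        WHERE order_ts >= %s
        GROUP BY region, month
        ORDER BY month
        """,
        ("2024-01-01",),
    )
    for region, month, total in cur.fetchall():
        print(region, month, total)
conn.close()
```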
Secondary Skills:
Experience deploying applications in Kubernetes.
API development using FastAPI or Django.
Familiarity with containerization (Docker) and CI/CD tools.
Regards,
Suhas Gharge
Oracle Cloud Data Engineer
Data engineer job in Phoenix, AZ
Hiring: Oracle Cloud Data Engineer / Technology Lead
We're looking for a hands-on Oracle Cloud Data Engineer (Technology Lead) to drive OCI-based data engineering and Power BI analytics initiatives. This role combines technical leadership with active development in a high-impact data program.
Location: Phoenix, AZ (Hybrid)
Duration: 6+ Months (Contract)
Work Authorization: USC & Green Card holders ONLY (Strict Requirement)
Job Summary
This role focuses on building scalable data pipelines on Oracle Cloud Infrastructure (OCI) while leading Power BI dashboard and reporting development. You'll apply Medallion Architecture, enforce data governance, and collaborate closely with business stakeholders. Utility industry experience is a strong plus.
Must-Have (Non-Negotiable) Skills
8-10 years of experience in Data Engineering & Business Intelligence
3+ years of hands-on OCI experience
Strong expertise in OCI Data Services, including:
OCI Data Integration, OCI Data Flow, OCI Streaming
Autonomous Data Warehouse, Oracle Exadata, OCI Object Storage
Hands-on experience with Medallion Architecture (Bronze, Silver, Gold layers)
Power BI expertise: dashboards, reports, DAX, Power Query, data modeling, RLS
Strong coding skills in SQL, PL/SQL, Python
Experience with Terraform, Ansible, and CI/CD pipelines
Bachelor's or Master's degree in a related field
Power BI Certification - Required
Hands-on development is mandatory
Key Responsibilities
Design and implement secure, scalable OCI data pipelines (see the sketch after this list)
Lead Power BI dashboard and reporting development
Build inbound/outbound integration patterns (APIs, files, streaming)
Implement Audit, Balance, and Control (ABC) frameworks
Ensure data quality, governance, lineage, and monitoring
Mentor engineers and BI developers
Drive agile delivery and stakeholder collaboration
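One way the Bronze layer of the Medallion pattern above could look on OCI: land the raw extract in an Object Storage bucket via the OCI Python SDK, then let downstream jobs promote it. This is a sketch under stated assumptions; the bucket, object, and file names are hypothetical, and config is read from the standard ~/.oci/config.

```python
# Hedged sketch: land a raw daily extract into a hypothetical Bronze
# bucket on OCI Object Storage.
import oci

config = oci.config.from_file()  # default profile in ~/.oci/config
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

with open("daily_extract.csv", "rb") as f:  # hypothetical source file
    client.put_object(
        namespace_name=namespace,
        bucket_name="bronze-raw",            # hypothetical Bronze bucket
        object_name="sales/2024-06-01.csv",  # partitioned by load date
        put_object_body=f,
    )
# Downstream OCI Data Integration / Data Flow jobs would promote this
# object into Silver (cleaned) and Gold (curated) tables.
```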
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* or call us at *****************
Data Engineer
Data engineer job in Phoenix, AZ
Hybrid - 2-3 days on site
Phoenix, AZ
We're looking for a Data Engineer to help build the cloud-native data pipelines that power critical insights across our organization. You'll work with modern technologies, solve real-world data challenges, and support analytics and reporting systems that drive smarter decision-making in the transportation space.
What You'll Do
Build and maintain data pipelines using Databricks, Azure Data Factory, and Microsoft Fabric
Implement incremental and real-time ingestion using medallion architecture (see the sketch after this list)
Develop and optimize complex SQL and Python transformations
Support legacy platforms (SSIS, SQL Server) while contributing to modernization efforts
Troubleshoot data quality and integration issues
Participate in proof-of-concepts and recommend technical solutions
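A minimal sketch of the incremental-ingestion item above using Databricks Auto Loader into a bronze Delta table (the first medallion layer). The paths and table name are hypothetical, and `spark` is the session Databricks provides in a notebook or job.

```python
# Auto Loader picks up only new files in the landing zone on each run.
bronze = (spark.readStream
          .format("cloudFiles")                              # Auto Loader
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/chk/schema/events")
          .load("/landing/events/"))                         # hypothetical drop zone

(bronze.writeStream
       .option("checkpointLocation", "/chk/bronze/events")
       .trigger(availableNow=True)                           # incremental batches
       .toTable("bronze.events"))                            # hypothetical table
```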
What You Bring
5+ years designing and building data solutions
Strong SQL and Python skills
Experience with ETL pipelines and Data Lake architecture
Ability to collaborate and adapt in a fast-moving environment
Preferred: Azure services, cloud ETL tools, Power BI/Tableau, event-driven systems, NoSQL databases
Bonus: Experience with Data Science or Machine Learning
Benefits
Medical, dental, and vision from day one · PTO & holidays · 401(k) with match · Lifestyle account · Tuition reimbursement · Voluntary benefits · Employee Assistance Program · Well-being & culture programs · Professional development support
Senior Data Engineer (PySpark / Python) (Only USC or GC on W2)
Data engineer job in Phoenix, AZ
Job Title: Senior Data Engineer (PySpark / Python)
Employment Type: Contract
Must Have Skills
PySpark, Python development, data engineering
Hands-on knowledge of PySpark, Hadoop, and Python
GitHub; backend API integration knowledge (JSON, REST) - see the sketch at the end of this posting
Certifications Needed: No (GCP certification is good to have)
Top 3 responsibilities the subcontractor is expected to shoulder and execute
Individual contributor
Strong development experience, including leading a development module
Work with client directly
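A sketch of the REST/JSON integration the must-have list mentions: pull JSON from a backend API and load it into a Spark DataFrame. The URL and the flat response shape are hypothetical assumptions.

```python
# Fetch JSON over REST and land it via PySpark.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-ingest").getOrCreate()

resp = requests.get("https://api.example.com/v1/records", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumed: a list of flat JSON objects

df = spark.createDataFrame(records)
df.printSchema()
df.write.mode("overwrite").parquet("/data/api_records")  # hypothetical path
```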
AI Data Engineer
Data engineer job in Phoenix, AZ
Echelix is a leading AI consulting company helping businesses design, build, and scale intelligent systems. We partner with organizations to make artificial intelligence practical, powerful, and easy to adopt. Our team blends deep technical skill with real-world business sense to deliver AI that drives measurable results.
The Role
We're looking for a Senior Data Engineer to architect, optimize, and manage database systems that power AI-driven solutions and enterprise applications. You'll lead the design of scalable, secure, and high-performance data infrastructure across cloud platforms, ensuring our clients' data foundations are built for the future.
This role is ideal for database professionals who have evolved beyond traditional DBA work into cloud-native architectures, API-driven data access layers, and modern DevOps practices. You'll work with cutting-edge technologies like GraphQL, Hasura, and managed cloud databases while mentoring engineers on data architecture best practices.
What You'll Do
Design, tune, and manage PostgreSQL, SQL Server, and cloud-managed databases (AWS RDS/Aurora, Azure SQL Database/Cosmos DB)
Architect and implement GraphQL APIs using Hasura or equivalent technologies for real-time data access (see the sketch after this list)
Lead cloud database migrations and deployments across AWS and Azure environments
Automate database CI/CD pipelines using tools like GitHub Actions, Azure DevOps, or AWS CodePipeline
Develop and maintain data access layers and APIs that integrate with AI and application workloads
Monitor, secure, and optimize database performance using cloud-native tools (AWS CloudWatch, Azure Monitor, Datadog)
Implement database security best practices including encryption, access controls, and compliance requirements
Mentor engineers on database design, data modeling, and architecture best practices
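As a small illustration of the GraphQL/Hasura item above, here is how a Python client might query a Hasura-style endpoint. The endpoint URL, admin secret, and table schema are hypothetical; Hasura does expose tracked tables as root query fields of this shape.

```python
# Query a hypothetical Hasura GraphQL endpoint over HTTP.
import requests

HASURA_URL = "https://hasura.example.com/v1/graphql"  # hypothetical
HEADERS = {"x-hasura-admin-secret": "<secret>"}       # hypothetical auth

query = """
query RecentUsers($limit: Int!) {
  users(order_by: {created_at: desc}, limit: $limit) {
    id
    email
  }
}
"""

resp = requests.post(
    HASURA_URL,
    json={"query": query, "variables": {"limit": 10}},
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["users"])
```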
Requirements
5+ years of experience designing and managing production database systems
Deep expertise in PostgreSQL and SQL Server, including performance tuning and query optimization
Hands-on experience with cloud database services (AWS RDS, Aurora, Azure SQL Database, Azure Cosmos DB)
Experience with GraphQL and API development, preferably with Hasura or similar platforms
Strong background in database CI/CD automation and Infrastructure as Code (Terraform, CloudFormation, Bicep)
Proficiency in scripting languages (Python, Bash) for automation and tooling
Solid understanding of data modeling, schema design, and database normalization
Strong communication and mentoring skills
US citizen and must reside in the United States
Nice to Have
Experience with NoSQL databases (MongoDB, DynamoDB, Redis)
Knowledge of data streaming platforms (Kafka, AWS Kinesis, Azure Event Hubs)
Experience with data warehousing solutions (Snowflake, Redshift, Azure Synapse)
Background in AI/ML data pipelines and feature stores
Relevant certifications (AWS Database Specialty, Azure Database Administrator, PostgreSQL Professional)
Why Join Echelix
You'll join a fast-moving team that's shaping how AI connects people and data. We value curiosity, precision, and practical innovation. You'll work on real projects with real impact, not just proofs of concept.
Data Engineer (GIS)
Data engineer job in Scottsdale, AZ
About the Role
We're partnering with a large, operations-focused organization to hire a Data Engineer (GIS) to support analytics initiatives within their operations function. This role applies geospatial data and advanced analytics to help improve operational efficiency, service reliability, and planning decisions.
The work is highly analytical and engineering-focused, with models built directly in Snowflake and used as inputs into downstream optimization and planning systems.
What You'll Work On
Geospatial Modeling & Time Estimation
Develop data-driven models to estimate operational timing across different service and facility interactions
Leverage GPS data and geofencing techniques to understand behavior across locations
Incorporate contextual variables such as:
Geography and location characteristics
Customer and service attributes
Site complexity and external conditions (e.g., weather, time-based patterns)
Produce reliable, explainable time estimates that support planning and decision-making
Facility & Location Analytics
Model turnaround and processing time across different types of locations
Analyze performance variability based on operational and environmental factors
Apply polygon- and radius-based geofencing to capture location-specific behavior
Quantify how conditions impact operational flow and timing outcomes
Technical Environment
Primary development and modeling in Snowflake
Build and engineer transformations and analytical processes directly in Snowflake
Modeling approaches may include (see the SQL sketch after these lists):
Percentile-based time estimates
Aggregations such as averages and medians by service and location attributes
Data sources include:
Latitude/longitude data
High-frequency GPS signals
Location and facility reference data
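A sketch of the percentile-based time estimates described above, run directly in Snowflake from Python. The tables, columns, and the one-visit-per-vehicle/facility/day simplification are hypothetical assumptions; geofences are assumed to be stored as GEOGRAPHY polygons, queried with Snowflake's geospatial functions.

```python
# Geofence GPS pings and compute p50/p90 dwell time per facility.
import snowflake.connector

SQL = """
WITH pings_in_fence AS (
    SELECT p.vehicle_id, f.facility_id, p.ping_ts
    FROM gps_pings p
    JOIN facility_geofences f
      ON ST_CONTAINS(f.fence, ST_MAKEPOINT(p.lon, p.lat))
),
visits AS (  -- simplification: one visit per vehicle/facility/day
    SELECT vehicle_id, facility_id,
           DATEDIFF('second', MIN(ping_ts), MAX(ping_ts)) AS dwell_sec
    FROM pings_in_fence
    GROUP BY vehicle_id, facility_id, ping_ts::DATE
)
SELECT facility_id,
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY dwell_sec) AS p50_dwell_sec,
       PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY dwell_sec) AS p90_dwell_sec
FROM visits
GROUP BY facility_id
"""

conn = snowflake.connector.connect(account="<acct>", user="<user>", password="<pw>")
for facility_id, p50, p90 in conn.cursor().execute(SQL):
    print(facility_id, p50, p90)
```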
What We're Looking For
Strong hands-on experience with Snowflake
Advanced SQL skills
Python for analytics and data engineering
Solid understanding of core GIS concepts, including:
Spatial joins
Polygons
Geofencing
Experience with traditional GIS tools (e.g., ArcGIS) is a plus, but this is not a cartography or visualization-focused role
Background in geospatial data engineering and modeling is key
Interview Process
Two one-hour video interviews
Data Engineer
Data engineer job in Tempe, AZ
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables (see the sketch after this list)
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
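A minimal Delta Live Tables (DLT) sketch in the spirit of the items above: declarative bronze-to-silver tables with an inline data-quality expectation. The source path and columns are hypothetical; this runs inside a Databricks DLT pipeline, where `spark` and the `dlt` module are provided.

```python
# Declarative bronze -> silver pipeline with a DLT expectation.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders landed as-is (bronze).")
def orders_bronze():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/landing/orders/"))          # hypothetical drop zone

@dlt.table(comment="Cleaned orders (silver).")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # DQ rule enforced inline
def orders_silver():
    return (dlt.read_stream("orders_bronze")
            .select(col("order_id"),
                    col("amount").cast("double"),
                    col("order_ts")))
```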
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Architect
Data engineer job in Phoenix, AZ
Akkodis is seeking a Data Architect local to Phoenix, AZ who can come onsite 3 days a week. If you are interested, please apply!
JOB TITLE: Data Architect
EMPLOYMENT TYPE:
24+ month Contract | 3 days/week on site
Pay: $80 - $96/hr
Key Responsibilities
ETL design and development for enterprise data solutions.
Design and build databases, data warehouses, and strategies for data acquisition, archiving, and recovery.
Review new data sources for compliance with standards.
Provide technical leadership, set standards, and mentor junior team members.
Collaborate with business stakeholders to translate requirements into scalable solutions.
Guide teams on Azure data tools (Data Factory, Synapse, Data Lake, Databricks).
Establish best practices for database design, data integration, and data governance.
Ensure solutions are secure, high-performing, and easy to support.
Essential Skills & Experience
Bachelor's degree in Computer Science, Information Systems, or equivalent experience.
10+ years with Microsoft SQL technologies.
3+ years with cloud-based solutions (Azure preferred).
Strong knowledge of ETL, data modeling, and data warehousing.
Experience with source control, change/release management, and documentation.
Excellent communication and leadership skills.
Preferred
Retail or grocery industry experience.
Familiarity with Power BI and MDM principles.
Work Schedule
Hybrid: 3 days onsite in Phoenix, AZ; 2 days remote.
Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits, and 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State or local law; and Holiday pay upon meeting eligibility criteria.
Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs which are direct hire to a client.
Data Governance Engineer
Data engineer job in Phoenix, AZ
Role: Data Governance Engineer
Experience Required - 6+ Years
The must-have technical/functional skills, roles and responsibilities, and managerial skills for this role are identical to the Data Governance Engineer posting above.
Interested candidates, please share your updated resume to *******************
Salary Range - $100,000 to $120,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Architect
Data engineer job in Phoenix, AZ
The Senior Data Engineer & Test in Phoenix (85029) will play a pivotal role in delivering major data engineering initiatives within the Data & Advanced Analytics space. This position requires hands-on expertise in building, deploying, and maintaining robust data pipelines using Python, PySpark, and Airflow, as well as designing and implementing CI/CD processes for data engineering projects.
Key Responsibilities
1. Data Engineering: Design, develop, and optimize scalable data pipelines using Python and PySpark for batch and streaming workloads.
2. Workflow Orchestration: Build, schedule, and monitor complex workflows using Airflow, ensuring reliability and maintainability.
3. CI/CD Pipeline Development: Architect and implement CI/CD pipelines for data engineering projects using GitHub, Docker, and cloud-native solutions.
4. Testing & Quality: Apply test-driven development (TDD) practices and automate unit/integration tests for data pipelines.
5. Secure Development: Implement secure coding best practices and design patterns throughout the development lifecycle.
6. Collaboration: Work closely with Data Architects, QA teams, and business stakeholders to translate requirements into technical solutions.
7. Documentation: Create and maintain technical documentation, including process/data flow diagrams and system design artifacts.
8. Mentorship: Lead and mentor junior engineers, providing guidance on coding, testing, and deployment best practices.
9. Troubleshooting: Analyze and resolve technical issues across the data stack, including pipeline failures and performance bottlenecks.
10. Cross-Team Knowledge Sharing: Cross-train team members outside the project team (e.g., operations support) for full knowledge coverage. The role includes all of the above skills, plus the following:
· Minimum of 10+ years of overall IT experience
· Experience with waterfall, iterative, and agile methodologies
Technical Requirements:
1. Hands-on Data Engineering: Minimum 5+ years of practical experience building production-grade data pipelines using Python and PySpark.
2. Airflow Expertise: Proven track record of designing, deploying, and managing Airflow DAGs in enterprise environments (see the DAG sketch after this list).
3. CI/CD for Data Projects: Ability to build and maintain CI/CD pipelines for data engineering workflows, including automated testing and deployment.
4. Cloud & Containers: Experience with containerization (Docker) and cloud platforms (GCP) for data engineering workloads. Appreciation for twelve-factor design principles.
5. Python Fluency: Ability to write object-oriented Python code, manage dependencies, and follow industry best practices.
6. Version Control: Proficiency with Git for source code management and collaboration (commits, branching, merging, GitHub/GitLab workflows).
7. Unix/Linux: Strong command-line skills in Unix-like environments.
8. SQL: Solid understanding of SQL for data ingestion and analysis.
9. Collaborative Development: Comfortable with code reviews, pair programming, and using remote collaboration tools effectively.
10. Engineering Mindset: Writes code with an eye for maintainability and testability; excited to build production-grade software.
11. Education: Bachelor's or graduate degree in Computer Science, Data Analytics, or a related field, or equivalent work experience.
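A minimal Airflow DAG sketch matching the requirements above: an ingest task followed by a transform, scheduled daily. The DAG id and task bodies are hypothetical placeholders for real extraction and Spark-job triggers.

```python
# Two-task daily DAG with an explicit dependency.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull source data")       # stand-in for real extraction

def transform():
    print("run PySpark transform")  # stand-in for a spark job trigger

with DAG(
    dag_id="daily_pipeline",        # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2                        # transform runs only after ingest succeeds
```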
ServiceNow IRM Engineer
Data engineer job in Phoenix, AZ
ServiceNow IRM (Integrated Risk Management) Engineer
Circa $160,000
Experience Required - 5+ Years
Must Have Technical/Functional Skills
ServiceNow IRM module implementation experience is a must
Must have worked on ServiceNow IRM CMDB and CSDM capabilities
Good to have: JavaScript, Angular
Preferred Qualifications/Certifications:
ServiceNow Certified System Administrator (CSA).
ServiceNow Certified Implementation Specialist - Risk and Compliance (GRC CIS).
ServiceNow Certified Application Developer (CAD).
Roles & Responsibilities
We are seeking a highly skilled and experienced ServiceNow IRM Developer to join our team. In this role, the associate will be instrumental in designing, developing, implementing, and maintaining robust Integrated Risk Management (IRM) solutions on the ServiceNow platform. The associate will work closely with business stakeholders, risk managers, compliance officers, and audit teams to translate business requirements into technical solutions that enhance our organization's risk posture and ensure regulatory adherence.
Benefits:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Please note: we cannot offer sponsorship.
If you feel this is a good fit and would like to find out more, I look forward to receiving your application.
Principal Software Engineer - Digital Banking
Data engineer job in Phoenix, AZ
Principal Software Engineer - Digital Banking
COMPENSATION: $160,000 base salary + 15% bonus and $15k in stock (LTI plan)
BENEFITS: Annual Bonus, Medical, Dental, Vision, PTO, 401(k), Health Savings Account, Disability & Life Insurance, Tuition Assistance, Parental Leave, LTI Plan, Employee Wellness
EMPLOYMENT TERMS: Direct-Hire/Permanent
SUMMARY & OVERVIEW:
The role is part of the Digital Banking team, specifically working on the "Temenos" Digital Account Onboarding (DAO) platform and technologies. As a Principal Engineer I, you'll provide SME expertise in your respective domain as well as adjacent domains to ensure solutions are safe, secure, compliant, and reliable. You'll identify development and support needs and take on large and complex design responsibilities supporting project tasks. You'll also engage with project and business sponsors to refine requirements and objectives of targeted solutions. As a Principal Engineer I, you'll also facilitate dialogue and activities and work to ensure team collaboration, including with teams outside of your domain. In this role, you'll also develop technical features while guiding junior engineers.
KEY RESPONSIBILITIES:
Work on the current Temenos DAO platform to integrate, develop new features while enhancing the existing feature in alignment with business requirements and priorities.
Provide production support, timely resolution of incidents, and communication to business stakeholders.
Build the solution design of efforts that can be handed off to lower-level engineers for execution assuring reuse of platforms where possible.
Review technical plans developed by lower-level engineers and analysts to assure quality designs prevail which can support the volumetrics of our business partners objectives.
Build comprehensive measurement dashboards that give performance insight into key applications of the bank which can feed operational results of our business partners.
Work independently or sometimes with architecture team counterparts to lay out the final documentation required for proper ongoing reference of the given solution, including physical and logical layouts with cross reference to use case models while enforcing standards.
REQUIRED QUALIFICATIONS:
8+ years of related experience (recent experience at the “principal/staff-level” or similar).
Experience working on the Temenos Frameworks & Tools (e.g., DAO, Journey Manager, T24, Transact, Banking Cloud, Infinity/Kony, Payments, TIM).
Familiarity with Temenos Exchange Framework (TIF/Transact Integration Framework) for connecting external systems.
Advanced to expert knowledge of applicable regulatory and legal compliance obligations, rules and regulations, industry standards and practices.
Advanced to expert experience in leading cross-functional teams and managing multiple projects simultaneously with an established expertise in one or more key domains of the bank (Deposits, Loans, Operations or Reporting).
Capable of working with regulatory partners like the CFPB, OCC and FRB through audits and collaboration efforts as situations arise.
Advanced to expert familiarity with the capability model across IT and the applications and infrastructures available for engagement in solutioning across the bank.
Experience in the design, enhancement, and compliance to all governance frameworks across the IT organization to ensure proper compliance to published procedures and standards.
Bachelor's degree in related field required.
PREFERRED QUALIFICATIONS:
Previous leadership experience preferred.
Advanced to expert knowledge of general Financial Services or Banking is preferred.
Masters or MBA in related field preferred.
Compensation:
$160,000 per year base salary
Exact compensation may vary based on several factors, including skills, experience, and education.
Benefits include:
-Annual bonus
-Comprehensive medical, dental, vision, life insurance, and disability benefits
-401(k) program
-Health savings account
-Tuition assistance program
-LTI Plan
-Employee wellness program
Java Software Engineer
Data engineer job in Phoenix, AZ
Job Title: Java Developer
Duration: 12 Months
Must Have Skills:
Good Knowledge on Java
Strong communication skill
Should be able to work independently
Detailed Job Description:
Java/J2EE full-stack developer with financial or banking domain experience.
Should be very fluent in communication and able to work on their own without hand-holding.
Should be completely hands on.
Responsibilities:
Good Knowledge on Java
Strong communication skill
Should be able to work independently
Mid Level Software Engineer - Oracle Cloud Apps
Data engineer job in Phoenix, AZ
Why USAA?
At USAA, our mission is to empower our members to achieve financial security through highly competitive products, exceptional service and trusted advice. We seek to be the #1 choice for the military community and their families.
Embrace a fulfilling career at USAA, where our core values - honesty, integrity, loyalty and service - define how we treat each other and our members. Be part of what truly makes us special and impactful.
The Opportunity
As a dedicated Mid Level Software Engineer - Oracle Cloud Apps, you will collaborate closely with the Finance-IT and Accounting teams within USAA's Chief Financial Office (CFO). You will play a key role in Financial Close and Consolidation projects, using Oracle Cloud technologies to enhance financial processes and system efficiencies.
Provides support to the Enterprise through delivering best-in-class technology solutions. Engaged in all phases of the software systems and application development lifecycle, including gathering and analyzing requirements, designing, testing, documenting, and implementing software, and responding to outages.
We offer a flexible work environment that requires an individual to be in the office 4 days per week. This position can be based in one of the following locations: San Antonio, TX, Plano, TX, Phoenix, AZ, or Charlotte, NC. Relocation assistance is not available for this position.
What you'll do:
Design, develop, code, and test complex technical solutions
Investigates and resolves complex application and system technical problems and production issues using problem-solving techniques.
Continually improves operations by conducting complex systems analysis and recommending changes in policies and procedures.
Prepares and installs complex solutions by resolving and designing system specifications, standards, and programming.
Follows the software development lifecycle.
Participates in design reviews and learns key system design principles.
Mentors junior engineers and may begin mentoring peer engineers; Review teammates' code.
Ensures risks associated with business activities are effectively identified, measured, monitored, and controlled in accordance with risk and compliance policies and procedures.
What you have:
Bachelor's Degree or 4 additional years of experience beyond the minimum requirement can be used in lieu of a degree OR Approved certification from CodeUp, Galvanize, VetFIT (Veterans for IT) or eFIT (Employees for IT).
4 years of software development experience demonstrating depth of technical understanding within a specific discipline(s)/technology(s).
2 years of experience delivering technology solutions in all phases of the software systems and application development lifecycle to include leading code/design reviews.
Basic Understanding of one or more of the following: Java, Swift, Objective-C, Cobol, JavaScript, Kotlin, C++, HTML, CSS, SQL, Go, and Python
Developing level of business insight in the areas of business operations, risk management, industry practices and emerging trends.
Experience supporting efforts to address production issues through fixing applications and systems.
Experience articulating technical challenges and solutions.
Basic understanding of cloud technologies and tools.
What sets you apart:
Strong understanding of the Financial & Insurance Industry technical and functional landscape.
Deep knowledge of CFO processes and related business operations.
Validated experience in driving the development and configuration of Oracle Cloud ERP modules (GL, AR, RM, etc.) and Oracle EPM applications (FCCS, EPCM, etc.)
Expertise in Oracle Fusion Cloud Reporting Applications, including: FDI (Fusion Data Intelligence), BI Publisher (BIP), Oracle Analytics Cloud (OAC)
Demonstrated experience in implementing at least two modules across ERP or EPM platforms (Examples: Implemented GL & ARE; Implemented FCCS and PCMCS; or two FCCS implementations.)
Compensation range: The salary range for this position is: $93,770.00 - $179,240.00.
USAA does not provide visa sponsorship for this role. Please do not apply for this role if at any time (now or in the future) you will need immigration support (i.e., H-1B, TN, STEM OPT Training Plans, etc.).
Compensation: USAA has an effective process for assessing market data and establishing ranges to ensure we remain competitive. You are paid within the salary range based on your experience and market data of the position. The actual salary for this role may vary by location.
Employees may be eligible for pay incentives based on overall corporate and individual performance and at the discretion of the USAA Board of Directors.
The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job.
Benefits: At USAA our employees enjoy best-in-class benefits to support their physical, financial, and emotional wellness. These benefits include comprehensive medical, dental and vision plans, 401(k), pension, life insurance, parental benefits, adoption assistance, paid time off program with paid holidays plus 16 paid volunteer hours, and various wellness programs. Additionally, our career path planning and continuing education assists employees with their professional goals.
For more details on our outstanding benefits, visit our benefits page on USAAjobs.com.
Applications for this position are accepted on an ongoing basis, this posting will remain open until the position is filled. Thus, interested candidates are encouraged to apply the same day they view this posting.
USAA is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Java Software Engineer
Data engineer job in Phoenix, AZ
We are seeking a skilled Back-End Senior Java Developer to join our development team. In this role, you will be responsible for designing, building, and maintaining the server-side logic, databases, and APIs of scalable web applications. The ideal candidate will have a strong background in Java development, excellent problem-solving abilities, and a passion for delivering high-performance back-end solutions.
Qualifications
6+ years of hands-on experience in Java development.
Back-End API Development, Full-Stack Development, and Software Development skills
Strong knowledge of Spring or Spring Boot framework for building back-end services.
Strong knowledge of microservices architecture and development.
Experience with RESTful API design and development.
Proficiency in working with databases (e.g., MySQL, PostgreSQL, Cassandra, Couchbase, and MongoDB).
Experience in monitoring tools such as Micrometer, Prometheus, Elastic, Kibana, Grafana & Splunk.
Experience with cloud platforms (AWS or GCP).
Familiarity with version control tools like Git.
Knowledge of security best practices in back-end development.
Experience with Agile methodologies and working in a collaborative, fast-paced environment.
Understanding of containerization and orchestration tools such as Docker or Kubernetes.
Good communication and teamwork skills are a must.
DevOps Engineer
Data engineer job in Chandler, AZ
Build Tools: Proficiency in build automation tools such as Make, Maven, Gradle, or Ant.
Continuous Integration/Continuous Deployment (CI/CD): Experience with CI/CD tools like Jenkins or GitLab CI.
Version Control Systems: Strong knowledge of version control systems, particularly Git, including branching strategies and workflows.
Scripting Languages: Proficiency in scripting languages such as Bash, Python, or Ruby for automating build processes (see the sketch after this list).
Containerization: Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
Static and Dynamic Analysis Tools: Understanding of tools for code quality and security analysis (e.g., SonarQube, Valgrind).
Programming Languages: Knowledge of programming languages relevant to the projects (e.g., C/C++, Python).
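A small sketch of scripted build automation per the scripting item above: run lint, tests, and a Docker build in order, failing fast. The specific commands are hypothetical examples of a project's build steps.

```python
# Run build steps sequentially and propagate the first failure's exit code.
import subprocess
import sys

STEPS = [
    ["make", "lint"],
    ["pytest", "-q"],
    ["docker", "build", "-t", "app:latest", "."],
]

for cmd in STEPS:
    print("+", " ".join(cmd))          # echo each step, shell-style
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)    # fail fast
print("build pipeline OK")
```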
Preferred Qualifications
Experience in managing large data sets.
Parallel Computing: Familiarity with parallel programming models like MPI (Message Passing Interface), OpenMP, and CUDA for GPU-based computing.
Performance Optimization: Skills in profiling and optimizing code for better performance on HPC systems (e.g., using tools like Gprof, Valgrind, or Intel VTune).
Storage Architecture Knowledge: Understanding file systems such as Lustre, GPFS, or HDFS and strategies for efficient data storage and retrieval in HPC environments.
Distributed Computing Tools: Familiarity with frameworks such as Hadoop, Spark, or Dask for handling distributed datasets.
Education and Experience
· A bachelor's degree in Computer Science, Software Engineering, or a related field.
· Experience: Proven experience in software build management, DevOps, or continuous integration roles (typically 3+ years).