Senior Data Engineer - Remote
Data engineer job at Arthur J. Gallagher & Co. Human Resources & Compensation Consulting Practice (formerly Companalysis)
Introduction
Welcome to Gallagher - a global community of people who bring bold ideas, deep expertise, and a shared commitment to doing what's right. We help clients navigate complexity with confidence by empowering businesses, communities, and individuals to thrive.
At Gallagher, you'll find more than a job; you'll find a culture built on trust, driven by collaboration, and sustained by the belief that we're better together. Whether you join us in a client-facing role in our brokerage division or our benefits and HR consulting division, or as part of our corporate team, you'll have the opportunity to grow your career, make an impact, and be part of something bigger.
Experience a workplace where you're encouraged to be yourself, supported to succeed, and inspired to keep learning. That's what it means to live The Gallagher Way.
Overview
JOB DESCRIPTION - SENIOR DATA ENGINEER
Gallagher is a global leader in insurance, risk management and consulting services. We help businesses grow, communities thrive and people prosper. We live a culture defined by The Gallagher Way, our set of shared values and guiding tenets. A culture driven by our people, over 30,000 strong, serving our clients with customized solutions that will protect them and fuel their futures.
Position Summary:
The Senior Data Engineer will be responsible for the design, development, implementation, and support of data initiatives throughout Gallagher, ensuring that optimal data delivery architecture is consistent across ongoing projects. You will support data analysts and data scientists, and the data needs of multiple teams, systems, and products. If the prospect of optimizing, or even re-designing, our company's integration and data architecture to support our next generation of products and data initiatives excites you, we should explore it together.
How you'll make an impact
Essential Duties and Responsibilities
* Drive requirements, scope, and technical design of integration workflows to ensure the build is accurate and to spec. Develop and maintain requirements, design documentation, and test plans.
* Seek out, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
* Coordinate with BI Engineers, Financial Applications and Oracle HR teams around data management including schemas, failure conditions, reconciliation, test data set up, etc.
* Build the infrastructure required for optimal ETL/ELT pipelines to ingest data from a wide variety of data sources using Microsoft Azure technologies such as Azure Data Factory and Databricks (see the PySpark sketch after this list).
* Construct and maintain enterprise-level integrations using the Snowflake platform, Azure Synapse, Azure SQL, and SQL Server.
* Create data tools for data analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
* Design analytics tools that utilize the data pipeline to deliver actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
* Troubleshoot issues, help drive root-cause analysis, and work with infrastructure teams to resolve incidents and arrive at a permanent resolution.
* Partner with data and analytics teams to strive for greater functionality in our data systems.
* Provide direction and coordination for development and support teams, including globally located resources.
* Understand the layout and working of existing integrations that send and receive data between Oracle, Concur, JDE, Corporate Data Platform and other systems.
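To ground the Azure Data Factory/Databricks pipeline duty above, here is a minimal PySpark sketch of an incremental Delta load. The control table, source table, and watermark column are hypothetical placeholders, not Gallagher's actual schema.

```python
# Hedged sketch of an incremental ETL load in Databricks (PySpark + Delta Lake).
# All table and column names are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# Look up the high-water mark recorded by the previous run
# (assumes a bookkeeping table exists and a prior run recorded a mark).
last_mark = (
    spark.table("etl_control.watermarks")
    .where(F.col("source") == "oracle_hr")
    .agg(F.max("loaded_through"))
    .collect()[0][0]
)

# Pull only rows changed since the last run from the staged extract,
# deduplicating on the business key before landing.
changes = (
    spark.table("staging.oracle_hr_employees")
    .where(F.col("updated_at") > F.lit(last_mark))
    .dropDuplicates(["employee_id"])
)

# Append the new slice into the curated Delta table.
changes.write.format("delta").mode("append").saveAsTable("curated.hr_employees")
```

The same watermark pattern applies whether the orchestrator is Azure Data Factory, a Databricks Workflow, or a plain scheduled job.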
About You
Required:
* A relevant technical BS Degree in Information Technology
* 5+ years writing SQL queries against any RDBMS with query optimization
* 5 years of data engineering experience leveraging technologies such as Snowflake, Azure Data Factory, ADLS Gen 2, Logic Apps, Azure Functions, Databricks, Apache Spark, Scala, Synapse, SQL Server
* Experience with scripting tools such as PowerShell, Python, Scala, Java and XML
* Understanding of the pros, cons, and best practices of implementing a data lake using Microsoft Azure Data Lake Storage
* Experience structuring a data lake for reliability, security, and performance
* Experience implementing ETL for Data Warehouse and Business intelligence solutions
* Ability to read and write effective, modular, dynamic, parameterized, and robust code; to establish and follow code standards; and to work within an established ETL framework
* Strong analytical, problem solving, and troubleshooting abilities
* Good understanding of unit testing, software change management, and software release management
* Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code fundamentals
* Experience performing root cause analysis on data and processes to answer specific business questions, and identify opportunities for improvement
* Experience working within an agile team
* Excellent communication skills
Compensation and benefits
We offer a competitive and comprehensive compensation package. The base salary range represents the anticipated low end and high end of the range for this position. The actual compensation will be influenced by a wide range of factors including, but not limited to, previous experience, education, pay market/geography, complexity or scope, specialized skill set, lines of business/practice area, supply/demand, and scheduled hours. On top of a competitive salary, great teams and exciting career opportunities, we also offer a wide range of benefits.
Below are the minimum core benefits you'll receive; depending on your job level, these benefits may improve:
* Medical/dental/vision plans, which start from day one!
* Life and accident insurance
* 401(k) and Roth options
* Tax-advantaged accounts (HSA, FSA)
* Educational expense reimbursement
* Paid parental leave
Other benefits include:
* Digital mental health services (Talkspace)
* Flexible work hours (availability varies by office and job function)
* Training programs
* Gallagher Thrive program - elevating your health through challenges, workshops and digital fitness programs for your overall wellbeing
* Charitable matching gift program
* And more...
The benefits summary above applies to full-time positions. If you are not applying for a full-time position, details about benefits will be provided during the selection process.
We value inclusion and diversity
Inclusion and diversity (I&D) is a core part of our business, and it's embedded into the fabric of our organization. For more than 95 years, Gallagher has led with a commitment to sustainability and to support the communities where we live and work.
Gallagher embraces our employees' diverse identities, experiences and talents, allowing us to better serve our clients and communities. We see inclusion as a conscious commitment and diversity as a vital strength. By embracing diversity in all its forms, we live out The Gallagher Way to its fullest.
Gallagher believes that all persons are entitled to equal employment opportunity and prohibits any form of discrimination by its managers, employees, vendors or customers based on race, color, religion, creed, gender (including pregnancy status), sexual orientation, gender identity (which includes transgender and other gender non-conforming individuals), gender expression, hair expression, marital status, parental status, age, national origin, ancestry, disability, medical condition, genetic information, veteran or military status, citizenship status, or any other characteristic protected (herein referred to as "protected characteristics") by applicable federal, state, or local laws.
Equal employment opportunity will be extended in all aspects of the employer-employee relationship, including, but not limited to, recruitment, hiring, training, promotion, transfer, demotion, compensation, benefits, layoff, and termination. In addition, Gallagher will make reasonable accommodations to known physical or mental limitations of an otherwise qualified person with a disability, unless the accommodation would impose an undue hardship on the operation of our business.
Senior Data Engineer
Charlotte, NC
**NO 3rd Party vendor candidates or sponsorship**
Role Title: Senior Data Engineer
Client: Global construction and development company
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote, based in ET or CT time zones
Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.
Key Responsibilities
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling (see the ingestion sketch after this list).
Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
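As a concrete illustration of the API-heavy ingestion item above, here is a hedged Python sketch of a REST client with bearer auth, retry/backoff on throttling, and cursor pagination. The endpoint, payload shape, and cursor field are assumptions for illustration only, not a specific API's contract.

```python
# Minimal sketch of a resilient REST ingestion helper.
import time
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(token: str) -> requests.Session:
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    # Retry transient failures; urllib3 honors Retry-After on 429 throttling.
    retry = Retry(total=5, backoff_factor=1.0,
                  status_forcelist=[429, 500, 502, 503, 504])
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

def fetch_all(session: requests.Session, url: str):
    """Yield records page by page until the API stops returning a cursor."""
    params = {}
    while True:
        resp = session.get(url, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body["items"]          # hypothetical payload shape
        cursor = body.get("next_cursor")  # hypothetical pagination field
        if not cursor:
            break
        params["cursor"] = cursor
        time.sleep(0.2)  # simple client-side throttle between pages
```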
Requirements:
Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with strong focus on Azure Cloud.
Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
Advanced PySpark/Spark experience for complex transformations and performance optimization.
Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency).
Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
Preferred Skills
Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
Senior Data Engineer
Nashville, TN
Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at ***************
YOUR ROLE
Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights.
You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology and evidence-based clinical care.
As a Senior Data Engineer you will:
Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams (see the Lambda sketch after this list).
Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
Utilize DBT (data build tool) for data transformation and modeling, ensuring data quality and consistency.
Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
Work with SageMaker for machine learning data preparation and integration.
Ensure data security, privacy, and compliance with industry regulations.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
Implement best practices for data engineering, including code reviews, testing, and automation.
Mentor junior data engineers and share knowledge on data engineering best practices.
Stay up-to-date with the latest advancements in data engineering, AWS services, and related technologies.
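To make the AWS orchestration duties above concrete, here is a minimal sketch of an S3-event Lambda handler that starts a Step Functions execution for each landed file. The state machine ARN and bucket layout are hypothetical, not Concert's actual infrastructure.

```python
# Hedged sketch: S3-triggered Lambda kicking off a Step Functions ingest run.
import json
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:ingest"  # hypothetical

def handler(event, context):
    # S3 put notifications carry one record per new object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"status": "started", "count": len(event["Records"])}
```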
After 3 months on the job you will have:
Developed a strong understanding of Concert's data engineering infrastructure
Learned the business domain and how it maps to the information architecture
Made material contributions towards existing key results
After 6 months you will have:
Led a major initiative
Become the first point of contact when issues related to the data warehouse are identified
After 12 months you will have:
Taken responsibility for the long term direction of the data engineering infrastructure
Proposed and executed key results with an understanding of the business strategy
Communicated the business value of major technical initiatives to key non-technical business stakeholders
WHAT LEADS TO SUCCESS
Self-Motivated: A team player with a positive attitude and a proactive approach to problem-solving.
Executes Well: You are biased to action and get things done. You acknowledge unknowns and recover from setbacks well.
Comfort with Ambiguity: You aren't afraid of uncertainty or blazing new trails; you care about building toward a future that is different from today.
Technical Bravery: You are comfortable with new technologies and eager to dive in to understand data in the raw and in its processed states.
Mission-Focused: You are personally motivated to drive more affordable, equitable, and effective integration of genomic technologies into clinical care.
Effective Communication: You build rapport and great working relationships with senior leaders and peers, and use the relationships you've built to drive the company forward.
RELEVANT SKILLS & EXPERIENCE
Minimum of 4 years' experience working as a data engineer
Bachelor's degree in software or data engineering or comparable technical certification / experience
Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
Strong understanding of data warehousing principles and best practices
Experience with DBT for data transformation and modeling.
Proficiency in SQL and at least one programming language (e.g., Python, Scala)
Familiarity or experience with the following tools / concepts are a plus: BI tools such as Metabase; Healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; ZeroETL and Kinesis data streams
COMPENSATION
Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.
LOCATION
Concert is based in Nashville, Tennessee and supports a remote work environment.
For further questions, please contact: ******************.
Data Engineer
Denver, CO
Data Engineer
Compensation: $80 - $90/hour, depending on experience
Inceed has partnered with a great company to help find a skilled Data Engineer to join their team!
Join a dynamic team as a contract Data Engineer, where you'll be the backbone of data-driven operations. This role offers the opportunity to work with a modern tech stack in a hybrid on-prem and cloud environment. You'll design and implement innovative solutions to complex challenges, collaborating with data scientists, location intelligence experts, and ML engineers. This exciting opportunity has opened due to a new project initiative and you'll be making a tangible impact.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from multiple sources ensuring quality and reliability
Develop automation workflows and BI solutions
Mentor others and contribute to the knowledge base
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of experience in data engineering
Experience with large oil and gas datasets
Proficiency in SQL and Python
Hands-on experience in cloud environments (Azure, AWS, or GCP)
Familiarity with Apache Kafka, Apache Flink, or Azure Event Hubs
Nice to Have Skills & Experience:
Experience with Palantir Foundry
Knowledge of query federation platforms
Experience with modern data stack tools like dbt or Airflow
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
Denver, CO
Data Engineer
Compensation: $80 - $90/hour, depending on experience
Inceed has partnered with a great energy company to help find a skilled Data Engineer to join their team!
Join a dynamic team where you'll be at the forefront of data-driven operations. This role offers the autonomy to design and implement groundbreaking data architectures, working primarily remotely. This position is open due to exciting new projects. You'll be collaborating with data scientists and engineers, making impactful contributions to the company's success.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from various sources ensuring consistency and reliability
Develop automation workflows and BI solutions
Mentor others and advise on data process best practices
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of data engineering experience
Experience with PI
Experience with SCADA
Experience with Palantir
Experience with large oil and gas datasets
Proficiency in Python and SQL
Hands-on experience in cloud environments (Azure, AWS, GCP)
Nice to Have Skills & Experience:
Familiarity with Apache Kafka or Flink
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Scientist - Causal Inference and Measurement
Menlo Park, CA
W2 Contract Role (No C2C) - Requires Big-Tech or Core Causal Inference & Measurement Experience
The main function of the Data Scientist is to produce innovative solutions driven by exploratory data analysis from complex and high-dimensional datasets.
Job Responsibilities
Apply knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to prototype development and product improvement.
Use a flexible, analytical approach to design, develop, and evaluate predictive models and advanced algorithms that lead to optimal value extraction from the data.
Generate and test hypotheses and analyze and interpret the results of product experiments.
Work with product engineers to translate prototypes into new products, services, and features and provide guidelines for large-scale implementation.
Provide Business Intelligence (BI) and data visualization support, which includes, but is not limited to, support for the online customer service dashboards and other ad-hoc requests requiring data analysis and visual support.
Key Projects
Methodological questions: optimizing current measurement methods.
Analyzing marketing performance.
Serving as an expert on marketing measurement.
Causal inference in Python.
Candidate Value Proposition
Work at the forefront of marketing measurement research.
Access to state-of-the-art measurement methods, marketing tools, and experts in the field.
Skills
Experience with programming languages such as Python and/or R, big data tools such as Hadoop, or data visualization tools such as Tableau.
The ability to communicate effectively in writing, including conveying complex information and promoting in-depth engagement on the topic at hand.
Experience working with large datasets.
Education/Experience
Master of Science degree in computer science or in a relevant field.
Top 3 must-have HARD skills
Python, with experience in causal inference models (impact measurement).
Experience with different difference-in-differences (DiD) models (a worked example follows this list).
Methodology experience and the ability to implement it in Python.
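As one worked instance of the DiD skills above, here is a hedged statsmodels sketch; the toy data and column names are invented purely for illustration.

```python
# Illustrative difference-in-differences (DiD) estimate with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome": [10, 11, 12, 13, 10, 12, 15, 18],
    "treated": [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = exposed to the campaign
    "post":    [0, 0, 0, 0, 1, 1, 1, 1],   # 1 = after the intervention
})

# The coefficient on the treated:post interaction is the DiD estimate
# of the intervention's causal effect under the parallel-trends assumption.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.summary().tables[1])
```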
Good to Have Skills
Survey data analysis
AI workflows
Senior Data Analytics Engineer (Customer Data)
Irving, TX
Our client is seeking a Senior Data Analytics Engineer (Customer Data) to join their team! This position is remote.
Build, optimize, and maintain customer data pipelines in PySpark/Databricks to support CDP-driven use cases across AWS/Azure/GCP
Transform raw and integrated customer data into analytics-ready datasets used for dashboards, reporting, segmentation, personalization, and downstream AI/ML applications
Develop and enrich customer behavior metrics, campaign analytics, and performance insights such as: ad engagement, lifecycle metrics, retention
Partner with Marketing, Sales, Product, and Data Science teams to translate business goals into metrics, features, and analytical data models
Build datasets consumed by Power BI/Tableau dashboards (hands-on dashboard creation not required)
Ensure high cluster performance and pipeline optimization in Databricks, including troubleshooting skewed joins, sorting, partitioning, and real-time processing needs (see the skew-handling sketch after this list)
Work across multiple cloud and vendor ecosystems such as: AWS/Azure/GCP; Hightouch or comparable CDP vendors
Participate in the data ingestion and digestion phases, shaping integrated data into analytical layers for MarTech and BI
Contribute to and enforce data engineering standards, documentation, governance, and best practices across the organization
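To illustrate the skewed-join troubleshooting noted above, here is a hedged PySpark sketch of two standard mitigations: broadcasting the small side and salting the hot key. Table and column names are hypothetical.

```python
# Two common fixes for skewed joins in PySpark (illustrative names only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
events = spark.table("cdp.events")      # large fact, skewed on customer_id
profiles = spark.table("cdp.profiles")  # small dimension

# Fix 1: broadcast the small side so the skewed key never shuffles.
joined = events.join(F.broadcast(profiles), "customer_id")

# Fix 2: salt the hot key to spread one customer's rows across partitions.
SALT = 8
salted_events = events.withColumn("salt", (F.rand() * SALT).cast("int"))
salted_profiles = profiles.crossJoin(
    spark.range(SALT).withColumnRenamed("id", "salt")
)
joined_salted = salted_events.join(salted_profiles, ["customer_id", "salt"])
```

Broadcasting avoids the shuffle entirely when the dimension fits in executor memory; salting trades extra rows on the small side for even partition sizes on the large side.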
Desired Skills/Experience:
6+ years of experience in Data Engineering, Analytics Engineering, or related fields, including data modeling experience
Strong Data Engineering fundamentals with the ability to design pipelines, optimize performance, and deliver real-time or near-real-time datasets
Ability to deeply understand data, identifying gaps, designing meaningful transformations, and creating metrics with clear business context
Understanding of how customer data moves through Customer Data Platforms (CDPs) and how to design pipelines that integrate with them
Experience supporting Marketing, Customer Data, MarTech, CDP, segmentation, or personalization teams strongly preferred
Hands-on experience required with Databricks, PySpark, Python, SQL, and building analytics datasets for dashboards/reporting and customer behavior analytics or campaign performance insights
Experience designing and implementing features that feed downstream AI or customer-facing applications
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position starts at $150,000-$160,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Data Engineer
New York, NY
Hey all! We are looking for a mid-level data engineer. No third parties. As a result of this expansion, we are seeking experienced data engineers with 5+ years of relevant experience to support the design and development of a strategic data platform for SMBC Capital Markets and Nikko Securities Group.
Qualifications and Skills
• Proven experience as a Data Engineer with experience in Azure cloud.
• Experience implementing solutions using:
• Azure cloud services
• Azure Data Factory
• Azure Data Lake Storage Gen2
• Azure Databases
• Azure Data Fabric
• API Gateway management
• Azure Functions
• Well versed with Azure Databricks
• Strong SQL skills with RDBMS or NoSQL databases
• Experience with developing APIs using FastAPI or similar frameworks in Python
• Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes
• Good understanding of ETL/ELT processes
• Experience in financial services industry, financial instruments, asset classes and market data are a plus.
Big Data Engineer
Santa Monica, CA
Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability (see the streaming sketch after this list)
Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
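As a concrete sketch of the streaming side of such a platform, here is a minimal Spark Structured Streaming job that reads from Kafka and lands Parquet with checkpointing. Broker address, topic, and sink paths are placeholders, not this client's environment.

```python
# Hedged sketch of a Kafka-to-lake streaming job in Spark Structured Streaming.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka rows arrive as binary key/value; decode before writing downstream.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://bucket/events/")              # hypothetical sink
    .option("checkpointLocation", "s3://bucket/_chk/")  # enables exactly-once sinks
    .start()
)
query.awaitTermination()
```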
Desired Skills/Experience:
Bachelor's degree in a STEM field (Science, Technology, Engineering, or Mathematics)
5+ years of relevant professional experience
4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
Strong understanding of system and application design, architecture principles, and distributed system fundamentals
Proven experience building highly available, scalable, and production-grade services
Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
Experience processing massive datasets at the petabyte scale
Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $52.00 and $75.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Senior Data Engineer
Glendale, CA
Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
Build tools and services to support data discovery, lineage, governance, and privacy
Collaborate with other software and data engineers and cross-functional teams
Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS (see the Airflow sketch after this list)
Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
Participate in agile and scrum ceremonies to collaborate and refine team processes
Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
Maintain detailed documentation of work and changes to support data quality and data governance requirements
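To ground the orchestration item above, here is a minimal, hypothetical Airflow DAG of the shape used for such pipelines; the task bodies, IDs, and schedule are illustrative only.

```python
# Hedged sketch of a small Airflow DAG for a daily Core Data pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from API source")  # placeholder for real extraction logic

def load():
    print("write curated dataset")  # placeholder for real load logic

with DAG(
    dag_id="core_data_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```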
Desired Skills/Experience:
5+ years of data engineering experience developing large data pipelines
Proficiency in at least one major programming language such as: Python, Java or Scala
Strong SQL skills and the ability to create queries to analyze complex datasets
Hands-on production experience with distributed processing systems such as Spark
Experience interacting with and ingesting data efficiently from API data sources
Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
Experience developing APIs with GraphQL
Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
Familiarity with data modeling techniques and data warehousing best practices
Strong algorithmic problem-solving skills
Excellent written and verbal communication skills
Advanced understanding of OLTP versus OLAP environments
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $51.00 and $73.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Azure Data Engineer
Irving, TX
Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.
Duties:
Lead the design, architecture, and implementation of key data initiatives and platform capabilities
Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying solutions and guiding teams to implement them
Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders
Desired Skills/Experience:
Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
5+ years of relevant work experience in data engineering
Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Azure Data Engineer
Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
AWS Data Engineer
McLean, VA
Responsibilities:
Design, build, and maintain scalable data pipelines using AWS Glue and Databricks (see the Glue sketch after this list).
Develop and optimize ETL/ELT processes using PySpark and Python.
Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
Ensure data quality, consistency, and reliability across systems.
Optimize performance of large-scale distributed data processing workflows.
Develop automation scripts and frameworks for data ingestion, transformation, and validation.
Follow best practices for data governance, security, and compliance.
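As an illustration of the Glue-based pipeline work above, here is a hedged skeleton of an AWS Glue PySpark job; the catalog database, table name, and S3 path are invented placeholders.

```python
# Minimal sketch of an AWS Glue PySpark job skeleton.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, transform with plain PySpark, land to S3.
src = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="orders"  # hypothetical catalog entries
).toDF()
cleaned = src.dropna(subset=["order_id"]).dropDuplicates(["order_id"])
cleaned.write.mode("overwrite").parquet("s3://bucket/curated/orders/")  # hypothetical path

job.commit()
```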
Required Skills & Experience:
5-8 years of hands-on experience in Data Engineering.
Strong proficiency in Python and PySpark for data processing and transformation.
Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena.
Hands-on experience with Databricks for building and managing data pipelines.
Experience working with large-scale data systems and optimizing performance.
Solid understanding of data modeling, data lake architecture, and ETL design principles.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Data Engineer
Tempe, AZ
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables (see the DLT sketch after this list)
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
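To make the Delta Live Tables item above concrete, here is a hedged sketch of a two-table DLT pipeline as it would appear in a Databricks notebook; the source path and expectation rule are assumptions, not this employer's pipeline.

```python
# Hedged Delta Live Tables sketch (runs inside a Databricks DLT pipeline,
# where `spark` is provided by the notebook environment).
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from cloud storage")
def bronze_events():
    return spark.read.format("json").load("/mnt/landing/events/")  # hypothetical path

@dlt.table(comment="Validated, deduplicated events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # rows failing this are dropped
def silver_events():
    return (
        dlt.read("bronze_events")
        .dropDuplicates(["event_id"])
        .withColumn("ingested_at", F.current_timestamp())
    )
```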
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Scientist - ML, Python
McLean, VA
10+ years of experience required in Information Technology.
• Python Programming: At least 5 years of hands-on experience with Python, particularly in frameworks like FastAPI, Django, Flask, and experience using AI frameworks.
• Access Control Expertise: Strong understanding of access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) (see the sketch after this list).
• API and Connector Development: Experience in developing API connectors using Python for extracting and managing access control data from platforms like Azure, SharePoint, Java, .NET, WordPress, etc.
• AI and Machine Learning: Hands-on experience integrating AI into applications for automating tasks such as access control reviews and identifying anomalies.
• Cloud and Microsoft Technologies: Proficiency with Azure services, Microsoft Graph API, and experience integrating Python applications with Azure for access control reviews and reporting.
• Reporting and Visualization: Experience using reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports related to security and access control metrics.
• Communication Skills: Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time.
• PlainID: Experience or familiarity with PlainID platforms for identity and access management.
• Azure OpenAI: Familiarity with Azure OpenAI technologies and their application in access control and security workflows.
• Power BI: Experience with Microsoft Power BI for data visualization and reporting.
• Agile Methodologies: Experience working in Agile environments and familiarity with Scrum methodologies for delivering security solutions.
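As one concrete way the Python and RBAC themes above combine, here is a hedged FastAPI sketch of a role check implemented as a dependency. The in-memory role store and header-based identity are stand-ins for a real IdP or database lookup, not a production authorization design.

```python
# Illustrative RBAC check as a FastAPI dependency.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Hypothetical role assignments; a real system would query an IdP or database.
USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}

def require_role(role: str):
    # The x_user parameter reads the "x-user" request header (demo identity only).
    def checker(x_user: str = Header(...)):
        if role not in USER_ROLES.get(x_user, set()):
            raise HTTPException(status_code=403, detail="forbidden")
        return x_user
    return checker

@app.get("/access-reviews")
def list_reviews(user: str = Depends(require_role("admin"))):
    # Only callers holding the "admin" role reach this handler.
    return {"requested_by": user, "reviews": []}
```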
Associate Data Scientist
Minneapolis, MN
This position is remote.
Develop service specific knowledge through greater exposure to peers, internal experts, clients, regular self-study, and formal training opportunities
Gain exposure to a variety of program/project situations to develop business and organizational/planning skills
Retain knowledge gained and performance feedback provided to transfer into future work
Approach all problems and projects with a high level of professionalism, objectivity and an open mind to new ideas and solutions
Collaborate with internal teams to collect, analyze, and automate data processing
Leverage AI models, including LLMs, for developing intelligent solutions that enhance data-driven decision-making processes for both internal projects and external clients
Leverage machine learning methodologies, including non-linear, linear, and forecasting methods to help build solutions aimed at better understanding the business, making the business more efficient, and planning our future
Work under the guidance of a variety of Data Science team members, gain exposure to developing custom data models and algorithms to apply to data sets
Gain experience with predictive and inferential analytics, machine learning, and artificial intelligence techniques
Use existing processes and tools to monitor and analyze solution performance and accuracy and communicate findings to team members and end users
Contribute to automating business workflows by incorporating LLMs and other AI models to streamline processes and improve efficiency
Integrate AI-driven solutions within existing systems to provide advanced predictive capabilities and actionable insights
Learn to work individually as well as in collaboration with others
Desired Skills/Experience:
Bachelor's degree required; a degree in Statistics, Computer Science, Economics, Analytics, or Data Science preferred
1+ year of experience preferred
Experience with APIs, web scraping, SQL/no-SQL databases, and cloud-based data solutions preferred
Combination of relevant experience, education, and training may be accepted in lieu of degree
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position starts at $90,000-$125,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Senior Data Engineer
Austin, TX
We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX.
The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.
Key Responsibilities
1. Data Architecture & Strategy
Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing (see the sketch at the end of this subsection).
Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.
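A minimal sketch of the medallion flow referenced above, assuming PySpark with Delta Lake; paths and cleansing rules are hypothetical illustrations of the layering, not this employer's design.

```python
# Hedged Bronze/Silver/Gold flow in PySpark + Delta Lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land the raw source as-is, stamped with ingestion metadata.
bronze = (spark.read.json("/mnt/landing/customers/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/customers")

# Silver: cleanse, conform, and drop obviously bad rows.
silver = (spark.read.format("delta").load("/mnt/bronze/customers")
          .where(F.col("customer_id").isNotNull())
          .dropDuplicates(["customer_id"]))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/customers")

# Gold: business-level aggregate ready for BI consumption.
gold = silver.groupBy("region").agg(
    F.countDistinct("customer_id").alias("customers"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/customers_by_region")
```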
2. Data Integration & Pipeline Development
Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
Optimize pipelines for cost-effectiveness, performance, and scalability.
3. Master Data Management (MDM) & Data Governance
Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
Define and manage data governance, metadata, and data quality frameworks.
Partner with business teams to align data standards and maintain data integrity across domains.
4. API Management & Integration
Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
Design secure, reliable data services for internal and external consumers.
Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.
5. Database & Platform Administration
Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
Monitor and optimize cost, performance, and scalability across Azure data services.
Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.
6. Collaboration & Leadership
Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
Mentor junior engineers and define best practices for coding, data modeling, and solution design.
Contribute to enterprise-wide data strategy and roadmap development.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
5+ years of hands-on experience in Azure-based data engineering and architecture.
Strong proficiency with the following:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2
SQL, Python, PySpark, PowerShell
Azure API Management and Logic Apps
Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
Familiarity with MDM concepts, data governance frameworks, and metadata management.
Experience with automation, data-focused CI/CD, and IaC.
Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.
What We Offer
Competitive compensation and benefits package
Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
Data Engineer
Austin, TX
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Scientist
Phoenix, AZ
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python (see the modeling sketch after this list)
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
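To illustrate the pricing-model work above, here is a hedged scikit-learn sketch with synthetic data; the features and coefficients are invented and merely stand in for real sales and production inputs.

```python
# Minimal sketch of a store-level pricing model: a gradient-boosted regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: unit cost, store traffic, seasonality, competitor price.
X = rng.normal(size=(1000, 4))
y = 5 + X @ np.array([1.5, 0.8, 0.3, -1.1]) + rng.normal(scale=0.5, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("holdout R^2:", round(model.score(X_test, y_test), 3))

# Item-level recommendation: predict a price signal for a new feature row.
print("example prediction:", model.predict(X_test[:1]))
```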
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments
Data Scientist
Alhambra, CA
Title: Principal Data Scientist
Duration: 12 Months Contract
Additional Information
California Resident Candidates Only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
Job description:
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.
The Principal Data Scientist will possess:
Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights
Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles
Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams
Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel
A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight
A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables
Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts
Experience Required:
Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring (see the MLflow sketch after this list).
Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
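As one concrete instance of the MLOps practices listed above, here is a hedged MLflow tracking sketch; the experiment name and toy model are illustrative only.

```python
# Minimal MLflow experiment-tracking sketch with a toy scikit-learn model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)

mlflow.set_experiment("pricing-classifier-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact attached to the run
```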
Education Required & certifications:
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics, such as:
Microsoft Azure Data Scientist Associate (DP-100)
Databricks Certified Data Scientist or Machine Learning Professional
AWS Machine Learning Specialty
Google Professional Data Engineer
or equivalent advanced analytics certifications. The certification is required and may not be substituted with additional experience.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Name: T Saketh Ram Sharma
Email: *****************************
Internal Id: 25-54101