Senior Data Engineer
Charlotte, NC jobs
**NO 3rd Party vendor candidates or sponsorship**
Role Title: Senior Data Engineer
Client: Global construction and development company
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote based in ET or CT time zones
Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and brings data architecture knowledge to help shape patterns, standards, and solution designs.
Key Responsibilities
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling (a minimal sketch follows this list).
Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
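For illustration, the sketch below shows one common shape for the API ingestion responsibility above: a paginated REST pull with bearer-token authentication, throttling-aware retries, and exponential backoff. The endpoint, parameter names, and response fields are hypothetical placeholders, not any specific client API.

```python
import time
import requests

BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint
PAGE_SIZE = 500
MAX_RETRIES = 5


def fetch_page(session: requests.Session, page: int) -> dict:
    """Fetch one page, retrying on throttling (429) and transient server errors."""
    for attempt in range(MAX_RETRIES):
        resp = session.get(
            BASE_URL,
            params={"page": page, "page_size": PAGE_SIZE},
            timeout=30,
        )
        if resp.status_code in (429, 500, 502, 503, 504):
            # Exponential backoff; honor Retry-After when the API provides it.
            wait = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
            continue
        resp.raise_for_status()  # non-retryable errors fail loudly
        return resp.json()
    raise RuntimeError(f"Giving up on page {page} after {MAX_RETRIES} attempts")


def ingest_all(token: str) -> list[dict]:
    """Walk pages until the API returns an empty batch; auth via bearer token."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {token}"})
    rows, page = [], 1
    while True:
        payload = fetch_page(session, page)
        batch = payload.get("items", [])  # "items" is a hypothetical response field
        if not batch:
            break
        rows.extend(batch)
        page += 1
    return rows
```

A production version would typically land each page to the Bronze layer before transformation and emit metrics for monitoring, per the governance and observability responsibilities above.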
Requirements:
Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with strong focus on Azure Cloud.
Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
Advanced PySpark/Spark experience for complex transformations and performance optimization.
Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency).
Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
Preferred Skills
Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
Senior Data Engineer
Nashville, TN jobs
Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at ***************
YOUR ROLE
Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights.
You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology and evidence-based clinical care.
As a Senior Data Engineer you will:
Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams (see the sketch after this list).
Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
Utilize DBT (data build tool) for data transformation and modeling, ensuring data quality and consistency.
Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
Work with SageMaker for machine learning data preparation and integration.
Ensure data security, privacy, and compliance with industry regulations.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
Implement best practices for data engineering, including code reviews, testing, and automation.
Mentor junior data engineers and share knowledge on data engineering best practices.
Stay up-to-date with the latest advancements in data engineering, AWS services, and related technologies.
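To make the S3-to-Redshift piece of the stack above concrete, here is a minimal, hedged sketch of an AWS Lambda handler that submits a Redshift COPY through the Redshift Data API when a new object lands in S3. The cluster, database, IAM role, and table names are hypothetical placeholders; a production pipeline would add validation, idempotency checks, and alerting.

```python
import boto3

# Hypothetical identifiers; in practice these would come from environment variables.
CLUSTER_ID = "analytics-cluster"
DATABASE = "analytics"
DB_USER = "etl_user"
COPY_ROLE_ARN = "arn:aws:iam::123456789012:role/redshift-copy-role"
TARGET_TABLE = "staging.events"

redshift_data = boto3.client("redshift-data")


def lambda_handler(event, context):
    """Triggered by an S3 put event; loads the new object into Redshift via COPY."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        copy_sql = (
            f"COPY {TARGET_TABLE} "
            f"FROM 's3://{bucket}/{key}' "
            f"IAM_ROLE '{COPY_ROLE_ARN}' "
            "FORMAT AS PARQUET;"
        )
        # execute_statement is asynchronous; a Step Functions state or follow-up
        # Lambda could poll describe_statement to confirm completion.
        response = redshift_data.execute_statement(
            ClusterIdentifier=CLUSTER_ID,
            Database=DATABASE,
            DbUser=DB_USER,
            Sql=copy_sql,
        )
        print(f"Submitted COPY for s3://{bucket}/{key}: {response['Id']}")
    return {"status": "submitted"}
```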
After 3 months on the job you will have:
Developed a strong understanding of Concert's data engineering infrastructure
Learned the business domain and how it maps to the information architecture
Made material contributions towards existing key results
After 6 months you will have:
Led a major initiative
Become the first point of contact when issues related to the data warehouse are identified
After 12 months you will have:
Taken responsibility for the long term direction of the data engineering infrastructure
Proposed and executed key results with an understanding of the business strategy
Communicated the business value of major technical initiatives to key non-technical business stakeholders
WHAT LEADS TO SUCCESS
Self-Motivated: A team player with a positive attitude and a proactive approach to problem-solving.
Executes Well: You are biased to action and get things done. You acknowledge unknowns and recover well from setbacks.
Comfort with Ambiguity: You aren't afraid of uncertainty or blazing new trails; you care about building towards a future that is different from today.
Technical Bravery: You are comfortable with new technologies and eager to dive in to understand data in both its raw and processed states.
Mission-Focused: You are personally motivated to drive more affordable, equitable, and effective integration of genomic technologies into clinical care.
Effective Communication: You build rapport and great working relationships with senior leaders and peers, and you use the relationships you've built to drive the company forward.
RELEVANT SKILLS & EXPERIENCE
Minimum of 4 years of experience working as a data engineer
Bachelor's degree in software or data engineering or comparable technical certification / experience
Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
Strong understanding of data warehousing principles and best practices
Experience with DBT for data transformation and modeling.
Proficiency in SQL and at least one programming language (e.g., Python, Scala)
Familiarity or experience with the following tools/concepts is a plus: BI tools such as Metabase; healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; zero-ETL and Kinesis data streams
COMPENSATION
Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.
LOCATION
Concert is based in Nashville, Tennessee and supports a remote work environment.
For further questions, please contact: ******************.
Data Engineer
Huntsville, AL jobs
Data Engineer
Direct Hire
Clearance Requirements: Active Top Secret Clearance (TS/SCI preferred)
Pay Rate: $150K - $170K
Position Description: Are you passionate about turning complex data into powerful insights that drive real-world impact? As a Data Engineer, you'll play a key role in designing, building, and optimizing data systems that support mission-critical initiatives. You'll collaborate with multidisciplinary teams to develop scalable pipelines, integrate structured and unstructured data, and enable advanced analytics solutions that power innovation and intelligence.
In this role, you'll contribute to projects that matter - from national security to enterprise-level analytics - using your expertise in Java, Oracle, and modern data engineering practices to build efficient, high-performance systems.
Responsibilities:
Design, develop, and maintain data pipelines and ETL workflows to transform and organize large-scale datasets.
Write clean, secure, and efficient Java code for data processing, automation, and integration.
Optimize Oracle database performance through tuning and advanced PL/SQL scripting.
Automate build, test, and deployment processes using CI/CD best practices.
Collaborate with analysts, developers, and stakeholders in an agile environment to deliver data solutions that meet evolving business and technical needs.
Evaluate and implement new tools or technologies to enhance data engineering efficiency and scalability.
Required Skills & Education:
Clearance: Active Top Secret (TS/SCI preferred).
Education: Bachelor's degree and 10+ years of experience as a Data Engineer in a large-scale enterprise environment, or 14+ years of equivalent experience in lieu of a degree.
Proven experience programming in Java for data analysis, automation, and data warehousing.
Strong hands-on experience with Oracle databases, including performance tuning and query optimization.
Skilled in PL/SQL scripting, stored procedures, and data manipulation.
Proficient in CI/CD pipelines, automated testing, and deployment workflows.
Ability to work independently and solve complex data challenges in a dynamic enterprise setting.
Preferred Qualifications:
Experience with ETL tools (e.g., Informatica).
Familiarity with data warehousing design and DevOps practices.
Experience working within Agile, Waterfall, or Iterative SDLC models.
Knowledge of Oracle VLDB (Very Large Database) concepts and optimization strategies.
Current TS/SCI clearance with polygraph is highly desirable.
About Seneca Resources
At Seneca Resources, we're more than just a staffing and consulting firm - we're your career partner. With offices across the U.S. and clients ranging from Fortune 500 corporations to federal agencies, we connect talented professionals to meaningful opportunities that drive innovation and impact.
When you join Seneca, you'll receive competitive pay, comprehensive benefits (including health, dental, vision, and 401(k)), and dedicated support from a team committed to your professional success.
We are proud to be an Equal Opportunity Employer and value diversity at all levels of our organization. All qualified individuals are encouraged to apply.
Data Engineer
Charlotte, NC jobs
Job Title: Azure Databricks Engineer (Onsite)
Years of Experience: 7-12 Years
Full Time
We are seeking a highly skilled and motivated Technical Team Lead with extensive experience in Azure Databricks to join our dynamic team. The ideal candidate will possess a strong technical background, exceptional leadership abilities, and a passion for driving innovative solutions. As a Technical Team Lead, you will be responsible for guiding a team of developers and engineers in the design, development, and implementation of data-driven solutions that leverage Azure Databricks.
Responsibilities:
Lead and mentor a team of technical professionals, fostering a collaborative and high-performance culture.
Design and implement data processing solutions using Azure Databricks, ensuring scalability and efficiency.
Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
Oversee the development lifecycle, from planning and design to deployment and maintenance.
Conduct code reviews and provide constructive feedback to team members to ensure code quality and adherence to best practices.
Stay up-to-date with industry trends and emerging technologies related to Azure Databricks and data engineering.
Facilitate communication between technical and non-technical stakeholders to ensure alignment on project goals.
Identify and mitigate risks associated with project delivery and team performance.
Mandatory Skills:
Proven expertise in Azure Databricks, including experience with Spark, data pipelines, and data lakes.
Strong programming skills in languages such as Python, Scala, or SQL.
Experience with cloud-based data storage solutions, particularly Azure Data Lake Storage and Azure SQL Database.
Solid understanding of data modeling, ETL processes, and data warehousing concepts.
Demonstrated ability to lead technical teams and manage multiple projects simultaneously.
Preferred Skills:
Familiarity with Azure DevOps for CI/CD processes.
Experience with machine learning frameworks and libraries.
Knowledge of data governance and compliance standards.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
7-10 years of experience in data engineering, software development, or a related technical role.
Proven track record of leading technical teams and delivering successful projects.
Relevant certifications in Azure or data engineering are a plus.
If you are a passionate Technical Team Lead with a strong background in Azure Databricks and a desire to drive innovation, we encourage you to apply and join our team.
Data Engineer
Minneapolis, MN jobs
Job Title: Data Engineer
Job Type: Contract to Hire
USC or GC holders only for this contract-to-hire need; no sponsorship available.
Must have requirements:
GCP, SQL, Python, Airflow
System design mindset
Communication - ability to vocalize what they are doing and what/how they are achieving their work. Accents are not an issue as long as they are comprehensible.
Healthcare experience is not required, but it is a nice-to-have.
Location: Onsite at any of 4 office locations, with a focus on Minneapolis, MN; Arlington, VA; Portland, OR; and Raleigh, NC
100% onsite to start, then switching to a 2-3x/week hybrid schedule if the engineer performs well
Job Summary:
The Senior Cloud Data Engineer plays a key role in designing, building, and maintaining data pipelines and infrastructure using Google Cloud Platform (GCP) BigQuery. The incumbent will collaborate with data analysts, data scientists, and other engineers to ensure timely access to high-quality data for data-driven decision-making across the organization.
The Senior Cloud Data Engineer is a highly technical, hands-on coder who has mastered data processing solutions and scalable data pipelines that support analytics and exploratory analysis. This role ensures new business requirements are decomposed and implemented in cohesive end-to-end designs that enable data integrity and quality and best support the BI and analytic capabilities that power decision-making. This includes building data acquisition programs that handle the business's growing data volume as part of the Data Lake in the GCP BigQuery ecosystem and maintaining a robust data catalog.
This is a Senior Data Engineering role within Data & Analytics' Data Core organization, working closely with Data & Analytics leaders. The incumbent will continually improve the business's data and analytic solutions, processes, and data engineering capabilities. The incumbent embraces industry best practices and trends and, through acquired knowledge, drives process and system improvement opportunities.
Responsibilities:
• Design, develop, and implement data pipelines using GCP BigQuery, Dataflow, and Airflow for data ingestion, transformation, and loading (a minimal sketch follows this list).
• Optimize data pipelines for performance, scalability, and cost-efficiency.
• Ensure data quality through data cleansing, validation, and monitoring processes.
• Develop and maintain data models and schemas in BigQuery to support various data analysis needs.
• Automate data pipeline tasks using scripting languages like Python and tools like Dataflow.
• Collaborate with data analysts and data scientists to understand data requirements and translate them into technical data solutions.
• Leverage Terraform (IaC) and DevOps practices to ensure seamless integration of data pipelines with CI/CD workflows.
• Monitor and troubleshoot data pipelines and infrastructure to identify and resolve issues.
• Stay up-to-date with the latest advancements in GCP BigQuery and other related technologies.
• Document data pipelines and technical processes for future reference and knowledge sharing.
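As a hedged companion to the first responsibility above, the sketch below shows a minimal Airflow DAG that loads a daily file from Cloud Storage into a BigQuery staging table and then runs a transformation query. Bucket, dataset, and table names are hypothetical, and it assumes Airflow 2.4+ with the apache-airflow-providers-google package installed.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_member_load",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # use schedule_interval on older Airflow 2.x
    catchup=False,
) as dag:
    # Land raw files from GCS into a BigQuery staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_members",
        bucket="example-raw-bucket",                         # hypothetical bucket
        source_objects=["members/{{ ds }}/*.json"],
        destination_project_dataset_table="analytics.staging_members",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )

    # Transform staging data into the curated table.
    build_curated = BigQueryInsertJobOperator(
        task_id="build_curated_members",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.members AS
                    SELECT member_id, status, DATE(updated_at) AS updated_date
                    FROM analytics.staging_members
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> build_curated
```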
Basic Requirements:
• Bachelor's degree or equivalent experience in Computer Science, Mathematics, Information Technology, or a related field.
• 5+ years of solid experience as a data engineer.
• Strong understanding of data warehousing / data lake concepts and data modeling principles.
• Proven experience designing and implementing data pipelines using GCP BigQuery, Dataflow, and Airflow.
• Strong SQL skills and scripting skills in Python or a similar language.
• Experience with data quality tools and techniques.
• Ability to work independently and as part of a team.
• Strong problem-solving and analytical skills.
• Passion for data and a desire to learn and adapt to new technologies.
• Experience with other GCP services such as Cloud Storage, Dataflow, and Pub/Sub.
• Experience with cloud deployment and automation tools like Terraform.
• Experience with data visualization tools such as Tableau, Power BI, or Looker.
• Experience with healthcare data.
• Familiarity with machine learning, artificial intelligence, and data science concepts.
• Experience with data governance and healthcare PHI data security best practices.
• Ability to work independently on tasks and projects to deliver data engineering solutions.
• Ability to communicate effectively and convey complex technical concepts as well as task/project updates.
The projected hourly range for this position is $78 to $89.
On-Demand Group (ODG) provides employee benefits which includes healthcare, dental, and vision insurance. ODG is an equal opportunity employer that does not discriminate on the basis of race, color, religion, gender, sexual orientation, age, national origin, disability, or any other characteristic protected by law.
Senior Data Platform Engineer (28702)
Minnetonka, MN jobs
Title: Senior Data Platform Engineer - Oracle/Snowflake/Azure
Job Type: Contract-to-Hire (6 months) *All candidates must be interested & eligible for conversion without sponsorship.
Industry: Health Insurance
Pay range: $65 to $78/hour
Key Technologies: Oracle, Snowflake, Azure Cloud, MS SQL
---
About the Role
We are seeking a highly skilled Senior Data Platform Engineer to join a leading healthcare organization headquartered in Minnetonka, MN. This role focuses on designing, implementing, and maintaining both legacy and modern data platforms that support enterprise operations. You will collaborate with experienced engineers and architects to optimize databases, develop data pipelines, and drive cloud integration initiatives.
This position is ideal for a seasoned professional who thrives on solving complex data challenges, contributing to modernization efforts, and working in a fast-paced Agile environment.
Responsibilities
Design, build, and maintain robust data pipelines across cloud and on-premises environments.
Administer, monitor, and optimize databases including Oracle, Snowflake, Azure SQL, and MS SQL.
Manage database provisioning, configuration, patching, and backup/recovery processes.
Collaborate with developers, analysts, and DBAs to troubleshoot issues and optimize queries.
Support data migration and integration efforts as part of cloud transformation initiatives.
Ensure database security, access controls, and compliance with internal standards.
Contribute to documentation, runbooks, and knowledge sharing within the team.
Participate in Agile ceremonies and planning activities, fostering a culture of shared ownership and continuous improvement.
Join an on-call rotation to support 24/7 database operations and incident response.
Required Qualifications
7+ years of experience in database engineering or a related technical role.
Hands-on experience with at least one of the following: Oracle, Snowflake, or Azure SQL Database.
Solid knowledge of cloud platforms (Azure preferred) and cloud-native data services.
Strong understanding of system performance tuning and query optimization.
Ability to work collaboratively and communicate effectively with technical peers.
Preferred Qualifications
Experience building and maintaining data pipelines in cloud or hybrid environments.
Familiarity with Liquibase or other database change management tools.
Proficiency in scripting or automation (e.g., Ansible, Python, Terraform).
Experience with CI/CD pipelines or DevOps practices.
Knowledge of monitoring tools and observability platforms.
Background in Agile or SAFe environments.
Salary range for this position is $110,400-$154,600.
Annual salary range placement will depend on a variety of factors including, but not limited to, education, work experience, applicable certifications and/or licensure, the position's scope and responsibility, internal pay equity and external market salary data.
Benefits
Dahl Consulting is proud to offer a comprehensive benefits package to eligible employees that will allow you to choose the best coverage to meet your family's needs. For details, please review the DAHL Benefits Summary: ***********************************************
Senior Data Engineer
Houston, TX jobs
About the Role
The Senior Data Engineer will play a critical role in building and scaling an enterprise data platform to enable analytics, reporting, and operational insights across the organization.
This position requires deep expertise in Snowflake and cloud technologies (AWS or Azure), along with strong upstream oil & gas domain experience. The engineer will design and optimize data pipelines, enforce data governance and quality standards, and collaborate with cross-functional teams to deliver reliable, scalable data solutions.
Key Responsibilities
Data Architecture & Engineering
Design, develop, and maintain scalable data pipelines using Snowflake, AWS/Azure, and modern data engineering tools.
Implement ETL/ELT processes integrating data from upstream systems (SCADA, production accounting, drilling, completions, etc.); a minimal loading sketch follows this section.
Architect data models supporting both operational reporting and advanced analytics.
Establish and maintain frameworks for data quality, validation, and lineage to ensure enterprise data trust.
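As one concrete illustration of the ETL/ELT item above, here is a minimal sketch using the snowflake-connector-python package to load staged production files into a raw Snowflake table and run a simple row-count check. The stage, table, warehouse, and the LOAD_DATE column are hypothetical placeholders, and credentials would normally come from a secrets manager rather than environment variables.

```python
import os
import snowflake.connector

# Hypothetical connection details pulled from the environment for the sketch.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="UPSTREAM",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load the latest production volume files from an external stage into a raw table.
    cur.execute("""
        COPY INTO RAW.PRODUCTION_VOLUMES
        FROM @RAW.PRODUCTION_STAGE/daily/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Simple data-quality check: fail the job if nothing was loaded today.
    # LOAD_DATE is a hypothetical audit column on the raw table.
    cur.execute("""
        SELECT COUNT(*) FROM RAW.PRODUCTION_VOLUMES
        WHERE LOAD_DATE = CURRENT_DATE()
    """)
    loaded = cur.fetchone()[0]
    if loaded == 0:
        raise RuntimeError("No production volume rows loaded for today")
    print(f"Loaded {loaded} rows")
finally:
    conn.close()
```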
Platform Development & Optimization
Lead the build and optimization of Snowflake-based data warehouses for performance and cost efficiency.
Design cloud-native data solutions leveraging AWS/Azure services (S3, Lambda, Azure Data Factory, Databricks).
Manage large-scale time-series and operational data processing workflows.
Implement strong security, access control, and governance practices.
Technical Leadership & Innovation
Mentor junior data engineers and provide technical leadership across the data platform team.
Research and introduce new technologies to enhance platform scalability and automation.
Build reusable frameworks, components, and utilities to streamline delivery.
Support AI/ML initiatives by delivering production-ready, high-quality data pipelines.
Business Partnership
Collaborate with stakeholders across business units to translate requirements into technical solutions.
Work with analysts and data scientists to enable self-service analytics and reporting.
Ensure data integration supports regulatory and compliance reporting.
Act as a bridge between business and technical teams to ensure alignment and impact.
Qualifications & Experience
Education
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Advanced degree or relevant certifications (SnowPro, AWS/Azure Data Engineer, Databricks) preferred.
Experience
7+ years in data engineering roles, with at least 3 years on cloud data platforms.
Proven expertise in Snowflake and at least one major cloud platform (AWS or Azure).
Hands-on experience with upstream oil & gas data (wells, completions, SCADA, production, reserves, etc.).
Demonstrated success delivering operational and analytical data pipelines.
Technical Skills
Advanced SQL and Python programming skills.
Strong background in data modeling, ETL/ELT, cataloging, lineage, and data security.
Familiarity with Airflow, Azure Data Factory, or similar orchestration tools.
Experience with CI/CD, Git, and automated testing.
Knowledge of BI tools such as Power BI, Spotfire, or Tableau.
Understanding of AI/ML data preparation and integration.
Senior Data Analytics Engineer (Customer Data)
Irving, TX jobs
Our client is seeking a Senior Data Analytics Engineer (Customer Data) to join their team! This position is remote.
Build, optimize, and maintain customer data pipelines in PySpark/Databricks to support CDP-driven use cases across AWS/Azure/GCP
Transform raw and integrated customer data into analytics-ready datasets used for dashboards, reporting, segmentation, personalization, and downstream AI/ML applications
Develop and enrich customer behavior metrics, campaign analytics, and performance insights such as: ad engagement, lifecycle metrics, retention
Partner with Marketing, Sales, Product, and Data Science teams to translate business goals into metrics, features, and analytical data models
Build datasets consumed by Power BI/Tableau dashboards (hands-on dashboard creation not required)
Ensure high cluster performance and pipeline optimization in Databricks, including troubleshooting skewed joins, sorting, partitioning, and real-time processing needs (a minimal skew-handling sketch follows this list)
Work across multiple cloud and vendor ecosystems such as: AWS/Azure/GCP; Hightouch or comparable CDP vendors
Participate in the data ingestion and digestion phases, shaping integrated data into analytical layers for MarTech and BI
Contribute to and enforce data engineering standards, documentation, governance, and best practices across the organization
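As a small, hedged companion to the Databricks performance item above, here is one common pattern for taming a skewed join: broadcasting the small dimension and enabling adaptive skew-join handling. The table and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_metrics").getOrCreate()

# Adaptive query execution can split skewed partitions automatically (Spark 3.x).
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

events = spark.table("cdp.events")          # large, skewed fact table (hypothetical)
campaigns = spark.table("cdp.campaigns")    # small dimension table (hypothetical)

# Broadcasting the small side avoids shuffling the skewed fact table on the join key.
engagement = (
    events
    .join(F.broadcast(campaigns), "campaign_id", "left")
    .groupBy("customer_id", "campaign_name")
    .agg(
        F.count("*").alias("ad_engagements"),
        F.max("event_ts").alias("last_engaged_at"),
    )
)

engagement.write.mode("overwrite").saveAsTable("cdp.customer_campaign_engagement")
```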
Desired Skills/Experience:
6+ years of experience in Data Engineering, Analytics Engineering, or related fields, including data modeling experience
Strong Data Engineering fundamentals with the ability to design pipelines, optimize performance, and deliver real-time or near-real-time datasets
Ability to deeply understand data, identifying gaps, designing meaningful transformations, and creating metrics with clear business context
Understanding of how customer data moves through Customer Data Platforms (CDPs) and how to design pipelines that integrate with them
Experience supporting Marketing, Customer Data, MarTech, CDP, segmentation, or personalization teams strongly preferred
Hands-on experience required with: Databricks, PySpark, Python, SQL, Building analytics datasets for dashboards/reporting and customer behavior analytics or campaign performance insights
Experience designing and implementing features that feed downstream AI or customer-facing applications
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay for this position starts at $150,000-$160,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Big Data Engineer
Santa Monica, CA jobs
Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
Desired Skills/Experience:
Bachelor's degree in a STEM field such as: Science, Technology, Engineering, Mathematics
5+ years of relevant professional experience
4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
Strong understanding of system and application design, architecture principles, and distributed system fundamentals
Proven experience building highly available, scalable, and production-grade services
Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
Experience processing massive datasets at the petabyte scale
Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $52.00 and $75.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Data Engineer
Smithfield, RI jobs
Smithfield, RI and Westlake, TX
Full Time
Must Have Skills:
Strong SQL for querying and data validation
Oracle
AWS
ETL experience with Java Spring Batch (for the ETL data transformation).
Note: the ETL work is done in Java, so Python is only a nice-to-have. Java is required for the ETL; data engineers without Java experience will not be considered.
Senior Data Engineer
Glendale, CA jobs
Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
Build tools and services to support data discovery, lineage, governance, and privacy
Collaborate with other software and data engineers and cross-functional teams
Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
Participate in agile and scrum ceremonies to collaborate and refine team processes
Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
Maintain detailed documentation of work and changes to support data quality and data governance requirements
Desired Skills/Experience:
5+ years of data engineering experience developing large data pipelines
Proficiency in at least one major programming language such as: Python, Java or Scala
Strong SQL skills and the ability to create queries to analyze complex datasets
Hands-on production experience with distributed processing systems such as Spark
Experience interacting with and ingesting data efficiently from API data sources
Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
Experience developing APIs with GraphQL
Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
Familiarity with data modeling techniques and data warehousing best practices
Strong algorithmic problem-solving skills
Excellent written and verbal communication skills
Advanced understanding of OLTP versus OLAP environments
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $51.00 and $73.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Azure Data Engineer
Irving, TX jobs
Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.
Duties:
Lead the design, architecture, and implementation of key data initiatives and platform capabilities
Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying solutions and guiding teams to implement them
Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders
Desired Skills/Experience:
Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
5+ years of relevant work experience in data engineering
Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Direct Client:: W2 position:: Need Hadoop Developer with Spark, Python & Healthcare Exp.:: Jacksonville, FL (Hybrid) :: GC and USC Only::
Jacksonville, FL jobs
GC and USC Only
============================
Role: Hadoop/Analytics Developer
Job Description:
A Hadoop developer is responsible for the design, development and operations of systems that store and manage large amounts of data. Most Hadoop developers have a computer software background and have a degree in information systems, software engineering, computer science, or mathematics.
IT Developers are responsible for the development, programming, and coding of Information Technology solutions. They document detailed system specifications, participate in unit testing and maintenance of planned and unplanned internally developed applications, and evaluate and performance-test purchased products. They are also responsible for including IT Controls to protect the confidentiality, integrity, and availability of the application and of the data it processes or outputs. IT Developers are assigned to moderately complex development projects.
Job Requirements:
• Experience and understanding with unit testing, release procedures, coding design and documentation protocol as well as change management procedures
• Proficiency using versioning tools
• Thorough knowledge of Information Technology fields and computer systems
• Demonstrated organizational, analytical and interpersonal skills
• Flexible team player
• Ability to manage tasks independently and take ownership of responsibilities
• Ability to learn from mistakes and apply constructive feedback to improve performance
• Must demonstrate initiative and effective independent decision-making skills
• Ability to communicate technical information clearly and articulately
• Ability to adapt to a rapidly changing environment
• In-depth understanding of the systems development life cycle
• Proficiency programming in more than one object-oriented programming language
• Proficiency using standard desktop applications such as MS Suite and flowcharting tools such as Visio
• Proficiency using debugging tools
• High critical thinking skills to evaluate alternatives and present solutions that are consistent with business objectives and strategy
Specific Tools/Languages Required:
HADOOP
Spark
Python
SQL/Stored Procedures
Required Work Experience:
• 3+ years of related work experience; professional experience with technical design and coding in the IT industry
AWS Data Engineer
McLean, VA jobs
Responsibilities:
Design, build, and maintain scalable data pipelines using AWS Glue and Databricks.
Develop and optimize ETL/ELT processes using PySpark and Python (a minimal sketch follows this list).
Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
Ensure data quality, consistency, and reliability across systems.
Optimize performance of large-scale distributed data processing workflows.
Develop automation scripts and frameworks for data ingestion, transformation, and validation.
Follow best practices for data governance, security, and compliance.
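As a minimal sketch of the PySpark ETL item above, the following job reads raw JSON from S3, applies a light transformation, and writes partitioned Parquet back to the lake. Paths and column names are hypothetical; with environment-specific configuration the same script could run as a Glue job or a Databricks task.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical lake locations.
RAW_PATH = "s3://example-raw/orders/"
CURATED_PATH = "s3://example-curated/orders/"

spark = SparkSession.builder.appName("orders_curation").getOrCreate()

# Read raw JSON events.
raw = spark.read.json(RAW_PATH)

# Light transformation: type the timestamp, derive a partition column, drop duplicates.
curated = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
)

# Write partitioned Parquet for downstream Athena / Redshift Spectrum queries.
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet(CURATED_PATH)
)

spark.stop()
```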
Required Skills & Experience:
5-8 years of hands-on experience in Data Engineering.
Strong proficiency in Python and PySpark for data processing and transformation.
Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena.
Hands-on experience with Databricks for building and managing data pipelines.
Experience working with large-scale data systems and optimizing performance.
Solid understanding of data modeling, data lake architecture, and ETL design principles.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Data Scientist - ML, Python
McLean, VA jobs
10+ years of experience in Information Technology required.
• Python Programming: At least 5 years of hands-on experience with Python, particularly with frameworks like FastAPI, Django, and Flask, and experience using AI frameworks.
• Access Control Expertise: Strong understanding of access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC).
• API and Connector Development: Experience developing API connectors in Python for extracting and managing access control data from platforms like Azure, SharePoint, Java, .NET, WordPress, etc.
• AI and Machine Learning: Hands-on experience integrating AI into applications to automate tasks such as access control reviews and to identify anomalies.
• Cloud and Microsoft Technologies: Proficiency with Azure services and the Microsoft Graph API, and experience integrating Python applications with Azure for access control reviews and reporting.
• Reporting and Visualization: Experience using Python reporting libraries (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports related to security and access control metrics.
• Communication Skills: Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time.
• PlainID: Experience or familiarity with PlainID platforms for identity and access management.
• Azure OpenAI: Familiarity with Azure OpenAI technologies and their application in access control and security workflows.
• Power BI: Experience with Microsoft Power BI for data visualization and reporting.
• Agile Methodologies: Experience working in Agile environments and familiarity with Scrum methodologies for delivering security solutions.
Associate Data Scientist
Minneapolis, MN jobs
This position is remote.
Develop service specific knowledge through greater exposure to peers, internal experts, clients, regular self-study, and formal training opportunities
Gain exposure to a variety of program/project situations to develop business and organizational/planning skills
Retain knowledge gained and performance feedback provided to transfer into future work
Approach all problems and projects with a high level of professionalism, objectivity and an open mind to new ideas and solutions
Collaborate with internal teams to collect, analyze, and automate data processing
Leverage AI models, including LLMs, for developing intelligent solutions that enhance data-driven decision-making processes for both internal projects and external clients
Leverage machine learning methodologies, including non-linear, linear, and forecasting methods to help build solutions aimed at better understanding the business, making the business more efficient, and planning our future
Work under the guidance of a variety of Data Science team members, gain exposure to developing custom data models and algorithms to apply to data sets
Gain experience with predictive and inferential analytics, machine learning, and artificial intelligence techniques
Use existing processes and tools to monitor and analyze solution performance and accuracy and communicate findings to team members and end users
Contribute to automating business workflows by incorporating LLMs and other AI models to streamline processes and improve efficiency
Integrate AI-driven solutions within existing systems to provide advanced predictive capabilities and actionable insights
Learn to work individually as well as in collaboration with others
Desired Skills/Experience:
Bachelor's degree required, preferably in Statistics, Computer Science, Economics, Analytics, or Data Science
1+ year of experience preferred
Experience with APIs, web scraping, SQL/no-SQL databases, and cloud-based data solutions preferred
Combination of relevant experience, education, and training may be accepted in lieu of degree
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is $90,000-$125,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Senior Data Engineer
Austin, TX jobs
We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX.
The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.
Key Responsibilities
1. Data Architecture & Strategy
Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing (a minimal Bronze-to-Silver sketch follows this section).
Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.
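As a hedged illustration of the medallion item above, here is a minimal PySpark sketch that promotes Bronze records into a Silver Delta table with a dedupe-and-merge pattern. It assumes the delta-spark package (or a Databricks runtime) and uses hypothetical table paths and keys.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Hypothetical ADLS Gen2 locations for the Bronze and Silver layers.
BRONZE_PATH = "abfss://lake@example.dfs.core.windows.net/bronze/customers"
SILVER_PATH = "abfss://lake@example.dfs.core.windows.net/silver/customers"

# Keep only the latest Bronze record per business key.
latest = Window.partitionBy("customer_id").orderBy(F.col("ingest_ts").desc())
updates = (
    spark.read.format("delta").load(BRONZE_PATH)
    .withColumn("rn", F.row_number().over(latest))
    .filter("rn = 1")
    .drop("rn")
)

# Upsert into Silver so reruns stay idempotent.
silver = DeltaTable.forPath(spark, SILVER_PATH)
(
    silver.alias("s")
    .merge(updates.alias("u"), "s.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```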
2. Data Integration & Pipeline Development
Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
Optimize pipelines for cost-effectiveness, performance, and scalability.
3. Master Data Management (MDM) & Data Governance
Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
Define and manage data governance, metadata, and data quality frameworks.
Partner with business teams to align data standards and maintain data integrity across domains.
4. API Management & Integration
Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
Design secure, reliable data services for internal and external consumers.
Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.
5. Database & Platform Administration
Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
Monitor and optimize cost, performance, and scalability across Azure data services.
Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.
6. Collaboration & Leadership
Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
Mentor junior engineers and define best practices for coding, data modeling, and solution design.
Contribute to enterprise-wide data strategy and roadmap development.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
5+ years of hands-on experience in Azure-based data engineering and architecture.
Strong proficiency with the following:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2
SQL, Python, PySpark, PowerShell
Azure API Management and Logic Apps
Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
Familiarity with MDM concepts, data governance frameworks, and metadata management.
Experience with automation, data-focused CI/CD, and IaC.
Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.
What We Offer
Competitive compensation and benefits package
Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
Data Engineer
Austin, TX jobs
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables (a minimal Delta Live Tables sketch follows this list)
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
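As a hedged sketch of the Delta Live Tables item above, the following defines a streaming Bronze table and a Silver table with a data quality expectation. It only runs inside a Databricks DLT pipeline, where the spark session is provided by the runtime; the source path, table names, and expectation rule are hypothetical.

```python
import dlt
from pyspark.sql import functions as F

RAW_PATH = "/Volumes/example/raw/transactions"  # hypothetical landing location


@dlt.table(name="bronze_transactions", comment="Raw transactions ingested with Auto Loader")
def bronze_transactions():
    # `spark` is injected by the DLT runtime; Auto Loader picks up new files incrementally.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load(RAW_PATH)
    )


@dlt.table(name="silver_transactions", comment="Validated, typed transactions")
@dlt.expect_or_drop("valid_amount", "amount IS NOT NULL AND amount >= 0")
def silver_transactions():
    # Enforce a simple quality rule and standardize types on the way into Silver.
    return (
        dlt.read_stream("bronze_transactions")
        .withColumn("transaction_ts", F.to_timestamp("transaction_ts"))
        .dropDuplicates(["transaction_id"])
    )
```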
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Scientist
Alhambra, CA jobs
Title: Principal Data Scientist
Duration: 12 Months Contract
Additional Information
California Resident Candidates Only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
Job description:
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.
The Principal Data Scientist will possess:
Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights
Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles
Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams
Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel
A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight
A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables
Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts
Experience Required:
Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
Education Required & certifications:
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics, such as:
Microsoft Azure Data Scientist Associate (DP-100)
Databricks Certified Data Scientist or Machine Learning Professional
AWS Machine Learning Specialty
Google Professional Data Engineer
Or equivalent advanced analytics certifications
The certification is required and may not be substituted with additional experience.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Name: T Saketh Ram Sharma
Email: *****************************
Internal Id: 25-54101
Data Architect
Cincinnati, OH jobs
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
MOSTLY REMOTE WITH 1 DAY/MONTH ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavily used production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing).
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms outside the scope of this project.