
Data Engineer jobs at Lee Hecht Harrison

- 4886 jobs
  • ETL/ELT Data Engineer (Secret Clearance) - Hybrid

    LaunchCode 2.9 company rating

    Austin, TX

    LaunchCode is recruiting for a Software Data Engineer to work at one of our partner companies!

    Details:
    - Full-Time W2, salaried
    - Immediate opening
    - Hybrid - Austin, TX (onsite 1-2 times a week)
    - Pay: $85K-$120K
    - Minimum experience: 4 years
    - Security clearance: Active DoD Secret Clearance

    Disclaimer: Please note that we are unable to provide work authorization or sponsorship for this role, now or in the future. Candidates requiring current or future sponsorship will not be considered.

    Job Summary: A Washington, DC-based software solutions provider founded in 2017 specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, the company focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end users, with innovation driven by its innovation centers. The company has a presence in Boston, MA; Colorado Springs, CO; San Antonio, TX; and St. Louis, MO.

    Why the company?
    - Environment of autonomy
    - Innovative commercial approach
    - People over process

    We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory, a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto "By Soldiers, For Soldiers," ASWF equips service members to develop mission-critical software solutions independently, which is especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.

    Required Skills:
    - Active DoD Secret Clearance (required)
    - 4+ years of experience in data science, data engineering, or similar roles
    - Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow
    - Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra)
    - Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow)
    - Solid understanding of data warehousing concepts (e.g., Kimball and Inmon methodologies) and experience with data modeling for analytical purposes
    - Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation
    - CompTIA Security+ certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant

    Nice to Have:
    - Familiarity with SBIR technologies and transformative platform shifts
    - Experience working in Agile or DevSecOps environments
    - 2+ years of experience interfacing with Platform Engineers and data visibility teams, managing AWS resources, and administering GitLab

    #LI-hybrid #austintx #ETLengineer #dataengineer #army #aswf #clearancejobs #clearedjobs #secretclearance #ETL
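The posting above centers on building ETL/ELT pipelines in Python and SQL. As a minimal, hypothetical sketch of the extract-transform-load pattern such a role involves (the records, table name, and cleaning rules are invented for illustration, not taken from the posting):

```python
import sqlite3

# Hypothetical raw records, standing in for an upstream extract (API, files, etc.)
RAW_ROWS = [
    {"id": 1, "city": "austin", "amount": "120.50"},
    {"id": 2, "city": "Austin", "amount": "80.00"},
    {"id": 3, "city": "boston", "amount": None},  # incomplete record
]

def transform(rows):
    """Clean and normalize raw records, dropping rows that fail validation."""
    cleaned = []
    for row in rows:
        if row["amount"] is None:  # basic data-quality gate
            continue
        cleaned.append({
            "id": row["id"],
            "city": row["city"].title(),     # normalize casing
            "amount": float(row["amount"]),  # enforce numeric type
        })
    return cleaned

def load(rows, conn):
    """Idempotently load cleaned rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, city TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO sales (id, city, amount) VALUES (:id, :city, :amount)",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(RAW_ROWS), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

The `INSERT OR REPLACE` keyed on `id` keeps the load idempotent, so a rerun after a partial failure does not duplicate rows, which is the property production ETL jobs of this kind typically need.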
    $85k-120k yearly 5d ago
  • Senior Data Engineer

    Bayforce 4.4 company rating

    Charlotte, NC

    **NO 3rd Party vendor candidates or sponsorship**

    Role Title: Senior Data Engineer
    Client: Global construction and development company
    Employment Type: Contract
    Duration: 1 year
    Preferred Location: Remote, based in ET or CT time zones

    Role Description: The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.

    Key Responsibilities:
    - Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
    - Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
    - Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
    - Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
    - Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
    - Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
    - Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
    - Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.

    Requirements:
    - Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
    - 5+ years of experience in data engineering with a strong focus on Azure Cloud.
    - Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
    - Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark) and modern cloud data platform patterns.
    - Advanced PySpark/Spark experience for complex transformations and performance optimization.
    - Heavy experience with API-based integrations (building ingestion frameworks; handling auth, pagination, retries, rate limits, and resiliency).
    - Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
    - Strong understanding of cloud data architectures, including Data Lake, Lakehouse, and Data Warehouse patterns.

    Preferred Skills:
    - Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
    - Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
    - Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
    - Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
    - Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
    $70k-93k yearly est. 2d ago
  • Senior Data Engineer

    Concert 4.0 company rating

    Nashville, TN

    Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at ***************

    YOUR ROLE: Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights. You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology, and evidence-based clinical care.

    As a Senior Data Engineer you will:
    - Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams.
    - Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
    - Utilize dbt (data build tool) for data transformation and modeling, ensuring data quality and consistency.
    - Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
    - Work with SageMaker for machine learning data preparation and integration.
    - Ensure data security, privacy, and compliance with industry regulations.
    - Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
    - Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
    - Implement best practices for data engineering, including code reviews, testing, and automation.
    - Mentor junior data engineers and share knowledge on data engineering best practices.
    - Stay up to date with the latest advancements in data engineering, AWS services, and related technologies.

    After 3 months on the job you will have:
    - Developed a strong understanding of Concert's data engineering infrastructure
    - Learned the business domain and how it maps to the information architecture
    - Made material contributions towards existing key results

    After 6 months you will have:
    - Led a major initiative
    - Become the first point of contact when issues related to the data warehouse are identified

    After 12 months you will have:
    - Taken responsibility for the long-term direction of the data engineering infrastructure
    - Proposed and executed key results with an understanding of the business strategy
    - Communicated the business value of major technical initiatives to key non-technical business stakeholders

    WHAT LEADS TO SUCCESS
    - Self-Motivated: A team player with a positive attitude and a proactive approach to problem-solving.
    - Executes Well: You are biased to action and get things done. You acknowledge unknowns and recover from setbacks well.
    - Comfort with Ambiguity: You aren't afraid of uncertainty and blazing new trails; you care about building towards a future that is different from today.
    - Technical Bravery: You are comfortable with new technologies and eager to dive in to understand data in the raw and in its processed states.
    - Mission-Focused: You are personally motivated to drive more affordable, equitable, and effective integration of genomic technologies into clinical care.
    - Effective Communication: You build rapport and great working relationships with senior leaders and peers, and use the relationships you've built to drive the company forward.

    RELEVANT SKILLS & EXPERIENCE
    - Minimum of 4 years' experience working as a data engineer
    - Bachelor's degree in software or data engineering, or comparable technical certification/experience
    - Ability to effectively communicate complex technical concepts to both technical and non-technical audiences
    - Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
    - Strong understanding of data warehousing principles and best practices
    - Experience with dbt for data transformation and modeling
    - Proficiency in SQL and at least one programming language (e.g., Python, Scala)
    - Familiarity or experience with the following tools/concepts is a plus: BI tools such as Metabase; healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; Zero-ETL and Kinesis data streams

    COMPENSATION: Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.

    LOCATION: Concert is based in Nashville, Tennessee and supports a remote work environment. For further questions, please contact: ******************.
    $75k-102k yearly est. 5d ago
  • Junior Data Engineer

    Brooksource 4.1 company rating

    Columbus, OH

    Contract-to-Hire | Columbus, OH (Hybrid)

    Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes.

    Job Responsibilities:
    - Automate key processes and enhance data quality
    - Improve injection processes and enhance machine learning capabilities
    - Manage substitutions and allocations to streamline product ordering
    - Work on logistics-related data engineering tasks
    - Build and maintain ML models for predictive analytics
    - Interface with various customer systems
    - Collaborate on integrating AI models into customer service

    Qualifications:
    - Bachelor's degree in a related field
    - 0-2 years of relevant experience
    - Proficiency in SQL and Python
    - Understanding of GCP/BigQuery (or any cloud experience; basic certifications a plus)
    - Knowledge of data science concepts
    - Business acumen and understanding (corporate experience or internship preferred)
    - Familiarity with Tableau
    - Strong analytical skills
    - Aptitude for collaboration and knowledge sharing
    - Ability to present confidently in front of leaders

    Why Should You Apply?
    - You will be part of custom technical training and professional development through our Elevate Program!
    - Start your career with a Fortune 15 company!
    - Access to cutting-edge technologies
    - Opportunity for career growth

    Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws.
    $86k-117k yearly est. 2d ago
  • Senior Data Engineer

    Brooksource 4.1 company rating

    Bethlehem, PA

    Hybrid (Bethlehem, PA) | Contract
    Applicants must be authorized to work in the U.S. without sponsorship.

    We're looking for a Senior Data Engineer to join our growing technology team and help shape the future of our enterprise data landscape. This is a hands-on, high-impact opportunity to make recommendations and to build and evolve a modern data platform using Snowflake and cloud-based EDW solutions.

    How You'll Impact Results:
    - Drive the evolution and architecture of scalable, secure, cloud-native data platforms
    - Design, build, and maintain data models, pipelines, and integration patterns across the data lake, data warehouse, and consumption layers
    - Lead deployment of long-term data products and infuse data and analytics capabilities across business and IT
    - Optimize data pipelines and warehouse performance for accuracy, accessibility, and speed
    - Collaborate cross-functionally to deliver data, experimentation, and analytics solutions
    - Implement systems to monitor data quality and ensure reliability and availability of production data for downstream users, leadership teams, and business processes
    - Recommend and implement best practices for query performance, storage, and resource efficiency
    - Test and clearly document data assets, pipelines, and architecture to support usability and scale
    - Engage across project phases and serve as a key contributor in strategic data architecture initiatives

    Your Qualifications That Will Ensure Success:

    Required:
    - 10+ years of experience in Information Technology data engineering: professional database and data warehouse development
    - Advanced proficiency in SQL, data modeling, and performance tuning
    - Experience in system configuration, security administration, and performance optimization
    - Deep experience with Snowflake and modern cloud data platforms (AWS, Azure, or GCP)
    - Familiarity with developing cloud data applications (AWS, Azure, Google Cloud) and/or standard CI/CD tools like Azure DevOps or GitHub
    - Strong analytical, problem-solving, and documentation skills
    - Proficiency with Microsoft Excel and common data analysis tools
    - Ability to troubleshoot technical issues and provide system support to non-technical users

    Preferred:
    - Experience integrating SAP ECC data into cloud-native platforms
    - Exposure to AI/ML, API development, or Boomi Atmosphere
    - Prior experience in consumer packaged goods (CPG), the food/beverage industry, or manufacturing
    $91k-126k yearly est. 5d ago
  • Data Engineer

    Brooksource 4.1 company rating

    New York, NY

    Data Engineer - Data Migration Project
    6-Month Contract (ASAP Start)
    Hybrid - Manhattan, NY (3 days/week)

    We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.

    Minimum Qualifications:
    - Advanced proficiency in SQL, Python, and SOQL
    - Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
    - Experience building and optimizing ETL workflows and pipelines
    - Familiarity with Tableau for analytics and visualization
    - Strong understanding of data migration and transformation best practices
    - Ability to identify and resolve discrepancies between data environments
    - Excellent analytical, troubleshooting, and communication skills

    Responsibilities:
    - Modify and migrate existing workflows and pipelines from Redshift to Databricks.
    - Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
    - Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
    - Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
    - Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
    - Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
    - Apply a "build from scratch" mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
    - Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.

    What's in it for you?
    - Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
    - Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
    - Collaborative environment with visibility across analytics, operations, and engineering functions.
    - Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.

    EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws.
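A core task in the migration posting above is resolving "discrepancies between data environments" when tables move from Redshift to Databricks. One common approach (sketched here in plain Python over invented sample rows; in practice the row sets would come from SQL queries against each platform) is to reconcile each migrated table by row count and an order-insensitive content fingerprint:

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive fingerprint of a table: row count plus a hash of sorted rows."""
    canon = sorted(repr(sorted(r.items())) for r in rows)
    digest = hashlib.sha256("\n".join(canon).encode()).hexdigest()
    return len(rows), digest

def reconcile(source_rows, target_rows):
    """Compare a migrated table against its source and report any discrepancy."""
    src_count, src_hash = table_fingerprint(source_rows)
    tgt_count, tgt_hash = table_fingerprint(target_rows)
    if src_count != tgt_count:
        return f"row count mismatch: source={src_count}, target={tgt_count}"
    if src_hash != tgt_hash:
        return "row contents differ"
    return "ok"

# Invented sample data: the target environment is missing one row.
source = [{"id": 1, "bets": 10}, {"id": 2, "bets": 7}]
target = [{"id": 1, "bets": 10}]
```

Because the fingerprint sorts rows before hashing, it tolerates the different physical orderings the two engines may return, while still catching missing rows and value drift.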
    $101k-140k yearly est. 1d ago
  • Data Engineer

    Inceed 4.1 company rating

    Denver, CO

    Data Engineer
    Compensation: $80-$90/hour, depending on experience

    Inceed has partnered with a great company to help find a skilled Data Engineer to join their team! Join a dynamic team as a contract Data Engineer, where you'll be the backbone of data-driven operations. This role offers the opportunity to work with a modern tech stack in a hybrid on-prem and cloud environment. You'll design and implement innovative solutions to complex challenges, collaborating with data scientists, location intelligence experts, and ML engineers. This exciting opportunity has opened due to a new project initiative, and you'll be making a tangible impact.

    Key Responsibilities & Duties:
    - Design and deploy scalable data pipelines and architectures
    - Collaborate with stakeholders to deliver high-impact data solutions
    - Integrate data from multiple sources, ensuring quality and reliability
    - Develop automation workflows and BI solutions
    - Mentor others and contribute to the knowledge base
    - Explore and implement emerging technologies

    Required Qualifications & Experience:
    - 8+ years of experience in data engineering
    - Experience with large oil and gas datasets
    - Proficiency in SQL and Python
    - Hands-on experience in cloud environments (Azure, AWS, or GCP)
    - Familiarity with Apache Kafka, Apache Flink, or Azure Event Hubs

    Nice to Have Skills & Experience:
    - Experience with Palantir Foundry
    - Knowledge of query federation platforms
    - Experience with modern data stack tools like dbt or Airflow

    Perks & Benefits:
    - 3 different medical health insurance plans, plus dental and vision insurance
    - Voluntary and long-term disability insurance
    - Paid time off, 401k, and holiday pay
    - Weekly direct deposit or pay card deposit

    If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.

    We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them. Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
    $80-90 hourly 3d ago
  • Data Engineer

    Inceed 4.1 company rating

    Denver, CO

    Data Engineer
    Compensation: $80-$90/hour, depending on experience

    Inceed has partnered with a great energy company to help find a skilled Data Engineer to join their team! Join a dynamic team where you'll be at the forefront of data-driven operations. This role offers the autonomy to design and implement groundbreaking data architectures, working primarily remotely. This position is open due to exciting new projects. You'll be collaborating with data scientists and engineers, making impactful contributions to the company's success.

    Key Responsibilities & Duties:
    - Design and deploy scalable data pipelines and architectures
    - Collaborate with stakeholders to deliver high-impact data solutions
    - Integrate data from various sources, ensuring consistency and reliability
    - Develop automation workflows and BI solutions
    - Mentor others and advise on data process best practices
    - Explore and implement emerging technologies

    Required Qualifications & Experience:
    - 8+ years of data engineering experience
    - Experience with PI
    - Experience with SCADA
    - Experience with Palantir
    - Experience with large oil and gas datasets
    - Proficiency in Python and SQL
    - Hands-on experience in cloud environments (Azure, AWS, GCP)

    Nice to Have Skills & Experience:
    - Familiarity with Apache Kafka or Flink

    Perks & Benefits:
    - 3 different medical health insurance plans, plus dental and vision insurance
    - Voluntary and long-term disability insurance
    - Paid time off, 401k, and holiday pay
    - Weekly direct deposit or pay card deposit

    If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.

    We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
    $80-90 hourly 1d ago
  • Data Engineer

    Ztek Consulting 4.3 company rating

    Hamilton, NJ

    Key Responsibilities:
    - Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
    - Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
    - Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
    - Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
    - Manage FTP/SFTP file transfers between internal systems and external vendors.
    - Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
    - Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.

    Required Skills & Experience:
    - 10+ years of experience in data engineering or production support within financial services or trading environments.
    - Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
    - Strong Python and SQL programming skills.
    - Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
    - Experience with Git, CI/CD pipelines, and Azure DevOps.
    - Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
    - Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
    - Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
    - Excellent communication, problem-solving, and stakeholder management skills.
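The responsibilities above revolve around keeping vendor file feeds complete and timely before batch jobs run. A minimal, hypothetical pre-flight check of that kind (the feed names, file patterns, and arrival data are invented for illustration, not Bloomberg specifics):

```python
from datetime import date

# Hypothetical manifest of vendor files a nightly batch expects, keyed by feed name.
EXPECTED_FEEDS = {
    "eod_prices": "prices_{d}.csv",
    "corp_actions": "corp_actions_{d}.csv",
}

def preflight_check(arrived_files, run_date):
    """Return a list of problems blocking the batch: missing or empty feed files.

    arrived_files maps file name -> size in bytes, as an inventory of what landed
    on the SFTP drop for this run date.
    """
    problems = []
    for feed, pattern in EXPECTED_FEEDS.items():
        name = pattern.format(d=run_date.isoformat())
        if name not in arrived_files:
            problems.append(f"{feed}: missing {name}")
        elif arrived_files[name] == 0:  # a zero-byte file usually means a failed transfer
            problems.append(f"{feed}: {name} is empty")
    return problems

# Invented arrival state: prices landed, the corporate actions file is empty.
arrived = {"prices_2024-01-02.csv": 10_240, "corp_actions_2024-01-02.csv": 0}
issues = preflight_check(arrived, date(2024, 1, 2))
```

Running such a gate before the Databricks/ADF batch starts turns silent partial loads into explicit, alertable failures, which is the practical core of the "data quality, completeness, and timeliness" duty listed above.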
    $89k-125k yearly est. 3d ago
  • Data Engineer

    Addison Group 4.6 company rating

    Coppell, TX

    Title: Data Engineer
    Assignment Type: 6-12 month contract-to-hire
    Compensation: $65/hr-$75/hr W2
    Work Model: Hybrid (4 days on-site, 1 day remote)
    Benefits: Medical, Dental, Vision, 401(k)

    What we need is someone who comes with 8+ years of experience in the data engineering space and specializes in Microsoft Azure and Databricks. This person will be a part of multiple initiatives for the "New Development" and "Data Reporting" teams but will be primarily tasked with designing, building, maintaining, and automating their enterprise data architecture/pipelines within the cloud. Technology-wise, candidates need to come with skills in Azure Databricks (5+ years), a cloud-based environment (Azure and/or AWS), Azure DevOps (ADO), SQL (ETL, SSIS packages), and PySpark or Scala automation, plus architecture experience in building pipelines, data modeling, data pipeline deployment, data mapping, etc.

    Top Skills:
    - 8+ years of data engineering/business intelligence
    - Databricks and Azure Data Factory (most up to date is Unity Catalog for Databricks)
    - Cloud-based environments (Azure or AWS)
    - Data pipeline architecture and CI/CD methodology
    - SQL
    - Automation (Python (PySpark), Scala)
    $65-75 hourly 3d ago
  • Data Engineer

    Mastech Digital 4.7 company rating

    Jersey City, NJ

    Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.

    Role: Data Engineer
    Location: Merrimack, NH / Smithfield, RI / Jersey City, NJ
    Duration: Full-Time/W2

    Job Description:

    Must-Haves:
    - Python for running ETL batch jobs
    - Heavy SQL for data analysis, validation, and querying
    - AWS and the ability to move data through the data stages and into the target databases; Postgres is the target database, so that is required

    Nice-to-Haves:
    - Snowflake
    - Java for API development (will teach this)
    - Experience in asset management for domain knowledge
    - Production support: debugging and processing of vendor data

    The Expertise and Skills You Bring:
    - A proven foundation in data engineering: bachelor's degree or higher preferred, 10+ years' experience
    - Extensive experience with ETL technologies; design and develop ETL reporting and analytics solutions
    - Knowledge of data warehousing methodologies and concepts (preferred)
    - Advanced data manipulation languages and frameworks (Java, Python, JSON) (required)
    - RDBMS experience (Snowflake, PostgreSQL) (required)
    - Knowledge of cloud platforms and services (AWS: IAM, EC2, S3, Lambda, RDS) (required)
    - Designing and developing low- to moderate-complexity data integration solutions (required)
    - Experience with DevOps and Continuous Integration/Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) (preferred)
    - Expert in SQL and stored procedures on any relational database
    - Strong debugging, analysis, and production support skills
    - Application development based on JIRA stories (Agile environment)
    - Demonstrable experience with ETL tools (Informatica, SnapLogic)
    - Experience working with Python in an AWS environment
    - Create, update, and maintain technical documentation for software-based projects and products
    - Solve production issues
    - Interact effectively with business partners to understand business requirements and assist in generation of technical requirements
    - Participate in architecture, technical design, and product implementation discussions
    - Working knowledge of Unix/Linux operating systems and shell scripting
    - Experience developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipelines, including software configuration management, test automation, version control, and static code analysis
    - Excellent interpersonal and communication skills
    - Ability to work with global Agile teams
    - Proven ability to deal with ambiguity and work in a fast-paced environment
    - Ability to mentor junior data engineers

    The Value You Deliver:
    - Help the team design and build best-in-class data solutions using a diversified tech stack
    - Strong experience working in large teams and proven technical leadership capabilities
    - Knowledge of enterprise-level implementations like data warehouses and automated solutions
    - Ability to negotiate, influence, and work with business peers and management
    - Ability to develop and drive a strategy according to the needs of the team

    Good to have: Full-stack programming knowledge; hands-on test case/plan preparation within Jira
    $81k-105k yearly est. 4d ago
  • Senior Data Engineer

    Addison Group 4.6 company rating

    Houston, TX

    About the Role
    The Senior Data Engineer will play a critical role in building and scaling an enterprise data platform to enable analytics, reporting, and operational insights across the organization. This position requires deep expertise in Snowflake and cloud technologies (AWS or Azure), along with strong upstream oil & gas domain experience. The engineer will design and optimize data pipelines, enforce data governance and quality standards, and collaborate with cross-functional teams to deliver reliable, scalable data solutions.

    Key Responsibilities
    Data Architecture & Engineering
    - Design, develop, and maintain scalable data pipelines using Snowflake, AWS/Azure, and modern data engineering tools.
    - Implement ETL/ELT processes integrating data from upstream systems (SCADA, production accounting, drilling, completions, etc.).
    - Architect data models supporting both operational reporting and advanced analytics.
    - Establish and maintain frameworks for data quality, validation, and lineage to ensure enterprise data trust.

    Platform Development & Optimization
    - Lead the build and optimization of Snowflake-based data warehouses for performance and cost efficiency.
    - Design cloud-native data solutions leveraging AWS/Azure services (S3, Lambda, Azure Data Factory, Databricks).
    - Manage large-scale time-series and operational data processing workflows.
    - Implement strong security, access control, and governance practices.

    Technical Leadership & Innovation
    - Mentor junior data engineers and provide technical leadership across the data platform team.
    - Research and introduce new technologies to enhance platform scalability and automation.
    - Build reusable frameworks, components, and utilities to streamline delivery.
    - Support AI/ML initiatives by delivering production-ready, high-quality data pipelines.

    Business Partnership
    - Collaborate with stakeholders across business units to translate requirements into technical solutions.
    - Work with analysts and data scientists to enable self-service analytics and reporting.
    - Ensure data integration supports regulatory and compliance reporting.
    - Act as a bridge between business and technical teams to ensure alignment and impact.

    Qualifications & Experience
    Education
    - Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
    - Advanced degree or relevant certifications (SnowPro, AWS/Azure Data Engineer, Databricks) preferred.

    Experience
    - 7+ years in data engineering roles, with at least 3 years on cloud data platforms.
    - Proven expertise in Snowflake and at least one major cloud platform (AWS or Azure).
    - Hands-on experience with upstream oil & gas data (wells, completions, SCADA, production, reserves, etc.).
    - Demonstrated success delivering operational and analytical data pipelines.

    Technical Skills
    - Advanced SQL and Python programming skills.
    - Strong background in data modeling, ETL/ELT, cataloging, lineage, and data security.
    - Familiarity with Airflow, Azure Data Factory, or similar orchestration tools.
    - Experience with CI/CD, Git, and automated testing.
    - Knowledge of BI tools such as Power BI, Spotfire, or Tableau.
    - Understanding of AI/ML data preparation and integration.
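The ETL/ELT pattern this posting centers on can be sketched at toy scale: in an ELT flow, raw records are loaded into the warehouse first and then transformed in place with SQL. The sketch below uses the standard library's in-memory SQLite as a stand-in warehouse; the table and column names are invented for illustration and are not from the posting.

```python
import sqlite3

# Stand-in warehouse; a real pipeline would target Snowflake or similar.
conn = sqlite3.connect(":memory:")

# Load step: land raw records as-is (hypothetical well-production rows).
conn.execute("CREATE TABLE raw_production (well_id TEXT, oil_bbl REAL, day TEXT)")
conn.executemany(
    "INSERT INTO raw_production VALUES (?, ?, ?)",
    [("W-1", 120.0, "2024-01-01"), ("W-1", 110.0, "2024-01-02"), ("W-2", 95.0, "2024-01-01")],
)

# Transform step: done inside the warehouse with SQL after loading (the "LT" of ELT).
conn.execute("""
    CREATE TABLE well_daily_avg AS
    SELECT well_id, AVG(oil_bbl) AS avg_oil_bbl
    FROM raw_production
    GROUP BY well_id
""")
rows = dict(conn.execute("SELECT well_id, avg_oil_bbl FROM well_daily_avg"))
print(rows)  # {'W-1': 115.0, 'W-2': 95.0}
```

The design choice ELT makes, as opposed to ETL, is to defer transformation until the data is in the warehouse, where set-based SQL and the platform's compute can do the heavy lifting.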
    $86k-125k yearly est. 5d ago
  • IT Data Engineer

    Inceed (4.1 company rating)

    Lakewood, CO jobs

    IT Data Engineer
    Compensation: $125k-$155k (DOE)

    Inceed has partnered with a great company to help find a skilled IT Data Engineer to join their team! Join a dynamic team where innovation meets opportunity. This role is pivotal in advancing AI and data modernization initiatives, bridging traditional database administration with cutting-edge AI data infrastructure. The team thrives on collaboration and offers a hybrid work schedule.

    Key Responsibilities & Duties:
    - Design and maintain scalable data pipelines.
    - Develop RAG workflows for AI information access.
    - Build secure connectors and APIs for data retrieval.
    - Monitor and optimize data flows for consistency.
    - Lead database administration and performance tuning.
    - Manage database upgrades and storage optimization.
    - Implement database security controls and standards.
    - Support application integrations and data migrations.
    - Define and maintain data models and metadata.
    - Collaborate with teams to meet compliance requirements.

    Required Qualifications & Experience:
    - Bachelor's degree in Computer Science or a related field.
    - 7+ years in database administration or data engineering.
    - Advanced SQL and data modeling skills.
    - Experience with AI and analytics data pipelines.
    - Familiarity with cloud-based data ecosystems.
    - Hands-on experience with RAG and vectorization.
    - Proficiency in scripting languages such as Python.
    - Experience leading vendor-to-internal transitions.

    Nice to Have Skills & Experience:
    - Experience integrating enterprise systems into data platforms.
    - Knowledge of data governance frameworks.
    - Understanding of semantic data modeling.
    - Experience with cloud migration of database workloads.

    Perks & Benefits:
    This opportunity includes a comprehensive and competitive benefits package; details will be shared during later stages of the hiring process.

    Other Information:
    - Hybrid work schedule
    - This position requires a background check and drug test

    If you are interested in learning more about the IT Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time. We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. Our mission is simple: we're here to help every person, whether client, candidate, or employee, find and secure what's better for them. Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
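The RAG workflows this posting mentions reduce, at their core, to a retrieval step: rank stored documents against a query by vector similarity and hand the best match to a generation step. The sketch below is a minimal stdlib illustration under heavy assumptions: the document names and hand-written 3-d "embeddings" are invented, and a real system would use an embedding model and a vector store.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy 3-d "embeddings" keyed by document name (hypothetical content).
documents = {
    "backup policy":  [0.9, 0.1, 0.0],
    "index tuning":   [0.1, 0.8, 0.2],
    "access control": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of the user's question, hand-made here

# Retrieval step: pick the document most similar to the query; the
# generation step (feeding it to an LLM as context) is omitted.
best = max(documents, key=lambda name: cosine(query, documents[name]))
print(best)  # backup policy
```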
    $125k-155k yearly 3d ago
  • Data Engineer

    Sharp Decisions (4.6 company rating)

    New York, NY jobs

    Hey all, we are looking for a mid-level Data Engineer. No third parties.

    As a result of this expansion, we are seeking experienced Data Engineers with 5+ years of relevant experience to support the design and development of a strategic data platform for SMBC Capital Markets and Nikko Securities Group.

    Qualifications and Skills
    - Proven experience as a Data Engineer, including experience with the Azure cloud.
    - Experience implementing solutions using:
      - Azure cloud services
      - Azure Data Factory
      - Azure Data Lake Storage Gen2
      - Azure databases
      - Azure Data Fabric
      - API gateway management
      - Azure Functions
    - Well versed in Azure Databricks.
    - Strong SQL skills with RDBMS or NoSQL databases.
    - Experience developing APIs using FastAPI or similar frameworks in Python.
    - Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes.
    - Good understanding of ETL/ELT processes.
    - Experience in the financial services industry, financial instruments, asset classes, and market data is a plus.
    $85k-111k yearly est. 5d ago
  • Data Engineer

    Richard, Wayne & Roberts (4.3 company rating)

    Houston, TX jobs

    Python Data Engineer - Houston, TX (Onsite Only)

    A global energy and commodities organization is seeking an experienced Python Data Engineer to expand and optimize data assets that support high-impact analytics. This role works closely with traders, analysts, researchers, and data scientists to translate business needs into scalable technical solutions. The position is fully onsite due to the collaborative, fast-paced nature of the work. Candidates MUST come from an oil & gas organization; a commodity trading firm background is preferred. CANNOT do C2C.

    Key Responsibilities
    - Build modular, reusable Python components to connect external data sources with internal tools and databases.
    - Partner with business stakeholders to define data ingestion and access requirements.
    - Translate business requirements into well-designed technical deliverables.
    - Maintain and enhance the central Python codebase following established standards.
    - Contribute to internal developer tools and ETL frameworks, helping standardize and consolidate core functionality.
    - Collaborate with global engineering teams and participate in internal Python community initiatives.

    Qualifications
    - 7+ years of professional Python development experience.
    - Strong background in data engineering and pipeline development.
    - Experience with web scraping tools (Requests, BeautifulSoup, Selenium).
    - Hands-on Oracle PL/SQL development, including stored procedures.
    - Strong grasp of object-oriented design, design patterns, and service-oriented architectures.
    - Experience with Agile/Scrum, code reviews, version control, and issue tracking.
    - Familiarity with scientific computing libraries (Pandas, NumPy).
    - Excellent communication skills.
    - Industry experience in energy or commodities preferred.
    - Exposure to containerization (Docker, Kubernetes) is a plus.
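The scraping work this role describes amounts to fetching pages and extracting structured values from markup. The posting names Requests and BeautifulSoup; the sketch below substitutes the standard library's `html.parser` so it stays self-contained, and both the HTML fragment and its `price` class are hypothetical.

```python
from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Collects the text of every <td class="price"> cell as a float."""

    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; flag the next data chunk.
        if tag == "td" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(float(data.strip()))
            self.in_price = False

# Hypothetical page fragment; a real scraper would fetch this over HTTP.
html = '<table><tr><td class="price">76.40</td><td class="price">75.95</td></tr></table>'
parser = PriceExtractor()
parser.feed(html)
print(parser.prices)  # [76.4, 75.95]
```

With BeautifulSoup the same extraction would be a one-liner over `soup.select("td.price")`; the parser-callback version above just avoids the third-party dependency.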
    $83k-120k yearly est. 5d ago
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Jersey City, NJ jobs

    Title: Senior Azure Data Engineer
    Client: Major Japanese bank
    Experience Level: Senior (10+ years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    - Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    - Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    - Ensure data security, compliance, lineage, and governance controls.
    - Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    - Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    - 10+ years of data engineering experience.
    - Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    - Azure certifications strongly preferred.
    - Strong SQL, Python, and cloud data architecture skills.
    - Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 5d ago
  • Senior Data Engineer

    Luna Data Solutions, Inc. (4.4 company rating)

    Austin, TX jobs

    We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX. The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.

    Key Responsibilities
    1. Data Architecture & Strategy
    - Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
    - Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing.
    - Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
    - Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.

    2. Data Integration & Pipeline Development
    - Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
    - Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
    - Optimize pipelines for cost-effectiveness, performance, and scalability.

    3. Master Data Management (MDM) & Data Governance
    - Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
    - Define and manage data governance, metadata, and data quality frameworks.
    - Partner with business teams to align data standards and maintain data integrity across domains.

    4. API Management & Integration
    - Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
    - Design secure, reliable data services for internal and external consumers.
    - Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.

    5. Database & Platform Administration
    - Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
    - Monitor and optimize cost, performance, and scalability across Azure data services.
    - Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.

    6. Collaboration & Leadership
    - Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
    - Mentor junior engineers and define best practices for coding, data modeling, and solution design.
    - Contribute to enterprise-wide data strategy and roadmap development.

    Required Qualifications
    - Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
    - 5+ years of hands-on experience in Azure-based data engineering and architecture.
    - Strong proficiency with: Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2; SQL, Python, PySpark, PowerShell; Azure API Management and Logic Apps.
    - Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
    - Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
    - Familiarity with MDM concepts, data governance frameworks, and metadata management.
    - Experience with automation, data-focused CI/CD, and IaC.
    - Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.

    What We Offer
    Competitive compensation and benefits package.

    Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
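The medallion (Bronze/Silver/Gold) layering this posting asks for can be illustrated in plain Python: each layer refines the previous one, from raw landed records to cleaned, typed rows to a business-level aggregate. This is only a conceptual sketch; real implementations use Delta Lake tables on Databricks rather than Python lists, and the well names and values below are invented.

```python
# Bronze: raw records landed as-is, bad rows included, nothing dropped.
bronze = [
    {"well": "W-1", "oil_bbl": "120.0"},
    {"well": "W-1", "oil_bbl": "bad"},   # malformed record is preserved in Bronze
    {"well": "W-2", "oil_bbl": "95.0"},
]

# Silver: cleaned and typed; rows that fail validation are filtered out
# (a real pipeline would quarantine them for inspection instead).
silver = []
for row in bronze:
    try:
        silver.append({"well": row["well"], "oil_bbl": float(row["oil_bbl"])})
    except ValueError:
        pass

# Gold: business-level aggregate ready for reporting (total oil per well).
gold = {}
for row in silver:
    gold[row["well"]] = gold.get(row["well"], 0.0) + row["oil_bbl"]

print(gold)  # {'W-1': 120.0, 'W-2': 95.0}
```

The point of the layering is that each stage is reproducible from the one before it, so quality rules can be tightened without re-ingesting source data.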
    $74k-95k yearly est. 2d ago
  • Data Engineer

    Interactive Resources-IR (4.2 company rating)

    Austin, TX jobs

    About the Role
    We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice to have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $84k-111k yearly est. 2d ago
  • Data Scientist

    The Intersect Group (4.2 company rating)

    Phoenix, AZ jobs

    We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.

    Day-to-Day Responsibilities
    - Compare and validate numbers across multiple data systems
    - Investigate discrepancies and understand how metrics are derived
    - Perform data science and data analysis tasks
    - Build and maintain AI/ML models using Python
    - Interpret model results, fine-tune algorithms, and iterate based on findings
    - Validate and reconcile data from different sources to ensure accuracy
    - Work with sales and production data to produce item-level pricing recommendations
    - Support ongoing development of a new data warehouse and create queries as needed
    - Review Power BI dashboards (Power BI expertise not required)
    - Contribute to both ML-focused work and general data science responsibilities
    - Improve and refine an existing ML pricing model already in production

    Qualifications
    - Strong proficiency with MS SQL Server
    - Experience creating and deploying machine learning models in Python
    - Ability to interpret, evaluate, and fine-tune model outputs
    - Experience validating and reconciling data across systems
    - Strong foundation in machine learning, data modeling, and backend data operations
    - Familiarity with querying and working with evolving data environments
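A toy version of the pricing-model work this posting describes: fit a price/demand line by ordinary least squares, then score a candidate price point. The three data points are synthetic and chosen to fit a line exactly; production models would be trained on store- and item-level sales data and would use far richer features than price alone.

```python
# Hand-made (price, units sold) observations; hypothetical values.
prices = [1.0, 2.0, 3.0]
units = [10.0, 8.0, 6.0]

# Ordinary least squares for a single predictor:
# slope = cov(price, units) / var(price), intercept = mean residual.
n = len(prices)
mean_p = sum(prices) / n
mean_u = sum(units) / n
slope = sum((p - mean_p) * (u - mean_u) for p, u in zip(prices, units)) / \
        sum((p - mean_p) ** 2 for p in prices)
intercept = mean_u - slope * mean_p

# Predicted demand at a candidate price, the input to a pricing recommendation.
predicted = intercept + slope * 2.5
print(slope, intercept, predicted)  # -2.0 12.0 7.0
```

Fine-tuning a model like this in production means re-estimating the coefficients as new sales data arrives and validating the predictions against reconciled source systems, as the responsibilities above describe.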
    $76k-109k yearly est. 2d ago
  • Data Architect

    Optech (4.6 company rating)

    Cincinnati, OH jobs

    THIS IS A W2 (NOT C2C OR REFERRAL-BASED) CONTRACT OPPORTUNITY
    MOSTLY REMOTE, WITH 1 DAY/MONTH ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE
    RATE: $75-85/HR WITH BENEFITS

    We are seeking a highly skilled Data Architect to work in a consulting capacity to analyze, redesign, and optimize a medical payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.

    Responsibilities
    - Design and maintain scalable, secure, and high-performing data architectures.
    - Lead migration and modernization projects in heavily used production systems.
    - Develop and optimize data models, schemas, and integration strategies.
    - Implement data governance, security, and compliance standards.
    - Collaborate with business stakeholders to translate requirements into technical solutions.
    - Ensure data quality, consistency, and accessibility across systems.

    Required Qualifications
    - Bachelor's degree in Computer Science, Information Systems, or a related field.
    - Proven experience as a Data Architect or in a similar role.
    - Strong proficiency in SQL (query optimization, stored procedures, indexing).
    - Hands-on experience with Azure cloud services for data management and analytics.
    - Knowledge of data modeling, ETL processes, and data warehousing concepts.
    - Familiarity with security best practices and compliance frameworks.

    Preferred Skills
    - Understanding of Electronic Health Records (EHR) systems.
    - Understanding of Big Data technologies and modern data platforms outside the scope of this project.
    $75-85 hourly 1d ago
