
Data Engineer jobs at ICONMA

- 3974 jobs
  • ETL/ELT Data Engineer (Secret Clearance) - Hybrid

    LaunchCode 2.9 company rating

    Austin, TX

    LaunchCode is recruiting for a Software Data Engineer to work at one of our partner companies!

    Details:
    - Full-Time W2, Salary
    - Immediate opening
    - Hybrid - Austin, TX (onsite 1-2 times a week)
    - Pay: $85K-$120K
    - Minimum Experience: 4 years
    - Security Clearance: Active DoD Secret Clearance

    Disclaimer: Please note that we are unable to provide work authorization or sponsorship for this role, now or in the future. Candidates requiring current or future sponsorship will not be considered.

    Job Summary
    A Washington, DC-based software solutions provider founded in 2017 specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, the company focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end users, with innovation driven by its innovation centers. The company has a presence in Boston, MA; Colorado Springs, CO; San Antonio, TX; and St. Louis, MO.

    Why the company?
    - Environment of autonomy
    - Innovative commercial approach
    - People over process

    We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory, a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto “By Soldiers, For Soldiers,” ASWF equips service members to develop mission-critical software solutions independently - especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.

    Required Skills:
    - Active DoD Secret Clearance (required)
    - 4+ years of experience in data science, data engineering, or similar roles
    - Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow (a minimal Airflow sketch follows this listing)
    - Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra)
    - Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow)
    - Solid understanding of data warehousing concepts (e.g., Kimball and Inmon methodologies) and experience with data modeling for analytical purposes
    - Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation
    - CompTIA Security+ certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant

    Nice to Have:
    - Familiarity with SBIR technologies and transformative platform shifts
    - Experience working in Agile or DevSecOps environments
    - 2+ years of experience interfacing with platform engineers and data visibility teams, managing AWS resources, and administering GitLab

    #LI-hybrid #austintx #ETLengineer #dataengineer #army #aswf #clearancejobs #clearedjobs #secretclearance #ETL
    $85k-120k yearly 1d ago
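
As a rough illustration of the ETL/ELT pipeline work this listing asks for, here is a minimal Airflow sketch, assuming Airflow 2.x: an extract, transform, and load chain wired with PythonOperator. The DAG id, task logic, and row shape are invented placeholders, not details from the posting.

```python
# Minimal Airflow 2.x DAG sketch: extract -> transform -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Stand-in for pulling from an API or source database.
    return [{"id": 1, "amount": "42.50"}, {"id": 2, "amount": "17.00"}]

def transform(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    # Normalize types before loading.
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(ti, **context):
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Would load {len(rows)} rows into the warehouse here.")

with DAG(
    dag_id="example_etl",            # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ argument
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```
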
  • Senior Data Engineer

    Bayforce 4.4 company rating

    Charlotte, NC

    **NO 3rd Party vendor candidates or sponsorship**

    Role Title: Senior Data Engineer
    Client: Global construction and development company
    Employment Type: Contract
    Duration: 1 year
    Preferred Location: Remote, based in ET or CT time zones

    Role Description: The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering - from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers - while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.

    Key Responsibilities
    - Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
    - Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
    - Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling (see the ingestion sketch after this listing).
    - Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
    - Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
    - Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
    - Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
    - Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.

    Requirements:
    - Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
    - 5+ years of experience in data engineering with a strong focus on Azure Cloud.
    - Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
    - Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark) and modern cloud data platform patterns.
    - Advanced PySpark/Spark experience for complex transformations and performance optimization.
    - Heavy experience with API-based integrations (building ingestion frameworks; handling auth, pagination, retries, rate limits, and resiliency).
    - Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
    - Strong understanding of cloud data architectures, including Data Lake, Lakehouse, and Data Warehouse patterns.

    Preferred Skills
    - Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
    - Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
    - Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
    - Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
    - Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
    $70k-93k yearly est. 3d ago
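
The API-heavy ingestion work described above (auth, throttling, retries, pagination) can be sketched in a few lines of Python. This is a hedged sketch only: the endpoint URL, bearer-token auth, and cursor-style pagination are assumptions made for illustration.

```python
# Sketch of an API ingestion helper with auth, retries, backoff, and
# pagination. Endpoint, token handling, and response shape are invented.
import time

import requests

BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint

def fetch_all(token: str, max_retries: int = 5) -> list[dict]:
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    results, url = [], BASE_URL
    while url:
        for attempt in range(max_retries):
            resp = session.get(url, timeout=30)
            if resp.status_code == 429:  # throttled: honor Retry-After
                time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
                continue
            resp.raise_for_status()
            break
        else:
            raise RuntimeError(f"Giving up on {url} after {max_retries} tries")
        payload = resp.json()
        results.extend(payload["items"])  # assumed response shape
        url = payload.get("next")         # assumed cursor pagination
    return results
```
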
  • Guidewire DataHub/InfoCenter Engineer

    Aspire Systems 4.4 company rating

    Stockton, CA

    Hands-on experience on DataHub with the InfoCenter platform. Experience in production support, BAU, enhancement, and development. Works with businesses to identify detailed analytical and operational reporting/extract requirements. Collaborates with data analysts, architects, engineers, and business stakeholders to understand data requirements. Able to create complex Microsoft SQL / ETL / SSIS queries. Handles end-to-end loads.

    Qualifications
    - Experience with Snowflake and dbt (Data Build Tool).
    - 6-9 years of experience in P&C Insurance on the Guidewire DataHub/InfoCenter platform.
    - Must have at least one DHIC on-premise or cloud implementation experience.
    - Well versed with AWS services - working with S3 storage and the Aurora database.
    - Experience with SQL Server and Oracle databases.
    - Able to create PL/SQL stored procedures.
    - Hands-on experience with Guidewire ClaimCenter/PolicyCenter/BillingCenter data models.
    - SAP BODS ETL design & administration experience.
    - Data warehousing experience that includes analysis and development of dataflows and mappings using the needed BODS transformations.
    - Hands-on experience with data specifications.
    - Experience with DataHub and InfoCenter initial loads and delta loads.
    - Experience with the DataHub and InfoCenter Guidewire Commit and Rollback utility.
    - Experience extending entities & attributes in DataHub and InfoCenter.
    - Experienced in the Property & Casualty insurance industry.

    About Aspire Systems
    Aspire Systems is a $180+ million global technology services firm with over 4,500 employees worldwide, partnering with 275+ active customers. Founded in 1996, Aspire has grown steadily at a 19% CAGR since 2020. Headquartered in Singapore, we operate across the US, UK, LATAM, Europe, the Middle East, India, and Asia Pacific regions, with strong nearshore delivery centers in Poland and Mexico. Aspire has been consistently recognized among India's 100 Best Companies to Work For for 12 consecutive years by the Great Place to Work Institute.

    Who We Are
    Aspire is built on deep expertise in Software Engineering, Digital Services, Testing, and Infrastructure & Application Support. We serve diverse industries including Independent Software Vendors, Retail, Banking & Financial Services, and Insurance. Our proven frameworks and accelerators enable us to create future-ready, scalable, and business-focused systems, helping customers across the globe embrace digital transformation at speed and scale.

    What We Believe
    At the heart of Aspire is our philosophy of "Attention. Always." - a commitment to investing care and focus in our customers, employees, and society.

    Our Commitment to Diversity & Inclusion
    At Aspire Systems, we foster a work culture that appreciates diversity and inclusiveness. We understand that our multigenerational workforce represents different regions, cultures, economic backgrounds, races, genders, ethnicities, education levels, personalities, and religions. We believe these differences make us stronger and are committed to building an inclusive workplace where everyone feels respected and valued.

    Privacy Notice
    Aspire Systems values your privacy. Candidate information collected through this recruitment process will be used solely for hiring purposes, handled securely, and retained only as long as necessary in compliance with applicable privacy laws.

    Disclaimer
    The above statements are not intended to be a complete statement of job content, but rather to serve as a guide to the essential functions performed by the employee in this role. The organization retains the discretion to add to or change the duties of the position at any time.
    $127k-167k yearly est. 1d ago
  • Data Engineer

    Brooksource 4.1 company rating

    New York, NY

    Data Engineer - Data Migration Project
    6-Month Contract (ASAP Start)
    Hybrid - Manhattan, NY (3 days/week)

    We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems (a migration sketch follows this listing). The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.

    Minimum Qualifications:
    - Advanced proficiency in SQL, Python, and SOQL
    - Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
    - Experience building and optimizing ETL workflows and pipelines
    - Familiarity with Tableau for analytics and visualization
    - Strong understanding of data migration and transformation best practices
    - Ability to identify and resolve discrepancies between data environments
    - Excellent analytical, troubleshooting, and communication skills

    Responsibilities:
    - Modify and migrate existing workflows and pipelines from Redshift to Databricks.
    - Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
    - Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
    - Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
    - Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
    - Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
    - Apply a “build from scratch” mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
    - Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.

    What's in it for you?
    - Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
    - Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
    - Collaborative environment with visibility across analytics, operations, and engineering functions.
    - Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.

    EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
    $101k-140k yearly est. 2d ago
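
This listing centers on moving Redshift workflows onto Databricks. Below is a hedged sketch of one common pattern: landing a Redshift table into Delta Lake over JDBC, then re-expressing the old warehouse SQL as a Spark transform. The host, credentials, and table names are placeholders, JDBC assumes a Redshift driver is available on the cluster, and this is only one option among several (Databricks also ships a dedicated Redshift connector).

```python
# Sketch (PySpark on an assumed Databricks environment): copy a Redshift
# table into a Delta table, then rewrite a Redshift SQL step in Spark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# 1) Land the source table. Connection details are placeholders.
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:redshift://example.host:5439/dev")  # placeholder
    .option("dbtable", "public.orders")                      # placeholder
    .option("user", "etl_user")
    .option("password", "...")
    .load()
)
src.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")

# 2) Re-express the old warehouse SQL as a DataFrame transform.
daily = (
    spark.table("bronze.orders")
    .where(F.col("status") == "complete")
    .groupBy(F.to_date("ordered_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)
daily.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```
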
  • Data Engineer

    Yochana 4.2 company rating

    Charlotte, NC

    Job Title: Azure Databricks Engineer (Onsite)
    Years of Experience: 7-12 years
    Full Time

    We are seeking a highly skilled and motivated Technical Team Lead with extensive experience in Azure Databricks to join our dynamic team. The ideal candidate will possess a strong technical background, exceptional leadership abilities, and a passion for driving innovative solutions. As a Technical Team Lead, you will be responsible for guiding a team of developers and engineers in the design, development, and implementation of data-driven solutions that leverage Azure Databricks.

    Responsibilities:
    - Lead and mentor a team of technical professionals, fostering a collaborative and high-performance culture.
    - Design and implement data processing solutions using Azure Databricks, ensuring scalability and efficiency.
    - Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
    - Oversee the development lifecycle, from planning and design to deployment and maintenance.
    - Conduct code reviews and provide constructive feedback to team members to ensure code quality and adherence to best practices.
    - Stay up to date with industry trends and emerging technologies related to Azure Databricks and data engineering.
    - Facilitate communication between technical and non-technical stakeholders to ensure alignment on project goals.
    - Identify and mitigate risks associated with project delivery and team performance.

    Mandatory Skills:
    - Proven expertise in Azure Databricks, including experience with Spark, data pipelines, and data lakes.
    - Strong programming skills in languages such as Python, Scala, or SQL.
    - Experience with cloud-based data storage solutions, particularly Azure Data Lake Storage and Azure SQL Database.
    - Solid understanding of data modeling, ETL processes, and data warehousing concepts.
    - Demonstrated ability to lead technical teams and manage multiple projects simultaneously.

    Preferred Skills:
    - Familiarity with Azure DevOps for CI/CD processes.
    - Experience with machine learning frameworks and libraries.
    - Knowledge of data governance and compliance standards.
    - Strong analytical and problem-solving skills.
    - Excellent communication and interpersonal skills.

    Qualifications:
    - Bachelor's degree in Computer Science, Information Technology, or a related field.
    - 7-10 years of experience in data engineering, software development, or a related technical role.
    - Proven track record of leading technical teams and delivering successful projects.
    - Relevant certifications in Azure or data engineering are a plus.

    If you are a passionate Technical Team Lead with a strong background in Azure Databricks and a desire to drive innovation, we encourage you to apply and join our team.
    $80k-111k yearly est. 1d ago
  • Data Engineer

    Addison Group 4.6 company rating

    Coppell, TX

    Title: Data Engineer
    Assignment Type: 6-12 month contract-to-hire
    Compensation: $65/hr-$75/hr W2
    Work Model: Hybrid (4 days on-site, 1 day remote)
    Benefits: Medical, Dental, Vision, 401(k)

    What we need is someone who comes with 8+ years of experience in the data engineering space and specializes in Microsoft Azure and Databricks. This person will be part of multiple initiatives for the "New Development" and "Data Reporting" teams but will primarily be tasked with designing, building, maintaining, and automating the enterprise data architecture and pipelines within the cloud. Technology-wise, we need skills in Azure Databricks (5+ years), cloud-based environments (Azure and/or AWS), Azure DevOps (ADO), SQL (ETL, SSIS packages), and PySpark or Scala automation, plus architecture experience in building pipelines, data modeling, data pipeline deployment, data mapping, etc.

    Top Skills:
    - 8+ years of data engineering/business intelligence
    - Databricks and Azure Data Factory (the most current Databricks governance layer is Unity Catalog)
    - Cloud-based environments (Azure or AWS)
    - Data pipeline architecture and CI/CD methodology
    - SQL
    - Automation (Python (PySpark), Scala)
    $65-75 hourly 4d ago
  • Data Engineer (Zero Trust)

    Kavaliro 4.2 company rating

    Fort Belvoir, VA

    Kavaliro is seeking a Zero Trust Security Architect / Data Engineer to support a mission-critical program by integrating secure architecture principles, strengthening data security, and advancing Zero Trust initiatives across the enterprise.

    Key Responsibilities
    - Develop and implement program protection planning, including IT supply chain security, anti-tampering methods, and risk management aligned to DoD Zero Trust Architecture.
    - Apply secure system design tools, automated analysis methods, and architectural frameworks to build resilient, least-privilege, continuously monitored environments.
    - Integrate Zero Trust Data Pillar capabilities - data labeling, tagging, classification, encryption at rest/in transit, access policy definition, monitoring, and auditing (a small policy-check sketch follows this listing).
    - Analyze and interpret data from multiple structured and unstructured sources to support decision-making and identify anomalies or vulnerabilities.
    - Assess cybersecurity principles, threats, and vulnerabilities impacting enterprise data systems, including risks such as corruption, exfiltration, and denial-of-service.
    - Support systems engineering activities, ensuring secure integration of technologies and alignment with Zero Trust operational objectives.
    - Design and maintain secure network architectures that balance security controls, mission requirements, and operational tradeoffs.
    - Generate queries, algorithms, and reports to evaluate data structures, identify patterns, and improve system integrity and performance.
    - Ensure compliance with organizational cybersecurity requirements, particularly confidentiality, integrity, availability, authentication, and non-repudiation.
    - Evaluate impacts of cybersecurity lapses and implement safeguards to protect mission-critical data systems.
    - Structure, format, and present data effectively across tools, dashboards, and reporting platforms.
    - Maintain knowledge of enterprise information security architecture and database systems to support secure data flow and system design.

    Requirements
    - Active TS/SCI security clearance (required).
    - Deep knowledge of Zero Trust principles (never trust, always verify; explicit authentication; least privilege; continuous monitoring).
    - Experience with program protection planning, IT supply chain risk management, and anti-tampering techniques.
    - Strong understanding of cybersecurity principles, CIA triad requirements, and data-focused threats (corruption, exfiltration, denial-of-service).
    - Proficiency in secure system design, automated systems analysis tools, and systems engineering processes.
    - Ability to work with structured and unstructured data, including developing queries, algorithms, and analytical reports.
    - Knowledge of database systems, enterprise information security architecture, and data structuring/presentation techniques.
    - Understanding of network design processes, security tradeoffs, and enterprise architecture integration.
    - Strong ability to interpret data from multiple tools to support security decision-making.
    - Familiarity with the impacts of cybersecurity lapses on data systems and operational environments.

    Kavaliro is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.
    $85k-119k yearly est. 2d ago
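
The Data Pillar work described above (labeling, classification, attribute-based access decisions) can be illustrated with a toy example. Everything below - the label taxonomy, subject attributes, and the single deny-by-default rule - is an invented sketch, not the program's actual policy engine.

```python
# Toy attribute-based access check over labeled records. The labels,
# clearances, and rule are illustrative assumptions only.
from dataclasses import dataclass

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Record:
    classification: str      # data label applied at ingest
    compartments: frozenset  # e.g. {"SCI"}

@dataclass(frozen=True)
class Subject:
    clearance: str
    compartments: frozenset
    authenticated: bool      # "never trust, always verify"

def may_read(subject: Subject, record: Record) -> bool:
    """Deny by default; grant only when every attribute check passes."""
    return (
        subject.authenticated
        and LEVELS[subject.clearance] >= LEVELS[record.classification]
        and record.compartments <= subject.compartments
    )

rec = Record("SECRET", frozenset({"SCI"}))
alice = Subject("TOP SECRET", frozenset({"SCI"}), authenticated=True)
print(may_read(alice, rec))  # True: clearance and compartments both satisfy
```
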
  • Senior Data Engineer

    Addison Group 4.6 company rating

    Houston, TX

    About the Role
    The Senior Data Engineer will play a critical role in building and scaling an enterprise data platform to enable analytics, reporting, and operational insights across the organization. This position requires deep expertise in Snowflake and cloud technologies (AWS or Azure), along with strong upstream oil & gas domain experience. The engineer will design and optimize data pipelines, enforce data governance and quality standards, and collaborate with cross-functional teams to deliver reliable, scalable data solutions. (A small Snowflake pipeline sketch follows this listing.)

    Key Responsibilities

    Data Architecture & Engineering
    - Design, develop, and maintain scalable data pipelines using Snowflake, AWS/Azure, and modern data engineering tools.
    - Implement ETL/ELT processes integrating data from upstream systems (SCADA, production accounting, drilling, completions, etc.).
    - Architect data models supporting both operational reporting and advanced analytics.
    - Establish and maintain frameworks for data quality, validation, and lineage to ensure enterprise data trust.

    Platform Development & Optimization
    - Lead the build and optimization of Snowflake-based data warehouses for performance and cost efficiency.
    - Design cloud-native data solutions leveraging AWS/Azure services (S3, Lambda, Azure Data Factory, Databricks).
    - Manage large-scale time-series and operational data processing workflows.
    - Implement strong security, access control, and governance practices.

    Technical Leadership & Innovation
    - Mentor junior data engineers and provide technical leadership across the data platform team.
    - Research and introduce new technologies to enhance platform scalability and automation.
    - Build reusable frameworks, components, and utilities to streamline delivery.
    - Support AI/ML initiatives by delivering production-ready, high-quality data pipelines.

    Business Partnership
    - Collaborate with stakeholders across business units to translate requirements into technical solutions.
    - Work with analysts and data scientists to enable self-service analytics and reporting.
    - Ensure data integration supports regulatory and compliance reporting.
    - Act as a bridge between business and technical teams to ensure alignment and impact.

    Qualifications & Experience

    Education
    - Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
    - Advanced degree or relevant certifications (SnowPro, AWS/Azure Data Engineer, Databricks) preferred.

    Experience
    - 7+ years in data engineering roles, with at least 3 years on cloud data platforms.
    - Proven expertise in Snowflake and at least one major cloud platform (AWS or Azure).
    - Hands-on experience with upstream oil & gas data (wells, completions, SCADA, production, reserves, etc.).
    - Demonstrated success delivering operational and analytical data pipelines.

    Technical Skills
    - Advanced SQL and Python programming skills.
    - Strong background in data modeling, ETL/ELT, cataloging, lineage, and data security.
    - Familiarity with Airflow, Azure Data Factory, or similar orchestration tools.
    - Experience with CI/CD, Git, and automated testing.
    - Knowledge of BI tools such as Power BI, Spotfire, or Tableau.
    - Understanding of AI/ML data preparation and integration.
    $86k-125k yearly est. 1d ago
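
As a rough illustration of the Snowflake plus time-series work this posting describes, here is a hedged sketch using the snowflake-connector-python package to roll SCADA-style readings up to hourly averages. The account, warehouse, and table names are placeholders, and the readings schema is an assumption.

```python
# Sketch: hourly rollup of time-series readings in Snowflake.
# Connection parameters and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder
    user="etl_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="PROD",
    schema="OPERATIONS",
)

ROLLUP_SQL = """
    CREATE OR REPLACE TABLE well_readings_hourly AS
    SELECT
        well_id,
        DATE_TRUNC('hour', reading_ts) AS reading_hour,
        AVG(tubing_pressure)           AS avg_tubing_pressure,
        AVG(flow_rate)                 AS avg_flow_rate
    FROM raw_scada_readings
    GROUP BY well_id, DATE_TRUNC('hour', reading_ts)
"""

with conn.cursor() as cur:
    cur.execute(ROLLUP_SQL)
    print(cur.fetchone())  # Snowflake returns a status row for CTAS
conn.close()
```
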
  • Senior Data Analytics Engineer (Customer Data)

    KellyMitchell Group 4.5 company rating

    Irving, TX

    Our client is seeking a Senior Data Analytics Engineer (Customer Data) to join their team! This position is remote.

    - Build, optimize, and maintain customer data pipelines in PySpark/Databricks to support CDP-driven use cases across AWS/Azure/GCP
    - Transform raw and integrated customer data into analytics-ready datasets used for dashboards, reporting, segmentation, personalization, and downstream AI/ML applications
    - Develop and enrich customer behavior metrics, campaign analytics, and performance insights such as ad engagement, lifecycle metrics, and retention (a retention-metric sketch follows this listing)
    - Partner with Marketing, Sales, Product, and Data Science teams to translate business goals into metrics, features, and analytical data models
    - Build datasets consumed by Power BI/Tableau dashboards (hands-on dashboard creation not required)
    - Ensure high cluster performance and pipeline optimization in Databricks, including troubleshooting skewed joins, sorting, partitioning, and real-time processing needs
    - Work across multiple cloud and vendor ecosystems such as AWS/Azure/GCP and Hightouch or comparable CDP vendors
    - Participate in the data ingestion and digestion phases, shaping integrated data into analytical layers for MarTech and BI
    - Contribute to and enforce data engineering standards, documentation, governance, and best practices across the organization

    Desired Skills/Experience:
    - 6+ years of experience in Data Engineering, Analytics Engineering, or related fields, including data modeling experience
    - Strong data engineering fundamentals, with the ability to design pipelines, optimize performance, and deliver real-time or near-real-time datasets
    - Ability to deeply understand data: identifying gaps, designing meaningful transformations, and creating metrics with clear business context
    - Understanding of how customer data moves through Customer Data Platforms (CDPs) and how to design pipelines that integrate with them
    - Experience supporting Marketing, Customer Data, MarTech, CDP, segmentation, or personalization teams strongly preferred
    - Hands-on experience required with Databricks, PySpark, Python, and SQL, and with building analytics datasets for dashboards/reporting and customer behavior analytics or campaign performance insights
    - Experience designing and implementing features that feed downstream AI or customer-facing applications

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position starts at $150,000-$160,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $150k-160k yearly 3d ago
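
For the customer-behavior metrics this role describes, here is a hedged PySpark sketch of a simple month-over-month retention count. The event table and its schema (user_id, event_date) are assumptions made for illustration.

```python
# Sketch: month-over-month customer retention from raw events.
# Assumed input schema: events(user_id, event_date).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
events = spark.table("bronze.customer_events")  # placeholder table

# One row per user per active month.
monthly = events.select(
    "user_id", F.date_trunc("month", "event_date").alias("month")
).distinct()

prev = monthly.alias("prev")
curr = monthly.alias("curr")

# A user is retained in month M if they were also active in month M-1.
retained = curr.join(
    prev,
    (F.col("curr.user_id") == F.col("prev.user_id"))
    & (F.col("curr.month") == F.add_months(F.col("prev.month"), 1)),
)

retention = (
    retained.groupBy(F.col("curr.month").alias("month"))
    .agg(F.countDistinct("curr.user_id").alias("retained_users"))
)
retention.show()
```
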
  • Data Engineer

    Sharp Decisions 4.6 company rating

    New York, NY

    Hey all, we are looking for a mid-level data engineer. No third parties.

    As a result of this expansion, we are seeking experienced software data engineers with 5+ years of relevant experience to support the design and development of a strategic data platform for SMBC Capital Markets and Nikko Securities Group.

    Qualifications and Skills
    • Proven experience as a Data Engineer, with experience in the Azure cloud.
    • Experience implementing solutions using:
      • Azure cloud services
      • Azure Data Factory
      • Azure Data Lake Gen 2
      • Azure Databases
      • Azure Data Fabric
      • API Gateway management
      • Azure Functions
    • Well versed with Azure Databricks.
    • Strong SQL skills with RDBMS or NoSQL databases.
    • Experience developing APIs using FastAPI or similar Python frameworks (a minimal FastAPI sketch follows this listing).
    • Familiarity with the DevOps lifecycle (git, Jenkins, etc.) and CI/CD processes.
    • Good understanding of ETL/ELT processes.
    • Experience in the financial services industry, financial instruments, asset classes, and market data is a plus.
    $85k-111k yearly est. 1d ago
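
For the FastAPI requirement above, here is a minimal hedged sketch of a data-service endpoint. The route, model, and in-memory "database" are invented for illustration, not part of the posting.

```python
# Minimal FastAPI sketch for a data-service endpoint.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Instrument(BaseModel):
    symbol: str
    asset_class: str

# Stand-in for a real database; keyed by an invented ticker symbol.
_DB = {"6758.T": Instrument(symbol="6758.T", asset_class="equity")}

@app.get("/instruments/{symbol}", response_model=Instrument)
def get_instrument(symbol: str) -> Instrument:
    """Look up a single instrument; 404 when the symbol is unknown."""
    inst = _DB.get(symbol)
    if inst is None:
        raise HTTPException(status_code=404, detail="unknown symbol")
    return inst

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```
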
  • Big Data Engineer

    KellyMitchell Group 4.5 company rating

    Santa Monica, CA

    Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.

    - Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs (a streaming sketch follows this listing)
    - Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
    - Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
    - Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
    - Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
    - Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment

    Desired Skills/Experience:
    - Bachelor's degree in a STEM field such as Science, Technology, Engineering, or Mathematics
    - 5+ years of relevant professional experience
    - 4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
    - 3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
    - Strong understanding of system and application design, architecture principles, and distributed system fundamentals
    - Proven experience building highly available, scalable, and production-grade services
    - Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
    - Experience processing massive datasets at the petabyte scale
    - Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
    - Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
    - Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position is between $52.00 and $75.00 hourly. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $52-75 hourly 3d ago
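
For the real-time side of this platform work, here is a hedged Spark Structured Streaming sketch that reads JSON events from Kafka and appends a parsed stream to storage. The broker address, topic, event schema, and sink paths are placeholders.

```python
# Sketch: Spark Structured Streaming job reading JSON events from Kafka.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Assumed event schema for illustration.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Kafka delivers bytes; cast to string and parse the JSON payload.
parsed = raw.select(
    F.from_json(F.col("value").cast("string"), schema).alias("e")
).select("e.*")

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/events")              # placeholder sink
    .option("checkpointLocation", "/chk/events")  # required for recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```
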
  • Senior Data Engineer

    KellyMitchell Group 4.5 company rating

    Glendale, CA

    Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.

    - Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
    - Build tools and services to support data discovery, lineage, governance, and privacy
    - Collaborate with other software and data engineers and cross-functional teams
    - Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
    - Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
    - Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
    - Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
    - Participate in agile and scrum ceremonies to collaborate and refine team processes
    - Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
    - Maintain detailed documentation of work and changes to support data quality and data governance requirements

    Desired Skills/Experience:
    - 5+ years of data engineering experience developing large data pipelines
    - Proficiency in at least one major programming language such as Python, Java, or Scala
    - Strong SQL skills and the ability to create queries to analyze complex datasets
    - Hands-on production experience with distributed processing systems such as Spark
    - Experience interacting with and ingesting data efficiently from API data sources
    - Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
    - Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
    - Experience developing APIs with GraphQL
    - Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
    - Familiarity with data modeling techniques and data warehousing best practices
    - Strong algorithmic problem-solving skills
    - Excellent written and verbal communication skills
    - Advanced understanding of OLTP versus OLAP environments

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position is between $51.00 and $73.00 hourly. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $51-73 hourly 2d ago
  • Azure Data Engineer

    KellyMitchell Group 4.5 company rating

    Irving, TX

    Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.

    Duties:
    - Lead the design, architecture, and implementation of key data initiatives and platform capabilities
    - Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying solutions and guiding teams to implement them
    - Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
    - Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders

    Desired Skills/Experience:
    - Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
    - 5+ years of relevant work experience in data engineering
    - Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
    - Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $140k-145k yearly 5d ago
  • Data Engineer

    Richard, Wayne & Roberts 4.3 company rating

    Houston, TX

    Python Data Engineer - Houston, TX (Onsite Only)

    A global energy and commodities organization is seeking an experienced Python Data Engineer to expand and optimize data assets that support high-impact analytics. This role works closely with traders, analysts, researchers, and data scientists to translate business needs into scalable technical solutions. The position is fully onsite due to the collaborative, fast-paced nature of the work. Candidates MUST come from an oil & gas organization, preferably a commodity trading firm. CANNOT do C2C.

    Key Responsibilities
    - Build modular, reusable Python components to connect external data sources with internal tools and databases.
    - Partner with business stakeholders to define data ingestion and access requirements.
    - Translate business requirements into well-designed technical deliverables.
    - Maintain and enhance the central Python codebase following established standards.
    - Contribute to internal developer tools and ETL frameworks, helping standardize and consolidate core functionality.
    - Collaborate with global engineering teams and participate in internal Python community initiatives.

    Qualifications
    - 7+ years of professional Python development experience.
    - Strong background in data engineering and pipeline development.
    - Experience with web scraping tools (Requests, BeautifulSoup, Selenium); a small scraping sketch follows this listing.
    - Hands-on Oracle PL/SQL development, including stored procedures.
    - Strong grasp of object-oriented design, design patterns, and service-oriented architectures.
    - Experience with Agile/Scrum, code reviews, version control, and issue tracking.
    - Familiarity with scientific computing libraries (Pandas, NumPy).
    - Excellent communication skills.
    - Industry experience in energy or commodities preferred.
    - Exposure to containerization (Docker, Kubernetes) is a plus.
    $83k-120k yearly est. 1d ago
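
For the web-scraping requirement above, here is a hedged Requests + BeautifulSoup sketch. The URL, CSS selector, and table layout are invented placeholders.

```python
# Sketch: polite web scraping with Requests + BeautifulSoup.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/market-report"  # placeholder

def scrape_prices() -> list[dict]:
    resp = requests.get(
        URL, headers={"User-Agent": "data-eng-sketch/0.1"}, timeout=30
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    # Assumed page structure: a table with commodity/price cells;
    # skip the header row.
    for tr in soup.select("table#prices tr")[1:]:
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) >= 2:
            rows.append({"commodity": cells[0], "price": float(cells[1])})
    return rows

if __name__ == "__main__":
    for row in scrape_prices():
        print(row)
```
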
  • AWS Data Engineer

    Mindlance 4.6 company rating

    McLean, VA

    Responsibilities:
    - Design, build, and maintain scalable data pipelines using AWS Glue and Databricks (a Glue job skeleton follows this listing).
    - Develop and optimize ETL/ELT processes using PySpark and Python.
    - Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
    - Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
    - Ensure data quality, consistency, and reliability across systems.
    - Optimize performance of large-scale distributed data processing workflows.
    - Develop automation scripts and frameworks for data ingestion, transformation, and validation.
    - Follow best practices for data governance, security, and compliance.

    Required Skills & Experience:
    - 5-8 years of hands-on experience in data engineering.
    - Strong proficiency in Python and PySpark for data processing and transformation.
    - Expertise in AWS services, particularly Glue, S3, Lambda, Redshift, and Athena.
    - Hands-on experience with Databricks for building and managing data pipelines.
    - Experience working with large-scale data systems and optimizing performance.
    - Solid understanding of data modeling, data lake architecture, and ETL design principles.
    - Strong problem-solving skills and the ability to work independently in a fast-paced environment.

    “Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
    $85k-113k yearly est. 3d ago
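
Here is a hedged skeleton of an AWS Glue PySpark job script of the kind this listing describes; it only runs inside the Glue runtime, which supplies the awsglue package. The catalog database, table, and S3 path are placeholders.

```python
# Skeleton of an AWS Glue (PySpark) job script.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder names).
dyf = glue.create_dynamic_frame.from_catalog(database="raw", table_name="orders")

# Drop obviously bad rows, then write out as partitioned Parquet.
clean = dyf.filter(lambda r: r["amount"] is not None)
glue.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={
        "path": "s3://example-bucket/curated/orders/",  # placeholder
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)
job.commit()
```
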
  • Data Scientist - ML, Python

    Avance Consulting 4.4 company rating

    McLean, VA

    10+ years of experience in Information Technology required.

    • Python Programming: At least 5 years of hands-on experience with Python, particularly in frameworks like FastAPI, Django, and Flask, plus experience using AI frameworks.
    • Access Control Expertise: Strong understanding of access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC).
    • API and Connector Development: Experience in developing API connectors using Python for extracting and managing access control data from platforms like Azure, SharePoint, Java, .NET, WordPress, etc.
    • AI and Machine Learning: Hands-on experience integrating AI into applications for automating tasks such as access control reviews and identifying anomalies.
    • Cloud and Microsoft Technologies: Proficiency with Azure services and the Microsoft Graph API, and experience integrating Python applications with Azure for access control reviews and reporting.
    • Reporting and Visualization: Experience using reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports related to security and access control metrics.
    • Communication Skills: Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time.
    • PlainID: Experience or familiarity with PlainID platforms for identity and access management.
    • Azure OpenAI: Familiarity with Azure OpenAI technologies and their application in access control and security workflows.
    • Power BI: Experience with Microsoft Power BI for data visualization and reporting.
    • Agile Methodologies: Experience working in Agile environments and familiarity with Scrum methodologies for delivering security solutions.
    $76k-111k yearly est. 3d ago
  • Senior Data Engineer

    Luna Data Solutions, Inc. 4.4 company rating

    Austin, TX

    We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX. The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.

    Key Responsibilities

    1. Data Architecture & Strategy
    - Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
    - Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing (a medallion sketch follows this listing).
    - Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
    - Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.

    2. Data Integration & Pipeline Development
    - Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
    - Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
    - Optimize pipelines for cost-effectiveness, performance, and scalability.

    3. Master Data Management (MDM) & Data Governance
    - Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
    - Define and manage data governance, metadata, and data quality frameworks.
    - Partner with business teams to align data standards and maintain data integrity across domains.

    4. API Management & Integration
    - Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
    - Design secure, reliable data services for internal and external consumers.
    - Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.

    5. Database & Platform Administration
    - Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
    - Monitor and optimize cost, performance, and scalability across Azure data services.
    - Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.

    6. Collaboration & Leadership
    - Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
    - Mentor junior engineers and define best practices for coding, data modeling, and solution design.
    - Contribute to enterprise-wide data strategy and roadmap development.

    Required Qualifications
    - Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
    - 5+ years of hands-on experience in Azure-based data engineering and architecture.
    - Strong proficiency with Azure Data Factory, Azure Synapse, Azure Databricks, and Azure Data Lake Storage Gen2; SQL, Python, PySpark, and PowerShell; Azure API Management and Logic Apps.
    - Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
    - Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
    - Familiarity with MDM concepts, data governance frameworks, and metadata management.
    - Experience with automation, data-focused CI/CD, and IaC.
    - Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.

    What We Offer
    Competitive compensation and benefits package.

    Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
    $74k-95k yearly est. 3d ago
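
For the medallion (Bronze/Silver/Gold) pattern this listing emphasizes, here is a hedged PySpark sketch of a single Bronze-to-Silver step on Delta Lake: Bronze holds raw ingested rows, and Silver applies typing, a basic quality gate, and deduplication. Table and column names are placeholders.

```python
# Sketch of a Bronze -> Silver medallion step on Delta Lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.table("bronze.sensor_readings")  # raw, as-ingested

silver = (
    bronze
    .withColumn("reading_ts", F.to_timestamp("reading_ts"))
    .withColumn("value", F.col("value").cast("double"))
    .where(F.col("value").isNotNull())             # basic quality gate
    .dropDuplicates(["device_id", "reading_ts"])   # idempotent re-runs
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("silver.sensor_readings")
)
```
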
  • Data Engineer

    Interactive Resources-IR 4.2 company rating

    Austin, TX

    About the Role
    We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables (a Delta Live Tables sketch follows this listing)
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $84k-111k yearly est. 3d ago
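
Here is a hedged sketch of a Delta Live Tables pipeline definition of the kind this listing mentions. It only runs inside a Databricks DLT pipeline, which supplies the dlt module and the global spark session; the source path, columns, and expectation rule are placeholders.

```python
# Hedged Delta Live Tables sketch: raw ingest plus a validated layer.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw trades as ingested from cloud storage.")
def trades_raw():
    return (
        spark.readStream.format("cloudFiles")   # Auto Loader; `spark` is
        .option("cloudFiles.format", "json")    # provided by the DLT runtime
        .load("/mnt/landing/trades/")           # placeholder path
    )

@dlt.table(comment="Typed, validated trades.")
@dlt.expect_or_drop("positive_qty", "quantity > 0")  # quality rule
def trades_clean():
    return (
        dlt.read_stream("trades_raw")
        .withColumn("trade_ts", F.to_timestamp("trade_ts"))
        .withColumn("quantity", F.col("quantity").cast("long"))
    )
```
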
  • Data Scientist

    US Tech Solutions 4.4 company rating

    Alhambra, CA

    Title: Principal Data Scientist
    Duration: 12 Months Contract

    Additional Information: California resident candidates only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.

    Job description:
    The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.

    The Principal Data Scientist will possess exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights; strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles; working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams; outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel; a highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight; a meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables; and proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.

    Experience Required:
    - Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
    - Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
    - Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
    - Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring (an MLflow sketch follows this listing).
    - Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
    - Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
    - Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
    - One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.

    Education Required & Certifications:
    This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics is required and may not be substituted with additional experience:
    - Microsoft Azure Data Scientist Associate (DP-100)
    - Databricks Certified Data Scientist or Machine Learning Professional
    - AWS Machine Learning Specialty
    - Google Professional Data Engineer
    - or equivalent advanced analytics certifications

    About US Tech Solutions:
    US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************

    US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

    Recruiter Details:
    Name: T Saketh Ram Sharma
    Email: *****************************
    Internal Id: 25-54101
    $92k-133k yearly est. 4d ago
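
For the MLOps experience this posting lists, here is a hedged MLflow sketch covering run tracking, parameter and metric logging, and model versioning as an artifact. The dataset and model choice are illustrative only.

```python
# Hedged MLOps sketch: track a model run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="ridge-baseline"):
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("mse", mse)
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```
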
  • Data Architect

    Optech 4.6 company rating

    Cincinnati, OH

    THIS IS A W2 (NOT C2C OR REFERRAL-BASED) CONTRACT OPPORTUNITY
    MOSTLY REMOTE, WITH 1 DAY/MONTH ONSITE IN CINCINNATI - LOCAL CANDIDATES TAKE PREFERENCE
    RATE: $75-85/HR WITH BENEFITS

    We are seeking a highly skilled Data Architect to work in a consulting capacity to analyze, redesign, and optimize a medical payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.

    Responsibilities
    - Design and maintain scalable, secure, and high-performing data architectures.
    - Lead migration and modernization projects in heavily used production systems.
    - Develop and optimize data models, schemas, and integration strategies.
    - Implement data governance, security, and compliance standards.
    - Collaborate with business stakeholders to translate requirements into technical solutions.
    - Ensure data quality, consistency, and accessibility across systems.

    Required Qualifications
    - Bachelor's degree in Computer Science, Information Systems, or a related field.
    - Proven experience as a Data Architect or in a similar role.
    - Strong proficiency in SQL (query optimization, stored procedures, indexing).
    - Hands-on experience with Azure cloud services for data management and analytics.
    - Knowledge of data modeling, ETL processes, and data warehousing concepts.
    - Familiarity with security best practices and compliance frameworks.

    Preferred Skills
    - Understanding of Electronic Health Records systems.
    - Understanding of Big Data technologies and modern data platforms outside the scope of this project.
    $75-85 hourly 2d ago
