
Data Engineer jobs at Ascendo Resources

- 1940 jobs
  • ETL/ELT Data Engineer (Secret Clearance) - Hybrid

    LaunchCode (2.9 company rating)

    Austin, TX

    LaunchCode is recruiting for a Software Data Engineer to work at one of our partner companies!

    Details:
    - Full-time W2, salaried
    - Immediate opening
    - Hybrid - Austin, TX (onsite 1-2 times a week)
    - Pay: $85K-$120K
    - Minimum experience: 4 years
    - Security clearance: Active DoD Secret Clearance

    Disclaimer: Please note that we are unable to provide work authorization or sponsorship for this role, now or in the future. Candidates requiring current or future sponsorship will not be considered.

    Job Summary
    A Washington, DC-based software solutions provider founded in 2017 specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, the company focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, the IC, and their end users, with innovation driven by its innovation centers. The company has a presence in Boston, MA; Colorado Springs, CO; San Antonio, TX; and St. Louis, MO.

    Why the company?
    - Environment of autonomy
    - Innovative commercial approach
    - People over process

    We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory, a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto "By Soldiers, For Soldiers," ASWF equips service members to develop mission-critical software solutions independently, which is especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.

    Required Skills:
    - Active DoD Secret Clearance (required)
    - 4+ years of experience in data science, data engineering, or similar roles
    - Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow
    - Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra)
    - Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow)
    - Solid understanding of data warehousing concepts (e.g., Kimball and Inmon methodologies) and experience with data modeling for analytical purposes
    - Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation
    - CompTIA Security+ certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant

    Nice to Have:
    - Familiarity with SBIR technologies and transformative platform shifts
    - Experience working in Agile or DevSecOps environments
    - 2+ years of experience interfacing with platform engineers and data visibility teams, managing AWS resources, and administering GitLab

    #LI-hybrid #austintx #ETLengineer #dataengineer #army #aswf #clearancejobs #clearedjobs #secretclearance #ETL
    $85k-120k yearly 2d ago
  • Data Modeler II

    Airswift (4.9 company rating)

    Houston, TX

    Job Title: Data Modeler II
    Type: W2 Contract (USA) / INC or T4 (Canada)
    Work Setup: Hybrid (on-site with flexibility to work from home two days per week)
    Industry: Oil & Gas
    Benefits: Health, Dental, Vision

    Job Summary
    We are seeking a Data Modeler II with a product-driven, innovative mindset to design and implement data solutions that deliver measurable business value for Supply Chain operations. This role combines technical expertise with project management responsibilities, requiring collaboration with IT teams to develop solutions for small and medium-sized business challenges. The ideal candidate will have hands-on experience with data transformation, AI integration, and ERP systems, while also being able to communicate technical concepts in clear, business-friendly language.

    Key Responsibilities
    - Develop innovative data solutions leveraging knowledge of Supply Chain processes and oil & gas industry value drivers.
    - Design and optimize ETL pipelines for scalable, high-performance data processing.
    - Integrate solutions with enterprise data platforms and visualization tools.
    - Gather and clean data from ERP systems for analytics and reporting.
    - Utilize AI tools and prompt engineering to enhance data-driven solutions.
    - Collaborate with IT and business stakeholders to deliver medium- and low-level solutions for local issues.
    - Oversee project timelines, resources, and stakeholder engagement.
    - Document project objectives, requirements, and progress updates.
    - Translate technical language into clear, non-technical terms for business users.
    - Support continuous improvement and innovation in data engineering and analytics.

    Basic / Required Qualifications
    - Bachelor's degree in Commerce (SCM), Data Science, Engineering, or a related field.
    - Hands-on experience with Python for data transformation, ETL tools (Power Automate, Power Apps; Databricks is a plus), and Oracle Cloud (Supply Chain and Financial modules).
    - Knowledge of ERP systems (Oracle Cloud required; SAP preferred).
    - Familiarity with AI integration and low-code development platforms.
    - Strong understanding of Supply Chain processes; oil & gas experience preferred.
    - Ability to manage projects and engage stakeholders effectively.
    - Excellent communication skills for translating technical concepts into business language.

    Required Knowledge / Skills / Abilities
    - Advanced proficiency in data science concepts, including statistical analysis and machine learning.
    - Experience with prompt engineering and AI-driven solutions.
    - Ability to clean and transform data for analytics and reporting.
    - Strong documentation, troubleshooting, and analytical skills.
    - Business-focused mindset with technical expertise.
    - Ability to think outside the box and propose innovative solutions.

    Special Job Characteristics
    - Hybrid work schedule (Wednesdays and Fridays remote).
    - Ability to work independently and oversee own projects.
    $82k-115k yearly est. 2d ago
  • Data Engineer

    Brooksource (4.1 company rating)

    New York, NY

    Data Engineer - Data Migration Project
    6-Month Contract (ASAP Start)
    Hybrid - Manhattan, NY (3 days/week)

    We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.

    Minimum Qualifications:
    - Advanced proficiency in SQL, Python, and SOQL
    - Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
    - Experience building and optimizing ETL workflows and pipelines
    - Familiarity with Tableau for analytics and visualization
    - Strong understanding of data migration and transformation best practices
    - Ability to identify and resolve discrepancies between data environments
    - Excellent analytical, troubleshooting, and communication skills

    Responsibilities:
    - Modify and migrate existing workflows and pipelines from Redshift to Databricks.
    - Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
    - Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
    - Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
    - Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
    - Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
    - Apply a "build from scratch" mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
    - Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.

    What's in it for you?
    - Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
    - Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
    - Collaborative environment with visibility across analytics, operations, and engineering functions.
    - Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.

    EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
    $101k-140k yearly est. 3d ago
  • Data Modeler

    Airswift (4.9 company rating)

    Midland, TX

    Job Title: Data Modeler - Net Zero Program Analyst
    Type: W2 Contract (12-month duration)
    Work Setup: On-site
    Industry: Oil & Gas
    Benefits: Dental, Healthcare, Vision & 401(k)

    Airswift is seeking a Data Modeler - Net Zero Program Analyst to join one of our major clients on a 12-month contract. This newly created role supports the company's decarbonization and Net Zero initiatives by managing and analyzing operational data to identify trends and optimize performance. The position involves working closely with operations and analytics teams to deliver actionable insights through data visualization and reporting.

    Responsibilities:
    - Build and maintain Power BI dashboards to monitor emissions, operational metrics, and facility performance.
    - Extract and organize data from systems such as SiteView, ProCount, and SAP for analysis and reporting.
    - Conduct data validation and trend analysis to support sustainability and operational goals.
    - Collaborate with field operations and project teams to interpret data and provide recommendations.
    - Ensure data consistency across platforms and assist with integration efforts (coordination only, no coding required).
    - Present findings through clear reports and visualizations for technical and non-technical stakeholders.

    Required Skills and Experience:
    - 7+ years of experience in data analysis within the Oil & Gas or Energy sectors.
    - Strong proficiency in Power BI (required).
    - Familiarity with SiteView, ProCount, and/or SAP (preferred).
    - Ability to translate operational data into insights that support emissions reduction and facility optimization.
    - Experience with surface facilities, emissions estimation, or power systems.
    - Knowledge of other visualization tools (Tableau, Spotfire) is a plus.
    - High school diploma or GED required.

    Additional Details:
    - Preference for Midland-based candidates; Houston-based candidates will need to travel to Midland periodically (travel reimbursed). No per diem offered.
    - Office-based role with low exposure risk.
    $83k-116k yearly est. 4d ago
  • Lead Data Engineer

    Themesoft Inc. (3.7 company rating)

    Roseland, NJ

    Job Title: Lead Data Engineer
    Hybrid Role: 3 times/week
    Type: 12-month contract, rolling/extendable
    Work Authorization: Candidates must be authorized to work in the U.S. without current or future sponsorship requirements.

    Must-haves:
    - AWS
    - Databricks
    - Lead experience (staff-level experience may also be considered)
    - Python
    - PySpark
    - Contact center experience is a nice-to-have

    Job Description:
    As a Lead Data Engineer, you will spearhead the design and delivery of a data hub/marketplace aimed at providing curated client service data to internal data consumers, including analysts, data scientists, analytic content authors, downstream applications, and data warehouses. You will develop a service data hub solution that enables internal data consumers to create and maintain data integration workflows, manage subscriptions, and access content to understand data meaning and lineage. You will design and maintain enterprise data models for contact-center-oriented data lakes, warehouses, and analytic models (relational, OLAP/dimensional, columnar, etc.). You will collaborate with source system owners to define integration rules and data acquisition options (streaming, replication, batch, etc.). You will work with data engineers to define workflows and data quality monitors. You will perform detailed data analysis to understand the content and viability of data sources to meet desired use cases, and help define and maintain the enterprise data taxonomy and data catalog. This role requires clear, compelling, and influential communication skills. You will mentor developers and collaborate with peer architects and developers on other teams.

    To succeed in this role:
    - Ability to define and design complex data integration solutions with general direction and stakeholder access.
    - Capability to work independently and as part of a global, multi-faceted data warehousing and analytics team.
    - Advanced knowledge of cloud-based data engineering and data warehousing solutions, especially AWS, Databricks, and/or Snowflake.
    - Highly skilled in RDBMS platforms such as Oracle and SQL Server.
    - Familiarity with NoSQL database platforms like MongoDB.
    - Understanding of data modeling and data engineering, including SQL and Python.
    - Strong understanding of data quality, compliance, governance, and security.
    - Proficiency in languages such as Python, SQL, and PySpark.
    - Experience in building data ingestion pipelines for structured and unstructured data for storage and optimal retrieval.
    - Ability to design and develop scalable data pipelines.
    - Knowledge of cloud-based and on-prem contact center technologies such as Salesforce.com, ServiceNow, Oracle CRM, Genesys Cloud, Genesys InfoMart, Calabrio Voice Recording, Nuance Voice Biometrics, IBM Chatbot, etc., is highly desirable.
    - Experience with code repository and project tools such as GitHub, JIRA, and Confluence.
    - Working experience with the CI/CD (Continuous Integration & Continuous Deployment) process, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace.
    - Highly innovative with an aptitude for foresight, systems thinking, and design thinking, with a bias toward simplifying processes.
    - Detail-oriented with strong analytical, problem-solving, and organizational skills.
    - Ability to clearly communicate with both technical and business teams.
    - Knowledge of Informatica PowerCenter, Data Quality, and Data Catalog is a plus.
    - Knowledge of Agile development methodologies is a plus.
    - A Databricks Data Engineer Associate certification is a plus but not mandatory.

    Data Engineer Requirements:
    - Bachelor's degree in computer science, information technology, or a similar field.
    - 8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies.
    - Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features.
    - Knowledge of mathematical foundations and statistical analysis.
    - Strong interpersonal skills.
    - Excellent communication and presentation skills.
    - Advanced troubleshooting skills.

    Regards,
    Purnima Pobbathy
    Senior Technical Recruiter
    ************ | ********************* | Themesoft Inc
    $78k-106k yearly est. 4d ago
  • Data Engineer

    Ztek Consulting (4.3 company rating)

    Hamilton, NJ

    Key Responsibilities:
    - Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
    - Integrate and process Bloomberg market data feeds and files into trading and analytics platforms.
    - Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
    - Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
    - Manage FTP/SFTP file transfers between internal systems and external vendors.
    - Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
    - Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.

    Required Skills & Experience:
    - 10+ years of experience in data engineering or production support within financial services or trading environments.
    - Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
    - Strong Python and SQL programming skills.
    - Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
    - Experience with Git, CI/CD pipelines, and Azure DevOps.
    - Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
    - Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
    - Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
    - Excellent communication, problem-solving, and stakeholder management skills.
    $89k-125k yearly est. 5d ago
  • Data Engineer

    Addison Group (4.6 company rating)

    Coppell, TX

    Title: Data Engineer
    Assignment Type: 6-12 month contract-to-hire
    Compensation: $65/hr-$75/hr W2
    Work Model: Hybrid (4 days on-site, 1 day remote)
    Benefits: Medical, Dental, Vision, 401(k)

    We need someone with 8+ years of experience in the data engineering space who specializes in Microsoft Azure and Databricks. This person will be part of multiple initiatives for the "New Development" and "Data Reporting" teams but will be primarily tasked with designing, building, maintaining, and automating the enterprise data architecture and pipelines within the cloud. Technology-wise, candidates need skills in Azure Databricks (5+ years), a cloud-based environment (Azure and/or AWS), Azure DevOps (ADO), SQL (ETL, SSIS packages), and PySpark or Scala automation, along with architecture experience in building pipelines, data modeling, data pipeline deployment, data mapping, etc.

    Top Skills:
    - 8+ years of data engineering/business intelligence
    - Databricks and Azure Data Factory (the most current experience being Unity Catalog for Databricks)
    - Cloud-based environments (Azure or AWS)
    - Data pipeline architecture and CI/CD methodology
    - SQL
    - Automation (Python (PySpark), Scala)
    $65-75 hourly 5d ago
  • Data Engineer

    Mastech Digital (4.7 company rating)

    Jersey City, NJ

    Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.

    Role: Data Engineer
    Location: Merrimack, NH / Smithfield, RI / Jersey City, NJ
    Duration: Full-Time/W2

    Job Description:

    Must-haves:
    - Python for running ETL batch jobs
    - Heavy SQL for data analysis, validation, and querying
    - AWS and the ability to move data through the data stages and into the target databases; Postgres is the target database, so that is required

    Nice-to-haves:
    - Snowflake
    - Java for API development (will teach this)
    - Experience in asset management for domain knowledge
    - Production support debugging and processing of vendor data

    The Expertise and Skills You Bring
    - A proven foundation in data engineering: bachelor's degree or higher preferred, 10+ years' experience
    - Extensive experience with ETL technologies; design and develop ETL reporting and analytics solutions
    - Knowledge of data warehousing methodologies and concepts (preferred)
    - Advanced data manipulation languages and frameworks (Java, Python, JSON) (required)
    - RDBMS experience (Snowflake, PostgreSQL) (required)
    - Knowledge of cloud platforms and services (AWS - IAM, EC2, S3, Lambda, RDS) (required)
    - Designing and developing low- to moderate-complexity data integration solutions (required)
    - Experience with DevOps, Continuous Integration, and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) preferred
    - Expert in SQL and stored procedures on any relational database
    - Good at debugging, analysis, and production support
    - Application development based on JIRA stories (Agile environment)
    - Demonstrable experience with ETL tools (Informatica, SnapLogic)
    - Experience working with Python in an AWS environment
    - Create, update, and maintain technical documentation for software-based projects and products
    - Solve production issues
    - Interact effectively with business partners to understand business requirements and assist in the generation of technical requirements
    - Participate in architecture, technical design, and product implementation discussions
    - Working knowledge of Unix/Linux operating systems and shell scripting
    - Experience developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipelines, including software configuration management, test automation, version control, and static code analysis
    - Excellent interpersonal and communication skills
    - Ability to work with global Agile teams
    - Proven ability to deal with ambiguity and work in a fast-paced environment
    - Ability to mentor junior data engineers

    The Value You Deliver
    - Help the team design and build best-in-class data solutions using a very diversified tech stack
    - Strong experience working in large teams and proven technical leadership capabilities
    - Knowledge of enterprise-level implementations such as data warehouses and automated solutions
    - Ability to negotiate, influence, and work with business peers and management
    - Ability to develop and drive a strategy per the needs of the team

    Good to have: Full-stack programming knowledge; hands-on test case/plan preparation within Jira
    $81k-105k yearly est. 1d ago
  • Hadoop Developer (can do transfer - NO 3rd party Vendors) Plano, TX

    Mitchell Martin Inc. (4.5 company rating)

    Plano, TX

    Here are the details:
    - Long-term contract; contract-to-hire possible
    - H1B transfer candidates are WELCOME to apply!
    - Green Card, EAD, and US Citizen candidates are encouraged to apply

    Position Summary
    The Hadoop Developer will work on one or more projects in the Hadoop data lake, including technical deliverables as per business needs. They will work closely with project managers and tech managers to develop integrated technical applications on the data lake platform, from conceptualization and project planning to post-implementation support. They are responsible for the complete development life cycle of a Hadoop project implementation, as well as for developing new reusable utilities and understanding and enhancing existing utilities.

    Primary Skills
    - Hadoop
    - Spark
    - Hive

    Required Qualifications
    - Bachelor's degree in a technical or business-related field, or equivalent education and related training
    - Seven years of experience in data warehousing architectural approaches and a minimum of 3 years in big data (Cloudera)
    - Exposure to and strong working knowledge of distributed systems
    - Excellent understanding of client-service models and customer orientation in service delivery
    - Ability to grasp the 'big picture' for a solution by considering all potential options in the impacted area
    - Aptitude to understand and adapt to newer technologies
    - Assist in the evaluation of new solutions for integration into the Hadoop roadmap/strategy
    - Motivate internal and external resources to deliver on project commitments
    $96k-122k yearly est. 4d ago
  • Data Engineer

    Robert Half (4.5 company rating)

    Dallas, TX

    We are seeking a highly experienced Senior Data Engineer with deep expertise in modern data engineering frameworks and cloud-native architectures, primarily on AWS. This role focuses on designing, building, and optimizing scalable data pipelines and distributed systems. You will collaborate cross-functionally to deliver secure, high-quality data solutions that drive business decisions.

    Key Responsibilities
    - Design & Build: Develop and maintain scalable, highly available AWS-based data pipelines, specializing in EKS/ECS containerized workloads and services like Glue, EMR, and Lake Formation.
    - Orchestration: Implement automated data ingestion, transformation, and workflow orchestration using Airflow, NiFi, and AWS Step Functions.
    - Real-time: Architect and implement real-time streaming solutions with Kafka, MSK, and Flink.
    - Data Lake & Storage: Architect secure S3 data storage and govern data lakes using Lake Formation and Glue Data Catalog.
    - Optimization: Optimize distributed processing solutions (Databricks, Spark, Hadoop) and troubleshoot performance across cloud-native systems.
    - Governance: Ensure robust data quality, security, and governance via IAM, Lake Formation controls, and automated validations.
    - Mentorship: Mentor junior team members and foster technical excellence.

    Requirements
    - Experience: 7+ years in data engineering; strong hands-on experience designing cloud data pipelines.
    - AWS Expertise: Deep proficiency in EKS, ECS, S3, Lake Formation, Glue, EMR, IAM, and MSK.
    - Core Tools: Strong experience with Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, and Flink.
    - Coding: Proficiency in Python, Scala, or Java for building data pipelines and automation.
    - Databases: Strong SQL skills and experience with relational/NoSQL databases (e.g., Redshift, DynamoDB).
    - Cloud-Native Skills: Strong knowledge of Kubernetes, containerization, and CI/CD pipelines.
    - Education: Bachelor's degree in Computer Science or a related field.
    $86k-121k yearly est. 1d ago
  • Senior Data Engineer

    Addison Group (4.6 company rating)

    Houston, TX

    About the Role
    The Senior Data Engineer will play a critical role in building and scaling an enterprise data platform to enable analytics, reporting, and operational insights across the organization. This position requires deep expertise in Snowflake and cloud technologies (AWS or Azure), along with strong upstream oil & gas domain experience. The engineer will design and optimize data pipelines, enforce data governance and quality standards, and collaborate with cross-functional teams to deliver reliable, scalable data solutions.

    Key Responsibilities

    Data Architecture & Engineering
    - Design, develop, and maintain scalable data pipelines using Snowflake, AWS/Azure, and modern data engineering tools.
    - Implement ETL/ELT processes integrating data from upstream systems (SCADA, production accounting, drilling, completions, etc.).
    - Architect data models supporting both operational reporting and advanced analytics.
    - Establish and maintain frameworks for data quality, validation, and lineage to ensure enterprise data trust.

    Platform Development & Optimization
    - Lead the build and optimization of Snowflake-based data warehouses for performance and cost efficiency.
    - Design cloud-native data solutions leveraging AWS/Azure services (S3, Lambda, Azure Data Factory, Databricks).
    - Manage large-scale time-series and operational data processing workflows.
    - Implement strong security, access control, and governance practices.

    Technical Leadership & Innovation
    - Mentor junior data engineers and provide technical leadership across the data platform team.
    - Research and introduce new technologies to enhance platform scalability and automation.
    - Build reusable frameworks, components, and utilities to streamline delivery.
    - Support AI/ML initiatives by delivering production-ready, high-quality data pipelines.

    Business Partnership
    - Collaborate with stakeholders across business units to translate requirements into technical solutions.
    - Work with analysts and data scientists to enable self-service analytics and reporting.
    - Ensure data integration supports regulatory and compliance reporting.
    - Act as a bridge between business and technical teams to ensure alignment and impact.

    Qualifications & Experience

    Education
    - Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
    - Advanced degree or relevant certifications (SnowPro, AWS/Azure Data Engineer, Databricks) preferred.

    Experience
    - 7+ years in data engineering roles, with at least 3 years on cloud data platforms.
    - Proven expertise in Snowflake and at least one major cloud platform (AWS or Azure).
    - Hands-on experience with upstream oil & gas data (wells, completions, SCADA, production, reserves, etc.).
    - Demonstrated success delivering operational and analytical data pipelines.

    Technical Skills
    - Advanced SQL and Python programming skills.
    - Strong background in data modeling, ETL/ELT, cataloging, lineage, and data security.
    - Familiarity with Airflow, Azure Data Factory, or similar orchestration tools.
    - Experience with CI/CD, Git, and automated testing.
    - Knowledge of BI tools such as Power BI, Spotfire, or Tableau.
    - Understanding of AI/ML data preparation and integration.
    $86k-125k yearly est. 2d ago
  • Data Engineer

    Sharp Decisions (4.6 company rating)

    New York, NY

    Hey all, we are looking for a mid-level data engineer. No third parties.

    As a result of this expansion, we are seeking experienced software data engineers with 5+ years of relevant experience to support the design and development of a strategic data platform for SMBC Capital Markets and Nikko Securities Group.

    Qualifications and Skills
    • Proven experience as a Data Engineer with experience in the Azure cloud
    • Experience implementing solutions using:
      • Azure cloud services
      • Azure Data Factory
      • Azure Data Lake Gen 2
      • Azure Databases
      • Azure Data Fabric
      • API Gateway management
      • Azure Functions
    • Well versed in Azure Databricks
    • Strong SQL skills with RDBMS or NoSQL databases
    • Experience developing APIs using FastAPI or similar frameworks in Python
    • Familiarity with the DevOps lifecycle (git, Jenkins, etc.) and CI/CD processes
    • Good understanding of ETL/ELT processes
    • Experience in the financial services industry, financial instruments, asset classes, and market data is a plus
    $85k-111k yearly est. 2d ago
  • Data Engineer

    Richard, Wayne & Roberts (4.3 company rating)

    Houston, TX

    Python Data Engineer - Houston, TX (Onsite Only)

    A global energy and commodities organization is seeking an experienced Python Data Engineer to expand and optimize data assets that support high-impact analytics. This role works closely with traders, analysts, researchers, and data scientists to translate business needs into scalable technical solutions. The position is fully onsite due to the collaborative, fast-paced nature of the work. Candidates MUST come from an Oil & Gas organization, preferably a commodity trading firm. CANNOT do C2C.

    Key Responsibilities
    - Build modular, reusable Python components to connect external data sources with internal tools and databases.
    - Partner with business stakeholders to define data ingestion and access requirements.
    - Translate business requirements into well-designed technical deliverables.
    - Maintain and enhance the central Python codebase following established standards.
    - Contribute to internal developer tools and ETL frameworks, helping standardize and consolidate core functionality.
    - Collaborate with global engineering teams and participate in internal Python community initiatives.

    Qualifications
    - 7+ years of professional Python development experience.
    - Strong background in data engineering and pipeline development.
    - Experience with web scraping tools (Requests, BeautifulSoup, Selenium).
    - Hands-on Oracle PL/SQL development, including stored procedures.
    - Strong grasp of object-oriented design, design patterns, and service-oriented architectures.
    - Experience with Agile/Scrum, code reviews, version control, and issue tracking.
    - Familiarity with scientific computing libraries (Pandas, NumPy).
    - Excellent communication skills.
    - Industry experience in energy or commodities preferred.
    - Exposure to containerization (Docker, Kubernetes) is a plus.
    $83k-120k yearly est. 2d ago
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese Bank
    Experience Level: Senior (10+ Years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    - Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    - Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    - Ensure data security, compliance, lineage, and governance controls.
    - Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    - Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    - 10+ years of data engineering experience.
    - Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    - Azure certifications strongly preferred.
    - Strong SQL, Python, and cloud data architecture skills.
    - Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 2d ago
  • Senior Data Engineer

    Luna Data Solutions, Inc. (4.4 company rating)

    Austin, TX

    We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX. The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.

    Key Responsibilities

    1. Data Architecture & Strategy
    - Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
    - Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing.
    - Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
    - Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.

    2. Data Integration & Pipeline Development
    - Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
    - Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
    - Optimize pipelines for cost-effectiveness, performance, and scalability.

    3. Master Data Management (MDM) & Data Governance
    - Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
    - Define and manage data governance, metadata, and data quality frameworks.
    - Partner with business teams to align data standards and maintain data integrity across domains.

    4. API Management & Integration
    - Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
    - Design secure, reliable data services for internal and external consumers.
    - Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.

    5. Database & Platform Administration
    - Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
    - Monitor and optimize cost, performance, and scalability across Azure data services.
    - Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.

    6. Collaboration & Leadership
    - Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
    - Mentor junior engineers and define best practices for coding, data modeling, and solution design.
    - Contribute to enterprise-wide data strategy and roadmap development.

    Required Qualifications
    - Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
    - 5+ years of hands-on experience in Azure-based data engineering and architecture.
    - Strong proficiency with Azure Data Factory, Azure Synapse, Azure Databricks, and Azure Data Lake Storage Gen2; SQL, Python, PySpark, and PowerShell; Azure API Management and Logic Apps.
    - Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
    - Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
    - Familiarity with MDM concepts, data governance frameworks, and metadata management.
    - Experience with automation, data-focused CI/CD, and IaC.
    - Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.

    What We Offer
    - Competitive compensation and benefits package

    Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
    $74k-95k yearly est. 4d ago
  • Data Engineer

    Interactive Resources-IR (4.2 company rating)

    Austin, TX

    About the Role
    We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $84k-111k yearly est. 4d ago
  • Oracle Data Modeler

    Yoh, A Day & Zimmermann Company (4.7 company rating)

    Dallas, TX

    Oracle Data Modeler (Erwin)
    6+ month contract (W2 ONLY - NO C2C)
    Downtown Dallas, TX (Onsite)

    Primary responsibilities of the Data Modeler include designing, developing, and maintaining enterprise-grade data models that support critical business initiatives, analytics, and operational systems. The ideal candidate is proficient in industry-standard data modeling tools (with hands-on expertise in Erwin Data Modeler) and has deep experience with Oracle databases. The candidate will also translate complex business requirements into robust, scalable, and normalized data models while ensuring alignment with data governance, performance, and integration standards.

    Responsibilities
    - Design and develop conceptual, logical, and physical data models using Erwin Data Modeler (required).
    - Generate, review, and optimize DDL (Data Definition Language) scripts for database objects (tables, views, indexes, constraints, partitions, etc.).
    - Perform forward and reverse engineering of data models from existing Oracle and SQL Server databases.
    - Collaborate with data architects, DBAs, ETL developers, and business stakeholders to gather and refine requirements.
    - Ensure data models adhere to normalization standards (3NF/BCNF), data integrity, and referential integrity.
    - Support dimensional modeling (star/snowflake schemas) for data warehousing and analytics use cases.
    - Conduct model reviews, impact analysis, and version control using Erwin or comparable tools.
    - Participate in data governance initiatives, including metadata management, naming standards, and lineage documentation.
    - Optimize models for performance, scalability, and maintainability across large-scale environments.
    - Assist in database migrations, schema comparisons, and synchronization between environments (Dev/QA/Prod).
    - Assist in optimizing existing data solutions.
    - Follow Oncor's Data Governance Policy and Information Classification and Protection Policy.
    - Participate in design reviews and take guidance from the Data Architecture team members.

    Qualifications
    - 3+ years of hands-on data modeling experience in enterprise environments.
    - Expert proficiency with Erwin Data Modeler (version 9.x or higher preferred), including subject areas, model templates, and DDL generation.
    - Advanced SQL skills and a deep understanding of Oracle (11g/12c/19c/21c).
    - Strong command of DDL: creating and modifying tables, indexes, constraints, sequences, synonyms, and materialized views.
    - Solid grasp of database internals: indexing strategies, partitioning, clustering, and query execution plans.
    - Experience with data modeling best practices: normalization, denormalization, surrogate keys, slowly changing dimensions (SCD), and data vault (a plus).
    - Familiarity with version control (e.g., Git) and model comparison/diff tools.
    - Excellent communication skills: ability to document models clearly and present to technical and non-technical audiences.
    - Self-motivated, with an ability to multi-task.
    - Capable of presenting to all levels of audiences.
    - Works well in a team environment.
    - Experience with Hadoop/MongoDB a plus.

    Estimated Min Rate: $63.00
    Estimated Max Rate: $90.00

    What's In It for You?
    We welcome you to be a part of one of the largest and most legendary global staffing companies to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community, which will provide you with access to Yoh's vast network of opportunities, including this exclusive opportunity available to you.

    Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
    - Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
    - Health Savings Account (HSA) (for employees working 20+ hours per week)
    - Life & Disability Insurance (for employees working 20+ hours per week)
    - MetLife Voluntary Benefits
    - Employee Assistance Program (EAP)
    - 401K Retirement Savings Plan
    - Direct Deposit & weekly ePayroll
    - Referral Bonus Programs
    - Certification and training opportunities

    Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.

    Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.

    For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship, potentially resulting in the withdrawal of a conditional offer of employment.

    It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.

    By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
    $63 hourly 1d ago
  • Data Architect

    KPI Partners (4.8 company rating)

    Plano, TX

    KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.

    Title: Senior Data Architect
    Location: Plano, TX (Hybrid)
    Job Type: Contract - 6 Months
    Key Skills: SQL, PySpark, Databricks, and Azure Cloud
    Key Note: Looking for a Data Architect who is hands-on with SQL, PySpark, Databricks, and Azure Cloud.

    About the Role:
    We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging, multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure native services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.

    Key Responsibilities:
    - Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies.
    - Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
    - Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
    - Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
    - Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.

    Must-Have Skills & Qualifications:
    - Minimum 12+ years of overall experience in the IT industry.
    - 4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
    - 4+ years of hands-on experience developing and implementing data pipelines using the Azure stack (Azure, ADF, Databricks, Functions).
    - Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
    - Strong knowledge of ETL processes and data warehousing fundamentals.
    - Self-motivated and independent, with a "let's get this done" mindset and the ability to thrive in a fast-paced and dynamic environment.

    Good-to-Have Skills:
    - Databricks certification is a plus.
    - Data modeling, Azure Architect certification.
    $88k-123k yearly est. 4d ago
  • Senior Oracle Data Architect (HANDS ON)

    Yoh, A Day & Zimmermann Company (4.7 company rating)

    Dallas, TX

    Oracle Data Architect (HANDS ON)
    12+ month contract
    Downtown Dallas, TX (HYBRID)

    Primary responsibilities of the Senior Data Architect include designing and managing data architectural solutions for multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL data management initiatives. The candidate will be in an expert role and will work closely with Business, DBA, ETL, and Data Management teams, providing solutions for complex data-related initiatives. This individual will also be responsible for developing and managing Data Governance and Master Data Management solutions. The candidate must have good technical and communication skills, coupled with the ability to mentor effectively.

    Responsibilities
    - Establishing policies, procedures, and guidelines regarding all aspects of Data Governance
    - Ensure data decisions are consistent and best practices are adhered to
    - Ensure Data Standardization definitions, Data Dictionary, and Data Lineage are kept up to date and accessible
    - Work with ETL, Replication, and DBA teams to determine best practices as they relate to data transformations, data movement, and derivations
    - Work with support teams to ensure consistent and proactive support methodologies are in place for all aspects of data movements and data transformations
    - Work with and mentor Data Architects and Data Analysts to ensure best practices are adhered to for database design and data management
    - Assist in overall architectural solutions including, but not limited to, Data Warehouse, ODS, and Data Replication/ETL data management initiatives
    - Work with the business teams and Enterprise Architecture team to ensure the best architectural solutions from a data perspective
    - Create a strategic roadmap for MDM implementation
    - Responsible for implementing a Master Data Management tool
    - Establishing policies, procedures, and guidelines regarding all aspects of Master Data Management
    - Ensure architectural rules and the design of the MDM process are documented and best practices are adhered to

    Qualifications
    - MUST HAVE data modeling skills, Oracle Exadata, GoldenGate
    - 5+ years of Data Architecture experience, including OLTP, Data Warehouse, Big Data
    - 5+ years of Solution Architecture experience
    - 5+ years of MDM experience
    - 5+ years of Data Governance experience, with working knowledge of best practices
    - Extensive working knowledge of all aspects of data movement and processing, including middleware, ETL, API, and OLAP, and best practices for data tracking
    - Good communication skills
    - Self-motivated
    - Capable of presenting to all levels of audiences
    - Works well in a team environment

    Estimated Min Rate: $80.00
    Estimated Max Rate: $90.00

    What's In It for You?
    We welcome you to be a part of one of the largest and most legendary global staffing companies to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community, which will provide you with access to Yoh's vast network of opportunities, including this exclusive opportunity available to you.

    Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
    - Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
    - Health Savings Account (HSA) (for employees working 20+ hours per week)
    - Life & Disability Insurance (for employees working 20+ hours per week)
    - MetLife Voluntary Benefits
    - Employee Assistance Program (EAP)
    - 401K Retirement Savings Plan
    - Direct Deposit & weekly ePayroll
    - Referral Bonus Programs
    - Certification and training opportunities

    Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.

    Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.

    For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship, potentially resulting in the withdrawal of a conditional offer of employment.

    It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.

    By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
    $80 hourly 2d ago
  • SAP Data Migration Developer

    Numeric Technologies (4.5 company rating)

    Englewood, NJ

    SAP S4 Data Migration Developer
    Duration: 6 Months
    Rate: Competitive Market Rate

    This key role is responsible for the development and configuration of the SAP Data Services platform within the client's corporate technology organization to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone.

    Key Responsibilities
    - Responsible for SAP Data Services development, design, job creation, and execution.
    - Responsible for efficient design, performance tuning, and ensuring timely data processing, validation, and verification.
    - Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects).
    - Responsible for data conversion using staging tables; work with SAP teams on data loads into SAP S4 and MDG environments.
    - Responsible for building validation rules, scorecards, and data for consumption in Information Steward, pursuant to conversion rules per the functional specifications.
    - Responsible for adhering to project timelines and deliverables and accounting for object delivery across the teams involved.
    - Take part in meetings; execute plans; design and develop custom solutions within the client's O&T Engineering scope.
    - Work in all facets of SAP data migration projects with a focus on SAP S4 data migration using the SAP Data Services platform.
    - Hands-on development experience with ETL from a legacy SAP ECC environment, conversions, and jobs.
    - Demonstrate capabilities with performance tuning and handling large data sets.
    - Understand SAP tables, fields, and load processes into SAP S4 and MDG systems.
    - Build validation rules; customize and deploy Information Steward scorecards, data reconciliation, and validation.
    - Be a problem solver and build robust conversion and validation per requirements.

    Skills and Experience
    - 6-8 years of experience as a developer with the SAP Data Services application
    - At least 2 SAP S4 conversion projects with DMC, staging tables, and updating SAP Master Data Governance
    - Good communication skills; ability to deliver key objects on time and support testing and mock cycles
    - 4-5 years of development experience in SAP Data Services 4.3 Designer and Information Steward
    - Taking ownership and ensuring high-quality results
    - Active in seeking feedback and making necessary changes

    Specific previous experience:
    - Proven experience implementing SAP Data Services in a multinational environment
    - Experience designing data loads of large volumes to SAP S4 from SAP ECC
    - Must have used HANA staging tables
    - Experience developing Information Steward for data reconciliation and validation (not profiling)

    Requirements
    - Adhere to the work availability schedule as noted above; be on time for meetings
    - Written and verbal communication in English
    $78k-98k yearly est. 5d ago
