Senior Software Engineer
Senior Software Engineer job at Vorys, Sater, Seymour and Pease
Precision eControl (PeC) is a wholly owned ancillary business of Vorys that provides integrated solutions to help brands control the sales of their products in the age of eCommerce. We have represented more than 300 brands, including many of the world's largest companies. PeC's full scope of services allows us to provide a truly comprehensive approach that delivers unique business value.
Position Summary:
The Senior Software Engineer (Front-End) will design, develop, and implement software solutions utilizing Laravel, TailwindCSS, HTML, SQL, and JavaScript. This position is responsible for developing backend and frontend components, database schemas and models, writing/maintaining tests, creating/maintaining deployment pipelines and environments, and responding to support issues and production bugs/outages. At this time, candidates who would work in the following states will not be considered for this role: AZ, CA, CO, CT, DE, DC, HI, IL, MA, ME, MI, MD, MN, NV, NJ, NY, RI, VT, and WA.
Essential Functions:
Develop and maintain front-end applications using Vue, Tailwind CSS, JavaScript, Filament, and related technologies.
Develop and maintain Laravel applications using PHP, Laravel, SQL, and related technologies.
Write and maintain unit tests and automated click tests.
Maintain and develop components for a shared design component library.
Participate in sprint ceremonies, collaborate with product and design.
Debug and troubleshoot issues, including production support, across the backend, frontend, and database components of the application.
Perform code reviews, provide feedback to other engineers, and ensure the quality of the codebase.
Maintain CI/CD pipelines, infrastructure, and databases.
Knowledge, Skills and Abilities Required:
5+ years of experience with Vue (or similar frameworks such as React or Svelte)
3+ years of experience integrating back-end business applications with front-end, preferably PHP/Laravel
Experience developing and maintaining frontend component libraries and working with Product/Design on UX
Experience performing code reviews and providing feedback/mentorship to fellow engineers
Experience debugging frontend and backend issues
Ability to collaborate closely with cross-functional teams, including designers and product managers
Ability to turn designs into responsive frontend code
Demonstrated knowledge of accessibility best practices
Desirable But Not Essential:
Experience building/maintaining design systems
Experience with TailwindCSS
Education and Experience:
Bachelor's degree in related discipline or combination of equivalent education and experience.
Bachelor's degree in computer science preferred.
5 - 7 years of experience in similar field.
The expected pay scale for this position is $135,000.00 - $160,000.00 and represents our good faith estimate of the starting rate of pay at the time of posting. The actual compensation offered will depend on factors such as your qualifications, relevant experience, education, work location, and market conditions.
At PeC, we are dedicated to fostering a workplace where employees can succeed both personally and professionally. We offer competitive compensation along with a robust benefits package designed to support your health, well-being, and long-term goals. Our benefits include medical, dental, vision, FSA, life and disability coverage, paid maternity & parental leave, discretionary bonus opportunity, family building resources, identity theft protection, a 401(k) plan with discretionary employer contribution potential, and paid sick, personal and vacation time. Some benefits are provided automatically, while others may be available for voluntary enrollment. You'll also have access to opportunities for professional growth, work-life balance, and programs that recognize and celebrate your contributions.
Equal Opportunity Employer:
PeC does not discriminate in hiring or terms and conditions of employment because of an individual's sex (including pregnancy, childbirth, and related medical conditions), race, age, religion, national origin, ancestry, color, sexual orientation, gender identity, gender expression, genetic information, marital status, military/veteran status, disability, or any other characteristic protected by local, state or federal law. PeC only hires individuals authorized for employment in the United States.
PeC is committed to providing reasonable accommodations to qualified individuals in our employment application process unless doing so would constitute an undue hardship. If you need assistance or an accommodation in our employment application process due to a disability; due to a limitation related to, affected by, or arising out of pregnancy, childbirth, or related medical conditions; or due to a sincerely held religious belief, practice, or observance, please contact Julie McDonald, CHRO. Our policy regarding requests for reasonable accommodation applies to all aspects of the hiring process.
#LI-Remote
Data Engineer - AI & Data Modernization
Location: San Antonio, TX
Job Family:
Data Science Consulting
Travel Required:
Up to 25%
Clearance Required:
Ability to Obtain Public Trust
What You Will Do:
We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation.
This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics.
Client Leadership & Delivery
Collaborate with FCA clients to understand data architecture and reporting needs.
Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau.
Ensure delivery excellence and measurable outcomes across data migration and visualization efforts.
Solution Development & Innovation
Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python.
Develop and optimize Tableau dashboards aligned with federal reporting standards.
Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting.
Practice & Team Leadership
Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions.
Support documentation, testing, and deployment of data products.
Mentor junior developers and contribute to reusable frameworks and accelerators.
What You Will Need:
US Citizenship is required
Bachelor's degree is required
Minimum TWO (2) years of experience in data engineering and dashboard development
Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure)
Strong proficiency in SQL, Python, and Spark
Experience building ETL pipelines and integrating data sources into reporting platforms
Familiarity with data governance, metadata, and compliance frameworks
Excellent communication, facilitation, and stakeholder engagement skills
What Would Be Nice To Have:
AI/LLM Certifications
Experience working with FCA clients such as DOT, GSA, USDA, or similar
Familiarity with federal contracting and procurement processes
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
ETL/ELT Data Engineer (Secret Clearance) - Hybrid
LaunchCode is recruiting for a Software Data Engineer to work at one of our partner companies!
Details:
Full-Time W2, Salary
Immediate opening
Hybrid - Austin, TX (onsite 1-2 times a week)
Pay: $85K-$120K
Minimum Experience: 4 years
Security Clearance: Active DoD Secret Clearance
Disclaimer: Please note that we are unable to provide work authorization or sponsorship for this role, now or in the future. Candidates requiring current or future sponsorship will not be considered.
Job description
Job Summary
A Washington, DC-based software solutions provider, founded in 2017, specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, the company focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end-users, with innovation driven by its innovation centers. The company has a presence in Boston, MA; Colorado Springs, CO; San Antonio, TX; and St. Louis, MO.
Why the company?
Environment of Autonomy
Innovative Commercial Approach
People over process
We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory, a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto “By Soldiers, For Soldiers,” ASWF equips service members to develop mission-critical software solutions independently, which is especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.
Required Skills:
Active DoD Secret Clearance (Required)
4+ years of experience in data science, data engineering, or similar roles.
Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow.
Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow).
Solid understanding of data warehousing concepts (e.g., Kimball, Inmon methodologies) and experience with data modeling for analytical purposes.
Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation.
CompTIA Security+ Certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant.
Nice to Have:
Familiarity with SBIR technologies and transformative platform shifts
Experience working in Agile or DevSecOps environments
2+ years of experience interfacing with platform engineers and data visibility teams, managing AWS resources, and administering GitLab
#LI-hybrid #austintx #ETLengineer #dataengineer #army #aswf #clearancejobs #clearedjobs #secretclearance #ETL
Data Modeler II
Location: Houston, TX
Job Title: Data Modeler II
Type: W2 Contract (USA)/INC or T4 (Canada)
Work Setup: Hybrid (On-site with flexibility to work from home two days per week)
Industry: Oil & Gas
Benefits: Health, Dental, Vision
Job Summary
We are seeking a Data Modeler II with a product-driven, innovative mindset to design and implement data solutions that deliver measurable business value for Supply Chain operations. This role combines technical expertise with project management responsibilities, requiring collaboration with IT teams to develop solutions for small and medium-sized business challenges. The ideal candidate will have hands-on experience with data transformation, AI integration, and ERP systems, while also being able to communicate technical concepts in clear, business-friendly language.
Key Responsibilities
Develop innovative data solutions leveraging knowledge of Supply Chain processes and oil & gas industry value drivers.
Design and optimize ETL pipelines for scalable, high-performance data processing.
Integrate solutions with enterprise data platforms and visualization tools.
Gather and clean data from ERP systems for analytics and reporting.
Utilize AI tools and prompt engineering to enhance data-driven solutions.
Collaborate with IT and business stakeholders to deliver medium- and low-complexity solutions for local issues.
Oversee project timelines, resources, and stakeholder engagement.
Document project objectives, requirements, and progress updates.
Translate technical language into clear, non-technical terms for business users.
Support continuous improvement and innovation in data engineering and analytics.
Basic / Required Qualifications
Bachelor's degree in Commerce (SCM), Data Science, Engineering, or related field.
Hands-on experience with:
Python for data transformation.
ETL tools (Power Automate, Power Apps; Databricks is a plus).
Oracle Cloud (Supply Chain and Financial modules).
Knowledge of ERP systems (Oracle Cloud required; SAP preferred).
Familiarity with AI integration and low-code development platforms.
Strong understanding of Supply Chain processes; oil & gas experience preferred.
Ability to manage projects and engage stakeholders effectively.
Excellent communication skills for translating technical concepts into business language.
Required Knowledge / Skills / Abilities
Advanced proficiency in data science concepts, including statistical analysis and machine learning.
Experience with prompt engineering and AI-driven solutions.
Ability to clean and transform data for analytics and reporting.
Strong documentation, troubleshooting, and analytical skills.
Business-focused mindset with technical expertise.
Ability to think outside the box and propose innovative solutions.
Special Job Characteristics
Hybrid work schedule (Wednesdays and Fridays remote).
Ability to work independently and oversee own projects.
Junior Data Engineer
Contract-to-Hire
Columbus, OH (Hybrid)
Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes.
Job Responsibilities
Automate key processes and enhance data quality
Improve data ingestion processes and enhance machine learning capabilities
Manage substitutions and allocations to streamline product ordering
Work on logistics-related data engineering tasks
Build and maintain ML models for predictive analytics
Interface with various customer systems
Collaborate on integrating AI models into customer service
Qualifications
Bachelor's degree in related field
0-2 years of relevant experience
Proficiency in SQL and Python
Understanding of GCP/BigQuery (or any cloud experience, basic certifications a plus).
Knowledge of data science concepts.
Business acumen and understanding (corporate experience or internship preferred).
Familiarity with Tableau
Strong analytical skills
Aptitude for collaboration and knowledge sharing
Ability to present confidently in front of leaders
Why Should You Apply?
You will be part of custom technical training and professional development through our Elevate Program!
Start your career with a Fortune 15 company!
Access to cutting-edge technologies
Opportunity for career growth
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Data Modeler
Location: Midland, TX
Job Title: Data Modeler - Net Zero Program Analyst
Type: W2 Contract (12-month duration)
Work Setup: On-site
Industry: Oil & Gas
Benefits: Dental, Healthcare, Vision & 401(k)
Airswift is seeking a Data Modeler - Net Zero Program Analyst to join one of our major clients on a 12-month contract. This newly created role supports the company's decarbonization and Net Zero initiatives by managing and analyzing operational data to identify trends and optimize performance. The position involves working closely with operations and analytics teams to deliver actionable insights through data visualization and reporting.
Responsibilities:
Build and maintain Power BI dashboards to monitor emissions, operational metrics, and facility performance.
Extract and organize data from systems such as SiteView, ProCount, and SAP for analysis and reporting.
Conduct data validation and trend analysis to support sustainability and operational goals.
Collaborate with field operations and project teams to interpret data and provide recommendations.
Ensure data consistency across platforms and assist with integration efforts (coordination only, no coding required).
Present findings through clear reports and visualizations for technical and non-technical stakeholders.
Required Skills and Experience:
7+ years of experience in data analysis within Oil & Gas or Energy sectors.
Strong proficiency in Power BI (required).
Familiarity with SiteView, ProCount, and/or SAP (preferred).
Ability to translate operational data into insights that support emissions reduction and facility optimization.
Experience with surface facilities, emissions estimation, or power systems.
Knowledge of other visualization tools (Tableau, Spotfire) is a plus.
High School Diploma or GED required.
Additional Details:
Preference for Midland-based candidates; Houston-based candidates will need to travel to Midland periodically (travel reimbursed).
No per diem offered.
Office-based role with low exposure risk.
Data Engineer
Location: Coppell, TX
Title: Data Engineer
Assignment Type: 6-12 month contract-to-hire
Compensation: $65/hr-$75/hr W2
Work Model: Hybrid (4 days on-site, 1 day remote)
Benefits: Medical, Dental, Vision, 401(k)
What we need is someone who comes with 8+ years of experience in the data engineering space and specializes in Microsoft Azure and Databricks. This person will be a part of multiple initiatives for the "New Development" and "Data Reporting" teams but will be primarily tasked with designing, building, maintaining, and automating their enterprise data architecture and pipelines within the cloud.
Technology-wise, candidates need skills in Azure Databricks (5+ years), cloud-based environments (Azure and/or AWS), Azure DevOps (ADO), SQL (ETL, SSIS packages), and PySpark or Scala automation, plus architecture experience in building pipelines, data modeling, data pipeline deployment, data mapping, etc.
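For context only, here is a minimal PySpark sketch of the kind of cloud pipeline automation this posting describes; the storage path, table, and column names are hypothetical, not taken from the posting:

```python
# Minimal sketch of an automated cloud pipeline step in PySpark.
# The ADLS path, table, and column names are hypothetical illustrations.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("nightly_orders_load").getOrCreate()

# Ingest raw files landed in cloud storage (hypothetical ADLS Gen2 path).
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/orders/")

# Basic cleanup: deduplicate, standardize timestamps, stamp the load date.
cleaned = (raw.dropDuplicates(["order_id"])
              .withColumn("order_ts", F.to_timestamp("order_ts"))
              .withColumn("load_date", F.current_date()))

# Append to a Delta table consumed by the reporting team.
cleaned.write.format("delta").mode("append").saveAsTable("reporting.orders")
```

A job like this would typically be scheduled and promoted through Azure DevOps pipelines, which is where the ADO and CI/CD skills come in.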
Top Skills:
-8+ years of Data Engineering/Business Intelligence experience
-Databricks and Azure Data Factory (most current: Unity Catalog for Databricks)
-Cloud-based environments (Azure or AWS)
-Data Pipeline Architecture and CI/CD methodology
-SQL
-Automation (Python (PySpark), Scala)
Hadoop Developer (can do transfer - NO 3rd party Vendors) Plano, TX
Here are the details:
Long-term contract / contract-to-hire possible
H1B Transfer candidates are WELCOME to apply!
Green Card, EAD and US Citizens are encouraged to apply
Position Summary
The Hadoop Developer will work on one or more projects in the Hadoop data lake, owning technical deliverables per business needs.
Work closely with project managers and tech managers to develop integrated technical applications in the data lake platform, from conceptualization and project planning through post-implementation support.
Responsible for the complete life cycle of Hadoop project implementations.
Responsible for developing new reusable utilities and for understanding and enhancing existing utilities
Primary Skill
Hadoop
Spark
Hive
Required Qualifications
Bachelor's degree in a technical or business-related field, or equivalent education and related training
Seven years of experience with data warehousing architectural approaches and a minimum of three years in big data (Cloudera)
Exposure to and strong working knowledge of distributed systems
Excellent understanding of client-service models and customer orientation in service delivery
Ability to grasp the 'big picture' for a solution by considering all potential options in the impacted area
Aptitude to understand and adapt to newer technologies
Assist in the evaluation of new solutions for integration into the Hadoop Roadmap/Strategy
Motivate internal and external resources to deliver on project commitments
Data Engineer
Location: Dallas, TX
We are seeking a highly experienced Senior Data Engineer with deep expertise in modern data engineering frameworks and cloud-native architectures, primarily on AWS. This role focuses on designing, building, and optimizing scalable data pipelines and distributed systems.
You will collaborate cross-functionally to deliver secure, high-quality data solutions that drive business decisions.
Key Responsibilities
Design & Build: Develop and maintain scalable, highly available AWS-based data pipelines, specializing in EKS/ECS containerized workloads and services like Glue, EMR, and Lake Formation.
Orchestration: Implement automated data ingestion, transformation, and workflow orchestration using Airflow, NiFi, and AWS Step Functions.
Real-time: Architect and implement real-time streaming solutions with Kafka, MSK, and Flink.
Data Lake & Storage: Architect secure S3 data storage and govern data lakes using Lake Formation and Glue Data Catalog.
Optimization: Optimize distributed processing solutions (Databricks, Spark, Hadoop) and troubleshoot performance across cloud-native systems.
Governance: Ensure robust data quality, security, and governance via IAM, Lake Formation controls, and automated validations.
Mentorship: Mentor junior team members and foster technical excellence.
Requirements
Experience: 7+ years in data engineering; strong hands-on experience designing cloud data pipelines.
AWS Expertise: Deep proficiency in EKS, ECS, S3, Lake Formation, Glue, EMR, IAM, and MSK.
Core Tools: Strong experience with Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, and Flink.
Coding: Proficiency in Python, Scala, or Java for building data pipelines and automation.
Databases: Strong SQL skills and experience with relational/NoSQL databases (e.g., Redshift, DynamoDB).
Cloud-Native Skills: Strong knowledge of Kubernetes, containerization, and CI/CD pipelines.
Education: Bachelor's degree in Computer Science or related field.
Senior Data Engineer
Location: Houston, TX
About the Role
The Senior Data Engineer will play a critical role in building and scaling an enterprise data platform to enable analytics, reporting, and operational insights across the organization.
This position requires deep expertise in Snowflake and cloud technologies (AWS or Azure), along with strong upstream oil & gas domain experience. The engineer will design and optimize data pipelines, enforce data governance and quality standards, and collaborate with cross-functional teams to deliver reliable, scalable data solutions.
Key Responsibilities
Data Architecture & Engineering
Design, develop, and maintain scalable data pipelines using Snowflake, AWS/Azure, and modern data engineering tools.
Implement ETL/ELT processes integrating data from upstream systems (SCADA, production accounting, drilling, completions, etc.).
Architect data models supporting both operational reporting and advanced analytics.
Establish and maintain frameworks for data quality, validation, and lineage to ensure enterprise data trust.
Platform Development & Optimization
Lead the build and optimization of Snowflake-based data warehouses for performance and cost efficiency.
Design cloud-native data solutions leveraging AWS/Azure services (S3, Lambda, Azure Data Factory, Databricks).
Manage large-scale time-series and operational data processing workflows.
Implement strong security, access control, and governance practices.
Technical Leadership & Innovation
Mentor junior data engineers and provide technical leadership across the data platform team.
Research and introduce new technologies to enhance platform scalability and automation.
Build reusable frameworks, components, and utilities to streamline delivery.
Support AI/ML initiatives by delivering production-ready, high-quality data pipelines.
Business Partnership
Collaborate with stakeholders across business units to translate requirements into technical solutions.
Work with analysts and data scientists to enable self-service analytics and reporting.
Ensure data integration supports regulatory and compliance reporting.
Act as a bridge between business and technical teams to ensure alignment and impact.
Qualifications & Experience
Education
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Advanced degree or relevant certifications (SnowPro, AWS/Azure Data Engineer, Databricks) preferred.
Experience
7+ years in data engineering roles, with at least 3 years on cloud data platforms.
Proven expertise in Snowflake and at least one major cloud platform (AWS or Azure).
Hands-on experience with upstream oil & gas data (wells, completions, SCADA, production, reserves, etc.).
Demonstrated success delivering operational and analytical data pipelines.
Technical Skills
Advanced SQL and Python programming skills.
Strong background in data modeling, ETL/ELT, cataloging, lineage, and data security.
Familiarity with Airflow, Azure Data Factory, or similar orchestration tools.
Experience with CI/CD, Git, and automated testing.
Knowledge of BI tools such as Power BI, Spotfire, or Tableau.
Understanding of AI/ML data preparation and integration.
Data Engineer
Python Data Engineer - Houston, TX (Onsite Only)
A global energy and commodities organization is seeking an experienced Python Data Engineer to expand and optimize data assets that support high-impact analytics. This role works closely with traders, analysts, researchers, and data scientists to translate business needs into scalable technical solutions. The position is fully onsite due to the collaborative, fast-paced nature of the work.
MUST come from an Oil & Gas organization; a commodity trading firm is preferred.
CANNOT do C2C.
Key Responsibilities
Build modular, reusable Python components to connect external data sources with internal tools and databases (see the sketch after this list).
Partner with business stakeholders to define data ingestion and access requirements.
Translate business requirements into well-designed technical deliverables.
Maintain and enhance the central Python codebase following established standards.
Contribute to internal developer tools and ETL frameworks, helping standardize and consolidate core functionality.
Collaborate with global engineering teams and participate in internal Python community initiatives.
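For illustration only, a minimal sketch of the kind of modular, reusable Python ingestion component described in the responsibilities above; the endpoint, table, and function names are hypothetical:

```python
# Minimal sketch of a reusable ingestion component: fetch an external
# JSON source and hand it to an internal store. All names are hypothetical.
import requests
import pandas as pd

def fetch_json_frame(url: str, timeout: int = 30) -> pd.DataFrame:
    """Pull a JSON feed from an external source into a DataFrame."""
    resp = requests.get(url, timeout=timeout)
    resp.raise_for_status()
    return pd.json_normalize(resp.json())

def load_frame(df: pd.DataFrame, table: str, conn) -> int:
    """Append a DataFrame to an internal table via a SQLAlchemy connection."""
    df.to_sql(table, conn, if_exists="append", index=False)
    return len(df)

# Hypothetical usage (endpoint and table name are illustrative):
#   df = fetch_json_frame("https://example.com/api/prices.json")
#   rows = load_frame(df, "raw_prices", engine.connect())
```

Keeping fetch and load as small, typed functions is what makes components like this reusable across a central codebase.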
Qualifications
7+ years of professional Python development experience.
Strong background in data engineering and pipeline development.
Experience with web scraping tools (Requests, BeautifulSoup, Selenium).
Hands-on Oracle PL/SQL development, including stored procedures.
Strong grasp of object-oriented design, design patterns, and service-oriented architectures.
Experience with Agile/Scrum, code reviews, version control, and issue tracking.
Familiarity with scientific computing libraries (Pandas, NumPy).
Excellent communication skills.
Industry experience in energy or commodities preferred.
Exposure to containerization (Docker, Kubernetes) is a plus.
Head of Data Science & AI
Location: Austin, TX
Duration: 6 month contract-to-hire
Compensation: $150K-160K
Work schedule: Monday-Friday (8 AM-5 PM CST) - onsite 3x per week
Benefits: This position is eligible for medical, dental, vision and 401(k)
The Head of Data Science & AI leads the organization's data science strategy and team, driving advanced analytics and AI initiatives to deliver business value and innovation. This role sets the strategic direction for data science, ensures alignment with organizational goals, and promotes a data-driven culture. It involves close collaboration with business and technology teams to identify opportunities for leveraging machine learning and AI to improve operations and customer experiences.
Key Responsibilities
Develop and execute a data science strategy and roadmap aligned with business objectives.
Build and lead the data science team, providing mentorship and fostering growth.
Partner with business leaders to identify challenges and deliver actionable insights.
Oversee design and deployment of predictive models, algorithms, and analytical frameworks.
Ensure data integrity, governance, and security in collaboration with engineering teams.
Communicate complex insights to non-technical stakeholders.
Manage infrastructure, tools, and budget for data science initiatives.
Drive experimentation with emerging AI technologies and ensure ethical AI practices.
Oversee full AI model lifecycle: development, deployment, monitoring, and compliance.
Qualifications
8+ years in data science/analytics with leadership experience.
Expertise in Python, R, SQL, and ML frameworks (TensorFlow, PyTorch, Scikit-Learn).
Experience deploying ML models and monitoring performance.
Familiarity with visualization tools (Tableau, Power BI).
Strong knowledge of data governance, advanced statistical methods, and AI trends.
Skills in project management tools (MS Project, JIRA) and software development best practices (CI/CD, Git, Agile).
Please apply directly to be considered.
Senior Data Engineer
We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX.
The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.
Key Responsibilities
1. Data Architecture & Strategy
Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing (see the sketch after this list).
Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.
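As an illustration of the medallion pattern referenced above, here is a minimal PySpark/Delta sketch; the table names, schema layout, and business rules are hypothetical, not taken from the posting:

```python
# Minimal sketch of a medallion (Bronze/Silver/Gold) flow in PySpark + Delta.
# All table names, paths, and rules below are hypothetical illustrations.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Bronze: land raw data as-is, tagged with ingestion metadata.
raw = spark.read.json("/landing/sensor_events/")
(raw.withColumn("_ingested_at", F.current_timestamp())
    .write.format("delta").mode("append").saveAsTable("bronze.sensor_events"))

# Silver: deduplicated, validated, conformed records.
silver = (spark.table("bronze.sensor_events")
               .dropDuplicates(["event_id"])
               .filter(F.col("reading").isNotNull()))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sensor_events")

# Gold: business-level aggregates ready for reporting and analytics.
gold = silver.groupBy("site_id").agg(F.avg("reading").alias("avg_reading"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.site_avg_readings")
```

Each layer adds guarantees (completeness, then cleanliness, then business meaning), which is what makes the pattern modular and scalable.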
2. Data Integration & Pipeline Development
Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
Optimize pipelines for cost-effectiveness, performance, and scalability.
3. Master Data Management (MDM) & Data Governance
Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
Define and manage data governance, metadata, and data quality frameworks.
Partner with business teams to align data standards and maintain data integrity across domains.
4. API Management & Integration
Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
Design secure, reliable data services for internal and external consumers.
Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.
5. Database & Platform Administration
Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
Monitor and optimize cost, performance, and scalability across Azure data services.
Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.
6. Collaboration & Leadership
Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
Mentor junior engineers and define best practices for coding, data modeling, and solution design.
Contribute to enterprise-wide data strategy and roadmap development.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
5+ years of hands-on experience in Azure-based data engineering and architecture.
Strong proficiency with the following:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2
SQL, Python, PySpark, PowerShell
Azure API Management and Logic Apps
Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
Familiarity with MDM concepts, data governance frameworks, and metadata management.
Experience with automation, data-focused CI/CD, and IaC.
Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.
What We Offer
Competitive compensation and benefits package
Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
Data Engineer
Location: Austin, TX
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Oracle Data Modeler
Oracle Data Modeler (Erwin)
6+ month contract (W2 ONLY - NO C2C)
Downtown Dallas, TX (Onsite)
Primary responsibilities of the Data Modeler include designing, developing, and maintaining enterprise-grade data models that support critical business initiatives, analytics, and operational systems. The ideal candidate is proficient in industry-standard data modeling tools (with hands-on expertise in Erwin Data Modeler) and has deep experience with Oracle databases. The candidate will also translate complex business requirements into robust, scalable, and normalized data models while ensuring alignment with data governance, performance, and integration standards.
Responsibilities
Design and develop conceptual, logical, and physical data models using Erwin Data Modeler (required).
Generate, review, and optimize DDL (Data Definition Language) scripts for database objects (tables, views, indexes, constraints, partitions, etc.).
Perform forward and reverse engineering of data models from existing Oracle and SQL Server databases.
Collaborate with data architects, DBAs, ETL developers, and business stakeholders to gather and refine requirements.
Ensure data models adhere to normalization standards (3NF/BCNF), data integrity, and referential integrity.
Support dimensional modeling (star/snowflake schemas) for data warehousing and analytics use cases.
Conduct model reviews, impact analysis, and version control using Erwin or comparable tools.
Participate in data governance initiatives, including metadata management, naming standards, and lineage documentation.
Optimize models for performance, scalability, and maintainability across large-scale environments.
Assist in database migrations, schema comparisons, and synchronization between environments (Dev/QA/Prod).
Assist in optimizing existing Data Solutions
Follow Oncor's Data Governance Policy and Information Classification and Protection Policy.
Participate in design reviews and take guidance from the Data Architecture team members.
Qualifications
3+ years of hands-on data modeling experience in enterprise environments.
Expert proficiency with Erwin Data Modeler (version 9.x or higher preferred) - including subject areas, model templates, and DDL generation.
Advanced SQL skills and deep understanding of Oracle (11g/12c/19c/21c).
Strong command of DDL - creating and modifying tables, indexes, constraints, sequences, synonyms, and materialized views.
Solid grasp of database internals: indexing strategies, partitioning, clustering, and query execution plans.
Experience with data modeling best practices: normalization, denormalization, surrogate keys, slowly changing dimensions (SCD), and data vault (a plus).
Familiarity with version control (e.g., Git) and model comparison/diff tools.
Excellent communication skills - ability to document models clearly and present to technical and non-technical audiences.
Self-Motivated, with an ability to multi-task
Capable of presenting to all levels of audiences
Works well in a team environment
Experience with Hadoop/MongoDB a plus
Estimated Min Rate: $63.00
Estimated Max Rate: $90.00
What's In It for You?
We welcome you to be a part of one of the largest and most legendary global staffing companies and to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK and Canada. Join Yoh's extensive talent community to gain access to Yoh's vast network of opportunities, including this exclusive opportunity. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
Health Savings Account (HSA) (for employees working 20+ hours per week)
Life & Disability Insurance (for employees working 20+ hours per week)
MetLife Voluntary Benefits
Employee Assistance Program (EAP)
401K Retirement Savings Plan
Direct Deposit & weekly epayroll
Referral Bonus Programs
Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
Data Architect
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
REMOTE MOSTLY WITH 1 DAY/MO ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavy use production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing).
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms outside the scope of this project.
Data Architect
Location: Dallas, TX
Primary responsibilities of the Senior Data Architect include designing and managing Data Architectural solutions for multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in an expert role and will work closely with Business, DBA, ETL and Data Management teams providing solutions for complex Data related initiatives. This individual will also be responsible for developing and managing Data Governance and Master Data Management solutions. This candidate must have good technical and communication skills coupled with the ability to mentor effectively.
Responsibilities
Establishing policies, procedures and guidelines regarding all aspects of Data Governance
Ensure data decisions are consistent, and best practices are adhered to
Ensure Data Standardization definitions, Data Dictionary and Data Lineage are kept up to date and accessible
Work with ETL, Replication and DBA teams to determine best practices as it relates to data transformations, data movement and derivations
Work with support teams to ensure consistent and pro-active support methodologies are in place for all aspects of data movements and data transformations
Work with and mentor Data Architects and Data Analysts to ensure best practices are adhered to for database design and data management
Assist in overall Architectural solutions including, but not limited to Data Warehouse, ODS, Data Replication/ETL Data Management initiatives
Work with the business teams and Enterprise Architecture team to ensure best architectural solutions from a Data perspective
Create a strategic roadmap for MDM implementation
Responsible for implementing a Master Data Management tool
Establishing policies, procedures and guidelines regarding all aspects of Master Data Management
Ensure Architectural rules and design of the MDM process are documented and best practices are adhered to
Qualifications
5+ years of Data Architecture experience, including OLTP, Data Warehouse, Big Data
5+ years of Solution Architecture experience
5+ years of MDM experience
5+ years of Data Governance experience, working knowledge of best practices
Extensive working knowledge of all aspects of Data Movement and Processing, including Middleware, ETL, API, OLAP and best practices for data tracking
Good Communication skills
Self-Motivated
Capable of presenting to all levels of audiences
Works well in a team environment
Data Architect
KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.
Title: Senior Data Architect
Location: Plano, TX (Hybrid)
Job Type: Contract - 6 Months
Key Skills: SQL, PySpark, Databricks, and Azure Cloud
Key Note: Looking for a Data Architect who is Hands-on with SQL, PySpark, Databricks, and Azure Cloud.
About the Role:
We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging and multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure Native Services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.
Key Responsibilities:
Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies.
Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.
Must-Have Skills & Qualifications:
12+ years of overall experience in the IT industry.
4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
4+ years of hands-on experience developing and implementing data pipelines using the Azure stack (ADF, Databricks, Functions).
Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
Strong knowledge of ETL processes and data warehousing fundamentals.
Self-motivated and independent, with a “let's get this done” mindset and the ability to thrive in a fast-paced and dynamic environment.
Good-to-Have Skills:
Databricks Certification is a plus.
Data Modeling, Azure Architect Certification.
Senior Oracle Data Architect (HANDS ON)
Oracle Data Architect (HANDS ON)
12+ month contract
Downtown Dallas, TX (HYBRID)
Primary responsibilities of the Senior Data Architect include designing and managing Data Architectural solutions for multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in an expert role and will work closely with Business, DBA, ETL and Data Management teams providing solutions for complex Data related initiatives. This individual will also be responsible for developing and managing Data Governance and Master Data Management solutions. This candidate must have good technical and communication skills coupled with the ability to mentor effectively.
Responsibilities
Establishing policies, procedures and guidelines regarding all aspects of Data Governance
Ensure data decisions are consistent and best practices are adhered to
Ensure Data Standardization definitions, Data Dictionary and Data Lineage are kept up to date and accessible
Work with ETL, Replication and DBA teams to determine best practices as it relates to data transformations, data movement and derivations
Work with support teams to ensure consistent and pro-active support methodologies are in place for all aspects of data movements and data transformations
Work with and mentor Data Architects and Data Analysts to ensure best practices are adhered to for database design and data management
Assist in overall Architectural solutions including, but not limited to Data Warehouse, ODS, Data Replication/ETL Data Management initiatives
Work with the business teams and Enterprise Architecture team to ensure best architectural solutions from a Data perspective
Create a strategic roadmap for MDM implementation
Responsible for implementing a Master Data Management tool
Establishing policies, procedures and guidelines regarding all aspects of Master Data Management
Ensure Architectural rules and design of the MDM process are documented and best practices are adhered to
Qualifications
MUST HAVE Data Modeling skills, Oracle Exadata, GoldenGate
5+ years of Data Architecture experience, including OLTP, Data Warehouse, Big Data
5+ years of Solution Architecture experience
5+ years of MDM experience
5+ years of Data Governance experience, working knowledge of best practices
Extensive working knowledge of all aspects of Data Movement and Processing, including Middleware, ETL, API, OLAP and best practices for data tracking
Good Communication skills
Self-Motivated
Capable of presenting to all levels of audiences
Works well in a team environment
Estimated Min Rate: $80.00
Estimated Max Rate: $90.00
What's In It for You?
We welcome you to be a part of one of the largest and most legendary global staffing companies and to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK and Canada. Join Yoh's extensive talent community to gain access to Yoh's vast network of opportunities, including this exclusive opportunity. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
Health Savings Account (HSA) (for employees working 20+ hours per week)
Life & Disability Insurance (for employees working 20+ hours per week)
MetLife Voluntary Benefits
Employee Assistance Program (EAP)
401K Retirement Savings Plan
Direct Deposit & weekly epayroll
Referral Bonus Programs
Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
Senior Business Data Architect
Location: Cincinnati, OH
Job Title: Senior Business Data Architect
Who we are:
Vernovis is a Total Talent Solutions company that specializes in Technology, Cybersecurity, Finance & Accounting functions. At Vernovis, we help these professionals achieve their career goals, matching them with innovative projects and dynamic direct hire opportunities in Ohio and across the Midwest.
Client Overview:
Vernovis is partnering with a fast-paced manufacturing company that is looking to hire a Senior Business Data Architect. This is a great opportunity for an experienced Snowflake professional to elevate their career leading a team and designing the architecture of the data warehouse.
If interested, please email Wendy Kolkmeyer at ***********************
What You'll Do:
Architect and optimize the enterprise data warehouse using Snowflake.
Develop and maintain automated data pipelines with Fivetran or similar ETL tools.
Design and enhance DBT data models to support analytics, reporting, and operational decision-making.
Oversee and improve Power BI reporting, ensuring data is accurate, accessible, and actionable for business users.
Establish and enforce enterprise data governance, standards, policies, and best practices.
Collaborate with business leaders to translate requirements into scalable, high-quality data solutions.
Enable advanced analytics and AI/ML initiatives through proper data structuring and readiness.
Drive cross-functional alignment, communication, and stakeholder engagement.
Lead, mentor, and develop members of the data team.
Ensure compliance, conduct system audits, and maintain business continuity plans.
What Experience You'll Have:
7+ years of experience in data architecture, data engineering, or enterprise data management.
Expertise in Snowflake, Fivetran (or similar ETL tools), DBT, and Power BI.
Strong proficiency in SQL and modern data architecture principles.
Proven track record in data governance, modeling, and data quality frameworks.
Demonstrated experience leading teams and managing complex data initiatives.
Ability to communicate technical concepts clearly and collaborate effectively with business stakeholders.
What is Nice to Have:
Manufacturing experience
Vernovis does not accept inquiries from Corp to Corp recruiting companies. Applicants must be currently authorized to work in the United States on a full-time basis and not violate any immigration or discrimination laws.
Vernovis provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.