Data Scientist 2
Data scientist job in Dallas, TX
Cullerton Group has a new opportunity for a Data Scientist 2. The work will be done onsite full-time, with flexibility for candidates located in Illinois (Mossville) or Texas (Dallas) depending on business needs. This is a long-term 12-month position that can lead to permanent employment with our client. Compensation is up to $58.72/hr + full benefits (vision, dental, health insurance, 401k, and holiday pay).
Job Summary
Cullerton Group is seeking a motivated and analytical Data Scientist to support strategic sourcing and cost management initiatives through advanced data analytics and reporting. This role focuses on developing insights from complex datasets to guide decision-making, improve cost visibility, and support category strategy execution. The ideal candidate will collaborate with cross-functional teams, apply statistical and analytical methods, and contribute independently to analytics-driven projects that deliver measurable business value.
Key Responsibilities
Develop and maintain scorecards, dashboards, and reports by consolidating data from multiple enterprise sources
Perform data collection, validation, and analysis to support strategic sourcing and cost savings initiatives
Apply statistical analysis and modeling techniques to identify trends, risks, and optimization opportunities
Support monthly and recurring reporting processes, including cost tracking and performance metrics
Collaborate with category teams, strategy leaders, and peers to translate analytics into actionable insights
Required Qualifications
Bachelor's degree in a quantitative field such as Data Science, Statistics, Engineering, Computer Science, Economics, Mathematics, or similar (or Master's degree in lieu of experience)
3-5 years of professional experience performing quantitative analysis (internships accepted)
Proficiency with analytics and data visualization tools, including Power BI
Strong problem-solving skills with the ability to communicate insights clearly to technical and non-technical audiences
Preferred Qualifications
Experience with advanced statistical methods (regression, hypothesis testing, ANOVA, statistical process control)
Practical exposure to machine learning techniques such as clustering, logistic regression, random forests, or similar models
Experience with cloud platforms (AWS, Azure, or Google Cloud)
Familiarity with procurement, sourcing, cost management, or manufacturing-related analytics
Strong initiative, collaboration skills, and commitment to continuous learning in analytics
Why This Role?
This position offers the opportunity to work on high-impact analytics projects that directly support sourcing strategy, cost optimization, and operational decision-making. You will collaborate with diverse teams, gain exposure to large-scale enterprise data, and contribute to meaningful initiatives that drive measurable business outcomes. Cullerton Group provides a professional consulting environment with growth potential, strong client partnerships, and long-term career opportunities.
Data Scientist (F2F Interview)
Data scientist job in Dallas, TX
W2 Contract
Dallas, TX (Onsite)
We are seeking an experienced Data Scientist to join our team in Dallas, Texas. The ideal candidate will have a strong foundation in machine learning, data modeling, and statistical analysis, with the ability to transform complex datasets into clear, actionable insights that drive business impact.
Key Responsibilities
Develop, implement, and optimize machine learning models to support business objectives.
Perform exploratory data analysis, feature engineering, and predictive modeling.
Translate analytical findings into meaningful recommendations for technical and non-technical stakeholders.
Collaborate with cross-functional teams to identify data-driven opportunities and improve decision-making.
Build scalable data pipelines and maintain robust analytical workflows.
Communicate insights through reports, dashboards, and data visualizations.
Qualifications
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
Proven experience working with machine learning algorithms and statistical modeling techniques.
Proficiency in Python or R, along with hands-on experience using libraries such as Pandas, NumPy, Scikit-learn, or TensorFlow.
Strong SQL skills and familiarity with relational or NoSQL databases.
Experience with data visualization tools (e.g., Tableau, Power BI, matplotlib).
Excellent problem-solving, communication, and collaboration skills.
Applied Data Scientist/ Data Science Engineer
Data scientist job in Austin, TX
Role: Applied Data Scientist/ Data Science Engineer
Years of experience: 8+
Job type: Full-time
Job Responsibilities:
You will be part of a team that innovates and collaborates with internal stakeholders to deliver world-class solutions with a customer first mentality. This group is passionate about the data science field and is motivated to find opportunity in, and develop solutions for, evolving challenges.
You will:
Solve business and customer issues utilizing AI/ML - Mandatory
Build prototypes and scalable AI/ML solutions that will be integrated into software products
Collaborate with software engineers, business stakeholders and product owners in an Agile environment
Have complete ownership of model outcomes and drive continuous improvement
Essential Requirements:
Strong coding skills in Python and SQL - Mandatory
Machine Learning knowledge (Deep Learning, Information Retrieval (RAG), GenAI, Classification, Forecasting, Regression, etc. on large datasets) with experience in ML model deployment
Ability to work with internal stakeholders to transfer business questions into quantitative problem statements
Ability to effectively communicate data science progress to non-technical internal stakeholders
Ability to lead a team of data scientists is a plus
Experience with Big Data technologies and/or software development is a plus
Senior Data Retention & Protection Consultant: Disaster Recovery
Data scientist job in Dallas, TX
Technology Recovery Services provides subject matter expertise and direction on complex IT disaster recovery projects/initiatives and supports IT disaster recovery technical planning, coordination and service maturity working across IT, business resilience, risk management, regulatory and compliance.
Summary of Essential Functions:
Govern disaster recovery plans and procedures for critical business applications and infrastructure.
Create, update, and publish disaster recovery related policies, procedures, and guidelines.
Ensure annual updates and validations of DR policies and procedures to maintain readiness and resilience.
Maintain up-to-date knowledge of disaster recovery and business continuity best practices.
Perform regular disaster recovery testing, including simulation exercises, incident response simulations, tabletop exercises, and actual failover drills to validate procedures and identify improvements.
Train staff and educate employees on disaster recovery processes, their roles during incidents, and adherence to disaster recovery policies.
Coordinate technology response to natural disasters and aircraft accidents.
Qualifications:
Strong knowledge of Air vault and ransomware recovery technologies
Proven ability to build, cultivate, and promote strong relationships with internal customers at all levels of the organization, as well as with Technology counterparts, business partners, and external groups
Proficiency in handling operational issues effectively and understanding escalation, communication, and crisis management
Demonstrated call control and situation management skills in fast-paced, highly dynamic situations
Knowledge of basic IT and Airline Ecosystems
Understand SLAs, the engagement process, and the urgency needed to engage teams during critical situations
Ability to understand and explain interconnected application functionality in a complex environment and share knowledge with peers
Customer-centric attitude with a focus on providing best-in-class service for customers and stakeholders
Ability to execute with a high level of operational urgency with an ability to maintain calm, and work closely with a team and stakeholders during a critical situation while using project management skills
Ability to present to C-level executives with outstanding communication skills
Ability to lead a large group of up to 200 people, including support, development, leaders, and executives, on a single call
Ability to effectively triage: detect and distinguish symptom from cause, and capture key data from various sources, systems, and people
Knowledge of business strategies and priorities
Excellent communication and stakeholder engagement skills.
Required:
3+ years of similar or related experience in fields such as Disaster Recovery, Business Continuity, and Enterprise Operational Resilience.
Working knowledge of Disaster Recovery professional practices, including Business Impact Analysis, disaster recovery plans (DRP), redundancy and failover mechanisms, DR-related regulatory requirements, and Business Continuity Plan exercises and audits.
Ability to motivate, influence, and train others.
Strong analytical skills and problem-solving skills using data analysis tools including Alteryx and Tableau.
Ability to communicate technical and operational issues clearly to both technical and non-technical audiences.
Data Engineer
Data scientist job in Houston, TX
We are looking for a talented and motivated Python Data Engineer to help expand our data assets in support of our analytical capabilities in a full-time role. This role will interface directly with our traders, analysts, researchers, and data scientists to drive out requirements and deliver a wide range of data-related needs.
What you will do:
- Translate business requirements into technical deliveries. Drive out requirements for data ingestion and access
- Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
- Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality
- Efficiently coordinate with the rest of our team in different locations
Qualifications
- 6+ years of enterprise-level coding experience with Python
- Computer Science, MIS or related degree
- Familiarity with Pandas and NumPy packages
- Experience with Data Engineering and building data pipelines
- Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
- Strong understanding of object-oriented design, design patterns, and SOA architectures
- Proficient understanding of peer review, code versioning, and bug/issue tracking tools
- Strong communication skills
- Familiarity with containerization solutions like Docker and Kubernetes is a plus
Senior Data Engineer
Data scientist job in Austin, TX
We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX.
The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.
Key Responsibilities
1. Data Architecture & Strategy
Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing.
Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.
2. Data Integration & Pipeline Development
Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
Optimize pipelines for cost-effectiveness, performance, and scalability.
3. Master Data Management (MDM) & Data Governance
Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
Define and manage data governance, metadata, and data quality frameworks.
Partner with business teams to align data standards and maintain data integrity across domains.
4. API Management & Integration
Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
Design secure, reliable data services for internal and external consumers.
Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.
5. Database & Platform Administration
Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
Monitor and optimize cost, performance, and scalability across Azure data services.
Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.
6. Collaboration & Leadership
Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
Mentor junior engineers and define best practices for coding, data modeling, and solution design.
Contribute to enterprise-wide data strategy and roadmap development.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
5+ years of hands-on experience in Azure-based data engineering and architecture.
Strong proficiency with the following:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2
SQL, Python, PySpark, PowerShell
Azure API Management and Logic Apps
Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
Familiarity with MDM concepts, data governance frameworks, and metadata management.
Experience with automation, data-focused CI/CD, and IaC.
Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.
What We Offer
Competitive compensation and benefits package
Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
Data Architect
Data scientist job in Plano, TX
KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.
Title: Senior Data Architect
Location: Plano, TX (Hybrid)
Job Type: Contract - 6 Months
Key Skills: SQL, PySpark, Databricks, and Azure Cloud
Key Note: Looking for a Data Architect who is Hands-on with SQL, PySpark, Databricks, and Azure Cloud.
About the Role:
We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging and multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure Native Services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.
Key Responsibilities:
Data Engineering: Design, development, and implementation of data pipelines and solutions using PySpark, SQL, and related technologies.
Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.
Must-Have Skills & Qualifications:
Minimum of 12 years of overall experience in the IT industry.
4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
4+ years of hands-on experience developing and implementing data pipelines using Azure stack experience (Azure, ADF, Databricks, Functions)
Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
Strong knowledge of ETL processes and data warehousing fundamentals.
Self-motivated and independent, with a “let's get this done” mindset and the ability to thrive in a fast-paced and dynamic environment.
Good-to-Have Skills:
Databricks Certification is a plus.
Data Modeling, Azure Architect Certification.
Data Engineer
Data scientist job in Austin, TX
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Engineer
Data scientist job in Houston, TX
Job Title: Senior Software Engineer / Quant Developer (JG4 Level)
Duration: Long-term contract with possibility of extension
The Senior Data Engineer will design and build robust data foundations and end-to-end data solutions to enable the business to maximize value from data. This role plays a critical part in fostering a data-driven culture across both IT and business stakeholder communities. The Senior Data Engineer will act as a subject matter expert (SME), lead solution design and delivery, mentor junior engineers, and translate Data Strategy and Vision into scalable, high-quality IT solutions.
Key Responsibilities
Design, build, and maintain enterprise-grade data foundations and end-to-end data solutions.
Serve as a subject matter expert in data engineering, data modeling, and solution architecture.
Translate business data strategy and vision into scalable technical solutions.
Mentor and guide junior data engineers and contribute to continuous capability building.
Drive the rollout and adoption of Data Foundation initiatives across the business.
Coordinate change management, incident management, and problem management processes.
Present insights, reports, and technical findings to key stakeholders.
Drive implementation efficiency across pilots and future projects to reduce cost, accelerate delivery, and maximize business value.
Actively contribute to community initiatives such as Centers of Excellence (CoE) and Communities of Practice (CoP).
Collaborate effectively with both technical teams and business leaders.
Key Characteristics
Highly curious technology expert with a continuous learning mindset.
Strong data-domain expertise with deep technical focus.
Excellent communicator who can engage both technical and non-technical stakeholders.
Trusted advisor to leadership and cross-functional teams.
Strong driver of execution, quality, and delivery excellence.
Mandatory Skills
Cloud Platforms (expert level): AWS, Azure, SAP
ELT (expert level)
Data Modeling (expert level)
Data Integration & Ingestion
Data Manipulation & Processing
DevOps & Version Control: GitHub, GitHub Actions, Azure DevOps
Data & Analytics Tools: Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest
Optional / Nice-to-Have Skills
Experience leading projects or running a Scrum team.
Experience with BPC and Planning.
Exposure to external technical ecosystems.
Documentation using MkDocs.
Lead Data Engineer
Data scientist job in Plano, TX
Job Title: Senior Lead Data Engineer
Type: Full Time
Our client is seeking a hands-on Senior Data Engineer to guide development teams in building scalable, cloud-native solutions. This leadership role combines technical expertise with mentoring, driving innovation across Agile teams to deliver high-impact applications using modern full-stack technologies.
About the Role
You'll lead developers focused on machine learning, microservices, and full-stack systems while collaborating with product managers to create robust, performant solutions. Stay ahead of tech trends, experiment with emerging tools, and contribute to engineering communities through mentoring and knowledge sharing.
Key Responsibilities
Design, develop, test, deploy, and support full-stack solutions across Agile teams.
Lead engineering teams specializing in ML, distributed microservices, and cloud systems.
Build with Java, Scala, Python, RDBMS/NoSQL databases, and cloud data warehouses like Redshift/Snowflake.
Partner with product managers to deliver cloud-based applications powering exceptional user experiences.
Perform unit testing and code reviews for rigorous design, clean code, and peak performance.
Experiment with new technologies and mentor peers in internal/external tech communities.
What You Bring
Required
Bachelor's degree in Computer Science, Engineering, or related field.
6+ years in application development.
2+ years with big data technologies.
1+ year with cloud platforms (AWS, Azure, Google Cloud).
Preferred
Master's degree.
9+ years in app dev (Python, SQL, Scala, Java).
4+ years public cloud, real-time streaming, NoSQL (MongoDB/Cassandra), data warehousing.
5+ years distributed tools (Hadoop, Spark, Kafka, etc.).
4+ years UNIX/Linux/shell scripting; 2+ years Agile practices.
Senior Data Engineer
Data scientist job in Dallas, TX
About Us
Longbridge Securities, founded in March 2019 and headquartered in Singapore, is a next-generation online brokerage platform. Established by a team of seasoned finance professionals and technical experts from leading global firms, we are committed to advancing financial technology innovation. Our mission is to empower every investor by offering enhanced financial opportunities.
What You'll Do
As part of our global expansion, we're seeking a Data Engineer to design and build batch/real-time data warehouses and maintain data platforms that power trading and research for the US market. You'll work on data pipelines, APIs, storage systems, and quality monitoring to ensure reliable, scalable, and efficient data services.
Responsibilities:
Design and build batch/real-time data warehouses to support the US market growth
Develop efficient ETL pipelines to optimize data processing performance and ensure data quality/stability
Build a unified data middleware layer to reduce business data development costs and improve service reusability
Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
Discover data insights through collaboration with the business owner
Maintain and develop enterprise data platforms for the US market
Qualifications
7+ years of data engineering experience with a proven track record in data platform/data warehouse projects
Proficient in Hadoop ecosystem (Hive, Kafka, Spark, Flink), Trino, SQL, and at least one programming language (Python/Java/Scala)
Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
Familiarity with AWS/cloud platforms and experience with Docker, Kubernetes
Experience with open-source data platform development, familiar with at least one relational database (MySQL/PostgreSQL)
Strong cross-department collaboration skills to translate business requirements into technical solutions
Bachelor's degree or higher in Computer Science, Data Science, Statistics, or related fields
Comfortable working in a fast-moving fintech/tech startup environment
Proficiency in Mandarin and English at the business communication level for international team collaboration
Bonus Point:
Experience with DolphinScheduler and SeaTunnel is a plus
Staff Data Engineer
Data scientist job in Houston, TX
Staff Data Engineer - Houston, TX or US Remote
A Series B-funded startup building the infrastructure that powers how residential HVAC systems are monitored, maintained, and serviced is looking for a Staff Data Engineer to join its team.
What will I be doing:
Help architect and build the core data platform that powers the company's intelligence - from ingestion and transformation to serving and analytics
Design and implement scalable data pipelines (batch and streaming) across diverse data sources including IoT sensors, operational databases, and external systems
Work with high-performance database technologies
Define foundational data models and abstractions that enable high-quality, consistent access for analytics, product, and ML workloads
Collaborate with AI/ML, Product, and Software Engineering teams to enable data-driven decision-making and real-time intelligence
Establish engineering best practices for data quality, observability, lineage, and governance
Evaluate and integrate modern data technologies (e.g., Redshift, S3, Spark, Airflow, dbt, Kafka, Databricks, Snowflake, etc.) to evolve the platform's capabilities
Mentor engineers across teams
What are we looking for:
8+ years of experience as a software or data engineer, including ownership of large-scale data systems used for analytics or ML
Deep expertise in building and maintaining data pipelines and ETL frameworks (Python, Spark, Airflow, dbt, etc.)
Strong background in modern data infrastructure
Proficiency with SQL and experience designing performant, maintainable data models
Solid understanding of CI/CD, infrastructure-as-code, and observability best practices
Experience enabling ML workflows and understanding of data needs across the model lifecycle
Comfort working with cloud-native data platforms (AWS preferred)
Strong software engineering fundamentals
Excellent communicator
What's in it for me:
Competitive compensation up to $250,000 dependent on experience and location
Foundational role as the first Staff Data Engineer
Work hand-in-hand with the Head of Data to design and implement systems, pipelines, and abstractions that make us an AI-native company
Apply now for immediate consideration!
Data Analytics Engineer
Data scientist job in Houston, TX
Title: Data Analytics Engineer
Type: 6 Month Contract (Full-time is possible after contract period)
Schedule: Hybrid (3-4 days onsite)
Sector: Oil & Gas
Overview: You will be instrumental in developing and maintaining data models while delivering insightful analyses of maintenance operations, including uptime/downtime, work order metrics, and asset health.
Key Responsibilities:
Aggregate and transform raw data from systems such as CMMS, ERP, and SCADA into refined datasets and actionable reports/visualizations using tools like SQL, Python, Power BI, and/or Spotfire.
Own the creation and maintenance of dashboards for preventative and predictive maintenance.
Collaborate cross-functionally to identify data requirements, key performance indicators (KPIs), and reporting gaps.
Ensure high data quality through rigorous testing, validation, and documentation.
Qualifications and Skills:
Bachelor's degree required.
Proficiency in Python and SQL is essential.
Knowledge of API rules and protocols.
Experience organizing development workflows using GitHub.
Familiarity with Machine Learning is a plus.
Preference for candidates with experience in water midstream/infrastructure or Oil & Gas sectors.
Expertise in dashboard creation using tools like Tableau, Spotfire, Excel, or Power BI.
Ability to clearly communicate technical concepts to non-technical stakeholders.
Strong organizational skills and a customer-service mindset.
Capability to work independently or collaboratively with minimal supervision.
Exceptional analytical and problem-solving skills, with a strategic approach to prioritization.
Ability to analyze data, situations, and processes to make informed decisions or resolve issues, with regular communication to management.
Excellent written and verbal communication skills.
Lead Data Engineer (Databricks, DLT (Delta Live Tables))
Data scientist job in Houston, TX
Relevant experience of more than 8-9 years; strong proficiency in Databricks, the DLT (Delta Live Tables) framework, and PySpark; excellent communication skills required.
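For context on the DLT (Delta Live Tables) framework named above: DLT pipelines are written as declarative, layered table definitions. The plain-Python sketch below only mimics the bronze-to-silver pattern (the real `dlt` module runs inside a Databricks runtime, and the record fields here are invented):

```python
# Plain-Python mimic of a Delta Live Tables bronze -> silver flow.
# In Databricks, each function would be decorated with @dlt.table and
# operate on Spark DataFrames read via dlt.read("bronze_orders").

def bronze_orders(raw_rows):
    """Ingest raw records as-is (bronze layer)."""
    return list(raw_rows)

def silver_orders(bronze_rows):
    """Clean and filter (silver layer): drop rows missing an order id
    and normalize the amount to float."""
    return [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in bronze_rows
        if r.get("order_id") is not None
    ]

raw = [
    {"order_id": 1, "amount": "19.99"},
    {"order_id": None, "amount": "5.00"},  # rejected in the silver layer
    {"order_id": 2, "amount": "42.50"},
]
silver = silver_orders(bronze_orders(raw))
print(len(silver))  # 2
```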
Thanks
Rakesh Pathak | Senior Technical Recruiter
Phone: ************
On-premise Data Engineer (Python, SQL, Databases)
Data scientist job in Houston, TX
The interview process consists of three rounds:
Teams virtual tech interview with senior developers
Karat screening
In person interview with managers and directors
5+ years of experience in data engineering with SQL and NoSQL databases (Oracle, SQL Server, Postgres, DB2, Elastic, MongoDB) and advanced Python skills
Advanced application development experience implementing business logic with SQL procedures and NoSQL utilities
Experience designing and developing scalable, performant processes
Expert in Python development and FastAPI microservices (be prepared to discuss which IDEs and tools you used to code and test)
Development experience with real-time, user-interactive applications that communicate between the UI and the database (be prepared to discuss which protocols and data formats you used)
Python Data Engineer
Data scientist job in Houston, TX
Job Title: Python Data Engineer
Experience & Skills
5+ years in Data Engineering with strong SQL and NoSQL database skills:
Databases: Oracle, SQL Server, Postgres, DB2, Elasticsearch, MongoDB
Advanced Python development and FastAPI microservices experience
Application development experience implementing business logic via SQL stored procedures and NoSQL utilities
Experience designing scalable and performant processes:
Must provide metrics: transactions/day, largest DB table size, concurrent users, API response times
Real-time interactive applications with UI-to-database communication:
Must explain protocols and data formats used (e.g., JSON, REST, WebSockets)
Experience using LLM models, coding agents, and testing agents:
Provide specific examples of problem-solving
Ability to handle support and development simultaneously:
Detail daily split between support and development, ticketing system usage, or direct user interaction
Bachelor's degree in Computer Science or relevant major
Strong analytic skills, AI tool usage, multitasking, self-management, and direct collaboration with business users
Not a Good Fit
Experience limited to ETL / backend processes / data transfer between databases
Experience only on cloud platforms (Azure, AWS, GCP) without SQL/NoSQL + Python expertise
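On the protocols-and-data-formats question this posting raises, a common on-prem answer is rows fetched via SQL and serialized to JSON for a REST or WebSocket response. A minimal sketch, using sqlite3 as a stand-in for Oracle/Postgres (table and column names are invented):

```python
import json
import sqlite3

# In-memory stand-in for an on-prem database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO tickets VALUES (?, ?)",
                 [(1, "open"), (2, "closed")])

# Business logic lives in SQL; the API layer only serializes the result
# to JSON, which is what a FastAPI endpoint would return to the UI.
rows = conn.execute(
    "SELECT id, status FROM tickets WHERE status = 'open'").fetchall()
payload = json.dumps([{"id": r[0], "status": r[1]} for r in rows])
print(payload)  # [{"id": 1, "status": "open"}]
```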
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support.
Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Data Engineer
Data scientist job in Austin, TX
We are seeking a Data Engineer to join a dynamic Agile team and support the build and enhancement of a large-scale data integration hub. This role requires hands-on experience in data acquisition, ETL automation, SQL development, and performance analytics.
What You'll Do
✔ Lead technical work within Agile development teams
✔ Automate ETL processes using Informatica PowerCenter / IICS
✔ Develop complex Oracle/Snowflake SQL scripts & views
✔ Integrate data from multiple sources (Oracle, SQL Server, Excel, Access, PDF)
✔ Support CI/CD and deployment processes
✔ Produce technical documentation, diagrams & mockups
✔ Collaborate with architects, engineers & business stakeholders
✔ Participate in Sprint ceremonies & requirements sessions
✔ Ensure data quality, validation & accuracy
Must Have Experience
✅ 8+ years:
Informatica PowerCenter / IICS
ETL workflow development
SQL development (Oracle/Snowflake)
Data warehousing & analytics
Technical documentation (Visio/Erwin, MS Office, MS Project)
Senior Data Engineer (USC AND GC ONLY)
Data scientist job in Richardson, TX
Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
Duration: 6 Months (Possible Extension)
We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.
Must-Have Skills (Non-Negotiable)
9+ years in Data Engineering & Data Warehousing
9+ years hands-on ETL experience (Informatica, DataStage, etc.)
9+ years working with Teradata
3+ years hands-on GCP and BigQuery
Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
Strong background in query optimization, data structures, metadata & workload management
Experience delivering microservices-based data solutions
Proficiency in Big Data & cloud architecture
3+ years with SQL & NoSQL
3+ years with Python or similar scripting languages
3+ years with Docker, Kubernetes, CI/CD for data pipelines
Expertise in deploying & scaling apps in containerized environments (K8s)
Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
Familiarity with AGILE/SDLC methodologies
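For context on the Dataflow/Pub/Sub requirement: streaming pipelines there typically aggregate events into fixed time windows. The sketch below illustrates the idea in plain Python only (a real pipeline would use the Apache Beam SDK's windowing primitives; the event tuples are invented):

```python
from collections import defaultdict

# Hypothetical events as they might arrive from a Pub/Sub subscription:
# (event_time_seconds, device_id)
events = [(3, "a"), (17, "a"), (61, "b"), (65, "a"), (119, "b")]

WINDOW = 60  # fixed 60-second windows, like Beam's FixedWindows

# Count events per (window_start, device) key.
counts = defaultdict(int)
for ts, device in events:
    window_start = (ts // WINDOW) * WINDOW
    counts[(window_start, device)] += 1

for (start, device), n in sorted(counts.items()):
    print(start, device, n)
```

A production version would read from Pub/Sub, apply the window in Dataflow, and write the aggregates to BigQuery.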
Key Responsibilities
Build, enhance, and optimize modern data pipelines on GCP
Implement scalable ETL frameworks, data structures, and workflow dependency management
Architect and tune BigQuery datasets, queries, and storage layers
Collaborate with cross-functional teams to define data requirements and support business objectives
Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
Drive clarity in project goals, timelines, and deliverables during Agile planning sessions
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ******************* OR Call us on *****************
Python Data Engineer - THADC5693417
Data scientist job in Houston, TX
Must Haves:
Strong proficiency in Python; 5+ years' experience
Expertise in FastAPI and microservices architecture and coding
Linking Python-based apps with SQL and NoSQL databases
Deployments on Docker and Kubernetes, plus monitoring tools
Experience with automated testing and test-driven development
Git source control, GitHub Actions, CI/CD, VS Code, and Copilot
Expertise in both on-prem SQL databases (Oracle, SQL Server, Postgres, DB2) and NoSQL databases
Working knowledge of data warehousing and ETL; able to explain the business functionality of the projects/applications they have worked on
Ability to multitask and work on multiple projects simultaneously
NO CLOUD - this client is fully on-prem
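On the automated-testing/TDD requirement, a minimal red-green cycle with the standard library looks like this (the function under test is invented for illustration):

```python
import unittest

def normalize_tag(tag: str) -> str:
    """Hypothetical business rule: asset tags are stripped and upper-cased."""
    return tag.strip().upper()

class TestNormalizeTag(unittest.TestCase):
    # In TDD these tests are written first (red), then normalize_tag
    # is implemented until they pass (green).
    def test_strips_and_uppercases(self):
        self.assertEqual(normalize_tag("  pump-101 "), "PUMP-101")

    def test_idempotent(self):
        self.assertEqual(normalize_tag("PUMP-101"), "PUMP-101")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeTag)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```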
Day to Day:
Insight Global is looking for a Python Data Engineer for one of our largest oil and gas clients in Downtown Houston, TX. This person will be responsible for building Python-based integrations between back-end SQL and NoSQL databases, architecting and coding FastAPI microservices, and performing testing on back-office applications. The ideal candidate will have experience developing applications with Python and microservices and implementing complex business functionality in Python.
Data Engineer
Data scientist job in Dallas, TX
Junior Data Engineer
DESCRIPTION: BeaconFire is based in Central NJ, specializing in Software Development, Web Development, and Business Intelligence; looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
Qualifications:
Passion for data and a deep desire to learn.
Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline.
Intermediate Python; experience with data-processing libraries (NumPy, Pandas, etc.) is a plus.
Experience with relational databases (SQL Server, Oracle, MySQL, etc.)
Strong written and verbal communication skills.
Ability to work both independently and as part of a team.
Responsibilities:
Collaborate with the analytics team to find reliable data solutions to meet the business needs.
Design and implement scalable ETL or ELT processes to support the business demand for data.
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Build and incorporate automated unit tests, participate in integration testing efforts.
Work with teams to resolve operational & performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
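At its smallest, the ETL responsibility above reduces to an extract-transform-load function. A sketch using the standard-library sqlite3 module (table and field names are invented):

```python
import sqlite3

def run_etl(source_rows, conn):
    """Extract raw rows, transform (trim names, skip blanks), load into a table."""
    conn.execute("CREATE TABLE IF NOT EXISTS customers (name TEXT)")
    cleaned = [(r.strip(),) for r in source_rows if r.strip()]      # transform
    conn.executemany("INSERT INTO customers VALUES (?)", cleaned)   # load
    conn.commit()
    return len(cleaned)

conn = sqlite3.connect(":memory:")
loaded = run_etl(["  Ada ", "", "Grace"], conn)  # extract would read a source
print(loaded)  # 2
```

A production pipeline adds the pieces the posting lists around this core: automated unit tests, integration testing, and reusable utilities/frameworks for common flow patterns.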
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.