Data Scientist (F2F Interview)
Senior data scientist job in Dallas, TX
W2 Contract
Dallas, TX (Onsite)
We are seeking an experienced Data Scientist to join our team in Dallas, Texas. The ideal candidate will have a strong foundation in machine learning, data modeling, and statistical analysis, with the ability to transform complex datasets into clear, actionable insights that drive business impact.
Key Responsibilities
Develop, implement, and optimize machine learning models to support business objectives.
Perform exploratory data analysis, feature engineering, and predictive modeling.
Translate analytical findings into meaningful recommendations for technical and non-technical stakeholders.
Collaborate with cross-functional teams to identify data-driven opportunities and improve decision-making.
Build scalable data pipelines and maintain robust analytical workflows.
Communicate insights through reports, dashboards, and data visualizations.
Qualifications
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
Proven experience working with machine learning algorithms and statistical modeling techniques.
Proficiency in Python or R, along with hands-on experience using libraries such as Pandas, NumPy, Scikit-learn, or TensorFlow.
Strong SQL skills and familiarity with relational or NoSQL databases.
Experience with data visualization tools (e.g., Tableau, Power BI, matplotlib).
Excellent problem-solving, communication, and collaboration skills.
Data Scientist with Gen AI and Python experience
Senior data scientist job in Plano, TX
About the Company
Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction.
Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.
Here are the job details:
Data Scientist with Gen AI and Python experience
Plano, TX- 5 days Onsite
18+ Months
Job Overview:
We are looking for a competent Data Scientist who is independent, results-driven, and capable of taking business requirements and building out the technology to generate statistically sound analysis and production-grade ML models.
Data science skills with GenAI and LLM knowledge.
Expertise in Python/Spark and their related libraries and frameworks.
Experience building ML training pipelines and with the work involved in ML model deployment.
Experience with related ML concepts: real-time distributed model inference pipelines, the champion/challenger framework, and A/B testing.
Familiarity with DS/ML production implementation.
Excellent problem-solving skills, with attention to detail and a focus on quality and timely delivery of assigned tasks.
Prior knowledge of Azure cloud and Databricks is a big plus.
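The champion/challenger framework mentioned above routes most scoring traffic to the current production model (the champion) and a small share to a candidate model (the challenger) so their live performance can be compared. A minimal sketch in plain Python, with hypothetical stand-in models and an assumed 10% challenger share:

```python
import random

def route_request(features, champion, challenger, challenger_share=0.1, rng=random.random):
    """Route one scoring request: most traffic goes to the champion model,
    a small share to the challenger for live comparison."""
    if rng() < challenger_share:
        return "challenger", challenger(features)
    return "champion", champion(features)

# Hypothetical stand-in models for illustration.
champion = lambda x: 0.42
challenger = lambda x: 0.57

# Deterministic checks: force the random draw to pick each arm.
arm, score = route_request({}, champion, challenger, rng=lambda: 0.05)   # challenger arm
arm2, _ = route_request({}, champion, challenger, rng=lambda: 0.95)      # champion arm
```

In production the arm assignment and score would be logged so downstream A/B analysis can compare the two models on the same traffic.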
Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
Data Modeler
Senior data scientist job in Plano, TX
Plano TX- Nearby candidates only
W2 Candidates
Must Have:
5+ years of experience with data modeling, warehousing, analysis, and data profiling, with the ability to identify trends and anomalies in the data
Experience with AWS technologies such as S3, AWS Glue, EMR, and IAM roles/permissions
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, SparkSQL, Scala)
Experience working with relational databases such as Teradata, and handling both structured and unstructured datasets
Experience with data modeling tools (any of Erwin, PowerDesigner, ER/Studio)
Preferred / ideal to have:
Proficiency in Python
Experience with NoSQL, non-relational databases / data stores (e.g., object storage, document or key-value stores, graph databases, column-family databases)
Experience with Snowflake and Databricks
Oracle Data Modeler
Senior data scientist job in Dallas, TX
Oracle Data Modeler (Erwin)
6+ month contract (W2 ONLY - NO C2C)
Downtown Dallas, TX (Onsite)
Primary responsibilities of the Data Modeler include designing, developing, and maintaining enterprise-grade data models that support critical business initiatives, analytics, and operational systems. The ideal candidate is proficient in industry-standard data modeling tools (with hands-on expertise in Erwin Data Modeler) and has deep experience with Oracle databases. The candidate will also translate complex business requirements into robust, scalable, and normalized data models while ensuring alignment with data governance, performance, and integration standards.
Responsibilities
Design and develop conceptual, logical, and physical data models using Erwin Data Modeler (required).
Generate, review, and optimize DDL (Data Definition Language) scripts for database objects (tables, views, indexes, constraints, partitions, etc.).
Perform forward and reverse engineering of data models from existing Oracle and SQL Server databases.
Collaborate with data architects, DBAs, ETL developers, and business stakeholders to gather and refine requirements.
Ensure data models adhere to normalization standards (3NF/BCNF), data integrity, and referential integrity.
Support dimensional modeling (star/snowflake schemas) for data warehousing and analytics use cases.
Conduct model reviews, impact analysis, and version control using Erwin or comparable tools.
Participate in data governance initiatives, including metadata management, naming standards, and lineage documentation.
Optimize models for performance, scalability, and maintainability across large-scale environments.
Assist in database migrations, schema comparisons, and synchronization between environments (Dev/QA/Prod).
Assist in optimizing existing Data Solutions
Follow Oncor's Data Governance Policy and Information Classification and Protection Policy.
Participate in design reviews and take guidance from the Data Architecture team members.
Qualifications
3+ years of hands-on data modeling experience in enterprise environments.
Expert proficiency with Erwin Data Modeler (version 9.x or higher preferred) - including subject areas, model templates, and DDL generation.
Advanced SQL skills and deep understanding of Oracle (11g/12c/19c/21c).
Strong command of DDL - creating and modifying tables, indexes, constraints, sequences, synonyms, and materialized views.
Solid grasp of database internals: indexing strategies, partitioning, clustering, and query execution plans.
Experience with data modeling best practices: normalization, denormalization, surrogate keys, slowly changing dimensions (SCD), and data vault (a plus).
Familiarity with version control (e.g., Git) and model comparison/diff tools.
Excellent communication skills - ability to document models clearly and present to technical and non-technical audiences.
Self-motivated, with an ability to multi-task
Capable of presenting to all levels of audiences
Works well in a team environment
Experience with Hadoop/MongoDB a plus
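Among the modeling practices listed above, slowly changing dimensions (SCD) are worth a quick illustration. A minimal Type 2 sketch in plain Python (the `customer_id`/`city` columns and dates are hypothetical): when a tracked attribute changes, the current dimension row is expired and a new versioned row is appended, preserving history.

```python
from datetime import date

def scd2_apply(dim_rows, incoming, today=date(2024, 1, 15)):
    """Apply one incoming record to a Type 2 dimension: if the tracked
    attribute changed, expire the current row and append a new version."""
    for row in dim_rows:
        if row["customer_id"] == incoming["customer_id"] and row["end_date"] is None:
            if row["city"] == incoming["city"]:
                return dim_rows          # no change, nothing to do
            row["end_date"] = today      # close out the old version
            break
    dim_rows.append({"customer_id": incoming["customer_id"],
                     "city": incoming["city"],
                     "start_date": today, "end_date": None})
    return dim_rows

dim = [{"customer_id": 1, "city": "Dallas",
        "start_date": date(2020, 1, 1), "end_date": None}]
dim = scd2_apply(dim, {"customer_id": 1, "city": "Plano"})
# dim now holds two rows: the expired Dallas row and a current Plano row
```

In a warehouse this same logic is usually expressed as a MERGE against the dimension table, with the open row identified by a null (or far-future) end date.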
Estimated Min Rate: $63.00
Estimated Max Rate: $90.00
What's In It for You?
We welcome you to be a part of one of the largest and most legendary global staffing companies. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community to gain access to Yoh's vast network of opportunities, including this exclusive one. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
Health Savings Account (HSA) (for employees working 20+ hours per week)
Life & Disability Insurance (for employees working 20+ hours per week)
MetLife Voluntary Benefits
Employee Assistance Program (EAP)
401K Retirement Savings Plan
Direct Deposit & weekly epayroll
Referral Bonus Programs
Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
Senior Data Architect
Senior data scientist job in Dallas, TX
Akkodis is seeking a Senior Data Architect for a Contract with a client located in Dallas, TX (Hybrid).
Pay Range: $80/hr - $90/hr. The rate may be negotiable based on experience, education, geographic location, and other factors.
Must have: Oracle Exadata, ETL, Informatica, GoldenGate, and MDM.
Job Description:
Primary responsibilities of the Senior Data Architect include designing and managing Data Architectural solutions for multiple environments, including but not limited to Data Warehouse, ODS, Data Replication/ETL Data Management initiatives. The candidate will be in an expert role and will work closely with Business, DBA, ETL, and Data Management teams, providing solutions for complex data-related initiatives. This individual will also be responsible for developing and managing Data Governance and Master Data Management solutions. This candidate must have good technical and communication skills coupled with the ability to mentor effectively.
Responsibilities
Establishing policies, procedures, and guidelines regarding all aspects of Data Governance
Ensure data decisions are consistent and best practices are adhered to
Ensure Data Standardization definitions, Data Dictionary, and Data Lineage are kept up to date and accessible
Work with ETL, Replication, and DBA teams to determine best practices as it relates to data transformations, data movement, and derivations
Work with support teams to ensure consistent and proactive support methodologies are in place for all aspects of data movements and data transformations
Work with and mentor Data Architects and Data Analysts to ensure best practices are adhered to for database design and data management
Assist in overall Architectural solutions, including, but not limited to, Data Warehouse, ODS, Data Replication/ETL Data Management initiatives
Work with the business teams and Enterprise Architecture team to ensure the best architectural solutions from a Data perspective
Create a strategic roadmap for MDM implementation
Responsible for implementing a Master Data Management tool
Establishing policies, procedures, and guidelines regarding all aspects of Master Data Management
Ensure Architectural rules and design of the MDM process are documented, and best practices are adhered to
Qualifications
5+ years of Data Architecture experience, including OLTP, Data Warehouse, and Big Data
5+ years of Solution Architecture experience
5+ years of MDM experience
5+ years of Data Governance experience, working knowledge of best practices
Extensive working knowledge of all aspects of Data Movement and Processing, including Middleware, ETL, API, OLAP, and best practices for data tracking
Good Communication skills
Self-Motivated
Capable of presenting to all levels of audiences
Works well in a team environment
If you are interested in this role, then please click APPLY NOW. For other opportunities available at Akkodis, or any questions, please contact Anirudh Srivastava at ************ or ***********************************.
Equal Opportunity Employer/Veterans/Disabled
Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits, and a 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State, or local law; and Holiday pay upon meeting eligibility criteria.
Disclaimer:
These benefit offerings do not apply to client-recruited jobs and jobs that are direct hires to a client.
To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit ******************************************
Sr. Data Engineer
Senior data scientist job in Dallas, TX
Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
Work with leadership to prioritize business and information needs
Engage with product and app development teams to gather requirements, and create technical requirements
Utilize and implement data engineering best practices and coding strategies
Be responsible for data ingress into storage
What you'll need:
Bachelor's degree in Computer Science, Information Management, or a related field required; Master's preferred
8+ years in data engineering including prior experience in data transformation
Databricks experience building data pipelines using the medallion architecture, bronze to gold
Advanced skills in Spark and structured streaming, SQL, Python
Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
Experience with data conversion, interface and report development
Experience working with IOT and/or geospatial data in a cloud environment (Azure)
Adept at queries, report writing and presenting findings
Prior experience coding utilizing repositories and multiple coding environments
Must possess effective communication skills, both verbal and written
Strong organizational, time management and multi-tasking skills
Process improvement and automation a plus
Nice to have:
Databricks Data Engineering Associate or Professional certification (2023 or later)
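The medallion architecture named in the qualifications above moves data through bronze (raw), silver (validated/conformed), and gold (business-level aggregate) layers. A minimal sketch in plain Python rather than Spark/Databricks, with hypothetical sensor records; in Databricks each layer would be a Spark DataFrame written as a Delta table:

```python
bronze = [  # bronze layer: raw ingested records, kept as-is, including bad rows
    {"sensor": "A", "temp": "71.5"},
    {"sensor": "A", "temp": "bad"},
    {"sensor": "B", "temp": "64.0"},
]

def to_silver(rows):
    """Silver layer: validate and conform types, dropping unparseable rows."""
    out = []
    for r in rows:
        try:
            out.append({"sensor": r["sensor"], "temp": float(r["temp"])})
        except ValueError:
            continue
    return out

def to_gold(rows):
    """Gold layer: business-level aggregate (mean temperature per sensor)."""
    sums = {}
    for r in rows:
        total, n = sums.get(r["sensor"], (0.0, 0))
        sums[r["sensor"]] = (total + r["temp"], n + 1)
    return {k: total / n for k, (total, n) in sums.items()}

gold = to_gold(to_silver(bronze))
# -> {"A": 71.5, "B": 64.0}
```

The key design point is that bronze is never mutated: bad records are filtered on the way to silver, so any layer can be rebuilt from the raw data.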
Data Engineer
Senior data scientist job in Dallas, TX
We are seeking a highly experienced Senior Data Engineer with deep expertise in modern data engineering frameworks and cloud-native architectures, primarily on AWS. This role focuses on designing, building, and optimizing scalable data pipelines and distributed systems.
You will collaborate cross-functionally to deliver secure, high-quality data solutions that drive business decisions.
Key Responsibilities
Design & Build: Develop and maintain scalable, highly available AWS-based data pipelines, specializing in EKS/ECS containerized workloads and services like Glue, EMR, and Lake Formation.
Orchestration: Implement automated data ingestion, transformation, and workflow orchestration using Airflow, NiFi, and AWS Step Functions.
Real-time: Architect and implement real-time streaming solutions with Kafka, MSK, and Flink.
Data Lake & Storage: Architect secure S3 data storage and govern data lakes using Lake Formation and Glue Data Catalog.
Optimization: Optimize distributed processing solutions (Databricks, Spark, Hadoop) and troubleshoot performance across cloud-native systems.
Governance: Ensure robust data quality, security, and governance via IAM, Lake Formation controls, and automated validations.
Mentorship: Mentor junior team members and foster technical excellence.
Requirements
Experience: 7+ years in data engineering; strong hands-on experience designing cloud data pipelines.
AWS Expertise: Deep proficiency in EKS, ECS, S3, Lake Formation, Glue, EMR, IAM, and MSK.
Core Tools: Strong experience with Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, and Flink.
Coding: Proficiency in Python, Scala, or Java for building data pipelines and automation.
Databases: Strong SQL skills and experience with relational/NoSQL databases (e.g., Redshift, DynamoDB).
Cloud-Native Skills: Strong knowledge of Kubernetes, containerization, and CI/CD pipelines.
Education: Bachelor's degree in Computer Science or related field.
Senior Data Engineer
Senior data scientist job in Dallas, TX
About Us
Longbridge Securities, founded in March 2019 and headquartered in Singapore, is a next-generation online brokerage platform. Established by a team of seasoned finance professionals and technical experts from leading global firms, we are committed to advancing financial technology innovation. Our mission is to empower every investor by offering enhanced financial opportunities.
What You'll Do
As part of our global expansion, we're seeking a Data Engineer to design and build batch/real-time data warehouses and maintain data platforms that power trading and research for the US market. You'll work on data pipelines, APIs, storage systems, and quality monitoring to ensure reliable, scalable, and efficient data services.
Responsibilities:
Design and build batch/real-time data warehouses to support the US market growth
Develop efficient ETL pipelines to optimize data processing performance and ensure data quality/stability
Build a unified data middleware layer to reduce business data development costs and improve service reusability
Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
Discover data insights through collaboration with the business owner
Maintain and develop enterprise data platforms for the US market
Qualifications
7+ years of data engineering experience with a proven track record in data platform/data warehouse projects
Proficient in Hadoop ecosystem (Hive, Kafka, Spark, Flink), Trino, SQL, and at least one programming language (Python/Java/Scala)
Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
Familiarity with AWS/cloud platforms and experience with Docker, Kubernetes
Experience with open-source data platform development, familiar with at least one relational database (MySQL/PostgreSQL)
Strong cross-department collaboration skills to translate business requirements into technical solutions
Bachelor's degree or higher in Computer Science, Data Science, Statistics, or related fields
Comfortable working in a fast-moving fintech/tech startup environment
Proficiency in Mandarin and English at the business communication level for international team collaboration
Bonus Point:
Experience with DolphinScheduler and SeaTunnel is a plus
GCP Data Engineer
Senior data scientist job in Dallas, TX
Must be a USC or Green Card holder; no vendors
GCP Data Engineer/Lead Onsite
Required Qualifications:
9+ years of hands-on experience with Data Warehousing
9+ years of hands-on ETL (e.g., Informatica/DataStage) experience
3+ years of hands-on BigQuery experience
3+ years of hands-on GCP experience
9+ years of hands-on Teradata experience
9+ years working in a cross-functional environment
3+ years of hands-on experience with Google Cloud Platform services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage
3+ years of hands-on experience building modern data pipelines on the GCP platform
3+ years of experience with query optimization, data structures, transformation, metadata, dependency, and workload management
3+ years of experience with SQL, NoSQL
3+ years of experience in data engineering with a focus on microservices-based data solutions
3+ years of containerization (Docker, Kubernetes) and CI/CD for data pipelines
3+ years of experience with Python (or a comparable scripting language)
3+ years of experience with Big Data and cloud architecture
3+ years of experience with deployment/scaling of apps in containerized environments (Kubernetes)
Excellent oral and written communications skills; ability to interact effectively with all levels within the organization.
Working knowledge of AGILE/SDLC methodology
Excellent analytical and problem-solving skills.
Ability to interact and work effectively with technical & non-technical levels within the organization.
Ability to drive clarity of purpose and goals during release and planning activities.
Excellent organizational skills including ability to prioritize tasks efficiently with high level of attention to detail.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Senior Data Engineer
Senior data scientist job in Plano, TX
Ascendion is a full-service digital engineering solutions company. We make and manage software platforms and products that power growth and deliver captivating experiences to consumers and employees. Our engineering, cloud, data, experience design, and talent solution capabilities accelerate transformation and impact for enterprise clients. Headquartered in New Jersey, our workforce of 6,000+ Ascenders delivers solutions from around the globe. Ascendion is built differently to engineer the next.
Ascendion | Engineering to elevate life
We have a culture built on opportunity, inclusion, and a spirit of partnership. Come, change the world with us:
Build the coolest tech for the world's leading brands
Solve complex problems - and learn new skills
Experience the power of transforming digital engineering for Fortune 500 clients
Master your craft with leading training programs and hands-on experience
Experience a community of change makers!
Join a culture of high-performing innovators with endless ideas and a passion for tech. Our culture is the fabric of our company, and it is what makes us unique and diverse. The way we share ideas, learning, experiences, successes, and joy allows everyone to be their best at Ascendion.
*** About the Role ***
Job Title: Senior Data Engineer
Key Responsibilities:
Design, develop, and maintain scalable and reliable data pipelines and ETL workflows.
Build and optimize data models and queries in Snowflake to support analytics and reporting needs.
Develop data processing and automation scripts using Python.
Implement and manage data orchestration workflows using Airflow, Airbyte, or similar tools.
Work with AWS data services including EMR, Glue, and Kafka for large-scale data ingestion and processing.
Ensure data quality, reliability, and performance across data pipelines.
Collaborate with analytics, product, and engineering teams to understand data requirements and deliver robust solutions.
Monitor, troubleshoot, and optimize data workflows for performance and cost efficiency.
Required Skills & Qualifications:
8+ years of hands-on experience as a Data Engineer.
Strong proficiency in SQL and Snowflake.
Extensive experience with ETL frameworks and data pipeline orchestration tools (Airflow, Airbyte, or similar).
Proficiency in Python for data processing and automation.
Hands-on experience with AWS data services, including EMR, Glue, and Kafka.
Strong understanding of data warehousing, data modeling, and distributed data processing concepts.
Nice to Have:
Experience working with streaming data pipelines.
Familiarity with data governance, security, and compliance best practices.
Experience mentoring junior engineers and leading technical initiatives.
Salary Range: The salary for this position is between $130,000- $140,000 annually. Factors which may affect pay within this range may include geography/market, skills, education, experience, and other qualifications of the successful candidate.
Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: [medical insurance] [dental insurance] [vision insurance] [401(k) retirement plan] [long-term disability insurance] [short-term disability insurance] [5 personal days accrued each calendar year. The paid time off benefits meet the paid sick and safe time laws that pertain to the City/State] [10-15 days of paid vacation time] [6 paid holidays and 1 floating holiday per calendar year] [Ascendion Learning Management System]
Want to change the world? Let us know.
Tell us about your experiences, education, and ambitions. Bring your knowledge, unique viewpoint, and creativity to the table. Let's talk!
Data Engineer
Senior data scientist job in Dallas, TX
Junior Data Engineer
DESCRIPTION: BeaconFire, based in Central NJ, specializes in Software Development, Web Development, and Business Intelligence. We are looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
Qualifications:
Passion for data and a deep desire to learn.
Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline.
Intermediate Python; experience with data processing libraries (NumPy, Pandas, etc.) is a plus.
Experience with relational databases (SQL Server, Oracle, MySQL, etc.)
Strong written and verbal communication skills.
Ability to work both independently and as part of a team.
Responsibilities:
Collaborate with the analytics team to find reliable data solutions to meet the business needs.
Design and implement scalable ETL or ELT processes to support the business demand for data.
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Build and incorporate automated unit tests, participate in integration testing efforts.
Work with teams to resolve operational & performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.
Data Engineer (Python, PySpark, Databricks)
Senior data scientist job in Dallas, TX
Job Title: Data Engineer (Python, PySpark, Databricks)
We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
Data Engineer - SQL, Python, and PySpark Expert (Onsite - Dallas, TX)
Key Responsibilities
Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments
Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
Ensure data quality, consistency, security, and lineage across all stages of data processing
Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
Document data flows, logic, and transformation rules
Troubleshoot performance and quality issues in batch and real-time pipelines
Support compliance-related reporting (e.g., HMDA, CFPB)
Required Qualifications
6+ years of experience in data engineering or data development
Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
Strong hands-on skills in Python for scripting, data wrangling, and automation
Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
Experience working with mortgage banking data sets and domain knowledge is highly preferred
Strong understanding of data modeling (dimensional, normalized, star schema)
Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
Familiarity with ETL tools, orchestration frameworks (e.g., Airflow, ADF, dbt)
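The advanced SQL skills this posting asks for (joins, CTEs) can be sketched with an in-memory SQLite example; the `loans` table, its columns, and the values are hypothetical. A CTE pre-aggregates the average balance per status, and the outer query joins back to flag loans above their status-level average:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loans (loan_id INTEGER, status TEXT, balance REAL);
    INSERT INTO loans VALUES (1, 'servicing', 200000),
                             (2, 'servicing', 150000),
                             (3, 'foreclosure', 90000);
""")

rows = conn.execute("""
    WITH status_avg AS (
        SELECT status, AVG(balance) AS avg_balance
        FROM loans
        GROUP BY status
    )
    SELECT l.loan_id, l.balance > s.avg_balance AS above_avg
    FROM loans l
    JOIN status_avg s ON s.status = l.status
    ORDER BY l.loan_id
""").fetchall()
# -> [(1, 1), (2, 0), (3, 0)]
```

The same WITH ... AS pattern carries over to Spark SQL and the cloud warehouses the posting mentions, though each engine adds its own optimization and partitioning features on top.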
Azure Data Engineer Sr
Senior data scientist job in Irving, TX
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, extract, transform, and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
Senior Data Engineer (USC AND GC ONLY)
Senior data scientist job in Richardson, TX
Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
Duration: 6 Months (Possible Extension)
We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.
Must-Have Skills (Non-Negotiable)
9+ years in Data Engineering & Data Warehousing
9+ years hands-on ETL experience (Informatica, DataStage, etc.)
9+ years working with Teradata
3+ years hands-on GCP and BigQuery
Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
Strong background in query optimization, data structures, metadata & workload management
Experience delivering microservices-based data solutions
Proficiency in Big Data & cloud architecture
3+ years with SQL & NoSQL
3+ years with Python or similar scripting languages
3+ years with Docker, Kubernetes, CI/CD for data pipelines
Expertise in deploying & scaling apps in containerized environments (K8s)
Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
Familiarity with AGILE/SDLC methodologies
Key Responsibilities
Build, enhance, and optimize modern data pipelines on GCP
Implement scalable ETL frameworks, data structures, and workflow dependency management
Architect and tune BigQuery datasets, queries, and storage layers
Collaborate with cross-functional teams to define data requirements and support business objectives
Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
Drive clarity in project goals, timelines, and deliverables during Agile planning sessions
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* or call us at *****************
Senior BI Data Modeler
Senior data scientist job in Dallas, TX
We are seeking a highly skilled Data Modeler / BI Developer to join our team. This role will focus on designing and implementing enterprise-level data models, ensuring data security, and enabling advanced analytics capabilities within our Primoris BI platforms. The ideal candidate will have strong technical expertise, excellent problem-solving skills, and the ability to collaborate effectively with cross-functional teams.
Key Responsibilities
Collaborate with the Data Ingestion team to design and develop the “Gold” layer within a Medallion Architecture.
Design and implement data security and masking standards, processes, and solutions across various data stores and reporting layers.
Build and execute enterprise-level data models using multiple data sources for business analytics and reporting in Power BI.
Partner with business leaders to identify and prioritize data analysis and platform enhancement needs.
Work with analytics teams and business leaders to determine requirements for composite data models.
Communicate data model structures to visualization and analytics teams.
Develop and optimize complex DAX expressions and SQL queries for data manipulation.
Troubleshoot and resolve issues, identifying root causes to prevent recurrence.
Escalate critical issues when appropriate and ensure timely resolution.
Contribute to the evolution of Machine Learning (ML) and AI model development processes.
Qualifications
Bachelor's degree in Business Administration, Information Technology, or a related field.
2+ years of experience ensuring data quality (completeness, validity, consistency, timeliness, accuracy).
2+ years of experience organizing and preparing data models for analysis using systematic approaches.
Demonstrated experience with AI-enabled platforms for data modernization.
Experience delivering work using Agile/Scrum practices and software release cycles.
Proficient in Azure, Databricks, SQL, Python, Power BI, and DAX.
Good knowledge of CI/CD and deployment processes.
3+ years of experience working with clients and delivering under tight deadlines.
Prior experience with projects of similar size and scope.
Ability to work independently and collaboratively in a team environment.
Skills & Competencies
Exceptional organizational and time management skills.
Ability to manage stakeholder expectations and influence decisions.
High attention to detail and commitment to quality.
Strong leadership and team-building capabilities.
Ability to adapt to changing priorities and work under pressure.
Data Engineer
Senior data scientist job in Coppell, TX
IDR is seeking a Data Engineer to join one of our top clients for an opportunity in Coppell, TX. This role involves designing, building, and maintaining enterprise-grade data architectures, with a focus on cloud-based data engineering, analytics, and machine learning applications. The company operates within the technology and data services industry, providing innovative solutions to large-scale clients.
Position Overview for the Data Engineer:
Develop and maintain scalable data pipelines utilizing Databricks and Azure environments
Design data models and optimize ETL/ELT processes for large datasets
Collaborate with cross-functional teams to implement data solutions supporting analytics, BI, and ML projects
Ensure data quality, availability, and performance across enterprise systems
Automate workflows and implement CI/CD pipelines to improve data deployment processes
Requirements for the Data Engineer:
8-10 years of experience on modern data platforms with a strong background in cloud-based data engineering
Strong expertise in Databricks (PySpark/Scala, Delta Lake, Unity Catalog)
Hands-on experience with Azure (AWS/GCP also acceptable if very strong in Databricks)
Advanced SQL skills and strong experience with data modeling, ETL/ELT development and data orchestration
Experience with CI/CD (Azure DevOps, GitHub Actions, Terraform, etc.)
What's in it for you?
Competitive compensation package
Full benefits: medical, vision, dental, and more!
Opportunity to get in with an industry-leading organization.
Why IDR?
25+ Years of Proven Industry Experience in 4 major markets
Employee Stock Ownership Program
Dedicated Engagement Manager who is committed to you and your success.
Medical, Dental, Vision, and Life Insurance
ClearlyRated's Best of Staffing Client and Talent Award winner 12 years in a row.
Azure Data Engineer (Databricks Certified, with Data Factory)
Senior data scientist job in Irving, TX
Azure Data Engineer with Data Factory.
Databricks certified.
3 days a week onsite, can be based out of Irving TX or Houston TX.
Rate is $45/hr on W2.
Data Engineer
Senior data scientist job in Irving, TX
W2 Contract to Hire Role with Monthly Travel to the Dallas Texas area
We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems.
Duties/Responsibilities:
Develop and optimize complex SQL queries to support internal analytics and external client data requests.
Generate custom data lists and extracts based on client specifications and business rules.
Design, build, and maintain efficient ETL pipelines using Azure Data Factory.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
Work with Salesforce data; familiarity with SOQL is preferred but not required.
Support Power BI reporting through basic data modeling and integration.
Assist in implementing MLOps practices for model deployment and monitoring.
Use Python for data manipulation, automation, and integration tasks.
Ensure data quality, consistency, and security across all workflows and systems.
Required Skills/Abilities/Attributes:
5+ years of experience in data engineering or a related field.
Strong proficiency in SQL, including query optimization and performance tuning.
Experience with Azure Data Factory, including Git repositories and pipeline deployment.
Ability to translate client requirements into accurate and timely data outputs.
Working knowledge of Python for data-related tasks.
Strong problem-solving skills and ability to work independently.
Excellent communication and documentation skills.
Preferred Skills/Experience:
Prior experience building pipelines for ML models.
Extensive experience creating and managing stored procedures and functions in MS SQL Server
2+ years of experience in cloud architecture (Azure, AWS, etc.)
Experience with code management systems (e.g., Azure DevOps)
2+ years of reporting design and management (Power BI preferred)
Ability to influence others through the articulation of ideas, concepts, benefits, etc.
Education and Experience:
Bachelor's degree in a computer science field or applicable business experience.
Minimum 3 years of experience in a Data Engineering role
Healthcare experience preferred.
Physical Requirements:
Prolonged periods sitting at a desk and working on a computer.
Ability to lift 20 lbs.
GCP Data Engineer
Senior data scientist job in Richardson, TX
Infosys is seeking a Google Cloud Platform (GCP) data engineer with experience in GitHub and Python. In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. You will be responsible for interfacing with key stakeholders and applying your technical proficiency across different stages of the Software Development Life Cycle. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Required Qualifications:
Candidate must be located within commuting distance of Richardson, TX or be willing to relocate to the area. This position may require travel in the US
Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time
At least 4 years of Information Technology experience.
Experience working with GCP data engineering technologies such as Dataflow/Airflow, Pub/Sub/Kafka, Dataproc/Hadoop, and BigQuery.
ETL development experience with a strong SQL background, plus tools and languages such as Python/R, Scala, Java, Hive, Spark, and Kafka.
Strong knowledge of Python development to build reusable frameworks and enhance existing ones.
Application build experience with core GCP services such as Dataproc, GKE, and Composer.
Deep understanding of GCP IAM and GitHub.
Must have hands-on experience with IAM setup.
Knowledge of CI/CD pipelines using Terraform and Git.
Preferred Qualifications:
Good knowledge of Google BigQuery, using advanced SQL programming techniques to build BigQuery datasets in the ingestion and transformation layers.
Experience in relational modeling, dimensional modeling, and modeling of unstructured data.
Knowledge of Airflow DAG creation, execution, and monitoring.
Good understanding of Agile software development frameworks.
Ability to work in teams in a diverse, multi-stakeholder environment comprising Business and Technology teams.
Experience and desire to work in a global delivery environment.
GCP Data Engineer
Senior data scientist job in Fort Worth, TX
Job Title: GCP Data Engineer
Employment Type: W2/CTH
Client: Direct
We are seeking a highly skilled Data Engineer with strong expertise in Python, SQL, and Google Cloud Platform (GCP) services. The ideal candidate will have 6-8 years of hands-on experience in building and maintaining scalable data pipelines, working with APIs, and leveraging GCP tools such as BigQuery, Cloud Composer, and Dataflow.
Core Responsibilities:
• Design, build, and maintain scalable data pipelines to support analytics and business operations.
• Develop and optimize ETL processes for structured and unstructured data.
• Work with BigQuery, Cloud Composer, and other GCP services to manage data workflows.
• Collaborate with data analysts and business teams to ensure data availability and quality.
• Integrate data from multiple sources using APIs and custom scripts.
• Monitor and troubleshoot pipeline performance and reliability.
Technical Skills:
• Strong proficiency in Python and SQL.
• Experience with data pipeline development and ETL frameworks.
GCP Expertise:
• Hands-on experience with BigQuery, Cloud Composer, and Dataflow.
Additional Requirements:
• Familiarity with workflow orchestration tools and cloud-based data architecture.
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.