Data Scientist with Python ML/NLP
Senior data scientist job in Addison, TX
Role: Data Scientist with Python ML/NLP
Years of experience: 10+
Full-time
Job Responsibilities:
We're looking for a Data Scientist who will be responsible for designing, building, and maintaining document capture applications. The ideal candidate will have a solid background in software engineering, experience building machine learning NLP models, and good familiarity with generative AI models.
High Level Skills Required
Primary: 7+ years as a Data Scientist or in related roles
Bachelor's degree in Computer Science or a related technical field
Deep understanding of, and some exposure to, new open-source generative AI models
At least 5 years of programming experience in software development and Agile processes
At least 5 years of Python (or equivalent) programming experience working with ML/NLP models.
Data Scientist (F2F Interview)
Senior data scientist job in Dallas, TX
W2 Contract
Dallas, TX (Onsite)
We are seeking an experienced Data Scientist to join our team in Dallas, Texas. The ideal candidate will have a strong foundation in machine learning, data modeling, and statistical analysis, with the ability to transform complex datasets into clear, actionable insights that drive business impact.
Key Responsibilities
Develop, implement, and optimize machine learning models to support business objectives.
Perform exploratory data analysis, feature engineering, and predictive modeling.
Translate analytical findings into meaningful recommendations for technical and non-technical stakeholders.
Collaborate with cross-functional teams to identify data-driven opportunities and improve decision-making.
Build scalable data pipelines and maintain robust analytical workflows.
Communicate insights through reports, dashboards, and data visualizations.
Qualifications
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
Proven experience working with machine learning algorithms and statistical modeling techniques.
Proficiency in Python or R, along with hands-on experience using libraries such as Pandas, NumPy, Scikit-learn, or TensorFlow (a brief illustrative sketch follows this list).
Strong SQL skills and familiarity with relational or NoSQL databases.
Experience with data visualization tools (e.g., Tableau, Power BI, matplotlib).
Excellent problem-solving, communication, and collaboration skills.
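As a purely illustrative sketch of the Pandas/Scikit-learn workflow the qualifications above refer to (the dataset, file name, column names, and model choice are hypothetical, not taken from this posting):

```python
# Minimal illustrative sketch: train and evaluate a predictive model with
# Pandas + Scikit-learn. The CSV path and all column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_events.csv")   # hypothetical input file
X = df.drop(columns=["churned"])          # feature columns
y = df["churned"]                         # binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout ROC AUC: {auc:.3f}")
```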
Data Modeler
Senior data scientist job in Plano, TX
Plano, TX - Nearby candidates only
W2 Candidates
Must Have:
5+ years of data modeling, warehousing, analysis, and data profiling experience, with the ability to identify trends and anomalies in the data
Experience with AWS technologies such as S3, AWS Glue, EMR, and IAM roles/permissions
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, SparkSQL, Scala)
Experience working with relational databases such as Teradata and handling both structured and unstructured datasets
Data modeling tools (any of Erwin, PowerDesigner, ER Studio)
Preferred / Ideal to have:
Proficiency in Python
Experience with NoSQL, non-relational databases / data stores (e.g., object storage, document or key-value stores, graph databases, column-family databases)
Experience with Snowflake and Databricks
Principal Data Modeler and Database Engineer (Onsite)
Senior data scientist job in Dallas, TX
2025-08-21
Country: United States of America
Onsite
U.S. Citizen, U.S. Person, or Immigration Status Requirements:
Active and transferable U.S. government issued security clearance is required prior to start date. U.S. citizenship is required, as only U.S. citizens are eligible for a security clearance
Security Clearance:
TS/SCI without Polygraph
At Raytheon, the foundation of everything we do is rooted in our values and a higher calling - to help our nation and allies defend freedoms and deter aggression. We bring the strength of more than 100 years of experience and renowned engineering expertise to meet the needs of today's mission and stay ahead of tomorrow's threat. Our team solves tough, meaningful problems that create a safer, more secure world.
Raytheon is hiring a Principal Data Modeler and Database Engineer to support our teams in Richardson, TX or Aurora, CO. You will have the opportunity to directly impact the world around you and contribute to classified programs and technologies you are passionate about. This position requires familiarity with designing, creating, updating, and managing databases and high-performance data models.
What You Will Do
Identify and understand customer needs and apply sound, demonstrable understanding of database principles, theories, and concepts related to software engineering to translate those needs to viable design solutions.
Translate needs and design requirements into physical data models.
Analyze business requirements, RFCs, and technical specifications to identify data modeling impacts and design needs.
Collaborate with systems engineers, product owners, architects, and stakeholders to gather and validate data requirements.
Design new database structures (tables, indexes, constraints, views) and modify existing models to accommodate program needs.
Author and implement database data model changes (DDL) using database change control automation (a brief illustrative sketch follows this list).
Work with developers on best practices for interacting with databases, writing and maintaining SQL statements.
Utilize data modeling tools such as Oracle SQL Developer Data Modeler, Oracle Designer, and others.
Demonstrate critical thinking skills with the ability to communicate concepts and ideas clearly.
Use proven problem solving and analytical skills.
Work in an Agile/Scrum environment.
Must be able to obtain and maintain SCI program access and complete polygraphs.
Obtain Security+ certification within 60 days of start.
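As a purely illustrative sketch of scripted database change control of the kind described above, assuming a PostgreSQL target, the psycopg2 driver, and a hypothetical schema_changes tracking table (this is not Raytheon's actual tooling):

```python
# Illustrative sketch only: apply a versioned DDL change to PostgreSQL and
# record it in a hypothetical schema_changes tracking table. Connection
# parameters and object names are placeholders, not from the posting.
import psycopg2

DDL_ID = "2024_07_add_mission_idx"
DDL_SQL = "CREATE INDEX IF NOT EXISTS idx_mission_code ON mission (mission_code);"

conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="deployer", password="secret")
try:
    with conn, conn.cursor() as cur:
        # Skip changes that have already been applied.
        cur.execute("SELECT 1 FROM schema_changes WHERE change_id = %s", (DDL_ID,))
        if cur.fetchone() is None:
            cur.execute(DDL_SQL)
            cur.execute(
                "INSERT INTO schema_changes (change_id, applied_at) VALUES (%s, now())",
                (DDL_ID,),
            )
            print(f"Applied {DDL_ID}")
        else:
            print(f"{DDL_ID} already applied; skipping")
finally:
    conn.close()
```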
Qualifications You Must Have
Typically requires a degree in Science, Technology, Engineering, or Mathematics (STEM) and a minimum of 8 years of Database Engineering experience.
Five (5) years of experience performing relational data modeling with Oracle and PostgreSQL Relational Database Management Systems (RDBMS).
Eight (8) years of experience with Oracle SQL and PL/SQL, and with PostgreSQL SQL and PL/pgSQL.
Experience with Bash scripting and Python.
Experience working with Systems and Software Engineers to design, implement, and modify backend databases to interact with applications.
Experience with the Linux command line to perform database administration activities and deployment automation.
An active and transferable Top Secret U.S. government issued security clearance is required prior to the start date, with the ability to obtain a TS/SCI. U.S. citizenship is required, as only U.S. citizens are eligible for a clearance.
Qualifications We Prefer
Existing Security+ Certification (or higher).
Experience using AWS RDS for Oracle and AWS Aurora PostgreSQL database services.
Experience with NoSQL database products such as MongoDB and DynamoDB.
Data architecture experience for critical design decisions (such as the suitability of a product/technology for a given use case).
Experience troubleshooting database issues such as disaster recovery, replication, and backups.
Experience troubleshooting database performance issues.
Strong SQL tuning experience in Oracle and PostgreSQL.
Experience with database security controls and audit & compliance requirements.
Experience virtualizing or containerizing databases.
Experience with automating routine tasks.
Experience using Oracle Spatial and/or PostgreSQL PostGIS.
Experience using git source control.
Experience using Atlassian Jira and Confluence.
What We Offer
Whether you're just starting out on your career journey or are an experienced professional, we offer a total rewards package that goes above and beyond with compensation; healthcare, wellness, retirement, and work/life benefits; career development and recognition programs. Some of the benefits we offer include parental (including paternal) leave, flexible work schedules, achievement awards, educational assistance, and child/adult backup care.
Relocation Eligibility - Relocation assistance is available.
Learn More & Apply Now!
Please consider the following role type definition as you apply for this role. Onsite: Employees who are working in Onsite roles will work primarily onsite. This includes all production and maintenance employees, as they are essential to the development of our products.
This position requires a security clearance. DCSA Consolidated Adjudication Services (DCSA), an agency of the Department of Defense, handles and adjudicates the security clearance process. More information about Security Clearances can be found on the US Department of State government website here:
We Are RTX
#LI-Onsite
#LI-HS30
The salary range for this role is 101,000 USD - 203,000 USD. The salary range provided is a good faith estimate representative of all experience levels. RTX considers several factors when extending an offer, including but not limited to, the role, function and associated responsibilities, a candidate's work experience, location, education/training, and key skills.
Hired applicants may be eligible for benefits, including but not limited to, medical, dental, vision, life insurance, short-term disability, long-term disability, 401(k) match, flexible spending accounts, flexible work schedules, employee assistance program, Employee Scholar Program, parental leave, paid time off, and holidays. Specific benefits are dependent upon the specific business unit as well as whether or not the position is covered by a collective-bargaining agreement.
Hired applicants may be eligible for annual short-term and/or long-term incentive compensation programs depending on the level of the position and whether or not it is covered by a collective-bargaining agreement. Payments under these annual programs are not guaranteed and are dependent upon a variety of factors including, but not limited to, individual performance, business unit performance, and/or the company's performance.
This role is a U.S.-based role. If the successful candidate resides in a U.S. territory, the appropriate pay structure and benefits will apply.
RTX anticipates the application window closing approximately 40 days from the date the notice was posted. However, factors such as candidate flow and business necessity may require RTX to shorten or extend the application window.
RTX is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or veteran status, or any other applicable state or federal protected class. RTX provides affirmative action in employment for qualified Individuals with a Disability and Protected Veterans in compliance with Section 503 of the Rehabilitation Act and the Vietnam Era Veterans' Readjustment Assistance Act.
Senior Data Engineer
Senior data scientist job in Plano, TX
Ascendion is a full-service digital engineering solutions company. We make and manage software platforms and products that power growth and deliver captivating experiences to consumers and employees. Our engineering, cloud, data, experience design, and talent solution capabilities accelerate transformation and impact for enterprise clients. Headquartered in New Jersey, our workforce of 6,000+ Ascenders delivers solutions from around the globe. Ascendion is built differently to engineer the next.
Ascendion | Engineering to elevate life
We have a culture built on opportunity, inclusion, and a spirit of partnership. Come, change the world with us:
Build the coolest tech for the world's leading brands
Solve complex problems - and learn new skills
Experience the power of transforming digital engineering for Fortune 500 clients
Master your craft with leading training programs and hands-on experience
Experience a community of change makers!
Join a culture of high-performing innovators with endless ideas and a passion for tech. Our culture is the fabric of our company, and it is what makes us unique and diverse. The way we share ideas, learning, experiences, successes, and joy allows everyone to be their best at Ascendion.
*** About the Role ***
Job Title: Senior Data Engineer
Key Responsibilities:
Design, develop, and maintain scalable and reliable data pipelines and ETL workflows.
Build and optimize data models and queries in Snowflake to support analytics and reporting needs.
Develop data processing and automation scripts using Python.
Implement and manage data orchestration workflows using Airflow, Airbyte, or similar tools.
Work with AWS data services including EMR, Glue, and Kafka for large-scale data ingestion and processing.
Ensure data quality, reliability, and performance across data pipelines.
Collaborate with analytics, product, and engineering teams to understand data requirements and deliver robust solutions.
Monitor, troubleshoot, and optimize data workflows for performance and cost efficiency.
Required Skills & Qualifications:
8+ years of hands-on experience as a Data Engineer.
Strong proficiency in SQL and Snowflake.
Extensive experience with ETL frameworks and data pipeline orchestration tools (Airflow, Airbyte, or similar).
Proficiency in Python for data processing and automation.
Hands-on experience with AWS data services, including EMR, Glue, and Kafka.
Strong understanding of data warehousing, data modeling, and distributed data processing concepts.
Nice to Have:
Experience working with streaming data pipelines.
Familiarity with data governance, security, and compliance best practices.
Experience mentoring junior engineers and leading technical initiatives.
Salary Range: The salary for this position is between $130,000 and $140,000 annually. Factors that may affect pay within this range include geography/market, skills, education, experience, and other qualifications of the successful candidate.
Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, long-term disability insurance, short-term disability insurance, 5 personal days accrued each calendar year (paid time off benefits meet the paid sick and safe time laws that pertain to the applicable city/state), 10-15 days of paid vacation time, 6 paid holidays and 1 floating holiday per calendar year, and the Ascendion Learning Management System.
Want to change the world? Let us know.
Tell us about your experiences, education, and ambitions. Bring your knowledge, unique viewpoint, and creativity to the table. Let's talk!
Senior Data Engineer
Senior data scientist job in Plano, TX
We are seeking a highly skilled Senior Data Engineer with AI/ML aspirations to design, build, and scale next-generation data and machine learning infrastructure. This role is ideal for a hands-on technical expert who thrives on building complex systems from the ground up, has deep experience in Google Cloud Platform (GCP), and is excited about stepping into management and technical leadership. You will work across engineering, data science, and executive leadership teams to architect cloud-native solutions, optimize real-time data pipelines, and help shape our long-term AI/ML engineering strategy.
Key Responsibilities
Cloud & Platform Engineering
Architect, build, and maintain high-performance data and ML infrastructure on GCP using best-in-class cloud-native tools and services.
Lead the design of scalable cloud architectures, with a strong focus on resilience, automation, and cost-effective operation.
Build applications and services from scratch, ensuring they are modular, maintainable, and scalable.
Real-Time & Distributed Systems
Design and optimize real-time data processing pipelines capable of handling high-volume, low-latency traffic.
Implement and fine-tune load balancing strategies to support fault tolerance and performance across distributed systems.
Lead system design for high availability, horizontal scaling, and microservices communication patterns.
AI/ML Engineering
Partner with ML engineers and data scientists to deploy, monitor, and scale machine learning workflows.
Create and maintain ML-focused CI/CD pipelines, model deployment frameworks, and automated testing harnesses.
Open-Source & Code Quality
Contribute to and maintain open-source projects, including active GitHub repositories.
Champion best practices across code reviews, version control, and documentation.
Establish, document, and enforce advanced testing methodologies, including integration, regression, performance, and automated testing frameworks.
Leadership & Collaboration
Serve as a technical leader and mentor within the engineering team.
Collaborate effectively with senior leadership and executive stakeholders, translating complex engineering concepts into strategic insights.
Provide guidance and direction to junior engineers, with an eye toward growing into a people leadership role.
Required Qualifications
Bachelor's degree from an accredited university; a Master's degree is highly preferred.
Expert-level experience with GCP, including services such as BigQuery, Cloud Run, Pub/Sub, Dataflow, GKE, and Vertex AI.
Strong background in cloud architecture and distributed system design (GCP preferred; AWS/Azure acceptable).
Proven ability to build applications, platforms, and services from scratch.
Advanced skills in traffic load balancing, autoscaling, and performance tuning.
Deep experience with real-time data systems, streaming frameworks, and low-latency infrastructure.
Strong track record of open-source contributions and maintaining GitHub repositories.
Expertise in testing methodologies across the software lifecycle.
Excellent communication skills with comfort interacting directly with executive leadership.
Demonstrated interest or experience in team leadership or management.
Preferred Qualifications
Experience with microservices, Kubernetes, and service mesh technologies.
Familiarity with MLOps tooling and frameworks (Kubeflow, MLflow, Vertex AI pipelines).
Strong Python, Go, or similar programming expertise.
Prior experience in fast-growth or startup environments.
Azure Data Engineer Sr
Senior data scientist job in Irving, TX
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, Extract, transform and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
Senior Data Engineer (USC AND GC ONLY)
Senior data scientist job in Richardson, TX
Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
Duration: 6 Months (Possible Extension)
We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.
Must-Have Skills (Non-Negotiable)
9+ years in Data Engineering & Data Warehousing
9+ years hands-on ETL experience (Informatica, DataStage, etc.)
9+ years working with Teradata
3+ years hands-on GCP and BigQuery
Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
Strong background in query optimization, data structures, metadata & workload management
Experience delivering microservices-based data solutions
Proficiency in Big Data & cloud architecture
3+ years with SQL & NoSQL
3+ years with Python or similar scripting languages
3+ years with Docker, Kubernetes, CI/CD for data pipelines
Expertise in deploying & scaling apps in containerized environments (K8s)
Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
Familiarity with AGILE/SDLC methodologies
Key Responsibilities
Build, enhance, and optimize modern data pipelines on GCP
Implement scalable ETL frameworks, data structures, and workflow dependency management
Architect and tune BigQuery datasets, queries, and storage layers
Collaborate with cross-functional teams to define data requirements and support business objectives
Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
Drive clarity in project goals, timelines, and deliverables during Agile planning sessions
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* OR Call us on *****************
Senior Data Engineer
Senior data scientist job in Dallas, TX
About Us
Longbridge Securities, founded in March 2019 and headquartered in Singapore, is a next-generation online brokerage platform. Established by a team of seasoned finance professionals and technical experts from leading global firms, we are committed to advancing financial technology innovation. Our mission is to empower every investor by offering enhanced financial opportunities.
What You'll Do
As part of our global expansion, we're seeking a Data Engineer to design and build batch/real-time data warehouses and maintain data platforms that power trading and research for the US market. You'll work on data pipelines, APIs, storage systems, and quality monitoring to ensure reliable, scalable, and efficient data services.
Responsibilities:
Design and build batch/real-time data warehouses to support the US market growth
Develop efficient ETL pipelines to optimize data processing performance and ensure data quality/stability
Build a unified data middleware layer to reduce business data development costs and improve service reusability
Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
Discover data insights through collaboration with the business owner
Maintain and develop enterprise data platforms for the US market
Qualifications
7+ years of data engineering experience with a proven track record in data platform/data warehouse projects
Proficient in Hadoop ecosystem (Hive, Kafka, Spark, Flink), Trino, SQL, and at least one programming language (Python/Java/Scala)
Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
Familiarity with AWS/cloud platforms and experience with Docker, Kubernetes
Experience with open-source data platform development, familiar with at least one relational database (MySQL/PostgreSQL)
Strong cross-department collaboration skills to translate business requirements into technical solutions
Bachelor's degree or higher in Computer Science, Data Science, Statistics, or related fields
Comfortable working in a fast-moving fintech/tech startup environment
Bonus Point:
Experience with DolphinScheduler and SeaTunnel is a plus
Sr Data Engineer (W2 only, onsite, must be local)
Senior data scientist job in Irving, TX
Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, Computer Engineering/Electrical Engineering, or any Engineering field or quantitative discipline such as Physics or Statistics.
Minimum 6 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (for example, AWS or Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, Extract, transform and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
Good communication, interpersonal, and presentation skills, with the ability to effectively communicate with both technical and non-technical audiences.
ADDITIONAL SKILLS AND OTHER REQUIREMENTS
Nice-to-have skills include:
Agile experience
Dataiku
Power BI
Data Engineer
Senior data scientist job in Irving, TX
W2 Contract to Hire Role with Monthly Travel to the Dallas Texas area
We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems.
Duties/Responsibilities:
Develop and optimize complex SQL queries to support internal analytics and external client data requests.
Generate custom data lists and extracts based on client specifications and business rules.
Design, build, and maintain efficient ETL pipelines using Azure Data Factory.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
Work with Salesforce data; familiarity with SOQL is preferred but not required.
Support Power BI reporting through basic data modeling and integration.
Assist in implementing MLOps practices for model deployment and monitoring.
Use Python for data manipulation, automation, and integration tasks (a brief illustrative sketch follows this list).
Ensure data quality, consistency, and security across all workflows and systems.
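As a purely illustrative sketch of a Python-driven client extract of the kind described in the duties above, assuming SQL Server via pyodbc and hypothetical table, column, and client identifiers:

```python
# Minimal illustrative sketch: pull a client-specific extract with SQL + Python
# and write it to CSV. The connection string, table, columns, and client ID are
# hypothetical placeholders, not details from the posting.
import pandas as pd
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-prod.example.com;DATABASE=analytics;Trusted_Connection=yes;"
)

QUERY = """
    SELECT member_id, plan_code, enrolled_on
    FROM dbo.member_enrollment
    WHERE client_id = ? AND enrolled_on >= ?
"""

conn = pyodbc.connect(CONN_STR)
try:
    # Parameterized query keeps client-specific filters out of the SQL string.
    extract = pd.read_sql(QUERY, conn, params=["ACME01", "2024-01-01"])
finally:
    conn.close()

extract.to_csv("acme01_enrollment_extract.csv", index=False)
print(f"Wrote {len(extract)} rows")
```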
Required Skills/Abilities/Attributes:
5+ years of experience in data engineering or a related field.
Strong proficiency in SQL, including query optimization and performance tuning.
Experience with Azure Data Factory, with git repository and pipeline deployment.
Ability to translate client requirements into accurate and timely data outputs.
Working knowledge of Python for data-related tasks.
Strong problem-solving skills and ability to work independently.
Excellent communication and documentation skills.
Preferred Skills/Experience:
Previous knowledge of building pipelines for ML models.
Extensive experience creating/managing stored procedures and functions in MS SQL Server
2+ years of experience in cloud architecture (Azure, AWS, etc)
Experience with code management systems (Azure DevOps)
2+ years of reporting design and management (Power BI preferred)
Ability to influence others through the articulation of ideas, concepts, benefits, etc.
Education and Experience:
Bachelor's degree in a computer science field or applicable business experience.
Minimum 3 years of experience in a Data Engineering role
Healthcare experience preferred.
Physical Requirements:
Prolonged periods sitting at a desk and working on a computer.
Ability to lift 20 lbs.
Senior Data Systems Analyst
Senior data scientist job in Roanoke, TX
Immediate need for a talented Senior Data Systems Analyst. This is a 12-month contract opportunity with long-term potential and is located in Westlake, TX (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93280
Pay Range: $57 - $67/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Bachelor's or Master's Degree in a technology related field (e.g. Engineering, Computer Science, etc.) required
6-8 years of working experience
You have experience working in investment banking, finance, venture Client, asset management, or wealth management (Required).
You have experience analyzing existing system functionality, leveraging data models and SQL queries, and reflecting the findings in clear functional documentation.
You have significant expertise in data related projects in the areas of requirements gathering, analysis, interpretation and translating them to data solutions.
You have dimensional data modelling design and implementation experience.
You understand and communicate sophisticated concepts effectively to a variety of audiences, both technical and non-technical. You make the complicated simple.
Strong SQL experience (ANSI SQL, Oracle, Client)
You have a passion for continuous learning, mastering of new skills, and staying on top of the latest innovations across data analytics.
You are fluent in agile work methodologies such as scrum and its application.
You are known as a go-getter, with the ability to unravel a mystery, whether it is data, a business process, or organizational.
A strong results-oriented contributor with an ability to clearly articulate and deliver business value.
Expert level SQL for data analysis and querying
Understanding of AWS cloud data models using Snowflake
Experience analyzing data, performing data readiness, identifying data sources, and remediating issues for given requirements, all so the data engineers can execute.
Systems Analyst mindset to understand how the systems are coming together, how they interact and how they connect, and then write concise business requirements tying the technical and business case together.
Understand distributed systems to connect multiple systems
Should have financial/asset management domain knowledge to understand complex security nuances.
Our client is a leading Financial Services Industry and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Senior BI Data Modeler
Senior data scientist job in Dallas, TX
We are seeking a highly skilled Data Modeler / BI Developer to join our team. This role will focus on designing and implementing enterprise-level data models, ensuring data security, and enabling advanced analytics capabilities within our Primoris BI platforms. The ideal candidate will have strong technical expertise, excellent problem-solving skills, and the ability to collaborate effectively with cross-functional teams.
Key Responsibilities
Collaborate with the Data Ingestion team to design and develop the “Gold” layer within a Medallion Architecture.
Design and implement data security and masking standards, processes, and solutions across various data stores and reporting layers.
Build and execute enterprise-level data models using multiple data sources for business analytics and reporting in Power BI.
Partner with business leaders to identify and prioritize data analysis and platform enhancement needs.
Work with analytics teams and business leaders to determine requirements for composite data models.
Communicate data model structures to visualization and analytics teams.
Develop and optimize complex DAX expressions and SQL queries for data manipulation.
Troubleshoot and resolve issues, identifying root causes to prevent recurrence.
Escalate critical issues when appropriate and ensure timely resolution.
Contribute to the evolution of Machine Learning (ML) and AI model development processes.
Qualifications
Bachelor's degree in Business Administration, Information Technology, or a related field.
2+ years experience ensuring data quality (completeness, validity, consistency, timeliness, accuracy).
2+ years experience organizing and preparing data models for analysis using systematic approaches.
Demonstrated experience with AI-enabled platforms for data modernization.
Experience delivering work using Agile/Scrum practices and software release cycles.
Proficient in Azure, Databricks, SQL, Python, Power BI, and DAX.
Good knowledge of CI/CD and deployment processes.
3+ years experience working with clients and delivering under tight deadlines.
Prior experience with projects of similar size and scope.
Ability to work independently and collaboratively in a team environment.
Skills & Competencies
Exceptional organizational and time management skills.
Ability to manage stakeholder expectations and influence decisions.
High attention to detail and commitment to quality.
Strong leadership and team-building capabilities.
Ability to adapt to changing priorities and work under pressure.
Data Engineer
Senior data scientist job in Dallas, TX
Must be local to TX
Data Engineer - SQL, Python, and PySpark Expert (Onsite - Dallas, TX)
We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
Key Responsibilities
Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments
Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
Ensure data quality, consistency, security, and lineage across all stages of data processing
Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
Document data flows, logic, and transformation rules
Troubleshoot performance and quality issues in batch and real-time pipelines
Support compliance-related reporting (e.g., HMDA, CFPB)
Required Qualifications
6+ years of experience in data engineering or data development
Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
Strong hands-on skills in Python for scripting, data wrangling, and automation
Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data (a brief illustrative sketch follows this list)
Experience working with mortgage banking data sets and domain knowledge is highly preferred
Strong understanding of data modeling (dimensional, normalized, star schema)
Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
Familiarity with ETL tools, orchestration frameworks (e.g., Airflow, ADF, dbt)
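A purely illustrative PySpark sketch of the kind of distributed pipeline described above, with hypothetical paths, column names, and mortgage-servicing logic (not the employer's actual code):

```python
# Illustrative sketch: read raw loan records, apply a simple transformation,
# and write a partitioned curated table. All paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("loan_servicing_etl").getOrCreate()

raw = spark.read.parquet("s3://data-lake/raw/loan_servicing/")  # hypothetical path

curated = (
    raw.filter(F.col("loan_status").isNotNull())
       .withColumn("delinquency_bucket",
                   F.when(F.col("days_past_due") >= 90, "90+")
                    .when(F.col("days_past_due") >= 30, "30-89")
                    .otherwise("current"))
       .dropDuplicates(["loan_id", "as_of_date"])
)

(curated.write
        .mode("overwrite")
        .partitionBy("as_of_date")
        .parquet("s3://data-lake/curated/loan_servicing/"))

spark.stop()
```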
Data Engineer
Senior data scientist job in Dallas, TX
Junior Data Engineer
DESCRIPTION: BeaconFire is based in Central NJ, specializing in Software Development, Web Development, and Business Intelligence; looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
Qualifications:
Passion for data and a deep desire to learn.
Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline.
Intermediate Python; experience with data processing libraries (NumPy, Pandas, etc.) is a plus.
Experience with relational databases (SQL Server, Oracle, MySQL, etc.)
Strong written and verbal communication skills.
Ability to work both independently and as part of a team.
Responsibilities:
Collaborate with the analytics team to find reliable data solutions to meet the business needs.
Design and implement scalable ETL or ELT processes to support the business demand for data.
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Build and incorporate automated unit tests, participate in integration testing efforts.
Work with teams to resolve operational & performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.
Snowflake Data Engineering with AWS, Python and PySpark
Senior data scientist job in Frisco, TX
Job Title: Snowflake Data Engineering with AWS, Python and PySpark
Duration: 12 months
Required Skills & Experience:
10+ years of experience in data engineering and data integration roles.
Expert in working with the Snowflake ecosystem integrated with AWS services and PySpark.
8+ years of core data engineering skills: hands-on experience with the Snowflake ecosystem and AWS, core SQL, Snowflake, and Python programming.
5+ years of hands-on experience building new data pipeline frameworks with AWS, Snowflake, and Python, with the ability to explore new ingestion frameworks.
Hands-on with Snowflake architecture: virtual warehouses, storage and caching, Snowpipe, Streams, Tasks, and Stages (a brief illustrative sketch follows this list).
Experience with cloud platforms (AWS, Azure, or GCP) and integration with Snowflake.
Snowflake SQL and Stored Procedures (JavaScript or Python-based).
Proficient in Python for data ingestion, transformation, and automation.
Solid understanding of data warehousing concepts (ETL, ELT, data modeling, star/snowflake schema).
Hands-on with orchestration tools (Airflow, dbt, Azure Data Factory, or similar).
Proficiency in SQL and performance tuning.
Familiar with Git-based version control, CI/CD pipelines, and DevOps best practices.
Strong communication skills and ability to collaborate in agile teams.
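A purely illustrative sketch of Snowflake ingestion and transformation driven from Python, assuming the snowflake-connector-python package and hypothetical account, stage, and table names:

```python
# Illustrative sketch: bulk-load staged files from an external (S3) stage into a
# Snowflake table, then run a simple transformation. Account, stage, and table
# names are hypothetical placeholders, not details from the posting.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",        # hypothetical account identifier
    user="ETL_USER",
    password="********",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Bulk-load new files that have landed in the external stage.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.S3_ORDERS_STAGE
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Simple downstream transformation into a curated table.
    cur.execute("""
        CREATE OR REPLACE TABLE CURATED.DAILY_ORDER_TOTALS AS
        SELECT ORDER_DATE, SUM(ORDER_AMOUNT) AS TOTAL_AMOUNT
        FROM RAW.ORDERS
        GROUP BY ORDER_DATE
    """)
finally:
    conn.close()
```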
GCP Data Engineer
Senior data scientist job in Richardson, TX
Infosys is seeking a Google Cloud Platform (GCP) data engineer with experience in GitHub and Python. In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. You will be responsible for interfacing with key stakeholders and applying your technical proficiency across different stages of the Software Development Life Cycle. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Required Qualifications:
Candidate must be located within commuting distance of Richardson, TX or be willing to relocate to the area. This position may require travel in the US
Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time
At least 4 years of Information Technology experience.
Experience working with GCP data engineering technologies such as Dataflow/Airflow, Pub/Sub/Kafka, Dataproc/Hadoop, and BigQuery.
ETL development experience with a strong SQL background, plus technologies such as Python/R, Scala, Java, Hive, Spark, and Kafka.
Strong knowledge of Python program development to build reusable frameworks and enhance existing frameworks.
Application build experience with core GCP services such as Dataproc, GKE, and Composer.
Deep understanding of GCP IAM and GitHub.
Must have performed IAM setup.
Knowledge of CI/CD pipelines using Terraform in Git.
Preferred Qualifications:
Good knowledge of Google BigQuery, using advanced SQL programming techniques to build BigQuery datasets in the ingestion and transformation layers.
Experience in Relational Modeling, Dimensional Modeling and Modeling of Unstructured Data
Knowledge of Airflow DAG creation, execution, and monitoring (a brief illustrative sketch follows this list).
Good understanding of Agile software development frameworks
Ability to work in teams in a diverse, multi-stakeholder environment comprising Business and Technology teams.
Experience and desire to work in a global delivery environment.
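A purely illustrative sketch of an Airflow DAG that runs a daily BigQuery job, assuming Airflow 2.x, the google-cloud-bigquery client, and hypothetical project, dataset, and table names:

```python
# Illustrative sketch: a minimal Airflow DAG that aggregates one day of raw
# events into a reporting table in BigQuery. All GCP identifiers are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_aggregate(ds: str, **_) -> None:
    """Aggregate one logical day of raw events into a reporting table."""
    client = bigquery.Client(project="example-analytics")   # hypothetical project
    sql = f"""
        INSERT INTO `example-analytics.reporting.daily_event_counts`
        SELECT DATE(event_ts) AS event_date, event_type, COUNT(*) AS events
        FROM `example-analytics.raw.events`
        WHERE DATE(event_ts) = '{ds}'
        GROUP BY event_date, event_type
    """
    client.query(sql).result()   # wait for the job to finish


with DAG(
    dag_id="daily_event_counts",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_daily_aggregate",
        python_callable=load_daily_aggregate,
    )
```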
Data Engineer
Senior data scientist job in Fort Worth, TX
ABOUT OUR CLIENT
Our Client is a privately held, well-capitalized energy company based in Fort Worth, Texas with a strong track record of success across upstream, midstream, and mineral operations throughout the United States. The leadership team is composed of highly experienced professionals who have worked together across multiple ventures and basins. They are committed to fostering a collaborative, high-integrity culture that values intellectual curiosity, accountability, and continuous improvement.
ABOUT THE ROLE
Our Client is seeking a skilled and motivated Data Engineer to join their growing technology team. This role plays a key part in managing and optimizing data systems, designing and maintaining ETL processes, and improving data workflows across departments. The successful candidate will have deep technical expertise, a strong background in database architecture and data integration, and the ability to collaborate cross-functionally to enhance data management and accessibility. Candidates with extensive experience may be considered for a Senior Data Engineer title.
RESPONSIBILITIES
Design, implement, and evolve database architecture and schemas to support scalable and efficient data storage and retrieval.
Build, manage, and maintain end-to-end data pipelines, including automation of ingestion and transformation processes.
Monitor, troubleshoot, and optimize data pipeline performance to ensure data quality and reliability.
Document all aspects of the data pipeline architecture, including data sources, transformations, and job scheduling.
Optimize database performance by managing indexing, queries, stored procedures, and views.
Develop frameworks and tools for reusable ETL processes and efficient data handling across formats such as CSV, JSON, and Parquet.
Ensure proper version control and adherence to coding standards, security protocols, and performance best practices.
Collaborate with cross-functional teams including engineering, operations, land, finance, and accounting to streamline data workflows.
QUALIFICATIONS
Excellent verbal and written communication skills.
Strong organizational, analytical, and problem-solving abilities.
Proficient in Microsoft Office Suite and other related software.
Experienced in programming languages such as R, Python, and SQL.
Proficient in making and optimizing API calls for data integration.
Strong experience with cloud platforms such as Azure Data Lake, Azure Data Studio, Azure Databricks, and/or Snowflake.
Proficient in CI/CD principles and tools.
High integrity, humility, and a strong sense of accountability and teamwork.
A self-starter with a continuous improvement mindset and passion for evolving technologies.
REQUIRED EDUCATION AND EXPERIENCE
Bachelor's degree in computer science, software engineering, or a related field.
2+ years of experience in data engineering, database management, or software engineering.
Master's degree or additional certification a plus, but not required.
Exposure to geospatial or GIS data is a plus.
PHYSICAL REQUIREMENTS
Prolonged periods of sitting and working at a computer.
Ability to lift up to 15 pounds occasionally.
***********************************************************************************
NO AGENCY OR C2C CANDIDATES WILL BE CONSIDERED
VISA SPONSORSHIP IS NOT OFFERED NOR AVAILABLE FOR H1-B NOR F1 OPT
***********************************************************************************
GCP Data Engineer
Senior data scientist job in Fort Worth, TX
Job Title: GCP Data Engineer
Employment Type: W2/CTH
Client: Direct
We are seeking a highly skilled Data Engineer with strong expertise in Python, SQL, and Google Cloud Platform (GCP) services. The ideal candidate will have 6-8 years of hands-on experience in building and maintaining scalable data pipelines, working with APIs, and leveraging GCP tools such as BigQuery, Cloud Composer, and Dataflow.
Core Responsibilities:
• Design, build, and maintain scalable data pipelines to support analytics and business operations.
• Develop and optimize ETL processes for structured and unstructured data.
• Work with BigQuery, Cloud Composer, and other GCP services to manage data workflows.
• Collaborate with data analysts and business teams to ensure data availability and quality.
• Integrate data from multiple sources using APIs and custom scripts.
• Monitor and troubleshoot pipeline performance and reliability.
• Technical Skills:
o Strong proficiency in Python and SQL.
o Experience with data pipeline development and ETL frameworks.
• GCP Expertise:
o Hands-on experience with BigQuery, Cloud Composer, and Dataflow.
• Additional Requirements:
o Familiarity with workflow orchestration tools and cloud-based data architecture.
o Strong problem-solving and analytical skills.
o Excellent communication and collaboration abilities.
Principal Data Scientist: Product to Market (P2M) Optimization
Senior data scientist job in Coppell, TX
About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials.
This simple idea-that we all deserve to belong, and on our own terms-is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team.
About the Role
Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization.
The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels.
What You'll Do
* Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges.
* Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals.
* Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment.
* Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility.
* Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly.
* Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights.
* Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success.
* Quantify business benefits from deployed solutions using rigorous statistical and causal inference methods, ensuring that model outcomes translate into measurable value
* Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP.
* Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality.
Who You Are
* Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline.
* 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains.
* Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes.
* Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation.
* Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib (a brief illustrative sketch follows this list).
* Hands-on experience with enterprise platforms such as Databricks and cloud environments
* Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions.
* Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences.
* Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations.
* Experience managing code quality, CI/CD pipelines, and GitHub-based workflows.
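A purely illustrative sketch of the kind of mixed-integer optimization referenced above, using OR-Tools with invented product data and a single shelf-space constraint (not Gap Inc.'s methodology):

```python
# Illustrative sketch: a tiny assortment-selection problem posed as a
# mixed-integer program with OR-Tools. Products, margins, and the capacity
# constraint are invented for illustration only.
from ortools.linear_solver import pywraplp

products = ["tee", "denim", "hoodie", "dress"]
margin   = {"tee": 4.0, "denim": 12.0, "hoodie": 9.0, "dress": 15.0}   # $/unit
space    = {"tee": 1.0, "denim": 2.0, "hoodie": 2.5, "dress": 3.0}     # shelf units
capacity = 5.0                                                          # total shelf units

solver = pywraplp.Solver.CreateSolver("SCIP")   # MIP backend

# x[p] = 1 if product p is carried in the assortment.
x = {p: solver.BoolVar(f"x_{p}") for p in products}

# Respect the shelf-space capacity.
solver.Add(sum(space[p] * x[p] for p in products) <= capacity)

# Maximize total expected margin of the selected assortment.
solver.Maximize(sum(margin[p] * x[p] for p in products))

status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
    chosen = [p for p in products if x[p].solution_value() > 0.5]
    print("Carry:", chosen, "| expected margin:", solver.Objective().Value())
```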
Preferred Qualifications
* Experience shaping and executing multi-year analytics strategies in retail or supply chain domains.
* Proven ability to balance long-term innovation with short-term deliverables.
* Background in agile product development and stakeholder alignment for enterprise-scale initiatives.
Benefits at Gap Inc.
* Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees.
* One of the most competitive Paid Time Off plans in the industry.*
* Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.*
* Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.*
* Employee stock purchase plan.*
* Medical, dental, vision and life insurance.*
* See more of the benefits we offer.
* For eligible employees
Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.