
Data scientist jobs in Irving, TX

- 424 jobs
  • Data Scientist 2

    Cullerton Group

    Data scientist job in Dallas, TX

    Cullerton Group has a new opportunity for a Data Scientist 2. The work will be done onsite full-time, with flexibility for candidates located in Illinois (Mossville) or Texas (Dallas) depending on business needs. This is a long-term 12-month position that can lead to permanent employment with our client. Compensation is up to $58.72/hr + full benefits (vision, dental, health insurance, 401k, and holiday pay).

    Job Summary: Cullerton Group is seeking a motivated and analytical Data Scientist to support strategic sourcing and cost management initiatives through advanced data analytics and reporting. This role focuses on developing insights from complex datasets to guide decision-making, improve cost visibility, and support category strategy execution. The ideal candidate will collaborate with cross-functional teams, apply statistical and analytical methods, and contribute independently to analytics-driven projects that deliver measurable business value.

    Key Responsibilities:
    - Develop and maintain scorecards, dashboards, and reports by consolidating data from multiple enterprise sources
    - Perform data collection, validation, and analysis to support strategic sourcing and cost savings initiatives
    - Apply statistical analysis and modeling techniques to identify trends, risks, and optimization opportunities
    - Support monthly and recurring reporting processes, including cost tracking and performance metrics
    - Collaborate with category teams, strategy leaders, and peers to translate analytics into actionable insights

    Required Qualifications:
    - Bachelor's degree in a quantitative field such as Data Science, Statistics, Engineering, Computer Science, Economics, Mathematics, or similar (or Master's degree in lieu of experience)
    - 3-5 years of professional experience performing quantitative analysis (internships accepted)
    - Proficiency with analytics and data visualization tools, including Power BI
    - Strong problem-solving skills with the ability to communicate insights clearly to technical and non-technical audiences

    Preferred Qualifications:
    - Experience with advanced statistical methods (regression, hypothesis testing, ANOVA, statistical process control)
    - Practical exposure to machine learning techniques such as clustering, logistic regression, random forests, or similar models
    - Experience with cloud platforms (AWS, Azure, or Google Cloud)
    - Familiarity with procurement, sourcing, cost management, or manufacturing-related analytics
    - Strong initiative, collaboration skills, and commitment to continuous learning in analytics

    Why This Role? This position offers the opportunity to work on high-impact analytics projects that directly support sourcing strategy, cost optimization, and operational decision-making. You will collaborate with diverse teams, gain exposure to large-scale enterprise data, and contribute to meaningful initiatives that drive measurable business outcomes. Cullerton Group provides a professional consulting environment with growth potential, strong client partnerships, and long-term career opportunities.
    $58.7 hourly 3d ago
  • Data Scientist (F2F Interview)

    GBIT (Global Bridge Infotech Inc.)

    Data scientist job in Dallas, TX

    W2 Contract | Dallas, TX (Onsite)

    We are seeking an experienced Data Scientist to join our team in Dallas, Texas. The ideal candidate will have a strong foundation in machine learning, data modeling, and statistical analysis, with the ability to transform complex datasets into clear, actionable insights that drive business impact.

    Key Responsibilities:
    - Develop, implement, and optimize machine learning models to support business objectives.
    - Perform exploratory data analysis, feature engineering, and predictive modeling.
    - Translate analytical findings into meaningful recommendations for technical and non-technical stakeholders.
    - Collaborate with cross-functional teams to identify data-driven opportunities and improve decision-making.
    - Build scalable data pipelines and maintain robust analytical workflows.
    - Communicate insights through reports, dashboards, and data visualizations.

    Qualifications:
    - Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
    - Proven experience working with machine learning algorithms and statistical modeling techniques.
    - Proficiency in Python or R, along with hands-on experience using libraries such as Pandas, NumPy, Scikit-learn, or TensorFlow.
    - Strong SQL skills and familiarity with relational or NoSQL databases.
    - Experience with data visualization tools (e.g., Tableau, Power BI, matplotlib).
    - Excellent problem-solving, communication, and collaboration skills.
    $69k-97k yearly est. 5d ago
  • Data Scientist

    Synechron (4.4 company rating)

    Data scientist job in Dallas, TX

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our challenge: We are seeking a talented and analytical Data Scientist to join our team. The ideal candidate will leverage advanced data analysis, statistical modeling, and machine learning techniques to drive insights, optimize loan processes, improve risk assessment, and enhance customer experiences in mortgage and lending domains.

    Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within Dallas, TX is $110k-$120k/year & benefits (see below).

    Responsibilities:
    - Analyze large volumes of loan and mortgage data to identify key trends, patterns, and risk factors.
    - Develop and implement predictive models for credit scoring, risk segmentation, loan default prediction, and fraud detection.
    - Collaborate with product teams, underwriters, and risk managers to understand business requirements and translate them into analytical solutions.
    - Build data pipelines and automate data ingestion, cleaning, and processing workflows related to loans and mortgage portfolios.
    - Conduct feature engineering to improve model accuracy and robustness.
    - Monitor model performance over time and recalibrate models as needed based on changing market conditions.
    - Create dashboards and reports to communicate insights and support decision-making processes.
    - Ensure data quality, integrity, and compliance with regulatory standards.
    - Stay updated on industry trends, emerging techniques, and regulatory changes affecting mortgage and lending projects.

    Requirements:
    - Strong knowledge of mortgage products, loan lifecycle, credit risk, and underwriting processes.
    - Experience with Kafka, Hadoop, Hive, or other big data tools.
    - Familiarity with containerization (Docker) and orchestration (Kubernetes).
    - Understanding of data security, privacy, and compliance standards.
    - Knowledge of streaming data processing and real-time analytics.

    We offer:
    - A highly competitive compensation and benefits package.
    - A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    - 10 days of paid annual leave (plus sick leave and national holidays).
    - Maternity & paternity leave plans.
    - A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    - Retirement savings plans.
    - A higher education certification policy.
    - Commuter benefits (varies by region).
    - Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    - On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
    - Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellence (CoE) groups.
    - Cutting-edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
    - A flat and approachable organization.
    - A truly diverse, fun-loving, and global work culture.

    Synechron's Diversity & Inclusion Statement: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $110k-120k yearly 1d ago
  • Senior Data Governance Consultant (Informatica)

    Paradigm Technology (4.2 company rating)

    Data scientist job in Plano, TX

    Senior Data Governance Consultant (Informatica)

    About Paradigm - Intelligence Amplified: Paradigm is a strategic consulting firm that turns vision into tangible results. For over 30 years, we've helped Fortune 500 and high-growth organizations accelerate business outcomes across data, cloud, and AI. From strategy through execution, we empower clients to make smarter decisions, move faster, and maximize return on their technology investments. What sets us apart isn't just what we do, it's how we do it. Driven by a clear mission and values rooted in integrity, excellence, and collaboration, we deliver work that creates lasting impact. At Paradigm, your ideas are heard, your growth is prioritized, and your contributions make a difference.

    Summary: We are seeking a Senior Data Governance Consultant to lead and enhance data governance capabilities across a financial services organization. The Senior Data Governance Consultant will collaborate closely with business, risk, compliance, technology, and data management teams to define data standards, strengthen data controls, and drive a culture of data accountability and stewardship. The ideal candidate will have deep experience in developing and implementing data governance frameworks, data policies, and control mechanisms that ensure compliance, consistency, and trust in enterprise data assets. Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred. This position is remote, with occasional travel to Plano, TX.

    Responsibilities:
    - Data Governance Frameworks: Design, implement, and enhance data governance frameworks aligned with regulatory expectations (e.g., BCBS 239, GDPR, CCPA, DORA) and internal control standards
    - Policy & Standards Development: Develop, maintain, and operationalize data policies, standards, and procedures that govern data quality, metadata management, data lineage, and data ownership
    - Control Design & Implementation: Define and embed data control frameworks across data lifecycle processes to ensure data integrity, accuracy, completeness, and timeliness
    - Risk & Compliance Alignment: Work with risk and compliance teams to identify data-related risks and ensure appropriate mitigation and monitoring controls are in place
    - Stakeholder Engagement: Partner with data owners, stewards, and business leaders to promote governance practices and drive adoption of governance tools and processes
    - Data Quality Management: Define and monitor data quality metrics and KPIs, establishing escalation and remediation procedures for data quality issues
    - Metadata & Lineage: Support metadata and data lineage initiatives to increase transparency and enable traceability across systems and processes
    - Reporting & Governance Committees: Prepare materials and reporting for data governance forums, risk committees, and senior management updates
    - Change Management & Training: Develop communication and training materials to embed governance culture and ensure consistent understanding across the organization

    Required Qualifications:
    - 7+ years of experience in data governance, data management, or data risk roles within financial services (banking, insurance, or asset management preferred)
    - Strong knowledge of data policy development, data standards, and control frameworks
    - Proven experience aligning data governance initiatives with regulatory and compliance requirements
    - Familiarity with Informatica data governance and metadata tools
    - Excellent communication skills with the ability to influence senior stakeholders and translate technical concepts into business language
    - Deep understanding of data management principles (DAMA-DMBOK, DCAM, or equivalent frameworks)
    - Bachelor's or Master's Degree in Information Management, Data Science, Computer Science, Business, or related field

    Preferred Qualifications:
    - Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC)
    - Experience with data risk management or data control testing
    - Knowledge of financial regulatory frameworks (e.g., Basel, MiFID II, Solvency II, BCBS 239)
    - Certifications such as Informatica, CDMP, or DCAM
    - Background in consulting or large-scale data transformation programs

    Key Competencies:
    - Strategic and analytical thinking
    - Strong governance and control mindset
    - Excellent stakeholder and relationship management
    - Ability to drive organizational change and embed governance culture
    - Attention to detail with a pragmatic approach

    Why Join Paradigm: At Paradigm, integrity drives innovation. You'll collaborate with curious, dedicated teammates, solving complex problems and unlocking immense data value for leading organizations. If you seek a place where your voice is heard, growth is supported, and your work creates lasting business value, you belong at Paradigm. Learn more at ********************

    Policy Disclosure: Paradigm maintains a strict drug-free workplace policy. All offers of employment are contingent upon successfully passing a standard 5-panel drug screen. Please note that a positive test result for any prohibited substance, including marijuana, will result in disqualification from employment, regardless of state laws permitting its use. This policy applies consistently across all positions and locations.
    $76k-107k yearly est. 1d ago
  • Data Engineer

    Addison Group (4.6 company rating)

    Data scientist job in Coppell, TX

    Title: Data Engineer
    Assignment Type: 6-12 month contract-to-hire
    Compensation: $65/hr-$75/hr W2
    Work Model: Hybrid (4 days on-site, 1 day remote)
    Benefits: Medical, Dental, Vision, 401(k)

    We need someone with 8+ years of experience in the data engineering space who specializes in Microsoft Azure and Databricks. This person will be a part of multiple initiatives for the "New Development" and "Data Reporting" teams but will be primarily tasked with designing, building, maintaining, and automating the enterprise data architecture and pipelines within the cloud. Technology-wise, candidates should come with skills in Azure Databricks (5+ years), a cloud-based environment (Azure and/or AWS), Azure DevOps (ADO), SQL (ETL, SSIS packages), and PySpark or Scala automation, plus architecture experience in building pipelines, data modeling, data pipeline deployment, data mapping, etc.

    Top Skills:
    - 8+ years of data engineering/business intelligence experience
    - Databricks and Azure Data Factory (most current: Unity Catalog for Databricks)
    - Cloud-based environments (Azure or AWS)
    - Data pipeline architecture and CI/CD methodology
    - SQL
    - Automation (Python (PySpark), Scala)
    $65-75 hourly 2d ago
  • Life Actuary

    USAA (4.7 company rating)

    Data scientist job in Plano, TX

    Why USAA? At USAA, our mission is to empower our members to achieve financial security through highly competitive products, exceptional service and trusted advice. We seek to be the #1 choice for the military community and their families. Embrace a fulfilling career at USAA, where our core values - honesty, integrity, loyalty and service - define how we treat each other and our members. Be part of what truly makes us special and impactful.

    The Opportunity: We are seeking a qualified Life Actuary to join our diverse team. The ideal candidate will possess strong risk management skills, with a particular focus on interest rate risk management and broader financial risk experience. This role requires an individual who has acquired their ASA or FSA designation and has a few years of meaningful experience. Key responsibilities will include Asset-Liability Management (ALM), encompassing liquidity management, asset allocation, cashflow matching, and duration targeting. You will also be responsible for conducting asset adequacy testing to ensure the sufficiency of assets to meet future obligations, as well as product pricing, especially for annuity products. Furthermore, an understanding of Risk-Based Capital (RBC) frameworks and methodologies is required. Proficiency with actuarial software platforms, with a strong preference for AXIS, is highly advantageous. We offer a flexible work environment that requires an individual to be in the office 4 days per week. This position can be based in one of the following locations: San Antonio, TX; Plano, TX; Phoenix, AZ; Colorado Springs, CO; Charlotte, NC; or Tampa, FL. Relocation assistance is not available for this position.

    What you'll do:
    - Performs complex work assignments using actuarial modeling software driven models for pricing, valuation, and/or risk management.
    - Reviews laws and regulations to ensure all processes are compliant; provides recommendations for improvements and monitors industry communications regarding potential changes to existing laws and regulations.
    - Runs models, generates reports, and presents recommendations and detailed analysis of all model runs to Actuarial Leadership. May make recommendations for model adjustments and improvements, when appropriate.
    - Shares knowledge with team members and serves as a resource to the team on raised issues and navigates obstacles to deliver work product.
    - Leads or participates as a key resource on moderately complex projects through concept, planning, execution, and implementation phases with minimal guidance, involving cross-functional actuarial areas.
    - Develops exhibits and reports that help explain proposals/findings and provides information in an understandable and usable format for partners.
    - Identifies and provides recommended solutions to business problems independently, often presenting recommendations to leadership.
    - Maintains accurate price level, price structure, data availability and other requirements to achieve profitability and competitive goals.
    - Identifies critical assumptions to monitor and suggests timely remedies to correct or prevent unfavorable trends.
    - Tests impact of assumptions by identifying sources of gain and loss, the appropriate premiums, interest margins, reserves, and cash values for profitability and viability of new and existing products.
    - Advises management on issues and serves as a primary resource for individual team members on raised issues.
    - Ensures risks associated with business activities are identified, measured, monitored, and controlled in accordance with risk and compliance policies and procedures.

    What you have:
    - Bachelor's degree; OR 4 years of related experience (in addition to the minimum years of experience required) may be substituted in lieu of degree.
    - 4 years relevant actuarial or analytical experience and attainment of Fellow within the Society of Actuaries; OR 8 years relevant actuarial experience and attainment of Associate within the Society of Actuaries.
    - Experience performing complex work assignments using actuarial modeling software driven models for pricing, valuation, and/or risk management.
    - Experience presenting complex actuarial analysis and recommendations to technical and non-technical audiences.

    What sets you apart:
    - Asset-Liability Management (ALM): Experience in ALM, including expertise in liquidity management, asset allocation, cashflow matching, and duration targeting.
    - Asset Adequacy Testing: Experience conducting asset adequacy testing to ensure the sufficiency of assets to meet future obligations.
    - Product Pricing: Experience in pricing financial products, with a particular emphasis on annuity products.
    - Risk-Based Capital (RBC): Experience with risk-based capital frameworks and methodologies.
    - Actuarial Software Proficiency: Familiarity with actuarial software platforms. Experience with AXIS is considered a significant advantage.
    - Actuarial Designations: Attainment of Society of Actuaries Associateship (ASA) or Fellowship (FSA).

    Compensation range: The salary range for this position is $127,310 - $243,340. USAA does not provide visa sponsorship for this role. Please do not apply for this role if at any time (now or in the future) you will need immigration support (i.e., H-1B, TN, STEM OPT Training Plans, etc.).

    Compensation: USAA has an effective process for assessing market data and establishing ranges to ensure we remain competitive. You are paid within the salary range based on your experience and market data of the position. The actual salary for this role may vary by location. Employees may be eligible for pay incentives based on overall corporate and individual performance and at the discretion of the USAA Board of Directors. The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job.

    Benefits: At USAA our employees enjoy best-in-class benefits to support their physical, financial, and emotional wellness. These benefits include comprehensive medical, dental and vision plans, 401(k), pension, life insurance, parental benefits, adoption assistance, a paid time off program with paid holidays plus 16 paid volunteer hours, and various wellness programs. Additionally, our career path planning and continuing education assist employees with their professional goals. For more details on our outstanding benefits, visit our benefits page on USAAjobs.com.

    Applications for this position are accepted on an ongoing basis; this posting will remain open until the position is filled. Interested candidates are encouraged to apply the same day they view this posting. USAA is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
    $82k-103k yearly est. 1d ago
  • Data Engineer

    Robert Half (4.5 company rating)

    Data scientist job in Dallas, TX

    We are seeking a highly experienced Senior Data Engineer with deep expertise in modern data engineering frameworks and cloud-native architectures, primarily on AWS. This role focuses on designing, building, and optimizing scalable data pipelines and distributed systems. You will collaborate cross-functionally to deliver secure, high-quality data solutions that drive business decisions.

    Key Responsibilities:
    - Design & Build: Develop and maintain scalable, highly available AWS-based data pipelines, specializing in EKS/ECS containerized workloads and services like Glue, EMR, and Lake Formation.
    - Orchestration: Implement automated data ingestion, transformation, and workflow orchestration using Airflow, NiFi, and AWS Step Functions.
    - Real-time: Architect and implement real-time streaming solutions with Kafka, MSK, and Flink.
    - Data Lake & Storage: Architect secure S3 data storage and govern data lakes using Lake Formation and Glue Data Catalog.
    - Optimization: Optimize distributed processing solutions (Databricks, Spark, Hadoop) and troubleshoot performance across cloud-native systems.
    - Governance: Ensure robust data quality, security, and governance via IAM, Lake Formation controls, and automated validations.
    - Mentorship: Mentor junior team members and foster technical excellence.

    Requirements:
    - Experience: 7+ years in data engineering; strong hands-on experience designing cloud data pipelines.
    - AWS Expertise: Deep proficiency in EKS, ECS, S3, Lake Formation, Glue, EMR, IAM, and MSK.
    - Core Tools: Strong experience with Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, and Flink.
    - Coding: Proficiency in Python, Scala, or Java for building data pipelines and automation.
    - Databases: Strong SQL skills and experience with relational/NoSQL databases (e.g., Redshift, DynamoDB).
    - Cloud-Native Skills: Strong knowledge of Kubernetes, containerization, and CI/CD pipelines.
    - Education: Bachelor's degree in Computer Science or related field.
    $86k-121k yearly est. 3d ago
  • Sr. Data Engineer

    Trinity Industries, Inc. (4.5 company rating)

    Data scientist job in Dallas, TX

    Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL. Join our team today and be a part of Delivering Goods for the Good of All!

    What you'll do:
    - Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
    - Work with leadership to prioritize business and information needs
    - Engage with product and app development teams to gather requirements and create technical requirements
    - Utilize and implement data engineering best practices and coding strategies
    - Be responsible for data ingress into storage

    What you'll need:
    - Bachelor's Degree in Computer Science, Information Management, or related field required; Master's preferred
    - 8+ years in data engineering including prior experience in data transformation
    - Databricks experience building data pipelines using the medallion architecture, bronze to gold
    - Advanced skills in Spark and structured streaming, SQL, Python
    - Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
    - Experience with data conversion, interface and report development
    - Experience working with IoT and/or geospatial data in a cloud environment (Azure)
    - Adept at queries, report writing and presenting findings
    - Prior experience coding utilizing repositories and multiple coding environments
    - Effective communication skills, both verbal and written
    - Strong organizational, time management and multi-tasking skills
    - Process improvement and automation a plus

    Nice to have:
    - Databricks Data Engineering Associate or Professional Certification (2023 or newer)
    $83k-116k yearly est. 2d ago
  • Senior Data Engineer

    Longbridge (3.6 company rating)

    Data scientist job in Dallas, TX

    About Us: Longbridge Securities, founded in March 2019 and headquartered in Singapore, is a next-generation online brokerage platform. Established by a team of seasoned finance professionals and technical experts from leading global firms, we are committed to advancing financial technology innovation. Our mission is to empower every investor by offering enhanced financial opportunities.

    What You'll Do: As part of our global expansion, we're seeking a Data Engineer to design and build batch/real-time data warehouses and maintain data platforms that power trading and research for the US market. You'll work on data pipelines, APIs, storage systems, and quality monitoring to ensure reliable, scalable, and efficient data services.

    Responsibilities:
    - Design and build batch/real-time data warehouses to support US market growth
    - Develop efficient ETL pipelines to optimize data processing performance and ensure data quality/stability
    - Build a unified data middleware layer to reduce business data development costs and improve service reusability
    - Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
    - Discover data insights through collaboration with business owners
    - Maintain and develop enterprise data platforms for the US market

    Qualifications:
    - 7+ years of data engineering experience with a proven track record in data platform/data warehouse projects
    - Proficient in the Hadoop ecosystem (Hive, Kafka, Spark, Flink), Trino, SQL, and at least one programming language (Python/Java/Scala)
    - Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
    - Familiarity with AWS/cloud platforms and experience with Docker and Kubernetes
    - Experience with open-source data platform development; familiar with at least one relational database (MySQL/PostgreSQL)
    - Strong cross-department collaboration skills to translate business requirements into technical solutions
    - Bachelor's degree or higher in Computer Science, Data Science, Statistics, or related fields
    - Comfortable working in a fast-moving fintech/tech startup environment
    - Proficiency in Mandarin and English at the business communication level for international team collaboration

    Bonus Points:
    - Experience with DolphinScheduler and SeaTunnel is a plus
    $83k-116k yearly est. 3d ago
  • Data Engineer

    IDR, Inc. (4.3 company rating)

    Data scientist job in Coppell, TX

    IDR is seeking a Data Engineer to join one of our top clients for an opportunity in Coppell, TX. This role involves designing, building, and maintaining enterprise-grade data architectures, with a focus on cloud-based data engineering, analytics, and machine learning applications. The company operates within the technology and data services industry, providing innovative solutions to large-scale clients.

    Position Overview for the Data Engineer:
    - Develop and maintain scalable data pipelines utilizing Databricks and Azure environments
    - Design data models and optimize ETL/ELT processes for large datasets
    - Collaborate with cross-functional teams to implement data solutions supporting analytics, BI, and ML projects
    - Ensure data quality, availability, and performance across enterprise systems
    - Automate workflows and implement CI/CD pipelines to improve data deployment processes

    Requirements for the Data Engineer:
    - 8-10 years of experience on modern data platforms with a strong background in cloud-based data engineering
    - Strong expertise in Databricks (PySpark/Scala, Delta Lake, Unity Catalog)
    - Hands-on experience with Azure (AWS/GCP also acceptable if very strong in Databricks)
    - Advanced SQL skills and strong experience with data modeling, ETL/ELT development and data orchestration
    - Experience with CI/CD (Azure DevOps, GitHub Actions, Terraform, etc.)

    What's in it for you?
    - Competitive compensation package
    - Full benefits: Medical, Vision, Dental, and more!
    - Opportunity to get in with an industry-leading organization

    Why IDR?
    - 25+ years of proven industry experience in 4 major markets
    - Employee Stock Ownership Program
    - Dedicated Engagement Manager who is committed to you and your success
    - Medical, Dental, Vision, and Life Insurance
    - ClearlyRated's Best of Staffing Client and Talent Award winner 12 years in a row
    $75k-103k yearly est. 5d ago
  • Data Engineer

    Ledelsea

    Data scientist job in Irving, TX

    W2 Contract-to-Hire Role with Monthly Travel to the Dallas, Texas Area We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems. Duties/Responsibilities: Develop and optimize complex SQL queries to support internal analytics and external client data requests. Generate custom data lists and extracts based on client specifications and business rules. Design, build, and maintain efficient ETL pipelines using Azure Data Factory. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions. Work with Salesforce data; familiarity with SOQL is preferred but not required. Support Power BI reporting through basic data modeling and integration. Assist in implementing MLOps practices for model deployment and monitoring. Use Python for data manipulation, automation, and integration tasks. Ensure data quality, consistency, and security across all workflows and systems. Required Skills/Abilities/Attributes: 5+ years of experience in data engineering or a related field. Strong proficiency in SQL, including query optimization and performance tuning. Experience with Azure Data Factory, including git repositories and pipeline deployment. Ability to translate client requirements into accurate and timely data outputs. Working knowledge of Python for data-related tasks. Strong problem-solving skills and ability to work independently. Excellent communication and documentation skills. Preferred Skills/Experience: Previous knowledge of building pipelines for ML models.
Extensive experience creating/managing stored procedures and functions in MS SQL Server 2+ years of experience in cloud architecture (Azure, AWS, etc.) Experience with code management systems (Azure DevOps) 2+ years of reporting design and management (Power BI preferred) Ability to influence others through the articulation of ideas, concepts, benefits, etc. Education and Experience: Bachelor's degree in a computer science field or applicable business experience. Minimum 3 years of experience in a data engineering role. Healthcare experience preferred. Physical Requirements: Prolonged periods sitting at a desk and working on a computer. Ability to lift 20 lbs.
    $76k-103k yearly est. 5d ago
  • Azure Data Engineer Sr

    Resolve Tech Solutions 4.4company rating

    Data scientist job in Irving, TX

    Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling. Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure). Deep understanding of data engineering fundamentals, including database architecture and design, extract, transform, and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies. Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI). Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
    $76k-100k yearly est. 3d ago
  • Senior Data Engineer (USC AND GC ONLY)

    Wise Skulls

    Data scientist job in Richardson, TX

    Now Hiring: Senior Data Engineer (GCP / Big Data / ETL) Duration: 6 Months (Possible Extension) We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud. Must-Have Skills (Non-Negotiable) 9+ years in Data Engineering & Data Warehousing 9+ years of hands-on ETL experience (Informatica, DataStage, etc.) 9+ years working with Teradata 3+ years of hands-on GCP and BigQuery Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines Strong background in query optimization, data structures, metadata & workload management Experience delivering microservices-based data solutions Proficiency in Big Data & cloud architecture 3+ years with SQL & NoSQL 3+ years with Python or similar scripting languages 3+ years with Docker, Kubernetes, and CI/CD for data pipelines Expertise in deploying & scaling apps in containerized environments (K8s) Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams Familiarity with Agile/SDLC methodologies Key Responsibilities Build, enhance, and optimize modern data pipelines on GCP Implement scalable ETL frameworks, data structures, and workflow dependency management Architect and tune BigQuery datasets, queries, and storage layers Collaborate with cross-functional teams to define data requirements and support business objectives Lead efforts in containerized deployments, CI/CD integrations, and performance optimization Drive clarity in project goals, timelines, and deliverables during Agile planning sessions 📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* or call us at *****************
    $76k-103k yearly est. 1d ago
  • Sr Data Engineer (W2 only, onsite, must be local)

    CBTS 4.9company rating

    Data scientist job in Irving, TX

    Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, Computer Engineering/Electrical Engineering, or any engineering field or quantitative discipline such as Physics or Statistics. Minimum 6 years of relevant work experience in data engineering, with at least 2 years in data modeling. Strong technical foundation in Python and SQL, and experience with cloud platforms (for example, AWS or Azure). Deep understanding of data engineering fundamentals, including database architecture and design, extract, transform, and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies. Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI). Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment. Good communication, interpersonal, and presentation skills, with the ability to effectively communicate with both technical and non-technical audiences. ADDITIONAL SKILLS AND OTHER REQUIREMENTS Nice-to-have skills include: Agile experience Dataiku Power BI
    $68k-100k yearly est. 2d ago
  • Data Engineer (Python, PySpark, Databricks)

    Anblicks 4.5company rating

    Data scientist job in Dallas, TX

    Job Title: Data Engineer (Python, PySpark, Databricks) We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency. Data Engineer - SQL, Python, and PySpark Expert (Onsite - Dallas, TX) Key Responsibilities Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments) Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.) Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics Ensure data quality, consistency, security, and lineage across all stages of data processing Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery) Document data flows, logic, and transformation rules Troubleshoot performance and quality issues in batch and real-time pipelines Support compliance-related reporting (e.g., HMDA, CFPB) Required Qualifications 6+ years of experience in data engineering or data development Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
Strong hands-on skills in Python for scripting, data wrangling, and automation Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data Experience working with mortgage banking data sets and domain knowledge is highly preferred Strong understanding of data modeling (dimensional, normalized, star schema) Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc) Familiarity with ETL tools and orchestration frameworks (e.g., Airflow, ADF, dbt)
    $75k-102k yearly est. 2d ago
  • Snowflake Data Engineering with AWS, Python and PySpark

    Infovision Inc. 4.4company rating

    Data scientist job in Frisco, TX

    Job Title: Snowflake Data Engineering with AWS, Python and PySpark Duration: 12 months Required Skills & Experience: 10+ years of experience in data engineering and data integration roles. Expert in working with the Snowflake ecosystem integrated with AWS services and PySpark. 8+ years of core data engineering skills - hands-on experience with the Snowflake ecosystem plus AWS, core SQL, Snowflake, and Python programming. 5+ years of hands-on experience building new data pipeline frameworks with AWS, Snowflake, and Python, with the ability to explore new ingestion frameworks. Hands-on with Snowflake architecture, Virtual Warehouses, Storage and Caching, Snowpipe, Streams, Tasks, and Stages. Experience with cloud platforms (AWS, Azure, or GCP) and integration with Snowflake. Snowflake SQL and Stored Procedures (JavaScript or Python-based). Proficient in Python for data ingestion, transformation, and automation. Solid understanding of data warehousing concepts (ETL, ELT, data modeling, star/snowflake schema). Hands-on with orchestration tools (Airflow, dbt, Azure Data Factory, or similar). Proficiency in SQL and performance tuning. Familiar with Git-based version control, CI/CD pipelines, and DevOps best practices. Strong communication skills and ability to collaborate in agile teams.
    $76k-99k yearly est. 1d ago
  • Data Engineer

    Beaconfire Inc.

    Data scientist job in Dallas, TX

    Junior Data Engineer DESCRIPTION: BeaconFire is based in Central NJ, specializing in Software Development, Web Development, and Business Intelligence, and is looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems. Qualifications: Passion for data and a deep desire to learn. Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline. Intermediate Python skills. Experience with data processing libraries (NumPy, pandas, etc.) is a plus. Experience with relational databases (SQL Server, Oracle, MySQL, etc.) Strong written and verbal communication skills. Ability to work both independently and as part of a team. Responsibilities: Collaborate with the analytics team to find reliable data solutions to meet the business needs. Design and implement scalable ETL or ELT processes to support the business demand for data. Perform data extraction, manipulation, and production from database tables. Build utilities, user-defined functions, and frameworks to better enable data flow patterns. Build and incorporate automated unit tests, and participate in integration testing efforts. Work with teams to resolve operational & performance issues. Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to. Compensation: $65,000.00 to $80,000.00 /year BeaconFire is an E-Verified company. Work visa sponsorship is available.
    $65k-80k yearly 5d ago
  • Data Engineer

    Search Services 3.5company rating

    Data scientist job in Fort Worth, TX

    ABOUT OUR CLIENT Our Client is a privately held, well-capitalized energy company based in Fort Worth, Texas with a strong track record of success across upstream, midstream, and mineral operations throughout the United States. The leadership team is composed of highly experienced professionals who have worked together across multiple ventures and basins. They are committed to fostering a collaborative, high-integrity culture that values intellectual curiosity, accountability, and continuous improvement. ABOUT THE ROLE Our Client is seeking a skilled and motivated Data Engineer to join their growing technology team. This role plays a key part in managing and optimizing data systems, designing and maintaining ETL processes, and improving data workflows across departments. The successful candidate will have deep technical expertise, a strong background in database architecture and data integration, and the ability to collaborate cross-functionally to enhance data management and accessibility. Candidates with extensive experience may be considered for a Senior Data Engineer title. RESPONSIBILITIES Design, implement, and evolve database architecture and schemas to support scalable and efficient data storage and retrieval. Build, manage, and maintain end-to-end data pipelines, including automation of ingestion and transformation processes. Monitor, troubleshoot, and optimize data pipeline performance to ensure data quality and reliability. Document all aspects of the data pipeline architecture, including data sources, transformations, and job scheduling. Optimize database performance by managing indexing, queries, stored procedures, and views. Develop frameworks and tools for reusable ETL processes and efficient data handling across formats such as CSV, JSON, and Parquet. Ensure proper version control and adherence to coding standards, security protocols, and performance best practices. 
Collaborate with cross-functional teams including engineering, operations, land, finance, and accounting to streamline data workflows. QUALIFICATIONS Excellent verbal and written communication skills. Strong organizational, analytical, and problem-solving abilities. Proficient in Microsoft Office Suite and other related software. Experienced in programming languages such as R, Python, and SQL. Proficient in making and optimizing API calls for data integration. Strong experience with cloud platforms such as Azure Data Lake, Azure Data Studio, Azure Databricks, and/or Snowflake. Proficient in CI/CD principles and tools. High integrity, humility, and a strong sense of accountability and teamwork. A self-starter with a continuous improvement mindset and passion for evolving technologies. REQUIRED EDUCATION AND EXPERIENCE Bachelor's degree in computer science, software engineering, or a related field. 2+ years of experience in data engineering, database management, or software engineering. Master's degree or additional certification a plus, but not required. Exposure to geospatial or GIS data is a plus. PHYSICAL REQUIREMENTS Prolonged periods of sitting and working at a computer. Ability to lift up to 15 pounds occasionally. *********************************************************************************** NO AGENCY OR C2C CANDIDATES WILL BE CONSIDERED VISA SPONSORSHIP IS NOT OFFERED NOR AVAILABLE FOR H-1B OR F-1 OPT ***********************************************************************************
    $84k-117k yearly est. 3d ago
  • GCP Data Engineer

    Methodhub

    Data scientist job in Fort Worth, TX

    Job Title: GCP Data Engineer Employment Type: W2/CTH Client: Direct We are seeking a highly skilled Data Engineer with strong expertise in Python, SQL, and Google Cloud Platform (GCP) services. The ideal candidate will have 6-8 years of hands-on experience in building and maintaining scalable data pipelines, working with APIs, and leveraging GCP tools such as BigQuery, Cloud Composer, and Dataflow. Core Responsibilities: Design, build, and maintain scalable data pipelines to support analytics and business operations. Develop and optimize ETL processes for structured and unstructured data. Work with BigQuery, Cloud Composer, and other GCP services to manage data workflows. Collaborate with data analysts and business teams to ensure data availability and quality. Integrate data from multiple sources using APIs and custom scripts. Monitor and troubleshoot pipeline performance and reliability. Technical Skills: Strong proficiency in Python and SQL. Experience with data pipeline development and ETL frameworks. GCP Expertise: Hands-on experience with BigQuery, Cloud Composer, and Dataflow. Additional Requirements: Familiarity with workflow orchestration tools and cloud-based data architecture. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities.
    $76k-104k yearly est. 1d ago
  • GCP Data Engineer

    Infosys 4.4company rating

    Data scientist job in Richardson, TX

    Infosys is seeking a Google Cloud Platform (GCP) data engineer with experience in GitHub and Python. In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. You will be responsible for interfacing with key stakeholders and will apply your technical proficiency across different stages of the Software Development Life Cycle. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued. Required Qualifications: Candidate must be located within commuting distance of Richardson, TX or be willing to relocate to the area. This position may require travel in the US. Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education. Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time. At least 4 years of Information Technology experience. Experience working with GCP data engineering technologies such as Dataflow/Airflow, Pub/Sub, Kafka, Dataproc/Hadoop, and BigQuery. ETL development experience with a strong SQL background, plus languages and tools such as Python/R, Scala, Java, Hive, Spark, and Kafka. Strong knowledge of Python program development to build reusable frameworks and enhance existing frameworks. Application build experience with core GCP services like Dataproc, GKE, and Composer. Deep understanding of GCP IAM & GitHub; must have done IAM setup. Knowledge of CI/CD pipelines using Terraform and Git. Preferred Qualifications: Good knowledge of Google BigQuery, using advanced SQL programming techniques to build BigQuery data sets in the ingestion and transformation layers.
Experience in relational modeling, dimensional modeling, and modeling of unstructured data. Knowledge of Airflow DAG creation, execution, and monitoring. Good understanding of Agile software development frameworks. Ability to work in teams in a diverse, multi-stakeholder environment comprising Business and Technology teams. Experience and desire to work in a global delivery environment.
    $72k-91k yearly est. 1d ago

Learn more about data scientist jobs

How much does a data scientist earn in Irving, TX?

The average data scientist in Irving, TX earns between $59,000 and $113,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Irving, TX

$82,000

What are the biggest employers of Data Scientists in Irving, TX?

The biggest employers of Data Scientists in Irving, TX are:
  1. Amazon
  2. Tek Spikes
  3. Brillio
  4. ServiceLink
  5. Gartner
  6. Gap International
  7. Rapinno Tech