
Data scientist jobs in Lewisville, TX

- 419 jobs
  • Data Scientist with Python ML/NLP

    Programmers.Io 3.8 company rating

    Data scientist job in Addison, TX

    Role: Data Scientist with Python ML/NLP. Years of experience: 10+. Full-time.

    Job Responsibilities: We're looking for a Data Scientist responsible for designing, building, and maintaining document capture applications. The ideal candidate will have a solid background in software engineering, experience building machine learning NLP models, and good familiarity with generative AI models.

    High-Level Skills Required (Primary):
    - 7+ years as a Data Scientist or in related roles
    - Bachelor's degree in Computer Science or a related technical field
    - Deep understanding of, and some exposure to, new open-source generative AI models
    - At least 5 years of programming experience in software development and Agile processes
    - At least 5 years of Python (or equivalent) programming experience working with ML/NLP models
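    The posting itself contains no code, but as a rough illustration of the document-classification flavor of NLP work it describes (all data, names, and the nearest-neighbour approach here are invented for this sketch, not taken from the posting), a bag-of-words classifier in stdlib Python might look like:

```python
from collections import Counter
import math

def tokenize(text):
    # Lowercase and split on non-letters: a deliberately simple tokenizer.
    return [t for t in "".join(c if c.isalpha() else " " for c in text.lower()).split() if t]

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify(doc, labeled_docs):
    # Nearest-neighbour label by cosine similarity over token counts.
    bag = Counter(tokenize(doc))
    best = max(labeled_docs, key=lambda ld: cosine(bag, Counter(tokenize(ld[0]))))
    return best[1]

examples = [
    ("please find attached the invoice for payment", "invoice"),
    ("the shipment has left the warehouse for delivery", "shipping"),
]
print(classify("payment due on the attached invoice", examples))
```

    A production document-capture system would replace the toy tokenizer and similarity with trained NLP or generative models, but the retrieve-compare-label loop is the same shape.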
    $72k-104k yearly est. 2d ago
  • Senior Data Governance Consultant (Informatica)

    Paradigm Technology 4.2 company rating

    Data scientist job in Plano, TX

    Senior Data Governance Consultant (Informatica)

    About Paradigm - Intelligence Amplified: Paradigm is a strategic consulting firm that turns vision into tangible results. For over 30 years, we've helped Fortune 500 and high-growth organizations accelerate business outcomes across data, cloud, and AI. From strategy through execution, we empower clients to make smarter decisions, move faster, and maximize return on their technology investments. What sets us apart isn't just what we do, it's how we do it. Driven by a clear mission and values rooted in integrity, excellence, and collaboration, we deliver work that creates lasting impact. At Paradigm, your ideas are heard, your growth is prioritized, and your contributions make a difference.

    Summary: We are seeking a Senior Data Governance Consultant to lead and enhance data governance capabilities across a financial services organization. The Senior Data Governance Consultant will collaborate closely with business, risk, compliance, technology, and data management teams to define data standards, strengthen data controls, and drive a culture of data accountability and stewardship. The ideal candidate will have deep experience in developing and implementing data governance frameworks, data policies, and control mechanisms that ensure compliance, consistency, and trust in enterprise data assets. Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred. This position is remote, with occasional travel to Plano, TX.

    Responsibilities:
    - Data Governance Frameworks: Design, implement, and enhance data governance frameworks aligned with regulatory expectations (e.g., BCBS 239, GDPR, CCPA, DORA) and internal control standards
    - Policy & Standards Development: Develop, maintain, and operationalize data policies, standards, and procedures that govern data quality, metadata management, data lineage, and data ownership
    - Control Design & Implementation: Define and embed data control frameworks across data lifecycle processes to ensure data integrity, accuracy, completeness, and timeliness
    - Risk & Compliance Alignment: Work with risk and compliance teams to identify data-related risks and ensure appropriate mitigation and monitoring controls are in place
    - Stakeholder Engagement: Partner with data owners, stewards, and business leaders to promote governance practices and drive adoption of governance tools and processes
    - Data Quality Management: Define and monitor data quality metrics and KPIs, establishing escalation and remediation procedures for data quality issues
    - Metadata & Lineage: Support metadata and data lineage initiatives to increase transparency and enable traceability across systems and processes
    - Reporting & Governance Committees: Prepare materials and reporting for data governance forums, risk committees, and senior management updates
    - Change Management & Training: Develop communication and training materials to embed governance culture and ensure consistent understanding across the organization

    Required Qualifications:
    - 7+ years of experience in data governance, data management, or data risk roles within financial services (banking, insurance, or asset management preferred)
    - Strong knowledge of data policy development, data standards, and control frameworks
    - Proven experience aligning data governance initiatives with regulatory and compliance requirements
    - Familiarity with Informatica data governance and metadata tools
    - Excellent communication skills with the ability to influence senior stakeholders and translate technical concepts into business language
    - Deep understanding of data management principles (DAMA-DMBOK, DCAM, or equivalent frameworks)
    - Bachelor's or Master's degree in Information Management, Data Science, Computer Science, Business, or a related field

    Preferred Qualifications:
    - Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC)
    - Experience with data risk management or data control testing
    - Knowledge of financial regulatory frameworks (e.g., Basel, MiFID II, Solvency II, BCBS 239)
    - Certifications such as Informatica, CDMP, or DCAM
    - Background in consulting or large-scale data transformation programs

    Key Competencies:
    - Strategic and analytical thinking
    - Strong governance and control mindset
    - Excellent stakeholder and relationship management
    - Ability to drive organizational change and embed governance culture
    - Attention to detail with a pragmatic approach

    Why Join Paradigm: At Paradigm, integrity drives innovation. You'll collaborate with curious, dedicated teammates, solving complex problems and unlocking immense data value for leading organizations. If you seek a place where your voice is heard, growth is supported, and your work creates lasting business value, you belong at Paradigm. Learn more at ********************

    Policy Disclosure: Paradigm maintains a strict drug-free workplace policy. All offers of employment are contingent upon successfully passing a standard 5-panel drug screen. Please note that a positive test result for any prohibited substance, including marijuana, will result in disqualification from employment, regardless of state laws permitting its use. This policy applies consistently across all positions and locations.
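    As a hedged aside: the data quality management duties described in this posting (defining metrics and KPIs with escalation procedures) can be sketched in plain Python. The field names, sample records, and the 95% threshold below are invented for illustration, not drawn from the posting:

```python
def completeness(records, required_fields):
    # Fraction of records in which every required field is present and non-empty.
    ok = sum(1 for r in records if all(r.get(f) not in (None, "") for f in required_fields))
    return ok / len(records) if records else 1.0

def evaluate(records, required_fields, threshold=0.95):
    # Return the metric plus an escalation flag when it falls below the threshold.
    score = completeness(records, required_fields)
    return {"completeness": score, "escalate": score < threshold}

records = [
    {"customer_id": "C1", "country": "US"},
    {"customer_id": "C2", "country": ""},   # missing country -> incomplete
    {"customer_id": "C3", "country": "DE"},
    {"customer_id": "C4", "country": "FR"},
]
print(evaluate(records, ["customer_id", "country"]))
```

    In practice such checks run inside a governance platform with lineage and remediation workflows; the point is only that each metric reduces to a measurable rule plus a threshold.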
    $76k-107k yearly est. 5d ago
  • Senior Data Engineer

    Ascendion

    Data scientist job in Plano, TX

    Ascendion is a full-service digital engineering solutions company. We make and manage software platforms and products that power growth and deliver captivating experiences to consumers and employees. Our engineering, cloud, data, experience design, and talent solution capabilities accelerate transformation and impact for enterprise clients. Headquartered in New Jersey, our workforce of 6,000+ Ascenders delivers solutions from around the globe. Ascendion is built differently to engineer the next. Ascendion | Engineering to elevate life.

    We have a culture built on opportunity, inclusion, and a spirit of partnership. Come, change the world with us:
    - Build the coolest tech for the world's leading brands
    - Solve complex problems - and learn new skills
    - Experience the power of transforming digital engineering for Fortune 500 clients
    - Master your craft with leading training programs and hands-on experience
    Experience a community of change makers! Join a culture of high-performing innovators with endless ideas and a passion for tech. Our culture is the fabric of our company, and it is what makes us unique and diverse. The way we share ideas, learning, experiences, successes, and joy allows everyone to be their best at Ascendion.

    About the Role
    Job Title: Senior Data Engineer

    Key Responsibilities:
    - Design, develop, and maintain scalable and reliable data pipelines and ETL workflows.
    - Build and optimize data models and queries in Snowflake to support analytics and reporting needs.
    - Develop data processing and automation scripts using Python.
    - Implement and manage data orchestration workflows using Airflow, Airbyte, or similar tools.
    - Work with AWS data services including EMR, Glue, and Kafka for large-scale data ingestion and processing.
    - Ensure data quality, reliability, and performance across data pipelines.
    - Collaborate with analytics, product, and engineering teams to understand data requirements and deliver robust solutions.
    - Monitor, troubleshoot, and optimize data workflows for performance and cost efficiency.

    Required Skills & Qualifications:
    - 8+ years of hands-on experience as a Data Engineer.
    - Strong proficiency in SQL and Snowflake.
    - Extensive experience with ETL frameworks and data pipeline orchestration tools (Airflow, Airbyte, or similar).
    - Proficiency in Python for data processing and automation.
    - Hands-on experience with AWS data services, including EMR, Glue, and Kafka.
    - Strong understanding of data warehousing, data modeling, and distributed data processing concepts.

    Nice to Have:
    - Experience working with streaming data pipelines.
    - Familiarity with data governance, security, and compliance best practices.
    - Experience mentoring junior engineers and leading technical initiatives.

    Salary Range: The salary for this position is between $130,000-$140,000 annually. Factors which may affect pay within this range may include geography/market, skills, education, experience, and other qualifications of the successful candidate.

    Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: [medical insurance] [dental insurance] [vision insurance] [401(k) retirement plan] [long-term disability insurance] [short-term disability insurance] [5 personal days accrued each calendar year. The paid time off benefits meet the paid sick and safe time laws that pertain to the city/state] [10-15 days of paid vacation time] [6 paid holidays and 1 floating holiday per calendar year] [Ascendion Learning Management System]

    Want to change the world? Let us know. Tell us about your experiences, education, and ambitions. Bring your knowledge, unique viewpoint, and creativity to the table. Let's talk!
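    The ETL workflow named in the responsibilities can be sketched, very roughly, with stand-in functions. Everything below is invented for illustration (sample rows, an in-memory "warehouse" list); a real pipeline would land data in Snowflake and be scheduled by Airflow or a similar orchestrator:

```python
def extract():
    # Stand-in for reading raw rows from a source system (e.g., an S3 landing zone).
    return [
        {"order_id": 1, "amount": "19.99", "region": "tx"},
        {"order_id": 2, "amount": "5.00", "region": "ca"},
        {"order_id": 2, "amount": "5.00", "region": "ca"},  # duplicate row
    ]

def transform(rows):
    # Cast types, normalize values, and de-duplicate on the business key.
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"],
                    "amount": float(r["amount"]),
                    "region": r["region"].upper()})
    return out

def load(rows, warehouse):
    # Stand-in for a warehouse COPY/MERGE; here we append to an in-memory list.
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0])
```

    Each stage stays a pure function of its input, which is what makes pipelines like this testable and easy to re-run on failure.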
    $130k-140k yearly 4d ago
  • Life Actuary

    USAA 4.7 company rating

    Data scientist job in Plano, TX

    Why USAA? At USAA, our mission is to empower our members to achieve financial security through highly competitive products, exceptional service and trusted advice. We seek to be the #1 choice for the military community and their families. Embrace a fulfilling career at USAA, where our core values - honesty, integrity, loyalty and service - define how we treat each other and our members. Be part of what truly makes us special and impactful.

    The Opportunity: We are seeking a qualified Life Actuary to join our diverse team. The ideal candidate will possess strong risk management skills, with a particular focus on interest rate risk management and broader financial risk experience. This role requires an individual who has earned an ASA or FSA designation and has a few years of meaningful experience. Key responsibilities will include Asset-Liability Management (ALM), encompassing liquidity management, asset allocation, cashflow matching, and duration targeting. You will also be responsible for conducting asset adequacy testing to ensure the sufficiency of assets to meet future obligations, and for product pricing, especially for annuity products. Furthermore, an understanding of Risk-Based Capital (RBC) frameworks and methodologies is required. Proficiency with actuarial software platforms, with a strong preference for AXIS, is highly advantageous. We offer a flexible work environment that requires an individual to be in the office 4 days per week. This position can be based in one of the following locations: San Antonio, TX; Plano, TX; Phoenix, AZ; Colorado Springs, CO; Charlotte, NC; or Tampa, FL. Relocation assistance is not available for this position.

    What you'll do:
    - Performs complex work assignments using models driven by actuarial modeling software for pricing, valuation, and/or risk management.
    - Reviews laws and regulations to ensure all processes are compliant, provides recommendations for improvements, and monitors industry communications regarding potential changes to existing laws and regulations.
    - Runs models, generates reports, and presents recommendations and detailed analysis of all model runs to Actuarial Leadership. May make recommendations for model adjustments and improvements, when appropriate.
    - Shares knowledge with team members, serves as a resource to the team on raised issues, and navigates obstacles to deliver work product.
    - Leads or participates as a key resource on moderately complex projects through concept, planning, execution, and implementation phases with minimal guidance, involving cross-functional actuarial areas.
    - Develops exhibits and reports that help explain proposals/findings and provides information in an understandable and usable format for partners.
    - Identifies and provides recommended solutions to business problems independently, often presenting recommendations to leadership.
    - Maintains accurate price level, price structure, data availability and other requirements to achieve profitability and competitive goals.
    - Identifies critical assumptions to monitor and suggests timely remedies to correct or prevent unfavorable trends.
    - Tests the impact of assumptions by identifying sources of gain and loss, the appropriate premiums, interest margins, reserves, and cash values for profitability and viability of new and existing products.
    - Advises management on issues and serves as a primary resource for individual team members on raised issues.
    - Ensures risks associated with business activities are identified, measured, monitored, and controlled in accordance with risk and compliance policies and procedures.

    What you have:
    - Bachelor's degree; OR 4 years of related experience (in addition to the minimum years of experience required) may be substituted in lieu of a degree.
    - 4 years of relevant actuarial or analytical experience and attainment of Fellow within the Society of Actuaries; OR 8 years of relevant actuarial experience and attainment of Associate within the Society of Actuaries.
    - Experience performing complex work assignments using models driven by actuarial modeling software for pricing, valuation, and/or risk management.
    - Experience presenting complex actuarial analysis and recommendations to technical and non-technical audiences.

    What sets you apart:
    - Asset-Liability Management (ALM): Experience in ALM, including expertise in liquidity management, asset allocation, cashflow matching, and duration targeting.
    - Asset Adequacy Testing: Experience conducting asset adequacy testing to ensure the sufficiency of assets to meet future obligations.
    - Product Pricing: Experience in pricing financial products, with a particular emphasis on annuity products.
    - Risk-Based Capital (RBC): Experience with risk-based capital frameworks and methodologies.
    - Actuarial Software Proficiency: Familiarity with actuarial software platforms. Experience with AXIS is considered a significant advantage.
    - Actuarial Designations: Attainment of Society of Actuaries Associateship (ASA) or Fellowship (FSA).

    Compensation range: The salary range for this position is $127,310 - $243,340. USAA does not provide visa sponsorship for this role. Please do not apply for this role if at any time (now or in the future) you will need immigration support (i.e., H-1B, TN, STEM OPT Training Plans, etc.).

    Compensation: USAA has an effective process for assessing market data and establishing ranges to ensure we remain competitive. You are paid within the salary range based on your experience and market data of the position. The actual salary for this role may vary by location. Employees may be eligible for pay incentives based on overall corporate and individual performance and at the discretion of the USAA Board of Directors.

    The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job.

    Benefits: At USAA our employees enjoy best-in-class benefits to support their physical, financial, and emotional wellness. These benefits include comprehensive medical, dental and vision plans, 401(k), pension, life insurance, parental benefits, adoption assistance, a paid time off program with paid holidays plus 16 paid volunteer hours, and various wellness programs. Additionally, our career path planning and continuing education assist employees with their professional goals. For more details on our outstanding benefits, visit our benefits page on USAAjobs.com.

    Applications for this position are accepted on an ongoing basis; this posting will remain open until the position is filled. Thus, interested candidates are encouraged to apply the same day they view this posting. USAA is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
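    For readers unfamiliar with the duration targeting mentioned under ALM in this posting, a small worked example may help. Macaulay duration is the present-value-weighted average time to receipt of a set of cashflows; the bond and rate below are invented purely for illustration:

```python
def present_value(cashflows, rate):
    # cashflows: list of (time_in_years, amount); flat annual discount rate.
    return sum(cf / (1 + rate) ** t for t, cf in cashflows)

def macaulay_duration(cashflows, rate):
    # PV-weighted average time to receipt of the cashflows.
    pv = present_value(cashflows, rate)
    return sum(t * cf / (1 + rate) ** t for t, cf in cashflows) / pv

# A 3-year annual-pay bond: 5% coupons on 100 notional, 4% flat discount rate.
bond = [(1, 5.0), (2, 5.0), (3, 105.0)]
print(round(macaulay_duration(bond, 0.04), 3))
```

    In ALM, asset portfolios are chosen so their aggregate duration matches that of the liabilities, which immunizes the surplus against small parallel rate moves.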
    $82k-103k yearly est. 4d ago
  • Senior Data Engineer

    Robert Half 4.5 company rating

    Data scientist job in Plano, TX

    We are seeking a highly skilled Senior Data Engineer with AI/ML ambitions to design, build, and scale next-generation data and machine learning infrastructure. This role is ideal for a hands-on technical expert who thrives in building complex systems from the ground up, has deep experience in Google Cloud Platform (GCP), and is excited about stepping into management and technical leadership. You will work across engineering, data science, and executive leadership teams to architect cloud-native solutions, optimize real-time data pipelines, and help shape our long-term AI/ML engineering strategy.

    Key Responsibilities:
    - Cloud & Platform Engineering: Architect, build, and maintain high-performance data and ML infrastructure on GCP using best-in-class cloud-native tools and services. Lead the design of scalable cloud architectures, with a strong focus on resilience, automation, and cost-effective operation. Build applications and services from scratch, ensuring they are modular, maintainable, and scalable.
    - Real-Time & Distributed Systems: Design and optimize real-time data processing pipelines capable of handling high-volume, low-latency traffic. Implement and fine-tune load balancing strategies to support fault tolerance and performance across distributed systems. Lead system design for high availability, horizontal scaling, and microservices communication patterns.
    - AI/ML Engineering: Partner with ML engineers and data scientists to deploy, monitor, and scale machine learning workflows. Create and maintain ML-focused CI/CD pipelines, model deployment frameworks, and automated testing harnesses.
    - Open-Source & Code Quality: Contribute to and maintain open-source projects, including active GitHub repositories. Champion best practices across code reviews, version control, and documentation. Establish, document, and enforce advanced testing methodologies, including integration, regression, performance, and automated testing frameworks.
    - Leadership & Collaboration: Serve as a technical leader and mentor within the engineering team. Collaborate effectively with senior leadership and executive stakeholders, translating complex engineering concepts into strategic insights. Provide guidance and direction to junior engineers, with an eye toward growing into a people leadership role.

    Required Qualifications:
    - Bachelor's degree from an accredited university; Master's degree highly preferred.
    - Expert-level experience with GCP, including services such as BigQuery, Cloud Run, Pub/Sub, Dataflow, GKE, and Vertex AI.
    - Strong background in cloud architecture and distributed system design (GCP preferred; AWS/Azure acceptable).
    - Proven ability to build applications, platforms, and services from scratch.
    - Advanced skills in traffic load balancing, autoscaling, and performance tuning.
    - Deep experience with real-time data systems, streaming frameworks, and low-latency infrastructure.
    - Strong track record of open-source contributions and maintaining GitHub repositories.
    - Expertise in testing methodologies across the software lifecycle.
    - Excellent communication skills with comfort interacting directly with executive leadership.
    - Demonstrated interest or experience in team leadership or management.

    Preferred Qualifications:
    - Experience with microservices, Kubernetes, and service mesh technologies.
    - Familiarity with MLOps tooling and frameworks (Kubeflow, MLflow, Vertex AI pipelines).
    - Strong Python, Go, or similar programming expertise.
    - Prior experience in fast-growth or startup environments.
    $85k-120k yearly est. 2d ago
  • Senior Data Engineer

    Longbridge 3.6 company rating

    Data scientist job in Dallas, TX

    About Us: Longbridge Securities, founded in March 2019 and headquartered in Singapore, is a next-generation online brokerage platform. Established by a team of seasoned finance professionals and technical experts from leading global firms, we are committed to advancing financial technology innovation. Our mission is to empower every investor by offering enhanced financial opportunities.

    What You'll Do: As part of our global expansion, we're seeking a Data Engineer to design and build batch/real-time data warehouses and maintain data platforms that power trading and research for the US market. You'll work on data pipelines, APIs, storage systems, and quality monitoring to ensure reliable, scalable, and efficient data services.

    Responsibilities:
    - Design and build batch/real-time data warehouses to support US market growth
    - Develop efficient ETL pipelines to optimize data processing performance and ensure data quality/stability
    - Build a unified data middleware layer to reduce business data development costs and improve service reusability
    - Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
    - Discover data insights through collaboration with business owners
    - Maintain and develop enterprise data platforms for the US market

    Qualifications:
    - 7+ years of data engineering experience with a proven track record in data platform/data warehouse projects
    - Proficient in the Hadoop ecosystem (Hive, Kafka, Spark, Flink), Trino, SQL, and at least one programming language (Python/Java/Scala)
    - Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
    - Familiarity with AWS/cloud platforms and experience with Docker and Kubernetes
    - Experience with open-source data platform development; familiar with at least one relational database (MySQL/PostgreSQL)
    - Strong cross-department collaboration skills to translate business requirements into technical solutions
    - Bachelor's degree or higher in Computer Science, Data Science, Statistics, or related fields
    - Comfortable working in a fast-moving fintech/tech startup environment

    Bonus Points: Experience with DolphinScheduler and SeaTunnel is a plus
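    As a toy sketch of the real-time side of the warehouse work described in this posting: engines like Flink or Spark Streaming assign events to fixed-size windows and aggregate per window. The same idea, on invented ticker events and with everything in memory, fits in a few lines:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    # Assign each (timestamp, symbol) event to a fixed-size window and count per window.
    counts = defaultdict(int)
    for ts, symbol in events:
        window_start = ts - (ts % window_seconds)
        counts[(window_start, symbol)] += 1
    return dict(counts)

# Invented trade events: (seconds-since-epoch, symbol).
events = [(0, "AAPL"), (3, "AAPL"), (7, "MSFT"), (12, "AAPL")]
print(tumbling_window_counts(events, 10))
```

    A production stream processor adds what this sketch omits: out-of-order events, watermarks, and fault-tolerant state.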
    $83k-116k yearly est. 2d ago
  • GenAi Data Engineer

    Insight Global

    Data scientist job in Plano, TX

    Must Haves:
    - 6-8 years of hands-on Python development
    - 6-8 years of experience with Angular or React
    - 2-3 years of experience developing AI/ML frameworks using tools like MLflow and Kubeflow, including implementation of fine-tuning techniques
    - Experience building and deploying API-based applications using FastAPI, JWT authentication, and API Gateway integration
    - Hands-on DevOps experience with Git/Bitbucket, Jenkins, SonarQube, Artifactory or Ansible

    Job Description: Insight Global is looking for a highly skilled Data Engineer to sit in Plano, Texas to join one of their largest financial services clients. This individual should come with deep expertise in building AI/ML frameworks and developing Retrieval-Augmented Generation (RAG) pipelines using Large Language Models (LLMs) like ChatGPT, Copilot, Gemini, Grok, etc.

    Job Responsibilities:
    - Develop RAG pipelines using Large Language Models (LLMs)
    - Code solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements
    - Perform continuous integration and continuous delivery (CI/CD) activities

    This individual is required to be onsite 3 days a week in Plano, TX; Charlotte, NC; or New Jersey.
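    The RAG pipelines mentioned in this posting follow a retrieve-then-prompt shape: fetch the passages most relevant to a question, then prepend them as context before calling an LLM. A deliberately minimal sketch (the corpus, overlap scoring, and prompt format are all invented; production systems retrieve with vector embeddings, not word overlap):

```python
def score(query, doc):
    # Count of query terms appearing in the document (a crude relevance score).
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=2):
    # Top-k passages by overlap score; a real pipeline would use embeddings.
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, corpus):
    # Augment the user question with retrieved context before calling an LLM.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "wire transfers settle within two business days",
    "the cafeteria opens at eight",
    "international wire transfers may incur extra fees",
]
prompt = build_prompt("how long do wire transfers take", corpus)
print(prompt)
```

    The assembled prompt would then be sent to whichever LLM the pipeline targets; the retrieval step is what grounds the model's answer in your own documents.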
    $76k-103k yearly est. 5d ago
  • Sr. Snowflake Data Engineer

    Headway Tek Inc.

    Data scientist job in Dallas, TX

    Job Title: Sr. Snowflake Data Engineer. Duration: 1 Year.

    This role is for an expert Snowflake Data Engineer with 12+ years in software and data engineering, including 3-5 years focused on building and optimizing data microservices and delivering end-to-end solutions. You will design, implement, and deploy scalable Snowflake architectures, collaborating with business and technology teams to ensure efficient, high-quality data delivery from start to finish.

    Job Requirements:
    - Design, develop, and optimize complex data pipelines and ETL processes using Snowflake and complementary cloud technologies.
    - Architect and implement scalable data warehouse solutions to support business requirements and analytics needs.
    - Lead data migration and modernization projects from legacy platforms to Snowflake.
    - Develop and enforce best practices for Snowflake architecture, security, performance tuning, and cost optimization.
    - Mentor and guide junior and mid-level engineers, fostering a culture of technical excellence and continuous learning.
    - Collaborate closely with data scientists, analysts, and business stakeholders to understand requirements and deliver robust solutions.
    - Develop and maintain documentation, data models, and data dictionaries for Snowflake environments.
    - Monitor, troubleshoot, and resolve issues related to data integrity, performance, and reliability.
    - Evaluate and integrate new data tools and technologies to enhance the data engineering ecosystem.
    - Ensure compliance with company policies and industry standards regarding data security and governance.

    Required Qualifications:
    - Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
    - 12+ years of experience in data engineering or a related field, with demonstrable expertise in building and maintaining large-scale data systems.
    - 3+ years of hands-on experience with Snowflake, including data modeling, performance tuning, and advanced SQL.
    - Strong experience with ETL/ELT tools (e.g., Informatica, Talend, dbt, Apache Airflow) and data integration techniques.
    - Proficiency in programming languages such as Python, Scala, or Java for data processing.
    - Deep understanding of cloud data platforms (AWS, Azure, or GCP) and their integration with Snowflake.
    - Proven track record of leading complex data migration and modernization projects.
    - Excellent analytical, problem-solving, and communication skills.
    - Experience with data security best practices, access controls, and compliance requirements.
    - Familiarity with CI/CD pipelines and DevOps practices in the context of data engineering.

    Preferred Skills:
    - Snowflake certification(s) strongly preferred.
    - Experience with real-time data processing frameworks (e.g., Kafka, Spark Streaming).
    - Knowledge of BI/reporting tools (e.g., Tableau, Power BI, Looker).
    - Exposure to machine learning workflows and data science collaboration.
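    The migration and pipeline work described in this posting typically leans on incremental upserts. The shape of a warehouse MERGE (update rows whose key matches, insert the rest) can be mimicked in plain Python; the keys and rows below are invented for illustration:

```python
def merge(target, updates, key="id"):
    # Upsert: update matching keys, insert new ones -- the shape of a SQL MERGE.
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        by_key.setdefault(row[key], {}).update(row)
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
updates = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
print(merge(target, updates))
```

    Running the same batch of updates twice yields the same result, which is the idempotency property incremental pipelines rely on for safe retries.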
    $76k-103k yearly est. 1d ago
  • Senior Data Engineer (USC AND GC ONLY)

    Wise Skulls

    Data scientist job in Richardson, TX

    Now Hiring: Senior Data Engineer (GCP / Big Data / ETL). Duration: 6 Months (Possible Extension).

    We're seeking an experienced Senior Data Engineer with deep expertise in data warehousing, ETL, big data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.

    Must-Have Skills (Non-Negotiable):
    - 9+ years in Data Engineering & Data Warehousing
    - 9+ years hands-on ETL experience (Informatica, DataStage, etc.)
    - 9+ years working with Teradata
    - 3+ years hands-on GCP and BigQuery
    - Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
    - Strong background in query optimization, data structures, metadata & workload management
    - Experience delivering microservices-based data solutions
    - Proficiency in big data & cloud architecture
    - 3+ years with SQL & NoSQL
    - 3+ years with Python or similar scripting languages
    - 3+ years with Docker, Kubernetes, CI/CD for data pipelines
    - Expertise in deploying & scaling apps in containerized environments (K8s)
    - Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
    - Familiarity with Agile/SDLC methodologies

    Key Responsibilities:
    - Build, enhance, and optimize modern data pipelines on GCP
    - Implement scalable ETL frameworks, data structures, and workflow dependency management
    - Architect and tune BigQuery datasets, queries, and storage layers
    - Collaborate with cross-functional teams to define data requirements and support business objectives
    - Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
    - Drive clarity in project goals, timelines, and deliverables during Agile planning sessions

    📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* or call us at *****************
    $76k-103k yearly est. 4d ago
  • Junior Data Engineer

    Intelliswift-An LTTS Company

    Data scientist job in Lewisville, TX

    Title: Junior Data Engineer. Duration: 6+ Months Contract. Looking for W2 contracts only.

    What you will be doing:
    -- Data engineering with big data technologies for customer experience
    -- Build and manage APIs
    -- Create ETL processes
    -- Design, build, and manage databases
    -- Create and build data pipelines
    -- Work with very large data volumes

    What you will need:
    -- 3-5 years of experience in data engineering
    -- Solid experience with Python
    -- Solid experience with SQL
    -- Experience building and managing APIs
    -- Experience designing/creating/building a database from the ground up
    -- Experience creating/building data pipelines
    -- Experience with ETL
    -- Experience with data warehouses

    A Plus:
    -- Experience with AWS, Snowflake
    $76k-103k yearly est. 1d ago
  • Data Modeler

    People Consultancy Services (PCS)

    Data scientist job in Plano, TX

    Plano, TX - nearby candidates only. W2 candidates.

    Must Have:
    - 5+ years of experience with data modeling, warehousing, analysis, and data profiling, with the ability to identify trends and anomalies in the data
    - Experience with AWS technologies like S3, AWS Glue, EMR, and IAM roles/permissions
    - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, SparkSQL, Scala)
    - Experience working with relational databases such as Teradata and handling both structured and unstructured datasets
    - Data modeling tools (any of Erwin, PowerDesigner, ER/Studio)

    Preferred / Ideal to Have:
    - Proficiency in Python
    - Experience with NoSQL, non-relational databases/data stores (e.g., object storage, document or key-value stores, graph databases, column-family databases)
    - Experience with Snowflake and Databricks
    $79k-108k yearly est. 2d ago
  • Data Modeler

    Tata Consultancy Services 4.3 company rating

    Data scientist job in Plano, TX

    Must Have Technical/Functional Skills • Strong knowledge of data modeling tools (e.g., ERwin, PowerDesigner, ER/Studio). • Proven experience as a Data Modeler or similar role. • Proficiency in SQL and database technologies (e.g., Oracle, SQL Server, MySQL, Teradata). • Understanding of data warehousing concepts and ETL processes. • Document data models and maintain metadata repositories. • Excellent analytical and problem-solving skills. • Strong communication and collaboration abilities. • Work with database administrators to implement and optimize data models in production environments. • Knowledge of data governance frameworks and compliance standards. Roles & Responsibilities • Design, develop, and maintain data models that support business processes and analytics. • Strong understanding of data architecture, database design, and data warehousing concepts, and will work closely with business analysts, data architects, and developers to ensure data integrity and usability • Design and develop conceptual, logical, and physical data models based on business requirements. • Collaborate with stakeholders to understand data needs and translate them into effective data models. • Analyze existing data structures and recommend improvements or optimizations. • Ensure data models align with enterprise data architecture standards and best practices. • Support data integration and ETL processes by providing clear data definitions and relationships. • Document data models and maintain metadata repositories. • Work with database administrators to implement and optimize data models in production environments. • Assist in data governance and data quality initiatives. • Stay current with industry trends and emerging data modeling tools and techniques. Generic Managerial Skills, If any • Excellent analytical and problem-solving skills. • Strong communication and collaboration abilities. 
Base Salary Range: $100,000 - $130,000 per annum. TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
    $100k-130k yearly 1d ago
  • Sr. Data Engineer (W2 only, onsite, must be local)

    CBTS 4.9 company rating

    Data scientist job in Irving, TX

    Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, Computer Engineering/Electrical Engineering, or any Engineering field or quantitative discipline such as Physics or Statistics. Minimum 6 years of relevant work experience in data engineering, with at least 2 years in data modeling. Strong technical foundation in Python and SQL, and experience with cloud platforms (for example, AWS, Azure). Deep understanding of data engineering fundamentals, including database architecture and design, extract, transform, and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies. Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI). Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment. Good communication, interpersonal, and presentation skills, with the ability to effectively communicate with both technical and non-technical audiences. ADDITIONAL SKILLS AND OTHER REQUIREMENTS: Nice-to-have skills include Agile experience, Dataiku, and Power BI.
    $68k-100k yearly est. 1d ago
  • GCP Data Engineer/Lead

    Dexian

    Data scientist job in Irving, TX

    Required Qualifications: 9+ years of hands-on data warehousing experience. 9+ years of hands-on ETL experience (e.g., Informatica/DataStage). 3+ years of hands-on BigQuery experience. 3+ years of hands-on GCP experience. 9+ years of hands-on Teradata experience. 9+ years working in a cross-functional environment. 3+ years of hands-on experience with Google Cloud Platform services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage. 3+ years of hands-on experience building modern data pipelines on the GCP platform. 3+ years of experience with query optimization, data structures, transformation, metadata, dependency, and workload management. 3+ years of experience with SQL and NoSQL. 3+ years of experience in data engineering with a focus on microservices-based data solutions. 3+ years of containerization (Docker, Kubernetes) and CI/CD for data pipelines. 3+ years of experience with Python (or a comparable scripting language). 3+ years of experience with big data and cloud architecture. 3+ years of experience with deployment/scaling of apps in containerized environments (Kubernetes). Excellent oral and written communication skills; ability to interact effectively with all levels within the organization. Working knowledge of Agile/SDLC methodology. Excellent analytical and problem-solving skills. Ability to interact and work effectively with technical and non-technical levels within the organization. Ability to drive clarity of purpose and goals during release and planning activities. Excellent organizational skills, including the ability to prioritize tasks efficiently with a high level of attention to detail. Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support. Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
    $76k-103k yearly est. 2d ago
  • Data Engineer

    Ledelsea

    Data scientist job in Irving, TX

    W2 Contract-to-Hire Role with Monthly Travel to the Dallas, Texas Area. We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems. Duties/Responsibilities: Develop and optimize complex SQL queries to support internal analytics and external client data requests. Generate custom data lists and extracts based on client specifications and business rules. Design, build, and maintain efficient ETL pipelines using Azure Data Factory. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions. Work with Salesforce data; familiarity with SOQL is preferred but not required. Support Power BI reporting through basic data modeling and integration. Assist in implementing MLOps practices for model deployment and monitoring. Use Python for data manipulation, automation, and integration tasks. Ensure data quality, consistency, and security across all workflows and systems. Required Skills/Abilities/Attributes: 5+ years of experience in data engineering or a related field. Strong proficiency in SQL, including query optimization and performance tuning. Experience with Azure Data Factory, including Git repositories and pipeline deployment. Ability to translate client requirements into accurate and timely data outputs. Working knowledge of Python for data-related tasks. Strong problem-solving skills and ability to work independently. Excellent communication and documentation skills. Preferred Skills/Experience: Previous knowledge of building pipelines for ML models. Extensive experience creating/managing stored procedures and functions in MS SQL Server. 2+ years of experience in cloud architecture (Azure, AWS, etc.). Experience with 'code management' systems (Azure DevOps). 2+ years of reporting design and management (Power BI preferred). Ability to influence others through the articulation of ideas, concepts, benefits, etc. Education and Experience: Bachelor's degree in a computer science field or applicable business experience. Minimum 3 years of experience in a Data Engineering role. Healthcare experience preferred. Physical Requirements: Prolonged periods sitting at a desk and working on a computer. Ability to lift 20 lbs.
    $76k-103k yearly est. 3d ago
  • Data Engineer (Python, PySpark, Databricks)

    Anblicks 4.5 company rating

    Data scientist job in Dallas, TX

    Job Title: Data Engineer (Python, PySpark, Databricks). We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency. Data Engineer - SQL, Python, and PySpark Expert (Onsite - Dallas, TX). Key Responsibilities: Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments. Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments). Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.). Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics. Ensure data quality, consistency, security, and lineage across all stages of data processing. Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery). Document data flows, logic, and transformation rules. Troubleshoot performance and quality issues in batch and real-time pipelines. Support compliance-related reporting (e.g., HMDA, CFPB). Required Qualifications: 6+ years of experience in data engineering or data development. Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.). Strong hands-on skills in Python for scripting, data wrangling, and automation. Proficiency in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data. Experience working with mortgage banking data sets and domain knowledge is highly preferred. Strong understanding of data modeling (dimensional, normalized, star schema). Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc). Familiarity with ETL tools and orchestration frameworks (e.g., Airflow, ADF, dbt).
    $75k-102k yearly est. 5d ago
  • Data Engineer

    Beaconfire Inc.

    Data scientist job in Dallas, TX

    Junior Data Engineer DESCRIPTION: BeaconFire is based in Central NJ, specializing in Software Development, Web Development, and Business Intelligence; we are looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems. Qualifications: Passion for data and a deep desire to learn. Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline. Intermediate Python; experience with data-processing libraries (NumPy, pandas, etc.) is a plus. Experience with relational databases (SQL Server, Oracle, MySQL, etc.). Strong written and verbal communication skills. Ability to work both independently and as part of a team. Responsibilities: Collaborate with the analytics team to find reliable data solutions to meet business needs. Design and implement scalable ETL or ELT processes to support the business demand for data. Perform data extraction, manipulation, and production from database tables. Build utilities, user-defined functions, and frameworks to better enable data flow patterns. Build and incorporate automated unit tests; participate in integration testing efforts. Work with teams to resolve operational and performance issues. Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to. Compensation: $65,000.00 to $80,000.00/year. BeaconFire is an E-Verified company. Work visa sponsorship is available.
    $65k-80k yearly 4d ago
  • Snowflake Data Engineering with AWS, Python and PySpark

    Infovision Inc. 4.4 company rating

    Data scientist job in Frisco, TX

    Job Title: Snowflake Data Engineering with AWS, Python, and PySpark. Duration: 12 months. Required Skills & Experience: 10+ years of experience in data engineering and data integration roles. Expertise working with the Snowflake ecosystem integrated with AWS services and PySpark. 8+ years of core data engineering skills: hands-on experience with the Snowflake ecosystem and AWS, core SQL, and Python programming. 5+ years of hands-on experience building new data pipeline frameworks with AWS, Snowflake, and Python, with the ability to explore new ingestion frameworks. Hands-on with Snowflake architecture: virtual warehouses, storage and caching, Snowpipe, Streams, Tasks, and Stages. Experience with cloud platforms (AWS, Azure, or GCP) and integration with Snowflake. Snowflake SQL and stored procedures (JavaScript- or Python-based). Proficient in Python for data ingestion, transformation, and automation. Solid understanding of data warehousing concepts (ETL, ELT, data modeling, star/snowflake schema). Hands-on with orchestration tools (Airflow, dbt, Azure Data Factory, or similar). Proficiency in SQL and performance tuning. Familiar with Git-based version control, CI/CD pipelines, and DevOps best practices. Strong communication skills and ability to collaborate in agile teams.
    $76k-99k yearly est. 5d ago
  • Data Engineer

    Search Services 3.5 company rating

    Data scientist job in Fort Worth, TX

    ABOUT OUR CLIENT Our Client is a privately held, well-capitalized energy company based in Fort Worth, Texas with a strong track record of success across upstream, midstream, and mineral operations throughout the United States. The leadership team is composed of highly experienced professionals who have worked together across multiple ventures and basins. They are committed to fostering a collaborative, high-integrity culture that values intellectual curiosity, accountability, and continuous improvement. ABOUT THE ROLE Our Client is seeking a skilled and motivated Data Engineer to join their growing technology team. This role plays a key part in managing and optimizing data systems, designing and maintaining ETL processes, and improving data workflows across departments. The successful candidate will have deep technical expertise, a strong background in database architecture and data integration, and the ability to collaborate cross-functionally to enhance data management and accessibility. Candidates with extensive experience may be considered for a Senior Data Engineer title. RESPONSIBILITIES Design, implement, and evolve database architecture and schemas to support scalable and efficient data storage and retrieval. Build, manage, and maintain end-to-end data pipelines, including automation of ingestion and transformation processes. Monitor, troubleshoot, and optimize data pipeline performance to ensure data quality and reliability. Document all aspects of the data pipeline architecture, including data sources, transformations, and job scheduling. Optimize database performance by managing indexing, queries, stored procedures, and views. Develop frameworks and tools for reusable ETL processes and efficient data handling across formats such as CSV, JSON, and Parquet. Ensure proper version control and adherence to coding standards, security protocols, and performance best practices. 
Collaborate with cross-functional teams including engineering, operations, land, finance, and accounting to streamline data workflows. QUALIFICATIONS Excellent verbal and written communication skills. Strong organizational, analytical, and problem-solving abilities. Proficient in Microsoft Office Suite and other related software. Experienced in programming languages such as R, Python, and SQL. Proficient in making and optimizing API calls for data integration. Strong experience with cloud platforms such as Azure Data Lake, Azure Data Studio, Azure Databricks, and/or Snowflake. Proficient in CI/CD principles and tools. High integrity, humility, and a strong sense of accountability and teamwork. A self-starter with a continuous improvement mindset and passion for evolving technologies. REQUIRED EDUCATION AND EXPERIENCE Bachelor's degree in computer science, software engineering, or a related field. 2+ years of experience in data engineering, database management, or software engineering. Master's degree or additional certification a plus, but not required. Exposure to geospatial or GIS data is a plus. PHYSICAL REQUIREMENTS Prolonged periods of sitting and working at a computer. Ability to lift up to 15 pounds occasionally. *********************************************************************************** NO AGENCY OR C2C CANDIDATES WILL BE CONSIDERED VISA SPONSORSHIP IS NOT OFFERED NOR AVAILABLE FOR H1-B NOR F1 OPT ***********************************************************************************
    $84k-117k yearly est. 2d ago
  • Lead Data Engineer

    Capital One 4.7 company rating

    Data scientist job in Plano, TX

    Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Lead Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One. What You'll Do: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems Utilize programming languages like Java, Scala, Python and Open Source RDBMS and NoSQL databases and Cloud based data warehousing services such as Redshift and Snowflake Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance Basic Qualifications: Bachelor's Degree At least 4 years of experience in application development (Internship experience does not apply) At least 2 years of experience in big data technologies At least 1 year experience with cloud computing (AWS, Microsoft Azure, Google Cloud) Preferred Qualifications: 7+ years of experience in application development including Python, SQL, Scala, or Java 4+ 
years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 4+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 4+ years of experience working on real-time data and streaming applications 4+ years of experience with NoSQL implementation (Mongo, Cassandra) 4+ years of data warehousing experience (Redshift or Snowflake) 4+ years of experience with UNIX/Linux including basic commands and shell scripting 2+ years of experience with Agile engineering practices At this time, Capital One will not sponsor a new applicant for employment authorization, or offer any immigration related support for this position (i.e. H1B, F-1 OPT, F-1 STEM OPT, F-1 CPT, J-1, TN, or another type of work authorization). The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed upon number of hours to be regularly worked. McLean, VA: $193,400 - $220,700 for Lead Data Engineer Plano, TX: $175,800 - $200,700 for Lead Data Engineer Richmond, VA: $175,800 - $200,700 for Lead Data Engineer Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter. This role is also eligible to earn performance based incentive compensation, which may include cash bonus(es) and/or long term incentives (LTI). Incentives could be discretionary or non discretionary depending on the plan. Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being.
Learn more at the Capital One Careers website. Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level. This role is expected to accept applications for a minimum of 5 business days. No agencies, please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to . Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
    $77k-99k yearly est. 18h ago

Learn more about data scientist jobs

How much does a data scientist earn in Lewisville, TX?

The average data scientist in Lewisville, TX earns between $59,000 and $112,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Lewisville, TX

$81,000

What are the biggest employers of Data Scientists in Lewisville, TX?

The biggest employers of Data Scientists in Lewisville, TX are:
  1. Paycom
  2. Gap International