Senior Data Governance Consultant (Informatica)
Data engineer job in Plano, TX
About Paradigm - Intelligence Amplified
Paradigm is a strategic consulting firm that turns vision into tangible results. For over 30 years, we've helped Fortune 500 and high-growth organizations accelerate business outcomes across data, cloud, and AI. From strategy through execution, we empower clients to make smarter decisions, move faster, and maximize return on their technology investments. What sets us apart isn't just what we do; it's how we do it. Driven by a clear mission and values rooted in integrity, excellence, and collaboration, we deliver work that creates lasting impact. At Paradigm, your ideas are heard, your growth is prioritized, and your contributions make a difference.
Summary:
We are seeking a Senior Data Governance Consultant to lead and enhance data governance capabilities across a financial services organization
The Senior Data Governance Consultant will collaborate closely with business, risk, compliance, technology, and data management teams to define data standards, strengthen data controls, and drive a culture of data accountability and stewardship
The ideal candidate will have deep experience in developing and implementing data governance frameworks, data policies, and control mechanisms that ensure compliance, consistency, and trust in enterprise data assets
Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred
This position is Remote, with occasional travel to Plano, TX
Responsibilities:
Data Governance Frameworks:
Design, implement, and enhance data governance frameworks aligned with regulatory expectations (e.g., BCBS 239, GDPR, CCPA, DORA) and internal control standards
Policy & Standards Development:
Develop, maintain, and operationalize data policies, standards, and procedures that govern data quality, metadata management, data lineage, and data ownership
Control Design & Implementation:
Define and embed data control frameworks across data lifecycle processes to ensure data integrity, accuracy, completeness, and timeliness
Risk & Compliance Alignment:
Work with risk and compliance teams to identify data-related risks and ensure appropriate mitigation and monitoring controls are in place
Stakeholder Engagement:
Partner with data owners, stewards, and business leaders to promote governance practices and drive adoption of governance tools and processes
Data Quality Management:
Define and monitor data quality metrics and KPIs, establishing escalation and remediation procedures for data quality issues (see the sketch after this list)
Metadata & Lineage:
Support metadata and data lineage initiatives to increase transparency and enable traceability across systems and processes
Reporting & Governance Committees:
Prepare materials and reporting for data governance forums, risk committees, and senior management updates
Change Management & Training:
Develop communication and training materials to embed governance culture and ensure consistent understanding across the organization
Required Qualifications:
7+ years of experience in data governance, data management, or data risk roles within financial services (banking, insurance, or asset management preferred)
Strong knowledge of data policy development, data standards, and control frameworks
Proven experience aligning data governance initiatives with regulatory and compliance requirements
Familiarity with Informatica data governance and metadata tools
Excellent communication skills with the ability to influence senior stakeholders and translate technical concepts into business language
Deep understanding of data management principles (DAMA-DMBOK, DCAM, or equivalent frameworks)
Bachelor's or Master's Degree in Information Management, Data Science, Computer Science, Business, or related field
Preferred Qualifications:
Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC)
Experience with data risk management or data control testing
Knowledge of financial regulatory frameworks (e.g., Basel, MiFID II, Solvency II, BCBS 239)
Certifications such as Informatica, CDMP, or DCAM
Background in consulting or large-scale data transformation programs
Key Competencies:
Strategic and analytical thinking
Strong governance and control mindset
Excellent stakeholder and relationship management
Ability to drive organizational change and embed governance culture
Attention to detail with a pragmatic approach
Why Join Paradigm
At Paradigm, integrity drives innovation. You'll collaborate with curious, dedicated teammates, solving complex problems and unlocking immense data value for leading organizations. If you seek a place where your voice is heard, growth is supported, and your work creates lasting business value, you belong at Paradigm.
Learn more at ********************
Policy Disclosure:
Paradigm maintains a strict drug-free workplace policy. All offers of employment are contingent upon successfully passing a standard 5-panel drug screen. Please note that a positive test result for any prohibited substance, including marijuana, will result in disqualification from employment, regardless of state laws permitting its use. This policy applies consistently across all positions and locations.
Senior Data Retention & Protection Consultant: Disaster Recovery
Data engineer job in Dallas, TX
Technology Recovery Services provides subject matter expertise and direction on complex IT disaster recovery projects and initiatives, and supports IT disaster recovery technical planning, coordination, and service maturity, working across IT, business resilience, risk management, and regulatory and compliance functions.
Summary of Essential Functions:
Govern disaster recovery plans and procedures for critical business applications and infrastructure.
Create, update, and publish disaster recovery related policies, procedures, and guidelines.
Ensure annual updates and validations of DR policies and procedures to maintain readiness and resilience.
Maintain up-to-date knowledge of disaster recovery and business continuity best practices.
Perform regular disaster recovery testing, including simulation exercises, incident response simulations, tabletop exercises, and actual failover drills to validate procedures and identify improvements.
Train staff and educate employees on disaster recovery processes, their roles during incidents, and adherence to disaster recovery policies.
Coordinate the technology response to natural disasters and aircraft accidents.
Qualifications:
Strong knowledge of Air vault and ransomware recovery technologies
Proven ability to build, cultivate, and promote strong relationships with internal customers at all levels of the organization, as well as with Technology counterparts, business partners, and external groups
Proficiency in handling operational issues effectively and understanding escalation, communication, and crisis management
Demonstrated call control and situation management skills in fast-paced, highly dynamic situations
Knowledge of basic IT and Airline Ecosystems
Understanding of SLAs, engagement processes, and the urgency needed to engage teams during critical situations
Ability to understand and explain interconnected application functionality in a complex environment and share knowledge with peers
A customer-centric attitude and a focus on providing best-in-class service for customers and stakeholders
Ability to execute with a high level of operational urgency, maintain calm, and work closely with a team and stakeholders during critical situations while applying project management skills
Ability to present to C-level executives with outstanding communication skills
Ability to lead a large group of up to 200 people, including support, development, leaders, and executives, on a single call
Ability to triage effectively: detect and distinguish symptom from cause, and capture key data from various sources, systems, and people
Knowledge of business strategies and priorities
Excellent communication and stakeholder engagement skills.
Required:
3+ years of similar or related experience in fields such as Disaster Recovery, Business Continuity, and Enterprise Operational Resilience.
Working knowledge of Disaster Recovery professional practices, including Business Impact Analysis, disaster recovery plans (DRPs), redundancy and failover mechanisms, DR-related regulatory requirements, and Business Continuity Plan exercises and audits.
Ability to motivate, influence, and train others.
Strong analytical and problem-solving skills using data analysis tools, including Alteryx and Tableau.
Ability to communicate technical and operational issues clearly to both technical and non-technical audiences.
Sr. Data Engineer
Data engineer job in Dallas, TX
Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
Work with leadership to prioritize business and information needs
Engage with product and app development teams to gather requirements, and create technical requirements
Utilize and implement data engineering best practices and coding strategies
Be responsible for data ingress into storage
What you'll need:
Bachelor's Degree in Computer Science, Information Management, or related field required; Master's preferred
8+ years in data engineering including prior experience in data transformation
Databricks experience building data pipelines using the medallion architecture, bronze to gold (see the sketch after this list)
Advanced skills in Spark and structured streaming, SQL, Python
Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
Experience with data conversion, interface and report development
Experience working with IoT and/or geospatial data in a cloud environment (Azure)
Adept at queries, report writing and presenting findings
Prior coding experience utilizing repositories and multiple coding environments
Must possess effective communication skills, both verbal and written
Strong organizational, time management and multi-tasking skills
Process improvement and automation a plus
Nice to have:
Databricks Data Engineering Associate or Professional Certification (2023 or later)
Data Scientist (F2F Interview)
Data engineer job in Dallas, TX
W2 Contract
Dallas, TX (Onsite)
We are seeking an experienced Data Scientist to join our team in Dallas, Texas. The ideal candidate will have a strong foundation in machine learning, data modeling, and statistical analysis, with the ability to transform complex datasets into clear, actionable insights that drive business impact.
Key Responsibilities
Develop, implement, and optimize machine learning models to support business objectives.
Perform exploratory data analysis, feature engineering, and predictive modeling.
Translate analytical findings into meaningful recommendations for technical and non-technical stakeholders.
Collaborate with cross-functional teams to identify data-driven opportunities and improve decision-making.
Build scalable data pipelines and maintain robust analytical workflows.
Communicate insights through reports, dashboards, and data visualizations.
Qualifications
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
Proven experience working with machine learning algorithms and statistical modeling techniques.
Proficiency in Python or R, along with hands-on experience using libraries such as Pandas, NumPy, Scikit-learn, or TensorFlow.
Strong SQL skills and familiarity with relational or NoSQL databases.
Experience with data visualization tools (e.g., Tableau, Power BI, matplotlib).
Excellent problem-solving, communication, and collaboration skills.
Data Modeler
Data engineer job in Plano, TX
Plano, TX - nearby candidates only
W2 Candidates
Must Have:
5+ years of data modeling, warehousing, analysis, and data profiling experience, with the ability to identify trends and anomalies in the data
Experience with AWS technologies like S3, AWS Glue, EMR, and IAM roles/permissions
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, SparkSQL, Scala)
Experience working with relational databases such as Teradata and handling both structured and unstructured datasets
Data Modeling tools (Any of - Erwin, Power Designer, ER Studio)
Preferred / Ideal to have:
Proficiency in Python
Experience with NoSQL, non-relational databases / data stores (e.g., object storage, document or key-value stores, graph databases, column-family databases)
Experience with Snowflake and Databricks
Data Engineer
Data engineer job in Irving, TX
W2 Contract to Hire Role with Monthly Travel to the Dallas Texas area
We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems.
Duties/Responsibilities:
Develop and optimize complex SQL queries to support internal analytics and external client data requests.
Generate custom data lists and extracts based on client specifications and business rules.
Design, build, and maintain efficient ETL pipelines using Azure Data Factory.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
Work with Salesforce data; familiarity with SOQL is preferred but not required.
Support Power BI reporting through basic data modeling and integration.
Assist in implementing MLOps practices for model deployment and monitoring.
Use Python for data manipulation, automation, and integration tasks.
Ensure data quality, consistency, and security across all workflows and systems.
Required Skills/Abilities/Attributes:
5+ years of experience in data engineering or a related field.
Strong proficiency in SQL, including query optimization and performance tuning.
Experience with Azure Data Factory, including Git repositories and pipeline deployment.
Ability to translate client requirements into accurate and timely data outputs.
Working knowledge of Python for data-related tasks.
Strong problem-solving skills and ability to work independently.
Excellent communication and documentation skills.
Preferred Skills/Experience:
Previous knowledge of building pipelines for ML models.
Extensive experience creating/managing stored procedures and functions in MS SQL Server
2+ years of experience in cloud architecture (Azure, AWS, etc.)
Experience with 'code management' systems (Azure DevOps)
2+ years of reporting design and management (Power BI preferred)
Ability to influence others through the articulation of ideas, concepts, benefits, etc.
Education and Experience:
Bachelor's degree in a computer science field or applicable business experience.
Minimum 3 years of experience in a Data Engineering role
Healthcare experience preferred.
Physical Requirements:
Prolonged periods sitting at a desk and working on a computer.
Ability to lift 20 lbs.
Data Engineer
Data engineer job in Coppell, TX
IDR is seeking a Data Engineer to join one of our top clients for an opportunity in Coppell, TX. This role involves designing, building, and maintaining enterprise-grade data architectures, with a focus on cloud-based data engineering, analytics, and machine learning applications. The company operates within the technology and data services industry, providing innovative solutions to large-scale clients.
Position Overview for the Data Engineer:
Develop and maintain scalable data pipelines utilizing Databricks and Azure environments
Design data models and optimize ETL/ELT processes for large datasets
Collaborate with cross-functional teams to implement data solutions supporting analytics, BI, and ML projects
Ensure data quality, availability, and performance across enterprise systems
Automate workflows and implement CI/CD pipelines to improve data deployment processes
Requirements for the Data Engineer:
8-10 years of experience on modern data platforms with a strong background in cloud-based data engineering
Strong expertise in Databricks (PySpark/Scala, Delta Lake, Unity Catalog)
Hands-on experience with Azure (AWS/GCP also acceptable if very strong in Databricks)
Advanced SQL skills and strong experience with data modeling, ETL/ELT development and data orchestration
Experience with CI/CD (Azure DevOps, GitHub Actions, Terraform, etc.)
What's in it for you?
Competitive compensation package
Full Benefits; Medical, Vision, Dental, and more!
Opportunity to get in with an industry-leading organization.
Why IDR?
25+ Years of Proven Industry Experience in 4 major markets
Employee Stock Ownership Program
Dedicated Engagement Manager who is committed to you and your success.
Medical, Dental, Vision, and Life Insurance
ClearlyRated's Best of Staffing Client and Talent Award winner 12 years in a row.
GCP Data Engineer
Data engineer job in Fort Worth, TX
Job Title: GCP Data Engineer
Employment Type: W2/CTH
Client: Direct
We are seeking a highly skilled Data Engineer with strong expertise in Python, SQL, and Google Cloud Platform (GCP) services. The ideal candidate will have 6-8 years of hands-on experience in building and maintaining scalable data pipelines, working with APIs, and leveraging GCP tools such as BigQuery, Cloud Composer, and Dataflow.
Core Responsibilities:
• Design, build, and maintain scalable data pipelines to support analytics and business operations.
• Develop and optimize ETL processes for structured and unstructured data.
• Work with BigQuery, Cloud Composer, and other GCP services to manage data workflows (see the sketch after this list).
• Collaborate with data analysts and business teams to ensure data availability and quality.
• Integrate data from multiple sources using APIs and custom scripts.
• Monitor and troubleshoot pipeline performance and reliability.
Technical Skills:
• Strong proficiency in Python and SQL.
• Experience with data pipeline development and ETL frameworks.
GCP Expertise:
• Hands-on experience with BigQuery, Cloud Composer, and Dataflow.
Additional Requirements:
• Familiarity with workflow orchestration tools and cloud-based data architecture.
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.
Data Engineer
Data engineer job in Dallas, TX
We are seeking a highly skilled Data Engineer with 5+ years of hands-on experience to design, build, and optimize scalable data pipelines and modern data platforms. The ideal candidate will have strong expertise in cloud data engineering, ETL/ELT development, real-time streaming, and data modeling, with a solid understanding of distributed systems and best engineering practices.
Design, develop, and maintain scalable ETL/ELT pipelines for ingestion, transformation, and processing of structured and unstructured data.
Build real-time and batch data pipelines using tools such as Kafka, Spark, AWS Glue, Kinesis, or similar technologies (see the sketch after this list).
Develop and optimize data models, warehouse layers, and high-performance data architectures.
Implement data quality checks, data validation frameworks, and ensure data reliability and consistency across systems.
Collaborate with Data Analysts, Data Scientists, and cross-functional teams to deliver efficient and accessible data solutions.
Deploy and manage data infrastructure using AWS / Azure / GCP cloud services.
Write clean, efficient, and reusable code in Python/Scala/SQL.
Monitor pipeline performance, troubleshoot issues, and drive continuous improvement.
Implement CI/CD pipelines, version control, and automation for production workloads.
Ensure data governance, security, and compliance in all engineering workflows.
Required Skills & Qualifications
5+ years of experience as a Data Engineer in a production environment.
Strong proficiency in Python, SQL, and distributed processing frameworks (Spark, PySpark, Hadoop).
Hands-on experience with cloud platforms: AWS, Azure, or GCP.
Experience with streaming technologies: Kafka, Kinesis, Spark Streaming, Flink (any).
Strong understanding of data warehousing concepts (Star/Snowflake schemas, dimensional modeling).
Experience with ETL/ELT tools (Glue, Airflow, DBT, Informatica, etc.).
Solid understanding of DevOps practices: Git, CI/CD, Terraform/CloudFormation (bonus).
Experience working with relational and NoSQL databases (Redshift, Snowflake, BigQuery, DynamoDB, etc.).
Excellent problem-solving, communication, and analytical skills.
Senior Data Engineer (USC AND GC ONLY)
Data engineer job in Richardson, TX
Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
Duration: 6 Months (Possible Extension)
We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.
Must-Have Skills (Non-Negotiable)
9+ years in Data Engineering & Data Warehousing
9+ years hands-on ETL experience (Informatica, DataStage, etc.)
9+ years working with Teradata
3+ years hands-on GCP and BigQuery
Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
Strong background in query optimization, data structures, metadata & workload management
Experience delivering microservices-based data solutions
Proficiency in Big Data & cloud architecture
3+ years with SQL & NoSQL
3+ years with Python or similar scripting languages
3+ years with Docker, Kubernetes, CI/CD for data pipelines
Expertise in deploying & scaling apps in containerized environments (K8s)
Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
Familiarity with AGILE/SDLC methodologies
Key Responsibilities
Build, enhance, and optimize modern data pipelines on GCP
Implement scalable ETL frameworks, data structures, and workflow dependency management
Architect and tune BigQuery datasets, queries, and storage layers
Collaborate with cross-functional teams to define data requirements and support business objectives
Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
Drive clarity in project goals, timelines, and deliverables during Agile planning sessions
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ******************* or call us at *****************
Data Engineer (Python, PySpark, Databricks)
Data engineer job in Dallas, TX
Job Title: Data Engineer (Python, PySpark, Databricks)
We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
Data Engineer - SQL, Python, and PySpark Expert (Onsite - Dallas, TX)
Key Responsibilities
Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments (see the sketch after this list)
Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
Ensure data quality, consistency, security, and lineage across all stages of data processing
Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
Document data flows, logic, and transformation rules
Troubleshoot performance and quality issues in batch and real-time pipelines
Support compliance-related reporting (e.g., HMDA, CFPB)
Required Qualifications
6+ years of experience in data engineering or data development
Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
Strong hands-on skills in Python for scripting, data wrangling, and automation
Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
Experience working with mortgage banking data sets and domain knowledge is highly preferred
Strong understanding of data modeling (dimensional, normalized, star schema)
Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
Familiarity with ETL tools, orchestration frameworks (e.g., Airflow, ADF, dbt)
Data Engineer
Data engineer job in Dallas, TX
Junior Data Engineer
DESCRIPTION: BeaconFire, based in Central NJ, specializes in Software Development, Web Development, and Business Intelligence; we are looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
Qualifications:
Passion for data and a deep desire to learn.
Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or related discipline.
Intermediate Python; experience with data processing libraries (NumPy, Pandas, etc.) is a plus.
Experience with relational databases (SQL Server, Oracle, MySQL, etc.)
Strong written and verbal communication skills.
Ability to work both independently and as part of a team.
Responsibilities:
Collaborate with the analytics team to find reliable data solutions to meet the business needs.
Design and implement scalable ETL or ELT processes to support the business demand for data (see the sketch after this list).
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Build and incorporate automated unit tests, participate in integration testing efforts.
Work with teams to resolve operational & performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an E-Verify company. Work visa sponsorship is available.
Azure Data Engineer
Data engineer job in Irving, TX
Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.
Duties:
Lead the design, architecture, and implementation of key data initiatives and platform capabilities
Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying solutions and guiding teams to implement them
Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders
Desired Skills/Experience:
Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
5+ years of relevant work experience in data engineering
Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Azure Data Engineer Sr
Data engineer job in Irving, TX
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, Extract, transform and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
Data Architect
Data engineer job in Plano, TX
KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.
Title: Senior Data Architect
Location: Plano, TX (Hybrid)
Job Type: Contract - 6 Months
Key Skills: SQL, PySpark, Databricks, and Azure Cloud
Key Note: Looking for a Data Architect who is Hands-on with SQL, PySpark, Databricks, and Azure Cloud.
About the Role:
We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging and multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure Native Services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.
Key Responsibilities:
Data Engineering: Design, development, and implementation of data pipelines and solutions using PySpark, SQL, and related technologies.
Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.
Must-Have Skills & Qualifications:
Minimum of 12 years of overall experience in the IT industry.
4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
4+ years of hands-on experience developing and implementing data pipelines using the Azure stack (Azure, ADF, Databricks, Functions)
Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
Strong knowledge of ETL processes and data warehousing fundamentals.
Self-motivated and independent, with a “let's get this done” mindset and the ability to thrive in a fast-paced and dynamic environment.
Good-to-Have Skills:
Databricks Certification is a plus.
Data Modeling, Azure Architect Certification.
GCP Data Engineer
Data engineer job in Richardson, TX
Infosys is seeking a Google Cloud (GCP) data engineer with experience in GitHub and Python. In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. You will be responsible for interfacing with key stakeholders and will apply your technical proficiency across different stages of the Software Development Life Cycle. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Required Qualifications:
Candidate must be located within commuting distance of Richardson, TX or be willing to relocate to the area. This position may require travel in the US
Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time
At least 4 years of Information Technology experience.
Experience working with GCP data engineering technologies such as Dataflow/Airflow, Pub/Sub/Kafka, Dataproc/Hadoop, and BigQuery.
ETL development experience with a strong SQL background, plus tools and languages such as Python/R, Scala, Java, Hive, Spark, and Kafka
Strong knowledge on Python Program development to build reusable frameworks, enhance existing frameworks.
Application build experience with core GCP services like Dataproc, GKE, and Composer.
Deep understanding of GCP IAM and GitHub.
Must have performed IAM setup.
Knowledge of CI/CD pipelines using Terraform and Git.
Preferred Qualifications:
Good knowledge of Google BigQuery, using advanced SQL programming techniques to build BigQuery datasets in the ingestion and transformation layers.
Experience in Relational Modeling, Dimensional Modeling and Modeling of Unstructured Data
Knowledge of Airflow DAG creation, execution, and monitoring.
Good understanding of Agile software development frameworks
Ability to work in teams in a diverse, multi-stakeholder environment comprising Business and Technology teams.
Experience and desire to work in a global delivery environment.
Azure Data Architect
Data engineer job in Dallas, TX
About Us:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************
Job Title: Data Architect
Work Location
Dallas, Texas
Job Description:
The ideal candidate will have a good understanding of big data technologies, data engineering, and cloud computing DWH projects, with a focus on Azure Databricks
Work closely with business stakeholders and other IT teams to understand requirements and define the scope of engagements with reasonable timelines
Ensure proper documentation of architecture processes while ensuring compliance with security and governance standards
Ensure best practices are followed by the team in terms of code quality, data security, and scalability
Stay updated with the latest developments in Databricks and associated technologies to drive innovation
12 years of experience, including 5 years of data analytics project experience
Experience with Azure Databricks notebook development and Delta Lake
Good understanding of Azure services like Azure Data Lake, Azure Synapse, Azure Data Factory, and Fabric
Experience with ETL/ELT processes, data warehousing, and building data lakes
SQL skills and familiarity with NoSQL databases
Experience with CI/CD pipelines and version control systems like Git
Soft Skills
Excellent communication skills with the ability to explain complex technical concepts to nontechnical stakeholders
Strong problem-solving skills and a proactive approach to identifying and resolving issues
Leadership skills with the ability to manage and mentor a team of data engineers
Power BI for dashboarding and reporting
Microsoft Fabric for analytics and integration tasks
Spark Streaming for processing real-time data streams
Over 12 years of IT experience, including 4 years specializing in developing data ingestion and transformation pipelines using Databricks, Synapse notebooks, and Azure Data Factory
Good understanding of different domains and industries with respect to data analytics and DWH projects
Should be proficient in Excel and PowerPoint
Good understanding of and experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2
Experience in building and optimizing query layers using Databricks SQL
Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions
Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree (“LTIM”):
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation like an annual performance-based bonus, sales incentive pay and other forms of bonus or variable compensation.
Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Senior Data Architect
Data engineer job in Dallas, TX
Akkodis is seeking a Senior Data Architect for a Contract with a client located in Dallas, TX (Hybrid).
Pay Range: $80/hr - $90/hr; the rate may be negotiable based on experience, education, geographic location, and other factors.
Must have Oracle Exadata, ETL, Informatica, GoldenGate, and MDM.
Job Description:
Primary responsibilities of the Senior Data Architect include designing and managing Data Architectural solutions for multiple environments, including but not limited to Data Warehouse, ODS, Data Replication/ETL Data Management initiatives. The candidate will be in an expert role and will work closely with Business, DBA, ETL, and Data Management teams, providing solutions for complex data-related initiatives. This individual will also be responsible for developing and managing Data Governance and Master Data Management solutions. This candidate must have good technical and communication skills coupled with the ability to mentor effectively.
Responsibilities
Establishing policies, procedures, and guidelines regarding all aspects of Data Governance
Ensure data decisions are consistent and best practices are adhered to
Ensure Data Standardization definitions, Data Dictionary, and Data Lineage are kept up to date and accessible
Work with ETL, Replication, and DBA teams to determine best practices as it relates to data transformations, data movement, and derivations
Work with support teams to ensure consistent and proactive support methodologies are in place for all aspects of data movements and data transformations
Work with and mentor Data Architects and Data Analysts to ensure best practices are adhered to for database design and data management
Assist in overall Architectural solutions, including, but not limited to, Data Warehouse, ODS, Data Replication/ETL Data Management initiatives
Work with the business teams and Enterprise Architecture team to ensure the best architectural solutions from a Data perspective
Create a strategic roadmap for MDM implementation
Responsible for implementing a Master Data Management tool
Establishing policies, procedures, and guidelines regarding all aspects of Master Data Management
Ensure Architectural rules and design of the MDM process are documented, and best practices are adhered to
Qualifications
5+ years of Data Architecture experience, including OLTP, Data Warehouse, and Big Data
5+ years of Solution Architecture experience
5+ years of MDM experience
5+ years of Data Governance experience, working knowledge of best practices
Extensive working knowledge of all aspects of Data Movement and Processing, including Middleware, ETL, API, OLAP, and best practices for data tracking
Good Communication skills
Self-Motivated
Capable of presenting to all levels of audiences
Works well in a team environment
If you are interested in this role, then please click APPLY NOW. For other opportunities available at Akkodis, or any questions, please contact Anirudh Srivastava at ************ or ***********************************.
Equal Opportunity Employer/Veterans/Disabled
Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits, and a 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State, or local law; and Holiday pay upon meeting eligibility criteria.
Disclaimer:
These benefit offerings do not apply to client-recruited jobs and jobs that are direct hires to a client.
To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit ******************************************
Lead GCP Data Engineer/Architect
Data engineer job in Richardson, TX
We are seeking a highly experienced Lead GCP Data Engineer to design, build, and optimize scalable data engineering solutions on Google Cloud Platform. The ideal candidate will take ownership of building robust data pipelines, ensuring best practices, and leading engineering teams to deliver high-quality data solutions for analytics, reporting, and business operations.
Key Responsibilities
Lead the design, development, and deployment of data pipelines and data integration workflows on GCP.
Build and optimize data ingestion, transformation, and storage using tools such as Dataflow, Dataproc, Pub/Sub, Composer, BigQuery, Cloud Storage, and Cloud Functions.
Collaborate with data architects, analysts, and business teams to translate requirements into technical solutions.
Develop and maintain ETL/ELT frameworks, ensuring scalability, performance, and reliability.
Implement and enforce best practices around data quality, data validation, metadata management, and documentation.
Conduct performance tuning for BigQuery, Dataflow, Spark jobs, and data pipelines.
Drive cost optimization strategies for GCP data workloads.
Ensure compliance with data security, governance, and access control policies.
Provide technical leadership, mentoring, and code reviews for the data engineering team.
Contribute to architecture discussions and technology strategy for cloud data platforms.
Lead Data Scientist
Data engineer job in Plano, TX
Title: Lead Data Scientist
Duration: 12 month contract + extensions
Required Skills & Experience
7+ years of experience as a Data Scientist
Strong coding skills in Python (pandas, numpy, scikit-learn) and SQL.
Experience with regression, elasticity estimation, clustering, and experimental validation.
Comfort working with optimization outputs and constraints.
Ability to wrangle data from cloud-based sources and iterate quickly with stakeholders.
Job Description
A retail client is hiring a Lead Data Scientist to join their Profit to Serve (PTS) initiative, a priority program driving scenario modeling and last-mile routing optimization to improve profitability and service. PTS is a foundational enabler in our AI strategy, connecting precision costing with automated insights and routing simulations. You'll work on-site in Plano, TX, partnering directly with transformation leaders to shape a high-impact initiative at the intersection of AI, data science, and operational excellence. If you love fast iteration, hands-on coding, and solving real-world business problems, this is your opportunity to make an impact at scale.
What You'll Do
• Build and iterate models for cost impact estimation, price elasticity, and routing optimization (see the elasticity sketch at the end of this posting).
• Design clustering approaches for route/service segmentation and validate optimization assumptions.
• Translate ambiguous business questions into testable models and ship prototypes weekly.
• Collaborate closely with business partners and data science leads to deliver reusable solutions.