SailPoint Engineer
Requirements engineer job in Roanoke, TX
Immediate need for a talented SailPoint Engineer. This is a 12+ month contract opportunity with long-term potential and is located in Westlake, TX (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-95045
Pay Range: $65 - $70/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Must-have skills: SailPoint IdentityIQ platform - Lifecycle Manager, Certifications, Roles, Joiner/Mover/Leaver events
Active Directory, JDBC, SCIM 2.0, Azure Active Directory Programming
Java, BeanShell/JavaScript, Angular, SQL.
XML, JSON, REST, SQL, Web and Application Servers like Tomcat.
APIs (REST, SCIM) leveraging Java-based development
Experience building, installing, and testing IIQ using Services Standard Build/Deployment (SSB/SSD)
SailPoint IdentityIQ - LCM, Certifications, Roles
Java/BeanShell, SSB/SSD
Web services - REST, SCIM
SQL/PL-SQL
Active Directory
You have: B.S. in Computer Science (preferred), Engineering/Mathematics, or comparable; 8+ years of experience building and developing with the SailPoint IdentityIQ product; expertise onboarding applications, Lifecycle events, Certifications, and Roles in SailPoint IdentityIQ.
Hands-on experience with automation and pipeline implementation (testing, Continuous Integration/Continuous Delivery pipelines).
You have hands-on experience in designing and developing using the following technologies:
Expertise working with the SailPoint IdentityIQ platform - Lifecycle Manager, Certifications, Roles, Joiner/Mover/Leaver events
Expertise onboarding applications with connectors like Active Directory, JDBC, SCIM 2.0, Azure Active Directory Programming
Expertise with the following programming languages: Java, BeanShell/JavaScript, Angular, SQL.
Expertise developing using XML, JSON, REST, SQL, Web and Application Servers like Tomcat.
Expertise developing APIs (REST, SCIM) leveraging Java-based development (a brief sketch appears later in this listing)
Experience building, installing, and testing IIQ using Services Standard Build/Deployment (SSB/SSD)
Experience installing, patching, and upgrading IIQ
Experience analyzing and troubleshooting issues in various components of IIQ
Experience with Application Lifecycle Management tools such as Git, Maven, Jenkins, Artifactory, Veracode, Sonar
You have experience working in an agile environment (Scrum and Kanban)
You possess strong engineering skills and experience developing maintainable, scalable multi-tiered applications
You have a maniacal focus on automation and are always looking to reduce waste and improve efficiency
You should enjoy communicating and learning the business behind the application.
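For context on the REST/SCIM development this listing emphasizes, below is a minimal Python sketch of a SCIM 2.0 user-provisioning call following the RFC 7643/7644 core schema. The base URL and token are placeholders, not SailPoint specifics (IdentityIQ customization itself is done in Java/BeanShell).

```python
# Minimal SCIM 2.0 user-provisioning sketch (RFC 7643/7644 core schema).
# The base URL and bearer token are placeholders, not a real endpoint.
import requests

SCIM_BASE = "https://idm.example.com/scim/v2"  # hypothetical endpoint
TOKEN = "..."  # placeholder credential

payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # a SCIM server echoes the created resource with its id
```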
Our client is a leader in the banking industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
AI/ML Engineer - Full Time
Requirements engineer job in Irving, TX
Job Title - AI/ML Engineer
Job Type - Full Time
Key Responsibilities
Model Development: Design, build, and optimize machine learning models for predictive analytics, classification, recommendation systems, and NLP.
Data Processing: Collect, clean, and preprocess large datasets from various sources for training and evaluation.
Deployment: Implement and deploy ML models into production environments using frameworks like TensorFlow, PyTorch, or Scikit-learn (see the sketch after this list).
Performance Monitoring: Continuously monitor and improve model accuracy, efficiency, and scalability.
Collaboration: Work closely with data engineers, software developers, and product teams to integrate AI solutions into applications.
Research & Innovation: Stay updated with the latest advancements in AI/ML and apply cutting-edge techniques to business challenges.
Documentation: Maintain clear documentation of models, processes, and workflows.
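To make the deployment responsibility above concrete, here is a minimal scikit-learn sketch of the train-and-serialize step a production handoff typically starts from; the dataset and model choice are generic placeholders, not this employer's stack.

```python
# Train a simple classifier and persist it for a serving process.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import joblib

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")

joblib.dump(model, "model.joblib")      # artifact handed to deployment
reloaded = joblib.load("model.joblib")  # what a serving process would do
assert (reloaded.predict(X_test) == model.predict(X_test)).all()
```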
Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
Strong proficiency in Python, R, or Java.
Hands-on experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn).
Knowledge of data structures, algorithms, and software engineering principles.
Experience with cloud platforms (AWS, Azure, GCP) and MLOps tools.
Familiarity with big data technologies (Spark, Hadoop) is a plus.
Excellent problem-solving and analytical skills.
Preferred Qualifications
Experience in Natural Language Processing (NLP), Computer Vision, or Deep Learning.
Understanding of model interpretability and ethical AI practices.
Prior experience deploying models in production environments.
AI/ML Engineer
Requirements engineer job in Dallas, TX
About the Role
Apexon is seeking an experienced AI/ML Engineer with strong expertise in LLM development, MLOps, and building scalable GenAI solutions. You will design, build, and operationalize AI/ML systems that support enterprise clients across healthcare, BFSI, retail, and digital transformation engagements.
The ideal candidate has hands-on experience building end-to-end machine learning pipelines, optimizing large language model workflows, and deploying secure ML systems in production environments.
Responsibilities
LLM & AI Solution Development
Build, fine-tune, evaluate, and optimize Large Language Models (LLMs) for client-specific use cases such as document intelligence, chatbot automation, code generation, and workflow orchestration.
Develop RAG (Retrieval-Augmented Generation) pipelines using enterprise knowledge bases (a minimal skeleton follows this list).
Implement prompt engineering, guardrails, hallucination reduction strategies, and safety frameworks.
Work with transformer-based architectures (GPT, LLaMA, Mistral, Falcon, etc.) and develop optimized model variants for low-latency and cost-efficient inference.
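As a skeleton of the RAG pattern named above: embed documents into a FAISS index, retrieve nearest neighbors, and ground the prompt in what was retrieved. The embed() function below is a hypothetical stand-in for a real embedding model, and the final LLM call is left as a comment, so this shows the shape of the pipeline rather than a working retriever.

```python
# RAG skeleton: index -> retrieve -> grounded prompt.
import numpy as np
import faiss

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder: swap in a real embedding model (API or local);
    # the 384-dim random vectors here only demonstrate the plumbing.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.random((len(texts), 384), dtype=np.float32)

docs = ["Refund policy: 30 days.", "Support hours: 9-5 CT.", "SLA: 99.9%."]
index = faiss.IndexFlatL2(384)
index.add(embed(docs))

def retrieve(query: str, k: int = 2) -> list[str]:
    _, idx = index.search(embed([query]), k)
    return [docs[i] for i in idx[0]]

question = "What is the refund window?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
# llm(prompt) would go here; grounding the prompt in retrieved context is
# what reduces hallucination relative to a bare query.
```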
Machine Learning Engineering
Develop scalable ML systems including feature pipelines, training jobs, and batch/real-time inference services.
Build and automate training, validation, and monitoring workflows for predictive and GenAI models.
Perform offline evaluation, A/B testing, performance benchmarking, and business KPI tracking.
MLOps & Platform Engineering
Build and maintain end-to-end MLOps pipelines using:
AWS SageMaker, Databricks, MLflow, Kubernetes, Docker, Terraform, Airflow (an MLflow tracking sketch follows this list)
Manage CI/CD pipelines for model deployment, versioning, reproducibility, and governance.
Implement enterprise-grade model monitoring (data drift, performance, cost, safety).
Maintain infrastructure for vector stores, embeddings pipelines, feature stores, and inference endpoints.
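As one concrete slice of the MLOps stack listed above, a minimal MLflow tracking sketch: log parameters, a metric, and a model artifact for versioning and reproducibility. The tracking URI and experiment name are placeholders.

```python
# Minimal MLflow experiment-tracking sketch.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # hypothetical
mlflow.set_experiment("churn-model")                            # hypothetical

X, y = make_classification(n_samples=500, random_state=0)
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")  # versioned artifact
```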
Data Engineering & Infrastructure
Build data pipelines for structured and unstructured data using:
Snowflake, S3, Kafka, Delta Lake, Spark (PySpark)
Work on data ingestion, transformation, quality checks, cataloging, and secure storage.
Ensure all systems adhere to Apexon and client-specific security, IAM, and compliance standards.
Cross-Functional Collaboration
Partner with product managers, data engineers, cloud architects, and QA teams.
Translate business requirements into scalable AI/ML solutions.
Ensure model explainability, governance documentation, and compliance adherence.
Basic Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, Data Science, or related field.
4+ years of experience in AI/ML engineering, including 1+ years working with LLMs/GenAI.
Strong experience with Python, Transformers, PyTorch/TensorFlow, and NLP frameworks.
Hands-on expertise with MLOps platforms: SageMaker, MLflow, Databricks, Kubernetes, Docker.
Strong SQL and data engineering experience (Snowflake, S3, Spark, Kafka).
Preferred Qualifications
Experience implementing Generative AI solutions for enterprise clients.
Expertise in distributed training, quantization, optimization, and GPU acceleration.
Experience with:
Vector Databases (Pinecone, Weaviate, FAISS)
RAG frameworks (LangChain, LlamaIndex)
Monitoring tools (Prometheus, Grafana, CloudWatch)
Understanding of model governance, fairness evaluation, and client compliance frameworks.
CyberArk Engineer
Requirements engineer job in Frisco, TX
You will be responsible for delivery and buildout of a Privileged Access ecosystem and apply comprehensive knowledge of privileged access security controls to the completion of complex assignments. You will identify and recommend changes in procedures, processes, and scope of delivery. This position reports to the Director of Privileged Access Engineering.
What you will do:
Troubleshoot complex heterogeneous environments related to privileged access technologies through server log and network traffic analysis, leaning on experience with troubleshooting and analysis techniques and tools.
Understand taxonomy of privileges on named or shared privileged accounts.
Incorporate cybersecurity best practices for technology governance over privileged account lifecycles.
Develop PAM (CyberArk) connection components and plugins as needed using scripting tools (PowerShell, Python) and REST APIs (see the sketch after this list).
Develop regular reporting and be accountable for deliverables.
Perform disaster resiliency tests and discovery audits, and present findings to management to ensure the security and integrity of the systems.
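To illustrate the REST-driven automation described above, a hedged Python sketch against the CyberArk PVWA REST API. The endpoint paths follow CyberArk's documented v10+ conventions but should be verified against your PVWA version; the host, service account, and search term are placeholders.

```python
# Sketch: authenticate to PVWA and search onboarded accounts.
import requests

PVWA = "https://pvwa.example.com/PasswordVault"  # hypothetical host

# Logon returns the session token as a JSON-encoded string
token = requests.post(
    f"{PVWA}/API/Auth/CyberArk/Logon",
    json={"username": "svc_automation", "password": "..."},  # placeholder creds
    timeout=30,
).json()

headers = {"Authorization": token}

# e.g., a discovery/reporting job over privileged accounts
accounts = requests.get(
    f"{PVWA}/API/Accounts",
    params={"search": "unix-root"},  # placeholder search term
    headers=headers,
    timeout=30,
).json()
for acct in accounts.get("value", []):
    print(acct["name"], acct["safeName"])
```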
What you will need to have:
8+ years' experience in IT.
5+ years' experience in Cyber Security.
3+ years' experience in implementation, integration, and operations of privileged access technologies (CyberArk and all its components).
3+ years' experience in systems and network administration (Windows, Unix/Linux, network devices) and good knowledge of PKI, authentication tools and protocols (e.g., SAML, RADIUS, PING), and MFA.
2+ years' experience with privileged access controls in Unix and Windows environments.
2+ years' experience with broader IAM ecosystem of directories, identity management, and access management controls.
1+ years' experience in a senior technical role (have a deep understanding of the product) with IAM/PAM products such as CyberArk and its components.
Bachelor's degree in computer science, or a relevant field, or an equivalent combination of education, work, and/or military experience.
What would be great to have:
2+ years' experience in onboarding and managing privileged credentials across Windows, Linux/Unix, databases, networking devices and other platforms.
2+ years' experience in development/scripting (shell, PowerShell, Python), utilizing REST API methods and other current tools, including AI, to assist in automation activities such as provisioning vault components and accounts and implementing access controls.
1+ years' experience developing technical solutions related to PAM and presenting them to management.
1+ years' experience interfacing with Corporate Audit and External Audit functions for regulatory compliance.
Cybersecurity certifications such as CISA, CISSP and CyberArk certifications - CDE, Sentry, Defender.
Endpoint Engineer
Requirements engineer job in Plano, TX
About Us:
At DivergeIT, we're recognized for our innovative approach to IT consulting and managed services. We help organizations navigate digital transformation by delivering tailored technology strategies that drive business growth. We're seeking a highly skilled and motivated Endpoint Engineer to join our dynamic team and further our commitment to excellence.
Why Join DivergeIT?
At DivergeIT, we offer a collaborative and innovative work environment where your expertise will be valued, and your contributions will make a tangible impact. We are committed to supporting your professional growth through continuous learning opportunities and certifications.
Position Overview
As an Endpoint Engineer, you will be responsible for the provisioning, configuration, and lifecycle management of Windows-based endpoints, including both physical workstations and Azure Virtual Desktop (AVD) environments. This position requires extensive experience with Microsoft Intune, SCCM, Patch My PC, 1E, and Autopilot, along with proficiency in creating, deploying, and maintaining Windows images. You will collaborate closely with the client's IT security team and work alongside our dedicated service desk to ensure optimal endpoint performance and security compliance.
Key Responsibilities
Design, create, and maintain standardized Windows images for physical workstations and Azure Virtual Desktop (AVD) using tools such as SCCM, Microsoft Deployment Toolkit (MDT), and Intune.
Manage and automate device provisioning and deployment through Windows Autopilot.
Administer patching, software deployment, and update compliance using Patch My PC, SCCM, and Intune (a compliance-query sketch follows this list).
Utilize 1E tools (e.g., Nomad, Tachyon) to support remote management, compliance, and endpoint performance monitoring.
Collaborate with the client's IT security team to implement and maintain endpoint security baselines and compliance standards.
Provide escalation support to our dedicated service desk and help drive resolution of complex endpoint issues.
Maintain up-to-date documentation for image creation, deployment processes, and system configurations.
Monitor and optimize AVD performance, scaling, and configuration consistency.
Stay informed about changes in Microsoft endpoint management tools and provide recommendations for improvements or modernization efforts.
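As one example of the compliance work above, a sketch that pulls Intune managed-device compliance through Microsoft Graph from Python. The managedDevices endpoint is Graph's documented one (it requires a permission such as DeviceManagementManagedDevices.Read.All), but the token acquisition and filtering here are placeholders.

```python
# Sketch: list non-compliant Intune-managed devices via Microsoft Graph.
import requests

TOKEN = "..."  # in practice, acquire via MSAL and an app registration

resp = requests.get(
    "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices",
    params={"$select": "deviceName,complianceState,operatingSystem"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for device in resp.json().get("value", []):
    if device["complianceState"] != "compliant":
        print(device["deviceName"], device["operatingSystem"])
```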
Required Qualifications
3+ years of experience in an endpoint management or systems engineering role, preferably in an MSP or enterprise IT environment.
Expertise in creating and managing Windows images for both physical endpoints and AVD environments.
Strong understanding of Windows 10/11 OS deployment, device provisioning, group policy, and compliance management.
Experience working collaboratively with IT security and help desk teams.
Excellent troubleshooting, documentation, and communication skills.
Proficiency with:
Microsoft Intune (Endpoint Manager)
System Center Configuration Manager (SCCM)
Windows Autopilot
Patch My PC
1E Nomad, Tachyon
Preferred Qualifications
Microsoft certifications (e.g., MD-102, MS-101, AZ-140).
Experience with hybrid environments and Azure AD Join/Hybrid Join.
Familiarity with AVD scaling plans, FSLogix, and host pool image management.
Scripting knowledge (PowerShell) for automation of endpoint and imaging tasks.
Exposure to Zero Trust security models and conditional access policies.
HP NonStop Engineer (W2)
Requirements engineer job in Plano, TX
Open to: Jersey City, NJ; Tampa, FL; Columbus, OH; Plano, TX
W2 Hiring
Onsite from day one
We are seeking an experienced HP NonStop Engineer to support, enhance, and automate operations on mission-critical HPE NonStop systems. The ideal candidate will have strong hands-on experience with NonStop environments and a passion for automating manual operational tasks using modern scripting and configuration management tools.
Automate manual operations and system tasks.
Skills:
Python, Ansible, Java, HPE NonStop TACL, Prognosis
SRE Engineer with Azure AI
Requirements engineer job in Plano, TX
Overall experience: 8 to 10+ years performing Production Support for mission-critical, high-performance applications (Customer Care, Retail, and eCommerce customer/agent-facing application experience preferred).
Experience using Docker, Kubernetes, and Microsoft Azure Cloud; Unix, networking, and troubleshooting knowledge.
Experience with Application & Infrastructure Performance Monitoring tools like Dynatrace.
Experience with Application Log Analytics tools like Elastic (a query sketch appears at the end of this listing).
Experience with visualization tools like Kibana and Grafana.
EFK stack experience preferred.
Creation of Dashboards on Dynatrace, ELK and Grafana.
Debugging Java and microservices logs.
Experience in Relational & NoSQL databases like Oracle & Cassandra.
Experience with Site Reliability Engineering preferred.
Generative AI and Workflow Automation skills:
Demonstrated experience leveraging AI-driven tools for automating end-to-end operational workflows.
Demonstrated experience using text-generative and code-generative AI models.
Automation, Gen AI, and agentic workflow technical skills, including the GPT-4o LLM, advanced LangGraph and LangChain, Google Dialogflow (Api.ai), Google Vertex, Databricks, Spark, and Snowflake.
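To make the log-analytics side of this role concrete, here is a sketch that queries recent error logs from Elasticsearch with the official Python client. The host, index pattern, and field names (service, level, message) are assumptions about the log schema, not a known deployment.

```python
# Sketch: pull the last 15 minutes of ERROR logs for one microservice.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elastic.example.internal:9200")  # hypothetical host

resp = es.search(
    index="app-logs-*",  # assumed index pattern
    query={
        "bool": {
            "must": [
                {"match": {"level": "ERROR"}},
                {"match": {"service": "checkout"}},  # assumed field/service
            ],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    size=20,
    sort=[{"@timestamp": {"order": "desc"}}],
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["@timestamp"], hit["_source"]["message"])
```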
AWS Cloud Engineer
Requirements engineer job in Plano, TX
Interview process: Onsite (Plano, TX)
We are looking for a highly skilled Senior AWS Data/Backend Engineer with strong expertise in C#, AWS Lambda, AWS Glue, and distributed caching technologies. The ideal candidate will excel at building and optimizing high-performance cloud-native APIs and backend services in a fully AWS-based environment. This role focuses heavily on API scalability, performance tuning, and data pipeline optimization.
🔧 Key Responsibilities
Design, develop, and optimize AWS Lambda functions in C# for low-latency, high-throughput workloads.
Implement and manage distributed caching using Redis/ElastiCache or OpenSearch (a cache-aside sketch follows this list).
Enhance and support AWS Glue ETL pipelines, data structures, and workflows.
Architect scalable backend services using API Gateway, Lambda, S3, and Glue Catalog.
Optimize data querying patterns across Aurora, DynamoDB, Redshift, or similar databases.
Perform end-to-end performance profiling and bottleneck analysis across APIs and data pipelines.
Improve observability with CloudWatch, X-Ray, and structured logging.
Ensure all cloud solutions meet best practices for security, scalability, reliability, and cost efficiency.
Collaborate with cross-functional teams to deliver high-performance APIs and backend systems.
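To illustrate the caching responsibility above: a cache-aside sketch for a Lambda-backed API. The posting's stack is C#/.NET; Python is used here purely for illustration, and the cache host and fetch_from_aurora() helper are hypothetical.

```python
# Cache-aside pattern: try Redis first, fall back to the database, set a TTL.
import json
import os
import redis

r = redis.Redis(host=os.environ.get("CACHE_HOST", "localhost"), port=6379)

def fetch_from_aurora(item_id: str) -> dict:
    return {"id": item_id, "price": 42}  # placeholder for the real DB read

def handler(event, context):
    item_id = event["pathParameters"]["id"]
    key = f"item:{item_id}"

    cached = r.get(key)                      # 1) try the cache first
    if cached is not None:
        body = json.loads(cached)
    else:
        body = fetch_from_aurora(item_id)    # 2) fall back to the database
        r.setex(key, 300, json.dumps(body))  # 3) populate with a 5-minute TTL
    return {"statusCode": 200, "body": json.dumps(body)}
```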
✅ Required Skills & Experience
5+ years in backend engineering, cloud engineering, or data engineering.
Strong expertise in C#/.NET with hands-on AWS Lambda experience.
Experience implementing Redis/ElastiCache or OpenSearch caching layers.
Hands-on with AWS Glue, Glue Catalog, ETL optimization, and data lakes.
Strong knowledge of core AWS services: Lambda, API Gateway, S3, IAM, CloudWatch.
Experience with cloud databases: Aurora, DynamoDB, Redshift.
Strong understanding of data modeling, partitioning, and schema optimization.
Excellent debugging, performance tuning, and problem-solving abilities.
✨ Preferred Qualifications
Experience with event-driven architectures (Kinesis, Kafka, SNS, SQS).
Familiarity with CI/CD pipelines and automated deployments.
Knowledge of AWS cost optimization practices.
Understanding of DevOps/SRE monitoring and high-availability concepts.
Kubernetes Engineer
Requirements engineer job in Plano, TX
Hands-on experience with Kubernetes engineering and development.
Minimum 5-7+ years of experience working with hybrid infrastructure architectures
Experience analyzing the architecture of on-prem infrastructure for applications (network, storage, processing, backup/DR, etc.).
Strong understanding of infrastructure capacity planning, monitoring, upgrades, IaC automation using Terraform and Ansible, and CI/CD using Jenkins/GitHub Actions (a capacity-snapshot sketch follows this list).
Experience working with engineering teams to define best practices and processes as appropriate to support the entire infrastructure lifecycle - Plan, Build, Deploy, and Operate such as automate lifecycle activities - self-service, orchestration and provisioning, configuration management.
Experience defining infrastructure direction.
Drive continuous improvement, including design and standardization of processes and methodologies.
Experience assessing feasibility, complexity and scope of new capabilities and solutions
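As a small illustration of the capacity-planning work above, a sketch using the official Kubernetes Python client to snapshot allocatable node resources; a reachable cluster and valid kubeconfig are assumed.

```python
# Sketch: per-node allocatable CPU/memory, a basic capacity-planning input.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    alloc = node.status.allocatable
    print(node.metadata.name, "cpu:", alloc["cpu"], "memory:", alloc["memory"])
```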
Base Salary Range: $100,000 - $110,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Backend Engineer (Distributed Systems and Kubernetes)
Requirements engineer job in Dallas, TX
Software Engineer - Batch Compute (Kubernetes / HPC)
Dallas (Hybrid) | 💼 Full-time
A leading, well-funded quantitative research and technology firm is looking for a Software Engineer to join a team building and running a large-scale, high-performance batch compute platform.
You'll be working on modern Kubernetes-based infrastructure that powers complex research and ML workloads at serious scale, including contributions to a well-known open-source scheduling project used for multi-cluster batch computing.
What you'll be doing
• Building and developing backend services, primarily in Go (Python, C++, C# backgrounds are fine)
• Working on large-scale batch scheduling and distributed systems on Kubernetes
• Operating and improving HPC-style workloads, CI/CD pipelines, and Linux-based platforms
• Optimising data flows across systems using tools like PostgreSQL
• Debugging and improving performance across infrastructure, networking, and software layers
What they're looking for
• Strong software engineering background with an interest in Kubernetes and batch workloads
• Experience with Kubernetes internals (controllers, operators, schedulers) - a watch-loop sketch follows this list
• Exposure to HPC, job schedulers, or DAG-based workflows
• Familiarity with cloud platforms (ideally AWS), observability tooling, and event-driven systems
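To make the batch-scheduling context concrete, here is an illustrative watch loop over Kubernetes Jobs with the official Python client. The open-source scheduler the role alludes to is not named, so nothing below is specific to it.

```python
# Sketch: stream Job events, the kind of primitive a batch scheduler builds on.
from kubernetes import client, config, watch

config.load_kube_config()
batch = client.BatchV1Api()

w = watch.Watch()
for event in w.stream(batch.list_job_for_all_namespaces, timeout_seconds=60):
    job = event["object"]
    state = "succeeded" if job.status.succeeded else "pending/running"
    print(event["type"], job.metadata.namespace, job.metadata.name, state)
```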
Why it's worth a look
• Market-leading compensation plus bonus
• Hybrid setup from a brand-new Dallas office
• Strong work/life balance and excellent benefits
• Generous relocation support if needed
• The chance to work at genuine scale on technically hard problems
If you're interested (or know someone who might be), drop me a message and I'm happy to share more details anonymously.
Autodesk Vault Engineer - CDC5695559
Requirements engineer job in Plano, TX
Autodesk Vault Upgrade:
Lead planning, execution, and validation of the Autodesk Vault upgrade.
Collaborate with engineering, CAD, and IT teams to support the Vault upgrade roadmap.
Ensure robust data migration, backup and recovery strategies.
Conduct required validation after upgrade
Document upgrade procedures and train end users as needed.
Coordinate with Autodesk support for unresolved upgrade related issues.
SCCM Package deployment:
Validate that SCCM packages work as expected.
Investigate/resolve any installation failures after package installation.
Monitor deployment success rates and troubleshoot issues.
Track and resolve user-reported bugs or regressions introduced during the upgrade.
Support rollback or contingency plans if critical issues arise.
Manage Vault user roles, permissions, and access controls.
Support CAD teams with Vault-related workflows.
Manually install/update SCCM for some applications.
Senior Data Engineer
Requirements engineer job in Plano, TX
Ascendion is a full-service digital engineering solutions company. We make and manage software platforms and products that power growth and deliver captivating experiences to consumers and employees. Our engineering, cloud, data, experience design, and talent solution capabilities accelerate transformation and impact for enterprise clients. Headquartered in New Jersey, our workforce of 6,000+ Ascenders delivers solutions from around the globe. Ascendion is built differently to engineer the next.
Ascendion | Engineering to elevate life
We have a culture built on opportunity, inclusion, and a spirit of partnership. Come, change the world with us:
Build the coolest tech for the world's leading brands
Solve complex problems - and learn new skills
Experience the power of transforming digital engineering for Fortune 500 clients
Master your craft with leading training programs and hands-on experience
Experience a community of change makers!
Join a culture of high-performing innovators with endless ideas and a passion for tech. Our culture is the fabric of our company, and it is what makes us unique and diverse. The way we share ideas, learning, experiences, successes, and joy allows everyone to be their best at Ascendion.
*** About the Role ***
Job Title: Senior Data Engineer
Key Responsibilities:
Design, develop, and maintain scalable and reliable data pipelines and ETL workflows.
Build and optimize data models and queries in Snowflake to support analytics and reporting needs.
Develop data processing and automation scripts using Python.
Implement and manage data orchestration workflows using Airflow, Airbyte, or similar tools (a DAG sketch follows this list).
Work with AWS data services including EMR, Glue, and Kafka for large-scale data ingestion and processing.
Ensure data quality, reliability, and performance across data pipelines.
Collaborate with analytics, product, and engineering teams to understand data requirements and deliver robust solutions.
Monitor, troubleshoot, and optimize data workflows for performance and cost efficiency.
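For the orchestration responsibility above, a minimal Airflow DAG sketch; the DAG id, schedule, and task callables are illustrative placeholders rather than a real pipeline.

```python
# Minimal two-task Airflow DAG: extract then load, daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")    # placeholder for e.g. an S3/Kafka read

def load():
    print("write to Snowflake")  # placeholder for the warehouse load

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # dependency: extract runs before load
```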
Required Skills & Qualifications:
8+ years of hands-on experience as a Data Engineer.
Strong proficiency in SQL and Snowflake.
Extensive experience with ETL frameworks and data pipeline orchestration tools (Airflow, Airbyte, or similar).
Proficiency in Python for data processing and automation.
Hands-on experience with AWS data services, including EMR, Glue, and Kafka.
Strong understanding of data warehousing, data modeling, and distributed data processing concepts.
Nice to Have:
Experience working with streaming data pipelines.
Familiarity with data governance, security, and compliance best practices.
Experience mentoring junior engineers and leading technical initiatives.
Salary Range: The salary for this position is between $130,000 and $140,000 annually. Factors which may affect pay within this range may include geography/market, skills, education, experience, and other qualifications of the successful candidate.
Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: [medical insurance] [dental insurance] [vision insurance] [401(k) retirement plan] [long-term disability insurance] [short-term disability insurance] [5 personal days accrued each calendar year; the paid time off benefits meet the paid sick and safe time laws that pertain to the City/State] [10-15 days of paid vacation time] [6 paid holidays and 1 floating holiday per calendar year] [Ascendion Learning Management System]
Want to change the world? Let us know.
Tell us about your experiences, education, and ambitions. Bring your knowledge, unique viewpoint, and creativity to the table. Let's talk!
GCP Data Engineer
Requirements engineer job in Dallas, TX
MUST BE USC or Green Card; No vendors
GCP Data Engineer/Lead Onsite
Required Qualifications:
9+ years of hands-on Data Warehousing experience.
9+ years of hands-on ETL (e.g., Informatica/DataStage) experience
3+ years of hands-on BigQuery
3+ years of hands-on GCP
9+ years of hands-on Teradata experience
9+ years working in a cross-functional environment.
3+ years of hands-on experience with Google Cloud Platform services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage (a query sketch appears at the end of this listing)
3+ years of hands-on experience building modern data pipelines with GCP platform
3+ years of experience with Query optimization, data structures, transformation, metadata, dependency, and workload management
3+ years of experience with SQL, NoSQL
3+ years of experience in data engineering with a focus on microservices-based data solutions
3+ years of containerization (Docker, Kubernetes) and CI/CD for data pipeline
3+ years of experience with Python (or a comparable scripting language)
3+ years of experience with Big data and cloud architecture
3+ years of experience with deployment/scaling of apps in containerized environments (Kubernetes)
Excellent oral and written communications skills; ability to interact effectively with all levels within the organization.
Working knowledge of AGILE/SDLC methodology
Excellent analytical and problem-solving skills.
Ability to interact and work effectively with technical & non-technical levels within the organization.
Ability to drive clarity of purpose and goals during release and planning activities.
Excellent organizational skills including ability to prioritize tasks efficiently with high level of attention to detail.
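To ground the BigQuery requirement above, a sketch of a parameterized query with the official google-cloud-bigquery client; the project, dataset, and table names are placeholders.

```python
# Sketch: parameterized BigQuery query (parameters avoid SQL injection in
# user-driven reporting paths).
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT customer_id, SUM(amount) AS total
    FROM `my-project.sales.orders`          -- hypothetical table
    WHERE order_date >= @start
    GROUP BY customer_id
    ORDER BY total DESC
    LIMIT 10
"""
job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01")
        ]
    ),
)
for row in job.result():
    print(row["customer_id"], row["total"])
```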
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Senior Data Engineer (USC AND GC ONLY)
Requirements engineer job in Richardson, TX
Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
Duration: 6 Months (Possible Extension)
We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.
Must-Have Skills (Non-Negotiable)
9+ years in Data Engineering & Data Warehousing
9+ years hands-on ETL experience (Informatica, DataStage, etc.)
9+ years working with Teradata
3+ years hands-on GCP and BigQuery
Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
Strong background in query optimization, data structures, metadata & workload management
Experience delivering microservices-based data solutions
Proficiency in Big Data & cloud architecture
3+ years with SQL & NoSQL
3+ years with Python or similar scripting languages
3+ years with Docker, Kubernetes, CI/CD for data pipelines
Expertise in deploying & scaling apps in containerized environments (K8s)
Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
Familiarity with AGILE/SDLC methodologies
Key Responsibilities
Build, enhance, and optimize modern data pipelines on GCP (a streaming-pipeline sketch follows this list)
Implement scalable ETL frameworks, data structures, and workflow dependency management
Architect and tune BigQuery datasets, queries, and storage layers
Collaborate with cross-functional teams to define data requirements and support business objectives
Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
Drive clarity in project goals, timelines, and deliverables during Agile planning sessions
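As an illustration of the streaming side of the pipeline work above, a skeleton of a streaming Apache Beam (Dataflow) pipeline from Pub/Sub to BigQuery. The subscription, destination table, and JSON message format are assumptions, and the table is assumed to already exist.

```python
# Beam streaming skeleton: Pub/Sub -> parse JSON -> BigQuery append.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

opts = PipelineOptions(streaming=True)  # add project/runner args for Dataflow

with beam.Pipeline(options=opts) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-proj/subscriptions/events"  # placeholder
        )
        | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
        | "Write" >> beam.io.WriteToBigQuery(
            "my-proj:analytics.events",  # placeholder table (assumed to exist)
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```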
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* OR Call us on *****************
Azure Data Engineer Sr
Requirements engineer job in Irving, TX
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, Extract, transform and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
Data Engineer (Python, PySpark, Databricks)
Requirements engineer job in Dallas, TX
Job Title: Data Engineer (Python, PySpark, Databricks)
Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
Data Engineer - SQL, Python and Pyspark Expert (Onsite - Dallas, TX)
Key Responsibilities
Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments
Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments); a sketch follows this list
Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
Ensure data quality, consistency, security, and lineage across all stages of data processing
Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
Document data flows, logic, and transformation rules
Troubleshoot performance and quality issues in batch and real-time pipelines
Support compliance-related reporting (e.g., HMDA, CFPB)
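To make the PySpark expectation concrete, a short transformation sketch in the mortgage-data spirit of this role; the input path and column names are illustrative only.

```python
# PySpark sketch: read raw loans, derive LTV, aggregate into a small mart.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("loan_mart").getOrCreate()

loans = spark.read.parquet("s3://raw/loans/")  # placeholder path/schema

mart = (
    loans
    .withColumn("ltv", F.col("loan_amount") / F.col("property_value"))
    .filter(F.col("status") == "servicing")
    .groupBy("origination_year")
    .agg(F.count("*").alias("loan_count"), F.avg("ltv").alias("avg_ltv"))
)
mart.write.mode("overwrite").parquet("s3://marts/loans_by_year/")  # placeholder
```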
Required Qualifications
6+ years of experience in data engineering or data development
Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
Strong hands-on skills in Python for scripting, data wrangling, and automation
Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
Experience working with mortgage banking data sets and domain knowledge is highly preferred
Strong understanding of data modeling (dimensional, normalized, star schema)
Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
Familiarity with ETL tools, orchestration frameworks (e.g., Airflow, ADF, dbt)
AZURE DATA ENGINEER (Databricks certified, with DATA FACTORY)
Requirements engineer job in Irving, TX
AZURE DATA ENGINEER with DATA FACTORY.
Databricks certified
3 days a week onsite, can be based out of Irving TX or Houston TX.
Rate is $45/hour on W2.
Data Engineer
Requirements engineer job in Irving, TX
W2 Contract to Hire Role with Monthly Travel to the Dallas Texas area
We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems.
Duties/Responsibilities:
Develop and optimize complex SQL queries to support internal analytics and external client data requests (an extract sketch follows this list).
Generate custom data lists and extracts based on client specifications and business rules.
Design, build, and maintain efficient ETL pipelines using Azure Data Factory.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
Work with Salesforce data; familiarity with SOQL is preferred but not required.
Support Power BI reporting through basic data modeling and integration.
Assist in implementing MLOps practices for model deployment and monitoring.
Use Python for data manipulation, automation, and integration tasks.
Ensure data quality, consistency, and security across all workflows and systems.
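As an illustration of the client-extract duty above, a sketch of a parameterized SQL pull into pandas. The connection string, table, and columns are placeholders; binding parameters (rather than formatting strings) is the safe way to apply client-supplied rules.

```python
# Sketch: parameterized extract -> DataFrame -> CSV deliverable.
import pandas as pd
from sqlalchemy import create_engine, text

# hypothetical MS SQL Server connection string
engine = create_engine(
    "mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server"
)

query = text("""
    SELECT member_id, plan_code, enrolled_on
    FROM dbo.members                -- placeholder table
    WHERE state = :state AND enrolled_on >= :since
""")

df = pd.read_sql(query, engine, params={"state": "TX", "since": "2024-01-01"})
df.to_csv("client_extract.csv", index=False)
```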
Required Skills/Abilities/Attributes:
5+ years of experience in data engineering or a related field.
Strong proficiency in SQL, including query optimization and performance tuning.
Experience with Azure Data Factory, including Git repositories and pipeline deployment.
Ability to translate client requirements into accurate and timely data outputs.
Working knowledge of Python for data-related tasks.
Strong problem-solving skills and ability to work independently.
Excellent communication and documentation skills.
Preferred Skills/Experience:
Previous knowledge of building pipelines for ML models.
Extensive experience creating/managing stored procedures and functions in MS SQL Server
2+ years of experience in cloud architecture (Azure, AWS, etc)
Experience with 'code management' systems (Azure DevOps)
2+ years of reporting design and management (Power BI preferred)
Ability to influence others through the articulation of ideas, concepts, benefits, etc.
Education and Experience:
Bachelor's degree in a computer science field or applicable business experience.
Minimum 3 years of experience in a Data Engineering role
Healthcare experience preferred.
Physical Requirements:
Prolonged periods sitting at a desk and working on a computer.
Ability to lift 20 lbs.
Data Engineer
Requirements engineer job in Dallas, TX
Junior Data Engineer
DESCRIPTION: BeaconFire is based in Central NJ and specializes in Software Development, Web Development, and Business Intelligence; we are looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
Qualifications:
Passion for data and a deep desire to learn.
Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or related discipline.
Intermediate Python; experience in data processing (NumPy, pandas, etc.) is a plus.
Experience with relational databases (SQL Server, Oracle, MySQL, etc.)
Strong written and verbal communication skills.
Ability to work both independently and as part of a team.
Responsibilities:
Collaborate with the analytics team to find reliable data solutions to meet the business needs.
Design and implement scalable ETL or ELT processes to support the business demand for data.
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Build and incorporate automated unit tests, participate in integration testing efforts.
Work with teams to resolve operational & performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.
GCP Data Engineer
Requirements engineer job in Fort Worth, TX
Job Title: GCP Data Engineer
Employment Type: W2/CTH
Client: Direct
We are seeking a highly skilled Data Engineer with strong expertise in Python, SQL, and Google Cloud Platform (GCP) services. The ideal candidate will have 6-8 years of hands-on experience in building and maintaining scalable data pipelines, working with APIs, and leveraging GCP tools such as BigQuery, Cloud Composer, and Dataflow.
Core Responsibilities:
• Design, build, and maintain scalable data pipelines to support analytics and business operations.
• Develop and optimize ETL processes for structured and unstructured data.
• Work with BigQuery, Cloud Composer, and other GCP services to manage data workflows.
• Collaborate with data analysts and business teams to ensure data availability and quality.
• Integrate data from multiple sources using APIs and custom scripts.
• Monitor and troubleshoot pipeline performance and reliability.
• Technical Skills:
o Strong proficiency in Python and SQL.
o Experience with data pipeline development and ETL frameworks.
• GCP Expertise:
o Hands-on experience with BigQuery, Cloud Composer, and Dataflow.
• Additional Requirements:
o Familiarity with workflow orchestration tools and cloud-based data architecture.
o Strong problem-solving and analytical skills.
o Excellent communication and collaboration abilities.