Engineer IV
Requirements engineer job in Dallas, TX
Additional Information: Job Number 25190111. Job Category: Engineering & Facilities. Location: The Ritz-Carlton Dallas, 2121 McKinney Avenue, Dallas, Texas, United States, 75201. Schedule: Full Time. Located Remotely? N. Type: Non-Management
Respond and attend to guest repair requests. Communicate with guests/customers to resolve maintenance issues. Perform preventive maintenance on tools and equipment, including cleaning and lubrication. Visually inspect tools, equipment, or machines. Carry equipment (e.g., tools, radio). Identify, locate, and operate all shut-off valves for equipment and utility shut-offs for buildings. Maintain maintenance inventory and requisition parts and supplies as needed. Ensure each day's activities and any problems that occur are communicated to the other shifts using approved communication programs and standards. Display thorough knowledge of building systems, emergency response, and building documentation, including reading standard blueprints and electrical schematics concerning plumbing and HVAC. Display advanced engineering operations skills and general mechanical ability. Display professional journeyman-level expertise in at least three of the following areas, with basic skills in the remaining: air conditioning and refrigeration, electrical, plumbing, carpentry and finish skills, mechanical, general building management, pneumatic/electronic systems and controls, and/or energy conservation. Display solid knowledge and skill in the safe use of hand and power tools and other materials required to perform repair and maintenance tasks. Safely perform highly complex repairs of the physical property, electrical, plumbing and mechanical equipment, air conditioners, refrigeration and pool heaters - ensuring all methods, materials and practices meet company standards and local and national codes - with little or no supervision. Perform routine inspections of the entire property, noting safety hazards, lack of illumination, and down equipment (such as ice makers, fans, extractors, pumps), and take immediate corrective action.
Inspect and repair all mechanical equipment including, but not limited to: appliances, HVAC, electrical and plumbing components, diagnose and repair of boilers, pumps and related components. Use the Lockout/Tagout system before performing any maintenance work. Display thorough knowledge of maintenance contracts and vendors. Display advanced knowledge of engineering computer programs related to preventative maintenance, energy management, and other systems, including devices that interact with such programs. Perform advanced troubleshooting of hotel Mechanical, Electrical, and Plumbing (MEP) systems. Display the ability to train and mentor other engineers (e.g., Engineers I, II, and III) as necessary and supervise work in progress and act in a supervisory role in the absence of supervisors and/or management. Display ability to perform Engineer on Duty responsibilities, including readings and rounds.
Follow all company and safety and security policies and procedures; report any maintenance problems, safety hazards, accidents, or injuries; complete safety training and certifications; and properly store flammable materials. Ensure uniform and personal appearances are clean and professional, maintain confidentiality of proprietary information, and protect company assets. Welcome and acknowledge all guests according to company standards, anticipate and address guests' service needs, assist individuals with disabilities, and thank guests with genuine appreciation. Adhere to quality expectations and standards. Develop and maintain positive working relationships with others, support team to reach common goals, and listen and respond appropriately to the concerns of other employees. Speak with others using clear and professional language. Move, lift, carry, push, pull, and place objects weighing less than or equal to 50 pounds without assistance and heavier lifting or movement tasks with assistance. Move up and down stairs, service ramps, and/or ladders. Reach overhead and below the knees, including bending, twisting, pulling, and stooping. Enter and locate work-related information using computers and/or point of sale systems. Perform other reasonable job duties as requested.
PREFERRED QUALIFICATIONS
Education: High school diploma or G.E.D. equivalent.
Certificate in two-year technical diploma program for HVAC/refrigeration.
Related Work Experience: Extensive experience and training in general maintenance (advanced repairs), electrical or refrigeration,
exterior and interior surface preparation and painting.
At least 2 years of hotel engineering/maintenance experience.
Supervisory Experience: No supervisory experience required.
REQUIRED QUALIFICATIONS
License or Certification: Valid Driver's License
License or certification in refrigeration or electrical
(earned, or currently working towards receiving)
Universal Chlorofluorocarbon (CFC) certification
Must meet applicable state and federal certification and/or licensing requirements.
At Marriott International, we are dedicated to being an equal opportunity employer, welcoming all and providing access to opportunity. We actively foster an environment where the unique backgrounds of our associates are valued and celebrated. Our greatest strength lies in the rich blend of culture, talent, and experiences of our associates. We are committed to non-discrimination on any protected basis, including disability, veteran status, or other basis protected by applicable law.
At more than 100 award-winning properties worldwide, The Ritz-Carlton Ladies and Gentlemen create experiences so exceptional that long after a guest stays with us, the experience stays with them. Attracting the world's top hospitality professionals who curate lifelong memories, we believe that everyone succeeds when they are empowered to be creative, thoughtful and compassionate.
Every day, we set the standard for rare and special luxury service the world over and pride ourselves on delivering excellence in the care and comfort of our guests.
Your role will be to ensure that the “Gold Standards” of The Ritz-Carlton are delivered graciously and thoughtfully every day. The Gold Standards are the foundation of The Ritz-Carlton and are what guides us each day to be better than the next. It is this foundation and our belief that our culture drives success by which The Ritz-Carlton has earned the reputation as a global brand leader in luxury hospitality. As part of our team, you will learn and exemplify the Gold Standards, such as our Employee Promise, Credo and our Service Values. And our promise to you is that we offer the chance to be proud of the work you do and who you work with.
In joining The Ritz-Carlton, you join a portfolio of brands with Marriott International. Be where you can do your best work, begin your purpose, belong to an amazing global team, and become the best version of you.
SailPoint Engineer
Requirements engineer job in Roanoke, TX
Immediate need for a talented SailPoint Engineer. This is a 12+ month contract opportunity with long-term potential and is located in Westlake, TX (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID:25-95045
Pay Range: $65 - $70/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Must-have skills: SailPoint IdentityIQ platform - Lifecycle Manager, Certifications, Roles, Joiner/Mover/Leaver events
Active Directory, JDBC, SCIM 2.0, Azure Active Directory
Programming: Java, BeanShell/JavaScript, Angular, SQL.
XML, JSON, REST, SQL, Web and Application Servers like Tomcat.
APIs (REST, SCIM) leveraging Java-based development
Experience in building, installing, and testing IIQ using Services Standard Build/Deployment (SSB/SSD)
SailPoint IdentityIQ - LCM, Certifications, Roles
Java/BeanShell, SSB/SSD
Web services - REST, SCIM
SQL/PL-SQL
Active Directory
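As a concrete illustration of the SCIM 2.0 provisioning called out in the skills above, here is a minimal Python sketch that builds a SCIM 2.0 User resource per the core schema (the role's own stack is Java/BeanShell; the helper name and sample values are illustrative):

```python
import json

def build_scim_user(user_name, given, family, email):
    """Build a minimal SCIM 2.0 User resource (RFC 7643 core schema)."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

# Hypothetical user attributes; a real connector would map these
# from the authoritative HR source during a Joiner event.
payload = build_scim_user("jdoe", "Jane", "Doe", "jdoe@example.com")
print(json.dumps(payload, indent=2))
```

A connector would POST this document to the target system's `/Users` endpoint; the same shape is what IdentityIQ's SCIM 2.0 connector exchanges during provisioning.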
You have a B.S. in Computer Science (preferred), Engineering/Mathematics, or a comparable degree.
8+ years of experience building and developing with the SailPoint IdentityIQ product.
Expertise onboarding applications, Lifecycle events, Certifications, and Roles in the SailPoint IdentityIQ product.
Hands on experience with automation & pipeline implementation (Testing, Continuous Integration / Continuous Delivery pipeline).
You have hands-on experience in designing and developing using the following technologies:
Expertise working with the SailPoint IdentityIQ platform - Lifecycle Manager, Certifications, Roles, Joiner/Mover/Leaver events
Expertise onboarding applications with connectors like Active Directory, JDBC, SCIM 2.0, and Azure Active Directory
Expertise with the following programming languages: Java, BeanShell/JavaScript, Angular, SQL.
Expertise developing using XML, JSON, REST, SQL, Web and Application Servers like Tomcat.
Expertise developing APIs (REST, SCIM) leveraging Java-based development
Experience in building, installing, and testing IIQ using Services Standard Build/Deployment (SSB/SSD)
Experience installing, patching, and upgrading IIQ
Experience in analyzing & troubleshooting issues in various components of IIQ
Experience with Application Lifecycle Management tools such as GIT, Maven, Jenkins, Artifactory, Veracode, Sonar
You have experience working in an agile environment (Scrum and Kanban)
You possess strong engineering skills and experience developing maintainable, scalable multi-tiered applications
You have a relentless focus on automation and are always looking to reduce waste and improve efficiency
You should enjoy communicating and learning the business behind the application.
Our client is a leader in the banking industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
AI/ML Engineer
Requirements engineer job in Dallas, TX
About the Role
Apexon is seeking an experienced AI/ML Engineer with strong expertise in LLM development, MLOps, and building scalable GenAI solutions. You will design, build, and operationalize AI/ML systems that support enterprise clients across healthcare, BFSI, retail, and digital transformation engagements.
The ideal candidate has hands-on experience building end-to-end machine learning pipelines, optimizing large language model workflows, and deploying secure ML systems in production environments.
Responsibilities
LLM & AI Solution Development
Build, fine-tune, evaluate, and optimize Large Language Models (LLMs) for client-specific use cases such as document intelligence, chatbot automation, code generation, and workflow orchestration.
Develop RAG (Retrieval-Augmented Generation) pipelines using enterprise knowledge bases.
Implement prompt engineering, guardrails, hallucination reduction strategies, and safety frameworks.
Work with transformer-based architectures (GPT, LLaMA, Mistral, Falcon, etc.) and develop optimized model variants for low-latency and cost-efficient inference.
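As a rough illustration of the retrieval step in the RAG pipelines described above, here is a toy Python sketch using bag-of-words cosine similarity (a production pipeline would use learned embeddings and a vector store; the documents and query here are made up):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query; the top-k chunks
    would be stuffed into the LLM prompt as grounding context."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

docs = [
    "the refund policy allows returns within 30 days",
    "our office is closed on public holidays",
    "refunds are issued to the original payment method",
]
print(retrieve("how do refunds work", docs))
```

The real systems named in this posting would replace `cosine` over word counts with embedding-vector similarity served by a vector database, but the retrieve-then-generate shape is the same.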
Machine Learning Engineering
Develop scalable ML systems including feature pipelines, training jobs, and batch/real-time inference services.
Build and automate training, validation, and monitoring workflows for predictive and GenAI models.
Perform offline evaluation, A/B testing, performance benchmarking, and business KPI tracking.
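The offline evaluation mentioned above can be sketched in a few lines of plain Python (a real pipeline would use a metrics library and a proper held-out dataset; the sample labels below are illustrative):

```python
def confusion_counts(y_true, y_pred):
    """Count TP/FP/FN/TN for a binary classifier's predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall(y_true, y_pred):
    """Precision and recall from the confusion counts."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical held-out labels vs. model predictions.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")
```

Tracking these alongside business KPIs, and comparing them between model variants, is the substance of the A/B testing and benchmarking work described in the bullet above.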
MLOps & Platform Engineering
Build and maintain end-to-end MLOps pipelines using:
AWS SageMaker, Databricks, MLflow, Kubernetes, Docker, Terraform, Airflow
Manage CI/CD pipelines for model deployment, versioning, reproducibility, and governance.
Implement enterprise-grade model monitoring (data drift, performance, cost, safety).
Maintain infrastructure for vector stores, embeddings pipelines, feature stores, and inference endpoints.
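One common basis for the data-drift monitoring described above is the Population Stability Index; here is a minimal sketch over pre-binned feature distributions (the bin frequencies and the thresholds in the comment are illustrative rules of thumb, not this employer's standard):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned probability
    distributions (e.g., a feature at training time vs. in production)."""
    score = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # clamp to avoid log(0) on empty bins
        a = max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin frequencies
today = [0.10, 0.20, 0.30, 0.40]     # live-traffic bin frequencies
drift = psi(baseline, today)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
print(f"PSI = {drift:.3f}")
```

A monitoring job would compute this per feature on a schedule and alert when the score crosses a threshold, which is one concrete form of the "data drift" checks listed above.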
Data Engineering & Infrastructure
Build data pipelines for structured and unstructured data using:
Snowflake, S3, Kafka, Delta Lake, Spark (PySpark)
Work on data ingestion, transformation, quality checks, cataloging, and secure storage.
Ensure all systems adhere to Apexon and client-specific security, IAM, and compliance standards.
Cross-Functional Collaboration
Partner with product managers, data engineers, cloud architects, and QA teams.
Translate business requirements into scalable AI/ML solutions.
Ensure model explainability, governance documentation, and compliance adherence.
Basic Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, Data Science, or related field.
4+ years of experience in AI/ML engineering, including 1+ years working with LLMs/GenAI.
Strong experience with Python, Transformers, PyTorch/TensorFlow, and NLP frameworks.
Hands-on expertise with MLOps platforms: SageMaker, MLflow, Databricks, Kubernetes, Docker.
Strong SQL and data engineering experience (Snowflake, S3, Spark, Kafka).
Preferred Qualifications
Experience implementing Generative AI solutions for enterprise clients.
Expertise in distributed training, quantization, optimization, and GPU acceleration.
Experience with:
Vector Databases (Pinecone, Weaviate, FAISS)
RAG frameworks (LangChain, LlamaIndex)
Monitoring tools (Prometheus, Grafana, CloudWatch)
Understanding of model governance, fairness evaluation, and client compliance frameworks.
AI/ML Engineer - Full Time
Requirements engineer job in Irving, TX
Job Title - AI/ML Engineer
Job Type- Full Time
Key Responsibilities
Model Development: Design, build, and optimize machine learning models for predictive analytics, classification, recommendation systems, and NLP.
Data Processing: Collect, clean, and preprocess large datasets from various sources for training and evaluation.
Deployment: Implement and deploy ML models into production environments using frameworks like TensorFlow, PyTorch, or Scikit-learn.
Performance Monitoring: Continuously monitor and improve model accuracy, efficiency, and scalability.
Collaboration: Work closely with data engineers, software developers, and product teams to integrate AI solutions into applications.
Research & Innovation: Stay updated with the latest advancements in AI/ML and apply cutting-edge techniques to business challenges.
Documentation: Maintain clear documentation of models, processes, and workflows.
Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
Strong proficiency in Python, R, or Java.
Hands-on experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn).
Knowledge of data structures, algorithms, and software engineering principles.
Experience with cloud platforms (AWS, Azure, GCP) and MLOps tools.
Familiarity with big data technologies (Spark, Hadoop) is a plus.
Excellent problem-solving and analytical skills.
Preferred Qualifications
Experience in Natural Language Processing (NLP), Computer Vision, or Deep Learning.
Understanding of model interpretability and ethical AI practices.
Prior experience deploying models in production environments.
CyberArk Engineer
Requirements engineer job in Frisco, TX
You will be responsible for delivery and buildout of a Privileged Access ecosystem and apply comprehensive knowledge of privileged access security controls to the completion of complex assignments. You will identify and recommend changes in procedures, processes, and scope of delivery. This position reports to the Director of Privileged Access Engineering.
What you will do:
Troubleshoot complex heterogeneous environments related to privileged access technologies through server log and network traffic analysis, leaning on experience with troubleshooting and analysis techniques and tools.
Understand taxonomy of privileges on named or shared privileged accounts.
Incorporate cybersecurity best practices for technology governance over privileged account lifecycles.
Develop PAM (CyberArk) connection components and plugins as needed, utilizing various scripting tools (PowerShell, Python) and REST APIs.
Develop regular reporting and be accountable for deliverables.
Perform disaster resiliency tests and discovery audits, and present findings to management in order to ensure the security and integrity of the systems.
What you will need to have:
8+ years' experience in IT.
5+ years' experience in Cyber Security.
3+ years' experience in implementation, integration, and operations of privileged access technologies (CyberArk and all its components).
3+ years' experience in systems and network administration (Windows, Unix/Linux, Network devices) and good knowledge of PKI, Authentication tools and protocols (like SAML, Radius, PING), MFA.
2+ years' experience with privileged access controls in Unix and Windows environments.
2+ years' experience with broader IAM ecosystem of directories, identity management, and access management controls.
1+ years' experience in a senior technical role (have a deep understanding of the product) with IAM/PAM products such as CyberArk and its components.
Bachelor's degree in computer science, or a relevant field, or an equivalent combination of education, work, and/or military experience.
What would be great to have:
2+ years' experience in onboarding and managing privileged credentials across Windows, Linux/Unix, databases, networking devices and other platforms.
2+ years' experience in development/scripting (shell, PowerShell, Python), utilizing REST API methods and other current tools, including AI, to assist in automation activities such as provisioning of vault components and accounts and implementing access controls.
1+ years' experience developing technical solutions related to PAM and presenting them to management.
1+ years' experience interfacing with Corporate Audit and External Audit functions for regulatory compliance.
Cybersecurity certifications such as CISA, CISSP and CyberArk certifications - CDE, Sentry, Defender.
Observability Engineer
Requirements engineer job in Allen, TX
Observability Engineer / Dynatrace (Custom Data Integration)
• Design and implement observability frameworks by integrating and managing monitoring, logging, and tracing for cloud-native and on-premises systems.
• Build systems to collect and ingest data from various sources, often through APIs, and manage time-series data at scale.
• Create custom dashboards, alerts, and reporting views to provide clear, actionable insights into system performance.
• Develop and maintain status pages that provide a user-friendly web experience to communicate system health to stakeholders.
• Leverage tools such as Prometheus and Grafana for time-series data collection and visualization, and Dynatrace for deep-dive analysis and monitoring.
• Use data-driven insights to improve system reliability, reduce Mean Time To Resolution (MTTR), and optimize resource usage.
• Proficiency in Prometheus, Grafana, and Dynatrace.
• Experience with time-series data, including PromQL.
• Strong knowledge of API design, data modeling, and data pipelines.
• Expertise in scripting and backend development (e.g., Python, Go, Java).
• Proven experience designing and scaling observability stacks for production systems.
• Hands-on experience with cloud platforms (AWS, Azure).
• Experience with containerization (e.g., Kubernetes).
• Familiarity with dashboards as code and Terraform.
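To make the PromQL requirement above concrete, the effect of `rate()` on a counter can be approximated like this (a toy over raw samples; real Prometheus additionally handles counter resets and range extrapolation, and the sample data is made up):

```python
def rate(samples, window):
    """Approximate PromQL rate(): per-second increase of a counter
    over the trailing `window` seconds.
    `samples` is a list of (timestamp_seconds, counter_value), oldest first."""
    end = samples[-1][0]
    in_window = [(t, v) for t, v in samples if t >= end - window]
    if len(in_window) < 2:
        return 0.0
    (t0, v0), (t1, v1) = in_window[0], in_window[-1]
    increase = v1 - v0  # assumes no counter reset inside the window
    return increase / (t1 - t0)

# A counter scraped every 15s, growing by 30 per scrape (2 per second).
samples = [(0, 100), (15, 130), (30, 160), (45, 190), (60, 220)]
print(rate(samples, window=60))
```

The equivalent PromQL would be something like `rate(http_requests_total[1m])`; the point is that dashboards and alerts are built on this per-second derivative, not on the raw counter.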
Note: If you are interested, please share your resume at ********************* or reach me at **********.
AWS Cloud Engineer
Requirements engineer job in Plano, TX
Interview process: Onsite (Plano, TX)
We are looking for a highly skilled Senior AWS Data/Backend Engineer with strong expertise in C#, AWS Lambda, AWS Glue, and distributed caching technologies. The ideal candidate will excel at building and optimizing high-performance cloud-native APIs and backend services in a fully AWS-based environment. This role focuses heavily on API scalability, performance tuning, and data pipeline optimization.
🔧 Key Responsibilities
Design, develop, and optimize AWS Lambda functions in C# for low-latency, high-throughput workloads.
Implement and manage distributed caching using Redis/ElastiCache or OpenSearch.
Enhance and support AWS Glue ETL pipelines, data structures, and workflows.
Architect scalable backend services using API Gateway, Lambda, S3, and Glue Catalog.
Optimize data querying patterns across Aurora, DynamoDB, Redshift, or similar databases.
Perform end-to-end performance profiling and bottleneck analysis across APIs and data pipelines.
Improve observability with CloudWatch, X-Ray, and structured logging.
Ensure all cloud solutions meet best practices for security, scalability, reliability, and cost efficiency.
Collaborate with cross-functional teams to deliver high-performance APIs and backend systems.
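The caching layer described above would be Redis/ElastiCache driven from C# in this role; as a language-neutral illustration of the read-through pattern involved, here is a minimal in-process cache with TTL expiry in Python (all names are illustrative):

```python
import time

class TTLCache:
    """Minimal read-through cache with per-entry expiry: an in-process
    stand-in for the Redis/ElastiCache layer described above."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}

    def get_or_load(self, key, loader):
        now = self.clock()
        hit = self._store.get(key)
        if hit and hit[1] > now:
            return hit[0]                 # fresh cache hit
        value = loader(key)               # miss or expired: hit the backend
        self._store[key] = (value, now + self.ttl)
        return value

calls = []
def slow_lookup(key):
    calls.append(key)                     # stands in for a DB/API round-trip
    return key.upper()

cache = TTLCache(ttl_seconds=60)
cache.get_or_load("user:42", slow_lookup)
cache.get_or_load("user:42", slow_lookup)  # second call served from cache
print(len(calls))
```

With Redis the `_store` dict becomes `SET key value EX ttl` / `GET key`, but the latency win comes from the same shape: only cache misses reach Aurora/DynamoDB.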
✅ Required Skills & Experience
5+ years in backend engineering, cloud engineering, or data engineering.
Strong expertise in C#/.NET with hands-on AWS Lambda experience.
Experience implementing Redis/ElastiCache or OpenSearch caching layers.
Hands-on with AWS Glue, Glue Catalog, ETL optimization, and data lakes.
Strong knowledge of core AWS services: Lambda, API Gateway, S3, IAM, CloudWatch.
Experience with cloud databases: Aurora, DynamoDB, Redshift.
Strong understanding of data modeling, partitioning, and schema optimization.
Excellent debugging, performance tuning, and problem-solving abilities.
✨ Preferred Qualifications
Experience with event-driven architectures (Kinesis, Kafka, SNS, SQS).
Familiarity with CI/CD pipelines and automated deployments.
Knowledge of AWS cost optimization practices.
Understanding of DevOps/SRE monitoring and high-availability concepts.
SRE Engineer with Azure AI
Requirements engineer job in Plano, TX
Overall experience: 8 to 10+ years of experience performing production support for mission-critical, high-performance applications (customer care, retail, and eCommerce customer/agent-facing application experience preferred).
Experience using Docker, Kubernetes and Microsoft Azure Cloud, Unix, Networking and troubleshooting knowledge.
Experience with Application & Infrastructure Performance Monitoring tools like Dynatrace.
Experience with Application Log Analytics tools like Elastic.
Experience with visualization tools like Kibana and Grafana.
EFK stack experience preferred.
Creation of Dashboards on Dynatrace, ELK and Grafana.
Debugging Java and microservices logs.
Experience in Relational & NoSQL databases like Oracle & Cassandra.
Experience with Site Reliability Engineering preferred.
Generative AI and Workflow Automation skills:
Demonstrated experience leveraging AI-driven tools for automating end-to-end operational workflows.
Demonstrated experience using text-generative and code-generative AI models.
Automation, Gen AI & agentic workflow technical skills, including the GPT-4o LLM.
Advanced LangGraph and LangChain; Google Dialogflow (Api.ai); Google Vertex; Databricks, Spark, Snowflake.
HP NonStop Engineer (W2)
Requirements engineer job in Plano, TX
Open to: Jersey City, Tampa, FL, Columbus, OH, Plano, TX
W2 Hiring
Onsite from day one
We are seeking an experienced HP NonStop Engineer to support, enhance, and automate operations on mission-critical HPE NonStop systems. The ideal candidate will have strong hands-on experience with NonStop environments and a passion for automating manual operational tasks using modern scripting and configuration management tools.
Automate manual operations and system tasks
Skills:
Python, Ansible, Java, HPE NonStop TACL, Prognosis
Endpoint Engineer
Requirements engineer job in Plano, TX
About Us:
At DivergeIT, we're recognized for our innovative approach to IT consulting and managed services. We help organizations navigate digital transformation by delivering tailored technology strategies that drive business growth. We're seeking a highly skilled and motivated Service Engineer to join our dynamic team and further our commitment to excellence.
Why Join DivergeIT?
At DivergeIT, we offer a collaborative and innovative work environment where your expertise will be valued, and your contributions will make a tangible impact. We are committed to supporting your professional growth through continuous learning opportunities and certifications.
Position Overview
As an Endpoint Engineer, you will be responsible for the provisioning, configuration, and lifecycle management of Windows-based endpoints, including both physical workstations and Azure Virtual Desktop (AVD) environments. This position requires extensive experience with Microsoft Intune, SCCM, Patch My PC, 1E, and Autopilot, along with proficiency in creating, deploying, and maintaining Windows images. You will collaborate closely with the client's IT security team and work alongside our dedicated service desk to ensure optimal endpoint performance and security compliance.
Key Responsibilities
Design, create, and maintain standardized Windows images for physical workstations and Azure Virtual Desktop (AVD) using tools such as SCCM, Microsoft Deployment Toolkit (MDT), and Intune.
Manage and automate device provisioning and deployment through Windows Autopilot.
Administer patching, software deployment, and update compliance using Patch My PC, SCCM, and Intune.
Utilize 1E tools (e.g., Nomad, Tachyon) to support remote management, compliance, and endpoint performance monitoring.
Collaborate with the client's IT security team to implement and maintain endpoint security baselines and compliance standards.
Provide escalation support to our dedicated service desk and help drive resolution of complex endpoint issues.
Maintain up-to-date documentation for image creation, deployment processes, and system configurations.
Monitor and optimize AVD performance, scaling, and configuration consistency.
Stay informed about changes in Microsoft endpoint management tools and provide recommendations for improvements or modernization efforts.
Required Qualifications
3+ years of experience in an endpoint management or systems engineering role, preferably in an MSP or enterprise IT environment.
Expertise in creating and managing Windows images for both physical endpoints and AVD environments.
Strong understanding of Windows 10/11 OS deployment, device provisioning, group policy, and compliance management.
Experience working collaboratively with IT security and help desk teams.
Excellent troubleshooting, documentation, and communication skills.
Proficiency with:
Microsoft Intune (Endpoint Manager)
System Center Configuration Manager (SCCM)
Windows Autopilot
Patch My PC
1E Nomad, Tachyon
Preferred Qualifications
Microsoft certifications (e.g., MD-102, MS-101, AZ-140).
Experience with hybrid environments and Azure AD Join/Hybrid Join.
Familiarity with AVD scaling plans, FSLogix, and host pool image management.
Scripting knowledge (PowerShell) for automation of endpoint and imaging tasks.
Exposure to Zero Trust security models and conditional access policies.
Kubernetes Engineer
Requirements engineer job in Plano, TX
Hands on experience of Kubernetes engineering and development.
Minimum of 5-7+ years of experience working with hybrid infrastructure architectures
Experience in analyzing the architecture of on-prem infrastructure for applications (network, storage, processing, backup/DR, etc.).
Strong understanding of infrastructure capacity planning, monitoring, upgrades, IaC automation using Terraform and Ansible, and CI/CD using Jenkins/GitHub Actions.
Experience working with engineering teams to define best practices and processes as appropriate to support the entire infrastructure lifecycle - Plan, Build, Deploy, and Operate such as automate lifecycle activities - self-service, orchestration and provisioning, configuration management.
Experience defining infrastructure direction.
Drive continuous improvement including design, and standardization of process and methodologies.
Experience assessing feasibility, complexity and scope of new capabilities and solutions
Base Salary Range: $100,000 - $110,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Backend Engineer (Distributed Systems and Kubernetes)
Requirements engineer job in Dallas, TX
Software Engineer - Batch Compute (Kubernetes / HPC)
Dallas (Hybrid) | 💼 Full-time
A leading, well-funded quantitative research and technology firm is looking for a Software Engineer to join a team building and running a large-scale, high-performance batch compute platform.
You'll be working on modern Kubernetes-based infrastructure that powers complex research and ML workloads at serious scale, including contributions to a well-known open-source scheduling project used for multi-cluster batch computing.
What you'll be doing
• Building and developing backend services, primarily in Go (Python, C++, C# backgrounds are fine)
• Working on large-scale batch scheduling and distributed systems on Kubernetes
• Operating and improving HPC-style workloads, CI/CD pipelines, and Linux-based platforms
• Optimising data flows across systems using tools like PostgreSQL
• Debugging and improving performance across infrastructure, networking, and software layers
What they're looking for
• Strong software engineering background with an interest in Kubernetes and batch workloads
• Experience with Kubernetes internals (controllers, operators, schedulers)
• Exposure to HPC, job schedulers, or DAG-based workflows
• Familiarity with cloud platforms (ideally AWS), observability tooling, and event-driven systems
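Schedulers for the DAG-based workflows mentioned above ultimately need a dependency-respecting run order; here is a minimal Kahn's-algorithm sketch in Python (the example pipeline is made up, and a real batch scheduler layers queueing, fairness, and retries on top of this):

```python
from collections import deque

def topo_order(deps):
    """Kahn's algorithm: return a valid run order for jobs given a
    mapping of {job: [prerequisite jobs]}. Raises on cycles."""
    indegree = {job: len(pre) for job, pre in deps.items()}
    dependents = {job: [] for job in deps}
    for job, pre in deps.items():
        for p in pre:
            dependents[p].append(job)
    ready = deque(sorted(j for j, d in indegree.items() if d == 0))
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for nxt in dependents[job]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cycle detected: jobs cannot be scheduled")
    return order

pipeline = {
    "ingest": [],
    "clean": ["ingest"],
    "train": ["clean"],
    "report": ["train", "clean"],
}
print(topo_order(pipeline))
```

Kubernetes-native batch systems express the same idea at cluster scale: a job only becomes schedulable once every job it depends on has completed.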
Why it's worth a look
• Market-leading compensation plus bonus
• Hybrid setup from a brand-new Dallas office
• Strong work/life balance and excellent benefits
• Generous relocation support if needed
• The chance to work at genuine scale on technically hard problems
If you're interested (or know someone who might be), drop me a message and I'm happy to share more details anonymously.
Autodesk Vault Engineer - CDC5695559
Requirements engineer job in Plano, TX
Autodesk Vault Upgrade:
Lead planning, execution and validation of Autodesk Vault upgrade
Collaborate with engineering, CAD, and IT teams to support the Vault upgrade roadmap.
Ensure robust data migration, backup and recovery strategies.
Conduct required validation after upgrade
Document upgrade procedures and train end users as needed.
Coordinate with Autodesk support for unresolved upgrade related issues.
SCCM Package deployment:
Validate that SCCM packages work as expected.
Investigate and resolve any installation failures after package installation.
Monitor deployment success rates and troubleshoot issues.
Track and resolve user-reported bugs or regressions introduced during the upgrade.
Support rollback or contingency plans if critical issues arise.
Manage Vault user roles, permissions, and access controls.
Support CAD teams with Vault-related workflows.
Manually install or update SCCM packages for some applications.
Senior Data Engineer
Requirements engineer job in Plano, TX
Ascendion is a full-service digital engineering solutions company. We make and manage software platforms and products that power growth and deliver captivating experiences to consumers and employees. Our engineering, cloud, data, experience design, and talent solution capabilities accelerate transformation and impact for enterprise clients. Headquartered in New Jersey, our workforce of 6,000+ Ascenders delivers solutions from around the globe. Ascendion is built differently to engineer the next.
Ascendion | Engineering to elevate life
We have a culture built on opportunity, inclusion, and a spirit of partnership. Come, change the world with us:
Build the coolest tech for the world's leading brands
Solve complex problems - and learn new skills
Experience the power of transforming digital engineering for Fortune 500 clients
Master your craft with leading training programs and hands-on experience
Experience a community of change makers!
Join a culture of high-performing innovators with endless ideas and a passion for tech. Our culture is the fabric of our company, and it is what makes us unique and diverse. The way we share ideas, learning, experiences, successes, and joy allows everyone to be their best at Ascendion.
*** About the Role ***
Job Title: Senior Data Engineer
Key Responsibilities:
Design, develop, and maintain scalable and reliable data pipelines and ETL workflows.
Build and optimize data models and queries in Snowflake to support analytics and reporting needs.
Develop data processing and automation scripts using Python.
Implement and manage data orchestration workflows using Airflow, Airbyte, or similar tools.
Work with AWS data services including EMR, Glue, and Kafka for large-scale data ingestion and processing.
Ensure data quality, reliability, and performance across data pipelines.
Collaborate with analytics, product, and engineering teams to understand data requirements and deliver robust solutions.
Monitor, troubleshoot, and optimize data workflows for performance and cost efficiency.
Required Skills & Qualifications:
8+ years of hands-on experience as a Data Engineer.
Strong proficiency in SQL and Snowflake.
Extensive experience with ETL frameworks and data pipeline orchestration tools (Airflow, Airbyte, or similar).
Proficiency in Python for data processing and automation.
Hands-on experience with AWS data services, including EMR, Glue, and Kafka.
Strong understanding of data warehousing, data modeling, and distributed data processing concepts.
Nice to Have:
Experience working with streaming data pipelines.
Familiarity with data governance, security, and compliance best practices.
Experience mentoring junior engineers and leading technical initiatives.
Salary Range: The salary for this position is between $130,000 and $140,000 annually. Factors which may affect pay within this range may include geography/market, skills, education, experience, and other qualifications of the successful candidate.
Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: [medical insurance] [dental insurance] [vision insurance] [401(k) retirement plan] [long-term disability insurance] [short-term disability insurance] [5 personal days accrued each calendar year; the paid time off benefits meet the paid sick and safe time laws that pertain to the city/state] [10-15 days of paid vacation time] [6 paid holidays and 1 floating holiday per calendar year] [Ascendion Learning Management System]
Want to change the world? Let us know.
Tell us about your experiences, education, and ambitions. Bring your knowledge, unique viewpoint, and creativity to the table. Let's talk!
Senior Data Engineer (USC AND GC ONLY)
Requirements engineer job in Richardson, TX
Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
Duration: 6 Months (Possible Extension)
We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.
Must-Have Skills (Non-Negotiable)
9+ years in Data Engineering & Data Warehousing
9+ years hands-on ETL experience (Informatica, DataStage, etc.)
9+ years working with Teradata
3+ years hands-on GCP and BigQuery
Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
Strong background in query optimization, data structures, metadata & workload management
Experience delivering microservices-based data solutions
Proficiency in Big Data & cloud architecture
3+ years with SQL & NoSQL
3+ years with Python or similar scripting languages
3+ years with Docker, Kubernetes, CI/CD for data pipelines
Expertise in deploying & scaling apps in containerized environments (K8s)
Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
Familiarity with AGILE/SDLC methodologies
Key Responsibilities
Build, enhance, and optimize modern data pipelines on GCP
Implement scalable ETL frameworks, data structures, and workflow dependency management
Architect and tune BigQuery datasets, queries, and storage layers
Collaborate with cross-functional teams to define data requirements and support business objectives
Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
Drive clarity in project goals, timelines, and deliverables during Agile planning sessions
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* OR Call us on *****************
Senior Data Engineer
Requirements engineer job in Dallas, TX
About Us
Longbridge Securities, founded in March 2019 and headquartered in Singapore, is a next-generation online brokerage platform. Established by a team of seasoned finance professionals and technical experts from leading global firms, we are committed to advancing financial technology innovation. Our mission is to empower every investor by offering enhanced financial opportunities.
What You'll Do
As part of our global expansion, we're seeking a Data Engineer to design and build batch/real-time data warehouses and maintain data platforms that power trading and research for the US market. You'll work on data pipelines, APIs, storage systems, and quality monitoring to ensure reliable, scalable, and efficient data services.
Responsibilities:
Design and build batch/real-time data warehouses to support the US market growth
Develop efficient ETL pipelines to optimize data processing performance and ensure data quality/stability
Build a unified data middleware layer to reduce business data development costs and improve service reusability
Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
Discover data insights through collaboration with business owners
Maintain and develop enterprise data platforms for the US market
Qualifications
7+ years of data engineering experience with a proven track record in data platform/data warehouse projects
Proficient in Hadoop ecosystem (Hive, Kafka, Spark, Flink), Trino, SQL, and at least one programming language (Python/Java/Scala)
Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
Familiarity with AWS/cloud platforms and experience with Docker, Kubernetes
Experience with open-source data platform development, familiar with at least one relational database (MySQL/PostgreSQL)
Strong cross-department collaboration skills to translate business requirements into technical solutions
Bachelor's degree or higher in Computer Science, Data Science, Statistics, or related fields
Comfortable working in a fast-moving fintech/tech startup environment
Proficiency in Mandarin and English at the business communication level for international team collaboration
Bonus Points:
Experience with DolphinScheduler and SeaTunnel is a plus
Senior DevOps Engineer
Requirements engineer job in Dallas, TX
Qorali is excited to share a new role that can take your career to the next level! This role works with modern technologies that are deeply integrated across platforms and operations, with significant opportunities for continuous growth within the company. You will work with teams to implement monitoring practices that enhance the environment and efficiency of both cloud and on-prem spaces.
Expectations for role
Track metrics with alerts, notifications, and runbooks for operational monitoring, availability, and scalability
Implement resolutions and optimizations for different services with the team
Respond to production incidents while maintaining automation
Lead team improvement through research, retrospectives, and discussion/code reviews
Mentoring junior team members
Maintenance of large-scale systems with the ability to troubleshoot and problem solve.
Technical Skills
6+ years of DevOps experience
AWS (preferred) or Azure
Experience with monitoring environments including tools such as Splunk, AppDynamics, Datadog, Prometheus or Grafana.
Programming and scripting languages (Java, Python)
Container experience with Kubernetes, Docker, or Rancher
CI/CD experience (Jenkins preferred)
Leveraging the use of language models to enhance DevOps automation workflow
Benefits
15% bonus
20+ PTO days
6% 401k match
Health, vision, dental and life plans
Two days of remote working per week
This role is unable to support visa sponsorship or C2C; C2H is available.
Azure Data Engineer Sr
Requirements engineer job in Irving, TX
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, extract, transform, and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
Data Engineer
Requirements engineer job in Dallas, TX
Must be local to TX
Data Engineer - SQL, Python and Pyspark Expert (Onsite - Dallas, TX)
We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
Key Responsibilities
Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments
Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
Ensure data quality, consistency, security, and lineage across all stages of data processing
Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
Document data flows, logic, and transformation rules
Troubleshoot performance and quality issues in batch and real-time pipelines
Support compliance-related reporting (e.g., HMDA, CFPB)
Required Qualifications
6+ years of experience in data engineering or data development
Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
Strong hands-on skills in Python for scripting, data wrangling, and automation
Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
Experience working with mortgage banking data sets and domain knowledge is highly preferred
Strong understanding of data modeling (dimensional, normalized, star schema)
Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
Familiarity with ETL tools, orchestration frameworks (e.g., Airflow, ADF, dbt)
Data Engineer
Requirements engineer job in Dallas, TX
Junior Data Engineer
DESCRIPTION: BeaconFire, based in Central NJ, specializes in Software Development, Web Development, and Business Intelligence; we are looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
Qualifications:
Passion for data and a deep desire to learn.
Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline.
Intermediate Python; experience with data processing libraries (NumPy, Pandas, etc.) is a plus.
Experience with relational databases (SQL Server, Oracle, MySQL, etc.)
Strong written and verbal communication skills.
Ability to work both independently and as part of a team.
Responsibilities:
Collaborate with the analytics team to find reliable data solutions to meet the business needs.
Design and implement scalable ETL or ELT processes to support the business demand for data.
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Build and incorporate automated unit tests, participate in integration testing efforts.
Work with teams to resolve operational & performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.