Mobile Engineering
Mobile Engineering job in Austin, TX
JLL empowers you to shape a brighter way.
Our people at JLL and JLL Technologies are shaping the future of real estate for a better world by combining world class services, advisory and technology for our clients. We are committed to hiring the best, most talented people and empowering them to thrive, grow meaningful careers and to find a place where they belong. Whether you've got deep experience in commercial real estate, skilled trades or technology, or you're looking to apply your relevant experience to a new industry, join our team as we help shape a brighter way forward.
Mobile Engineering - JLL
What this job involves: This position focuses on the hands-on performance of ongoing preventive maintenance and repair work orders across multiple facility locations. You will maintain, operate, and repair building systems including HVAC, electrical, plumbing, and other critical infrastructure components. This mobile role requires you to travel between assigned buildings, conduct facility inspections, respond to emergencies, and ensure all systems operate efficiently to support client occupancy and satisfaction across JLL's building portfolio.
What your day-to-day will look like:
• Perform ongoing preventive maintenance and repair work orders on facility mechanical, electrical and other installed systems, equipment, and components.
• Maintain, operate, and repair all HVAC systems and associated equipment, electrical distribution equipment, plumbing systems, building interior/exterior repair, and related grounds.
• Conduct assigned facility inspections and due diligence efforts, reporting conditions that impact client occupancy and operations.
• Respond effectively to all emergencies and after-hours building activities as required.
• Prepare and submit summary reports to management listing conditions found during assigned work and recommend corrective actions.
• Study and maintain familiarity with building automation systems, fire/life safety systems, and other building-related equipment.
• Maintain compliance with all safety procedures, recognize hazards, and propose elimination methods while adhering to State, County, or City Ordinances, Codes, and Laws.
Required Qualifications:
• Valid state driver's license and Universal CFC Certification.
• Minimum four years of technical experience in all aspects of building engineering with strong background in packaged and split HVAC units, plumbing, and electrical systems.
• Physical ability to lift up to 80 lbs and climb ladders up to 30 ft.
• Ability to read schematics and technical drawings.
• Availability for on-call duties and overtime as required.
• Must pass background, drug/alcohol, and MVR screening process.
Preferred Qualifications:
• Experience with building automation systems and fire/life safety systems.
• Knowledge of CMMS systems such as Corrigo for work order management.
• Strong troubleshooting and problem-solving abilities across multiple building systems.
• Experience working in commercial building environments.
• Commitment to ongoing safety training and professional development.
Location: Mobile position covering Austin, TX and surrounding area.
Work Shift: Standard business hours with on-call availability
#HVACjobs
This position does not provide visa sponsorship. Candidates must be authorized to work in the United States without employer sponsorship.
Location:
On-site - Austin, TX
If this job description resonates with you, we encourage you to apply, even if you don't meet all the requirements. We're interested in getting to know you and what you bring to the table!
Personalized benefits that support personal well-being and growth:
JLL recognizes the impact that the workplace can have on your wellness, so we offer a supportive culture and comprehensive benefits package that prioritizes mental, physical and emotional health. Some of these benefits may include:
401(k) plan with matching company contributions
Comprehensive Medical, Dental & Vision Care
Paid parental leave at 100% of salary
Paid Time Off and Company Holidays
Early access to earned wages through Daily Pay
JLL Privacy Notice
Jones Lang LaSalle (JLL), together with its subsidiaries and affiliates, is a leading global provider of real estate and investment management services. We take our responsibility to protect the personal information provided to us seriously. Generally, the personal information we collect from you is for the purposes of processing in connection with JLL's recruitment process. We endeavour to keep your personal information secure with an appropriate level of security and to keep it for as long as we need it for legitimate business or legal reasons. We will then delete it safely and securely.
For more information about how JLL processes your personal data, please view our Candidate Privacy Statement.
For additional details please see our career site pages for each country.
For candidates in the United States, please see a full copy of our Equal Employment Opportunity policy here.
Jones Lang LaSalle (“JLL”) is an Equal Opportunity Employer and is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process - including the online application and/or overall selection process - you may email us at ******************. This email is only to request an accommodation. Please direct any other general recruiting inquiries to our Contact Us page > I want to work for JLL.
Accepting applications on an ongoing basis until candidate identified.
Vision Engineer
Vision Engineer job in Dallas, TX
$130-150k base salary
Hybrid - Joplin MO, Tuscaloosa AL, Dallas TX, Frederick MD, Phillipsburg KS, Knoxville TN
I'm currently partnering with a US-based building materials manufacturer that is looking for a Vision Engineer to lead the development and management of their AI computer vision/machine vision solutions to ensure the quality of internal products.
Core responsibilities include:
Developing AI models and vision algorithms for inspection
Collecting datasets for robust models
Overseeing the cleaning/preprocessing of image data across a manufacturing environment
Incorporating normalization, noise reduction and data augmentation techniques to enhance data usability/model performance and maintain preprocessing pipelines
Using new data and feedback to continuously refine existing models to maximize efficiency
Designing validation tests to ensure consistent model performance in different scenarios and implementing frameworks to assess vision system performance
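To illustrate the preprocessing responsibilities above, here is a minimal sketch of intensity normalization and simple augmentation (flips plus additive Gaussian noise). The function names and the toy 4x4 "inspection image" are illustrative, and NumPy is assumed to be available; a production pipeline would work on real camera frames and a richer augmentation set.

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Scale pixel intensities to [0, 1] to stabilize model training."""
    image = image.astype(np.float32)
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

def augment(image: np.ndarray, seed: int = 0) -> list[np.ndarray]:
    """Produce simple augmented variants: flips and additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    noisy = np.clip(image + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0)
    return [np.fliplr(image), np.flipud(image), noisy]

# Toy 4x4 grayscale "inspection image"
img = normalize(np.arange(16, dtype=np.uint8).reshape(4, 4))
variants = augment(img)
```

Keeping normalization and augmentation as small pure functions makes it straightforward to maintain the preprocessing pipeline and to test each step in isolation.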
Key Skills & Experience:
Good programming skills with C++ and Python - will be exposed to some 3rd party software too
Crucial that this person has a manufacturing background (Automated Vehicle, Robotics and BioMedical experience is less relevant here)
Demonstrable experience with computer vision models and machine learning frameworks in a manufacturing setting
Ideally should have a good understanding of data augmentation techniques and different image formats, as well as having managed large datasets in the past
Bachelors Degree in Computer Science, Machine Learning or related fields
Interested in this role? Apply now or share a copy of your resume to ***************************
ServiceNow Engineer
ServiceNow Engineer job in Austin, TX
- Proficient in ServiceNow and able to write and manipulate objects in ServiceNow (both server side and client side), with 6+ years of hands-on experience.
- Experience in Service Catalog and building workflows/record producers.
- Good experience writing Business Rules, Script Includes, and other server-side components.
- Experience writing highly performant web applications and an eye for detail.
- Experience testing code written on the ServiceNow platform.
Roles and responsibilities:
- Work on building processes on ServiceNow.
- Lead projects and implementations.
- Work with QA team to make sure all the defects are fixed and the final product is ready for customer release.
- Work on Radars (Apple's internal issue tracker) and deliver them within the project timelines.
- Highlight risks/concerns to Apple leads.
- Provide deployment instructions
- Provide detailed documentation of functionality in code and in internal project management tools.
Experience in the HRSD module of ServiceNow is a plus.
Best Regards,
Dipendra Gupta
Technical Recruiter
*****************************
Engineer
Kubernetes Engineer job in Austin, TX
Job Title: Kubernetes Engineer
Work Schedule: Tuesday - Thursday (Onsite)
We are seeking an experienced Kubernetes Engineer to design, build, manage, and maintain Kubernetes clusters across on-premises and cloud environments. The ideal candidate will have expert-level Kubernetes administration skills, strong Linux fundamentals, and hands-on experience with cluster upgrades, automation, and troubleshooting complex distributed systems.
This role requires a self-starter who can take ownership of projects end-to-end and collaborate effectively with cross-functional teams.
Roles & Responsibilities
Design, build, and administer Kubernetes clusters from scratch in both on-premises and cloud environments.
Manage and maintain all Kubernetes components (API server, etcd, scheduler, controller manager, kubelet, networking, etc.).
Perform Kubernetes version upgrades, patching, and cluster lifecycle management with minimal downtime.
Troubleshoot and resolve complex issues related to Kubernetes, container orchestration, networking, and distributed systems.
Work extensively with Linux systems, including performance tuning, security, and system-level debugging.
Build and manage containerized workloads using Docker and related container technologies.
Develop and maintain CI/CD pipelines and automate infrastructure and deployments using tools such as Ansible.
Ensure high availability, scalability, reliability, and security of Kubernetes platforms.
Collaborate with development, infrastructure, and security teams to support application deployments and platform improvements.
Take full ownership of assigned projects, from design and implementation to documentation and ongoing support.
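As a small illustration of the troubleshooting work above, this sketch parses `kubectl get nodes -o json` output and flags nodes whose Ready condition is not "True". The trimmed sample payload is hypothetical; real output carries many more fields, but the `items / status / conditions` structure shown is the documented shape.

```python
import json

def not_ready_nodes(kubectl_json: str) -> list[str]:
    """Return names of nodes whose Ready condition is not 'True',
    given the output of `kubectl get nodes -o json`."""
    nodes = json.loads(kubectl_json)["items"]
    flagged = []
    for node in nodes:
        conditions = {c["type"]: c["status"] for c in node["status"]["conditions"]}
        if conditions.get("Ready") != "True":
            flagged.append(node["metadata"]["name"])
    return flagged

# Trimmed, hypothetical sample payload
sample = json.dumps({"items": [
    {"metadata": {"name": "worker-1"},
     "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "worker-2"},
     "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
]})
```

Small checks like this are easy to fold into upgrade runbooks or monitoring automation, which is where much of the cluster lifecycle work in this role lives.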
Mandatory Skills
Strong hands-on experience with Kubernetes in on-premises environments.
Proven experience performing Kubernetes version upgrades in production environments.
Strong Linux expertise, including system administration and troubleshooting.
Required Skills & Qualifications
Expert-level experience in Kubernetes administration.
Deep understanding of distributed systems, networking, and Linux internals.
Hands-on experience with Docker and containerization concepts.
Proficiency with CI/CD pipelines and automation tools such as Ansible.
Strong problem-solving skills and ability to work independently.
Excellent communication skills and ability to collaborate across technical and non-technical teams.
AI/ML Engineer
AI/ML Engineer job in Dallas, TX
About the Role
Apexon is seeking an experienced AI/ML Engineer with strong expertise in LLM development, MLOps, and building scalable GenAI solutions. You will design, build, and operationalize AI/ML systems that support enterprise clients across healthcare, BFSI, retail, and digital transformation engagements.
The ideal candidate has hands-on experience building end-to-end machine learning pipelines, optimizing large language model workflows, and deploying secure ML systems in production environments.
Responsibilities
LLM & AI Solution Development
Build, fine-tune, evaluate, and optimize Large Language Models (LLMs) for client-specific use cases such as document intelligence, chatbot automation, code generation, and workflow orchestration.
Develop RAG (Retrieval-Augmented Generation) pipelines using enterprise knowledge bases.
Implement prompt engineering, guardrails, hallucination reduction strategies, and safety frameworks.
Work with transformer-based architectures (GPT, LLaMA, Mistral, Falcon, etc.) and develop optimized model variants for low-latency and cost-efficient inference.
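The retrieval step of a RAG pipeline can be sketched with a simple bag-of-words ranker. This is a minimal sketch only: a production system would use embeddings and a vector store rather than token overlap, and the knowledge-base snippets here are made up.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank knowledge-base snippets by similarity to the query."""
    q = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the LLM prompt in the top retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

kb = ["Claims must be filed within 30 days.",
      "Premiums are billed monthly."]
prompt = build_prompt("When must claims be filed?", kb)
```

The prompt that reaches the model carries the retrieved enterprise context ahead of the question, which is the core idea behind grounding LLM answers in a knowledge base.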
Machine Learning Engineering
Develop scalable ML systems including feature pipelines, training jobs, and batch/real-time inference services.
Build and automate training, validation, and monitoring workflows for predictive and GenAI models.
Perform offline evaluation, A/B testing, performance benchmarking, and business KPI tracking.
MLOps & Platform Engineering
Build and maintain end-to-end MLOps pipelines using:
AWS SageMaker, Databricks, MLflow, Kubernetes, Docker, Terraform, Airflow
Manage CI/CD pipelines for model deployment, versioning, reproducibility, and governance.
Implement enterprise-grade model monitoring (data drift, performance, cost, safety).
Maintain infrastructure for vector stores, embeddings pipelines, feature stores, and inference endpoints.
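Data-drift monitoring of the kind listed above is often implemented with a Population Stability Index (PSI). The following is a minimal sketch: the bin count, the 1e-6 floor that avoids log(0) on empty bins, and the "PSI > 0.2 means meaningful drift" rule of thumb are common conventions rather than a standard, and the sample values are made up.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a baseline sample and a live sample.
    Rule of thumb (assumed here): PSI > 0.2 signals meaningful drift."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor each bucket to avoid log(0) on empty bins
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # training-time feature values
shifted  = [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]   # live values after drift
```

In a monitoring pipeline a check like this would run per feature on a schedule and page the team when the index crosses the agreed threshold.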
Data Engineering & Infrastructure
Build data pipelines for structured and unstructured data using:
Snowflake, S3, Kafka, Delta Lake, Spark (PySpark)
Work on data ingestion, transformation, quality checks, cataloging, and secure storage.
Ensure all systems adhere to Apexon and client-specific security, IAM, and compliance standards.
Cross-Functional Collaboration
Partner with product managers, data engineers, cloud architects, and QA teams.
Translate business requirements into scalable AI/ML solutions.
Ensure model explainability, governance documentation, and compliance adherence.
Basic Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, Data Science, or related field.
4+ years of experience in AI/ML engineering, including 1+ years working with LLMs/GenAI.
Strong experience with Python, Transformers, PyTorch/TensorFlow, and NLP frameworks.
Hands-on expertise with MLOps platforms: SageMaker, MLflow, Databricks, Kubernetes, Docker.
Strong SQL and data engineering experience (Snowflake, S3, Spark, Kafka).
Preferred Qualifications
Experience implementing Generative AI solutions for enterprise clients.
Expertise in distributed training, quantization, optimization, and GPU acceleration.
Experience with:
Vector Databases (Pinecone, Weaviate, FAISS)
RAG frameworks (LangChain, LlamaIndex)
Monitoring tools (Prometheus, Grafana, CloudWatch)
Understanding of model governance, fairness evaluation, and client compliance frameworks.
Gen AI Engineer with GCP
Gen AI Engineer job in Dallas, TX
Role : Gen AI Engineer with GCP
Type: Contract
Note: Need resumes with 10+ years of experience.
Must have: Gen AI, LLM, RAG, MLOps, Vertex AI, and GCP experience.
Design and build end-to-end AI/ML systems and applications, from experimentation and data preprocessing to production deployment.
Implement and optimize Generative AI models (text, image, multimodal) and integrate capabilities like Retrieval-Augmented Generation (RAG) and prompt engineering strategies to enhance LLMs with external knowledge sources.
Leverage a wide range of GCP services, including Vertex AI, BigQuery, Cloud Run, GKE (Google Kubernetes Engine), Dataflow, and Pub/Sub, to build, train, and deploy custom AI models and solutions.
Manage the entire model lifecycle, including training, evaluation, fine-tuning, versioning, deployment, and monitoring performance in production environments.
Optimize models and systems for improved performance, scalability, efficiency, and cost, implementing techniques like model quantization and GPU memory optimization.
Build and maintain scalable and reliable ML pipelines using MLOps practices, employing tools like Docker and Kubernetes for containerization and CI/CD pipelines for automated deployment.
Document technical designs, processes, and best practices, and potentially mentor junior team members.
Aveva PI Engineer
Aveva PI Engineer job in Houston, TX
Title: Aveva PI Engineer
Duration: 08 to 12 months (possible extension)
Mandatory Qualifications
3-5 years of hands-on experience with AVEVA PI Historian
AVEVA PI certifications:
PI System Infrastructure Specialist
PI System Installation Specialist
Experience with PI Data Archive versions: 2010, 2012, 2016 R2, 2017, 2018 SP3
Expertise in PI Data Archive migration from legacy versions
Strong understanding of the Utilities/Energy industry, especially distribution engineering
Experience integrating SCADA and real-time systems with PI
Experience with PI Interfaces:
PI-OPC, PI-PI, PI-DNP3, PI-RDBMS, PI-Modbus, PI-UFL
High-Availability PI deployments and interface failover configurations
Experience with:
PI Asset Framework (AF)
PI Analyses, Event Frames, Notifications
PI Vision, ProcessBook, DataLink
Experience with PI Developer tools:
PI OLEDB, PI ODBC, AF SDK, PI Web API
Custom application development using .NET (C#, ASP.NET)
Cloud Engineer
Cloud Engineer job in Houston, TX
Job Title: Cloud Engineer
The Cloud Engineer is responsible for designing, deploying, managing, and optimizing cloud-based infrastructure and services. This role ensures secure, scalable, and efficient cloud environments across platforms such as AWS, Azure, or Google Cloud. The position supports both project initiatives and day-to-day operations, with a focus on automation, reliability, and cost optimization.
Key Responsibilities
Design, implement, and maintain cloud infrastructure including compute, storage, networking, and security resources
Build and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, ARM/Bicep, or Pulumi
Develop automation scripts and workflows to improve deployment speed, consistency, and reliability
Monitor cloud environments for performance, cost efficiency, availability, and security compliance
Implement and manage identity, access, and security controls within cloud platforms
Support containerization and orchestration solutions (Docker, Kubernetes, ECS, AKS, GKE)
Collaborate with development, DevOps, security, and IT teams to design cloud-native and hybrid solutions
Troubleshoot cloud-related issues spanning compute, networking, storage, automation, and CI/CD pipelines
Assist with cloud migrations, optimization efforts, and architecture improvements
Maintain documentation for cloud deployments, standards, and operational procedures
Required Qualifications and Experience
2-5+ years of hands-on experience with AWS, Azure, or Google Cloud
Strong understanding of cloud networking (VPCs, VNets, routing, firewalls, load balancers)
Experience with automation and Infrastructure as Code tools (Terraform, CloudFormation, ARM, etc.)
Familiarity with scripting languages such as Python, PowerShell, or Bash
Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins, Azure DevOps, etc.)
Knowledge of cloud security best practices, IAM principles, and compliance standards
Strong troubleshooting and analytical skills
Preferred Qualifications
Cloud certifications such as AWS Solutions Architect, AWS SysOps, Azure Administrator, Azure Solutions Architect, or Google Cloud Engineer
Experience with serverless technologies (Lambda, Azure Functions, Cloud Run)
Familiarity with monitoring and logging tools (CloudWatch, Azure Monitor, Stackdriver, Datadog)
Experience with hybrid cloud environments and on-prem integration
Knowledge of DevOps practices and container orchestration
Education
Bachelor's degree in Computer Science, Information Technology, Engineering, or related field preferred
Equivalent hands-on experience will be considered
Databricks Engineer with DLT & Pyspark (USC & GC)
Databricks Engineer job in Houston, TX
• More than 5-6 years of relevant experience; very proficient in Databricks, the DLT (Delta Live Tables) framework, and PySpark; adept at leading technical conversations.
Thanks & Regards
Alok Ranjan Pathak | Team Lead - US Staffing
Email: *********************** | Desk: **************
Ampstek LLC - Global IT Partner | ***************
Python Engineer
Python Engineer job in Austin, TX
Python Backend Developer (6-Month Contract) - Austin, TX
We're seeking an experienced Python Developer to join a fast-paced gaming technology team on a 6-month contract in Austin, TX. The successful candidate will work on backend systems that power large-scale, high-performance gaming platforms.
Contract Details:
Duration: 6 months (potential for extension)
Location: Austin, TX (hybrid 2 days a week)
Rate: $50-60/hr (W2)
Industry: Casino gaming and entertainment technology
Key Responsibilities:
Design, develop, and maintain scalable backend services using Python.
Collaborate with cross-functional teams including frontend, DevOps, and game engineers.
Optimize APIs and data pipelines for performance and reliability.
Contribute to code reviews, architecture discussions, and technical documentation.
Required Skills:
4+ years of professional experience in backend development with Python (e.g., FastAPI, Flask, or Django).
Strong understanding of RESTful APIs and microservice architecture.
Familiarity with cloud infrastructure (AWS, GCP, or Azure).
Excellent debugging and problem-solving skills.
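As a framework-free sketch of the kind of backend endpoint this role builds, here is a minimal WSGI JSON route with a direct-invocation test client. The route path and payloads are illustrative; in practice the team would use FastAPI, Flask, or Django as the posting notes.

```python
import json

def app(environ, start_response):
    """Minimal WSGI app sketching a REST-style JSON endpoint."""
    path = environ.get("PATH_INFO", "/")
    if environ.get("REQUEST_METHOD") == "GET" and path == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

def call(method: str, path: str):
    """Invoke the app in-process, the way a test client would."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app({"REQUEST_METHOD": method, "PATH_INFO": path},
                        start_response))
    return captured["status"], json.loads(body)
```

Because a WSGI app is just a callable, it can be exercised in-process without starting a server, which keeps API tests fast in CI.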
Please apply with an up-to-date resume.
User Interface Engineer
User Interface Engineer job in Austin, TX
We are looking for a talented UI engineer to explore new initiatives with the AiDP Frameworks team. On the team you will investigate new component library frameworks and enable new ways of building UI with AI agents. The Frameworks team builds and maintains the Bricks component library. Bricks is a multi-framework component library built with Web Components and the Stencil framework. We are looking to understand potential alternatives to Stencil for managing a highly used component library that works across the most popular UI frameworks.
Responsibilities:
Investigate cross-platform component library technologies and build proof-of-concept applications to evaluate their features and limitations.
Implement and enhance a component library-based MCP (Model Context Protocol) server.
Architect and develop multi-step AI agents that integrate with UI component libraries and frameworks.
Ensure AI agents are reliable, scalable, and cost-optimized for high-volume environments.
Produce comprehensive technical analyses comparing frameworks and provide actionable recommendations.
Work independently to deliver high-quality, well-tested code.
Qualifications:
Web Components: At least 1 year
AI Agents
MCP (Model Context Protocol)
Responsive Design: 2-5 years
TypeScript: 2-5 years
HTML/CSS/JavaScript: 2-5 years
Component Libraries: At least 1 year
Nice to Have:
ReactJS: At least 1 year
Vue.js: At least 1 year
AngularJS: At least 1 year
Sass/Scss: At least 1 year
Accessibility: At least 1 year
Stencil.js: At least 1 year
CyberArk Engineer
CyberArk Engineer job in Frisco, TX
You will be responsible for delivery and buildout of a Privileged Access ecosystem and apply comprehensive knowledge of privileged access security controls to the completion of complex assignments. You will identify and recommend changes in procedures, processes, and scope of delivery. This position reports to the Director of Privileged Access Engineering.
What you will do:
Troubleshoot complex heterogeneous environments related to privileged access technologies through server log and network traffic analysis, leaning on experience with troubleshooting and analysis techniques and tools.
Understand taxonomy of privileges on named or shared privileged accounts.
Incorporate cybersecurity best practices for technology governance over privileged account lifecycles.
Develop PAM (CyberArk) connection components and plugins as needed, utilizing various scripting tools (PowerShell, Python) and REST APIs.
Develop regular reporting and be accountable for deliverables.
Perform disaster resiliency tests and discovery audits, and present findings to management in order to ensure the security and integrity of the systems.
What you will need to have:
8+ years' experience in IT.
5+ years' experience in Cyber Security.
3+ years' experience in implementation, integration, and operations of privileged access technologies (CyberArk and all its components).
3+ years' experience in systems and network administration (Windows, Unix/Linux, Network devices) and good knowledge of PKI, Authentication tools and protocols (like SAML, Radius, PING), MFA.
2+ years' experience with privileged access controls in Unix and Windows environments.
2+ years' experience with broader IAM ecosystem of directories, identity management, and access management controls.
1+ years' experience in a senior technical role (have a deep understanding of the product) with IAM/PAM products such as CyberArk and its components.
Bachelor's degree in computer science, or a relevant field, or an equivalent combination of education, work, and/or military experience.
What would be great to have:
2+ years' experience in onboarding and managing privileged credentials across Windows, Linux/Unix, databases, networking devices and other platforms.
2+ years' experience in development/scripting (shell, PowerShell, Python), utilizing REST API methods and other current tools, including AI, to assist in automation activities such as provisioning vault components and accounts and implementing access controls.
1+ years' experience developing technical solutions related to PAM and presenting them to management.
1+ years' experience interfacing with Corporate Audit and External Audit functions for regulatory compliance.
Cybersecurity certifications such as CISA, CISSP and CyberArk certifications - CDE, Sentry, Defender.
Endpoint Engineer
Endpoint Engineer job in Plano, TX
About Us:
At DivergeIT, we're recognized for our innovative approach to IT consulting and managed services. We help organizations navigate digital transformation by delivering tailored technology strategies that drive business growth. We're seeking a highly skilled and motivated Endpoint Engineer to join our dynamic team and further our commitment to excellence.
Why Join DivergeIT?
At DivergeIT, we offer a collaborative and innovative work environment where your expertise will be valued, and your contributions will make a tangible impact. We are committed to supporting your professional growth through continuous learning opportunities and certifications.
Position Overview
As an Endpoint Engineer, you will be responsible for the provisioning, configuration, and lifecycle management of Windows-based endpoints, including both physical workstations and Azure Virtual Desktop (AVD) environments. This position requires extensive experience with Microsoft Intune, SCCM, Patch My PC, 1E, and Autopilot, along with proficiency in creating, deploying, and maintaining Windows images. You will collaborate closely with the client's IT security team and work alongside our dedicated service desk to ensure optimal endpoint performance and security compliance.
Key Responsibilities
Design, create, and maintain standardized Windows images for physical workstations and Azure Virtual Desktop (AVD) using tools such as SCCM, Microsoft Deployment Toolkit (MDT), and Intune.
Manage and automate device provisioning and deployment through Windows Autopilot.
Administer patching, software deployment, and update compliance using Patch My PC, SCCM, and Intune.
Utilize 1E tools (e.g., Nomad, Tachyon) to support remote management, compliance, and endpoint performance monitoring.
Collaborate with the client's IT security team to implement and maintain endpoint security baselines and compliance standards.
Provide escalation support to our dedicated service desk and help drive resolution of complex endpoint issues.
Maintain up-to-date documentation for image creation, deployment processes, and system configurations.
Monitor and optimize AVD performance, scaling, and configuration consistency.
Stay informed about changes in Microsoft endpoint management tools and provide recommendations for improvements or modernization efforts.
Required Qualifications
3+ years of experience in an endpoint management or systems engineering role, preferably in an MSP or enterprise IT environment.
Expertise in creating and managing Windows images for both physical endpoints and AVD environments.
Strong understanding of Windows 10/11 OS deployment, device provisioning, group policy, and compliance management.
Experience working collaboratively with IT security and help desk teams.
Excellent troubleshooting, documentation, and communication skills.
Proficiency with:
Microsoft Intune (Endpoint Manager)
System Center Configuration Manager (SCCM)
Windows Autopilot
Patch My PC
1E Nomad, Tachyon
Preferred Qualifications
Microsoft certifications (e.g., MD-102, MS-101, AZ-140).
Experience with hybrid environments and Azure AD Join/Hybrid Join.
Familiarity with AVD scaling plans, FSLogix, and host pool image management.
Scripting knowledge (PowerShell) for automation of endpoint and imaging tasks.
Exposure to Zero Trust security models and conditional access policies.
AI/ML Engineer - Full Time
AI/ML Engineer job in Irving, TX
Job Title - AI/ML Engineer
Job Type- Full Time
Key Responsibilities
Model Development: Design, build, and optimize machine learning models for predictive analytics, classification, recommendation systems, and NLP.
Data Processing: Collect, clean, and preprocess large datasets from various sources for training and evaluation.
Deployment: Implement and deploy ML models into production environments using frameworks like TensorFlow, PyTorch, or Scikit-learn.
Performance Monitoring: Continuously monitor and improve model accuracy, efficiency, and scalability.
Collaboration: Work closely with data engineers, software developers, and product teams to integrate AI solutions into applications.
Research & Innovation: Stay updated with the latest advancements in AI/ML and apply cutting-edge techniques to business challenges.
Documentation: Maintain clear documentation of models, processes, and workflows.
Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
Strong proficiency in Python, R, or Java.
Hands-on experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn).
Knowledge of data structures, algorithms, and software engineering principles.
Experience with cloud platforms (AWS, Azure, GCP) and MLOps tools.
Familiarity with big data technologies (Spark, Hadoop) is a plus.
Excellent problem-solving and analytical skills.
Preferred Qualifications
Experience in Natural Language Processing (NLP), Computer Vision, or Deep Learning.
Understanding of model interpretability and ethical AI practices.
Prior experience deploying models in production environments.
Kubernetes Engineer
Kubernetes Engineer job in Plano, TX
Hands on experience of Kubernetes engineering and development.
Minimum 5-7+ years of experience in working with hybrid Infra architectures
Experience in analyzing the architecture of on-prem infrastructure for applications (network, storage, processing, backup/DR, etc.).
Strong understanding of infrastructure capacity planning, monitoring, upgrades, IaC automation using Terraform and Ansible, and CI/CD using Jenkins/GitHub Actions.
Experience working with engineering teams to define best practices and processes to support the entire infrastructure lifecycle - Plan, Build, Deploy, and Operate - such as automating lifecycle activities: self-service, orchestration and provisioning, and configuration management.
Experience defining infrastructure direction.
Drive continuous improvement including design, and standardization of process and methodologies.
Experience assessing feasibility, complexity, and scope of new capabilities and solutions.
Base Salary Range: $100,000 - $110,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Backend Engineer (Distributed Systems and Kubernetes)
Requirements engineer job in Dallas, TX
Software Engineer - Batch Compute (Kubernetes / HPC)
Dallas (Hybrid) | Full-time
A leading, well-funded quantitative research and technology firm is looking for a Software Engineer to join a team building and running a large-scale, high-performance batch compute platform.
You'll be working on modern Kubernetes-based infrastructure that powers complex research and ML workloads at serious scale, including contributions to a well-known open-source scheduling project used for multi-cluster batch computing.
What you'll be doing
• Building and developing backend services, primarily in Go (Python, C++, C# backgrounds are fine)
• Working on large-scale batch scheduling and distributed systems on Kubernetes
• Operating and improving HPC-style workloads, CI/CD pipelines, and Linux-based platforms
• Optimising data flows across systems using tools like PostgreSQL
• Debugging and improving performance across infrastructure, networking, and software layers
What they're looking for
• Strong software engineering background with an interest in Kubernetes and batch workloads
• Experience with Kubernetes internals (controllers, operators, schedulers)
• Exposure to HPC, job schedulers, or DAG-based workflows
• Familiarity with cloud platforms (ideally AWS), observability tooling, and event-driven systems
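The DAG-based batch workflows mentioned above come down to ordering jobs so that dependencies run first; a minimal stdlib-only Python sketch (the job names are hypothetical, and a real platform would use a scheduler rather than this toy):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical batch jobs: each key depends on the jobs in its value set.
dag = {
    "ingest": set(),
    "clean": {"ingest"},
    "features": {"clean"},
    "train": {"features"},
    "report": {"clean", "train"},
}

# static_order() yields jobs so that every dependency precedes its dependents.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

A production scheduler adds parallelism, retries, and resource-aware placement on top of exactly this ordering constraint.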
Why it's worth a look
• Market-leading compensation plus bonus
• Hybrid setup from a brand-new Dallas office
• Strong work/life balance and excellent benefits
• Generous relocation support if needed
• The chance to work at genuine scale on technically hard problems
If you're interested (or know someone who might be), drop me a message and I'm happy to share more details anonymously.
Engineer
Requirements engineer job in Texarkana, TX
FEDITC, LLC is a fast-growing business supporting DoD and other intelligence agencies worldwide. FEDITC develops mission critical national security systems throughout the world directly supporting the Warfighter, DoD Leadership, & the country. We are proud & honored to provide these services.
Overview of position:
We are looking for an Engineer to work in the Texarkana area.
The Engineer will play a key role in supporting depot maintenance and production operations in Texarkana, focusing on the design, development, and improvement of complex equipment and tooling used in the overhaul, repair, modification, and upgrade of both wheeled and tracked military vehicles.
This position requires a highly skilled engineer capable of performing original design studies, developing innovative solutions for specialized vehicle systems (including hulls, suspensions, engines, transmissions, and electronic components), and integrating advanced automation technologies such as robotics and machine vision into depot operations.
The Engineer will also oversee the fabrication, assembly, and implementation of production and test equipment, ensure proper function and efficiency, and provide training and technical support to operational personnel.
An active NACI and United States citizenship are required to be considered for this position.
Responsibilities
Perform original design studies related to the concept and design of equipment, fixtures, and tooling to support primary vehicle systems and their components, including: Hulls, chassis, suspensions, turrets, armament, engines, transmissions, final drives, fire control instruments, electronic components, hydraulic components, and auxiliary equipment.
Provide complex independent support for the depot mission in the conceptual design, improvement, and installation of mission production equipment, associated facilities, methods, and procedures to predict, evaluate, and specify results.
Monitor technological developments of equipment used in both private industry and government operations.
Review mission overhaul, repair, modification, and upgrade programs to ensure present systems and methods perform required functions in the most economical manner.
Design complete and complex production and test equipment for the depot maintenance program.
Oversee the purchase and fabrication of equipment, fixtures, and tools, many of which are unique due to the specialized requirements of tracked and wheeled vehicle and artillery maintenance operations and are not found commercially or within existing designs.
Incorporate flexible automation such as robotics and machine vision technology into design efforts.
Oversee assembly and ensure proper operation/function of equipment.
Demonstrate, train, and release equipment to operating shop personnel.
Experience/Skills:
5-10 years of relevant engineering experience required.
Strong knowledge of mechanical design principles, manufacturing processes, and automation technologies.
Experience with production or test equipment design for vehicle systems is highly desirable.
Ability to manage multiple design and implementation projects simultaneously.
Clearance:
Active NACI Clearance is required.
Must be a United States Citizen and pass a background check.
Maintain applicable security clearance(s) at the level required by the client and/or applicable certification(s) as requested by FEDITC and/or required by FEDITC's client(s)/customer(s)/prime contractor(s).
FEDITC, LLC. is committed to fostering an inclusive workplace and provides equal employment opportunities (EEO) to all employees and applicants for employment. We do not employ AI tools in our decision-making processes. Regardless of race, color, religion, sex (including pregnancy), sexual orientation, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran, FEDITC, LLC. ensures that all employment decisions are made in accordance with applicable federal, state, and local laws. Our commitment to non-discrimination in employment extends to every location in which our company operates.
Autodesk Vault Engineer - CDC5695559
Requirements engineer job in Plano, TX
Autodesk Vault Upgrade:
Lead the planning, execution, and validation of the Autodesk Vault upgrade.
Collaborate with engineering, CAD, and IT teams to support the Vault upgrade roadmap.
Ensure robust data migration, backup, and recovery strategies.
Conduct required validation after the upgrade.
Document upgrade procedures and train end users as needed.
Coordinate with Autodesk support on unresolved upgrade-related issues.
SCCM Package deployment:
Validate that SCCM packages work as expected.
Investigate and resolve any installation failures after package installation.
Monitor deployment success rates and troubleshoot issues.
Track and resolve user-reported bugs or regressions introduced during the upgrade.
Support rollback or contingency plans if critical issues arise.
Manage Vault user roles, permissions, and access controls.
Support CAD teams with Vault-related workflows.
Manually install/update some applications via SCCM.
Data Engineer
Requirements engineer job in Austin, TX
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
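The ingestion/transformation pattern described above can be sketched with a stdlib-only example (the field names and quarantine rule are hypothetical; a real pipeline would use PySpark and Delta Lake rather than plain Python):

```python
import json

# Hypothetical raw payload, as it might arrive from a REST API page.
raw = json.loads('[{"id": "1", "amount": "19.99"}, {"id": "2", "amount": "bad"}]')

def transform(rec):
    """Cast string fields to typed values; return None to quarantine bad rows."""
    try:
        return {"id": int(rec["id"]), "amount": float(rec["amount"])}
    except (KeyError, ValueError):
        return None

clean, quarantine = [], []
for rec in raw:
    row = transform(rec)
    (clean if row is not None else quarantine).append(row if row is not None else rec)

print(clean)       # typed rows ready to load
print(quarantine)  # malformed source rows held back for inspection
```

Separating clean rows from quarantined ones is one common way to keep bad records from silently corrupting downstream tables, which is the data-quality concern the responsibilities above call out.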
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Engineer
Requirements engineer job in McAllen, TX
Apply Description
At Rhodes, our core purpose is to enhance the lives of our customers and team through building communities. We specialize in developing master planned communities and constructing high-quality, energy-efficient homes across the Rio Grande Valley. Since 2019, Esperanza Homes, a Rhodes Company, has ranked nationally on the Top 200 Builders list and is on an aggressive growth trajectory to make the Top 100 Builders list while serving more communities across South Texas.
Rhodes was founded in the early 1990s as a land acquisition and holding company. In 2006, the company shifted gears and ventured into residential/commercial land development and home building with the formation of Esperanza Homes. Rhodes Enterprises has grown into one of the largest developers of residential, commercial, and master planned communities in the Rio Grande Valley. For more than a decade, we have grown to serve the communities of Mission, Donna, McAllen, Edinburg, and Brownsville. We are passionate about our customers, building exceptional homes, our team, and the communities where we live, work, and serve.
When you choose to work at Rhodes, you are part of a passionate and high performing Team! You will work alongside team members who set and reach ambitious goals every day and are excited to continue to grow and build communities.
Benefits of being a part of our Team include:
Competitive Compensation including Bonus & Profit-Sharing Programs
Health Care - Medical/Dental/Vision/Prescription Drug Coverage
Employer Paid Health Reimbursement Account for Medically Enrolled Staff
401(k) with Company Matching Contributions
Disability Programs
Employee & Dependent Life Insurance
Vacation & Company Holidays
Employee Home Purchase Rebate Program
Employee Assistance Program (EAP)
Role Mission: As a Data Engineer at Rhodes, you'll transform raw data into actionable intelligence, driving key business decisions that shape the company's strategic direction. Reporting to the Director of Data and IT, you'll develop advanced machine learning models and statistical analyses to uncover valuable insights from complex datasets. You'll automate data pipelines, translate business questions into data science solutions, and create sophisticated visualizations to communicate findings effectively. By collaborating across teams and staying current with cutting-edge techniques, you'll identify opportunities for data-driven improvements and predictive analytics. Your work will empower leaders to make faster, more informed decisions, propelling Rhodes towards innovation and market leadership in its data-driven transformation.
Measured Performance Goals:
90% Projects Completed on Time
90% Reports Delivered on Time
95% Staff Satisfaction Rate (send a quarterly survey to people we have partnered with)
87% Help Desk Tickets Completed on time for Urgent Tickets
Accountabilities:
Project Management:
Create project plans for each project request
Provide updates for project champion and Director of Data and IT
Deploy projects across the organization from individuals to org-wide deployments
Reports
Conduct analyses to drive valuable business insights, using analytics tools, Python, and SQL to access and manipulate multivariate data
Complete reports for the reporting calendar on time
Provide ad-hoc analysis support to identify trends that drive business performance
Communicate updates on report requests
Maintain an inventory/documentation of all reports created
Dashboards:
Build data visualizations and multi-faceted dashboards that convey key performance metrics, significant trends, and relationships across multiple data sources within the content domain
Ideate and develop new metrics and analytical approaches, measuring content performance for both tactical and strategic decision-making
Partner with leaders to develop dashboards for their departments
Provide support & maintenance to dashboards in Qlik
Maintain an inventory/documentation of all dashboards created
Storytelling:
Identify, analyze, and interpret trends or patterns using multiple information sources
Effectively communicate results through compelling data storytelling across the organization
Data Engineering:
Write code to create custom data pipelines from third-party systems for ETL into the data warehouse.
Ensure data governance is incorporated into the development of visualizations
Design and implement robust data models utilizing our data warehouse to support advanced analytics and facilitate insights generation through our business intelligence platform.
Develop data pipelines to efficiently process and prepare large datasets for model training and inference
Machine Learning:
Implement and fine-tune machine learning models using established frameworks and libraries to solve complex business problems
Select and apply appropriate pre-built machine learning algorithms and tools to analyze large datasets and extract meaningful insights
Optimize model performance through hyperparameter tuning, feature engineering, and data preprocessing techniques
Collaborate with cross-functional teams to understand business requirements and translate them into machine learning solutions
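The hyperparameter-tuning responsibility above can be illustrated with a stdlib-only toy (the dataset and candidate grid are hypothetical; real work would use scikit-learn or a similar framework):

```python
# Fit y ≈ w * x by grid-searching the slope w to minimize mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # hypothetical data, roughly y = 2x

def mse(w):
    """Mean squared error of the model y = w * x on the toy data."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

candidates = [w / 10 for w in range(10, 31)]  # 1.0 .. 3.0 in steps of 0.1
best_w = min(candidates, key=mse)
print(best_w)  # → 2.0, the grid point nearest the least-squares slope
```

Grid search over a one-dimensional slope is the simplest instance of the same loop used to tune learning rates, tree depths, or regularization strengths in production models.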
Lives Rhodes Core Values:
Act with Integrity, No Exceptions - believes there is no such thing as compromise when it comes to acting with integrity in everything you do.
Honor our Team - honors team members as individuals and professionals, no exceptions.
Never Be Satisfied - continuously looks for ways to improve every aspect of our business.
Best in Class Customer Experience - provides a best-in-class customer experience, every time, with every customer.
Community Leadership - takes pride in actively engaging in the communities we serve, making them better for our future.
Embodies Characteristics of the Rhodes Team
Believes and is committed to our purpose to enhance the lives of our customers and our team through building communities
Is driven by outcomes and results, and wants to be held accountable for them
Has a propensity for action, willing to make mistakes by doing in order to learn and improve quickly
Thrives in an entrepreneurial, high-growth environment; is comfortable with ambiguity and change
Seeks and responds well to feedback, which is shared often and freely across all levels of the organization
Works through silos and forges strong cross-departmental relationships in order to achieve outcomes
Supervisory Responsibilities:
-None
Qualifications, Knowledge and Skills:
Bachelor's Degree
Experience using Business Intelligence tools such as Qlik Sense
Experience with Python programming language
Proficient in all Microsoft Office applications, with strong general computer skills
Advanced Excel and reporting skills
Strong oral and written communication skills, as well as strong organizational skills
Deadline driven and organized
Customer service oriented
Understand the need to be flexible and prioritize tasks in order to meet deadlines
Preferred Qualifications, Knowledge, and Skills:
Master's Degree
Experience implementing machine learning models and statistical analyses to solve business problems and drive decision-making
Experience with Snowflake to support analytical and operational needs
FLSA Status: Exempt
Essential Functions (Mental/Physical/Environmental Requirements):
The physical demands described here are representative of those that must be met by a team member with or without reasonable accommodations, to successfully perform the essential functions of this job.
Report to office daily and adhere to schedule
Ability to oversee direct reports daily and provide guidance as needed
Ability to access, input, and retrieve information from a computer and/or electronic device
Ability to have face to face conversations with customers, co-workers and manager
Ability to sit or stand for long periods of time and move around work environment as needed
Ability to operate a motor vehicle