
Requirements engineer jobs in McKinney, TX

- 1,195 jobs
  • IVR/ Google CCAI Engineer

    Ascii Group, LLC · 3.6 company rating

    Requirements engineer job in Richardson, TX

    Hi, we have a position that matches your skill set. Please go through the JD below and let me know your interest.
    Title: IVR/Google CCAI Engineer
    Duration: 12 months
    Relevant experience: 5+ years
    Must-have skills: Java and VXML/IVR
    Preferred skills: Java, VXML/IVR, Spring Boot, Google CCAI, SQL, Nuance Speech Recognizer, WebLogic, Microservices
    Thanks & regards, Venkatesh Kundurthi, Team Lead, ASCII Group, LLC
    Office: ************** Ext. 104; Direct: ***************
    38345 W. 10 Mile Rd, Ste. #365, Farmington, MI 48335
    Email: ********************** Website: ******************
    $80k-108k yearly est. 5d ago
  • ServiceNow CMDB Engineer

    Summit Tech Partners · 3.5 company rating

    Requirements engineer job in Irving, TX

    Employment Type: Full-Time, Direct Hire (W2 only; no sponsorship available)

    About the Role: We're seeking a skilled and driven ServiceNow CMDB Engineer to join our team in Irving, TX. This is a hands-on, onsite role focused on designing, implementing, and maintaining a robust Configuration Management Database (CMDB) aligned with ServiceNow's Common Service Data Model (CSDM). You'll play a critical role in enhancing IT operations, asset management, and service delivery across the enterprise.

    Responsibilities:
    - Architect, configure, and maintain the ServiceNow CMDB to support ITOM and ITAM initiatives
    - Implement and optimize CSDM frameworks to ensure data integrity and alignment with business services
    - Collaborate with cross-functional teams to define CI classes, relationships, and lifecycle processes
    - Develop and enforce CMDB governance, data quality standards, and reconciliation rules
    - Integrate the CMDB with discovery tools and external data sources
    - Support audits, compliance, and reporting requirements related to ITIL processes
    - Troubleshoot and resolve CMDB-related issues and performance bottlenecks

    Qualifications:
    - 3+ years of hands-on experience with ServiceNow CMDB and CSDM implementation
    - Strong understanding of ITIL practices and ITOM/ITAM modules
    - Proven ability to manage the CI lifecycle and maintain data accuracy
    - Experience with ServiceNow Discovery, Service Mapping, and integrations
    - ServiceNow Certified System Administrator (CSA) or higher certifications preferred
    - Excellent communication and documentation skills
    - Must be authorized to work in the U.S. without sponsorship

    Perks & Benefits: Competitive compensation package; collaborative and innovative work environment; opportunity to work with cutting-edge ServiceNow technologies
    $73k-102k yearly est. 3d ago
  • CyberArk Engineer

    Talent 360 Solutions

    Requirements engineer job in Frisco, TX

    You will be responsible for the delivery and buildout of a Privileged Access ecosystem, applying comprehensive knowledge of privileged access security controls to complex assignments. You will identify and recommend changes in procedures, processes, and scope of delivery. This position reports to the Director of Privileged Access Engineering.

    What you will do:
    - Troubleshoot complex heterogeneous environments related to privileged access technologies through server log and network traffic analysis, drawing on experience with troubleshooting and analysis techniques and tools.
    - Understand the taxonomy of privileges on named or shared privileged accounts.
    - Incorporate cybersecurity best practices for technology governance over privileged account lifecycles.
    - Develop PAM (CyberArk) connection components and plugins as needed, using scripting tools (PowerShell, Python) and REST APIs.
    - Develop regular reporting and be accountable for deliverables.
    - Perform disaster resiliency tests and discovery audits, and present findings to management to ensure the security and integrity of the systems.

    What you will need to have:
    - 8+ years' experience in IT.
    - 5+ years' experience in cybersecurity.
    - 3+ years' experience in implementation, integration, and operations of privileged access technologies (CyberArk and all its components).
    - 3+ years' experience in systems and network administration (Windows, Unix/Linux, network devices) and good knowledge of PKI, authentication tools and protocols (e.g., SAML, RADIUS, PING), and MFA.
    - 2+ years' experience with privileged access controls in Unix and Windows environments.
    - 2+ years' experience with the broader IAM ecosystem of directories, identity management, and access management controls.
    - 1+ years' experience in a senior technical role (with a deep understanding of the product) with IAM/PAM products such as CyberArk and its components.
    - Bachelor's degree in computer science or a relevant field, or an equivalent combination of education, work, and/or military experience.

    What would be great to have:
    - 2+ years' experience onboarding and managing privileged credentials across Windows, Linux/Unix, databases, networking devices, and other platforms.
    - 2+ years' experience in development/scripting (shell, PowerShell, Python), using REST API methods and other current tools (including AI) to assist in automation activities such as provisioning vault components and accounts and implementing access controls.
    - 1+ years' experience devising technical solutions related to PAM and presenting them to management.
    - 1+ years' experience interfacing with Corporate Audit and External Audit functions for regulatory compliance.
    - Cybersecurity certifications such as CISA or CISSP, and CyberArk certifications (CDE, Sentry, Defender).
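Connection-component work against a PAM REST API, as described above, typically starts with scripted request assembly. A minimal Python sketch that builds (but does not send) a logon request; the endpoint path and field names follow the publicly documented CyberArk PVWA convention but should be treated as assumptions here, not details from this posting:

```python
import json

def build_logon_request(base_url: str, username: str, password: str) -> dict:
    """Assemble a CyberArk-style PVWA logon request without sending it.

    The path and body fields are illustrative assumptions modeled on the
    public PVWA REST API; a real integration would verify them against
    the vendor documentation for the deployed version.
    """
    return {
        "method": "POST",
        "url": f"{base_url}/PasswordVault/API/auth/Cyberark/Logon",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"username": username, "password": password}),
    }

# Hypothetical host and service account, for illustration only.
req = build_logon_request("https://pam.example.com", "svc-audit", "s3cret")
print(req["url"])
```

In practice the returned session token would then be passed in a header on subsequent vault calls; separating request assembly from transport like this keeps the payload easy to unit-test.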
    $69k-96k yearly est. 3d ago
  • AI/ML Engineer - Full Time

    Maveric Systems Limited

    Requirements engineer job in Irving, TX

    Job Title: AI/ML Engineer
    Job Type: Full Time

    Key Responsibilities:
    - Model Development: Design, build, and optimize machine learning models for predictive analytics, classification, recommendation systems, and NLP.
    - Data Processing: Collect, clean, and preprocess large datasets from various sources for training and evaluation.
    - Deployment: Implement and deploy ML models into production environments using frameworks like TensorFlow, PyTorch, or Scikit-learn.
    - Performance Monitoring: Continuously monitor and improve model accuracy, efficiency, and scalability.
    - Collaboration: Work closely with data engineers, software developers, and product teams to integrate AI solutions into applications.
    - Research & Innovation: Stay updated with the latest advancements in AI/ML and apply cutting-edge techniques to business challenges.
    - Documentation: Maintain clear documentation of models, processes, and workflows.

    Required Skills & Qualifications:
    - Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
    - Strong proficiency in Python, R, or Java.
    - Hands-on experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn).
    - Knowledge of data structures, algorithms, and software engineering principles.
    - Experience with cloud platforms (AWS, Azure, GCP) and MLOps tools.
    - Familiarity with big data technologies (Spark, Hadoop) is a plus.
    - Excellent problem-solving and analytical skills.

    Preferred Qualifications:
    - Experience in Natural Language Processing (NLP), Computer Vision, or Deep Learning.
    - Understanding of model interpretability and ethical AI practices.
    - Prior experience deploying models in production environments.
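The model-development loop named above (fit, predict, evaluate) has the same shape regardless of framework. This toy nearest-centroid classifier, in pure Python, sketches only that workflow shape, not the TensorFlow/PyTorch/Scikit-learn stack the role actually uses:

```python
from statistics import mean

class NearestCentroid:
    """Toy classifier: predicts the class whose feature mean is closest."""

    def fit(self, X, y):
        # One centroid (per-feature mean) per class label.
        self.centroids = {}
        for label in set(y):
            pts = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = [mean(col) for col in zip(*pts)]
        return self

    def predict(self, X):
        def sq_dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return [min(self.centroids, key=lambda c: sq_dist(x, self.centroids[c]))
                for x in X]

# Hypothetical two-feature dataset, for illustration only.
X = [[0, 0], [1, 1], [9, 9], [10, 10]]
y = ["low", "low", "high", "high"]
model = NearestCentroid().fit(X, y)
preds = model.predict([[0.5, 0.5], [9.5, 9.5]])
print(preds)  # ['low', 'high']
```

A real pipeline would swap in a framework estimator with the same fit/predict interface and add held-out evaluation, which is why the interface, not the algorithm, is the part worth internalizing.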
    $70k-96k yearly est. 3d ago
  • Microsoft 365 Engineer

    Precision Technologies · 4.3 company rating

    Requirements engineer job in Dallas, TX

    Role: Microsoft 365 Engineer
    Type: Contract

    Description: We are seeking a highly skilled Microsoft 365 Engineer with deep expertise in SharePoint development and administration, Power Platform solutions, and proven experience delivering SharePoint and Microsoft 365 migration projects. The ideal candidate combines a strong technical foundation with excellent problem-solving, communication, and cross-functional collaboration skills. This role plays a key part in designing, developing, migrating, integrating, and supporting Microsoft 365 solutions to enhance organizational productivity and collaboration.

    Key Responsibilities:
    1. SharePoint Development & Administration
    - Design, develop, and deploy SharePoint Online solutions including sites, libraries, lists, content types, workflows, and modern page customizations.
    - Manage and configure SharePoint architecture, permissions, site collections, governance policies, and security structures.
    - Develop custom solutions using SPFx, PowerShell, PnP, JavaScript, and other relevant technologies.
    - Optimize search, content management, and information architecture.
    - Perform SharePoint environment health checks, performance tuning, and compliance/governance management.
    2. Power Platform Development
    - Design and build end-to-end business solutions using Power Apps, Power Automate, and Power BI.
    - Create custom connectors, automation flows, forms, dashboards, and integrations aligned with business requirements.
    - Support and troubleshoot Power Platform applications and governance best practices.
    - Ensure solutions follow Microsoft best practices for scalability, performance, and security.
    3. Microsoft 365 Migration & Integration
    - Lead or support end-to-end SharePoint migrations (on-premises to SharePoint Online, tenant-to-tenant, legacy platforms, etc.).
    - Execute Microsoft 365 migration tasks, including OneDrive, Teams, Exchange Online, and related workloads.
    - Utilize migration tools such as ShareGate, Metalogix, AvePoint, or Microsoft-native tooling.
    - Integrate Microsoft 365 components with third-party applications, line-of-business systems, and internal applications.
    - Plan and implement data structure mapping, content assessment, remediation, and validation.
    4. Microsoft 365 Administration & Support
    - Provide Tier 2/3 support for all Microsoft 365 services including SharePoint, Teams, OneDrive, Power Platform, and Azure AD.
    - Monitor service health, security alerts, compliance notifications, and tenant configurations.
    - Manage identity, access, conditional access policies, and license administration.
    - Educate and support end users, create documentation, and develop training materials as needed.
    - Troubleshoot issues across the Microsoft 365 ecosystem and perform root cause analysis.

    Required Qualifications:
    - Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
    - 5+ years of hands-on experience with SharePoint Online and Microsoft 365 administration.
    - Strong proficiency in SharePoint development (SPFx, JavaScript, PowerShell, PnP, APIs).
    - Demonstrated experience performing SharePoint migrations and Microsoft 365 integrations.
    - Proven ability to design solutions using Power Apps, Power Automate, and Power Platform governance.
    - Experience with Azure AD, Exchange Online, Teams administration, and Microsoft 365 security concepts.
    - Familiarity with Agile methodologies and working in cross-functional project teams.

    Preferred Qualifications:
    - Microsoft certifications such as Microsoft 365 Certified: Developer Associate; Microsoft 365 Certified: Administrator Associate; Power Platform Developer/Functional Consultant; SharePoint Associate.
    - Experience with REST APIs, Graph API, .NET, and scripting languages.
    - Knowledge of compliance, retention, DLP, and governance frameworks.
    - Experience in enterprise-level operational support environments.

    Soft Skills:
    - Strong communication and client-facing interpersonal skills.
    - Detail-oriented with excellent documentation habits.
    - Ability to manage multiple projects and deadlines in a dynamic environment.
    - Strong analytical and troubleshooting capabilities.
    $67k-84k yearly est. 3d ago
  • Cloud Engineer

    ABC Co · 4.6 company rating

    Requirements engineer job in Dallas, TX

    Top must-haves: experience overseeing Azure and AWS infrastructures; Windows Server experience; must be a US citizen.

    Job Title: Cloud Engineer

    Job Summary: We are seeking a skilled and motivated Cloud Engineer to join our IT team. The ideal candidate will have hands-on experience managing both Azure and AWS cloud environments, with a focus on configuring, deploying, and maintaining cloud infrastructure to support business operations efficiently and securely. The Cloud Engineer will play a critical role in optimizing cloud resources and minimizing costs while ensuring that our cloud-based services are scalable, reliable, and secure.

    Key Responsibilities:
    - Cloud Infrastructure Management: Design, implement, and manage cloud infrastructure across Azure and AWS platforms. Provision, monitor, and maintain resources, including virtual machines, databases, storage, networking, and other cloud services. Ensure cloud architecture is scalable, secure, and supports high availability and disaster recovery.
    - Cost Optimization: Monitor and optimize cloud costs by identifying unused or underutilized resources and recommending cost-saving opportunities. Implement cost management tools and best practices to control cloud expenditures across AWS and Azure environments.
    - Automation and Orchestration: Develop and maintain infrastructure as code (IaC) using tools like Azure Resource Manager (ARM) templates. Automate routine cloud infrastructure tasks to improve efficiency and reduce manual intervention.
    - Security and Compliance: Implement security best practices and governance to protect cloud assets, including identity and access management (IAM), network security, encryption, and data protection. Ensure cloud environments are compliant with relevant regulations and organizational security policies.
    - Performance Monitoring and Troubleshooting: Monitor system performance and usage to ensure optimal functionality and uptime. Troubleshoot cloud infrastructure issues and work with internal teams to resolve problems promptly.
    - Collaboration and Documentation: Collaborate with development, operations, and security teams to ensure seamless integration of cloud resources. Maintain up-to-date documentation for cloud infrastructure, configurations, and processes.

    Qualifications:
    - Education: Bachelor's degree in Computer Science, Information Technology, or a related field; equivalent work experience may be considered.
    - Experience: 3-5 years of experience managing cloud infrastructure on Azure and AWS platforms; strong understanding of cloud cost optimization techniques and tools.
    - Skills: Proficiency in cloud security, networking, and performance optimization. Hands-on experience with cloud monitoring and management tools (e.g., Azure Monitor, AWS CloudWatch). Familiarity with scripting and automation using PowerShell, Python, or similar languages. Strong problem-solving skills and ability to troubleshoot complex issues.
    - Certifications: Relevant certifications such as AWS Certified Solutions Architect, Azure Administrator, or Cloud Practitioner preferred.
    - Soft Skills: Strong communication and collaboration skills. Ability to work independently and as part of a team in a fast-paced environment. Attention to detail and a proactive approach to problem-solving.

    Benefit offerings include medical, dental, vision, life insurance, short-term disability, additional voluntary benefits, an EAP program, commuter benefits, and a 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs.
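The cost-optimization duty described above often reduces to scanning utilization metrics against a threshold. A pure-Python sketch with made-up instance IDs and field names; real inputs would come from Azure Monitor or CloudWatch metric exports:

```python
def underutilized(instances, cpu_threshold=10.0, min_samples=3):
    """Flag instances whose average CPU percentage falls below the
    threshold -- candidates for rightsizing or shutdown.

    `instances` is a list of dicts with illustrative keys "id" and
    "cpu_samples"; a minimum sample count avoids flagging on thin data.
    """
    flagged = []
    for inst in instances:
        samples = inst["cpu_samples"]
        if len(samples) >= min_samples and sum(samples) / len(samples) < cpu_threshold:
            flagged.append(inst["id"])
    return flagged

# Hypothetical fleet snapshot, for illustration only.
fleet = [
    {"id": "vm-web-01", "cpu_samples": [55, 60, 58]},
    {"id": "vm-idle-07", "cpu_samples": [2, 3, 1, 2]},
    {"id": "vm-new-02", "cpu_samples": [1]},  # too few samples to judge
]
print(underutilized(fleet))  # ['vm-idle-07']
```

Native tooling (AWS Cost Explorer rightsizing recommendations, Azure Advisor) does this at scale, but the same threshold-over-window logic is what a custom report would implement.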
    Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State, or local law; and Holiday pay upon meeting eligibility criteria. Equal Opportunity Employer/Veterans/Disabled. To read our Candidate Privacy Information Statement, which explains how we will use your information, please navigate to ******************************************* The Company will consider qualified applicants with arrest and conviction records in accordance with federal, state, and local laws and/or security clearance requirements, including, as applicable: the California Fair Chance Act; the Los Angeles City Fair Chance Ordinance; the Los Angeles County Fair Chance Ordinance for Employers; and the San Francisco Fair Chance Ordinance.
    $74k-96k yearly est. 1d ago
  • AWS Cloud Engineer

    Tixy Tech

    Requirements engineer job in Plano, TX

    Interview process: Onsite (Plano, TX)

    We are looking for a highly skilled Senior AWS Data/Backend Engineer with strong expertise in C#, AWS Lambda, AWS Glue, and distributed caching technologies. The ideal candidate will excel at building and optimizing high-performance cloud-native APIs and backend services in a fully AWS-based environment. This role focuses heavily on API scalability, performance tuning, and data pipeline optimization.

    🔧 Key Responsibilities:
    - Design, develop, and optimize AWS Lambda functions in C# for low-latency, high-throughput workloads.
    - Implement and manage distributed caching using Redis/ElastiCache or OpenSearch.
    - Enhance and support AWS Glue ETL pipelines, data structures, and workflows.
    - Architect scalable backend services using API Gateway, Lambda, S3, and Glue Catalog.
    - Optimize data querying patterns across Aurora, DynamoDB, Redshift, or similar databases.
    - Perform end-to-end performance profiling and bottleneck analysis across APIs and data pipelines.
    - Improve observability with CloudWatch, X-Ray, and structured logging.
    - Ensure all cloud solutions meet best practices for security, scalability, reliability, and cost efficiency.
    - Collaborate with cross-functional teams to deliver high-performance APIs and backend systems.

    ✅ Required Skills & Experience:
    - 5+ years in backend engineering, cloud engineering, or data engineering.
    - Strong expertise in C#/.NET with hands-on AWS Lambda experience.
    - Experience implementing Redis/ElastiCache or OpenSearch caching layers.
    - Hands-on with AWS Glue, Glue Catalog, ETL optimization, and data lakes.
    - Strong knowledge of core AWS services: Lambda, API Gateway, S3, IAM, CloudWatch.
    - Experience with cloud databases: Aurora, DynamoDB, Redshift.
    - Strong understanding of data modeling, partitioning, and schema optimization.
    - Excellent debugging, performance tuning, and problem-solving abilities.

    ✨ Preferred Qualifications:
    - Experience with event-driven architectures (Kinesis, Kafka, SNS, SQS).
    - Familiarity with CI/CD pipelines and automated deployments.
    - Knowledge of AWS cost optimization practices.
    - Understanding of DevOps/SRE monitoring and high-availability concepts.
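The distributed-caching responsibility above follows the familiar read-through pattern: check the cache, fall back to the expensive load on a miss, and expire entries after a TTL. This in-memory sketch is a stand-in for what Redis/ElastiCache would do over the network; all names are illustrative:

```python
import time

class TTLCache:
    """In-memory stand-in for a Redis/ElastiCache read-through cache."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get_or_load(self, key, loader):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]          # cache hit: skip the expensive load
        value = loader(key)        # cache miss or expired: reload
        self._store[key] = (value, now)
        return value

# Hypothetical backend fetch; the call log shows how often it runs.
calls = []
def fetch(key):
    calls.append(key)
    return key.upper()

cache = TTLCache(ttl_seconds=60)
first = cache.get_or_load("user:42", fetch)
second = cache.get_or_load("user:42", fetch)   # served from cache
print(len(calls))  # 1
```

With Redis the same pattern is `GET`, then on a miss a backend read followed by `SET` with an expiry; the TTL bounds staleness, which is the core trade-off a caching layer tunes.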
    $70k-96k yearly est. 16h ago
  • AI/ML Engineer

    Apexon

    Requirements engineer job in Dallas, TX

    About the Role: Apexon is seeking an experienced AI/ML Engineer with strong expertise in LLM development, MLOps, and building scalable GenAI solutions. You will design, build, and operationalize AI/ML systems that support enterprise clients across healthcare, BFSI, retail, and digital transformation engagements. The ideal candidate has hands-on experience building end-to-end machine learning pipelines, optimizing large language model workflows, and deploying secure ML systems in production environments.

    Responsibilities:
    LLM & AI Solution Development
    - Build, fine-tune, evaluate, and optimize Large Language Models (LLMs) for client-specific use cases such as document intelligence, chatbot automation, code generation, and workflow orchestration.
    - Develop RAG (Retrieval-Augmented Generation) pipelines using enterprise knowledge bases.
    - Implement prompt engineering, guardrails, hallucination-reduction strategies, and safety frameworks.
    - Work with transformer-based architectures (GPT, LLaMA, Mistral, Falcon, etc.) and develop optimized model variants for low-latency, cost-efficient inference.
    Machine Learning Engineering
    - Develop scalable ML systems including feature pipelines, training jobs, and batch/real-time inference services.
    - Build and automate training, validation, and monitoring workflows for predictive and GenAI models.
    - Perform offline evaluation, A/B testing, performance benchmarking, and business KPI tracking.
    MLOps & Platform Engineering
    - Build and maintain end-to-end MLOps pipelines using AWS SageMaker, Databricks, MLflow, Kubernetes, Docker, Terraform, and Airflow.
    - Manage CI/CD pipelines for model deployment, versioning, reproducibility, and governance.
    - Implement enterprise-grade model monitoring (data drift, performance, cost, safety).
    - Maintain infrastructure for vector stores, embeddings pipelines, feature stores, and inference endpoints.
    Data Engineering & Infrastructure
    - Build data pipelines for structured and unstructured data using Snowflake, S3, Kafka, Delta Lake, and Spark (PySpark).
    - Work on data ingestion, transformation, quality checks, cataloging, and secure storage.
    - Ensure all systems adhere to Apexon and client-specific security, IAM, and compliance standards.
    Cross-Functional Collaboration
    - Partner with product managers, data engineers, cloud architects, and QA teams.
    - Translate business requirements into scalable AI/ML solutions.
    - Ensure model explainability, governance documentation, and compliance adherence.

    Basic Qualifications:
    - Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, Data Science, or a related field.
    - 4+ years of experience in AI/ML engineering, including 1+ years working with LLMs/GenAI.
    - Strong experience with Python, Transformers, PyTorch/TensorFlow, and NLP frameworks.
    - Hands-on expertise with MLOps platforms: SageMaker, MLflow, Databricks, Kubernetes, Docker.
    - Strong SQL and data engineering experience (Snowflake, S3, Spark, Kafka).

    Preferred Qualifications:
    - Experience implementing Generative AI solutions for enterprise clients.
    - Expertise in distributed training, quantization, optimization, and GPU acceleration.
    - Experience with vector databases (Pinecone, Weaviate, FAISS), RAG frameworks (LangChain, LlamaIndex), and monitoring tools (Prometheus, Grafana, CloudWatch).
    - Understanding of model governance, fairness evaluation, and client compliance frameworks.
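The RAG pipelines mentioned above reduce to a two-step skeleton: retrieve the most relevant documents, then pack them into the prompt sent to the model. A deliberately simplified sketch that scores by term overlap instead of real embeddings; a production system would use a vector store and an LLM client, and the knowledge-base strings here are invented:

```python
def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank documents by word overlap with the query; return the top k.
    A stand-in for embedding similarity search against a vector store."""
    q_terms = set(query.lower().split())
    overlap = lambda d: len(q_terms & set(d.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Pack the retrieved context and the question into one prompt string."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy knowledge base, for illustration only.
kb = [
    "Claims are filed through the member portal.",
    "The cafeteria opens at 8am.",
    "Denied claims can be appealed within 30 days.",
]
print(build_prompt("How do I appeal a denied claim?", kb))
```

The grounding step is the point: the model is constrained to the retrieved context, which is also where the posting's "hallucination-reduction" and guardrail concerns attach.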
    $70k-96k yearly est. 1d ago
  • Kubernetes Engineer

    Tata Consultancy Services · 4.3 company rating

    Requirements engineer job in Plano, TX

    Hands-on experience with Kubernetes engineering and development. Minimum 5-7+ years of experience working with hybrid infrastructure architectures. Experience analyzing the architecture of on-prem infrastructure for applications (network, storage, processing, backup/DR, etc.). Strong understanding of infrastructure capacity planning, monitoring, upgrades, IaC automation using Terraform and Ansible, and CI/CD using Jenkins/GitHub Actions. Experience working with engineering teams to define best practices and processes as appropriate to support the entire infrastructure lifecycle (plan, build, deploy, and operate), such as automating lifecycle activities: self-service, orchestration and provisioning, and configuration management. Experience defining infrastructure direction. Drive continuous improvement, including design and standardization of processes and methodologies. Experience assessing the feasibility, complexity, and scope of new capabilities and solutions.

    Base Salary Range: $100,000-$110,000 per annum

    TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
    $100k-110k yearly 16h ago
  • Backend Engineer (Distributed Systems and Kubernetes)

    Arcus Search · 3.9 company rating

    Requirements engineer job in Dallas, TX

    Software Engineer - Batch Compute (Kubernetes / HPC)
    Dallas (Hybrid) | 💼 Full-time

    A leading, well-funded quantitative research and technology firm is looking for a Software Engineer to join a team building and running a large-scale, high-performance batch compute platform. You'll be working on modern Kubernetes-based infrastructure that powers complex research and ML workloads at serious scale, including contributions to a well-known open-source scheduling project used for multi-cluster batch computing.

    What you'll be doing:
    • Building and developing backend services, primarily in Go (Python, C++, C# backgrounds are fine)
    • Working on large-scale batch scheduling and distributed systems on Kubernetes
    • Operating and improving HPC-style workloads, CI/CD pipelines, and Linux-based platforms
    • Optimising data flows across systems using tools like PostgreSQL
    • Debugging and improving performance across infrastructure, networking, and software layers

    What they're looking for:
    • Strong software engineering background with an interest in Kubernetes and batch workloads
    • Experience with Kubernetes internals (controllers, operators, schedulers)
    • Exposure to HPC, job schedulers, or DAG-based workflows
    • Familiarity with cloud platforms (ideally AWS), observability tooling, and event-driven systems

    Why it's worth a look:
    • Market-leading compensation plus bonus
    • Hybrid setup from a brand-new Dallas office
    • Strong work/life balance and excellent benefits
    • Generous relocation support if needed
    • The chance to work at genuine scale on technically hard problems

    If you're interested (or know someone who might be), drop me a message and I'm happy to share more details anonymously.
    $75k-106k yearly est. 1d ago
  • Autodesk Vault Engineer - CDC5695559

    Compunnel Inc. · 4.4 company rating

    Requirements engineer job in Plano, TX

    Autodesk Vault Upgrade: Lead planning, execution, and validation of the Autodesk Vault upgrade. Collaborate with engineering, CAD, and IT teams to support the Vault upgrade roadmap. Ensure robust data migration, backup, and recovery strategies. Conduct required validation after the upgrade. Document upgrade procedures and train end users as needed. Coordinate with Autodesk support on unresolved upgrade-related issues.

    SCCM Package Deployment: Validate that SCCM packages work as expected. Investigate and resolve any installation failures after package installation. Monitor deployment success rates and troubleshoot issues. Track and resolve user-reported bugs or regressions introduced during the upgrade. Support rollback or contingency plans if critical issues arise. Manage Vault user roles, permissions, and access controls. Support CAD teams with Vault-related workflows. Manually install or update SCCM packages for some applications.
    $77k-99k yearly est. 1d ago
  • Azure Data Engineer

    KellyMitchell Group · 4.5 company rating

    Requirements engineer job in Irving, TX

    Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.

    Duties:
    - Lead the design, architecture, and implementation of key data initiatives and platform capabilities
    - Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying solutions and guiding teams to implement them
    - Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
    - Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders

    Desired Skills/Experience:
    - Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
    - 5+ years of relevant work experience in data engineering
    - Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
    - Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.

    Benefits: Medical, Dental, & Vision Insurance Plans; Employee-Owned Profit Sharing (ESOP); 401K offered

    The approximate pay for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good-faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $140k-145k yearly 3d ago
  • Senior Data Engineer (USC AND GC ONLY)

    Wise Skulls

    Requirements engineer job in Richardson, TX

    Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
    Duration: 6 Months (Possible Extension)

    We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.

    Must-Have Skills (Non-Negotiable):
    - 9+ years in Data Engineering & Data Warehousing
    - 9+ years hands-on ETL experience (Informatica, DataStage, etc.)
    - 9+ years working with Teradata
    - 3+ years hands-on GCP and BigQuery
    - Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
    - Strong background in query optimization, data structures, metadata & workload management
    - Experience delivering microservices-based data solutions
    - Proficiency in Big Data & cloud architecture
    - 3+ years with SQL & NoSQL
    - 3+ years with Python or similar scripting languages
    - 3+ years with Docker, Kubernetes, and CI/CD for data pipelines
    - Expertise in deploying & scaling apps in containerized environments (K8s)
    - Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
    - Familiarity with Agile/SDLC methodologies

    Key Responsibilities:
    - Build, enhance, and optimize modern data pipelines on GCP
    - Implement scalable ETL frameworks, data structures, and workflow dependency management
    - Architect and tune BigQuery datasets, queries, and storage layers
    - Collaborate with cross-functional teams to define data requirements and support business objectives
    - Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
    - Drive clarity in project goals, timelines, and deliverables during Agile planning sessions

    📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ******************* or call us at *****************
    $76k-103k yearly est. 16h ago
  • Senior DevOps Engineer

    Noetic

    Requirements engineer job in Dallas, TX

    About the Role
    We are seeking a Senior DevOps Engineer with 7+ years of hands-on experience leading DevOps automation and deployments to join our dynamic and fast-paced team in Dallas, TX. This individual will be the engineering and coding expert in DevOps delivery, orchestration, and management, playing a crucial role in enhancing our environment builds and software deployment lifecycle across the cloud and on-prem infrastructure that run our applications. This person should have deep platform knowledge of infrastructure automation, continuous integration/delivery (CI/CD), and system reliability practices, and is responsible for ensuring highly available, scalable, high-performing applications for customers and end users.
    Key Responsibilities
    Lead DevOps functions across the organization with a focus on scalability, automation, orchestration, and reliability.
    Automate, orchestrate, and administer Kubernetes clusters and Docker containers in both on-premises and AWS cloud environments.
    Design, implement, and maintain CI/CD pipelines, ensuring seamless integration with all components of the SDLC (Software Development Life Cycle).
    Manage and optimize environments for code deployment, infrastructure provisioning, and system migrations.
    Collaborate with development, QA, and operations teams to align DevOps processes with Agile/Scrum methodologies.
    Champion best practices in DevOps, promoting automation, continuous integration, testing, and deployment.
    Utilize GitHub and GitLab for streamlined code collaboration and versioning.
    Troubleshoot complex deployment issues across various environments (legacy and cloud).
    Present strategies, progress, and results to both technical and non-technical stakeholders, ensuring alignment and transparency.
    The role may involve 24/7 operational support and on-call rotations in the future.
    Required Qualifications
    7+ years of experience in DevOps engineering with a proven track record of leading deployment strategies and best practices.
    Strong experience with Kubernetes, Docker, and GitHub automation scripting and deployments.
    Hands-on expertise (including scripting) in CI/CD pipeline delivery, environment management, code deployment, and system migration.
    Solid understanding of both legacy systems and modern cloud-based development environments.
    Familiarity with SDLC tools and integration points across development, testing, and release processes.
    Strong knowledge of Agile/Scrum methodologies and their implementation within DevOps frameworks.
    Exceptional communication and presentation skills; able to interface with cross-functional teams and act as an active advisor and change initiator.
    Comfortable working in a fast-paced and demanding environment with tight deadlines and shifting priorities.
    Bachelor's degree in Engineering, Mathematics, Statistics, or Physical Sciences, or equivalent experience.
    Certifications in DevOps, AWS services, etc. are desirable but not required.
    You are a great fit if you have one or many of these skills:
    On-Prem and AWS
    Deep understanding and hands-on experience with:
    Compute: EC2, Lambda, Kubernetes, Docker, Ansible, Terraform, etc.
    Storage: S3, EBS, EFS, Glacier
    Networking: VPC, Subnets, Route 53, NAT, Transit Gateway, ALB/NLB, Direct Connect
    IAM: Policies, Roles, STS, least privilege, service-linked roles
    Monitoring: CloudWatch, X-Ray, AWS Config, Grafana, Prometheus, etc.
    Security: KMS, Secrets Manager, Systems Manager Parameter Store, GuardDuty, Security Hub
    Infrastructure as Code (IaC)
    Expert with at least one: Terraform (most in demand), CloudFormation, etc.
    Modular, reusable, version-controlled infrastructure
    Experience with drift detection, change sets, and rollback strategies
    CI/CD Pipeline Automation
    Build and maintain complex pipelines
    Integration with GitHub Actions, GitLab CI/CD, Jenkins, etc.
    Practices: canary, blue/green, and rolling deployments
    Test automation, linting, static/dynamic code analysis
    Pipeline failure recovery and rollback automation
    Containerization and Orchestration
    Expert-level with Docker (build, tag, secure, scan, push)
    Container orchestration using ECS with Fargate or EC2, and Kubernetes with Helm, ArgoCD, etc.
    Manage scaling, service discovery, health checks, and secret injection
    Observability and Logging
    Design and implement monitoring, alerting, metrics, and dashboards
    DevSecOps practices integrated into pipelines
    IAM design and security protocols for audit and compliance
    Compliance (HIPAA, SOC 2, ISO 27001, etc.)
    Secrets and key management via AWS Secrets Manager, etc.
    Tooling & Scripting
    Proficient in scripting and tooling
    Linux shell scripting (RHEL, Alma, Ubuntu, etc.)
    CLI tools: AWS CLI, curl, etc.
    Automation with Terraform, Ansible, Chef, etc.
    Collaboration & Leadership Experience
    Team player; collaborates closely with Dev, Security, and QA teams
    Helps set infrastructure standards
    Leads incident response (IM), change management (CM), and postmortems (RCA)
    Drives DevOps culture: continuous learning, feedback loops, automation first
    Why Dallas? Why In-Office?
    Building a futuristic trusted global network requires intense, high-bandwidth collaboration. We believe that being physically present allows for spontaneous whiteboard sessions, rapid feedback loops, and the deep relationship-building crucial to tackling this mission. You'll have unparalleled access to the entire team, accelerating your learning, impact, and ability to influence the trajectory of Noetic. We're building a tight-knit, mission-driven culture here in Dallas, and your presence is key.
    Benefits & Perks
    Competitive salary and performance-based bonuses.
    Comprehensive health, dental, and vision insurance.
    401(k) plan with company match.
    Unlimited paid time off.
    On-site gym.
    Daily lunch.
    We welcome applications from all qualified candidates who are authorized to work in the U.S. However, we are unable to provide visa sponsorship at this time. Noetic Global Corp. is an equal opportunity employer. Noetic complies with all applicable federal, state, and local laws regarding recruitment and hiring. All qualified applicants are considered for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, citizenship status, disability, protected veteran status, or any other category protected by applicable federal, state, or local laws. No phone calls or agencies please.
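The canary and rolling deployment practices this listing calls for can be sketched in plain Python; the batching logic below is a generic illustration, not any specific tool's API, and the instance names and percentages are hypothetical.

```python
def rollout_batches(instances: list[str], canary: int = 1,
                    batch_pct: int = 25) -> list[list[str]]:
    """Split a fleet into a small canary batch followed by rolling batches.

    The canary receives new code first; if health checks pass, the
    remaining instances are updated in batches of at most batch_pct
    percent of the fleet, limiting the blast radius of a bad release.
    """
    canary_batch, rest = instances[:canary], instances[canary:]
    size = max(1, len(instances) * batch_pct // 100)
    batches = [canary_batch] if canary_batch else []
    for i in range(0, len(rest), size):
        batches.append(rest[i:i + size])
    return batches
```

A real pipeline would gate each batch on health checks and trigger an automated rollback on failure, which is the "pipeline failure recovery" practice named above.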
    $81k-108k yearly est. 2d ago
  • Senior DevOps Engineer

    Qorali

    Requirements engineer job in Dallas, TX

    Qorali is excited to share a new role that can take your career to the next level! This role works with modern technologies that integrate deeply with different platforms and operations, and offers significant opportunities for continuous growth within the company. You will work with teams to implement monitoring practices that enhance the environment and efficiency across both cloud and on-prem spaces.
    Expectations for the role
    Track metrics with alerts, notifications, and runbooks for operational monitoring, availability, and scalability
    Implement optimizations and resolutions for different services with the team
    Respond to production incidents while preserving automation
    Lead team improvement through research, retrospectives, and discussion/code reviews
    Mentor junior team members
    Maintain large-scale systems with the ability to troubleshoot and problem-solve
    Technical Skills
    6+ years of DevOps experience
    AWS (preferred) or Azure
    Experience monitoring environments with tools such as Splunk, AppDynamics, Datadog, Prometheus, or Grafana
    Scripting languages (Java, Python)
    Container creation in Kubernetes, Docker, or Rancher
    CI/CD experience (Jenkins preferred)
    Leveraging language models to enhance DevOps automation workflows
    Benefits
    15% bonus
    20+ PTO days
    6% 401k match
    Health, vision, dental, and life plans
    Two days of remote work per week
    This role is unable to support visa sponsorship or C2C. C2H.
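The "metrics with alerts" expectation above usually means alerting on a sustained breach rather than a single noisy sample. A minimal sketch of that idea, with the window size and threshold as illustrative assumptions:

```python
from collections import deque

def breaches(samples, threshold, window=3):
    """Return True when the rolling mean of the last `window` samples
    exceeds `threshold`.

    Averaging over a window is the shape of a basic sustained-load
    alert: a single spike does not page, a sustained breach does.
    """
    recent = deque(maxlen=window)
    for value in samples:
        recent.append(value)
        if len(recent) == window and sum(recent) / window > threshold:
            return True
    return False
```

Production tools like Prometheus express the same idea declaratively (e.g. an alert rule over an averaged range query); the loop above just makes the mechanism explicit.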
    $81k-108k yearly est. 16h ago
  • Data Engineer

    SVT It Infotech

    Requirements engineer job in Dallas, TX

    We are seeking a highly skilled Data Engineer with 5+ years of hands-on experience to design, build, and optimize scalable data pipelines and modern data platforms. The ideal candidate will have strong expertise in cloud data engineering, ETL/ELT development, real-time streaming, and data modeling, with a solid understanding of distributed systems and best engineering practices.
    Responsibilities
    Design, develop, and maintain scalable ETL/ELT pipelines for ingestion, transformation, and processing of structured and unstructured data.
    Build real-time and batch data pipelines using tools such as Kafka, Spark, AWS Glue, Kinesis, or similar technologies.
    Develop and optimize data models, warehouse layers, and high-performance data architectures.
    Implement data quality checks and data validation frameworks, and ensure data reliability and consistency across systems.
    Collaborate with Data Analysts, Data Scientists, and cross-functional teams to deliver efficient and accessible data solutions.
    Deploy and manage data infrastructure using AWS / Azure / GCP cloud services.
    Write clean, efficient, and reusable code in Python/Scala/SQL.
    Monitor pipeline performance, troubleshoot issues, and drive continuous improvement.
    Implement CI/CD pipelines, version control, and automation for production workloads.
    Ensure data governance, security, and compliance in all engineering workflows.
    Required Skills & Qualifications
    5+ years of experience as a Data Engineer in a production environment.
    Strong proficiency in Python, SQL, and distributed processing frameworks (Spark, PySpark, Hadoop).
    Hands-on experience with cloud platforms: AWS, Azure, or GCP.
    Experience with streaming technologies: Kafka, Kinesis, Spark Streaming, Flink (any).
    Strong understanding of data warehousing concepts (Star/Snowflake schemas, dimensional modeling).
    Experience with ETL/ELT tools (Glue, Airflow, dbt, Informatica, etc.).
    Solid understanding of DevOps practices: Git, CI/CD, Terraform/CloudFormation (bonus).
    Experience working with relational and NoSQL databases (Redshift, Snowflake, BigQuery, DynamoDB, etc.).
    Excellent problem-solving, communication, and analytical skills.
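The data quality checks and validation frameworks mentioned above often start as simple column-level assertions run before a load. A hedged sketch in plain Python, with the column name and the 5% null budget as illustrative assumptions:

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def validate(rows: list[dict], column: str, max_null_rate: float = 0.05) -> float:
    """Fail fast before loading downstream if nulls exceed the budget."""
    rate = null_rate(rows, column)
    if rate > max_null_rate:
        raise ValueError(
            f"{column}: null rate {rate:.1%} exceeds budget {max_null_rate:.1%}"
        )
    return rate
```

Frameworks like Great Expectations or dbt tests generalize this pattern to declarative suites of checks; the core operation is the same fail-fast gate.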
    $76k-103k yearly est. 16h ago
  • Azure Data Engineer Sr

    Resolve Tech Solutions 4.4company rating

    Requirements engineer job in Irving, TX

    Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
    Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
    Deep understanding of data engineering fundamentals, including database architecture and design, extract-transform-load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
    Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
    Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
    $76k-100k yearly est. 3d ago
  • Data Engineer (Python, PySpark, Databricks)

    Anblicks 4.5company rating

    Requirements engineer job in Dallas, TX

    Job Title: Data Engineer (Python, PySpark, Databricks)
    We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
    Data Engineer - SQL, Python, and PySpark Expert (Onsite - Dallas, TX)
    Key Responsibilities
    Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments
    Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
    Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
    Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
    Ensure data quality, consistency, security, and lineage across all stages of data processing
    Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
    Document data flows, logic, and transformation rules
    Troubleshoot performance and quality issues in batch and real-time pipelines
    Support compliance-related reporting (e.g., HMDA, CFPB)
    Required Qualifications
    6+ years of experience in data engineering or data development
    Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
    Strong hands-on skills in Python for scripting, data wrangling, and automation
    Proficiency in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
    Experience with mortgage banking data sets and domain knowledge is highly preferred
    Strong understanding of data modeling (dimensional, normalized, star schema)
    Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
    Familiarity with ETL tools and orchestration frameworks (e.g., Airflow, ADF, dbt)
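The star-schema modeling this listing asks for boils down to joining wide fact rows against small dimension tables. As a hedged illustration, the pure-Python dict lookup below is the in-memory analogue of a Spark broadcast join against a small dimension; the field names are hypothetical mortgage-flavored examples, not a real schema.

```python
def enrich_facts(facts: list[dict], dim: list[dict], key: str = "loan_id") -> list[dict]:
    """Join fact rows to a dimension via a dict (hash) lookup.

    Building the index once and probing it per fact row is exactly what
    a broadcast hash join does at scale. Unmatched facts keep their row
    with the dimension fields simply absent (a left join).
    """
    index = {d[key]: d for d in dim}
    return [{**f, **index.get(f[key], {})} for f in facts]
```

In PySpark the equivalent would be `facts_df.join(broadcast(dim_df), "loan_id", "left")`; the sketch just makes the hash-join mechanics visible.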
    $75k-102k yearly est. 2d ago
  • Data Engineer

    Ledelsea

    Requirements engineer job in Irving, TX

    W2 Contract-to-Hire Role with Monthly Travel to the Dallas, Texas Area
    We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems.
    Duties/Responsibilities:
    Develop and optimize complex SQL queries to support internal analytics and external client data requests.
    Generate custom data lists and extracts based on client specifications and business rules.
    Design, build, and maintain efficient ETL pipelines using Azure Data Factory.
    Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
    Work with Salesforce data; familiarity with SOQL is preferred but not required.
    Support Power BI reporting through basic data modeling and integration.
    Assist in implementing MLOps practices for model deployment and monitoring.
    Use Python for data manipulation, automation, and integration tasks.
    Ensure data quality, consistency, and security across all workflows and systems.
    Required Skills/Abilities/Attributes:
    5+ years of experience in data engineering or a related field.
    Strong proficiency in SQL, including query optimization and performance tuning.
    Experience with Azure Data Factory, including git repositories and pipeline deployment.
    Ability to translate client requirements into accurate and timely data outputs.
    Working knowledge of Python for data-related tasks.
    Strong problem-solving skills and ability to work independently.
    Excellent communication and documentation skills.
    Preferred Skills/Experience:
    Previous knowledge of building pipelines for ML models.
    Extensive experience creating/managing stored procedures and functions in MS SQL Server.
    2+ years of experience in cloud architecture (Azure, AWS, etc.).
    Experience with code management systems (Azure DevOps).
    2+ years of reporting design and management (Power BI preferred).
    Ability to influence others through the articulation of ideas, concepts, benefits, etc.
    Education and Experience:
    Bachelor's degree in a computer science field or applicable business experience.
    Minimum 3 years of experience in a Data Engineering role.
    Healthcare experience preferred.
    Physical Requirements:
    Prolonged periods sitting at a desk and working on a computer.
    Ability to lift 20 lbs.
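ETL pipelines like the Azure Data Factory ones described above commonly use a watermark (high-water-mark) pattern for incremental loads: only pull rows newer than the last successfully processed timestamp. A minimal plain-Python sketch of the idea, with the field name and integer timestamps as illustrative assumptions:

```python
def incremental_extract(rows: list[dict], watermark, ts_field: str = "updated_at"):
    """Return rows newer than the last watermark, plus the new watermark.

    Persisting the returned watermark after a successful load is what
    makes the next run pick up only the delta instead of a full reload.
    """
    fresh = [r for r in rows if r[ts_field] > watermark]
    new_watermark = max((r[ts_field] for r in fresh), default=watermark)
    return fresh, new_watermark
```

In ADF the same pattern is typically a lookup of the stored watermark, a source query filtered on it, and a final activity that writes the new watermark back.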
    $76k-103k yearly est. 16h ago
  • Data Engineer

    Beaconfire Inc.

    Requirements engineer job in Dallas, TX

    Junior Data Engineer
    DESCRIPTION: BeaconFire is based in Central NJ and specializes in Software Development, Web Development, and Business Intelligence; we are looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
    Qualifications:
    Passion for data and a deep desire to learn.
    Master's Degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline.
    Intermediate Python; experience in data processing (NumPy, pandas, etc.) is a plus.
    Experience with relational databases (SQL Server, Oracle, MySQL, etc.).
    Strong written and verbal communication skills.
    Ability to work both independently and as part of a team.
    Responsibilities:
    Collaborate with the analytics team to find reliable data solutions that meet business needs.
    Design and implement scalable ETL or ELT processes to support the business demand for data.
    Perform data extraction, manipulation, and production from database tables.
    Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
    Build and incorporate automated unit tests and participate in integration testing efforts.
    Work with teams to resolve operational & performance issues.
    Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to.
    Compensation: $65,000.00 to $80,000.00 /year
    BeaconFire is an E-Verify company. Work visa sponsorship is available.
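The "automated unit tests" responsibility above usually means keeping transform steps small and pure so they are trivially assertable. A hedged sketch of such a step; the field names and coercions are hypothetical, chosen only to show the shape:

```python
def normalize_record(raw: dict) -> dict:
    """Trim and title-case the name, coerce the amount to a rounded float.

    A pure function with no I/O: the whole transform can be covered by
    a one-line unit test on a literal input.
    """
    return {
        "name": raw["name"].strip().title(),
        "amount": round(float(raw["amount"]), 2),
    }
```

Keeping extraction (I/O) separate from transforms like this is what makes an ETL codebase unit-testable without a database fixture.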
    $65k-80k yearly 16h ago

Learn more about requirements engineer jobs

How much does a requirements engineer earn in McKinney, TX?

The average requirements engineer in McKinney, TX earns between $60,000 and $110,000 annually. This compares to the national average requirements engineer range of $62,000 to $120,000.

Average requirements engineer salary in McKinney, TX

$81,000

What are the biggest employers of Requirements Engineers in McKinney, TX?

The biggest employers of Requirements Engineers in McKinney, TX are:
  1. KONE
  2. Motorola Solutions
  3. Procaccianti Group
  4. Robbinsre
  5. Tpghotelsandresorts
  6. Tru By Hilton McKinney