
Requirements engineer jobs in Glen Burnie, MD

- 3,869 jobs
  • Applied AI Interface Engineer

    ManTech (4.5 company rating)

    Requirements engineer job in Alexandria, VA

    MANTECH seeks a motivated, career- and customer-oriented Applied AI Interface Engineer to join our team in Alexandria, VA. In this position, you will act as a Software Engineer designing and implementing services and components for AI applications.

    Responsibilities include, but are not limited to:
    • Design and build user interfaces using modern UX/UI standards.
    • Develop, implement, and maintain full-stack software solutions for AI-enabled applications.
    • Work closely with the Software Architect to understand project requirements and translate them into technical specifications.
    • Develop and integrate AI and ML capabilities on a cloud-hosted data platform that supports significant market adoption, high performance, and strict access control and governance.
    • Stay current with advancements in AI, machine learning, and software engineering, incorporating best practices into the development process.
    • Document software designs, code, and processes to ensure maintainability, scalability, and knowledge sharing among team members.
    • Participate in code reviews and provide constructive feedback to peers to ensure code quality, adherence to coding standards, and knowledge transfer within the team.

    Minimum Qualifications:
    • Bachelor's degree in Computer Science, Information Technology, or a related field.
    • 4 or more years (2 with a Master's) of experience in software development, systems integration, data management, or related fields.
    • Proficiency in JavaScript, including familiarity with modern frameworks and libraries such as React, Angular, or Vue.js.
    • Solid knowledge of HTML and CSS, including responsive design principles and front-end workflows.
    • Knowledge of Python and REST API frameworks.
    • Basic understanding of user interface (UI) and user experience (UX) design principles, with the ability to collaborate with designers to translate wireframes into functional code.
    • Strong analytical and problem-solving abilities, with the capacity to debug and resolve issues in front-end code.
    • Experience with generative AI, including API access to large language models (LLMs).

    Preferred Qualifications:
    • Experience with Docker, Kubernetes, or other containerization technology.
    • Experience working in AWS environments.
    • Strong analytical and problem-solving skills.
    • Excellent communication and interpersonal skills.
    • Ability to work effectively in a team-oriented environment.
    • Experience working with data ingest and transformation.

    Clearance Requirements:
    • Must possess a current and active TS/SCI clearance.

    Physical Requirements:
    • Must be able to remain in a stationary position 50% of the time.
    • Occasionally moves about inside the office to access file cabinets and office machinery, or to communicate with co-workers, management, and customers via email, phone, or virtual communication, which may involve delivering presentations.
    $63k-79k yearly est. 1d ago
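The posting above asks for "knowledge of Python and REST API frameworks." As a rough illustration of that skill, and not anything from ManTech's codebase, here is a minimal route-dispatcher sketch; the `Router` class, paths, and handler names are all invented for the example.

```python
# Minimal sketch of a REST-style route dispatcher in plain Python.
# All names (Router, /health, handlers) are illustrative assumptions.

class Router:
    """Maps (method, path) pairs to handler functions."""

    def __init__(self):
        self._routes = {}

    def route(self, method, path):
        def decorator(func):
            self._routes[(method.upper(), path)] = func
            return func
        return decorator

    def dispatch(self, method, path):
        # Look up the handler; return a 404-style pair when unregistered.
        handler = self._routes.get((method.upper(), path))
        if handler is None:
            return 404, {"error": "not found"}
        return 200, handler()

api = Router()

@api.route("GET", "/health")
def health():
    return {"status": "ok"}

print(api.dispatch("GET", "/health"))   # (200, {'status': 'ok'})
```

Real frameworks such as Flask or FastAPI provide the same mapping plus HTTP parsing, serialization, and middleware; the sketch only shows the dispatch idea.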
  • Quantitative Engineer

    KMM Technologies, Inc. (3.4 company rating)

    Requirements engineer job in Reston, VA

    Quantitative Engineer (Testing, Automation & Model Validation)
    Duration: Long term
    Must have: Python development; AWS cloud services; finance/mortgage experience.

    Job Description:
    We are seeking a Quantitative Engineer for a contract position with strong expertise in testing, automation, and quantitative analysis to join the MFCW application team. The ideal candidate will have hands-on experience with AWS cloud services, Python development, and a solid understanding of software testing methodologies combined with quantitative modeling principles. This role involves designing and implementing automated test frameworks, validating complex models, generating test cases, performing functional and regression testing, and maintaining comprehensive documentation to ensure the accuracy and reliability of analytical systems.

    Responsibilities:
    • Automated Testing & Frameworks: Design, develop, and maintain automated test scripts and frameworks for regression and performance testing of quantitative models and applications.
    • Test Case Generation & Execution: Generate detailed test cases based on functional requirements, model specifications, and quantitative assumptions.
    • Model Vetting & Validation: Perform rigorous testing of financial and analytical models, including sensitivity analysis, back-testing, and stress testing in AWS environments.
    • Data Setup & Simulation: Prepare datasets and execute model runs in AWS for validation and scenario analysis.
    • Defect Resolution & Optimization: Collaborate with development and modeling teams to identify, troubleshoot, and resolve defects in model logic and automation workflows.
    • Documentation & Governance: Document test plans and validation results, and maintain compliance with model governance standards.
    • Continuous Improvement: Enhance and optimize existing test automation processes, ensuring scalability and reproducibility.
    • Cloud Integration: Utilize AWS services (EC2, S3, Lambda) for deployment, testing environments, and automation pipelines.
    • Quality Assurance: Ensure code quality and testing rigor through best practices and continuous integration.

    Required Skills & Qualifications:
    • Education: Bachelor's degree in Computer Science, Engineering, Applied Mathematics, or a related discipline; minimum of 8 years of experience in software testing, automation, or quantitative engineering.
    • Programming: Strong proficiency in Python for development and automation; exposure to Java and R is desirable.
    • Cloud Expertise: Hands-on experience with AWS services (EC2, S3, Lambda) and cloud-based testing environments.
    • Testing Expertise: Ability to design and execute unit, integration, regression, and model validation tests; experience creating and managing test cases and documentation.
    • Automation: Knowledge of test automation frameworks and tools (e.g., PyTest, Selenium, or similar); familiarity with microservices and distributed systems is a plus.
    • Quantitative Analysis: Understanding of statistical testing, model risk concepts, and performance metrics for analytical models.
    • Version Control & CI/CD: Familiarity with Git and continuous integration/deployment pipelines.
    • Soft Skills: Strong analytical and problem-solving skills; excellent communication and documentation abilities.
    $74k-98k yearly est. 1d ago
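The regression-testing duty this posting describes (rerun quantitative models, compare against recorded results) can be sketched in a few lines. The toy discount-factor "model", the baseline numbers, and the tolerance are all made-up stand-ins, not anything from the actual MFCW system.

```python
# Hedged sketch of a model regression check: rerun a model on fixed
# scenarios and flag any output that drifts beyond a tolerance from the
# previously recorded baseline. All values here are illustrative.
import math

def model(rate, years, principal=100.0):
    # Toy continuous-discounting "model" standing in for a real pricing model.
    return principal * math.exp(-rate * years)

BASELINE = {  # outputs recorded from an earlier, approved model run
    (0.05, 1): 95.1229,
    (0.05, 10): 60.6531,
}

def regression_failures(tolerance=1e-3):
    """Return scenarios whose current output drifted beyond the tolerance."""
    failures = []
    for (rate, years), expected in BASELINE.items():
        got = model(rate, years)
        if abs(got - expected) > tolerance:
            failures.append(((rate, years), expected, got))
    return failures

print(regression_failures())  # []
```

In practice each scenario would be a PyTest case (the tool the posting names), with the baseline stored under version control so drift is reviewable.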
  • AWS DevSecOps Engineer

    VITG

    Requirements engineer job in Ellicott City, MD

    VITG is seeking a DevSecOps Engineer responsible for automating security integration throughout the CI/CD pipeline and the AWS cloud environment. This role will "shift security left" by taking the lead on implementing security-as-code tools, managing their usage, ensuring their proper configuration and compliance, and proactively embedding security policy into the development process. Our ideal candidate is passionate about being part of a change and working in a dynamic, highly collaborative environment focused on speed, stability, and security. The DevSecOps Engineer provides hands-on expertise to integrate and maintain the security posture for corporate systems that support Federal programs, ensuring a successful program Authority to Operate (ATO). You will be responsible for developing, monitoring, and maintaining systems and procedures to safeguard internal information systems, networks, and CI/CD pipelines through automation.

    Applicant Requirements:
    • US citizen or authorized to work in the United States.
    • Must have lived in the USA for three of the last five years.
    • Must be able to obtain a US federal government badge and be eligible for a Public Trust clearance.
    • Must be able to pass a background check, including a drug test.

    Job Responsibilities:
    • Develop, implement, and maintain security automation throughout the entire SDLC, integrating security into CI/CD pipelines using Jenkins/GitHub and Infrastructure-as-Code (IaC) principles.
    • Run and manage security scans with tools such as Snyk (SAST/SCA) and establish automated tracking and enforcement mechanisms for vulnerability remediation.
    • Integrate and manage security workloads running on AWS containers, and ensure container image scanning and runtime security policies are enforced.
    • Design, manage, and maintain source code for AWS infrastructure in GitHub and manage automated pipelines, ensuring security checks and gates are embedded in every deployment.
    • Maintain security information in JIRA/Confluence and actively participate in agile DevSecOps practices, promoting a "Secure-by-Design" culture.
    • Provide hands-on support for developing, coordinating, implementing, and enforcing information systems security policies, standards, and methodologies as code.
    • Maintain the operational security posture of the enterprise Salesforce FISMA system by ensuring security is baked into configuration and deployment practices.
    • Implement security tools and policy-as-code to ensure configurations remain compliant and properly configured, while ensuring a successful program ATO.
    • Automate vulnerability/risk assessment analysis to support continuous monitoring and authorization.
    • Manage changes to the system and assess their security impact through automated compliance checks.
    • Assist with the management of security aspects of the information system and perform day-to-day security operations of the system.
    • Evaluate security solutions to ensure they meet security requirements for processing classified information.
    • Perform vulnerability/risk assessment analysis to support certification and accreditation.
    • Prepare and review documentation, including System Security Plans (SSPs), Risk Assessment Reports, Certification and Accreditation (C&A) packages, and System Requirements Traceability Matrices (SRTMs).

    Qualifications & Skills:
    • Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related discipline.
    • Minimum of 6 years of related experience in Information Technology, including 4 years in the DevSecOps or Application Security (AppSec) space.
    • Demonstrated hands-on experience in cloud environments such as AWS Commercial and GovCloud, specifically with security automation, logging, and monitoring services (e.g., GuardDuty, Security Hub, CloudTrail).
    • Expertise in CI/CD pipeline management and the integration of security tools for Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA).
    • Required: Strong hands-on experience with AWS, Snyk, GitHub, JIRA, and Confluence to implement and manage the end-to-end DevSecOps toolchain.
    • Demonstrated work experience with Infrastructure-as-Code (IaC) security (e.g., using Checkov or Terrascan on Terraform/CloudFormation).
    • Strong background in the certification and accreditation (ATO) process and the ability to automate compliance checks against frameworks such as FISMA, NIST, and FedRAMP.
    • Working knowledge of business security practices, current security automation tools, and policy-as-code implementation.
    • Demonstrated working knowledge of vulnerability assessment and penetration testing processes, with a focus on automating these checks.
    • Experience with the Government Agency Security Assessment Process in support of maintaining and/or establishing an ATO and the appropriate boundary.
    • Experience with, understanding of, and adherence to guidelines such as FISMA, NIST, HIPAA, and IRS Pub 1075.
    • (Preferred) Experience with the Salesforce platform and tool ecosystem.
    • (Preferred) Configuration/setup of external client applications and secure communications (TLS) on Salesforce or another platform.
    • (Preferred) AppOmni: has used it and can manage issues and perform new org additions and configurations.

    Certifications:
    • AWS DevOps or SysOps (or equivalent) certification required.
    • Industry certifications such as CISSP, CEH, or GIAC preferred.

    Job Type: Full time
    Salary: BOE
    Benefits:
    • 401(k) with employer contribution
    • Medical/dental/vision insurance (option for full coverage for the employee)
    • Life and ST/LT disability insurance
    • Professional development opportunities
    Schedule: 8-hour shift; may include minimal after-hours support depending on deployment schedule.
    Work Type: Hybrid remote in Ellicott City, MD 21043; 1 to 2 days in office weekly.
    $65k-87k yearly est. 3d ago
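The "security gates embedded in every deployment" idea in this posting boils down to policy-as-code: parse scan output and fail the pipeline when findings exceed an allowed severity. The finding format below is an invented stand-in, not the actual Snyk output schema.

```python
# Sketch of a CI security gate: block the build when a scan (e.g. SAST/SCA
# results parsed into dicts) reports findings above an allowed severity.
# The finding fields and severity names are assumptions for illustration.

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, max_allowed="medium"):
    """Return (passed, blocking_findings) for a list of scan findings."""
    limit = SEVERITY_RANK[max_allowed]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] > limit]
    return (not blocking, blocking)

scan = [
    {"id": "VULN-1", "severity": "low"},
    {"id": "VULN-2", "severity": "critical"},
]
passed, blocking = gate(scan)
print(passed, [f["id"] for f in blocking])  # False ['VULN-2']
```

Encoding the threshold in code (rather than a reviewer's judgment) is what makes the policy enforceable and auditable in every pipeline run.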
  • Platform Engineer

    SNI Technology

    Requirements engineer job in Arlington, VA

    As a disclaimer: I do not work with third-party agencies or C2C firms on active positions, and my client is only accepting USC or GC candidates at this time. Please do not contact me regarding positions if you do not fit the parameters above.

    Details:
    • Arlington, VA area
    • Senior Platform Engineer (DevOps)
    • Mid-to-upper 100s base salary range, before benefits and bonus
    • 4 days onsite in the Arlington area (1 day remote)

    Senior Platform Engineer - DevOps Focus
    Join a dynamic, multi-disciplinary team responsible for the architecture, design, implementation, and support of public and private cloud environments. This role involves building and maintaining systems and applications across diverse platforms, with a strong emphasis on automation, orchestration, and operational excellence.

    Key Responsibilities:
    • Design, implement, and maintain systems and applications in multi-vendor cloud environments.
    • Develop tools and processes for automated deployment, monitoring, and infrastructure management.
    • Lead engineering efforts on projects, including architectural design and implementation.
    • Evaluate and optimize software, systems, and workflows to enhance performance and reliability.
    • Architect automation solutions using tools like Terraform, CloudFormation, Ansible, and Puppet.
    • Ensure high availability and performance of systems through proactive monitoring and tuning.
    • Collaborate across teams to troubleshoot and optimize production systems and services.
    • Mentor junior engineers and contribute to knowledge sharing and team development.
    • Maintain documentation and ensure compliance with audit and security standards.

    Qualifications:
    • Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
    • 10+ years of relevant experience, including at least 3 years in cloud platforms, CI/CD, scripting, or infrastructure-as-code.
    • Strong background in DevOps, system administration, or software development.
    • Expertise in software build and deployment pipelines.
    • Solid understanding of cybersecurity principles in DevOps and CI/CD environments.
    $68k-90k yearly est. 1d ago
  • Quantitative Engineer (Testing, Automation & Model Validation)

    Technology Ventures (4.6 company rating)

    Requirements engineer job in Reston, VA

    We are seeking a Quantitative Engineer for a contract position with strong expertise in testing, automation, and quantitative analysis to join the application team. The ideal candidate will have hands-on experience with AWS cloud services, Python development, and a solid understanding of software testing methodologies combined with quantitative modeling principles. This role involves designing and implementing automated test frameworks, validating complex models, generating test cases, performing functional and regression testing, and maintaining comprehensive documentation to ensure the accuracy and reliability of analytical systems.

    Responsibilities:
    • Automated Testing & Frameworks: Design, develop, and maintain automated test scripts and frameworks for regression and performance testing of quantitative models and applications.
    • Test Case Generation & Execution: Generate detailed test cases based on functional requirements, model specifications, and quantitative assumptions.
    • Model Vetting & Validation: Perform rigorous testing of financial and analytical models, including sensitivity analysis, back-testing, and stress testing in AWS environments.
    • Data Setup & Simulation: Prepare datasets and execute model runs in AWS for validation and scenario analysis.
    • Defect Resolution & Optimization: Collaborate with development and modeling teams to identify, troubleshoot, and resolve defects in model logic and automation workflows.
    • Documentation & Governance: Document test plans and validation results, and maintain compliance with model governance standards.
    • Continuous Improvement: Enhance and optimize existing test automation processes, ensuring scalability and reproducibility.
    • Cloud Integration: Utilize AWS services (EC2, S3, Lambda) for deployment, testing environments, and automation pipelines.
    • Quality Assurance: Ensure code quality and testing rigor through best practices and continuous integration.

    Required Skills & Qualifications:
    • Education: Bachelor's degree in Computer Science, Engineering, Applied Mathematics, or a related discipline; minimum of 8 years of experience in software testing, automation, or quantitative engineering.
    • Programming: Strong proficiency in Python for development and automation; exposure to Java and R is desirable.
    • Cloud Expertise: Hands-on experience with AWS services (EC2, S3, Lambda) and cloud-based testing environments.
    • Testing Expertise: Ability to design and execute unit, integration, regression, and model validation tests; experience creating and managing test cases and documentation.
    • Automation: Knowledge of test automation frameworks and tools (e.g., PyTest, Selenium, or similar); familiarity with microservices and distributed systems is a plus.
    • Quantitative Analysis: Understanding of statistical testing, model risk concepts, and performance metrics for analytical models.
    • Version Control & CI/CD: Familiarity with Git and continuous integration/deployment pipelines.
    • Soft Skills: Strong analytical and problem-solving skills; excellent communication and documentation abilities.
    $68k-94k yearly est. 3d ago
  • UiPath Engineer

    Wonderbotz

    Requirements engineer job in McLean, VA

    Need only local candidates from the nearby area.

    Top skills (must have): UiPath, Document Understanding, Python.

    Developer Role and Responsibilities:
    Your specific duties will be based on your experience as a UiPath developer. In this role, you will be responsible for designing and delivering UiPath solutions in accordance with WonderBotz standards and best practices. You will work closely with our enthusiastic team of both business and technical specialists. You will be part of a fast-growing and successful team that helps our clients get the maximum benefit.

    Expected Activities:
    • Support development of UiPath strategies, including assessing opportunities.
    • Under the supervision of more experienced developers, define, design, and develop automation on UiPath platforms for clients, including POCs, pilots, and production automation. More senior developers will be expected to work independently.
    • Participate in workshops and interviews with business process SMEs to gather and confirm business process details, and document process definitions. More senior developers will lead these workshops and interviews.
    • Participate in design and configuration sessions and apply feedback to improve and enhance work products. More senior developers will lead these sessions.
    • Work alongside newly trained developers to guide and mentor them.

    Qualifications and Skills:
    • Have mastered, or have a strong desire to master, a leading RPA tool (UiPath a must; Blue Prism, Automation Anywhere), including advanced RPA vendor certification.
    • At least one year of hands-on experience with at least one of the following languages or technologies: .NET, Java, VB, C#/C, HTML/CSS, Python, web services, mainframe, web applications, SQL, data integration tools, or technical automation tools. More senior developers should have a minimum of 2 to 4 years of this hands-on experience.
    • Reasonable proficiency in reading Microsoft Visio or another equivalent process flow-charting tool, or workflow-based logic.
    • Extra: any prior work or academic experience with document management and processing tools (e.g., Kofax, ABBYY, Datacap), data integration tools (e.g., Informatica, Microsoft SSIS), technical automation tools (e.g., shell scripting, PHP), or business process management tools (e.g., Pega).
    $67k-90k yearly est. 2d ago
  • Platform Engineer (Azure)

    Kamispro

    Requirements engineer job in Baltimore, MD

    Seeking a Platform Engineer with strong cloud engineering experience, specializing in building and managing scalable infrastructure in Microsoft Azure. This role supports cross-functional teams by designing, implementing, and maintaining Infrastructure as Code (IaC) to enable reliable, secure, and efficient cloud operations. Hybrid schedule (onsite three days per week).

    *Must be legally authorized to work in the U.S. without current or future sponsorship.

    Responsibilities:
    • Design, implement, and manage Azure cloud infrastructure using IaC.
    • Develop automation scripts and tools to streamline deployments, scaling, and operational tasks.
    • Monitor and optimize system performance, capacity, and cost.
    • Conduct incident response, root-cause analysis, and post-incident reviews.
    • Implement monitoring, logging, and tracing solutions to ensure observability.
    • Collaborate with development and operations teams to improve system reliability and architecture.

    Requirements:
    • Bachelor's degree in Computer Science or a related field.
    • 4+ years of cloud engineering expertise, with strong technical experience in Virtual Networks, Load Balancers, VPN Gateways, Network Security Groups, and other relevant Azure networking components.
    • Experience managing microservices-based applications deployed on AKS.
    • Experience designing, deploying, and managing Kubernetes clusters within AKS (YAML, Helm).
    • Experience designing scalable, secure, and resilient solutions using Azure services, optimizing for cost and performance, and creating disaster recovery plans.
    • Experience implementing and maintaining IaC solutions using Terraform to automate the creation and management of Azure resources.
    • Experience using Azure Monitor, Application Insights, and other tools to track key metrics, set up alerts, and diagnose problems.
    • Well versed in Windows, with the skills and confidence to work on Linux when needed.
    • Experience with Azure CLI and PowerShell/Azure PowerShell for automation, provisioning, and configuration of Azure environments.
    • Proven ability to troubleshoot and optimize Azure-native services such as API Management, Azure Functions, Storage, Service Bus, Key Vault, and Container Apps.
    • Familiarity with GitHub and Azure DevOps (ADO) pipelines.
    • Excellent written and oral communication skills.

    Relevant Azure services: Azure Virtual Machines, Virtual Networks, ExpressRoute, Azure Active Directory, AD Connect, Operations Management Suite/Log Analytics, Azure Monitor, Azure Site Recovery, Azure Backup, Azure App Services, Azure Storage, Azure SQL Server, Azure Automation.

    Certifications are a plus: Azure Administrator, Azure Developer, Azure Architect, Azure DevOps, Terraform, MCSE, Security+.

    Nice to have:
    • Experience building and deploying Azure resources using IaC (Terraform, Ansible).
    • Experience creating and managing CI/CD pipelines using tools such as Azure DevOps (Git) to deploy both infrastructure and applications.
    • VMware ESXi.
    • System Center Configuration Manager.
    • Microsoft Intune.
    • Active Directory.
    • Bash experience.
    • Experience with serverless deployment, maintenance, and the associated toolchain.
    • Working knowledge of ITIL disciplines.
    $65k-87k yearly est. 16h ago
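The "automation scripts to streamline deployments" duty in this posting often comes down to making cloud calls resilient to transient failures. Below is a hedged retry-with-backoff sketch; the flaky provisioning function is simulated (it stands in for an Azure CLI or SDK call), and delays are collected rather than slept so the example runs instantly.

```python
# Sketch of retrying a transient-failure-prone provisioning call with
# exponential backoff. The "flaky_provision" operation is a simulated
# stand-in for a real cloud API call; nothing here is Azure SDK code.

def with_retries(operation, attempts=4, base_delay=1.0):
    """Call operation(); on failure, retry with exponentially growing delay.
    Returns (result, delays_used). Delays are recorded instead of slept so
    the sketch stays fast and testable."""
    delays = []
    for attempt in range(attempts):
        try:
            return operation(), delays
        except RuntimeError:
            if attempt == attempts - 1:
                raise            # out of attempts: surface the error
            delays.append(base_delay * (2 ** attempt))
    raise AssertionError("unreachable")

calls = {"n": 0}
def flaky_provision():
    calls["n"] += 1
    if calls["n"] < 3:           # fail twice, then succeed
        raise RuntimeError("transient API error")
    return "resource-id-123"

result, delays = with_retries(flaky_provision)
print(result, delays)  # resource-id-123 [1.0, 2.0]
```

A production version would sleep for the computed delay (ideally with jitter) and retry only on error codes known to be transient.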
  • NetOps Engineer

    Techperm Incorporated

    Requirements engineer job in Baltimore, MD

    Key Required Skills: Strong knowledge of the SMARTS platform (NetOps), Red Hat Linux, TCP/IP networking, and Git/Bitbucket.

    • Work with advanced technical principles, theories, and concepts.
    • Take on complex technical problems and provide innovative solutions.
    • Work with highly experienced technical resources.
    • All other duties as assigned or directed.

    Required:
    • Bachelor's degree in Computer Science, Mathematics, Engineering, or a related field. A Master's or Doctorate degree may substitute for required experience.
    • 5-7 years of experience with a high-level scripting language such as Python or Perl.
    • 5 or more years of experience as a Network Engineer focusing on network performance management.
    • 5 years of experience with the SMARTS enterprise network monitoring system, with at least 3 years specific to implementation and configuration of the SMARTS tool.
    • NetOps expertise and in-depth Broadcom knowledge.
    • 4 years of combined experience with RHEL versions 7 and 8.
    • Must be able to obtain and maintain a Public Trust (contract requirement).
    • Advanced Perl and Python scripting skills.
    • In-depth hands-on experience programming in a Java enterprise (J2EE) environment using platforms such as JavaServer Faces 2 (JSF 2), Enterprise JavaBeans 3 (EJB 3), and Java-based Ajax libraries.
    • Knowledge, experience, and advanced skills with the Broadcom Service Assurance (formerly SMARTS) monitoring tool.
    • Experience working in Red Hat Enterprise Linux environments.
    • TCP/IP networking knowledge.
    • Broadcom NetOps monitoring tools knowledge and experience is a plus.
    • VMware systems administration.
    • MS Visio experience creating network diagrams.
    • MS Project experience creating and updating project plans.
    • Ability to produce operational documentation using industry best practices in technical writing.

    *** The selected candidate must be willing to work on-site in Woodlawn, MD five days a week.

    Desired:
    • Advanced Perl, Python, and Java programming skills.
    • Skills and experience developing in JBoss EAP environments.
    • MySQL database management experience.
    • Knowledge of version control tools such as Git or Bitbucket.
    • Experience developing on ServiceNow is a plus.

    Education:
    • Bachelor's degree with 7+ years of experience.
    • Must be able to obtain and maintain a Public Trust (contract requirement).
    $65k-87k yearly est. 3d ago
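The scripting skills this NetOps posting asks for are typically applied to chores like parsing monitoring output. A small sketch, with an invented interface-status format (real SMARTS/Broadcom output differs):

```python
# Sketch of a Python monitoring-script chore: parse interface status lines
# and report which interfaces are down. The input format is an assumption
# made for the example, not actual SMARTS or Broadcom output.

RAW = """\
eth0 up 10.0.0.1
eth1 down 10.0.0.2
eth2 up 10.0.0.3
eth3 down 10.0.0.4
"""

def down_interfaces(text):
    """Return the names of interfaces whose state column reads 'down'."""
    down = []
    for line in text.splitlines():
        name, state, _addr = line.split()
        if state == "down":
            down.append(name)
    return down

print(down_interfaces(RAW))  # ['eth1', 'eth3']
```

The same pattern (split, filter, summarize) carries over to Perl, the other scripting language the posting names.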
  • Platform Engineer

    Mills Thomas

    Requirements engineer job in Baltimore, MD

    Essential Duties and Responsibilities:
    • Design, implement, and manage Infrastructure as Code (IaC) solutions for cloud environments on the Microsoft Azure platform.
    • Develop automation scripts and tools to streamline deployments, scaling activities, and infrastructure operations.
    • Identify and resolve performance bottlenecks, tune systems, and optimize performance to enhance efficiency and user experience.
    • Analyze system capacity, plan for future growth, and design architectures capable of handling traffic spikes and increased workloads.
    • Respond to incidents, troubleshoot issues, and conduct post-incident reviews to determine root causes and prevent recurrence.
    • Implement monitoring and observability solutions, ensuring comprehensive insights through metrics, logging, and tracing.

    Requirements:
    • Bachelor's degree in Computer Science or a related discipline.
    • 4+ years of hands-on cloud engineering experience, including strong expertise in Azure networking components such as Virtual Networks, Load Balancers, VPN Gateways, and Network Security Groups.
    • Experience managing microservices-based applications deployed on Azure Kubernetes Service (AKS).
    • Proven experience designing, deploying, and operating Kubernetes clusters on AKS (YAML, Helm).
    • Strong background in designing scalable, secure, and resilient Azure solutions, with a focus on cost optimization, performance tuning, and disaster recovery planning.
    • Proficiency implementing and maintaining IaC solutions using Terraform to automate Azure resource provisioning and configuration.
    • Experience using Azure Monitor, Application Insights, and related tools to track key metrics, configure alerts, and diagnose issues.
    • Strong Windows background with the ability and confidence to work in Linux environments.
    • Proficiency with Azure CLI and PowerShell/Azure PowerShell for automation and environment provisioning.
    • Demonstrated ability to troubleshoot and optimize Azure services including API Management, Azure Functions, Storage, Service Bus, Key Vault, and Container Apps.
    • Familiarity with GitHub and Azure DevOps (ADO) pipelines.
    • Experience with key Azure services including Virtual Machines, ExpressRoute, Azure Active Directory, AD Connect, Log Analytics/Operations Management Suite, Azure Site Recovery, Azure Backup, Azure App Services, Azure Storage, Azure SQL, and Azure Automation.
    • Relevant certifications are a plus: Azure Administrator, Azure Developer, Azure Architect, Azure DevOps Engineer, Terraform, MCSE, Security+.
    • Experience building and deploying Azure resources using IaC technologies (Terraform, Ansible).
    • Experience creating and managing CI/CD pipelines with Azure DevOps (Git) for both infrastructure and application deployments.
    $65k-87k yearly est. 3d ago
  • AI Engineer

    Insight Global

    Requirements engineer job in Linthicum, MD

    Must haves:
    • Solid understanding of generative AI models (LLMs, RAG and RAG pipelines, prompt engineering, model tuning, etc.); OpenAI preferred but not required.
    • Familiarity with cloud-based (AWS) AI/ML services.
    • Expertise in Python for AI/ML development, including frameworks like PyTorch and scikit-learn.
    • Proficiency with Docker and Kubernetes for containerized deployments.
    • Familiarity with version control systems (e.g., Git) and modern development workflows.
    • Familiarity with CI/CD tools like Jenkins.

    Plusses:
    • Secret or TS clearance.
    • JavaScript (frameworks like React).
    • HTML/CSS.

    Job Description:
    A client of Insight Global is seeking an AI Architect/Engineer to drive its artificial intelligence initiatives. This role requires a unique blend of strategic thinking and hands-on implementation expertise. The ideal candidate will be able to understand business objectives and translate them into AI architectures and pipelines, while also possessing the technical skills to develop and deploy production-ready AI models to support use cases. A key focus will be on implementing solutions that automate business and operational processes, leveraging machine learning, robotic process automation, and other automation technologies to improve efficiency and reduce friction.

    Pay: $80-100/hr
    $65k-87k yearly est. 1d ago
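The RAG pipelines this posting mentions hinge on a retrieval step: rank documents by similarity to the query, then feed the top hits into the LLM prompt. A toy sketch using bag-of-words cosine similarity (real systems use learned embeddings and a vector store; the corpus below is invented):

```python
# Toy sketch of RAG retrieval: rank documents by cosine similarity of
# bag-of-words vectors. Purely illustrative; production pipelines use
# embedding models and vector databases instead of word counts.
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts as a stand-in for an embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

DOCS = [
    "kubernetes deploys containers to a cluster",
    "pytorch trains neural network models",
    "git tracks versions of source code",
]

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)),
                    reverse=True)
    return ranked[:k]

print(retrieve("how do containers run on a kubernetes cluster", DOCS))
```

The retrieved passages would then be concatenated into the prompt ("grounding"), which is what distinguishes RAG from plain prompting.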
  • PACS Engineer

    Convergenz

    Requirements engineer job in Washington, DC

    We are seeking a highly skilled Physical Access Control System (PACS) Engineer to join our PACS security team. The PACS SME will provide comprehensive support for the PACS Connector application, ensuring seamless integration and operation across DHS Components. The ideal candidate will have extensive experience designing, installing, and maintaining PACS hardware and software, as well as developing standards, SOPs, and installation guides. This role requires both technical expertise and the ability to create enterprise-wide documentation and standards that align with security, compliance, and operational requirements.

    Responsibilities:
    • On-boarding Support: Conduct preplanning meetings to understand PACS IP addresses, TCP ports, and scheduling requirements; coordinate with the NOSC to establish connectivity between the PACS and the PACS Connector.
    • Configuration Management: Configure PACS, Validation System, and Component Users within the application.
    • Support Services: Provide formal support via the Salesforce ticket system and ad-hoc support as needed.
    • Documentation: Document the system architecture and update the System Design Document (SDD) regularly or when changes occur.
    • Training and Policy: Develop and contribute to training materials, Quick Reference Guides, and policy documentation.
    • Project Review: Participate in the review of project Statements of Work (SOWs), Independent Government Cost Estimates (IGCEs), and market research for PACS equipment.
    • Testing and Evaluation: Operate the PACS Test and Evaluation (T&E) lab to evaluate software and hardware integrations before production implementation.
    • Additional Tasks: Perform other tasks as directed.

    Qualifications

    Required:
    • Bachelor's degree in Information Technology, Security Engineering, or a related field, or equivalent work experience.
    • 5-7 years of experience with PACS hardware and software installation, configuration, and maintenance.
    • Proven experience developing PACS and security standards, SOPs, and installation documentation.
    • Hands-on experience with PACS vendor solutions (e.g., Lenel, CCURE 9000, Identiv, HID, Honeywell, Genetec).
    • Strong understanding of PACS integration with enterprise security systems (identity management systems).

    Preferred:
    • Strong technical writing and documentation skills.
    • Knowledge of compliance frameworks such as NIST 800-53, FICAM, or related federal standards.
    • Experience in enterprise or federal government environments.
    • Professional certifications such as Certified System Engineer ICAM PACS (CSEIP) or vendor-specific PACS certifications.
    • Knowledge of networking fundamentals (IP addressing, VLANs, TCP/IP).

    Skills & Competencies:
    • Expertise in physical access control system design and deployment.
    • Excellent problem-solving and troubleshooting abilities.
    • Strong organizational and documentation skills.
    • Ability to work collaboratively across IT, facilities, and security teams.
    • Strong communication skills, both written and verbal.

    We are an Equal Opportunity Employer committed to a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, gender identity, national origin, disability, or veteran status. We value diverse perspectives and actively seek to create an inclusive environment that celebrates the unique qualities of all employees.
    $72k-95k yearly est. 1d ago
  • Cloud Engineer

    Providge Consulting

    Requirements engineer job in Washington, DC

    Senior Cloud Systems Engineer Note: Period of performance may extend beyond 12/31/2026. The Senior Cloud Systems Engineer will serve as a key member of a team responsible for designing, building, and maintaining a cloud business platform within the Federal Reserve Board. The ideal candidate is an experienced cloud professional with deep expertise in AWS GovCloud and a strong background in modern software development, DevOps, and cloud architecture. This role requires at least ten (10) years of technology experience, five (5) of which must be focused specifically on cloud technologies, and at least five (5) years supporting cloud platforms directly. The candidate must meet the qualifications outlined in the Technical Services BOA, Section 3.5.6.2 for the labor category Senior Cloud Systems Engineer, and demonstrate substantial hands-on experience with cloud engineering, automation, and secure system design. Required Knowledge and Experience The candidate shall possess the following expertise: Software Development & SDLC Experience developing software within structured SDLCs including DevOps, Agile, Lean, or Iterative methodologies. Proficiency in at least one high-level programming language such as Python, PowerShell, or Infrastructure-as-Code frameworks (CloudFormation, AWS CDK, Terraform). Preferred: Python 3.x data engineering (pandas, Databricks, Spark). Cloud Architecture & Engineering Experience designing, deploying, and migrating secure, scalable systems across IaaS, PaaS, and SaaS environments. Strong understanding of cloud-based architectures, including web servers, caching, application servers, load balancers, and storage. Preferred: AWS Load Balancer experience. Familiarity with loose coupling, stateless systems, and cloud-ready application design best practices. Experience transitioning legacy systems to cloud-ready architectures. 
DevOps & Automation Hands-on experience with DevOps pipelines, CI/CD automation, containers, and cloud deployment best practices. Preferred: GitLab or Azure DevOps. Experience with containerization technologies (Docker, Kubernetes, AWS EKS, ECS, or Fargate). Identity, Security & Compliance Understanding of cloud federation and authentication technologies such as SAML, Okta, OAuth, and OpenID Connect, and their application to enterprise/public-facing systems. Awareness of cloud information-security risks and best practices within high-security environments. Preferred: Experience with government cloud deployments (FedRAMP, FISMA, high-security architectures). AWS Expertise Experience with AWS and AWS GovCloud service offerings. Highly desirable: Direct configuration and management of AWS components including CloudTrail, CloudWatch, S3, RDS or Aurora (PostgreSQL), Lambda, Glue, MWAA (Airflow), and API Gateway. Cost Optimization & Governance Understanding of cloud provider cost structures and ability to design cost-effective solutions. APIs & Data Services Knowledge of RESTful services, XML, or JSON. Preferred: Experience with AWS API Gateway. Experience in cloud analytics/data-processing/ML architectures (e.g., data lakes, lake formations). Familiarity with Data Quality testing tools such as Great Expectations or Glue Data Quality. Observability & Architecture Frameworks Experience integrating cloud systems with enterprise observability platforms (e.g., Splunk). Familiarity with the AWS Well-Architected Framework. Experience defining platform-level requirements. Education (Preferred) Bachelor's degree in Computer Science, Information Technology, or related field. Duties and Responsibilities The Contractor shall provide, including but not limited to, the following services: Design and Develop Cloud Applications Create secure, maintainable, and scalable custom cloud-based applications. 
Set Up and Configure DevSecOps Environments Build and maintain DevSecOps pipelines for cloud environments, integrating automation, security, and continuous delivery best practices. Build and Support Cloud Infrastructure Collaborate with the cloud platform team and development teams to architect, manage, and enable cloud infrastructure and platforms supporting business application development
    $72k-95k yearly est. 1d ago
  • AI Engineer

    Mindlance 4.6company rating

    Requirements engineer job in Washington, DC

    Job Title: Developer Premium II - AI Engineer Duration: 7 Months with long-term extension Hybrid Onsite: 4 days per week from Day 1 AI Engineer: The AI Engineer will play a pivotal role in designing, developing, and deploying artificial intelligence solutions that enhance operational efficiency, automate decision-making, and support strategic initiatives for the environmental and social specialists within the client. This role is central to the VPU's digital transformation efforts and will contribute to the development of scalable, ethical, and innovative AI systems. Qualifications and Experience Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or related field. Experience: Minimum 3 years of experience in AI/ML model development and deployment. Experience with MLOps tools (e.g., MLflow), Docker, and cloud platforms (AWS, Azure, GCP). Proven track record in implementing LLMs, RAG, NLP model development, and GenAI solutions. Technical Skills: Skilled in Azure AI/Google Vertex Search, vector databases, RAG fine-tuning, NLP model development, and API management (facilitates access to different sources of data). Proficiency in Python, TensorFlow, PyTorch, and NLP frameworks. Expertise in deep learning, computer vision, and large language models. Familiarity with REST APIs, NoSQL, and RDBMS. Certifications (Preferred): Microsoft Certified: Azure AI Engineer Associate Google Machine Learning Engineer SAFe Agile Software Engineer (ASE) Certification in AI Ethics Objectives of the Assignment: Develop and implement AI models and algorithms tailored to business needs. Integrate AI solutions into existing systems and workflows. Ensure ethical compliance and data privacy in all AI initiatives. Support user adoption through training and documentation. 
Support existing AI solutions through refinement, troubleshooting, and reconfiguration. Scope of Work and Responsibilities: AI Solution Development: Collaborate with cross-functional teams to identify AI opportunities. Train, validate, and optimize machine learning models. Translate business requirements into technical specifications. AI Solution Implementation: Develop code, deploy AI models into production environments, and conduct ongoing model training. Monitor performance, troubleshoot issues, and fine-tune solutions to improve accuracy. Ensure compliance with ethical standards and data governance policies. User Training and Adoption: Conduct training sessions for stakeholders on AI tools. Develop user guides and technical documentation. Data Analysis and Research: Collect, preprocess, and engineer large datasets for machine learning and AI applications. Recommend and implement data cleaning and preparation. Analyze and use structured and unstructured data (including geospatial data) to extract features and actionable insights. Monitor data quality, detect bias, and manage model/data drift in production environments. Research emerging AI technologies and recommend improvements. Governance, Strategy, Support, and Maintenance: Advise client's staff on AI strategy and policy implications. Contribute to the team's AI roadmap and innovation agenda. Provide continuous support and contribute towards maintenance and future enhancements. Deliverables: Work on proofs of concept to study the technical feasibility of AI use cases. Functional AI applications integrated into business systems. Documentation of model/application architecture, training data, and performance metrics. Training materials and user guides. Develop, train, and deploy AI models tailored to business needs. “Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
    $84k-117k yearly est. 1d ago
  • SharePoint Engineer

    Bluewater Federal Solutions 4.1company rating

    Requirements engineer job in Washington, DC

    BlueWater Federal is looking for a SharePoint Engineer to support the Department of Energy in Washington, DC. As the SharePoint Engineer, you will be responsible for designing, implementing, and maintaining SharePoint environments and solutions. This includes configuring sites, libraries, workflows, and web parts, ensuring system security, and supporting business processes through automation and integration. Responsibilities • Install, configure, and maintain SharePoint servers (on-premises and/or SharePoint Online). • Monitor system performance, troubleshoot issues, and apply patches or updates. • Manage permissions, security settings, and compliance requirements. • Design and deploy SharePoint solutions, including custom workflows, forms, and web parts. • Migrate data and content from legacy systems to SharePoint using scripts or third-party tools. • Customize SharePoint sites to meet organizational needs. • Collaborate with IT teams and IA. • Provide technical support to end-users and site owners and create documentation. • Ensure adherence to security standards and organizational policies. • Maintain knowledge of SharePoint best practices and emerging technologies. Qualifications • Bachelor's degree • 10+ years of experience with SharePoint administration with a deep understanding of SharePoint architecture, features, and best practices. • Must have an active Top Secret clearance with the ability to obtain a Q and SCI clearance. • Proficiency in PowerShell scripting for automation. • Experience with migrating SharePoint versions on-premises or online (preferably using ShareGate). • SharePoint components (Search, Taxonomy, Managed Metadata). • Patching SharePoint server to meet organization security standards. • Experience with HTML, CSS, JavaScript, REST API, and SQL is preferred. BlueWater Federal Solutions is proud to be an Equal Opportunity Employer. 
All qualified candidates will be considered without regard to race, color, religion, national origin, age, disability, sexual orientation, gender identity, status as a protected veteran, or any other characteristic protected by law. BlueWater Federal Solutions is a VEVRAA federal contractor and we request priority referral of veterans.
    $79k-107k yearly est. 16h ago
  • DevOps Engineer

    Us Tech Solutions 4.4company rating

    Requirements engineer job in McLean, VA

    The candidate should be able to drive implementation and improvement of tools and technologies for enterprise adoption in accordance with operational and security standards. Practice and promote a Site Reliability Engineering (SRE) culture to improve and operate cloud platform offerings to the enterprise while working toward innovation, automation, and operational excellence. Automation experience is a must for this position. Ability to provide 24x7 operational support on a periodic basis and involvement in issue resolution is a must. Must Have Qualifications: Must have 5+ years of hands-on experience with AWS CloudFormation and Terraform. Automation through Shell Scripting and Python required (Ansible nice to have). 3+ years of experience with EKS and Kubernetes. Technical expertise: 7+ years of overall information technology experience with an emphasis on integration and delivery of virtual/cloud platforms to enterprise applications. At least 5 years of proven experience with AWS CloudFormation, Terraform, or similar tools. 3+ years of experience with engineering and supporting containerization technology (OpenShift, Kubernetes, AWS (ECS/EKS), etc.) at scale. Experience in Python, Ansible, and shell scripting to automate routine operation tasks. Experience with Tetrate, Rancher, and ArgoCD is highly preferred. About US Tech Solutions: US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit *********************** US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Recruiter Details: Aishwarya Chandra Email: **************************************** Job ID: 25-53450
    $95k-130k yearly est. 1d ago
  • Cloud Data Engineer- Databricks

    Infocepts 3.7company rating

    Requirements engineer job in McLean, VA

    Purpose: We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions. Key Result Areas and Activities: Design and implement robust, scalable data engineering solutions. Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI. Collaborate with analytics and AI teams to enable real-time and batch data workflows. Support and improve cloud-native data platforms (AWS, Azure, GCP). Ensure adherence to best practices in data modeling, warehousing, and governance. Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices. Implement and maintain workflow orchestration tools like Apache Airflow and dbt. Roles & Responsibilities Essential Skills 4+ years of experience in data engineering with a focus on scalable solutions. Strong hands-on experience with Databricks in a cloud environment. Proficiency in Spark and Python for data processing. Solid understanding of data modeling, data warehousing, and architecture principles. Experience working with at least one major cloud provider (AWS, Azure, or GCP). Familiarity with CI/CD pipelines and data workflow automation. Desirable Skills Direct experience with Unity Catalog and Mosaic AI within Databricks. Working knowledge of DevOps/DataOps principles in a data engineering context. Exposure to Apache Airflow, dbt, and modern data orchestration frameworks. Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus. Qualities: Able to consult, write, and present persuasively. Able to work in a self-organized and cross-functional team. Able to iterate based on new information, peer reviews, and feedback. Able to work seamlessly with clients across multiple geographies. Research-focused mindset. Excellent analytical, presentation, reporting, documentation, and interactive skills. "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
    $77k-105k yearly est. 16h ago
  • Senior Data Engineer

    Zillion Technologies, Inc. 3.9company rating

    Requirements engineer job in McLean, VA

    The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing. Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role. Experience in Angular and DevOps is a nice-to-have for this role. Must Have Qualifications: PySpark/Python based microservices, AWS EKS, Postgres SQL Database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify. Responsibilities: Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project. New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest. Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Current System: Informatica, SAS, AutoSys, DB2. Write automated tests using Behave/Cucumber, based on the new microservices-based architecture. Promote top code quality and solve issues related to performance tuning and scalability. Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, plus experience with SonarQube and Fortify. Able to communicate and engage with business teams, analyze the current business requirements (BRS documents), and create necessary data mappings. Preferred: strong skills and experience in reporting applications development and data analysis. Knowledge of Agile methodologies and technical documentation.
    $77k-109k yearly est. 4d ago
  • Data Engineer

    The Ash Group

    Requirements engineer job in Falls Church, VA

    *** W2 Contract Only - No C2C - No 3rd Parties *** The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA. In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs. Compensation, Benefits, and Role Info Competitive pay rate of $65 per hour. Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting. Type: 12-month contract with potential extension or conversion. Location: On-site in Falls Church, VA. What You'll Be Doing Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources. Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance. Write and optimize complex, highly performant SQL queries across large datasets (Redshift, Oracle, SQL Server). Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions. Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks. What We're Looking For 8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems. 5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark. Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift. Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance. 
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran. If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure. #DataEngineer #DataEngineering #AWSEngineer #Redshift #ETL #PySpark #DataPipeline #Contract
    $65 hourly 16h ago
  • DATA ENGINEER WITH JAVA

    Akkodis

    Requirements engineer job in McLean, VA

    Akkodis is seeking a Data Engineer with Java, ETL pipelines, Spark, and AWS Glue for a Contract job with a client in McLean, VA / Wilmington, DE. Ideally looking for applicants with a solid background in the financial services industry. Desired Skill Sets: Data Engineer with Java, ETL pipelines, Spark, AWS Glue. Looking for a Sr developer and a lead; below is the rate for each position. Location: McLean, VA (primary) or Wilmington, DE (secondary) - Hybrid - 3 days/week. Sr Developer - Rate Range: $90/hour to $95/hour; the rate may be negotiable based on experience, education, geographic location, and other factors. Lead Developer - Rate Range: $110/hour to $115/hour; the rate may be negotiable based on experience, education, geographic location, and other factors. ONLY SENIOR FOLKS. Job Title: Backend Developer (x2) Location: Schedule: hybrid Duration: ~7 months Interview: 1 hour Zoom video interview Must Have: Data engineer Java - MUST (not backend API development, but someone who has used and worked with it from a big data perspective) Spark ETL pipelines AWS (Step Functions, Glue, & Lambda) previous C1 experience Other Notes: Building ETL pipeline, extraction & loading. Receive file from Discover, transfer to C1 ecosystem. PLEASE NOTE: Data migration - ETL. This team is DATA, not APIs (backend, not frontend). Need Java AND Spark. Biggest struggle is he sees a lot of developers who do product support remediations, NOT fast-paced development. This team needs REALLY strong hands-on developers who take ownership (he HATES following up). If you are interested in this role, then please click APPLY NOW. For other opportunities available at Akkodis, or any questions, feel free to contact me at ********** /*********************************. Equal Opportunity Employer/Veterans/Disabled Benefit offerings available for our associates include medical, dental, vision, life insurance, short-term disability, additional voluntary benefits, an EAP program, commuter benefits, and a 401K plan. 
Our benefit offerings provide employees the flexibility to choose the type of coverage that meets their individual needs. In addition, our associates may be eligible for paid leave including Paid Sick Leave or any other paid leave required by Federal, State, or local law, as well as Holiday pay where applicable. Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs that are direct hires to a client. To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit ****************************************** The Company will consider qualified applicants with arrest and conviction records in accordance with federal, state, and local laws and/or security clearance requirements, including, as applicable: · The California Fair Chance Act · Los Angeles City Fair Chance Ordinance · Los Angeles County Fair Chance Ordinance for Employers · San Francisco Fair Chance Ordinance
    $79k-108k yearly est. 16h ago
  • Staff Engineer (Flightdeck and Avionics) (Tysons (McLean))

    Air Line Pilots Association

    Requirements engineer job in McLean, VA

    Staff Engineer (Flightdeck and Avionics) City: Tysons (McLean) State: VA Country: US Type: 4 (Exempt, Bargaining Unit 1 (EB)) # of Openings: 1 Company Name: Air Line Pilots Association, Int'l., Tysons (McLean), Virginia, United States Description: Staff Engineer (Flightdeck and Avionics) The Air Line Pilots Association, International (ALPA), the largest airline pilot union in the world and the largest non-governmental aviation safety organization in the world (representing over 80,000 pilots at 42 U.S. and Canadian airlines), seeks an experienced Staff Engineer for our Tysons (McLean), Virginia office. The Staff Engineer provides project coordination, subject matter expertise, and analysis in support of the activities, programs, and safety initiatives of the Association in the areas of aviation safety and security related to current and future flight deck technologies and avionics of civilian transport category airplanes. This staff engineer's daily work focuses on policies, regulations, and standards that include the design, integration, and operation of future avionics, the architecture of the flight deck environment, and other related topics. The position actively participates in activities that involve continuing airworthiness of avionics systems, proposed rules related to airplane avionics and flight decks, and associated policies. The position also participates in the development of industry standards at organizations such as RTCA and ASTM for the next generation of avionics and airline flight decks that will be utilized by ALPA members. The Staff Engineer coordinates with and acts as a liaison between ALPA and appropriate government agencies and the broader aviation industry, creating and expanding their network of government and industry contacts. 
They develop written communications to the membership, to governments in response to their requests for comments, and to industry groups on flight deck-related safety and security issues, as needed, and in coordination with relevant Association leadership, Committee representatives, and staff. This position supports the Association's representation to government agencies such as the Federal Aviation Administration (FAA), Transport Canada (TC), the Transportation Security Administration (TSA), and NAV CANADA. They also support ALPA Air Safety Organization (ASO) pilot representatives on relevant internal and external committees and assist in conducting meetings between ALPA and industry/government stakeholders on areas of assigned work. Travel: 10 - 20%. ALPA is an equal opportunity employer that is committed to diversity and inclusion in a safe workplace. We prohibit discrimination, harassment, and harmful behavior of any kind based on race, color, sex, religion, sexual orientation, national origin, gender identity, caste, disability, genetic information, pregnancy, or other protected characteristics as outlined in federal or provincial laws. We highly value everyone and all are encouraged to apply, including minorities, veterans, and people with disabilities. This position is covered by a collective bargaining agreement. ALPA is a member-driven, staff-supported union with two internal professional unions. Minimum Requirements: Bachelor's degree in engineering with specialization in one of the following: avionics, avionics technician, flight deck architecture, or avionics engineering. Coursework or training in avionics design and flight deck design is preferred from an accredited college or university, or the equivalent combination of education and practical experience. Minimum three (3) years of previous work experience in the civil aviation industry working on transport category airplane flight decks and avionics; five (5) or more years strongly preferred. 
Must be able to demonstrate previous work related to the standards development and design, integration, and operation of avionics and/or flight deck systems on civilian transport category airplanes. An understanding of aviation industry flight deck and avionics standards development at RTCA, ASTM, SAE, or similar aviation standards organizations is strongly preferred. Former employment at airlines, the FAA, or an industry civil aviation segment in a relevant capacity is extremely helpful. Special expertise in U.S. and Canadian Aviation Regulations (FARs and CARs), policies pertaining to systems certification and continuing airworthiness, and standards development related to avionics and flight deck technologies for aircraft and AAM, and knowledge of ICAO and/or international safety data systems. The ability to learn quickly and seamlessly adapt to changing demands is extremely important. Strong interpersonal and communication skills, oral and written, for effective interaction with staff, pilot members, and external contacts. Technical writing and presentation skills to communicate effectively with Association members and Executives, government and industry representatives and stakeholders, and staff. Able to work independently and to be proactive in identifying and responding to issues and problems. On a rotational basis, answers calls to the ALPA Worldwide Accident Hotline and the Pilot Peer Support line, including outside business hours, weekends, and holidays. Software: Microsoft Word, Outlook, PowerPoint, and Excel. Able to work a flexible schedule on short notice as needed. Physical Demands: Note: The physical demands described herein are characteristic of those that must be met to successfully perform the essential functions of this position. Reasonable accommodations may be made to enable individuals to perform the essential physical activities of this position described below. Constantly operates a computer/smartphone/tablet. 
Regularly required to maintain a stationary position; move about the office and the local metropolitan area; determine what others have said or written; and converse with others and exchange accurate information. Regularly required to sit, stand, bend, reach, and move about the office and travel (locally, nationally, and internationally). May also include occasional bending, stooping, squatting, and/or pushing and pulling or moving, e.g., to pack, unpack, and/or move cases. Occasionally required to move, raise, reach, and/or retrieve binders, books, boxes, and files up to ten (10) pounds (lbs.). While on travel, could be responsible to move, raise, reach, and/or retrieve luggage weighing as much as 50 lbs. (Assistance may not always be available.) ALPA offers competitive salaries with terrific benefits, including: 401k Plan with Non-Elective Employer Contribution of 12% plus 2% contribution into a Market-Based Cash Balance Plan (MBCBP) after 180 days of employment. No employee contribution required! Employees can choose to make pre-tax, Roth, and after-tax contributions to the 401(k). The plan permits in-plan Roth conversions and has a 4-year vesting schedule. The MBCBP has a 3-year cliff vesting period; Generous health care benefits on day one: Aetna PPO Plan, Aetna High Deductible Health Plan, and a Kaiser HMO Plan (where available), which includes coverage for medical, dental, and vision benefits for the employee, their spouse, and/or dependent children. Employees do not have to pay premiums if they enroll in the Aetna High Deductible Health Plan or the Kaiser HMO Plan; days paid vacation and holidays per year plus 2 volunteer days per year; Generous sick and bereavement leave; Competitive parental leave; Company-paid coverage for long term disability, life insurance, and accidental death & dismemberment; Flexible Spending and Health Savings accounts; Retiree health plan and a retiree health account (retiree HRA). 
Funds in the retiree health account can be used to pay for eligible medical expenses in retirement; Education Assistance Program that reimburses 100% of eligible expenses; Optional benefits include a legal plan, supplemental life insurance, college 529 plan, pet insurance, long term care insurance, and qualified transportation fringe benefits (where available); and Partial remote work opportunities are available after six months of service. PROJECTED ANNUAL SALARY RANGE: $96,027.00 to $137,179.00. Relocation not provided. Sponsorship not available for this position.
    $96k-137.2k yearly 1d ago

Learn more about requirements engineer jobs

How much does a requirements engineer earn in Glen Burnie, MD?

The average requirements engineer in Glen Burnie, MD earns between $57,000 and $99,000 annually. This compares to the national average requirements engineer range of $62,000 to $120,000.

Average requirements engineer salary in Glen Burnie, MD

$75,000

What are the biggest employers of Requirements Engineers in Glen Burnie, MD?

The biggest employers of Requirements Engineers in Glen Burnie, MD are:
  1. Age Inc
  2. Amazon
  3. LINK
  4. ManTech
  5. Whitman
  6. Jacobs Enterprises
  7. Contact Government Services, LLC
  8. Cpower
  9. EOA Technologies
  10. RSC2
Job type you want
Full Time
Part Time
Internship
Temporary