Applied AI Interface Engineer
Requirements engineer job in Alexandria, VA
MANTECH seeks a motivated, career and customer-oriented Applied AI Interface Engineer to join our team in Alexandria, VA. As part of the position, you will act as a Software Engineer designing and implementing services and components for AI applications.
Responsibilities include but are not limited to:
Designs and builds User Interfaces using modern UX/UI standards.
Develops, implements, and maintains full-stack software solutions for AI-enabled applications.
Works closely with the Software Architect to understand project requirements and translate them into technical specifications.
Develops and integrates AI and ML capabilities on a cloud-hosted data platform that supports significant market adoption, high performance, and strict access control and governance.
Stays current with advancements in AI, machine learning, and software engineering, incorporating best practices into the development process.
Documents software designs, code, and processes to ensure maintainability, scalability, and knowledge sharing among team members.
Participates in code reviews and provides constructive feedback to peers to ensure code quality, adherence to coding standards, and knowledge transfer within the team.
Minimum Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
4 or more years (2 with Master's) of experience in software development, systems integration, data management, or related fields.
Proficiency in JavaScript, including familiarity with modern frameworks and libraries such as React, Angular, or Vue.js.
Solid knowledge of HTML and CSS, including responsive design principles and front-end workflows.
Knowledge of Python and REST API frameworks.
Basic understanding of user interface (UI) and user experience (UX) design principles, with the ability to collaborate with designers to translate wireframes into functional code.
Strong analytical and problem-solving abilities, with the capacity to debug and resolve issues related to front-end code.
Experience with Generative AI including API access to large language models (LLMs).
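Since this role calls for API access to large language models, a minimal sketch of what that typically involves may be useful: assembling a JSON chat request. The schema below follows the common OpenAI-style format; the model name and field layout are illustrative assumptions, not requirements from this posting.

```python
import json

def build_chat_request(system_prompt: str, user_message: str,
                       model: str = "gpt-4o", temperature: float = 0.2) -> str:
    """Assemble an OpenAI-style chat-completion payload as a JSON string."""
    payload = {
        "model": model,                # hypothetical model name for illustration
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

# In practice this string is POSTed to the provider's chat endpoint with an
# Authorization header; the network call itself is omitted here.
body = build_chat_request("You are a concise assistant.", "Summarize this ticket.")
print(json.loads(body)["messages"][1]["role"])  # → user
```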
Preferred Qualifications:
Experience with Docker, Kubernetes, or other containerization technology.
Experience working in AWS environments.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills. Ability to work effectively in a team-oriented environment.
Experience working with data ingest and transformation.
Clearance Requirements:
Must possess a current and active TS/SCI clearance
Physical Requirements:
The person in this position must be able to remain in a stationary position 50% of the time.
Occasionally move about inside the office to access file cabinets and office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
Telephony Engineer
Requirements engineer job in Hyattsville, MD
Work with Application Developers to ensure the new applications implemented on the enterprise network perform within expectations and do not negatively affect the performance of existing applications.
Design and monitor application performance tests, analyze performance test data, make recommendations on application design and hardware resource needs.
Present analysis and capacity projections to systems management and business partners. Identify changes in the systems architecture, applications or hardware that may impact performance or capacity of Windows based Servers.
Design, develop, and produce performance and capacity reports that accurately reflect the state of the systems being monitored.
Analyze performance data and determine appropriate thresholds for automated alerting.
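The threshold-setting duty above is often approached statistically, for example alerting when a metric exceeds its historical mean by a few standard deviations. A small illustrative Python sketch; the sample data and the three-sigma rule are assumptions, not part of the posting:

```python
import statistics

def alert_threshold(samples: list[float], sigmas: float = 3.0) -> float:
    """Return an upper alert threshold: mean plus `sigmas` standard deviations."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return mean + sigmas * stdev

# Hypothetical CPU-utilization samples (percent) from a monitored server.
cpu_samples = [41.0, 44.5, 39.8, 43.2, 40.7, 42.1, 45.0, 38.9]
threshold = alert_threshold(cpu_samples)
print(f"alert if CPU > {threshold:.1f}%")
```

A real deployment would compute this per metric over a rolling window and feed the result into the monitoring tool's alert configuration.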
Experience Level: 3-6 Years.
High level of experience with Server Systems.
Advanced Knowledge and Experience with Server Capacity Planning, Server Performance and Server Performance Software Use/Support.
DUTIES:
The Telephony Network Administrator will assist with the proper maintenance of the LAN, wireless, and WAN voice and data networks. The Associate IT Support Technician will provide Level 1 support and troubleshooting to resolve network and telephony issues, and will assist with ensuring the day-to-day operation, reliability, and integrity of the enterprise voice communications network, including, but not limited to, client hardware and software issues related to network telephony. This includes installing, maintaining, and managing all data network and telephony equipment, such as phones, routers, LAN switches, wireless equipment, servers, backup systems, and network management and diagnostic tools, under supervision. The Associate IT Support Technician will also assist with network maintenance and system upgrades, including service packs, patches, hotfixes, and security configurations, and will perform basic inventory management processes, create equipment Return Material Authorizations (RMAs), and track these through to problem resolution.
Cloud Engineer
Requirements engineer job in Washington, DC
Job Title: Principal Cloud Engineer
Duration: 6 Months Contract with Ext
Active Public Trust Clearance Required
Seeking an experienced Principal Cloud Engineer with deep hands-on expertise in AWS or Azure to design, build, and support cloud infrastructure for a federal environment. Must have strong IaC skills, DevOps experience, and knowledge of FedRAMP and cloud governance.
Required Skills & Experience:
12+ years IT, including 8+ years hands-on cloud deployment (AWS/Azure).
Strong Linux/Windows administration and troubleshooting.
Experience with FedRAMP cloud services.
Strong automation and DevOps background.
Understanding of Agile methodologies.
Professional Architect Certification (AWS or Azure).
Cloud-Specific Experience:
AWS: IAM, VPC, subnets, TGW, WAF, CloudFront, Route53, Systems Manager, Config, Control Tower, stack sets.
Azure: Subscriptions, Resource Groups, VNets, Subnets, App Services, Azure Batch, VMs, tagging, policies, and security.
Cloud Engineer
Requirements engineer job in Washington, DC
About us:
Creative Information Technology Inc (CITI) is an esteemed IT enterprise renowned for its exceptional customer service and innovation. We serve both government and commercial sectors, offering a range of solutions such as Healthcare IT, Human Services, Identity Credentialing, Cloud Computing, and Big Data Analytics. With clients in the US and abroad, we hold key contract vehicles including GSA IT Schedule 70, NIH CIO-SP3, GSA Alliant, and DHS-Eagle II.
Join us in driving growth and seizing new business opportunities!
Position Summary
We are seeking cloud professionals proficient in administering the AWS GovCloud cloud service platform. Become part of a team responsible for building a cloud business platform within the Federal Reserve Board.
Responsibilities:
Design and create custom-built cloud applications
Set up and configure DevSecOps for cloud environments
Partner with the cloud platform team and development team to build, manage, and enable the underlying cloud infrastructure and platforms in support of a business application development project.
Required Skills:
At least five years of demonstrated experience supporting cloud platforms. For the Senior Cloud Systems Engineer level, 10 years of experience in technology, including application development, with 5 of those years focused on supporting cloud technologies.
Developing software according to software development lifecycles (SDLCs), including DevOps, Agile, Lean, or Iterative.
Designing, deploying, and migrating secure and maintainable systems for Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) environments.
Proficient in at least one high-level scripting language such as Python or PowerShell, or in Infrastructure as Code tooling (AWS CloudFormation, AWS CDK, Terraform, etc.).
Python 3.x data engineering with pandas, Databricks, or Spark preferred.
DevOps, CI/CD pipelines and automation, containers, and related best practices for cloud deployment. Experience with GitLab or Azure DevOps preferred.
Service offerings in Amazon Web Services (AWS) and government platform experience preferred.
Understanding of cloud-based architectures, web servers, caching, application servers, load balancers, and storage. Experience with AWS Load Balancer.
Familiarity with loose coupling, stateless systems, and best practices for designing cloud-ready applications.
Understanding of cloud federation technologies such as SAML, Okta, OAuth, and OpenID Connect, and how to apply them to enterprise and public-facing applications.
Understanding of cloud vendor cost structures and the ability to optimize solutions for cost effectiveness.
Experience transitioning legacy systems to cloud-ready architecture.
Understanding of RESTful Web Services, XML, or JSON. Experience with AWS API Gateway.
Awareness of cloud information security risks and best practices, especially in a highly secure operating environment.
Familiarity with government cloud deployments (FedRAMP, FISMA, high security environments, etc.)
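One of the required skills above is reasoning about cloud vendor cost structures. A back-of-the-envelope Python sketch comparing on-demand and reserved pricing; the hourly rates here are invented for illustration and are not real AWS prices:

```python
def monthly_cost(hourly_rate: float, hours: float = 730.0) -> float:
    """Approximate monthly cost assuming ~730 billable hours per month."""
    return hourly_rate * hours

# Hypothetical rates for a single instance class; real prices vary by
# region, term length, and payment option.
on_demand = monthly_cost(0.192)
reserved = monthly_cost(0.120)
savings_pct = 100.0 * (on_demand - reserved) / on_demand
print(f"on-demand ${on_demand:.2f}/mo vs reserved ${reserved:.2f}/mo "
      f"({savings_pct:.0f}% savings)")
```

The same arithmetic generalizes to rightsizing decisions: multiply each candidate configuration's rate by projected hours and compare.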
Highly Desirable Skills (not mandatory):
Bachelor's degree in computer science, information technology, or related field.
Experience with direct configuration and management of AWS components, including CloudTrail, CloudWatch, S3, RDS or Aurora, PostgreSQL, Lambda, Glue, or MWAA (Airflow)
Experience with designing or implementing cloud architectures for analytics, data processing, or machine learning use cases, such as data lakes, Lake Formation, etc.
Experience with containerization, such as Docker and Kubernetes, as a strategy for cloud-enabling legacy applications (AWS EKS, ECS, or Fargate preferred)
Familiarity with the AWS Well-Architected Framework.
Experience with defining platform requirements.
Experience integrating with enterprise observability platforms such as Splunk.
Experience with data quality testing tooling (e.g., Great Expectations, Glue Data Quality)
How to Apply:
To apply for the Cloud Engineer role, please submit your resume to *******************.
Applications will be reviewed on a rolling basis until the position is filled. We look forward to reviewing your application.
Platform Engineer
Requirements engineer job in Arlington, VA
As a disclaimer, I do not work with third-party agencies or C2C firms on active positions, and my client is only accepting USC or GC candidates at this time. Please do not contact me regarding positions if you do not fit the above parameters.
Details
-- Arlington, VA area Senior Platform Engineer (DevOps)
-- Mid-Upper 100s base salary range, before benefits and bonus
-- 4 days onsite in Arlington area (1 day remote)
Senior Platform Engineer - DevOps Focus
Join a dynamic, multi-disciplinary team responsible for the architecture, design, implementation, and support of public and private cloud environments. This role involves building and maintaining systems and applications across diverse platforms, with a strong emphasis on automation, orchestration, and operational excellence.
Key Responsibilities
Design, implement, and maintain systems and applications in multi-vendor cloud environments.
Develop tools and processes for automated deployment, monitoring, and infrastructure management.
Lead engineering efforts on projects, including architectural design and implementation.
Evaluate and optimize software, systems, and workflows to enhance performance and reliability.
Architect automation solutions using tools like Terraform, CloudFormation, Ansible, and Puppet.
Ensure high availability and performance of systems through proactive monitoring and tuning.
Collaborate across teams to troubleshoot and optimize production systems and services.
Mentor junior engineers and contribute to knowledge sharing and team development.
Maintain documentation and ensure compliance with audit and security standards.
Qualifications
Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
10+ years of relevant experience, including at least 3 years in cloud platforms, CI/CD, scripting, or infrastructure-as-code.
Strong background in DevOps, system administration, or software development.
Expertise in software build and deployment pipelines.
Solid understanding of cybersecurity principles in DevOps and CI/CD environments.
AWS DevSecOps Engineer
Requirements engineer job in Ellicott City, MD
VITG is seeking a DevSecOps Engineer responsible for automating security integration throughout the CI/CD pipeline and the AWS cloud environment. This role will "shift security left" by taking the lead on implementing security-as-code tools, managing their usage, ensuring their proper configuration and compliance, and proactively embedding security policy into the development process. Our ideal candidate is passionate about being part of a "change," and working in a dynamic and highly collaborative environment focused on speed, stability, and security.
The DevSecOps Engineer provides hands-on expertise to integrate and maintain the security posture for corporate systems that support Federal programs, ensuring a successful program Authority To Operate (ATO). You will be responsible for developing, monitoring, and maintaining systems and procedures to safeguard internal information systems, networks, and CI/CD pipelines through automation.
Applicant Requirements:
US citizen or must be authorized to work in the United States
Must have lived in the USA for three of the last five years
Must be able to obtain a US federal gov badge and eligible for Public Trust clearance
Must be able to pass a background check, including a drug test
Job Responsibilities:
Develop, implement, and maintain security automation throughout the entire SDLC, integrating security into the CI/CD pipelines using Jenkins/GitHub and Infrastructure-as-Code (IaC) principles.
Run and manage security scans with tools such as Snyk (SAST/SCA) and establish automated tracking and enforcement mechanisms for vulnerability remediation.
Integrate and manage security workloads running on AWS containers and ensure container image scanning and runtime security policies are enforced.
Design, manage, and maintain source code for AWS infrastructure in GitHub and manage automated pipelines, ensuring security checks and gates are embedded in every deployment.
Maintain security information on JIRA/Confluence and actively participate in agile DevSecOps practices, promoting a "Secure-by-Design" culture.
Provide hands-on support for developing, coordinating, implementing, and enforcing information systems security policies, standards, and methodologies as code.
Maintain operational security posture for Enterprise Salesforce FISMA system by ensuring security is baked into configuration and deployment practices.
Implement security tools, security tool usage, and policy-as-code to ensure configurations remain compliant and configured properly, all while ensuring a successful program ATO.
Automate vulnerability/risk assessment analysis to support continuous monitoring and authorization.
Manage changes to the system and assess the security impact of those changes through automated compliance checks.
Assist with the management of security aspects of the information system and perform day-to-day security operations of the system.
Evaluate security solutions to ensure they meet security requirements for processing classified information.
Perform vulnerability/risk assessment analysis to support certification and accreditation.
Prepare and review documentation, including System Security Plans (SSPs), Risk Assessment Reports, Certification and Accreditation (C&A) packages, and System Requirements Traceability Matrices (SRTMs).
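The automated compliance checks described in these responsibilities are often expressed as policy-as-code: small rules evaluated against parsed infrastructure definitions in the pipeline. A minimal illustrative sketch in Python; the resource format and the rule itself are assumptions for illustration, and a real pipeline would run a tool such as Checkov against an actual Terraform plan:

```python
def check_no_open_ingress(resources: list[dict]) -> list[str]:
    """Flag security-group rules that allow inbound traffic from 0.0.0.0/0."""
    findings = []
    for res in resources:
        if res.get("type") != "aws_security_group_rule":
            continue
        if res.get("direction") == "ingress" and "0.0.0.0/0" in res.get("cidr_blocks", []):
            findings.append(f"{res['name']}: world-open ingress on port {res.get('port')}")
    return findings

# Hypothetical parsed IaC resources, e.g. extracted from a Terraform plan.
plan = [
    {"type": "aws_security_group_rule", "name": "web_http", "direction": "ingress",
     "port": 80, "cidr_blocks": ["0.0.0.0/0"]},
    {"type": "aws_security_group_rule", "name": "db", "direction": "ingress",
     "port": 5432, "cidr_blocks": ["10.0.0.0/16"]},
]
for finding in check_no_open_ingress(plan):
    print("FAIL:", finding)  # a CI gate would exit non-zero on any finding
```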
Qualifications & Skills:
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related discipline
Minimum of 6 years related experience in Information Technology including 4 years in the DevSecOps or Application Security (AppSec) space.
Demonstrated hands-on experience in cloud environments such as AWS Commercial and GovCloud, specifically with security automation, logging, and monitoring services (e.g., GuardDuty, Security Hub, CloudTrail).
Expertise in CI/CD pipeline management and the integration of security tools for Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA).
Required: Strong hands-on experience with AWS, Snyk, GitHub, JIRA, and Confluence to implement and manage the end-to-end DevSecOps toolchain.
Demonstrated work experience with Infrastructure-as-Code (IaC) security (e.g., using Checkov or Terrascan on Terraform/CloudFormation).
(Preferred) Experience with Salesforce Platform and tool ecosystem
(Preferred) Salesforce or any other platform tool - Configuration/Setup of External Client Applications and Secure Communications (TLS)
(Preferred) AppOmni - hands-on experience managing issues and performing new org additions and configurations.
Strong background in the certification and accreditation process (ATO) and the ability to automate compliance checks against frameworks like FISMA, NIST, and FedRAMP.
Possesses working knowledge of business security practices, current security automation tools, and policy-as-code implementation.
Demonstrated working knowledge of vulnerability assessment and penetration testing processes, focusing on how to automate these checks.
Experience with Government Agency Security Assessment Process in support of maintaining and/or establishing an ATO and the appropriate boundary.
Experience with, understanding of, and adherence to guidelines such as FISMA, NIST, HIPAA, and IRS Pub 1075 (Preferred)
Preferred Certifications:
Required: AWS DevOps Engineer or SysOps Administrator certification, or equivalent.
Preferred: industry certifications such as CISSP, CEH, GIAC, etc.
Job Type: Full Time
Salary: BOE
Benefits:
401(k) with employer contribution
Medical/Dental/Vision insurance (option for full coverage for employee)
Life, ST/LT insurance
Professional development opportunities
Schedule:
8 hour shift
May include minimal after hours support depending on deployment schedule
Work Type:
Hybrid remote in Ellicott City, MD 21043
1 to 2 days in office weekly
UiPath Engineer
Requirements engineer job in McLean, VA
Need Only Local Candidates from Nearby Area.
Top Skills Must:
UiPath
Document Understanding
Python
Developer Role and Responsibilities
Your specific duties will be based on your experience as a UiPath developer. In this role, you will be responsible for designing and delivering UiPath solutions in accordance with WonderBotz standards and best practices. You will work closely with our enthusiastic team of both business and technical specialists. You will be part of a fast-growing and successful team that helps our clients get the maximum benefit.
Expected Activities:
• Support development of UiPath strategies, including assessing opportunities
• Under the supervision of more experienced developers, define, design, and develop automation on UiPath platforms for clients, including POCs, pilots, and production automation. More senior developers will be expected to work independently
• Participate in workshops and interviews with business process SMEs to gather and confirm business process details & documenting process definitions. More senior developers will lead these workshops and interviews.
• Participate in design and configuration sessions and apply feedback to improve and enhance work products. More senior developers will lead these sessions.
• Work alongside newly trained developers to guide and mentor them.
Qualifications and Skills
• Mastery of, or a strong desire to master, a leading RPA tool (UiPath a must; Blue Prism and Automation Anywhere also relevant), including advanced RPA vendor certification.
• At least one year of hands-on experience with one or more of the following languages or technologies: .NET, Java, VB, C#/C, HTML/CSS, Python, web services, mainframe, web applications, SQL, data integration tools, or technical automation tools. More senior developers should have a minimum of 2 to 4 years of this hands-on experience.
• Reasonable proficiency in reading Microsoft Office Visio or other equivalent process flow-charting tools or workflow-based logic
• Extra - Any prior work or academic experience with Document management and processing tools (e.g. Kofax, ABBYY, Data Cap), Data integration tools (e.g. Informatica, Microsoft SSIS), Technical automation tools (e.g. shell scripting, PHP), or Business process management tools (e.g. Pega).
Platform Engineer (Azure)
Requirements engineer job in Baltimore, MD
Seeking a Platform Engineer with strong cloud engineering experience, specializing in building and managing scalable infrastructure in Microsoft Azure. This role supports cross-functional teams by designing, implementing, and maintaining Infrastructure as Code (IaC) to enable reliable, secure, and efficient cloud operations. Hybrid schedule (onsite three days per week).
*Must be legally authorized to work in the U.S. without current or future sponsorship.
Responsibilities:
Design, implement, and manage Azure cloud infrastructure using IaC.
Develop automation scripts and tools to streamline deployments, scaling, and operational tasks.
Monitor and optimize system performance, capacity, and cost.
Conduct incident response, root-cause analysis, and post-incident reviews.
Implement monitoring, logging, and tracing solutions to ensure observability.
Collaborate with development and operations teams to improve system reliability and architecture.
Requirements:
Bachelor's degree in Computer Science or a related field.
4+ years technical expertise in cloud engineering with strong technical experience in the following areas: Virtual Networks, Load Balancers, VPN Gateways, Network Security Groups and other relevant Azure networking components.
Managing microservices based applications deployed on AKS.
Designing, deploying, and managing Kubernetes clusters within AKS (YAML, Helm).
Experience designing scalable, secure and resilient solutions using Azure services, optimizing for cost and performance and creating disaster recovery plans.
Experience implementing and maintaining IaC solutions using Terraform to automate the creation and management of Azure resources.
Using Azure Monitor, Application Insights, and other tools to track key metrics, set up alerts, and diagnose problems.
Well-versed in Windows, with the skills and confidence to work on Linux when needed.
Experience with Azure CLI, PowerShell/Azure PowerShell for automation, provisioning and configuration of Azure environments.
Proven ability to troubleshoot and optimize Azure-native services such as API Management, Azure Functions, Storage, Service Bus, Key Vault, and Container Apps.
Familiarity with Azure DevOps, GitHub, and AzDO pipelines.
Excellent written and oral communication skills.
Azure Virtual Machines, Virtual Networks, ExpressRoute, Azure Active Directory, AD Connect, Operations Management Suite/Log Analytics, Azure Monitor, Azure Site Recovery, Azure Backup, Azure App Services, Azure Storage, Azure SQL Server, Azure Automation.
Certifications are a plus: Azure Administrator, Azure Developer, Azure Architect, Azure DevOps, Terraform, MCSE, Security +.
Experience building and deploying Azure resources using IaC (Terraform, Ansible).
Creating and managing CI/CD pipelines using tools such as Azure DevOps (Git) to deploy both infrastructure and applications.
VMware ESXi.
System Center Configuration Manager.
Microsoft Intune.
Active Directory.
Bash experience.
Experience with serverless deployment, maintenance and associated tool chain.
Working knowledge of ITIL disciplines.
Platform Engineer
Requirements engineer job in Baltimore, MD
Essential Duties and Responsibilities
Design, implement, and manage Infrastructure as Code (IaC) solutions for cloud environments on the Microsoft Azure platform.
Develop automation scripts and tools to streamline deployments, scaling activities, and infrastructure operations.
Identify and resolve performance bottlenecks, tune systems, and optimize performance to enhance efficiency and user experience.
Analyze system capacity, plan for future growth, and design architectures capable of handling traffic spikes and increased workloads.
Respond to incidents, troubleshoot issues, and conduct post-incident reviews to determine root causes and prevent recurrence.
Implement monitoring and observability solutions, ensuring comprehensive insights through metrics, logging, and tracing.
Requirements
Bachelor's degree in Computer Science or a related discipline.
4+ years of hands-on cloud engineering experience, including strong expertise in Azure networking components such as Virtual Networks, Load Balancers, VPN Gateways, and Network Security Groups.
Experience managing microservices-based applications deployed on Azure Kubernetes Service (AKS).
Proven experience designing, deploying, and operating Kubernetes clusters on AKS (YAML, Helm).
Strong background in designing scalable, secure, and resilient Azure solutions, with a focus on cost optimization, performance tuning, and disaster recovery planning.
Proficiency implementing and maintaining IaC solutions using Terraform to automate Azure resource provisioning and configuration.
Experience using Azure Monitor, Application Insights, and related tools to track key metrics, configure alerts, and diagnose issues.
Strong Windows background with the ability and confidence to work in Linux environments.
Proficiency with Azure CLI and PowerShell/Azure PowerShell for automation and environment provisioning.
Demonstrated ability to troubleshoot and optimize Azure services including API Management, Azure Functions, Storage, Service Bus, Key Vault, and Container Apps.
Familiarity with Azure DevOps, GitHub, and AzDO pipelines.
Experience with key Azure services including Virtual Machines, ExpressRoute, Azure Active Directory, AD Connect, Log Analytics/Operations Management Suite, Azure Site Recovery, Azure Backup, Azure App Services, Azure Storage, Azure SQL, and Azure Automation.
Relevant certifications are a plus: Azure Administrator, Azure Developer, Azure Architect, Azure DevOps Engineer, Terraform, MCSE, Security+.
Experience building and deploying Azure resources using IaC technologies (Terraform, Ansible).
Experience creating and managing CI/CD pipelines with Azure DevOps (Git) for both infrastructure and application deployments.
NetOps Engineer
Requirements engineer job in Baltimore, MD
Key Required Skills:
Strong knowledge of the SMARTS platform
NetOps
Red Hat Linux
TCP/IP networking
GitHub
Bitbucket
• Work with advanced technical principles, theories, and concepts.
• Take on complex technical problems and provide innovative solutions.
• Work with highly experienced and technical resources.
• All other duties as assigned or directed
REQUIRED
• Bachelor's Degree in Computer Science, Mathematics, Engineering or a related field.
• Masters or Doctorate degree may substitute for required experience
• 5-7 years of experience with a high-level scripting language such as Python or Perl
• 5 years of experience or more as a Network Engineer focusing on network performance management.
• 5 years of experience with SMARTS enterprise network monitoring system, with at least 3 years specific to implementation and configuration of SMARTS tool.
• NetOps expertise and in-depth Broadcom knowledge
• 4 years of experience with RHEL, combined across versions 7 and 8.
• Must be able to obtain and maintain a Public Trust. Contract requirement.
*** Selected candidate must be willing to work on-site in Woodlawn, MD 5 days a week.
• Perl and Python advanced scripting skills.
• In-depth, hands-on experience programming in a Java enterprise (J2EE) environment using platforms such as Java Server Faces v2 (JSF 2), Enterprise Java Beans v3 (EJB 3), and Java-based Ajax libraries.
• Knowledge, experience, and advanced skills with the Broadcom Service Assurance (formerly SMARTS) monitoring tool.
• Experience working on Red Hat Enterprise Linux environments.
• TCP/IP networks knowledge.
• Broadcom NetOps monitoring tools knowledge and experience is a plus.
• VMware systems administration
• MS Visio experience creating Network Diagrams
• MS Project, creating and updating project plans.
• Ability to produce operational documentation using industry best practices in technical writing.
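The scripting requirements above commonly involve parsing device logs or monitoring output. A small illustrative Python sketch that extracts interface-down events from syslog-style lines; the log format is a generic Cisco-style example, not specific to SMARTS or Broadcom feeds:

```python
import re

# Generic Cisco-style syslog pattern; real monitoring feeds will differ.
LINK_DOWN = re.compile(
    r"%LINEPROTO-5-UPDOWN: Line protocol on Interface (\S+), changed state to down"
)

def down_interfaces(log_lines: list[str]) -> list[str]:
    """Return the interfaces reported down in the given log lines."""
    return [m.group(1) for line in log_lines if (m := LINK_DOWN.search(line))]

logs = [
    "Mar 1 10:02:11 rtr1 %LINEPROTO-5-UPDOWN: Line protocol on Interface Gi0/1, changed state to down",
    "Mar 1 10:02:15 rtr1 %SYS-5-CONFIG_I: Configured from console",
    "Mar 1 10:03:40 rtr2 %LINEPROTO-5-UPDOWN: Line protocol on Interface Gi0/3, changed state to down",
]
print(down_interfaces(logs))  # → ['Gi0/1', 'Gi0/3']
```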
DESIRED
• Perl, Python, and Java advanced programmer skills.
• Developing on JBoss EAP environments skills and experience.
• MySQL database management experience.
• Knowledge of version control tools like Git or Bitbucket.
• Experience developing on Service Now is a plus.
Education:
• Bachelor's Degree with 7+ years of experience
PACS Engineer
Requirements engineer job in Washington, DC
We are seeking a highly skilled Physical Access Control (PACS) Engineer to join our PACS security team. The PACS SME will provide comprehensive support for the PACS Connector application, ensuring seamless integration and operation across DHS Components. The ideal candidate will have extensive experience designing, installing, and maintaining PACS hardware and software, as well as developing standards, SOPs, and installation guides. This role requires both technical expertise and the ability to create enterprise-wide documentation and standards that align with security, compliance, and operational requirements.
Responsibilities
On-boarding Support: Conduct preplanning meetings to understand PACS IP addresses, TCP ports, and scheduling requirements. Coordinate with NOSC to establish connectivity between the PACS and PACS Connector.
Configuration Management: Configure PACS, Validation System, and Component Users within the application.
Support Services: Provide formal support via the Salesforce ticket system and ad-hoc support as needed.
Documentation: Document the system architecture and update the System Design Document (SDD) regularly or when changes occur.
Training and Policy: Develop and contribute to training materials, Quick Reference Guides, and policy documentation.
Project Review: Participate in the review of project Statements of Work (SOWs), Independent Government Cost Estimates (IGCEs), and market research for PACS equipment.
Testing and Evaluation: Operate the PACS Test and Evaluation (T&E) lab to evaluate software and hardware integrations before production implementation.
Additional Tasks: Perform other tasks as directed.
Qualifications
Required:
Bachelor's degree in Information Technology, Security Engineering, or related field; or equivalent work experience.
5-7 years of experience with PACS hardware and software installation, configuration, and maintenance.
Proven experience developing PACS and Security standards, SOPs, and installation documentation.
Hands-on experience with PACS vendor solutions (e.g., Lenel, CCURE 9000, Identiv, HID, Honeywell, Genetec).
Strong understanding of PACS integration with enterprise security systems (identity management systems).
Preferred:
Strong technical writing and documentation skills.
Knowledge of compliance frameworks such as NIST 800-53, FICAM, or related federal standards.
Experience in enterprise or federal government environments.
Professional certifications such as Certified System Engineer ICAM PACS (CSEIP) or vendor-specific PACS certifications.
Knowledge of networking fundamentals (IP addressing, VLANs, TCP/IP).
Skills & Competencies
Expertise in physical access control system design and deployment.
Excellent problem-solving and troubleshooting abilities.
Strong organizational and documentation skills.
Ability to work collaboratively across IT, facilities, and security teams.
Strong communication skills, both written and verbal.
We are an Equal Opportunity Employer committed to a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, gender identity, national origin, disability, or veteran status. We value diverse perspectives and actively seek to create an inclusive environment that celebrates the unique qualities of all employees.
AI Engineer
Requirements engineer job in Linthicum, MD
Must-Haves:
Solid understanding of generative AI models (LLMs, RAG and RAG pipelines, prompt engineering, model tuning, etc.); OpenAI experience preferred but not required
Familiarity with cloud-based (AWS) AI/ML services
Expertise in Python for AI/ML development, including frameworks such as PyTorch and scikit-learn
Proficiency with Docker and Kubernetes for containerized deployments
Familiarity with version control systems (e.g., Git) and modern development workflows.
Familiarity with CI/CD tools like Jenkins
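As context for the RAG items in the must-haves above, the retrieval step can be sketched in plain Python. This is a toy bag-of-words scorer, not a production approach; a real pipeline would use model embeddings and a vector store, and the corpus and query here are invented for illustration:

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': lowercase token counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Rank corpus documents by similarity to the query (the 'R' in RAG)."""
    q = embed(query)
    scored = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

# Invented sample corpus; retrieved text is then spliced into the prompt.
corpus = [
    "Terraform provisions cloud infrastructure",
    "RAG pipelines ground LLM answers in retrieved documents",
    "Kubernetes orchestrates containers",
]
context = retrieve("how do RAG pipelines help LLMs", corpus)
prompt = f"Answer using this context: {context[0]}"  # augmentation step
```

The augmented prompt would then be sent to the LLM, which is what distinguishes RAG from plain prompting: the model answers from retrieved context rather than from its weights alone.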
Plusses:
Secret or TS Clearance
JavaScript (Frameworks like React)
HTML/CSS
Job Description:
A client of Insight Global is seeking an AI Architect/Engineer to drive our artificial intelligence initiatives. This role requires a unique blend of strategic thinking and hands-on implementation expertise. The ideal candidate will be able to understand business objectives and translate them into AI architectures and pipelines, while also possessing the technical skills to develop and deploy production-ready AI models to support use cases. A key focus will be on implementing solutions that automate business and operational processes, leveraging machine learning, robotic process automation, and other automation technologies, improving efficiency and reducing friction.
Pay:
$80-100/hr
DevOps Engineer
Requirements engineer job in Washington, DC
Description: As a DevOps Engineer, you will play a critical role in bridging the gap between development and operations by automating and streamlining the build, test, and deployment processes. You will support automation of multiple processes. You will work closely with software developers, system operators, and other IT staff members to manage code releases and ensure the smooth operation of production environments. Your expertise will be essential in driving the adoption of best practices for continuous integration and continuous deployment (CI/CD) and in maintaining a robust and scalable infrastructure.
Responsibilities
• Develop, implement, and manage CI/CD pipelines to automate the build, test, and deployment processes.
• Collaborate with development and operations teams to streamline code integration and delivery.
• Manage and monitor production environments, ensuring high availability and performance.
• Automate infrastructure provisioning and configuration management using tools such as Ansible.
• Implement and maintain monitoring, logging, and alerting solutions to ensure system reliability and performance.
• Identify and resolve infrastructure and deployment issues, ensuring minimal disruption to services.
• Ensure security best practices are followed in the deployment and operation of applications and infrastructure.
• Work with cross-functional teams to develop and implement scalable and resilient system architectures.
• Continuously evaluate and improve processes, tools, and technologies to enhance the efficiency and effectiveness of the DevOps workflow.
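The CI/CD responsibilities above follow a common fail-fast pattern that can be sketched in Python. The stage names and pass/fail stand-ins are invented for illustration; a real pipeline would be declared in GitLab CI or Jenkins configuration and shell out to build tools:

```python
def run_pipeline(stages):
    """Run CI/CD stages in order; stop at the first failure (fail fast)."""
    results = {}
    for name, stage in stages:
        ok = stage()
        results[name] = "passed" if ok else "failed"
        if not ok:
            break  # later stages (e.g. deploy) never run after a failure
    return results

# Invented stand-in stages returning success/failure flags.
stages = [
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test gate
    ("deploy", lambda: True),  # skipped because "test" failed
]
results = run_pipeline(stages)
# results == {"build": "passed", "test": "failed"}; deploy never ran
```

The key design point, mirrored by every CI/CD tool listed below, is that deployment is gated behind earlier stages so a failing test can never reach production.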
Skills
• Strong understanding of CI/CD principles and hands-on experience with tools such as GitLab, Terraform, Helm, database configuration management (Liquibase/Flyway), Jenkins, GitLab CI, CircleCI, and Travis CI
• Proficiency in scripting and automation languages such as Python, Bash, or PowerShell.
• Experience with infrastructure as code (IaC) tools like Ansible.
• Solid understanding of cloud platforms, especially AWS.
• Familiarity with containerization and orchestration tools such as Docker, Kubernetes and OpenShift.
• Experience with monitoring and logging tools like Prometheus, Grafana, ELK stack, or Splunk.
• Strong problem-solving skills and the ability to troubleshoot complex system issues.
Clearance: Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
Compensation and Benefits
Salary Range: $60.00 - $75.00 (Compensation is determined by various factors, including but not limited to location, work experience, skills, education, certifications, seniority, and business needs. This range may be modified in the future.)
Benefits: Gridiron offers a comprehensive benefits package including medical, dental, vision insurance, HSA, FSA, 401(k), disability & ADD insurance, life and pet insurance to eligible employees. Full-time and part-time employees working at least 30 hours per week on a regular basis are eligible to participate in Gridiron's benefits programs.
Gridiron IT Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status or disability status.
Gridiron IT is a Women Owned Small Business (WOSB) headquartered in the Washington, D.C. area that supports our clients' missions throughout the United States. Gridiron IT specializes in providing comprehensive IT services tailored to meet the needs of federal agencies. Our capabilities include IT Infrastructure & Cloud Services, Cyber Security, Software Integration & Development, Data Solution & AI, and Enterprise Applications. These capabilities are backed by Gridiron IT's experienced workforce and our commitment to ensuring we meet and exceed our clients' expectations.
DevOps Engineer
Requirements engineer job in McLean, VA
The candidate should be able to drive implementation and improvement of tools and technologies for enterprise adoption in accordance with operational and security standards.
Practice and promote a Site Reliability Engineering (SRE) culture to improve and operate cloud platform offerings to the enterprise while working toward innovation, automation, and operational excellence.
Automation experience is a must for this position.
Ability to provide 24x7 operational support on a periodic basis and involvement in issue resolution is a must.
Must Have Qualifications:
Must have 5+ years of hands-on experience with AWS CloudFormation and Terraform. Automation through shell scripting and Python is required (Ansible nice to have). 3+ years of experience with EKS and Kubernetes.
Technical expertise:
7+ years of overall information technology experience with an emphasis on integration and delivery of virtual/cloud platforms to enterprise applications.
At least 5 years of proven experience with AWS CloudFormation, Terraform, or similar tools.
3+ years of experience with engineering and supporting containerization technology (OpenShift, Kubernetes, AWS(ECS/EKS), etc.) at scale.
Experience in Python, Ansible and shell scripting to automate routine operation tasks.
Experience with Tetrate, Rancher, and ArgoCD is highly preferred.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ***********************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Aishwarya Chandra
Email: ****************************************
Job ID: 25-53450
AWS Data Engineer
Requirements engineer job in McLean, VA
Responsibilities:
Design, build, and maintain scalable data pipelines using AWS Glue and Databricks.
Develop and optimize ETL/ELT processes using PySpark and Python.
Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
Ensure data quality, consistency, and reliability across systems.
Optimize performance of large-scale distributed data processing workflows.
Develop automation scripts and frameworks for data ingestion, transformation, and validation.
Follow best practices for data governance, security, and compliance.
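The ETL/ELT responsibilities above reduce to an extract-transform-load shape that can be sketched with stdlib Python. A real pipeline would operate on PySpark DataFrames in Glue or Databricks and load into Redshift or S3; the sample data, schema, and quality rule here are invented:

```python
import csv
import io

# Invented raw input; row 2 has a missing amount to show the quality gate.
RAW = """id,amount,region
1,10.5,east
2,,west
3,7.25,east
"""

def extract(text):
    """Extract: parse raw CSV into row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts, cast types."""
    out = []
    for r in rows:
        if r["amount"]:  # basic data-quality gate
            out.append({"id": int(r["id"]),
                        "amount": float(r["amount"]),
                        "region": r["region"]})
    return out

def load(rows):
    """Load: aggregate per region (stand-in for writing a warehouse table)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

totals = load(transform(extract(RAW)))
# totals == {"east": 17.75}; row 2 was dropped, so no "west" key
```

The same three phases map directly onto a Glue job: extract from S3, transform in Spark, load into the Glue Catalog or Redshift, with the quality gate expanded into real validation rules.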
Required Skills & Experience:
5-8 years of hands-on experience in Data Engineering.
Strong proficiency in Python and PySpark for data processing and transformation.
Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena.
Hands-on experience with Databricks for building and managing data pipelines.
Experience working with large-scale data systems and optimizing performance.
Solid understanding of data modeling, data lake architecture, and ETL design principles.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Data Engineer / Big data Engineer
Requirements engineer job in McLean, VA
Immediate need for a talented Data Engineer / Big Data Engineer. This is a 12-month contract opportunity with long-term potential and is located in McLean, VA (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93504
Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning.
Develop backend and automation tools using Golang and/or Python as needed.
Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
Perform root-cause analysis and implement automation to prevent recurring issues.
Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
Ensure compliance with enterprise governance, data quality, and cloud security standards.
Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.
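The window-function SQL called out above can be illustrated with a small runnable example. It is executed here against SQLite, which shares the standard `OVER (PARTITION BY … ORDER BY …)` syntax Snowflake uses; the table and data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (account TEXT, ts INTEGER, amount REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", [
    ("a", 1, 100.0), ("a", 2, 50.0), ("b", 1, 75.0),
])

# Running total per account: SUM as a window function, partitioned by
# account and ordered by time, so each row sees its cumulative amount.
rows = conn.execute("""
    SELECT account, ts, amount,
           SUM(amount) OVER (PARTITION BY account ORDER BY ts) AS running_total
    FROM trades
    ORDER BY account, ts
""").fetchall()
# rows == [("a", 1, 100.0, 100.0), ("a", 2, 50.0, 150.0), ("b", 1, 75.0, 75.0)]
```

In Snowflake the query text would be the same; performance tuning there turns on clustering keys and pruning rather than on the window syntax itself.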
Key Requirements and Technology Experience:
Proficiency in Python with experience building scalable data pipelines or ETL processes.
Strong hands-on experience with Spark/PySpark for distributed data processing.
Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
Experience with Golang for scripting, backend services, or performance-critical processes.
Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
Familiarity with CI/CD workflows, Git, and automated testing.
Our client is a leading Banking and Financial Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
DevOps Engineer (Terraform)
Requirements engineer job in Annapolis, MD
Veteran-Owned Firm Seeking a DevOps Engineer (Terraform) with TS/SCI for a role in Annapolis Junction, MD
My name is Stephen Hrutka. I lead a Veteran-Owned management consulting firm in Washington, DC. We specialize in Technical and Cleared Recruiting for the Department of Defense (DoD), the Intelligence Community (IC), and other advanced defense agencies.
At HRUCKUS, we support fellow Veteran-Owned businesses by helping them recruit for positions across organizations such as the VA, SBA, HHS, DARPA, and other leading-edge R&D-focused defense agencies.
We seek to fill a DevOps Engineer (Terraform) position in Annapolis Junction, MD.
The ideal candidate must hold an active TS/SCI clearance and have 5-8+ years of experience with a BS/BA, 3-5+ years with an MS/MA, or 0-2+ years with a PhD. Experience in software engineering and integration of Commercial Off-the-Shelf software products is required, as is a current DoD 8570.1-M IAT Level III certification.
If you're interested, I'll gladly provide more details about the role and discuss your qualifications further.
Thanks,
Stephen M Hrutka
Principal Consultant
HRUCKUS LLC
Executive Summary: HRUCKUS seeks a DevOps Engineer (Terraform) for a role supporting the Department of the Navy in Annapolis Junction, MD.
Position Description: The DevOps Engineer with a strong specialization in Terraform will be responsible for designing, implementing, and maintaining scalable infrastructure solutions using Terraform, ensuring efficient deployment and management of resources across various cloud environments. A deep understanding of Infrastructure as Code (IaC) principles and practices is essential for this role.
Position Job Duties:
While your primary focus will be on Terraform, we also value a strong working knowledge of Ansible to automate configuration management and streamline operational tasks. You will collaborate closely with development, operations, and security teams to enhance system reliability, scalability, and performance.
Design and implement cloud infrastructure using Terraform and Hashicorp Configuration Language (HCL)
Automate provisioning, scaling, and infrastructure management to ensure reliability and efficiency
Collaborate with development and operations teams to support cloud initiatives and streamline deployment processes
Monitor cloud environments, perform regular maintenance, and troubleshoot issues as they arise
Implement and enforce best practices for cloud security, compliance, and risk management
Conduct ongoing security testing and code reviews to identify and mitigate vulnerabilities
Contribute to architectural designs and technical documentation, and provide engineering solutions for improved security
Stay current with new tools, technologies, and industry best practices, and advise teams on secure coding practices
Integrate Terraform with Ansible to automate the provisioning and configuration of infrastructure, ensuring a seamless and consistent infrastructure-as-code (IaC) deployment process across cloud and on-prem environments
Use Terraform to manage and provision infrastructure resources (e.g., compute, storage, networking) and leverage Ansible for post-provisioning configuration management, including application deployments and security hardening
Collaborate with engineering teams to design end-to-end automation workflows that combine Terraform's infrastructure provisioning with Ansible's configuration management, ensuring scalable, repeatable, and secure deployments
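The Terraform-then-Ansible handoff described in the duties above can be sketched as an ordered command plan. The commands are only composed here, not executed; the environment name, inventory path, and playbook are invented, while the CLI flags are the standard terraform/ansible-playbook options:

```python
def plan_workflow(env, playbook):
    """Assemble the provision-then-configure command sequence.

    Terraform provisions the infrastructure first; Ansible then handles
    post-provisioning configuration management against that inventory.
    """
    return [
        ["terraform", "init"],
        ["terraform", "apply", "-auto-approve", f"-var=env={env}"],
        # Handoff: Ansible configures the hosts Terraform just created.
        ["ansible-playbook", "-i", f"inventory/{env}.ini", playbook],
    ]

steps = plan_workflow("dev", "harden.yml")
```

In practice the same ordering is usually enforced by a CI/CD pipeline rather than a script, but the invariant is identical: configuration management never runs until provisioning has completed.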
Position Qualifications:
TS/SCI level clearance is required
Requires 5-8+ years with BS/BA, 3-5+ years with an MS/MA, or 0-2+ years with a PhD
Experience in software engineering and integration of Commercial Off-the-Shelf software products
Certification Requirement:
Must have a current DoD 8570.1-M IAT Level III certification
Details:
Job Title: DevOps Engineer (Terraform & SOR-2)
3 Site Locations:
Annapolis Junction, MD (Primary)
JBAB (Depends on Responsibilities)
Landover, MD (Future)
Security Clearance Requirement: TS/SCI
Assignment Type: Full-time
Salary: Maximum annual salary of $145,000
Cloud Data Engineer- Databricks
Requirements engineer job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt.
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research-focused mindset
Excellent analytical, presentation, reporting, documentation, and interpersonal skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Systems Engineer
Requirements engineer job in Annapolis, MD
MANTECH seeks a motivated, career and customer-oriented Systems Engineer III to join our team in Annapolis Junction, Maryland.
We are seeking a Senior Systems Engineer (Systems Engineer III), a seasoned IT professional responsible for designing and implementing the high-level architecture of an organization's backend infrastructure and serving as a top-level troubleshooter. You will communicate with other project personnel, Government stakeholders, and senior executives.
Responsibilities include but are not limited to:
Administer and maintain IT infrastructure by utilizing expert-level knowledge of the Linux operating system, along with deep expertise in server administration, virtualization, and cloud platforms like AWS or Azure, to manage the organization's backend systems.
Develop and implement automation solutions through creating scalable automation services using strong scripting and orchestration skills, with proficiency in tools like Python, Bash, and Ansible.
Design and manage system architecture by applying systems engineering principles to design, implement, and manage the high-level architecture of the organization's backend infrastructure.
Ensure network and system security through a strong understanding of networking and security fundamentals, including TCP/IP, firewalls, and VPNs, to protect the organization's infrastructure.
Communicate with stakeholders - Interact directly with government personnel, senior executives, and other project members to discuss technical issues and project status.
Provide technical guidance and troubleshooting as a top-level troubleshooter, addressing technical considerations, problems, and issues for both internal teams and government stakeholders.
Minimum Qualifications:
Bachelor's degree in a STEM field; or a high school diploma and 13 years of experience, or an Associate's degree and 11 years of experience, in lieu of a degree. A candidate holding a relevant Master's degree may be granted 2 years of experience credit for that additional education; a candidate holding a relevant PhD may be granted 4 years of experience credit.
5 years of position-relevant work experience.
DoD 8570.01-M IAT Level II certification.
Strong proficiency in scripting and automation, with hands-on experience in Python, Bash, and an infrastructure orchestration tool like Ansible.
Experience in front-end web development to create internal tools and solutions.
Proven background as an Orchestration Engineer or in a similar role focused on building scalable automation services.
Solid grasp of TCP/IP networking, firewalls, VLANs, and VPNs.
Expert-level knowledge of the Linux operating system.
Preferred Qualifications:
Experience with full-stack development, including creating and interacting with databases and APIs.
Knowledge of Infrastructure as Code (IaC) principles, preferably with experience using Terraform.
Familiarity with public cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes).
Proficiency with virtualization platforms such as VMware vSphere or Microsoft Hyper-V.
Exceptional troubleshooting and problem-solving skills, with a talent for systematically diagnosing and resolving complex technical issues.
A foundational understanding of traditional Systems Administration principles.
Clearance Requirements:
Must have a current/active TS/SCI
Physical Requirements:
The person in this position must have the ability to perform tasks while stationary 50% of the time at a computer, with specific emphasis on visual acuity for close-range work. Occasionally move about inside the office to access file cabinets and office machinery, or to communicate with co-workers, management, and customers via email, phone, or virtual communication, which may involve delivering presentations.
AI Software Engineer
Requirements engineer job in Baltimore, MD
My client, ranked among Fortune's World's Most Admired Companies, is a global leader in investment management and is seeking an AI Software Engineer to join the team. We have Mid- and Senior-level openings.
***Amazing salary, bonus, & some of the best benefits in the marketplace!!
Primary responsibility:
Develop and deliver AI-driven software solutions, collaborating with stakeholders and working alongside top AI industry leaders to build end-to-end applications and AI agents.
What we're looking for:
2-12 years of software engineering experience with strong coding and system design skills.
Experience designing, building, or deploying AI-driven applications, intelligent agents, or automation solutions.
Familiarity with AI frameworks, cloud-native architectures (AWS, containerized microservices), or data-driven systems.
Passion for learning emerging technologies and applying AI to solve real business problems.
Key Responsibilities:
Design, code, test, and deploy AI-driven applications and intelligent agents.
Collaborate in agile squads across front-end, back-end, data science, and UX.
Translate business requirements into AI-powered solutions and actionable insights.
Contribute to AI agent design, orchestration, prompt engineering, and deployment.
Use cloud and AI technologies: AWS, Databricks, Salesforce, containerized microservices, and AI/ML frameworks.
Qualifications:
BS/MS in Computer Science
Experience with Python, JavaScript, or Java (breadth beyond Java alone preferred)
Proven ability to deliver complete solutions, not just scripts
Hands-on experience with cloud-native architectures (AWS, Azure), containerized microservices, and agent frameworks
Stable career history; entrepreneurial and collaborative mindset
#LI-GT