Applied AI Interface Engineer
Requirements engineer job in Alexandria, VA
MANTECH seeks a motivated, career and customer-oriented Applied AI Interface Engineer to join our team in Alexandria, VA. In this position, you will serve as a Software Engineer designing and implementing services and components for AI applications.
Responsibilities include but are not limited to:
Designs and builds User Interfaces using modern UX/UI standards.
Develops, implements, and maintains full-stack software solutions for AI-enabled applications.
Works closely with the Software Architect to understand project requirements and translate them into technical specifications.
Develops and integrates AI and ML capabilities on a cloud-hosted data platform that supports significant market adoption, high performance, and strict access control and governance.
Stays current with advancements in AI, machine learning, and software engineering, incorporating best practices into the development process.
Documents software designs, code, and processes to ensure maintainability, scalability, and knowledge sharing among team members.
Participates in code reviews and provides constructive feedback to peers to ensure code quality, adherence to coding standards, and knowledge transfer within the team.
Minimum Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
4 or more years (2 with Master's) of experience in software development, systems integration, data management, or related fields.
Proficiency in JavaScript, including familiarity with modern frameworks and libraries such as React, Angular, or Vue.js.
Solid knowledge of HTML and CSS, including responsive design principles and front-end workflows.
Knowledge of Python and REST API frameworks.
Basic understanding of user interface (UI) and user experience (UX) design principles, with the ability to collaborate with designers to translate wireframes into functional code.
Problem-Solving Skills: Strong analytical and problem-solving abilities, with the capacity to debug and resolve issues related to front-end code.
Experience with generative AI, including API access to large language models (LLMs).
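To make the last item concrete, here is a minimal, hedged sketch of programmatic LLM access using the OpenAI Python client; the model name, prompt, and environment setup are illustrative placeholders, not project specifics.

    # Minimal sketch of LLM API access (OpenAI Python client, v1+).
    # Model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize(text: str) -> str:
        """Ask the model for a one-paragraph summary of `text`."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are a concise technical summarizer."},
                {"role": "user", "content": f"Summarize in one paragraph:\n{text}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(summarize("Large language models are exposed through chat-style HTTP APIs."))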
Preferred Qualifications:
Experience with Docker, Kubernetes, or other containerization technology.
Experience working in AWS environments.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills. Ability to work effectively in a team-oriented environment.
Experience working with data ingest and transformation.
Clearance Requirements:
Must possess a current and active TS/SCI clearance
Physical Requirements:
The person in this position must be able to remain in a stationary position 50% of the time.
Occasionally move about inside the office to access file cabinets and office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
Cloud Engineer
Requirements engineer job in Washington, DC
Job Title: Principal Cloud Engineer
Duration: 6 Months Contract with Ext
Active Public Trust Clearance Required
Seeking an experienced Principal Cloud Engineer with deep hands-on expertise in AWS or Azure to design, build, and support cloud infrastructure for a federal environment. Must have strong IaC skills, DevOps experience, and knowledge of FedRAMP and cloud governance.
Required Skills & Experience:
12+ years IT, including 8+ years hands-on cloud deployment (AWS/Azure).
Strong Linux/Windows administration and troubleshooting.
Experience with FedRAMP cloud services.
Strong automation and DevOps background.
Understanding of Agile methodologies.
Professional Architect Certification (AWS or Azure).
Cloud-Specific Experience:
AWS: IAM, VPC, subnets, TGW, WAF, CloudFront, Route53, Systems Manager, Config, Control Tower, stack sets.
Azure: Subscriptions, Resource Groups, VNets, Subnets, App Services, Azure Batch, VMs, tagging, policies, and security.
AI Engineer
Requirements engineer job in Linthicum, MD
Must-Haves:
Solid understanding of generative AI models (LLMs, RAG and RAG pipelines, prompt engineering, model tuning, etc.); OpenAI preferred but not required. A minimal retrieval sketch follows this list.
Familiarity with cloud-based (AWS) AI/ML services
Expertise in Python for AI/ML development (including frameworks like PyTorch and sklearn)
Proficiency with Docker and Kubernetes for containerized deployments
Familiarity with version control systems (e.g., Git) and modern development workflows.
Familiarity with CI/CD tools like Jenkins
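As a concrete illustration of the retrieval step named above, here is a hedged, self-contained sketch that uses TF-IDF similarity from scikit-learn in place of learned embeddings and a vector database; the documents and query are invented examples.

    # Hedged sketch of the retrieval step in a RAG pipeline. TF-IDF stands
    # in for learned embeddings and a vector store to stay self-contained.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Kubernetes schedules containers across a cluster of nodes.",
        "Docker images package an application with its dependencies.",
        "Prompt engineering shapes model behavior without retraining.",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)

    def retrieve(query: str, k: int = 2) -> list:
        """Return the k documents most similar to the query."""
        scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
        return [documents[i] for i in scores.argsort()[::-1][:k]]

    question = "How do containers get scheduled?"
    context = "\n".join(retrieve(question))
    # The assembled prompt would then be sent to the LLM of choice.
    print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")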
Plusses:
Secret or TS Clearance
JavaScript (Frameworks like React)
HTML/CSS
Job Description:
A client of Insight Global is seeking an AI Architect/Engineer to drive our artificial intelligence initiatives. This role requires a unique blend of strategic thinking and hands-on implementation expertise. The ideal candidate will be able to understand business objectives and translate them into AI architectures and pipelines, while also possessing the technical skills to develop and deploy production-ready AI models to support use cases. A key focus will be on implementing solutions that automate business and operational processes, leveraging machine learning, robotic process automation, and other automation technologies to improve efficiency and reduce friction.
Pay:
$80-100/hr
Platform Engineer (Azure)
Requirements engineer job in Baltimore, MD
Seeking a Platform Engineer with strong cloud engineering experience, specializing in building and managing scalable infrastructure in Microsoft Azure. This role supports cross-functional teams by designing, implementing, and maintaining Infrastructure as Code (IaC) to enable reliable, secure, and efficient cloud operations. Hybrid schedule (onsite three days per week).
*Must be legally authorized to work in the U.S. without current or future sponsorship.
Responsibilities:
Design, implement, and manage Azure cloud infrastructure using IaC.
Develop automation scripts and tools to streamline deployments, scaling, and operational tasks.
Monitor and optimize system performance, capacity, and cost.
Conduct incident response, root-cause analysis, and post-incident reviews.
Implement monitoring, logging, and tracing solutions to ensure observability.
Collaborate with development and operations teams to improve system reliability and architecture.
Requirements:
Bachelor's degree in Computer Science or a related field.
4+ years of cloud engineering experience, with strong technical experience in Virtual Networks, Load Balancers, VPN Gateways, Network Security Groups, and other relevant Azure networking components.
Managing microservices-based applications deployed on AKS.
Designing, deploying, and managing Kubernetes clusters within AKS (YAML, Helm).
Experience designing scalable, secure, and resilient solutions using Azure services, optimizing for cost and performance, and creating disaster recovery plans.
Experience implementing and maintaining IaC solutions using Terraform to automate the creation and management of Azure resources (a Python-flavored provisioning sketch follows this list).
Using Azure Monitor, Application Insights, and other tools to track key metrics, set up alerts, and diagnose problems.
Well versed in Windows, with the skills and confidence to work on Linux when needed.
Experience with Azure CLI, PowerShell/Azure PowerShell for automation, provisioning and configuration of Azure environments.
Proven ability to troubleshoot and optimize Azure-native services such as API Management, Azure Functions, Storage, Service Bus, Key Vault, and Container Apps.
Familiarity with Azure DevOps, GitHub, and AzDO pipelines.
Excellent written and oral communication skills.
Experience with Azure services such as Azure Virtual Machines, Virtual Networks, ExpressRoute, Azure Active Directory, AD Connect, Operations Management Suite/Log Analytics, Azure Monitor, Azure Site Recovery, Azure Backup, Azure App Services, Azure Storage, Azure SQL Server, and Azure Automation.
Certifications are a plus: Azure Administrator, Azure Developer, Azure Architect, Azure DevOps, Terraform, MCSE, Security +.
Experience building and deploying Azure resources using IaC (Terraform, Ansible).
Creating and managing CI/CD pipelines using tools such as Azure DevOps (Git) to deploy both infrastructure and applications.
VMware ESXi.
System Center Configuration Manager.
Microsoft Intune.
Active Directory.
Bash experience.
Experience with serverless deployment, maintenance and associated tool chain.
Working knowledge of ITIL disciplines.
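The Terraform work above is authored in HCL; as a Python-flavored analog of the same provisioning idea, here is a hedged sketch using the Azure SDK for Python (azure-identity and azure-mgmt-resource). The subscription ID, names, tags, and region are placeholders, and this stands in for, rather than reproduces, the team's actual IaC.

    # Hedged sketch: programmatic Azure provisioning with the Azure SDK for
    # Python, analogous to a Terraform azurerm_resource_group block.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    credential = DefaultAzureCredential()  # CLI login, managed identity, etc.
    client = ResourceManagementClient(credential, "<subscription-id>")

    # Create (or idempotently update) a resource group.
    rg = client.resource_groups.create_or_update(
        "platform-demo-rg",  # placeholder name
        {"location": "eastus", "tags": {"owner": "platform-team", "env": "dev"}},
    )
    print(rg.name, rg.location)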
Platform Engineer
Requirements engineer job in Baltimore, MD
Essential Duties and Responsibilities
Design, implement, and manage Infrastructure as Code (IaC) solutions for cloud environments on the Microsoft Azure platform.
Develop automation scripts and tools to streamline deployments, scaling activities, and infrastructure operations.
Identify and resolve performance bottlenecks, tune systems, and optimize performance to enhance efficiency and user experience.
Analyze system capacity, plan for future growth, and design architectures capable of handling traffic spikes and increased workloads.
Respond to incidents, troubleshoot issues, and conduct post-incident reviews to determine root causes and prevent recurrence.
Implement monitoring and observability solutions, ensuring comprehensive insights through metrics, logging, and tracing.
Requirements
Bachelor's degree in Computer Science or a related discipline.
4+ years of hands-on cloud engineering experience, including strong expertise in Azure networking components such as Virtual Networks, Load Balancers, VPN Gateways, and Network Security Groups.
Experience managing microservices-based applications deployed on Azure Kubernetes Service (AKS).
Proven experience designing, deploying, and operating Kubernetes clusters on AKS (YAML, Helm); a short Python-client sketch follows this list.
Strong background in designing scalable, secure, and resilient Azure solutions, with a focus on cost optimization, performance tuning, and disaster recovery planning.
Proficiency implementing and maintaining IaC solutions using Terraform to automate Azure resource provisioning and configuration.
Experience using Azure Monitor, Application Insights, and related tools to track key metrics, configure alerts, and diagnose issues.
Strong Windows background with the ability and confidence to work in Linux environments.
Proficiency with Azure CLI and PowerShell/Azure PowerShell for automation and environment provisioning.
Demonstrated ability to troubleshoot and optimize Azure services including API Management, Azure Functions, Storage, Service Bus, Key Vault, and Container Apps.
Familiarity with Azure DevOps, GitHub, and AzDO pipelines.
Experience with key Azure services including Virtual Machines, ExpressRoute, Azure Active Directory, AD Connect, Log Analytics/Operations Management Suite, Azure Site Recovery, Azure Backup, Azure App Services, Azure Storage, Azure SQL, and Azure Automation.
Relevant certifications are a plus: Azure Administrator, Azure Developer, Azure Architect, Azure DevOps Engineer, Terraform, MCSE, Security+.
Experience building and deploying Azure resources using IaC technologies (Terraform, Ansible).
Experience creating and managing CI/CD pipelines with Azure DevOps (Git) for both infrastructure and application deployments.
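For the AKS items above, here is a hedged sketch of inspecting a cluster with the official Kubernetes Python client; it assumes a kubeconfig already populated (for example, via az aks get-credentials), and the namespace is a placeholder.

    # Hedged sketch: report deployment replica health on an AKS cluster
    # using the official Kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()  # same kubeconfig kubectl uses
    apps = client.AppsV1Api()

    for dep in apps.list_namespaced_deployment(namespace="default").items:
        ready = dep.status.ready_replicas or 0
        wanted = dep.spec.replicas or 0
        print(f"{dep.metadata.name}: {ready}/{wanted} replicas ready")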
NetOps Engineer
Requirements engineer job in Baltimore, MD
Key Required Skills:
Strong knowledge of the SMARTS platform, NetOps, Red Hat Linux, TCP/IP networking, GitHub, and Bitbucket.
• Work with advanced technical principles, theories, and concepts.
• Be challenged by working on complex technical problems and providing innovative solutions.
• Work with highly experienced and technical resources.
• All other duties as assigned or directed
REQUIRED
• Bachelor's Degree in Computer Science, Mathematics, Engineering or a related field.
• A Master's or Doctorate degree may substitute for required experience.
• 5-7 years of experience with a high-level scripting language such as Python or Perl
• 5 years of experience or more as a Network Engineer focusing on network performance management.
• 5 years of experience with SMARTS enterprise network monitoring system, with at least 3 years specific to implementation and configuration of SMARTS tool.
• NetOps expertise and in-depth knowledge of Broadcom tools.
• 4 years of combined experience with RHEL versions 7 and 8.
• Must be able to obtain and maintain a Public Trust clearance (contract requirement).
*** Selected candidate must be willing to work on-site in Woodlawn, MD 5 days a week.
• Advanced Perl and Python scripting skills (a small monitoring-flavored sketch follows this list).
• In-depth, hands-on experience programming in a Java enterprise (J2EE) environment using platforms such as JavaServer Faces 2 (JSF 2), Enterprise JavaBeans 3 (EJB 3), and Java-based Ajax libraries.
• Knowledge, experience, and advanced skills with the Broadcom Service Assurance (formerly SMARTS) monitoring tool.
• Experience working on Red Hat Enterprise Linux environments.
• TCP/IP networks knowledge.
• Broadcom NetOps monitoring tools knowledge and experience is a plus.
• VMware systems administration.
• Experience creating network diagrams in MS Visio.
• Experience creating and updating project plans in MS Project.
• Ability to produce operational documentation using industry best practices in technical writing.
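As a flavor of the scripting this role calls for, here is a hedged Python sketch of a tiny network-performance check: ping a set of hosts and report average round-trip latency. The host list uses TEST-NET placeholder addresses, and the regex assumes Linux iputils ping output.

    # Hedged sketch: ping hosts and report average RTT, the kind of small
    # check a NetOps monitoring script performs. Hosts are placeholders.
    import re
    import subprocess
    from typing import Optional

    HOSTS = ["192.0.2.10", "192.0.2.11"]  # TEST-NET placeholder addresses

    def ping_rtt_ms(host: str) -> Optional[float]:
        """Average RTT in ms over three ICMP echoes, or None on failure."""
        proc = subprocess.run(
            ["ping", "-c", "3", "-W", "2", host],
            capture_output=True, text=True,
        )
        # iputils summary line: "rtt min/avg/max/mdev = a/b/c/d ms"
        match = re.search(r"= [\d.]+/([\d.]+)/", proc.stdout)
        return float(match.group(1)) if match else None

    for host in HOSTS:
        rtt = ping_rtt_ms(host)
        status = f"{rtt:.1f} ms" if rtt is not None else "unreachable"
        print(f"{host}: {status}")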
DESIRED
• Advanced Perl, Python, and Java programming skills.
• Skills and experience developing on JBoss EAP environments.
• MySQL database management experience.
• Knowledge of version control tools like Git or Bitbucket.
• Experience developing on ServiceNow is a plus.
Education:
• Bachelor's Degree with 7+ years of experience
• Must be able to obtain and maintain a Public Trust clearance (contract requirement).
AWS DevSecOps Engineer
Requirements engineer job in Ellicott City, MD
VITG is seeking a DevSecOps Engineer responsible for automating security integration throughout the CI/CD pipeline and the AWS cloud environment. This role will "shift security left" by taking the lead on implementing security-as-code tools, managing their usage, ensuring their proper configuration and compliance, and proactively embedding security policy into the development process. Our ideal candidate is passionate about being part of change and about working in a dynamic, highly collaborative environment focused on speed, stability, and security.
The DevSecOps Engineer provides hands-on expertise to integrate and maintain the security posture for corporate systems that support Federal programs, ensuring a successful program Authority To Operate (ATO). You will be responsible for developing, monitoring, and maintaining systems and procedures to safeguard internal information systems, networks, and CI/CD pipelines through automation.
Applicant Requirements:
US citizen or must be authorized to work in the United States
Must have lived in the USA for three of the last five years
Must be able to obtain a US federal gov badge and eligible for Public Trust clearance
Must be able to pass a background check, including a drug test
Job Responsibilities:
Develop, implement, and maintain security automation throughout the entire SDLC, integrating security into the CI/CD pipelines using Jenkins/GitHub and Infrastructure-as-Code (IaC) principles.
Run and manage security scans with tools such as Snyk (SAST/SCA) and establish automated tracking and enforcement mechanisms for vulnerability remediation.
Integrate and manage security workloads running on AWS containers and ensure container image scanning and runtime security policies are enforced.
Design, manage, and maintain source code for AWS infrastructure in GitHub and manage automated pipelines, ensuring security checks and gates are embedded in every deployment.
Maintain security information on JIRA/Confluence and actively participate in agile DevSecOps practices, promoting a "Secure-by-Design" culture.
Provide hands-on support for developing, coordinating, implementing, and enforcing information systems security policies, standards, and methodologies as code.
Maintain operational security posture for Enterprise Salesforce FISMA system by ensuring security is baked into configuration and deployment practices.
Implement security tools, security tool usage, and policy-as-code to ensure configurations remain compliant and configured properly, all while ensuring a successful program ATO.
Automate vulnerability/risk assessment analysis to support continuous monitoring and authorization.
Manage changes to the system and assess the security impact of those changes through automated compliance checks.
Assist with the management of security aspects of the information system and perform day-to-day security operations of the system.
Evaluate security solutions to ensure they meet security requirements for processing classified information
Perform vulnerability/risk assessment analysis to support certification and accreditation.
Prepare and review documentation, including System Security Plans (SSPs), Risk Assessment Reports, Certification and Accreditation (C&A) packages, and System Requirements Traceability Matrices (SRTMs).
Qualifications & Skills:
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related discipline
Minimum of 6 years related experience in Information Technology including 4 years in the DevSecOps or Application Security (AppSec) space.
Demonstrated hands-on experience in cloud environments such as AWS Commercial and GovCloud, specifically with security automation, logging, and monitoring services (e.g., GuardDuty, Security Hub, CloudTrail).
Expertise in CI/CD pipeline management and the integration of security tools for Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA).
Required: Strong hands-on experience with AWS, Snyk, GitHub, JIRA, and Confluence to implement and manage the end-to-end DevSecOps toolchain.
Demonstrated work experience with Infrastructure-as-Code (IaC) security (e.g., using Checkov or Terrascan on Terraform/CloudFormation); a small policy-as-code sketch follows this list.
(Preferred) Experience with Salesforce Platform and tool ecosystem
(Preferred) Configuration and setup of external client applications and secure communications (TLS) on Salesforce or a comparable platform.
(Preferred) AppOmni: hands-on use, including managing issues and performing new org additions and configurations.
Strong background in the certification and accreditation process (ATO) and the ability to automate compliance checks against frameworks like FISMA, NIST, and FedRAMP.
Possesses working knowledge of business security practices, current security automation tools, and policy-as-code implementation.
Demonstrated working knowledge of vulnerability assessment and penetration testing processes, focusing on how to automate these checks.
Experience with Government Agency Security Assessment Process in support of maintaining and/or establishing an ATO and the appropriate boundary.
Experience with, understanding of, and adherence to guidelines such as FISMA, NIST, HIPAA, and IRS Pub-1075 (Preferred)
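To illustrate the IaC-security item above, here is a hedged sketch of a homegrown policy-as-code gate: it scans a Terraform plan exported as JSON (terraform show -json plan.out) and fails the pipeline if any planned S3 bucket has a public ACL. The rule, file name, and attributes are illustrative; dedicated scanners like Checkov cover far more.

    # Hedged sketch of policy-as-code: flag planned S3 buckets with a
    # public ACL in a Terraform plan JSON export. Illustrative rule only.
    import json
    import sys

    def public_s3_buckets(plan: dict) -> list:
        """Return addresses of planned aws_s3_bucket resources with public ACLs."""
        findings = []
        resources = (
            plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
        )
        for res in resources:
            if res.get("type") == "aws_s3_bucket":
                acl = res.get("values", {}).get("acl", "private")
                if acl in ("public-read", "public-read-write"):
                    findings.append(res["address"])
        return findings

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:  # e.g. terraform show -json plan.out > plan.json
            findings = public_s3_buckets(json.load(f))
        for address in findings:
            print(f"FAIL {address}: public ACL")
        sys.exit(1 if findings else 0)  # non-zero exit fails the pipeline gate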
Preferred Certifications:
Required: AWS DevOps or SysOps certification, or equivalent
Industry certifications such as CISSP, CEH, or GIAC are preferred
Job Type: Full Time
Salary: BOE
Benefits:
401(k) with employer contribution
Medical/Dental/Vision insurance (option for full coverage for employee)
Life and short-/long-term disability insurance
Professional development opportunities
Schedule:
8 hour shift
May include minimal after hours support depending on deployment schedule
Work Type:
Hybrid remote in Ellicott City, MD 21043
1 to 2 days in office weekly
AI Engineer
Requirements engineer job in Washington, DC
Job Title: Developer Premium II - AI Engineer
Duration: 7 Months with long term extension
Hybrid Onsite: 4 days per week from Day 1
AI Engineer: The AI Engineer will play a pivotal role in designing, developing, and deploying artificial intelligence solutions that enhance operational efficiency, automate decision-making, and support strategic initiatives for the client's environmental and social specialists. This role is central to the VPU's digital transformation efforts and will contribute to the development of scalable, ethical, and innovative AI systems.
Qualifications and Experience
Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or related field.
Experience:
Minimum 3 years of experience in AI/ML model development and deployment.
Experience with MLOps tools (e.g., MLflow), Docker, and cloud platforms (AWS, Azure, GCP).
Proven track record in implementing LLMs, RAG, NLP model development, and GenAI solutions.
Technical Skills:
Skilled in Azure AI/Google Vertex AI Search, vector databases, RAG fine-tuning, NLP model development, and API management (facilitating access to different data sources)
Proficiency in Python, TensorFlow, PyTorch, and NLP frameworks.
Expertise in deep learning, computer vision, and large language models.
Familiarity with REST APIs, NoSQL, and RDBMS.
Certifications (Preferred):
Microsoft Certified: Azure AI Engineer Associate
Google Machine Learning Engineer
SAFe Agile Software Engineer (ASE)
Certification in AI Ethics
Objectives of the Assignment:
Develop and implement AI models and algorithms tailored to business needs.
Integrate AI solutions into existing systems and workflows.
Ensure ethical compliance and data privacy in all AI initiatives.
Support user adoption through training and documentation.
Support existing AI solutions through refinement, troubleshooting, and reconfiguration.
Scope of Work and Responsibilities:
AI Solution Development:
Collaborate with cross-functional teams to identify AI opportunities.
Train, validate, and optimize machine learning models.
Translate business requirements to technical specifications.
AI Solution Implementation:
Develop code, deploy AI models into production environments, and conduct ongoing model training.
Monitor performance, troubleshoot issues, and fine-tune solutions to improve accuracy.
Ensure compliance with ethical standards and data governance policies.
User Training and Adoption:
Conduct training sessions for stakeholders on AI tools.
Develop user guides and technical documentation.
Data Analysis and Research:
Collect, preprocess, and engineer large datasets for machine learning and AI applications (a brief pandas sketch follows this list).
Recommend and implement data cleaning and preparation steps.
Analyze and use structured and unstructured data (including geospatial data) to extract features and actionable insights.
Monitor data quality, detect bias, and manage model/data drift in production environments.
Research emerging AI technologies and recommend improvements.
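As a concrete taste of the preprocessing item above, here is a hedged pandas sketch of routine cleaning and simple feature engineering; the file and column names are invented placeholders, not the client's data.

    # Hedged sketch: routine data preparation for ML with pandas.
    # File and column names are placeholders.
    import pandas as pd

    df = pd.read_csv("projects.csv")  # hypothetical input file

    # Basic cleaning: drop duplicates, normalize text, fill numeric gaps.
    df = df.drop_duplicates()
    df["country"] = df["country"].str.strip().str.title()
    df["budget_usd"] = pd.to_numeric(df["budget_usd"], errors="coerce")
    df["budget_usd"] = df["budget_usd"].fillna(df["budget_usd"].median())

    # Simple feature engineering: project duration in days.
    df["start"] = pd.to_datetime(df["start"], errors="coerce")
    df["end"] = pd.to_datetime(df["end"], errors="coerce")
    df["duration_days"] = (df["end"] - df["start"]).dt.days

    print(df[["country", "budget_usd", "duration_days"]].head())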
Governance, Strategy, Support, and Maintenance:
Advise the client's staff on AI strategy and policy implications.
Contribute to the team's AI roadmap and innovation agenda.
Provide continuous support and contribute towards maintenance and future enhancements.
Deliverables:
Work on proofs of concept to study the technical feasibility of AI use cases.
Functional AI applications integrated into business systems.
Documentation of model/application architecture, training data, and performance metrics.
Training materials and user guides.
Develop, train, and deploy AI models tailored to business needs
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Platform Engineer
Requirements engineer job in Arlington, VA
As a disclaimer, I do not work with third-party agencies or C2C firms on active positions, and my client is only accepting USC or GC candidates at this time. Please do not contact me regarding positions if you do not fit the parameters above.
Details
-- Arlington, VA area Senior Platform Engineer (DevOps)
-- Mid-Upper 100s base salary range, before benefits and bonus
-- 4 days onsite in Arlington area (1 day remote)
Senior Platform Engineer - DevOps Focus
Join a dynamic, multi-disciplinary team responsible for the architecture, design, implementation, and support of public and private cloud environments. This role involves building and maintaining systems and applications across diverse platforms, with a strong emphasis on automation, orchestration, and operational excellence.
Key Responsibilities
Design, implement, and maintain systems and applications in multi-vendor cloud environments.
Develop tools and processes for automated deployment, monitoring, and infrastructure management.
Lead engineering efforts on projects, including architectural design and implementation.
Evaluate and optimize software, systems, and workflows to enhance performance and reliability.
Architect automation solutions using tools like Terraform, CloudFormation, Ansible, and Puppet.
Ensure high availability and performance of systems through proactive monitoring and tuning.
Collaborate across teams to troubleshoot and optimize production systems and services.
Mentor junior engineers and contribute to knowledge sharing and team development.
Maintain documentation and ensure compliance with audit and security standards.
Qualifications
Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
10+ years of relevant experience, including at least 3 years in cloud platforms, CI/CD, scripting, or infrastructure-as-code.
Strong background in DevOps, system administration, or software development.
Expertise in software build and deployment pipelines.
Solid understanding of cybersecurity principles in DevOps and CI/CD environments.
Cloud Engineer
Requirements engineer job in Washington, DC
Senior Cloud Systems Engineer
Note: Period of performance may extend beyond 12/31/2026.
The Senior Cloud Systems Engineer will serve as a key member of a team responsible for designing, building, and maintaining a cloud business platform within the Federal Reserve Board. The ideal candidate is an experienced cloud professional with deep expertise in AWS GovCloud and a strong background in modern software development, DevOps, and cloud architecture. This role requires at least ten (10) years of technology experience, five (5) of which must be focused specifically on cloud technologies, and at least five (5) years supporting cloud platforms directly.
The candidate must meet the qualifications outlined in the Technical Services BOA, Section 3.5.6.2 for the labor category Senior Cloud Systems Engineer, and demonstrate substantial hands-on experience with cloud engineering, automation, and secure system design.
Required Knowledge and Experience
The candidate shall possess the following expertise:
Software Development & SDLC
Experience developing software within structured SDLCs including DevOps, Agile, Lean, or Iterative methodologies.
Proficiency in at least one high-level programming language such as Python, PowerShell, or Infrastructure-as-Code frameworks (CloudFormation, AWS CDK, Terraform).
Preferred: Python 3.x data engineering (pandas, Databricks, Spark).
Cloud Architecture & Engineering
Experience designing, deploying, and migrating secure, scalable systems across IaaS, PaaS, and SaaS environments.
Strong understanding of cloud-based architectures, including web servers, caching, application servers, load balancers, and storage.
Preferred: AWS Load Balancer experience.
Familiarity with loose coupling, stateless systems, and cloud-ready application design best practices.
Experience transitioning legacy systems to cloud-ready architectures.
DevOps & Automation
Hands-on experience with DevOps pipelines, CI/CD automation, containers, and cloud deployment best practices.
Preferred: GitLab or Azure DevOps.
Experience with containerization technologies (Docker, Kubernetes, AWS EKS, ECS, or Fargate).
Identity, Security & Compliance
Understanding of cloud federation and authentication technologies such as SAML, Okta, OAuth, and OpenID Connect, and their application to enterprise/public-facing systems.
Awareness of cloud information-security risks and best practices within high-security environments.
Preferred: Experience with government cloud deployments (FedRAMP, FISMA, high-security architectures).
AWS Expertise
Experience with AWS and AWS GovCloud service offerings.
Highly desirable: Direct configuration and management of AWS components including the following (a brief boto3-flavored sketch follows this list):
CloudTrail, CloudWatch, S3
RDS or Aurora, PostgreSQL
Lambda, Glue, MWAA (Airflow)
API Gateway
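For the services above, here is a hedged boto3 sketch of the sort of hands-on configuration work implied: writing an object to S3 and reading recent events from a CloudWatch Logs group. The bucket, log-group, and region names are placeholders (the region shown is a GovCloud region).

    # Hedged sketch: programmatic AWS work with boto3. Bucket, log group,
    # and region are placeholders.
    import boto3

    session = boto3.Session(region_name="us-gov-west-1")  # GovCloud region

    # S3: store a small artifact.
    s3 = session.client("s3")
    s3.put_object(Bucket="example-platform-bucket", Key="health/ping.txt", Body=b"ok")

    # CloudWatch Logs: pull recent events from a hypothetical log group.
    logs = session.client("logs")
    resp = logs.filter_log_events(logGroupName="/platform/app", limit=5)
    for event in resp["events"]:
        print(event["timestamp"], event["message"][:80])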
Cost Optimization & Governance
Understanding of cloud provider cost structures and ability to design cost-effective solutions.
APIs & Data Services
Knowledge of RESTful services, XML, or JSON.
Preferred: Experience with AWS API Gateway.
Experience in cloud analytics/data-processing/ML architectures (e.g., data lakes, AWS Lake Formation).
Familiarity with Data Quality testing tools such as Great Expectations or Glue Data Quality.
Observability & Architecture Frameworks
Experience integrating cloud systems with enterprise observability platforms (e.g., Splunk).
Familiarity with the AWS Well-Architected Framework.
Experience defining platform-level requirements.
Education (Preferred)
Bachelor's degree in Computer Science, Information Technology, or related field.
Duties and Responsibilities
The Contractor shall provide, including but not limited to, the following services:
Design and Develop Cloud Applications
Create secure, maintainable, and scalable custom cloud-based applications.
Set Up and Configure DevSecOps Environments
Build and maintain DevSecOps pipelines for cloud environments, integrating automation, security, and continuous delivery best practices.
Build and Support Cloud Infrastructure
Collaborate with the cloud platform team and development teams to architect, manage, and enable cloud infrastructure and platforms supporting business application development.
PACS Engineer
Requirements engineer job in Washington, DC
We are seeking a highly skilled Physical Access Control (PACS) Engineer to join our PACS security team. The PACS SME will provide comprehensive support for the PACS Connector application, ensuring seamless integration and operation across DHS Components. The ideal candidate will have extensive experience designing, installing, and maintaining PACS hardware and software, as well as developing standards, SOPs, and installation guides. This role requires both technical expertise and the ability to create enterprise-wide documentation and standards that align with security, compliance, and operational requirements.
Responsibilities
On-boarding Support: Conduct preplanning meetings to understand PACS IP addresses, TCP ports, and scheduling requirements. Coordinate with NOSC to establish connectivity between the PACS and PACS Connector.
Configuration Management: Configure PACS, Validation System, and Component Users within the application.
Support Services: Provide formal support via the Salesforce ticket system and ad-hoc support as needed.
Documentation: Document the system architecture and update the System Design Document (SDD) regularly or when changes occur.
Training and Policy: Develop and contribute to training materials, Quick Reference Guides, and policy documentation.
Project Review: Participate in the review of project Statements of Work (SOWs), Independent Government Cost Estimates (IGCEs), and market research for PACS equipment.
Testing and Evaluation: Operate the PACS Test and Evaluation (T&E) lab to evaluate software and hardware integrations before production implementation.
Additional Tasks: Perform other tasks as directed.
Qualifications
Required:
Bachelor's degree in Information Technology, Security Engineering, or related field; or equivalent work experience.
5-7 years of experience with PACS hardware and software installation, configuration, and maintenance.
Proven experience developing PACS and Security standards, SOPs, and installation documentation.
Hands-on experience with PACS vendor solutions (e.g., Lenel, CCURE 9000, Identiv, HID, Honeywell, Genetec).
Strong understanding of PACS integration with enterprise security systems (identity management systems).
Preferred:
Strong technical writing and documentation skills.
Knowledge of compliance frameworks such as NIST 800-53, FICAM, or related federal standards.
Experience in enterprise or federal government environments.
Professional certifications such as Certified System Engineer ICAM PACS (CSEIP) or vendor-specific PACS certifications.
Knowledge of networking fundamentals (IP addressing, VLANs, TCP/IP).
Skills & Competencies
Expertise in physical access control system design and deployment.
Excellent problem-solving and troubleshooting abilities.
Strong organizational and documentation skills.
Ability to work collaboratively across IT, facilities, and security teams.
Strong communication skills, both written and verbal.
We are an Equal Opportunity Employer committed to a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, gender identity, national origin, disability, or veteran status. We value diverse perspectives and actively seek to create an inclusive environment that celebrates the unique qualities of all employees.
UiPath Engineer
Requirements engineer job in McLean, VA
Only local candidates from the nearby area will be considered.
Top Skills Must:
UiPath
Document Understanding
Python
Developer Role and Responsibilities
Your specific duties will be based on your experience as a UiPath developer. In this role, you will be responsible for designing and delivering UiPath solutions in accordance with WonderBotz standards and best practices. You will work closely with our enthusiastic team of both business and technical specialists. You will be part of a fast-growing and successful team that helps our clients get the maximum benefit.
Expected Activities:
• Support development of UiPath strategies, including assessing opportunities
• Under the supervision of more experienced developers, define, design, and develop automation on UiPath platforms for clients, including POCs, pilots, and production automation. More senior developers will be expected to work independently
• Participate in workshops and interviews with business process SMEs to gather and confirm business process details and document process definitions. More senior developers will lead these workshops and interviews.
• Participate in design and configuration sessions and apply feedback to improve and enhance work products. More senior developers will lead these sessions.
• Work alongside newly trained developers to guide and mentor them.
Qualifications and Skills
• Have mastered, or have a strong desire to master, a leading RPA tool (UiPath is a must; Blue Prism and Automation Anywhere are a plus), including advanced RPA vendor certification.
• At least one year of hands-on experience with at least one of the following languages or technologies (e.g., .NET, Java, VB, C#/C, HTML/CSS, Python, web services, mainframe, web applications, SQL, data integration tools, technical automation tools). More senior developers should have a minimum of 2 to 4 years of this hands-on experience.
• Reasonable proficiency in reading Microsoft Visio or an equivalent process flow-charting tool, or workflow-based logic.
• Extra: any prior work or academic experience with document management and processing tools (e.g., Kofax, ABBYY, Datacap), data integration tools (e.g., Informatica, Microsoft SSIS), technical automation tools (e.g., shell scripting, PHP), or business process management tools (e.g., Pega).
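UiPath flows are built in Studio rather than hand-coded, but the Python listed among the top skills often handles pre- and post-processing around document understanding. Here is a hedged sketch of the kind of field extraction a flow might delegate to a script; the invoice text and patterns are invented examples.

    # Hedged sketch: extract fields from OCR-style document text, the kind
    # of post-processing an RPA flow might delegate to Python.
    import re

    INVOICE_TEXT = """
    Invoice Number: INV-20431
    Invoice Date: 2024-03-18
    Total Due: $1,284.50
    """  # invented stand-in for document-understanding output

    FIELDS = {
        "invoice_number": r"Invoice Number:\s*(\S+)",
        "invoice_date": r"Invoice Date:\s*([\d-]+)",
        "total_due": r"Total Due:\s*\$([\d,.]+)",
    }

    def extract_fields(text: str) -> dict:
        """Return the first match for each named field, or None if absent."""
        results = {}
        for name, pattern in FIELDS.items():
            match = re.search(pattern, text)
            results[name] = match.group(1) if match else None
        return results

    print(extract_fields(INVOICE_TEXT))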
DevOps Engineer
Requirements engineer job in Washington, DC
Description: As a DevOps Engineer, you will play a critical role in bridging the gap between development and operations by automating and streamlining the build, test, and deployment processes. You will support automation of multiple processes. You will work closely with software developers, system operators, and other IT staff members to manage code releases and ensure the smooth operation of production environments. Your expertise will be essential in driving the adoption of best practices for continuous integration and continuous deployment (CI/CD) and in maintaining a robust and scalable infrastructure.
Responsibilities
• Develop, implement, and manage CI/CD pipelines to automate the build, test, and deployment processes.
• Collaborate with development and operations teams to streamline code integration and delivery.
• Manage and monitor production environments, ensuring high availability and performance.
• Automate infrastructure provisioning and configuration management using tools such as Ansible.
• Implement and maintain monitoring, logging, and alerting solutions to ensure system reliability and performance.
• Identify and resolve infrastructure and deployment issues, ensuring minimal disruption to services.
• Ensure security best practices are followed in the deployment and operation of applications and infrastructure.
• Work with cross-functional teams to develop and implement scalable and resilient system architectures.
• Continuously evaluate and improve processes, tools, and technologies to enhance the efficiency and effectiveness of the DevOps workflow.
Skills
• Strong understanding of CI/CD principles and hands-on experience with tools such as GitLab, Terraform, Helm, database configuration management (Liquibase/Flyway), Jenkins, GitLab CI, CircleCI, and Travis CI.
• Proficiency in scripting and automation languages such as Python, Bash, or PowerShell (a small automation sketch follows this list).
• Experience with infrastructure as code (IaC) tools like Ansible.
• Solid understanding of cloud platforms, especially AWS.
• Familiarity with containerization and orchestration tools such as Docker, Kubernetes and OpenShift.
• Experience with monitoring and logging tools like Prometheus, Grafana, ELK stack, or Splunk.
• Strong problem-solving skills and the ability to troubleshoot complex system issues.
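For the scripting item above, here is a hedged example of typical deployment automation: a post-deploy smoke test that polls a service health endpoint and fails the CI job if the service never becomes healthy. The URL, retry policy, and endpoint are placeholders.

    # Hedged sketch: post-deploy smoke test used as a CI/CD gate.
    # Endpoint URL and retry policy are placeholders.
    import sys
    import time
    import urllib.request

    HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint

    def wait_healthy(url: str, attempts: int = 10, delay_s: float = 3.0) -> bool:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    if resp.status == 200:
                        print(f"healthy after {attempt} attempt(s)")
                        return True
            except OSError as exc:  # URLError/HTTPError subclass OSError
                print(f"attempt {attempt}: {exc}")
            time.sleep(delay_s)
        return False

    if __name__ == "__main__":
        sys.exit(0 if wait_healthy(HEALTH_URL) else 1)  # non-zero fails CI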
Clearance: Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
Compensation and Benefits
Salary Range: $60.00 - $75.00 (Compensation is determined by various factors, including but not limited to location, work experience, skills, education, certifications, seniority, and business needs. This range may be modified in the future.)
Benefits: Gridiron offers a comprehensive benefits package including medical, dental, vision insurance, HSA, FSA, 401(k), disability & ADD insurance, life and pet insurance to eligible employees. Full-time and part-time employees working at least 30 hours per week on a regular basis are eligible to participate in Gridiron's benefits programs.
Gridiron IT Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status or disability status.
Gridiron IT is a Women Owned Small Business (WOSB) headquartered in the Washington, D.C. area that supports our clients' missions throughout the United States. Gridiron IT specializes in providing comprehensive IT services tailored to meet the needs of federal agencies. Our capabilities include IT Infrastructure & Cloud Services, Cyber Security, Software Integration & Development, Data Solution & AI, and Enterprise Applications. These capabilities are backed by Gridiron IT's experienced workforce and our commitment to ensuring we meet and exceed our clients' expectations.
Data Engineer / Big Data Engineer
Requirements engineer job in McLean, VA
Immediate need for a talented Data Engineer / Big Data Engineer. This is a 12-month contract opportunity with long-term potential and is located in McLean, VA (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93504
Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning.
Develop backend and automation tools using Golang and/or Python as needed.
Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
Perform root-cause analysis and implement automation to prevent recurring issues.
Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
Ensure compliance with enterprise governance, data quality, and cloud security standards.
Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.
Key Requirements and Technology Experience:
Proficiency in Python with experience building scalable data pipelines or ETL processes.
Strong hands-on experience with Spark/PySpark for distributed data processing (a brief pipeline sketch follows this list).
Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
Experience with Golang for scripting, backend services, or performance-critical processes.
Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
Familiarity with CI/CD workflows, Git, and automated testing.
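To ground the Spark/PySpark item above, here is a hedged sketch of a common pipeline step: keeping each customer's most recent event with a window function. The paths, column names, and session setup are placeholders.

    # Hedged sketch: keep each customer's latest event using a PySpark
    # window function. Paths and column names are placeholders.
    from pyspark.sql import SparkSession, Window, functions as F

    spark = SparkSession.builder.appName("latest-events").getOrCreate()

    events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path

    # Rank each customer's events by recency, then keep only the newest.
    w = Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())
    latest = (
        events.withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)
              .drop("rn")
    )

    latest.write.mode("overwrite").parquet("s3://example-bucket/latest/")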
Our client is a leader in the banking and financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
DevOps Engineer
Requirements engineer job in McLean, VA
The candidate should be able to drive implementation and improvement of tools and technologies for enterprise adoption in accordance with operational and security standards.
Practice and promote a Site Reliability Engineering (SRE) culture to improve and operate cloud platform offerings to the enterprise while working toward innovation, automation, and operational excellence.
Automation experience is a must for this position.
The ability to provide 24x7 operational support on a periodic basis and to be involved in issue resolution is a must.
Must Have Qualifications:
Must have 5+ years of hands-on experience with AWS CloudFormation and Terraform. Automation through shell scripting and Python is required (Ansible nice to have). 3+ years of experience with EKS and Kubernetes.
Technical expertise:
7+ years of overall information technology experience with an emphasis on integration and delivery of virtual/cloud platforms to enterprise applications.
At least 5 years of proven experience with AWS CloudFormation, Terraform, or similar tools.
3+ years of experience with engineering and supporting containerization technology (OpenShift, Kubernetes, AWS(ECS/EKS), etc.) at scale.
Experience in Python, Ansible and shell scripting to automate routine operation tasks.
Experience with Tetrate, Rancher, and ArgoCD is highly preferred.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ***********************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Aishwarya Chandra
Email: ****************************************
Job ID: 25-53450
DevOps Engineer (Terraform)
Requirements engineer job in Annapolis, MD
Veteran-Owned Firm Seeking a DevOps Engineer (Terraform) with TS/SCI for a role in Annapolis Junction, MD
My name is Stephen Hrutka. I lead a Veteran-Owned management consulting firm in Washington, DC. We specialize in Technical and Cleared Recruiting for the Department of Defense (DoD), the Intelligence Community (IC), and other advanced defense agencies.
At HRUCKUS, we support fellow Veteran-Owned businesses by helping them recruit for positions across organizations such as the VA, SBA, HHS, DARPA, and other leading-edge R&D-focused defense agencies.
We seek to fill a DevOps Engineer (Terraform) position in Annapolis Junction, MD.
The ideal candidate must hold an active TS/SCI clearance and have 5-8+ years of experience with a BS/BA, 3-5+ years with an MS/MA, or 0-2+ years with a PhD. Experience in software engineering and integration of Commercial Off-the-Shelf software products is required, as is a current DoD 8570.1-M IAT Level III certification.
If you're interested, I'll gladly provide more details about the role and discuss your qualifications further.
Thanks,
Stephen M Hrutka
Principal Consultant
HRUCKUS LLC
Executive Summary: HRUCKUS seeks a DevOps Engineer (Terraform) for a role supporting the Department of the Navy in Annapolis Junction, MD.
Position Description: The DevOps Engineer with a strong specialization in Terraform will be responsible for designing, implementing, and maintaining scalable infrastructure solutions using Terraform, ensuring efficient deployment and management of resources across various cloud environments. A deep understanding of Infrastructure as Code (IaC) principles and practices is essential for this role.
Position Job Duties:
While your primary focus will be on Terraform, we also value a strong working knowledge of Ansible to automate configuration management and streamline operational tasks. You will collaborate closely with development, operations, and security teams to enhance system reliability, scalability, and performance.
Design and implement cloud infrastructure using Terraform and Hashicorp Configuration Language (HCL)
Automate provisioning, scaling, and infrastructure management to ensure reliability and efficiency
Collaborate with development and operations teams to support cloud initiatives and streamline deployment processes
Monitor cloud environments, perform regular maintenance, and troubleshoot issues as they arise
Implement and enforce best practices for cloud security, compliance, and risk management
Conduct ongoing security testing and code reviews to identify and mitigate vulnerabilities
Contribute to architectural designs and technical documentation, and provide engineering solutions for improved security
Stay current with new tools, technologies, and industry best practices, and advise teams on secure coding practices
Integrate Terraform with Ansible to automate the provisioning and configuration of infrastructure, ensuring a seamless and consistent infrastructure-as-code (IaC) deployment process across cloud and on-prem environments (a small glue-script sketch follows this list)
Use Terraform to manage and provision infrastructure resources (e.g., compute, storage, networking) and leverage Ansible for post-provisioning configuration management, including application deployments and security hardening
Collaborate with engineering teams to design end-to-end automation workflows that combine Terraform's infrastructure provisioning with Ansible's configuration management, ensuring scalable, repeatable, and secure deployments
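As an illustration of the Terraform-to-Ansible handoff described above, here is a hedged Python glue sketch: apply the plan, read a Terraform output, and feed the resulting hosts to an Ansible playbook. The output name (web_ips) and playbook (site.yml) are invented placeholders.

    # Hedged sketch: Terraform -> Ansible handoff. Output name and
    # playbook are placeholders.
    import json
    import subprocess

    def run(cmd: list) -> str:
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # 1. Provision infrastructure with Terraform.
    run(["terraform", "apply", "-auto-approve"])

    # 2. Read the provisioned host IPs from Terraform outputs.
    outputs = json.loads(run(["terraform", "output", "-json"]))
    hosts = outputs["web_ips"]["value"]  # hypothetical output name

    # 3. Configure the new hosts with Ansible (ad-hoc comma-separated inventory).
    run(["ansible-playbook", "-i", ",".join(hosts) + ",", "site.yml"])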
Position Qualifications:
TS/SCI level clearance is required
Requires 5-8+ years with BS/BA, 3-5+ years with an MS/MA, or 0-2+ years with a PhD
Experience in software engineering and integration of Commercial Off-the-Shelf software products
Certification Requirement:
Must have a current DoD 8570.1-M IAT Level III certification
Details:
Job Title: DevOps Engineer (Terraform & SOR-2)
3 Site Locations:
Annapolis Junction, MD (Primary)
JBAB (Depends on Responsibilities)
Landover, MD (Future)
Security Clearance Requirement: TS/SCI
Assignment Type: Full-time
Salary: Maximum annual salary of $145,000
Cloud Data Engineer- Databricks
Requirements engineer job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt.
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research-focused mindset
Excellent analytical, presentation, reporting, documentation, and interpersonal skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Data Engineer
Requirements engineer job in Falls Church, VA
*** W2 Contract Only - No C2C - No 3rd Parties ***
The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA.
In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.
Compensation, Benefits, and Role Info
Competitive pay rate of $65 per hour.
Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting.
Type: 12-month contract with potential extension or conversion.
Location: On-site in Falls Church, VA.
What You'll Be Doing
Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources.
Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance.
Write and optimize complex, highly-performant SQL queries across large datasets (Redshift, Oracle, SQL Server).
Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions.
Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.
What We're Looking For
8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems.
5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark.
Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift.
Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance.
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.
If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure.
#DataEngineer #DataEngineering #AWSEngineer #Redshift #ETL #PySpark #DataPipeline #Contract
Senior Data Engineer
Requirements engineer job in McLean, VA
The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing.
Experience with tools such as Jenkins, SonarQube AND/OR Fortify are preferred for this role.
Experience in Angular and DevOps are nice to haves for this role.
Must-Have Qualifications: PySpark/Python-based microservices, AWS EKS, Postgres SQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify.
Responsibilities:
Development of microservices based on Python, PySpark, AWS EKS, AWS Postgres for a data-oriented modernization project.
New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest
Perform system, functional, and data analysis on the current system and create technical/functional requirement documents.
Current System: Informatica, SAS, AutoSys, DB2
Write automated tests using Behave/Cucumber, based on the new microservices-based architecture (a Pytest-style sketch follows this list)
Promote top code quality and solve issues related to performance tuning and scalability.
Apply strong DevOps skills, including Docker/container-based deployments to AWS EKS using Jenkins, along with experience with SonarQube and Fortify.
Able to communicate and engage with business teams, analyze current business requirements (BRS documents), and create necessary data mappings.
Strong skills and experience in reporting application development and data analysis are preferred
Knowledge in Agile methodologies and technical documentation.
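To ground the testing items above, here is a hedged Pytest-style sketch of a unit test for a small transformation function of the kind these microservices contain; the function and its rule are invented for illustration.

    # Hedged sketch: Pytest unit tests for a small, invented transformation.
    import pytest

    def normalize_rate(raw: str) -> float:
        """Parse a percentage string like '3.5%' into a fraction (0.035)."""
        value = float(raw.strip().rstrip("%"))
        if not 0 <= value <= 100:
            raise ValueError(f"rate out of range: {raw}")
        return value / 100

    def test_normalize_rate_parses_percent():
        assert normalize_rate(" 3.5% ") == pytest.approx(0.035)

    @pytest.mark.parametrize("bad", ["150%", "-1%"])
    def test_normalize_rate_rejects_out_of_range(bad):
        with pytest.raises(ValueError):
            normalize_rate(bad)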
Sr DevOps Engineer - Local to McLean, VA Only
Requirements engineer job in McLean, VA
Must Have Qualifications:
Must have 5+ years of hands-on experience with AWS CloudFormation and Terraform.
Automation through Shell Scripting and Python required (Ansible nice to have).
3+ years of experience with EKS and Kubernetes.
Description:
The candidate should be able to drive implementation and improvement of tools and technologies for enterprise adoption in accordance with operational and security standards.
Practice and promote a Site Reliability Engineering (SRE) culture to improve and operate cloud platform offerings to the enterprise while working toward innovation, automation, and operational excellence.
Automation experience is a must for this position. The ability to provide 24x7 operational support on a periodic basis and to be involved in issue resolution is also a must.
Technical expertise: 7+ years of overall information technology experience with an emphasis on integration and delivery of virtual/cloud platforms to enterprise applications.
At least 5 years of proven experience with AWS CloudFormation, Terraform, or similar tools.
3+ years of experience with engineering and supporting containerization technology (OpenShift, Kubernetes, AWS(ECS/EKS), etc.) at scale.
Experience in Python, Ansible and shell scripting to automate routine operation tasks.
Experience with Tetrate, Rancher, and ArgoCD is highly preferred.