Mid-level DevOps Engineer (TS/SCI)
Requirements engineer job in Herndon, VA
Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people are working together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.
To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, Asylee, or Refugee.
Note on Cleared Roles: If this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status.
Export Control/ITAR:
Certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3).
Please review the job details below.
Vantor is seeking a Mid-Level DevOps Engineer located in Herndon, VA, to support the development, integration, and cybersecurity compliance of various intelligence capabilities in a development environment and, subsequently, a production environment. You will be a member of a project team responsible for designing and maintaining multiple DevOps pipelines that integrate and deploy numerous components into the larger system technology stack. Deployment will be across multiple networks into private cloud infrastructures self-hosted for our government customer.
As a Vantor team member, you will closely support our mission partners, and your work will have direct mission impact. In this role, you'll work closely with other DevOps engineers, software developers, infrastructure technicians and cybersecurity professionals leveraging open-source technology to create and maintain the full stack of a High-Performance Computing (HPC) system that hosts a diverse range of applications served out to thousands of users.
About Us:
We are a multi-faceted technical team with expertise spanning the full stack of cloud computing technologies, including systems administration, DevOps, cybersecurity, software development, and systems engineering. We build and maintain software applications backed by a self-managed cloud infrastructure with a true big-data footprint (over 10 petabytes).
Our diverse background of experience in mission support and technology development and operations serves as a catalyst for solving unique and challenging intelligence problems in support of special operations analysts and their ongoing activities.
Prototyping and frequent, iterative feedback are core to our delivery approach, anchored by a need to work quickly in support of our missions.
Principal Responsibilities:
Deploy and manage highly available applications to ensure system reliability and scalability.
Implement and maintain HashiCorp Nomad clusters and Kubernetes clusters for workload orchestration.
Execute DNS configuration, management, and performance tuning for enterprise-grade systems.
Develop and implement Infrastructure-as-Code (IaC) solutions using tools like Terraform, Ansible, or similar.
Build and manage multiple CI/CD pipelines with GitLab or equivalent tools to automate deployments and streamline development workflows for rapid development and integration.
Perform system monitoring, logging, and troubleshooting to proactively identify and resolve issues.
Automate security testing and monitoring within the DevOps workflows using ACAS and Trivy.
Analyze cybersecurity scan findings and work with the cybersecurity team to identify false positive findings.
Assist the cybersecurity team in assembling the required Body of Evidence to submit for False Positive exceptions.
Assist the cybersecurity team in assembling the required Body of Evidence to submit containerized and non-containerized software packages for enterprise software approval.
Integrate static code analysis tools such as GitLab SAST, Fortify, or SonarQube, and other security mechanisms into CI/CD pipelines.
Build and maintain custom tools to automate cybersecurity analysis and correlation workflows as new cybersecurity compliance requirements emerge.
Perform cybersecurity remediation on DevOps managed Virtual Machines to include OS patching, OS STIGing, and software package updates.
Build, maintain, and monitor configuration management of release products.
Troubleshoot and resolve network, automation pipelines, and infrastructure issues.
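The scan-analysis and correlation responsibilities above can be sketched in a few lines of Python. This is a minimal illustration only: it assumes Trivy's standard JSON report layout, and the allowlist of approved false-positive exceptions is a hypothetical structure, not part of any specific program's tooling.

```python
import json

def load_trivy_findings(report_path):
    """Parse a Trivy JSON report into a flat list of (CVE id, package, severity)."""
    with open(report_path) as f:
        report = json.load(f)
    findings = []
    for result in report.get("Results", []):
        # "Vulnerabilities" may be absent or null for clean targets
        for vuln in result.get("Vulnerabilities") or []:
            findings.append((vuln["VulnerabilityID"], vuln["PkgName"], vuln["Severity"]))
    return findings

def correlate(findings, approved_false_positives):
    """Split findings into ones needing analyst review and ones already
    covered by an approved false-positive exception (hypothetical allowlist
    of (CVE id, package) pairs)."""
    actionable, waived = [], []
    for cve, pkg, severity in findings:
        target = waived if (cve, pkg) in approved_false_positives else actionable
        target.append((cve, pkg, severity))
    return actionable, waived
```

A tool like this would feed the Body of Evidence assembly step: the `waived` list maps to existing exceptions, while `actionable` items go to the cybersecurity team for triage.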
Minimum Requirements:
Must have a current/active TS/SCI and be willing and able to pass a CI polygraph.
Must be able and willing to work in a SCIF environment in Herndon, Virginia, forty hours a week.
5+ years of industry experience as a DevOps engineer.
Strong expertise in cloud environments, including deployment, optimization, and troubleshooting.
Proven track record in building and operating Cloud Native Applications using tools like Kubernetes and Docker.
Experience in managing, integrating and utilizing PKI certificates for user and Non-Person-Entity Authentication and Authorization.
In-depth experience with IaC tools such as Terraform, Ansible, or equivalent.
Solid experience in creating and managing CI/CD build pipelines using GitLab or similar tools (e.g., Jenkins, Azure DevOps).
Strong scripting and automation skills using Python, Bash, or equivalent languages.
Excellent problem-solving skills with attention to detail and the ability to thrive in a fast-paced environment.
Experience with security best practices in DevOps pipelines (e.g., Trivy, Grype, GitLab SAST, SonarQube).
Familiarity with monitoring tools like Prometheus, Grafana, or ELK Stack.
Strong knowledge of networking and load balancing technologies.
Experience with source configuration management tools such as Git.
CI/CD development experience with technologies like Bash, Jenkins, or GitLab.
Experience with automated deployment technologies such as CloudFormation, Ansible, Puppet, or Chef.
Cloud Technologies deployment experience.
Open-source application deployments and maintenance.
Custom source application deployments and maintenance.
Moderate Linux system administration experience (Red Hat, Rocky Linux, AlmaLinux, or similar).
Working knowledge of Linux and Windows operating systems, web services and SQL databases.
Experience working in an Agile environment.
Desired Skills:
Bachelor's degree in Computer Science, Information Systems or related discipline.
Master's degree in Computer Science, Information Systems or related discipline.
Security+ or comparable certification for privileged user access.
Experience with distributed processing methods and tools, such as REST APIs, microservices, IaaS/PaaS services.
Experience developing and deploying web services.
Experience in implementing Docker STIGs.
Experience with technical cybersecurity remediation in the context of a Continuous Monitoring (CONMON) program.
RHCSA or LPIC1/LPIC2 certifications or equivalent.
Certified Kubernetes Administrator certification.
Docker Certified Associate certification.
#LI-CJ1
#cjpost
#LI-Onsite
Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, the experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range.
● The base pay for this position within the Washington, DC metropolitan area is: $113,000.00 - $188,000.00 annually.
For all other states, we use geographic cost of labor as an input to develop market-driven ranges for our roles, and as such, each location where we hire may have a different range.
Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************
The application window is a minimum of three days from the date the job is posted, and the job will remain posted until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date, and the job will again remain posted until a qualified candidate has been identified for hire.
The date of posting can be found on Vantor's Career page at the top of each job posting.
To apply, submit your application via Vantor's Career page.
EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
Applied AI Interface Engineer
Requirements engineer job in Alexandria, VA
ManTech seeks a motivated, career- and customer-oriented Applied AI Interface Engineer to join our team in Alexandria, VA. In this position, you will act as a Software Engineer designing and implementing services and components for AI applications.
Responsibilities include but are not limited to:
Designs and builds User Interfaces using modern UX/UI standards.
Develops, implements, and maintains full-stack software solutions for AI-enabled applications.
Works closely with the Software Architect to understand project requirements and translate them into technical specifications.
Develops and integrates AI and ML capabilities on a cloud-hosted data platform that supports significant market adoption, high performance, and strict access control and governance.
Stays current with advancements in AI, machine learning, and software engineering, incorporating best practices into the development process.
Documents software designs, code, and processes to ensure maintainability, scalability, and knowledge sharing among team members.
Participates in code reviews and provides constructive feedback to peers to ensure code quality, adherence to coding standards, and knowledge transfer within the team.
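One flavor of the AI-integration work described above is wrapping a large language model behind a thin service layer. The sketch below builds a chat-style request payload; the payload shape is a common convention but is hypothetical here, not any specific vendor's API, and the grounding approach is one illustrative design choice.

```python
def build_llm_request(user_query, context_documents, max_tokens=512):
    """Assemble a chat-style payload for a hypothetical LLM REST endpoint.

    Grounding documents are folded into a system message so the model
    answers from the application's own governed data rather than from
    whatever it memorized in training.
    """
    system_prompt = ("Answer using only the provided context.\n\n"
                     + "\n---\n".join(context_documents))
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_query},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic output eases testing and review
    }
```

Keeping payload construction in a pure function like this makes the AI-facing code unit-testable without network access, which matters in access-controlled environments.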
Minimum Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
4 or more years (2 with Master's) of experience in software development, systems integration, data management, or related fields.
Proficiency in JavaScript, including familiarity with modern frameworks and libraries such as React, Angular, or Vue.js.
Solid knowledge of HTML and CSS, including responsive design principles and front-end workflows.
Knowledge of Python and REST API frameworks.
Basic understanding of user interface (UI) and user experience (UX) design principles, with the ability to collaborate with designers to translate wireframes into functional code.
Problem-Solving Skills: Strong analytical and problem-solving abilities, with the capacity to debug and resolve issues related to front-end code.
Experience with Generative AI including API access to large language models (LLMs).
Preferred Qualifications:
Experience with Docker, Kubernetes, or other containerization technology.
Experience working in AWS environments.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills. Ability to work effectively in a team-oriented environment.
Experience working with data ingest and transformation.
Clearance Requirements:
Must possess a current and active TS/SCI clearance
Physical Requirements:
The person in this position must be able to remain in a stationary position 50% of the time.
Occasionally move about inside the office to access file cabinets, office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
Forward Deployed Software Engineer I
Requirements engineer job in Herndon, VA
Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people are working together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.
To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, Asylee, or Refugee.
Note on Cleared Roles: If this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status.
Export Control/ITAR:
Certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3).
Please review the job details below.
Vantor is seeking a mission-driven Forward Deployed Software Engineer to support new US Army programs with novel 3D software solutions. This individual will act as a technical bridge between our engineering teams and end users, integrating our software alongside partner capabilities, engaging with customer environments, learning the systems inside and out, and shaping real-world solutions in direct collaboration with the customer.
You'll be hands-on with Vantor's advanced 3D capabilities, integrating and optimizing our technology in new ways to meet the fast-paced, dynamic needs of warfighters prototyping new Army functions. This role demands technical adaptability, strong communication skills, and a proactive mindset to solve challenges in the field and provide immediate feedback to internal teams.
Responsibilities:
Deploy to CONUS customer locations to support US Army operational needs, training, and mission exercises.
Rapidly learn and troubleshoot the Vantor tech stack, with emphasis on geospatial platforms, data integration, and enabling end-user workflows.
Serve as a liaison between Vantor's product teams and Army end users, providing real-time feedback and shaping development roadmaps.
Collaborate with cross-functional teams (Product, Engineering, PMO) to deliver customer-specific configurations and technical solutions.
Support fielding, onboarding, and adoption of new capabilities.
Translate complex customer needs into actionable technical requirements.
Provide technical demos, documentation, and hands-on training.
Maintain a high standard of cybersecurity, data integrity, and operational discipline in line with DoD requirements.
Minimum Qualifications:
Bachelor's degree in Computer Science, Engineering, or related technical field (or equivalent experience).
Secret Clearance (Ability to obtain TS/SCI).
2+ Years Relevant Experience.
Proficient in one or more languages: Python, JavaScript, Go, C++, or similar.
Comfort with Linux-based systems, cloud architectures, and containerized deployments (Docker, Kubernetes, etc.).
Experience supporting or interacting with DoD programs, ideally in a forward or fielded capacity.
Strong communication skills with ability to build trust across technical and non-technical stakeholders.
Ability to travel up to 25-50% and support occasional after-hours mission requirements.
Preferred Qualifications:
Prior experience supporting Army or joint tactical missions.
Familiarity with GIS, 3D terrain, or mission command platforms.
Understanding of cybersecurity standards (e.g., RMF, CMMC).
Current or prior military service or operational support background.
Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, the experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range.
● The base pay for this position within the Washington, DC metropolitan area is: $90,000.00 - $150,000.00 annually.
For all other states, we use geographic cost of labor as an input to develop market-driven ranges for our roles, and as such, each location where we hire may have a different range.
Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************
The application window is a minimum of three days from the date the job is posted, and the job will remain posted until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date, and the job will again remain posted until a qualified candidate has been identified for hire.
The date of posting can be found on Vantor's Career page at the top of each job posting.
To apply, submit your application via Vantor's Career page.
EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
Quantitative Engineer
Requirements engineer job in Reston, VA
Quantitative Engineer (Testing, Automation & Model Validation)
Duration: Long Term
MUST HAVE:
Python Development
AWS Cloud Services
Finance / Mortgage
Job Description: Quantitative Engineer (Testing, Automation & Model Validation)
We are seeking a Quantitative Engineer for a contract position with strong expertise in testing, automation, and quantitative analysis to join the MFCW application team. The ideal candidate will have hands-on experience with AWS cloud services, Python development, and a solid understanding of software testing methodologies combined with quantitative modeling principles. This role involves designing and implementing automated test frameworks, validating complex models, generating test cases, performing functional and regression testing, and maintaining comprehensive documentation to ensure accuracy and reliability of analytical systems.
________________________________________
Responsibilities
• Automated Testing & Frameworks
Design, develop, and maintain automated test scripts and frameworks for regression and performance testing of quantitative models and applications.
• Test Case Generation & Execution
Generate detailed test cases based on functional requirements, model specifications, and quantitative assumptions.
• Model Vetting & Validation
Perform rigorous testing of financial and analytical models, including sensitivity analysis, back-testing, and stress testing in AWS environments.
• Data Setup & Simulation
Prepare datasets and execute model runs in AWS for validation and scenario analysis.
• Defect Resolution & Optimization
Collaborate with development and modeling teams to identify, troubleshoot, and resolve defects in model logic and automation workflows.
• Documentation & Governance
Document test plans, validation results, and maintain compliance with model governance standards.
• Continuous Improvement
Enhance and optimize existing test automation processes, ensuring scalability and reproducibility.
• Cloud Integration
Utilize AWS services (EC2, S3, Lambda) for deployment, testing environments, and automation pipelines.
• Quality Assurance
Ensure code quality and testing rigor through best practices and continuous integration.
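At their core, the regression-testing responsibilities above amount to comparing fresh model output against approved baselines within a tolerance. A minimal PyTest-style sketch follows; the function names, tolerances, and baseline values are illustrative assumptions, not part of the actual MFCW test suite.

```python
import math

def within_tolerance(actual, baseline, rel_tol=1e-6, abs_tol=1e-9):
    """True when every model output matches its approved baseline value
    within relative/absolute tolerance -- the pass criterion for a
    regression run."""
    if len(actual) != len(baseline):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(actual, baseline))

def test_model_regression():
    # Baselines would normally be loaded from a governed artifact store.
    baseline = [101.25, 98.40, 100.07]
    actual = [101.25000001, 98.40, 100.07]  # fresh model run
    assert within_tolerance(actual, baseline)

def test_detects_drift():
    # A materially changed output must fail, flagging the run for review.
    assert not within_tolerance([101.25, 98.40, 105.00],
                                [101.25, 98.40, 100.07])
```

Encoding the pass criterion as a pure function keeps it reusable across sensitivity, back-testing, and stress-testing runs, and makes the tolerance an auditable, documented parameter for model governance.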
________________________________________
Required Skills & Qualifications
• Education:
Bachelor's degree in Computer Science, Engineering, Applied Mathematics, or related discipline. 8+ years of experience in software testing, automation, or quantitative engineering.
• Programming:
Strong proficiency in Python for development and automation. Exposure to Java and R is desirable.
• Cloud Expertise:
Hands-on experience with AWS services (EC2, S3, Lambda) and cloud-based testing environments.
• Testing Expertise:
o Ability to design and execute unit tests, integration tests, regression tests, and model validation tests.
o Experience in creating and managing test cases and documentation.
• Automation:
Knowledge of test automation frameworks and tools (e.g., PyTest, Selenium, or similar). Familiarity with microservices and distributed systems is a plus.
• Quantitative Analysis:
Understanding of statistical testing, model risk concepts, and performance metrics for analytical models.
• Version Control & CI/CD:
Familiarity with Git and continuous integration/deployment pipelines.
• Soft Skills:
Strong analytical and problem-solving skills, excellent communication, and documentation abilities.
Cloud Engineer
Requirements engineer job in Washington, DC
Senior Cloud Systems Engineer
Note: Period of performance may extend beyond 12/31/2026.
The Senior Cloud Systems Engineer will serve as a key member of a team responsible for designing, building, and maintaining a cloud business platform within the Federal Reserve Board. The ideal candidate is an experienced cloud professional with deep expertise in AWS GovCloud and a strong background in modern software development, DevOps, and cloud architecture. This role requires at least ten (10) years of technology experience, five (5) of which must be focused specifically on cloud technologies, and at least five (5) years supporting cloud platforms directly.
The candidate must meet the qualifications outlined in the Technical Services BOA, Section 3.5.6.2 for the labor category Senior Cloud Systems Engineer, and demonstrate substantial hands-on experience with cloud engineering, automation, and secure system design.
Required Knowledge and Experience
The candidate shall possess the following expertise:
Software Development & SDLC
Experience developing software within structured SDLCs including DevOps, Agile, Lean, or Iterative methodologies.
Proficiency in at least one high-level programming language such as Python, PowerShell, or Infrastructure-as-Code frameworks (CloudFormation, AWS CDK, Terraform).
Preferred: Python 3.x data engineering (pandas, Databricks, Spark).
Cloud Architecture & Engineering
Experience designing, deploying, and migrating secure, scalable systems across IaaS, PaaS, and SaaS environments.
Strong understanding of cloud-based architectures, including web servers, caching, application servers, load balancers, and storage.
Preferred: AWS Load Balancer experience.
Familiarity with loose coupling, stateless systems, and cloud-ready application design best practices.
Experience transitioning legacy systems to cloud-ready architectures.
DevOps & Automation
Hands-on experience with DevOps pipelines, CI/CD automation, containers, and cloud deployment best practices.
Preferred: GitLab or Azure DevOps.
Experience with containerization technologies (Docker, Kubernetes, AWS EKS, ECS, or Fargate).
Identity, Security & Compliance
Understanding of cloud federation and authentication technologies such as SAML, Okta, OAuth, and OpenID Connect, and their application to enterprise/public-facing systems.
Awareness of cloud information-security risks and best practices within high-security environments.
Preferred: Experience with government cloud deployments (FedRAMP, FISMA, high-security architectures).
AWS Expertise
Experience with AWS and AWS GovCloud service offerings.
Highly desirable: Direct configuration and management of AWS components including:
CloudTrail, CloudWatch, S3
RDS or Aurora, PostgreSQL
Lambda, Glue, MWAA (Airflow)
API Gateway
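As one concrete example tying several of the services above together, an S3-triggered Lambda handler can be written as a pure Python function. This is a generic sketch of the standard S3 event notification shape, not the Board's actual code; the downstream processing step is a labeled placeholder.

```python
def handler(event, context):
    """Entry point for a hypothetical S3-triggered AWS Lambda.

    Extracts bucket/key pairs from the standard S3 event notification
    structure. Real processing (e.g., starting a Glue job or triggering
    an MWAA/Airflow DAG) would replace the placeholder below.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: hand off to Glue, MWAA, or other downstream service
        processed.append(f"s3://{bucket}/{key}")
    return {"status": "ok", "objects": processed}
```

Because the handler takes a plain event dict and returns a plain dict, it can be unit-tested locally with a fabricated event, with CloudWatch and CloudTrail providing the runtime observability once deployed.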
Cost Optimization & Governance
Understanding of cloud provider cost structures and ability to design cost-effective solutions.
APIs & Data Services
Knowledge of RESTful services, XML, or JSON.
Preferred: Experience with AWS API Gateway.
Experience in cloud analytics, data-processing, and ML architectures (e.g., data lakes, AWS Lake Formation).
Familiarity with Data Quality testing tools such as Great Expectations or Glue Data Quality.
Observability & Architecture Frameworks
Experience integrating cloud systems with enterprise observability platforms (e.g., Splunk).
Familiarity with the AWS Well-Architected Framework.
Experience defining platform-level requirements.
Education (Preferred)
Bachelor's degree in Computer Science, Information Technology, or related field.
Duties and Responsibilities
The Contractor shall provide, including but not limited to, the following services:
Design and Develop Cloud Applications
Create secure, maintainable, and scalable custom cloud-based applications.
Set Up and Configure DevSecOps Environments
Build and maintain DevSecOps pipelines for cloud environments, integrating automation, security, and continuous delivery best practices.
Build and Support Cloud Infrastructure
Collaborate with the cloud platform team and development teams to architect, manage, and enable cloud infrastructure and platforms supporting business application development.
PACS Engineer
Requirements engineer job in Washington, DC
We are seeking a highly skilled Physical Access Control System (PACS) Engineer to join our PACS security team. The PACS SME will provide comprehensive support for the PACS Connector application, ensuring seamless integration and operation across DHS Components. The ideal candidate will have extensive experience designing, installing, and maintaining PACS hardware and software, as well as developing standards, SOPs, and installation guides. This role requires both technical expertise and the ability to create enterprise-wide documentation and standards that align with security, compliance, and operational requirements.
Responsibilities
On-boarding Support: Conduct preplanning meetings to understand PACS IP addresses, TCP ports, and scheduling requirements. Coordinate with NOSC to establish connectivity between the PACS and PACS Connector.
Configuration Management: Configure PACS, Validation System, and Component Users within the application.
Support Services: Provide formal support via the Salesforce ticket system and ad-hoc support as needed.
Documentation: Document the system architecture and update the System Design Document (SDD) regularly or when changes occur.
Training and Policy: Develop and contribute to training materials, Quick Reference Guides, and policy documentation.
Project Review: Participate in the review of project Statements of Work (SOWs), Independent Government Cost Estimates (IGCEs), and market research for PACS equipment.
Testing and Evaluation: Operate the PACS Test and Evaluation (T&E) lab to evaluate software and hardware integrations before production implementation.
Additional Tasks: Perform other tasks as directed.
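The on-boarding preplanning described above (collecting PACS IP addresses and TCP ports, then coordinating with the NOSC on connectivity) can be pre-verified with a small script. This is a generic connectivity sketch, not part of the PACS Connector application itself; hostnames and ports are whatever the preplanning meeting produced.

```python
import socket

def check_pacs_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the
    timeout.

    Useful during on-boarding preplanning to confirm the NOSC has opened
    the network path between a Component PACS and the PACS Connector
    before configuration work begins.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unresolvable hosts
        return False
```

Running this against each (IP, port) pair from the preplanning sheet gives a quick pass/fail readiness report before the formal connectivity coordination is closed out.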
Qualifications
Required:
Bachelor's degree in Information Technology, Security Engineering, or related field; or equivalent work experience.
5-7 years of experience with PACS hardware and software installation, configuration, and maintenance.
Proven experience developing PACS and Security standards, SOPs, and installation documentation.
Hands-on experience with PACS vendor solutions (e.g., Lenel, CCURE 9000, Identiv, HID, Honeywell, Genetec).
Strong understanding of PACS integration with enterprise security systems (identity management systems).
Preferred:
Strong technical writing and documentation skills.
Knowledge of compliance frameworks such as NIST 800-53, FICAM, or related federal standards.
Experience in enterprise or federal government environments.
Professional certifications such as Certified System Engineer ICAM PACS (CSEIP) or vendor-specific PACS certifications.
Knowledge of networking fundamentals (IP addressing, VLANs, TCP/IP).
Skills & Competencies
Expertise in physical access control system design and deployment.
Excellent problem-solving and troubleshooting abilities.
Strong organizational and documentation skills.
Ability to work collaboratively across IT, facilities, and security teams.
Strong communication skills, both written and verbal.
We are an Equal Opportunity Employer committed to a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, gender identity, national origin, disability, or veteran status. We value diverse perspectives and actively seek to create an inclusive environment that celebrates the unique qualities of all employees.
UiPath Engineer
Requirements engineer job in McLean, VA
Only local candidates from the nearby area will be considered.
Top Skills Must:
UiPath
Document Understanding
Python
Developer Role and Responsibilities
Your specific duties will be based on your experience as a UiPath developer. In this role, you will be responsible for designing and delivering UiPath solutions in accordance with WonderBotz standards and best practices. You will work closely with our enthusiastic team of both business and technical specialists. You will be part of a fast-growing and successful team that helps our clients get the maximum benefit.
Expected Activities:
• Support development of UiPath strategies, including assessing opportunities
• Under the supervision of more experienced developers, define, design, and develop automation on UiPath platforms for clients, including POCs, pilots, and production automation. More senior developers will be expected to work independently
• Participate in workshops and interviews with business process SMEs to gather and confirm business process details and document process definitions. More senior developers will lead these workshops and interviews.
• Participate in design and configuration sessions and apply feedback to improve and enhance work products. More senior developers will lead these sessions.
• Work alongside newly trained developers to guide and mentor them.
Qualifications and Skills
• Have mastered, or have a strong desire to master, a leading RPA tool (UiPath a must; Blue Prism or Automation Anywhere a plus), including advanced RPA vendor certification.
• At least one year of hands-on experience with at least one of the following languages and technologies (e.g. .NET, Java, VB, C#/C, HTML/CSS, Python, web services, mainframe, web applications, SQL, data integration tools, technical automation tools). More senior developers should have a minimum of 2 to 4 years of this hands-on experience.
• Reasonable proficiency in reading Microsoft Visio (or an equivalent process flow-charting tool) or workflow-based logic
• Extra - Any prior work or academic experience with document management and processing tools (e.g. Kofax, ABBYY, IBM Datacap), data integration tools (e.g. Informatica, Microsoft SSIS), technical automation tools (e.g. shell scripting, PHP), or business process management tools (e.g. Pega).
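The document-processing side of the role above can be illustrated with a minimal sketch. The field names and regex patterns here are hypothetical, and real engagements would use UiPath's Document Understanding ML extractors rather than hand-written rules; this only shows the shape of the extraction output.

```python
import re

# Rule-based field extraction as a stand-in for a Document Understanding
# pipeline. Field names and patterns are hypothetical examples.
PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*#?\s*:?\s*(\w[\w-]*)", re.I),
    "total": re.compile(r"Total\s*:?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(text: str) -> dict:
    """Return the first match for each configured field, or None."""
    out = {}
    for name, pattern in PATTERNS.items():
        m = pattern.search(text)
        out[name] = m.group(1) if m else None
    return out

doc = "Invoice #: INV-1042\nDate: 2024-03-01\nTotal: $1,250.00"
print(extract_fields(doc))
```

In practice an RPA workflow would feed the extracted dictionary into downstream automation steps, with human-in-the-loop validation for low-confidence fields.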
Reverse Engineer
Requirements engineer job in Fairfax, VA
Full-time, Onsite (Fairfax, VA)
US Citizen with ability to obtain US Government clearance
*Top Secret preferred
We are seeking a Senior Security Researcher to join our team and lead reverse engineering efforts on embedded hardware and software. This role entails dealing with substantial ambiguity, aligning and leading team members through project execution, making substantial individual contributions, and mentoring other engineers.
Responsibilities:
Lead reverse engineering efforts on embedded hardware and software via individual contributions as well as directing and mentoring team members
Identify vulnerabilities in embedded systems via static analysis, dynamic analysis, and/or side channel attacks
Craft proof-of-concept exploits in support of customer engagements, utilizing shellcode and system analysis to achieve remote code execution, bypass secure boot, escalate privileges, and related techniques
Build defensive security solutions to detect and prevent malicious attacks across a wide variety of embedded systems
Develop software for automating vulnerability identification in embedded software
Contribute to in-house software technologies, including OFRAK (Open Firmware Reverse Analysis Konsole) and Symbiote (embedded run-time protection)
Design and develop automated hardware and software testing infrastructure
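The triage-automation responsibilities above can be sketched with a toy parser. The header format here is invented for illustration; real firmware image formats vary widely, and tooling like OFRAK handles them generically. This only shows the kind of small utility used to automate analysis before deeper work in a disassembler.

```python
import struct

# Toy static-analysis helper: parse a hypothetical little-endian firmware
# image header (magic, load address, entry point, body size).
HEADER_FMT = "<4sIII"

def parse_header(blob: bytes) -> dict:
    magic, load_addr, entry, size = struct.unpack_from(HEADER_FMT, blob, 0)
    if magic != b"FWIM":  # "FWIM" is a made-up magic value for this sketch
        raise ValueError("bad magic: %r" % magic)
    return {"load_addr": load_addr, "entry_point": entry, "body_size": size}

# Build a sample image header and parse it back.
image = struct.pack(HEADER_FMT, b"FWIM", 0x08000000, 0x08000400, 4096)
print(parse_header(image))
```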
Requirements:
Bachelor's Degree in Computer Science, Computer or Electrical Engineering, a related major or equivalent experience
5+ years of proven experience in software and/or hardware reverse engineering
Expert-level understanding of software and/or hardware vulnerabilities and practical exploitation techniques
Experience developing in assembly language for at least one mainstream architecture
Strong grasp of foundational Computer Science knowledge and principles
Enthusiasm for designing and building low-level software solutions
Proficiency in C/C++ and Python
Preferred skills:
Specialization in fuzzing and/or side channel attacks
Proficiency with ARM / AARCH64 / MIPS / PPC assembly languages
Experience writing research proposals and leading research projects
Master's degree or PhD in Computer Science, Engineering, or a related field
About You:
You are curious and have a high degree of self-initiative and self-motivation
You enjoy collaborating with, learning from, and teaching other researchers and engineers
You are passionate about building the future of embedded security and advancing the state of security for critical devices by leveraging novel technologies
You are comfortable navigating ambiguity and pushing through to results
Platform Engineer
Requirements engineer job in Arlington, VA
As a disclaimer, I do not work with third-party agencies or C2C firms on active positions, and my client is only accepting USC or GC candidates at this time. Please do not contact me regarding positions if you do not fit the above parameters.
Details
-- Arlington, VA area Senior Platform Engineer (DevOps)
-- Mid-Upper 100s base salary range, before benefits and bonus
-- 4 days onsite in Arlington area (1 day remote)
Senior Platform Engineer - DevOps Focus
Join a dynamic, multi-disciplinary team responsible for the architecture, design, implementation, and support of public and private cloud environments. This role involves building and maintaining systems and applications across diverse platforms, with a strong emphasis on automation, orchestration, and operational excellence.
Key Responsibilities
Design, implement, and maintain systems and applications in multi-vendor cloud environments.
Develop tools and processes for automated deployment, monitoring, and infrastructure management.
Lead engineering efforts on projects, including architectural design and implementation.
Evaluate and optimize software, systems, and workflows to enhance performance and reliability.
Architect automation solutions using tools like Terraform, CloudFormation, Ansible, and Puppet.
Ensure high availability and performance of systems through proactive monitoring and tuning.
Collaborate across teams to troubleshoot and optimize production systems and services.
Mentor junior engineers and contribute to knowledge sharing and team development.
Maintain documentation and ensure compliance with audit and security standards.
Qualifications
Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
10+ years of relevant experience, including at least 3 years in cloud platforms, CI/CD, scripting, or infrastructure-as-code.
Strong background in DevOps, system administration, or software development.
Expertise in software build and deployment pipelines.
Solid understanding of cybersecurity principles in DevOps and CI/CD environments.
AWS DevSecOps Engineer
Requirements engineer job in Ellicott City, MD
VITG is seeking a DevSecOps Engineer responsible for automating security integration throughout the CI/CD pipeline and the AWS cloud environment. This role will "shift security left" by taking the lead on implementing security-as-code tools, managing their usage, ensuring their proper configuration and compliance, and proactively embedding security policy into the development process. Our ideal candidate is passionate about being part of a "change," and working in a dynamic and highly collaborative environment focused on speed, stability, and security.
The DevSecOps Engineer provides hands-on expertise to integrate and maintain the security posture for corporate systems that support Federal programs, ensuring a successful program Authority To Operate (ATO). You will be responsible for developing, monitoring, and maintaining systems and procedures to safeguard internal information systems, networks, and CI/CD pipelines through automation.
Applicant Requirements:
US citizen or must be authorized to work in the United States
Must have lived in the USA for three of the last five years
Must be able to obtain a US federal gov badge and eligible for Public Trust clearance
Must be able to pass a background check, including a drug test
Job Responsibilities:
Develop, implement, and maintain security automation throughout the entire SDLC, integrating security into the CI/CD pipelines using Jenkins/GitHub and Infrastructure-as-Code (IaC) principles.
Run and manage security scans with tools such as Snyk (SAST/SCA) and establish automated tracking and enforcement mechanisms for vulnerability remediation.
Integrate and manage security workloads running on AWS containers and ensure container image scanning and runtime security policies are enforced.
Design, manage, and maintain source code for AWS infrastructure in GitHub and manage automated pipelines, ensuring security checks and gates are embedded in every deployment.
Maintain security information on JIRA/Confluence and actively participate in agile DevSecOps practices, promoting a "Secure-by-Design" culture.
Provide hands-on support for developing, coordinating, implementing, and enforcing information systems security policies, standards, and methodologies as code.
Maintain operational security posture for Enterprise Salesforce FISMA system by ensuring security is baked into configuration and deployment practices.
Implement security tools, security tool usage, and policy-as-code to ensure configurations remain compliant and configured properly, all while ensuring a successful program ATO.
Automate vulnerability/risk assessment analysis to support continuous monitoring and authorization.
Manage changes to the system and assess the security impact of those changes through automated compliance checks.
Assist with the management of security aspects of the information system and perform day-to-day security operations of the system.
Evaluate security solutions to ensure they meet security requirements for processing classified information.
Perform vulnerability/risk assessment analysis to support certification and accreditation.
Prepare and review documentation, including System Security Plans (SSPs), Risk Assessment Reports, Certification and Accreditation (C&A) packages, and System Requirements Traceability Matrices (SRTMs).
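The "automated tracking and enforcement" responsibility above boils down to a policy gate in the pipeline: fail the build when a scan report contains findings at or above a severity threshold. The report schema below is hypothetical; real tools such as Snyk each have their own output formats, so this is only a sketch of the gating logic.

```python
import json

# Map severities to ranks so a threshold comparison is possible.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_json: str, threshold: str = "high") -> list:
    """Return the findings that violate the policy (empty list = pass)."""
    findings = json.loads(report_json)["findings"]
    limit = SEVERITY_RANK[threshold]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= limit]

# Hypothetical scanner output for demonstration.
report = json.dumps({"findings": [
    {"id": "VULN-1", "severity": "medium"},
    {"id": "VULN-2", "severity": "critical"},
]})
violations = gate(report)
print(violations)  # the critical finding would block the deployment
```

In a Jenkins or GitHub Actions job, a non-empty return value would translate into a non-zero exit code, stopping the deployment stage.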
Qualifications & Skills:
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related discipline
Minimum of 6 years of related experience in Information Technology, including 4 years in the DevSecOps or Application Security (AppSec) space.
Demonstrated hands-on experience in cloud environments such as AWS Commercial and GovCloud, specifically with security automation, logging, and monitoring services (e.g., GuardDuty, Security Hub, CloudTrail).
Expertise in CI/CD pipeline management and the integration of security tools for Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA).
Required: Strong hands-on experience with AWS, Snyk, GitHub, JIRA, and Confluence to implement and manage the end-to-end DevSecOps toolchain.
Demonstrated work experience with Infrastructure-as-Code (IaC) security (e.g., using Checkov or Terrascan on Terraform/CloudFormation).
(Preferred) Experience with Salesforce Platform and tool ecosystem
(Preferred) Salesforce or any other platform tool: configuration/setup of external client applications and secure communications (TLS)
(Preferred) AppOmni: hands-on experience managing issues and performing new org additions and configurations.
Strong background in the certification and accreditation process (ATO) and the ability to automate compliance checks against frameworks like FISMA, NIST, and FedRAMP.
Possesses working knowledge of business security practices, current security automation tools, and policy-as-code implementation.
Demonstrated working knowledge of vulnerability assessment and penetration testing processes, focusing on how to automate these checks.
Experience with Government Agency Security Assessment Process in support of maintaining and/or establishing an ATO and the appropriate boundary.
Experience with, understanding of, and adherence to guidelines such as FISMA, NIST, HIPAA, and IRS Pub 1075 (Preferred)
Preferred Certifications:
Required: AWS DevOps, SysOps, or equivalent certification
Preferred: industry certifications such as CISSP, CEH, GIAC, etc.
Job Type: Full Time
Salary: BOE
Benefits:
401(k) with employer contribution
Medical/Dental/Vision insurance (option for full coverage for employee)
Life, ST/LT insurance
Professional development opportunities
Schedule:
8 hour shift
May include minimal after hours support depending on deployment schedule
Work Type:
Hybrid remote in Ellicott City, MD 21043
1 to 2 days in office weekly
AI Engineer
Requirements engineer job in Linthicum, MD
Must-Haves:
Solid understanding of generative AI models (LLMs, RAG and RAG pipelines, prompt engineering, model tuning, etc.); OpenAI preferred but not required
Familiarity with cloud-based (AWS) AI/ML services
Expertise in Python for AI/ML development (including frameworks like PyTorch and scikit-learn)
Proficiency with Docker and Kubernetes for containerized deployments
Familiarity with version control systems (e.g., Git) and modern development workflows.
Familiarity with CI/CD tools like Jenkins
Plusses:
Secret or TS Clearance
JavaScript (Frameworks like React)
HTML/CSS
Job Description:
A client of Insight Global is seeking an AI Architect/Engineer to drive our artificial intelligence initiatives. This role requires a unique blend of strategic thinking and hands-on implementation expertise. The ideal candidate will be able to understand business objectives and translate them into AI architectures and pipelines, while also possessing the technical skills to develop and deploy production-ready AI models to support use cases. A key focus will be on implementing solutions that automate business and operational processes, leveraging machine learning, robotic process automation, and other automation technologies, improving efficiency and reducing friction.
Pay:
$80-100/hr
AI Engineer
Requirements engineer job in Washington, DC
Job Title: Developer Premium II - AI Engineer
Duration: 7 Months with long term extension
Hybrid Onsite: 4 days per week from Day 1
AI Engineer: The AI Engineer will play a pivotal role in designing, developing, and deploying artificial intelligence solutions that enhance operational efficiency, automate decision-making, and support strategic initiatives for the environmental and social specialists within the client. This role is central to the VPU's digital transformation efforts and will contribute to the development of scalable, ethical, and innovative AI systems.
Qualifications and Experience
Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or related field.
Experience:
Minimum 3 years of experience in AI/ML model development and deployment.
Experience with MLOps tools (e.g., MLflow), Docker, and cloud platforms (AWS, Azure, GCP).
Proven track record in implementing LLMs, RAG, NLP model development and GenAI solutions.
Technical Skills:
Skilled in Azure AI/Google Vertex Search, vector databases, RAG fine-tuning, NLP model development, and API management (facilitating access to different sources of data)
Proficiency in Python, TensorFlow, PyTorch, and NLP frameworks.
Expertise in deep learning, computer vision, and large language models.
Familiarity with REST APIs, NoSQL, and RDBMS.
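The RAG skills listed above center on one step: retrieving the most relevant document to hand to an LLM as context. Below is a minimal sketch of that retrieval step using cosine similarity over bag-of-words vectors; production systems would use learned embeddings and a vector database (as the posting mentions), not word counts. The sample documents are illustrative.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: cosine(vectorize(query), vectorize(d)))

docs = [
    "Safeguard policies for environmental impact assessments",
    "Travel expense reporting procedures",
]
best = retrieve("environmental safeguard assessment", docs)
print(best)
```

The retrieved text would then be prepended to the LLM prompt, which is the "augmented generation" half of RAG.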
Certifications (Preferred):
Microsoft Certified: Azure AI Engineer Associate
Google Machine Learning Engineer
SAFe Agile Software Engineer (ASE)
Certification in AI Ethics
Objectives of the Assignment:
Develop and implement AI models and algorithms tailored to business needs.
Integrate AI solutions into existing systems and workflows.
Ensure ethical compliance and data privacy in all AI initiatives.
Support user adoption through training and documentation.
Support existing AI solutions by refinement, troubleshooting, and reconfiguration
Scope of Work and Responsibilities:
AI Solution Development:
Collaborate with cross-functional teams to identify AI opportunities.
Train, validate, and optimize machine learning models.
Translate business requirements to technical specifications.
AI Solution Implementation
Develop code, deploy AI models into production environments, and conduct ongoing model training
Monitor performance, troubleshoot issues, and fine-tune solutions to improve accuracy
Ensure compliance with ethical standards and data governance policies.
User Training and Adoption:
Conduct training sessions for stakeholders on AI tools.
Develop user guides and technical documentation.
Data Analysis and Research:
Collect, preprocess, and engineer large datasets for machine learning and AI applications.
Recommend and implement data cleaning and preparation.
Analyze and use structured and unstructured data (including geospatial data) to extract features and actionable insights.
Monitor data quality, detect bias, and manage model/data drift in production environments.
Research emerging AI technologies and recommend improvements.
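The drift-monitoring duty above is commonly implemented with the Population Stability Index (PSI), which compares a baseline feature distribution to the live one over shared bins. The thresholds quoted in the comment are a common rule of thumb, not a standard; tune them per use case.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.
    Rule of thumb (an assumption, tune per use case):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    eps = 1e-6  # floor to avoid log(0) on empty bins
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # bin proportions at training time
identical = psi(baseline, baseline)
shifted = psi(baseline, [0.10, 0.20, 0.30, 0.40])
print(identical, shifted)
```

A monitoring job would compute this per feature on a schedule and alert (or trigger retraining) when the index crosses the chosen threshold.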
Governance, Strategy, Support, and Maintenance:
Advise client's staff on AI strategy and policy implications
Contribute to the team's AI roadmap and innovation agenda.
Provide continuous support and contribute towards maintenance and future enhancements.
Deliverables:
Work on Proof of Concepts to study the technical feasibility of AI Use Cases
Functional AI applications integrated into business systems.
Documentation of model/application architecture, training data, and performance metrics.
Training materials and user guides.
Develop, train, and deploy AI models tailored to business needs
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
DevOps Engineer
Requirements engineer job in Washington, DC
Description: As a DevOps Engineer, you will play a critical role in bridging the gap between development and operations by automating and streamlining the build, test, and deployment processes. You will support automation of multiple processes. You will work closely with software developers, system operators, and other IT staff members to manage code releases and ensure the smooth operation of production environments. Your expertise will be essential in driving the adoption of best practices for continuous integration and continuous deployment (CI/CD) and in maintaining a robust and scalable infrastructure.
Responsibilities
• Develop, implement, and manage CI/CD pipelines to automate the build, test, and deployment processes.
• Collaborate with development and operations teams to streamline code integration and delivery.
• Manage and monitor production environments, ensuring high availability and performance.
• Automate infrastructure provisioning and configuration management using tools such as Ansible.
• Implement and maintain monitoring, logging, and alerting solutions to ensure system reliability and performance.
• Identify and resolve infrastructure and deployment issues, ensuring minimal disruption to services.
• Ensure security best practices are followed in the deployment and operation of applications and infrastructure.
• Work with cross-functional teams to develop and implement scalable and resilient system architectures.
• Continuously evaluate and improve processes, tools, and technologies to enhance the efficiency and effectiveness of the DevOps workflow.
Skills
• Strong understanding of CI/CD principles and hands-on experience with tools such as GitLab, Terraform, Helm, database configuration management (Liquibase/Flyway), Jenkins, GitLab CI, CircleCI, and Travis CI
• Proficiency in scripting and automation languages such as Python, Bash, or PowerShell.
• Experience with infrastructure as code (IaC) tools like Ansible.
• Solid understanding of cloud platforms, especially AWS.
• Familiarity with containerization and orchestration tools such as Docker, Kubernetes and OpenShift.
• Experience with monitoring and logging tools like Prometheus, Grafana, ELK stack, or Splunk.
• Strong problem-solving skills and the ability to troubleshoot complex system issues.
Clearance: Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
Compensation and Benefits
Salary Range: $60.00 - $75.00 (Compensation is determined by various factors, including but not limited to location, work experience, skills, education, certifications, seniority, and business needs. This range may be modified in the future.)
Benefits: Gridiron offers a comprehensive benefits package including medical, dental, vision insurance, HSA, FSA, 401(k), disability & ADD insurance, life and pet insurance to eligible employees. Full-time and part-time employees working at least 30 hours per week on a regular basis are eligible to participate in Gridiron's benefits programs.
Gridiron IT Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status or disability status.
Gridiron IT is a Women Owned Small Business (WOSB) headquartered in the Washington, D.C. area that supports our clients' missions throughout the United States. Gridiron IT specializes in providing comprehensive IT services tailored to meet the needs of federal agencies. Our capabilities include IT Infrastructure & Cloud Services, Cyber Security, Software Integration & Development, Data Solution & AI, and Enterprise Applications. These capabilities are backed by Gridiron IT's experienced workforce and our commitment to ensuring we meet and exceed our clients' expectations.
Data Engineer
Requirements engineer job in Falls Church, VA
*** W2 Contract Only - No C2C - No 3rd Parties ***
The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA.
In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.
Compensation, Benefits, and Role Info
Competitive pay rate of $65 per hour.
Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting.
Type: 12-month contract with potential extension or conversion.
Location: On-site in Falls Church, VA.
What You'll Be Doing
Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources.
Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance.
Write and optimize complex, highly-performant SQL queries across large datasets (Redshift, Oracle, SQL Server).
Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions.
Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.
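The transform step of the pipelines described above can be sketched in pure Python. In the role itself this logic would run distributed via AWS Glue/PySpark against Redshift-bound data; the column names and quality rules here are illustrative only.

```python
import csv
import io

# Sample raw extract with one bad row (missing amount) and inconsistent casing.
RAW = """order_id,amount,region
1001,250.00,east
1002,,west
1003,99.50,EAST
"""

def transform(raw: str) -> list[dict]:
    """Parse raw CSV records, drop rows failing quality checks, normalize fields."""
    rows = csv.DictReader(io.StringIO(raw))
    out = []
    for row in rows:
        if not row["amount"]:  # quality gate: reject missing amounts
            continue
        out.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "region": row["region"].lower(),  # normalize for grouping
        })
    return out

clean = transform(RAW)
print(clean)
```

In Glue the same shape appears as a PySpark `DataFrame` filter-and-select, with the load step writing to Redshift instead of returning a list.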
What We're Looking For
8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems.
5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark.
Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift.
Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance.
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.
If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure.
DevOps Engineer
Requirements engineer job in McLean, VA
The candidate should be able to drive implementation and improvement of tools and technologies for enterprise adoption in accordance with operational and security standards.
Practice and promote a Site Reliability Engineering (SRE) culture to improve and operate cloud platform offerings to the enterprise while working toward innovation, automation, and operational excellence.
Automation experience is a must for this position.
Ability to provide 24x7 operational support on a periodic basis and involvement in issue resolution are musts.
Must Have Qualifications:
Must have 5+ years of hands-on experience with AWS CloudFormation and Terraform. Automation through shell scripting and Python required (Ansible nice to have). 3+ years of experience with EKS and Kubernetes
Technical expertise:
7+ years of overall information technology experience with an emphasis on integration and delivery of virtual/cloud platforms to enterprise applications.
At least 5 years of proven experience with AWS CloudFormation, Terraform, or similar tools.
3+ years of experience with engineering and supporting containerization technology (OpenShift, Kubernetes, AWS(ECS/EKS), etc.) at scale.
Experience in Python, Ansible and shell scripting to automate routine operation tasks.
Experience with Tetrate, Rancher, and ArgoCD is highly preferred.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ***********************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Aishwarya Chandra
Email: ****************************************
Job ID: 25-53450
Cloud Data Engineer- Databricks
Requirements engineer job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt.
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation and interactive skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Senior Data Engineer
Requirements engineer job in McLean, VA
The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing.
Experience with tools such as Jenkins, SonarQube AND/OR Fortify are preferred for this role.
Experience in Angular and DevOps are nice to haves for this role.
Must Have Qualifications: PySpark/Python based microservices, AWS EKS, Postgres SQL Database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube and Fortify.
Responsibilities:
Development of microservices based on Python, PySpark, AWS EKS, AWS Postgres for a data-oriented modernization project.
New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest
Perform System, functional and data analysis on the current system and create technical/functional requirement documents.
Current System: Informatica, SAS, AutoSys, DB2
Write automated tests using Behave/Cucumber, based on the new microservices-based architecture
Promote top code quality and solve issues related to performance tuning and scalability.
Strong skills in DevOps, Docker/container-based deployments to AWS EKS using Jenkins and experience with SonarQube and Fortify.
Able to communicate and engage with business teams and analyze the current business requirements (BRS documents) and create necessary data mappings.
Preferred strong skills and experience in reporting applications development and data analysis
Knowledge in Agile methodologies and technical documentation.
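As a rough illustration of the Pytest-style unit testing this role calls for, the sketch below tests a small data-cleaning helper. The `dedupe_records` function and its behavior are assumptions invented for the example, not part of any actual codebase named in the listing.

```python
# Hypothetical data-validation helper plus a Pytest-style unit test.
# (Function name and record shape are assumptions for illustration.)

def dedupe_records(records: list[dict]) -> list[dict]:
    """Drop duplicate records by 'id', keeping the first occurrence."""
    seen, out = set(), []
    for rec in records:
        if rec["id"] not in seen:
            seen.add(rec["id"])
            out.append(rec)
    return out

def test_dedupe_keeps_first_occurrence():
    rows = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
    assert dedupe_records(rows) == [{"id": 1, "v": "a"}, {"id": 2, "v": "c"}]
```

Behave/Cucumber feature files would sit one layer above tests like this, expressing the same checks as Gherkin scenarios backed by step definitions.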
Sr DevOps Engineer-Local to McLean VA Only
Requirements engineer job in McLean, VA
Must Have Qualifications:
Must have 5+ years of hands-on experience with AWS CloudFormation and Terraform.
Automation through Shell Scripting and Python required (Ansible nice to have).
3+ years of experience with EKS and Kubernetes.
Description:
The candidate should be able to drive implementation and improvement of tools and technologies for enterprise adoption in accordance with operational and security standards.
Practice and promote a Site Reliability Engineering (SRE) culture to improve and operate cloud platform offerings to the enterprise while working toward innovation, automation, and operational excellence.
Automation experience is a must for this position, as is the ability to provide 24x7 operational support on a periodic basis and involvement in issue resolution.
Technical expertise: 7+ years of overall information technology experience with an emphasis on integration and delivery of virtual/cloud platforms to enterprise applications.
At least 5 years of proven experience with AWS CloudFormation, Terraform, or similar tools.
3+ years of experience with engineering and supporting containerization technology (OpenShift, Kubernetes, AWS ECS/EKS, etc.) at scale.
Experience in Python, Ansible, and shell scripting to automate routine operational tasks.
Experience with Tetrate, Rancher, and ArgoCD is highly preferred.
DevOps Engineer (Terraform)
Requirements engineer job in Annapolis, MD
Veteran-Owned Firm Seeking a DevOps Engineer (Terraform) with TS/SCI for a role in Annapolis Junction, MD
My name is Stephen Hrutka. I lead a Veteran-Owned management consulting firm in Washington, DC. We specialize in Technical and Cleared Recruiting for the Department of Defense (DoD), the Intelligence Community (IC), and other advanced defense agencies.
At HRUCKUS, we support fellow Veteran-Owned businesses by helping them recruit for positions across organizations such as the VA, SBA, HHS, DARPA, and other leading-edge R&D-focused defense agencies.
We seek to fill a DevOps Engineer (Terraform) position in Annapolis Junction, MD.
The ideal candidate must hold an active TS/SCI clearance and have 5-8+ years of experience with a BS/BA, 3-5+ years with an MS/MA, or 0-2+ years with a PhD. Experience in software engineering and integration of Commercial Off-the-Shelf software products is required, as is a current DoD 8570.1-M IAT Level III certification.
If you're interested, I'll gladly provide more details about the role and discuss your qualifications further.
Thanks,
Stephen M Hrutka
Principal Consultant
HRUCKUS LLC
Executive Summary: HRUCKUS seeks a DevOps Engineer (Terraform) for a role supporting the Department of the Navy in Annapolis Junction, MD.
Position Description: The DevOps Engineer with a strong specialization in Terraform will be responsible for designing, implementing, and maintaining scalable infrastructure solutions using Terraform, ensuring efficient deployment and management of resources across various cloud environments. A deep understanding of Infrastructure as Code (IaC) principles and practices is essential for this role.
Position Job Duties:
While your primary focus will be on Terraform, we also value a strong working knowledge of Ansible to automate configuration management and streamline operational tasks. You will collaborate closely with development, operations, and security teams to enhance system reliability, scalability, and performance.
Design and implement cloud infrastructure using Terraform and HashiCorp Configuration Language (HCL)
Automate provisioning, scaling, and infrastructure management to ensure reliability and efficiency
Collaborate with development and operations teams to support cloud initiatives and streamline deployment processes
Monitor cloud environments, perform regular maintenance, and troubleshoot issues as they arise
Implement and enforce best practices for cloud security, compliance, and risk management
Conduct ongoing security testing and code reviews to identify and mitigate vulnerabilities
Contribute to architectural designs and technical documentation, and provide engineering solutions for improved security
Stay current with new tools, technologies, and industry best practices, and advise teams on secure coding practices
Integrate Terraform with Ansible to automate the provisioning and configuration of infrastructure, ensuring a seamless and consistent infrastructure-as-code (IaC) deployment process across cloud and on-prem environments
Use Terraform to manage and provision infrastructure resources (e.g., compute, storage, networking) and leverage Ansible for post-provisioning configuration management, including application deployments and security hardening
Collaborate with engineering teams to design end-to-end automation workflows that combine Terraform's infrastructure provisioning with Ansible's configuration management, ensuring scalable, repeatable, and secure deployments
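The Terraform-then-Ansible hand-off described in the duties above can be sketched as a small driver that provisions first and configures second. This is an illustrative sketch only: the workspace directory, inventory path, and playbook name are assumptions, and a real pipeline would parse Terraform's JSON output to build the Ansible inventory.

```python
# Hypothetical provision-then-configure driver for a Terraform + Ansible
# workflow. Workspace layout, inventory file, and playbook are assumptions.
import subprocess

def build_pipeline_commands(workspace: str, playbook: str) -> list[list[str]]:
    """Ordered CLI invocations: provision with Terraform, then configure with Ansible."""
    return [
        ["terraform", f"-chdir={workspace}", "init", "-input=false"],
        ["terraform", f"-chdir={workspace}", "apply", "-auto-approve", "-input=false"],
        # Terraform outputs (e.g. instance IPs) would feed the Ansible inventory.
        ["terraform", f"-chdir={workspace}", "output", "-json"],
        ["ansible-playbook", "-i", f"{workspace}/inventory.ini", playbook],
    ]

def run_pipeline(workspace: str, playbook: str) -> None:
    # check=True fails fast, so configuration never runs on a broken provision.
    for cmd in build_pipeline_commands(workspace, playbook):
        subprocess.run(cmd, check=True)
```

Ordering the stages this way keeps provisioning (Terraform) and post-provisioning configuration (Ansible) cleanly separated, which is the division of labor the duties above describe.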
Position Qualifications:
TS/SCI level clearance is required
Requires 5-8+ years with BS/BA, 3-5+ years with an MS/MA, or 0-2+ years with a PhD
Experience in software engineering and integration of Commercial Off-the-Shelf software products
Certification Requirement:
Must have a current DoD 8570.1-M IAT Level III certification
Details:
Job Title: DevOps Engineer (Terraform & SOR-2)
3 Site Locations:
Annapolis Junction, MD (Primary)
JBAB (Depends on Responsibilities)
Landover, MD (Future)
Security Clearance Requirement: TS/SCI
Assignment Type: Full-time
Salary: Maximum annual salary of $145,000