Data Scientist
Data scientist job in Columbia, MD
Data Scientist - Transit Data Focus | Columbia, MD (On-site/Hybrid) | Contract (6 Months)
Data Scientist - Transit Data Focus
Employment type: Contract
Duration: 6 Months
Justification: To manage and analyze customer databases, AVA (automated voice announcement) data, and schedule data for predictive maintenance and service planning.
Experience Level: 3-5 years
Job Responsibilities:
Collect, process, and analyze transit-related datasets including customer databases, AVA (automated voice announcement) logs, real-time vehicle data, and schedule data.
Develop predictive models and data-driven insights to support maintenance forecasting, service planning, and operational optimization.
Design and implement data pipelines to integrate, clean, and transform large, heterogeneous transit data sources.
Perform statistical analysis and machine learning to identify patterns, trends, and anomalies relevant to transit service performance and reliability.
Collaborate with transit planners, maintenance teams, and IT staff to translate data insights into actionable business strategies.
Monitor data quality and integrity; implement data validation and cleansing processes.
Technical Skills & Qualifications:
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Transportation Engineering, or a related quantitative field.
3-5 years of experience working as a data scientist or data analyst, preferably in a transit, transportation, or public sector environment.
Strong proficiency in Python or R for data analysis, statistical modeling, and machine learning.
Experience with SQL for database querying, manipulation, and data extraction.
Familiarity with transit data standards such as GTFS, AVL/CAD, APC (Automated Passenger Counters), and AVA systems; a minimal GTFS-loading sketch follows this list.
Experience with data visualization tools such as Power BI or equivalent.
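For illustration only (not part of the client's requirements): a minimal sketch of loading a GTFS feed with Python and pandas. The file names follow the GTFS standard, but the feed path and the route-level rollup are invented for the example.

    import zipfile
    import pandas as pd

    # Hypothetical path to a GTFS feed archive; any standard GTFS zip works.
    GTFS_PATH = "gtfs_feed.zip"

    with zipfile.ZipFile(GTFS_PATH) as z:
        # stop_times.txt and trips.txt are standard GTFS files.
        stop_times = pd.read_csv(z.open("stop_times.txt"))
        trips = pd.read_csv(z.open("trips.txt"))

    # Join scheduled stop events to their trips and count stop events per route.
    merged = stop_times.merge(trips, on="trip_id")
    print(merged.groupby("route_id")["stop_id"].count().sort_values(ascending=False).head())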
Data Scientist
Data scientist job in McLean, VA
Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support. The candidate MUST have experience designing and building data infrastructure, developing data pipelines, transforming and preparing data, ensuring data quality and security, and monitoring and optimizing systems. The candidate MUST have extensive experience with Python and AWS. Experience with SQL, multi-data source queries with database technologies (PostgreSQL, MySQL, RDS, etc.), NiFi, Git, Elasticsearch, Kibana, Jupyter Notebooks, NLP, AI, and any data visualization tools (Tableau, Kibana, Qlik, etc.) is desired.
Required Skills and Demonstrated Experience
Demonstrated experience with data engineering, to include designing and building data infrastructure, developing data pipelines, transforming/preparing data, ensuring data quality and security, and monitoring/optimizing systems.
Demonstrated experience with data management and integration, including designing and operating robust data layers for application development across local and cloud or web data sources.
Demonstrated work experience programming with Python
Demonstrated experience building scalable ETL and ELT workflows for reporting and analytics.
Demonstrated experience with general Linux computing and advanced bash scripting
Demonstrated experience with SQL.
Demonstrated experience constructing complex multi-data source queries with database technologies such as PostgreSQL, MySQL, Neo4J or RDS
Demonstrated experience processing data sources containing structured or unstructured data
Demonstrated experience developing data pipelines with NiFi to bring data into a central environment
Demonstrated experience delivering results to stakeholders through written documentation and oral briefings
Demonstrated experience using code repositories such as Git
Demonstrated experience using Elastic and Kibana
Demonstrated experience working with multiple stakeholders
Demonstrated experience documenting such artifacts as code, Python packages and methodologies
Demonstrated experience using Jupyter Notebooks
Demonstrated experience with machine learning techniques including natural language processing
Demonstrated experience explaining complex technical issues to more junior data scientists, in graphical, verbal, or written formats
Demonstrated experience developing tested, reusable and reproducible work
Work or educational background in one or more of the following areas: mathematics, statistics, hard sciences (e.g. Physics, Computational Biology, Astronomy, Neuroscience, etc.), computer science, data science, or business analytics
Desired Skills and Demonstrated Experience
Demonstrated experience with cloud services, such as AWS, as well as cloud data technologies and architecture.
Demonstrated experience using big data processing tools such as Apache Spark or Trino
Demonstrated experience with machine learning algorithms
Demonstrated experience with using container frameworks such as Docker or Kubernetes
Demonstrated experience with using data visualizations tools such as Tableau, Kibana or Apache Superset
Demonstrated experience creating learning objectives and creating teaching curriculum in technical or scientific fields
Location:
McLean, Virginia
This position is onsite and there is no remote availability.
Clearance:
TS/SCI with Full Scope Polygraph
Applicants MUST hold U.S. citizenship for this position in accordance with government contract requirements.
Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take the steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
Junior Data Scientist (TS/SCI)
Data scientist job in Springfield, VA
We are seeking a junior-level Data Science professional with a strong academic foundation and early hands-on experience to join our team as an Exploitation Specialist. The ideal candidate will hold a bachelor's degree in a data science-related field and bring internship or project experience that demonstrates curiosity, initiative, and a willingness to learn from senior team members. This role is a great opportunity for someone eager to grow their technical skill set while supporting a high-impact mission.
Required Qualifications
Active TS/SCI clearance with the willingness to obtain a CI polygraph
Ability to work onsite in Northern Virginia, 40 hours per week (telework options are extremely limited)
Proficiency with Python and SQL
Preferred Qualifications
Familiarity with GEOINT collection and related NGA/NRO systems
Experience with additional programming languages such as R, JavaScript, HTML, and CSS
Understanding of object-oriented programming
Experience using visualization tools such as Grafana, Tableau, or Kibana
Ability to quickly learn new technologies, adapt to evolving mission requirements, and support the development/testing of new analytic methodologies
Senior Data Scientist
Data scientist job in McLean, VA
Purpose:
As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution.
Key Result Areas and Activities:
Design, build, and deploy AI/ML solutions using modern cloud and data platforms.
Lead data science projects across industries, ensuring alignment with business goals.
Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases.
Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark.
Communicate findings effectively through data visualizations and stakeholder presentations.
Drive adoption of MLOps practices for scalable and reliable model deployment.
Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing.
Roles & Responsibilities
Essential Skills
5+ years of experience in applied AI and machine/deep learning.
Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib.
Proficiency in Python and SQL for model development and data manipulation.
Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI.
Familiarity with cloud platforms: AWS, Azure, and GCP.
Solid foundation in advanced statistical methods and probabilistic analysis.
Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management.
Experience with visualization tools like Tableau, Power BI, or similar.
Familiarity with ML workflow orchestration and MLOps practices.
Desirable Skills
Experience with LLMs (Large Language Models) and agentic AI workflows.
Familiarity with modern data platforms like Snowflake.
Exposure to real-time data processing in cloud-native environments.
Contributions to open-source AI projects or publications in data science communities.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field.
Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation and interactive skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Data Engineer
Data scientist job in McLean, VA
Immediate need for a talented Data Engineer. This is a 12-month contract opportunity with long-term potential and is located in McLean, VA (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93504
Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning (an illustrative windowing sketch follows this list).
Develop backend and automation tools using Golang and/or Python as needed.
Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
Perform root-cause analysis and implement automation to prevent recurring issues.
Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
Ensure compliance with enterprise governance, data quality, and cloud security standards.
Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.
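As a rough illustration of the windowing work described above (a sketch only, not project code; the dataset and column names are invented), the following PySpark snippet expresses the same pattern a Snowflake SQL window function writes as SUM(amount) OVER (PARTITION BY ... ORDER BY ...):

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("window-demo").getOrCreate()

    # Invented sample data standing in for a transactions table.
    df = spark.createDataFrame(
        [("a", "2024-01-01", 10.0), ("a", "2024-01-02", 12.5), ("b", "2024-01-01", 7.0)],
        ["account_id", "txn_date", "amount"],
    )

    # Running total per account, ordered by date.
    w = Window.partitionBy("account_id").orderBy("txn_date")
    df.withColumn("running_total", F.sum("amount").over(w)).show()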
Key Requirements and Technology Experience:
Skills: Data Engineer - Python, Spark/PySpark, AWS, Golang; able to write and troubleshoot complex SQL queries against Snowflake tables; Java/Python; AWS (Glue, EC2, Lambda).
Proficiency in Python with experience building scalable data pipelines or ETL processes.
Strong hands-on experience with Spark/PySpark for distributed data processing.
Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
Experience with Golang for scripting, backend services, or performance-critical processes.
Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
Familiarity with CI/CD workflows, Git, and automated testing.
Our client is a leader in the banking and financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Data Engineer
Data scientist job in Falls Church, VA
*** W2 Contract Only - No C2C - No 3rd Parties ***
The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA.
In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.
Compensation, Benefits, and Role Info
Competitive pay rate of $65 per hour.
Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting.
Type: 12-month contract with potential extension or conversion.
Location: On-site in Falls Church, VA.
What You'll Be Doing
Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources (a minimal Glue job skeleton follows this list).
Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance.
Write and optimize complex, highly-performant SQL queries across large datasets (Redshift, Oracle, SQL Server).
Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions.
Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.
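A minimal sketch of the kind of Glue job referenced above, assuming the awsglue libraries that the Glue runtime provides; the database, table, and bucket names are placeholders, not client systems:

    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job

    # Glue job boilerplate; awsglue is available inside a Glue job run.
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init("example-etl-job")

    # Read from the Glue Data Catalog, drop a column, write Parquet to S3.
    src = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="raw_events"
    )
    cleaned = src.drop_fields(["unused_column"])
    glue_context.write_dynamic_frame.from_options(
        frame=cleaned,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/cleaned/"},
        format="parquet",
    )
    job.commit()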
What We're Looking For
8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems.
5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark.
Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift.
Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance.
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.
If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure.
#DataEngineer #DataEngineering #AWSEngineer #Redshift #ETL #PySpark #DataPipeline #Contract
Data Engineer
Data scientist job in Baltimore, MD
Job Title : Data Engineer
Job Type : Full Time
Our client is looking for Data Engineers who are passionate about building data-driven, cloud-based solutions and working with advanced technologies. This role offers an opportunity to join a dynamic team and contribute to creating impactful digital experiences that truly make a difference.
What You'll Do:
Collaborate with Agile teams to design, develop, and implement scalable data solutions.
Work with technologies like Python, SQL, Scala, Java, open-source RDBMS, NoSQL databases, and cloud-based data warehousing tools such as Redshift and Snowflake.
Build and optimize data pipelines for real-time streaming and big data processing.
Partner with product teams to deliver robust, cloud-native solutions.
Perform unit testing and code reviews to ensure high-quality, efficient data systems.
What We're Looking For:
Bachelor's degree in a relevant field.
3+ years of experience in application or data engineering.
1+ year of experience with big data technologies.
Preferred Skills:
5+ years of experience with Python, SQL, Scala, or Java.
2+ years of experience with public cloud platforms (AWS, Azure, or GCP).
3+ years of experience with distributed computing tools.
Experience with real-time data streaming, NoSQL databases (MongoDB, Cassandra), and data warehousing (Redshift, Snowflake).
Strong knowledge of UNIX/Linux systems and shell scripting.
Familiarity with Agile development practices.
Senior Data Engineer - Data Intelligence
Data scientist job in Baltimore, MD
Hybrid Role
Job Title: Data Intelligence - Engineer, Data Sr
Project Type: Contract
Duration of the project: 12 Months
We are looking for candidates with 5+ years' experience in Ab Initio Administration.
Must have 2+ years' experience with AWS. Working with Ab Initio in AWS Cloud.
Must have solid experience building, installing and configuring Ab Initio.
Must have AWS EKS containerization experience; will be involved in moving Linux instances to AWS EKS.
Ab Initio Lead Infrastructure
This position is an Ab Initio Administrator role, not a developer position. The Senior Ab Initio ETL Administrator is responsible for the tasks involved in administering the ETL tool (Ab Initio) as well as migrating Ab Initio infrastructure to the Cloud. The candidate will support the implementation of a data integration/data warehouse platform for data products on-prem and in the AWS Cloud, including EKS containerization for Ab Initio.
6 - 8 Years' Experience
At least 6 years of experience with all tasks involved in administering the ETL tool (Ab Initio)
Experience managing migration or infrastructure-build projects without supervision
At least 6 years of experience and advanced knowledge of the Ab Initio Graphical Development Environment (GDE), Metadata Hub, and Operational Console
Experience with Ab Initio, AWS EKS, S3, DynamoDB, MongoDB, PostgreSQL, RDS, and DB2
Experience creating big data ETL pipelines from on-premises systems to data factories, data lakes, and cloud storage such as EBS or S3
DevOps (CI/CD pipeline) experience; Jenkins preferred
Advanced knowledge of UNIX and SQL
Experience managing Metadata Hub (MDH) and Operational Console and troubleshooting environmental issues that affect these components
Experience with scripting and automation, such as designing and developing automated ETL processes and architecture and unit testing ETL code
Experience working on break-fix and continuous development items, including review and inspection of production changes
Perform code reviews of ETL code developed by the development team and provide guidance to resolve issues
Service-Oriented Architecture (SOA) knowledge and demonstrated knowledge of testing environments, processes, and best practices
Demonstrated experience working in an enterprise environment with cross-team interaction, collaboration, and policies
Strong testing skills
Excellent problem-solving skills
Strong analytical skills
Excellent verbal and written communications skills
Familiar with structured programming techniques
Must be able to perform assigned tasks with minimum supervision
Strong documentation skills
Experience working in an Agile environment is a plus
Software:
Applies and implements best practices for data auditing, scalability, reliability, and application performance.
AWS certification is a plus
Extensive UNIX AIX or Linux and Scripting experience
Extensive SDLC experience with some development or Systems programming experience
Ability to analyze and troubleshoot mid-tier/infrastructure issues.
Very strong verbal and written communication skills (Critical)
Ability to facilitate technical requirements gathering and design sessions
Collaborate and interpret business and technical needs
Excellent attention to detail and quality work products (Critical)
Strong customer service skills with internal and external customers (Critical)
Must be able to perform assigned tasks with minimum supervision (Critical)
Strong analytical and documentation skills
Excellent time management ability. (Critical)
Skills Preferred
Experience with DevOps or IaaS
AIX or Linux
LDAP
EIAM (Identity Access Management)
Ab Initio Admin and Architect
Senior Data Engineer
Data scientist job in McLean, VA
The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing.
Experience with tools such as Jenkins, SonarQube AND/OR Fortify are preferred for this role.
Experience in Angular and DevOps are nice to haves for this role.
Must-Have Qualifications: PySpark/Python-based microservices, AWS EKS, Postgres SQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify.
Responsibilities:
Development of microservices based on Python, PySpark, AWS EKS, AWS Postgres for a data-oriented modernization project.
New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest
Perform System, functional and data analysis on the current system and create technical/functional requirement documents.
Current System: Informatica, SAS, AutoSys, DB2
Write automated tests using Behave/Cucumber, based on the new microservices-based architecture
Promote high code quality and resolve issues related to performance tuning and scalability.
Strong skills in DevOps, Docker/container-based deployments to AWS EKS using Jenkins and experience with SonarQube and Fortify.
Able to communicate and engage with business teams and analyze the current business requirements (BRS documents) and create necessary data mappings.
Preferred strong skills and experience in reporting applications development and data analysis
Knowledge in Agile methodologies and technical documentation.
Azure Data Modeler
Data scientist job in Washington, DC
Azure Data Modeler - Budget Transformation Project
Our client is embarking on a major budget transformation initiative and is seeking an experienced Azure Data Modeler to support data architecture, modeling, and migration activities. This role will play a critical part in designing and optimizing data structures as the organization transitions to SAP. Experience with SAP is preferred, but strong ERP data experience in any platform is also valuable.
Responsibilities
Design, develop, and optimize data models within the Microsoft Azure environment.
Support data architecture needs across the budget transformation program.
Partner with cross-functional stakeholders to enable the transition to SAP (or other ERP systems).
Participate in data migration planning, execution, and validation efforts.
Work collaboratively within SAFe Agile teams and support sprint activities.
Provide off-hours support as needed for critical tasks and migration windows.
Engage onsite in Washington, DC up to three days per week.
Required Qualifications
Strong hands-on expertise in data architecture and data model design.
Proven experience working with Microsoft Azure (core requirement).
Ability to work flexibly, including occasional off-hours support.
Ability to be onsite in Washington, DC as needed (up to 3 days/week).
Preferred Qualifications
Experience with SAP ECC or exposure to SAP implementations.
Experience with other major ERP systems (Oracle, Workday, etc.).
SAFe Agile certification.
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support.
Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Senior Data Modeler
Data scientist job in Baltimore, MD
6 Month Contract (Possible Extension or Contract-to-Hire)
* Must be able to work on a W-2
* Must be able to work EST hours
Job Description
Seeking a seasoned Data Modeler to architect and operationalize scalable, governed data structures that transform raw, disparate data into trusted, analytics-ready assets. This role demands mastery of data modeling principles, normalization, and SQL-based transformations within modern cloud ecosystems (e.g., Snowflake). As a senior individual contributor, you will set modeling standards, influence cross-functional teams, and ensure data integrity, consistency, and performance across the enterprise.
• Ability to collaborate closely with technical and business leaders to understand data requirements and deliver impactful solutions.
• Ability to design data quality, data lineage, and data governance practices across the organization.
• Ability to advocate for advanced data engineering tools, platforms, and technologies.
• Ability to transfer knowledge and experience to junior staff.
Key Responsibilities
Data Architecture & Modeling
• Design and maintain conceptual, logical, and physical data models that enable high-performance analytics and transactional workloads.
• Normalize and optimize complex, raw, and semi-structured data into relational structures aligned with business and analytical requirements.
• Own metadata and lineage: document entity relationships, definitions, and transformation logic to ensure transparency and reusability.
Collaboration & Influence
• Partner strategically with data engineers, analysts, and business stakeholders to translate requirements into scalable models and influence upstream design decisions.
• Serve as SME for data modeling best practices; evangelize standards and mentor peers to elevate modeling maturity across teams.
Implementation & Governance
• Implement models in Snowflake and other cloud platforms, ensuring compliance with performance, security, and governance frameworks.
• Champion automation: leverage dbt, Python, or orchestration tools to streamline model deployment and enforce version control.
• Conduct proactive data profiling and quality checks to validate accuracy and integrity before production rollout (a minimal profiling sketch follows this section).
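For illustration only, a minimal profiling-and-quality-check sketch in Python/pandas; the file and column names are invented for the example:

    import pandas as pd

    # Hypothetical extract awaiting validation before production rollout.
    df = pd.read_csv("customer_extract.csv")

    print(f"{len(df)} rows")
    print(df.isna().mean().sort_values(ascending=False))  # null rate per column

    # Key integrity and range checks on assumed columns.
    assert df["customer_id"].is_unique, "duplicate primary keys found"
    dates = pd.to_datetime(df["signup_date"], errors="coerce")
    assert dates.notna().all(), "unparseable dates found"
    assert (dates <= pd.Timestamp.today()).all(), "future-dated rows found"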
Innovation & Continuous Improvement
• Lead POCs for modern modeling approaches (including semantic layers, ML-ready schemas, and hybrid architectures) and institutionalize successful patterns.
• Continuously refine modeling standards, naming conventions, and governance policies to support evolving business needs.
Mandatory Skills
• 5+ years of experience in data modeling or database design, with proven ability to normalize complex datasets into optimized relational models.
• Advanced SQL skills and strong understanding of ETL/ELT processes.
• Hands-on experience with modern cloud data platforms (Snowflake preferred).
Education
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Desired Skills
• Familiarity with data governance and cataloging tools (e.g., Collibra, Alation).
• Experience with dbt or scripting (Python) for automated transformations and CI/CD workflows.
• Strong communication skills to translate technical concepts into business language and influence stakeholders.
You Will Receive The Following Benefits:
Medical & Dental Insurance
Health Savings Account (HSA)
401(k)
Paid Sick Time Leave
Pre-tax Commuter Benefit
Applicants must be currently authorized to work in the United States on a full-time basis now and in the future.
Motion Recruitment provides IT Staffing Solutions (Contract, Contract-to-Hire, and Direct Hire) in major North American markets. Our unique expertise in today's highest-demand tech skill sets, paired with our deep networks and knowledge of our local technology markets, results in an exemplary track record with candidates and clients.
Data Scientist
Data scientist job in Annapolis, MD
The primary role of the Data Scientist is to drive mission-focused insights from complex datasets. This role involves managing, modeling, and interpreting large-scale government data holdings to support decision-making and operational success. You will combine expertise in mathematics, statistics, computer science, and domain-specific knowledge to deliver actionable conclusions and communicate them effectively to both technical and non-technical audiences.
Position Responsibilities
Build and assess analytic models tailored to mission-specific needs
Communicate principal conclusions clearly, using mathematics, statistics, and computer science methods
Develop reproducible workflows and ensure data integrity
Develop qualitative and quantitative methods for exploring and assessing datasets in varying states of organization and cleanliness.
Organize, clean, and curate large datasets for analysis
Present complex technical findings in a way that is accessible to technical and non-technical stakeholders
Experience
Bachelor's degree in Mathematics, Applied Mathematics, Applied Statistics, Machine Learning, Data Science, Computer Science or related field or equivalent combination of education and experience.
Minimum 3+ years' relevant work experience.
Experience with software development working with Python in a Unix environment.
Experience using the Unix command line.
Practical knowledge in Python Machine Learning and Data Visualization
Practical knowledge of data ETL, including loading data from SQL, CSV, JSON, Excel, etc.; web scraping (Beautiful Soup, Scrapy, etc.); and data wrangling/cleaning (a minimal scraping sketch follows this list)
Proficiency in statistical packages using any of the following: Python, R, STATA, SPSS, etc.
An active TS/SCI with polygraph
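For illustration of the scraping and wrangling work mentioned above, a minimal sketch under invented assumptions: the URL is hypothetical and the page is assumed to contain a single HTML table.

    import requests
    import pandas as pd
    from bs4 import BeautifulSoup

    URL = "https://example.com/data.html"  # hypothetical page with an HTML table

    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    table = soup.find("table")
    if table is None:
        raise ValueError("no table found on page")

    # Read header cells, then one dict per data row.
    headers = [th.get_text(strip=True) for th in table.find_all("th")]
    rows = [
        dict(zip(headers, [td.get_text(strip=True) for td in tr.find_all("td")]))
        for tr in table.find_all("tr")[1:]
    ]

    df = pd.DataFrame(rows).dropna(how="all")  # simple cleaning starting point
    df.to_csv("scraped.csv", index=False)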
Additional Requirements
Experience using the Atlassian Tool Suite.
Experience with development in any of the following: Hadoop, Pig, MapReduce, or HDFS
Working knowledge with other object-oriented programming languages such as Java or C++
Working knowledge of front-end data visualization libraries (e.g., D3.js, Raphael.js)
Physical Demands
Ability to work in an office environment (Constant)
Required to sit and stand for long periods; talk, hear, and use hands and fingers to operate a computer and telephone keyboard (Frequent)
Salary Range: $107,000 to $212,000
Clearance Level
Top Secret with Poly
ITAR Requirement:
This position requires access to information that is subject to compliance with the International Traffic in Arms Regulations (“ITAR”) and/or the Export Administration Regulations (“EAR”). In order to comply with the requirements of the ITAR and/or the EAR, applicants must qualify as a U.S. person under the ITAR and the EAR, or a person to be approved for an export license by the governing agency whose technology comes under its jurisdiction. Please understand that any job offer that requires approval of an export license will be conditional on AeroVironment's determination that it will be able to obtain an export license in a time frame consistent with AeroVironment's business requirements. A “U.S. person” according to the ITAR definition is a U.S. citizen, U.S. lawful permanent resident (green card holder), or protected individual such as a refugee or asylee. See 22 CFR § 120.15. Some positions will require current U.S. Citizenship due to contract requirements.
Benefits: AV offers an excellent benefits package including medical, dental, and vision coverage, 401K with company matching, a 9/80 work schedule, and a paid holiday shutdown. For more information about our company benefit offerings please visit: **********************************
We also encourage you to review our company website at ******************** to learn more about us.
Principals only need apply. NO agencies please.
Who We Are
Based in California, AeroVironment (AVAV) is a global leader in unmanned aircraft systems (UAS) and tactical missile systems. Founded in 1971 by celebrated physicist and engineer, Dr. Paul MacCready, we've been at the leading edge of technical innovation for more than 45 years. Be a part of the team that developed the world's most widely used military drones and created the first submarine-launched reconnaissance drone, and has seven innovative vehicles that are part of the Smithsonian Institution's permanent collection in Washington, DC.
Join us today in developing the next generation of small UAS and tactical missile systems that will deliver more actionable intelligence to our customers so they can proceed with certainty - and succeed.
What We Do
Building on a history of technological innovation, AeroVironment designs, develops, produces, and supports an advanced portfolio of unmanned aircraft systems (UAS) and tactical missile systems. Agencies of the U.S. Department of Defense and allied military services use the company's hand-launched UAS to provide situational awareness to tactical operating units through real-time, airborne reconnaissance, surveillance, and target acquisition.
We are proud to be an EEO/AA Equal Opportunity Employer, including disability/veterans. AeroVironment, Inc. is an Equal Employment Opportunity (EEO) employer and welcomes all qualified applicants. Qualified applicants will receive fair and impartial consideration without regard to race, sex, color, religion, national origin, age, disability, protected veteran status, genetic data, sexual orientation, gender identity or other legally protected status.
ITAR
U.S. Citizenship required
Data Scientist (U.S. Intelligence Community)
Data scientist job in Fort Meade, MD
Job Title: Data Scientist (U.S. Intelligence Community)
Workplace: On-site (hands-on access to high-impact data systems needed)
Clearance Required: TS/SCI with Polygraph
Why Join Elder Research
* Mission-Driven Work: Support the U.S. Intelligence Community by building secure, scalable data systems that enable real-time insights.
* Career Stability: We hire for careers, not contracts; our work is growing.
* People-Focused Culture: We value work-life balance, collaboration, and professional growth.
* Ownership Mindset: Every employee is a shareholder through our Employee Stock Ownership Plan (ESOP).
* Competitive Pay & Benefits:
* Typical salary range: $117K-$196K, depending on experience and contract labor category
* 401(k) with employer contribution
* Health, dental, and vision coverage
* Paid parental leave and flexible PTO
Ready to make an Impact? Message our recruiting team directly at:
Data Scientist (FSP)
Data scientist job in Herndon, VA
Job Description
Red Rock Government Services is a leading software engineering company recognized for its exceptional support to the intelligence community. With a proven track record of delivering innovative and mission-critical solutions, Red Rock specializes in developing secure, scalable, and cutting-edge technologies tailored to meet the complex needs of intelligence operations. The company's expertise in advanced analytics, cloud computing, and artificial intelligence enables it to empower agencies with enhanced decision-making capabilities and operational efficiency. Red Rock's commitment to excellence, agility, and collaboration solidifies its reputation as a trusted partner in safeguarding national security and advancing intelligence objectives.
This position requires a current and active TS/SCI with Full Scope Polygraph security clearance. This position does not have the ability to sponsor candidates for clearance processing.
Red Rock is seeking a Data Scientist to join our team of diverse and qualified professionals. The role focuses on supporting the Sponsor's mission by delivering advanced language training to ensure proficiency across a broad spectrum of languages for the Sponsor's employees.
Responsibilities:
Works closely with the Sponsor to gather requirements and advise on AWS infrastructure design, development, and deployment of Cloud resources.
Designs, tests, and implements log aggregation in support of Cloud and AWS systems.
Designs, tests, and implements search and visualization infrastructure in support of AWS systems.
Works with vendors to develop and deploy Cloud based solutions to the learning environment in AWS.
Acts as a liaison between the Customer and Vendor Contacts to troubleshoot AWS when deploying new resources.
Recommends new technologies for use in the cloud environment (AWS).
Communicates IT requirements between management and technical entities.
Qualifications:
Experience in risk management and ability to identify project risks and facilitate the development and implementation of mitigation strategies
Experience implementing and integrating AWS solutions.
Experience in gathering requirements from vendor contacts and customers.
Experience using Linux in the AWS environment.
Experience with scripting and web programming technologies required to support web-based learning systems, such as PHP, Perl, Java, JScript, or PowerShell.
Experience deploying third party software products.
Experience with software configuration management tools (e.g., SCCM).
Experience working with desktop and network hardware.
Location: Herndon, VA
Pay and Benefits:
Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Paid Leave and Retirement.
Commitment to Diversity:
All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law.
Data Scientist - GEOINT Operations (TS/SCI)
Data scientist job in Springfield, VA
Every hour you save in the collection cycle is an hour the adversary doesn't get to move, hide, or change the picture. That's the difference between having intelligence when it matters and having it after it's too late.
This is your chance to be a part of the National Geospatial-Intelligence Agency Source Office, directly supporting CENTCOM operations. You'll be building AI-powered tools that don't just analyze data, but initiate action. Your code will recommend complex collection strategies to Source Strategy Officers, trigger automated GEOINT collection without waiting for human approval, and get results into analysts' hands fast enough to change the outcome. You'll be integrating AI into real, operational systems (JEMA, GIMS, RemoteView, ArcPro, etc.) and seeing exactly how your automation speeds up the fight for time. You'll watch the before-and-after difference your tools make and know you had a direct hand in changing the intelligence picture. If you've been waiting for the moment your AI actually drives how the U.S. collects intelligence, this is it. Join XTS today!
Requirements
You have a current active Top Secret / SCI clearance.
You have achieved a Master's degree / Bachelor's degree + 2 years / Associate's degree + 3 years / 5+ years of relevant experience
Advanced Python skills as you'll be writing automation and AI tools that have to run in live ops without failing.
Experience with developing AI tools that can integrate with platforms like JEMA and GIMS to recommend mission-ready GEOINT collection strategies.
If you're ready to see your code move faster than the adversary, shape decisions in real-time, and watch the difference you make in the fight for time, please send your resume directly to Lanchi Lai at [email protected]
At XTS, we believe in taking care of our employees as much as we take care of our clients. As a veteran-owned company, we understand the importance of community, service, and fostering a culture where each team member can thrive. Our commitment to employee well-being is reflected in the comprehensive benefits and growth opportunities we offer. We offer tailored health care plans that fit your lifestyle, along with dental and vision coverage, paid time off (PTO), and a 401K with employer matching to secure your financial future. As we push forward in the rapidly evolving field of AI, XTS is committed to providing employees with tools and opportunities to stay ahead. We are proud to offer our GeoAI scholarship to help our employees further develop their skills and expertise in this cutting-edge field. We take pride in delivering elite workforces to the Intelligence Community, making a real-world impact on critical missions. Join us and experience a company that invests in your success and professional growth.
Data Scientist with ML
Data scientist job in Reston, VA
Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support.
Candidates MUST have experience with Python and PyTorch; working knowledge of Flask at minimum, with the ability to pick it up quickly; familiarity with REST APIs at minimum; a statistics background; and a basic understanding of NLP.
Desired skills for a candidate include experience performing R&D with natural language processing, deploying CNNs, LLMs, or foundational models, deploying ML models on multimedia data, experience with Linux system administration (or bash), experience with Android configuration, and experience with embedded systems (Raspberry Pi).
Required Skills and Demonstrated Experience
Demonstrated experience in Python, JavaScript, and R.
Demonstrated experience employing machine learning and deep learning modules such as pandas, scikit-learn, TensorFlow, and PyTorch.
Demonstrated experience with statistical inference, as well as building and understanding predictive models, using machine learning methods.
Demonstrated experience with large-scale text analytics (a minimal text-classification sketch follows this list).
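For illustration only, a minimal text-classification sketch with scikit-learn; the toy corpus and labels are invented and stand in for a large labeled dataset:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy corpus; real text analytics would use a large labeled dataset.
    texts = ["great service", "terrible delay", "on time and clean", "lost my luggage"]
    labels = [1, 0, 1, 0]

    # TF-IDF features feeding a logistic-regression classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["clean and fast", "awful experience"]))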
Desired Skills
Demonstrated hands-on experience performing research or development with natural language processing and working with, deploying, and testing Convolutional Neural Networks (CNN), large-language models (LLMs) or foundational models.
Demonstrated experience developing and deploying testing and verification methodologies to evaluate algorithm performance and identify strategies for improvement or optimization.
Demonstrated experience deploying machine learning models on multimedia data, to include joint text, audio, video, hardware, and peripherals.
Demonstrated experience with Linux System Administration and associated scripting languages (Bash)
Demonstrated experience with Android configuration, software development, and interfacing.
Demonstrated experience in embedded systems (Raspberry Pi)
Develops and conducts independent testing and evaluation methods on research-grade algorithms in applicable fields.
Reports results and provides documentation and guidance on working with the research-grade algorithms.
Evaluates, integrates, and leverages internally hosted data science tools.
Customizes research-grade algorithms to be optimized for memory and computational efficiency through quantization, trimming layers, or custom methods.
Location:
Reston, Virginia
This position is onsite and there is no remote availability.
Clearance:
Active TS/SCI with Full Scope Polygraph
Applicants MUST hold U.S. citizenship for this position in accordance with government contract requirements.
Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take the steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
Senior Data Engineer
Data scientist job in McLean, VA
Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in McLean, VA (Remote). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-84666
Pay Range: $64 - $68/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Demonstrate ability in implementing data warehouse solutions using modern data platforms such as Client, Databricks or Redshift.
Build data integration solutions between transaction systems and analytics platforms.
Expand data integration solutions to ingest data from internal and external sources and to further transform as per the business consumption needs.
Develop tasks for a multitude of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI and reporting.
Fundamental understanding of building data products through data enrichment and ML.
Act as a team player and share knowledge with the existing team members.
Key Requirements and Technology Experience:
Key skills: Python, AWS, Snowflake.
Bachelor's degree in computer science or a related field.
Minimum 5 years of experience in building data driven solutions.
At least 3 years of experience working with AWS services.
Applicants must be authorized to work in the US without requiring employer sponsorship currently or in the future. U.S. FinTech does not offer H-1B sponsorship for this position.
Expertise in real-time data solutions; good-to-have knowledge of stream processing, message-oriented platforms, and ETL/ELT tools.
Strong scripting experience using Python and SQL.
Working knowledge of foundational AWS compute, storage, networking and IAM.
Understanding of Gen AI models, prompt engineering, RAG, fine-tuning, and pre-training will be a plus.
Solid scripting experience in AWS using Lambda functions.
Knowledge of CloudFormation templates preferred.
Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Client.
Experience in building data pipelines with related understanding of data ingestion, transformation of structured, semi-structured and unstructured data across cloud services.
Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions.
Experience with one or more data integration tools such as Attunity (Qlik), AWS Glue ETL, Talend, Kafka, etc.
Strong understanding of data security - authorization, authentication, encryption, and network security.
Hands-on experience in using and extending machine learning frameworks and libraries, e.g., scikit-learn, PyTorch, TensorFlow, XGBoost, etc., preferred.
Experience with AWS SageMaker family of services or similar tools to develop machine learning models preferred.
Strong written and verbal communication skills to facilitate meetings and workshops to collect data, functional and technology requirements, document processes, data flows, gap analysis, and associated data to support data management/governance related efforts.
Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures.
Demonstrated ability to be self-directed with excellent organization, analytical and interpersonal skills, and consistently meet or exceed deadline deliverables.
Strong understanding of the importance and benefits of good data quality, and the ability to champion results across functions.
Our client is a leader in the financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Cloud Data Engineer- Databricks
Data scientist job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt (a minimal DAG sketch follows this list).
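For illustration only, a minimal Airflow DAG sketch wiring a dbt run after an ingestion step (assuming Airflow 2.4+; the DAG ID, task IDs, and commands are placeholders):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Placeholder commands; real tasks would call ingestion jobs and dbt.
        ingest = BashOperator(task_id="ingest", bash_command="echo 'ingest data'")
        transform = BashOperator(task_id="dbt_run", bash_command="dbt run")
        ingest >> transform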
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation and interactive skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Data Scientist
Data scientist job in Arlington, VA
Job Title: Data Scientist
Location: Arlington, VA / Washington, DC (due to customer requirements, you must be located in the greater Washington, DC area)
Workplace: Hybrid
Clearance Required: Must have an IRS Public Trust with a full background investigation
Requisition Type: Pipeline. This is not a current opening but rather a talent pipeline for Data Analysts of all levels with an IRS Public Trust with background investigation who are interested in supporting the Government customer. When new IRS Data Scientist positions become available, this talent community will be the first place our recruiters look to fill the roles. Candidates with profiles in this talent community can also expect to receive regular updates on relevant new job opportunities. Be sure to also apply to any relevant current funded/awarded openings, if available.
Position Overview:
As a Data Scientist, you will work directly with clients, managers, and technical staff to understand business needs, develop technical plans, and deliver data-driven analytical solutions that solve client problems. You will primarily create and deploy predictive models from a wide variety of data sources and types using the latest mathematical and statistical methods and other emerging technologies.
Position Requirements:
Required Clearance: Must have an IRS Public Trust with a Full Background Investigation
Required Education: Bachelor of Science degree in a relevant field (statistics, business, computer science, economics, mathematics, analytics, data science, social sciences, etc.)
Required Skills / Experience:
* Exploring, cleaning, and wrangling data to provide value-added insights and identify business problems suitable for Data Science solutions
* Experience across the spectrum of designing, developing, testing, and implementing quantitative and qualitative Data Science solutions that are modular, maintainable, resilient to industry shifts, and platform-agnostic
* Demonstrated experience using statistical and analytical software (including but not limited to Python, R, and SQL)
* Analyzing events across government, financial industries, law enforcement, and other similar data environments, prioritizing them by compliance and business risk, and displaying the results in evidence-driven monitoring and decision support tools.
* Experience in quantitative statistical approaches to anomaly detection to identify non-compliance risk, fraud, and cyber threats using data discovery, predictive analytics, trend analysis, assessment, and appropriate contemporary and emerging analytical techniques.
* Ability to conduct rigorous quantitative analysis on very large data sets to develop insights and actionable recommendations, drawing on previous experience developing strategies, performing assessments and gap analyses, and making recommendations
* Contribute to meetings and discussions with clients and co-workers to refine understanding of the business problem at hand
* Trying different predictive modeling approaches to identify the best fit for a given combination of business understanding, available data, and project timeline
* Writing modular, understandable, re-usable code within an iterative development process that includes team-based code review, client discussions, and end-user training
* Applying statistical tests for robustness, sensitivity, and significance to test and validate supervised and unsupervised models
* Preparing presentations, writing reports (technical and non-technical), and working to communicate technical results to clients with varying levels of analytic sophistication
* Ability to work autonomously in a collaborative, dynamic, cross-functional environment
* Demonstrated business savvy with solid interpersonal and communication skills (written and verbal)
* Experience with design and delivery, including proficiency in gathering requirements and translating business requirements into technical specifications
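To illustrate the anomaly-detection work named above, here is a minimal sketch (not part of the posting) using scikit-learn's IsolationForest on invented data; the feature names, synthetic values, and contamination rate are assumptions made for the example:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic "transaction" features: amount (dollars) and filing delay (days)
normal = rng.normal(loc=[100.0, 10.0], scale=[20.0, 3.0], size=(500, 2))
outliers = rng.normal(loc=[900.0, 60.0], scale=[50.0, 5.0], size=(10, 2))
X = np.vstack([normal, outliers])

# Isolation forests score points by how easily random splits isolate them
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(X)    # -1 = anomaly, 1 = normal
scores = model.score_samples(X)  # lower scores are more anomalous

flagged = np.flatnonzero(labels == -1)
print(f"Flagged {flagged.size} of {len(X)} records for compliance review")

In practice, records flagged this way would feed the kind of evidence-driven monitoring and decision-support tools the posting describes, with thresholds tuned to compliance and business risk.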
Preferred Skills and Qualifications:
* 1+ years of experience in data science, data analytics, or a related technical field
* Prior computer programming experience, preferably in a language such as Python or R
* Experience with data exploration, data munging, data wrangling, and model development in R or Python
* Experience using version control (e.g., git, svn, Mercurial) and collaborative development workflows
* Basic understanding of relational database structure and SQL
* Humble and willing to learn, teach, and share ideas
* Experience engaging and interacting with clients, stakeholders and subject matter experts (SMEs) to understand, gather and document requirements
* Comfortable learning new things and working outside of your comfort zone
* Technical mindset: you are not afraid of math!
* Must currently possess a Public Trust clearance
* Willingness to travel to and work on-site at client locations, both local and non-local; the number of days at a client site varies depending on project requirements
Desired Skills
* Advanced degree (MS or PhD) in a relevant field (e.g., statistics, computer science, business, mathematics, analytics, data science, engineering, physics, social sciences, management information systems, or decision science)
* Experience with programming techniques (e.g., pair programming, code reviews)
* Experience with containerization and environment management (venv or conda)
* Experience with Natural Language Processing (NLP) and advanced text mining techniques (a minimal sketch follows this list)
* Experience with graph analytics and network analysis
* Experience with one or more technologies such as R Shiny, Databricks, AWS, Azure
* Experience applying robust, established, and emerging quantitative and statistical techniques, with knowledge of the underlying theoretical and architectural frameworks in applied analytics and statistical analysis, including: sampling considerations and survey design (construct validity, measurement bias, internal and external validity), statistical weighting techniques, approaches to outliers and missing data, exploratory data analysis, cross-sectional analysis, and longitudinal forecasting
* Experience implementing data science processes in remote, austere environments, including the use of bash
* Experience with business intelligence and data visualization platforms (Power BI, Tableau, etc.)
* Understanding of the data analytics lifecycle (e.g., CRISP-DM)
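As a minimal sketch of the NLP and text-mining skills listed above (again, not part of the posting), the snippet below builds TF-IDF features and extracts two topics with NMF in scikit-learn; the documents and topic count are invented for illustration:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "taxpayer filed an amended return after an audit notice",
    "refund issued once the amended return was processed",
    "suspicious login attempts flagged on the account portal",
    "portal account locked after repeated failed logins",
]

# TF-IDF turns free text into weighted term vectors
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Non-negative matrix factorization groups co-occurring terms into topics
nmf = NMF(n_components=2, random_state=0)
doc_topics = nmf.fit_transform(X)
terms = vec.get_feature_names_out()

for k, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top_terms)}")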
Why apply to this position at Elder Research?
* Competitive Salary and Benefits
* Important Work / Make a Difference supporting U.S. national security.
* Job Stability: Elder Research is not a typical government contractor; we hire you for a career, not just a contract.
* People-Focused Culture: we prioritize work-life balance and provide a supportive, positive, and collaborative work environment, as well as opportunities for professional growth and advancement.
Data Engineer (Zero Trust)
Data scientist job in Fort Belvoir, VA
Kavaliro is seeking a Zero Trust Security Architect / Data Engineer to support a mission-critical program by integrating secure architecture principles, strengthening data security, and advancing Zero Trust initiatives across the enterprise.
Key Responsibilities
Develop and implement program protection planning, including IT supply chain security, anti-tampering methods, and risk management aligned to DoD Zero Trust Architecture.
Apply secure system design tools, automated analysis methods, and architectural frameworks to build resilient, least-privilege, continuously monitored environments.
Integrate Zero Trust Data Pillar capabilities: data labeling, tagging, classification, encryption at rest and in transit, access policy definition, monitoring, and auditing (a minimal sketch follows this list).
Analyze and interpret data from multiple structured and unstructured sources to support decision-making and identify anomalies or vulnerabilities.
Assess cybersecurity principles, threats, and vulnerabilities impacting enterprise data systems, including risks such as corruption, exfiltration, and denial-of-service.
Support systems engineering activities, ensuring secure integration of technologies and alignment with Zero Trust operational objectives.
Design and maintain secure network architectures that balance security controls, mission requirements, and operational tradeoffs.
Generate queries, algorithms, and reports to evaluate data structures, identify patterns, and improve system integrity and performance.
Ensure compliance with organizational cybersecurity requirements, particularly confidentiality, integrity, availability, authentication, and non-repudiation.
Evaluate impacts of cybersecurity lapses and implement safeguards to protect mission-critical data systems.
Structure, format, and present data effectively across tools, dashboards, and reporting platforms.
Maintain knowledge of enterprise information security architecture and database systems to support secure data flow and system design.
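To make the Data Pillar responsibilities above concrete, here is a minimal sketch (not from the posting) that tags a record with a classification label, encrypts a sensitive field at rest, and gates decryption behind a least-privilege check; the record schema, labels, and role names are invented, and it assumes the Python cryptography package:

from cryptography.fernet import Fernet

# In a real system the key would come from a key-management service
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"id": 17, "ssn": "123-45-6789", "status": "active"}

# Data labeling/tagging: attach a classification tag to the record
record["classification"] = "CUI"

# Encryption at rest: persist only the ciphertext of the sensitive field
record["ssn"] = fernet.encrypt(record["ssn"].encode()).decode()

# Access policy + least privilege (toy): decrypt only for authorized roles
def read_ssn(rec, role):
    if role != "privacy_officer":
        raise PermissionError("least privilege: role not authorized")
    return fernet.decrypt(rec["ssn"].encode()).decode()

print(read_ssn(record, "privacy_officer"))

A production implementation would also layer continuous monitoring and audit logging over every access, in line with the never-trust, always-verify principle the requirements list.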
Requirements
Active TS/SCI security clearance (required).
Deep knowledge of Zero Trust principles (never trust, always verify; explicit authentication; least privilege; continuous monitoring).
Experience with program protection planning, IT supply chain risk management, and anti-tampering techniques.
Strong understanding of cybersecurity principles, CIA triad requirements, and data-focused threats (corruption, exfiltration, denial-of-service).
Proficiency in secure system design, automated systems analysis tools, and systems engineering processes.
Ability to work with structured and unstructured data, including developing queries, algorithms, and analytical reports.
Knowledge of database systems, enterprise information security architecture, and data structuring/presentation techniques.
Understanding of network design processes, security tradeoffs, and enterprise architecture integration.
Strong ability to interpret data from multiple tools to support security decision-making.
Familiarity with impacts of cybersecurity lapses on data systems and operational environments.
Kavaliro is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.