
Data scientist jobs in Tysons Corner, VA

1,265 jobs
  • Data Scientist

    Bestinfo Systems LLC

    Data scientist job in Columbia, MD

    Data Scientist - Transit Data Focus_Columbia, MD (On-site / hybrid)_Contract (6 Months) Data Scientist - Transit Data Focus Employment type: Contract Duration: 6 Months Justification: To manage and analyze customer databases, AVA (automated voice announcement), and schedule data for predictive maintenance and service planning. Experience Level: 3-5 years Job Responsibilities: Collect, process, and analyze transit-related datasets including customer databases, AVA (automated voice announcement) logs, real-time vehicle data, and schedule data. Develop predictive models and data-driven insights to support maintenance forecasting, service planning, and operational optimization. Design and implement data pipelines to integrate, clean, and transform large, heterogeneous transit data sources. Perform statistical analysis and machine learning to identify patterns, trends, and anomalies relevant to transit service performance and reliability. Collaborate with transit planners, maintenance teams, and IT staff to translate data insights into actionable business strategies. Monitor data quality and integrity; implement data validation and cleansing processes. Technical Skills & Qualifications: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Transportation Engineering, or a related quantitative field. 3-5 years of experience working as a data scientist or data analyst, preferably in a transit, transportation, or public sector environment. Strong proficiency in Python or R for data analysis, statistical modeling, and machine learning. Experience with SQL for database querying, manipulation, and data extraction. Familiarity with transit data standards such as GTFS, AVL/CAD, APC (Automated Passenger Counters), and AVA systems. Experience with data visualization tools such as Power BI, or equivalent.
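    The posting above centers on schedule and AVA/AVL data; as a purely illustrative sketch (not part of the posting), the snippet below computes scheduled headways from a standard GTFS feed with pandas. The gtfs/ directory and the file names stop_times.txt and trips.txt follow the GTFS standard; everything else is assumed for the example.

```python
import pandas as pd

# Load the schedule tables from a standard GTFS feed (assumed unzipped in ./gtfs).
stop_times = pd.read_csv("gtfs/stop_times.txt")
trips = pd.read_csv("gtfs/trips.txt")

# Attach route_id to each stop event so headways can be grouped per route and stop.
events = stop_times.merge(trips[["trip_id", "route_id"]], on="trip_id")
events = events.dropna(subset=["arrival_time"])  # non-timepoint stops may lack times

# GTFS times can exceed 24:00:00 for post-midnight service, so parse to seconds manually.
def to_seconds(hms: str) -> int:
    h, m, s = map(int, hms.split(":"))
    return h * 3600 + m * 60 + s

events["arrival_s"] = events["arrival_time"].map(to_seconds)

# Scheduled headway: gap between consecutive arrivals at the same stop on the same route.
events = events.sort_values(["route_id", "stop_id", "arrival_s"])
events["headway_s"] = events.groupby(["route_id", "stop_id"])["arrival_s"].diff()

print(events.groupby("route_id")["headway_s"].median().head())
```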
    $73k-103k yearly est. 2d ago
  • Data Scientist

    Kavaliro (4.2 company rating)

    Data scientist job in McLean, VA

    Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support. The candidate MUST have experience designing and building data infrastructure, developing data pipelines, transforming and preparing data, ensuring data quality and security, and monitoring and optimizing systems. The candidate MUST have extensive experience with Python and AWS. Experience with SQL, multi-data source queries with database technologies (PostgreSQL, MySQL, RDS, etc.), NiFi, Git, Elasticsearch, Kibana, Jupyter Notebooks, NLP, AI, and any data visualization tools (Tableau, Kibana, Qlik, etc.) is desired. Required Skills and Demonstrated Experience: Demonstrated experience with data engineering, to include designing and building data infrastructure, developing data pipelines, transforming/preparing data, ensuring data quality and security, and monitoring/optimizing systems. Demonstrated experience with data management and integration, including designing and operating robust data layers for application development across local and cloud or web data sources. Demonstrated work experience programming with Python. Demonstrated experience building scalable ETL and ELT workflows for reporting and analytics. Demonstrated experience with general Linux computing and advanced bash scripting. Demonstrated experience with SQL. Demonstrated experience constructing complex multi-data-source queries with database technologies such as PostgreSQL, MySQL, Neo4j, or RDS. Demonstrated experience processing data sources containing structured or unstructured data. Demonstrated experience developing data pipelines with NiFi to bring data into a central environment. Demonstrated experience delivering results to stakeholders through written documentation and oral briefings. Demonstrated experience using code repositories such as Git. Demonstrated experience using Elastic and Kibana. Demonstrated experience working with multiple stakeholders. Demonstrated experience documenting such artifacts as code, Python packages, and methodologies. Demonstrated experience using Jupyter Notebooks. Demonstrated experience with machine learning techniques, including natural language processing. Demonstrated experience explaining complex technical issues to more junior data scientists in graphical, verbal, or written formats. Demonstrated experience developing tested, reusable, and reproducible work. Work or educational background in one or more of the following areas: mathematics, statistics, hard sciences (e.g., Physics, Computational Biology, Astronomy, Neuroscience), computer science, data science, or business analytics. Desired Skills and Demonstrated Experience: Demonstrated experience with cloud services, such as AWS, as well as cloud data technologies and architecture. Demonstrated experience using big data processing tools such as Apache Spark or Trino. Demonstrated experience with machine learning algorithms. Demonstrated experience using container frameworks such as Docker or Kubernetes. Demonstrated experience using data visualization tools such as Tableau, Kibana, or Apache Superset. Demonstrated experience creating learning objectives and teaching curricula in technical or scientific fields. Location: McLean, Virginia. This position is onsite and there is no remote availability. Clearance: TS/SCI with Full Scope Polygraph. Applicants MUST hold permanent U.S. citizenship for this position in accordance with government contract requirements.
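    As a hedged illustration of the Python/SQL pipeline work this posting describes (not the program's actual code), the sketch below extracts a table from PostgreSQL with SQLAlchemy and pandas, applies basic quality checks, and loads the result back; the connection string and table names are placeholders.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; real credentials would come from a secrets store.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/warehouse")

# Extract: pull the raw table (name is hypothetical).
raw = pd.read_sql("SELECT * FROM raw_events", engine)

# Transform: basic data-quality steps -- drop exact duplicates and null keys,
# normalize timestamps to UTC.
clean = (
    raw.drop_duplicates()
       .dropna(subset=["event_id"])
       .assign(event_ts=lambda df: pd.to_datetime(df["event_ts"], utc=True))
)

# Load: write the cleaned result to a reporting schema.
clean.to_sql("events_clean", engine, schema="analytics", if_exists="replace", index=False)
print(f"Loaded {len(clean)} rows")
```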
Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take the steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
    $74k-105k yearly est. 1d ago
  • Senior Data Scientist

    Infocepts (3.7 company rating)

    Data scientist job in McLean, VA

    Purpose: As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution. Key Result Areas and Activities: Design, build, and deploy AI/ML solutions using modern cloud and data platforms. Lead data science projects across industries, ensuring alignment with business goals. Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases. Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark. Communicate findings effectively through data visualizations and stakeholder presentations. Drive adoption of MLOps practices for scalable and reliable model deployment. Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing. Roles & Responsibilities Essential Skills 5+ years of experience in applied AI and machine/deep learning. Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib. Proficiency in Python and SQL for model development and data manipulation. Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI. Familiarity with cloud platforms: AWS, Azure, and GCP. Solid foundation in advanced statistical methods and probabilistic analysis. Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management. Experience with visualization tools like Tableau, Power BI, or similar. Familiarity with ML workflow orchestration and MLOps practices. Desirable Skills Experience with LLMs (Large Language Models) and agentic AI workflows. Familiarity with modern data platforms like Snowflake. Exposure to real-time data processing in cloud-native environments. Contributions to open-source AI projects or publications in data science communities. Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field. Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus. Qualities: Able to consult, write, and present persuasively Able to work in a self-organized and cross-functional team Able to iterate based on new information, peer reviews, and feedback Able to work seamlessly with clients across multiple geographies Research focused mindset Excellent analytical, presentation, reporting, documentation and interactive skills "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
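    For readers unfamiliar with the MLflow workflow named in the essential skills, here is a minimal sketch of experiment tracking around a scikit-learn model; the experiment name, data, and hyperparameters are invented for illustration, and the exact model-logging argument names vary slightly across MLflow versions.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data so the sketch runs end to end.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-churn-model")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the fitted model so runs are comparable later.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```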
    $78k-108k yearly est. 1d ago
  • Senior Data Scientist

    Zillion Technologies, Inc. (3.9 company rating)

    Data scientist job in McLean, VA

    Locals Only # In-Person Interview. Job Title: Data Scientist Specialist. We are seeking a highly experienced Principal Gen AI Scientist with a strong focus on Generative AI (GenAI) to lead the design and development of cutting-edge AI Agents, Agentic Workflows, and GenAI applications that solve complex business problems. This role requires advanced proficiency in prompt engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, GenAI patterns, evaluation frameworks, guardrails, data curation, and AWS cloud deployments. You will serve as a hands-on GenAI (data) scientist and critical thought leader, working alongside full-stack developers, UX designers, product managers, and data engineers to shape and implement enterprise-grade GenAI solutions. Responsibilities: Architect and implement scalable AI Agents, Agentic Workflows, and GenAI applications to address diverse and complex business use cases. Develop, fine-tune, and optimize lightweight LLMs; lead the evaluation and adaptation of models such as Claude (Anthropic), Azure OpenAI, and open-source alternatives. Design and deploy Retrieval-Augmented Generation (RAG) and Graph RAG systems using vector databases and knowledge bases. Curate enterprise data using connectors integrated with AWS Bedrock's Knowledge Base/Elastic. Implement solutions leveraging MCP (Model Context Protocol) and A2A (Agent-to-Agent) communication. Build and maintain Jupyter-based notebooks using platforms like AWS SageMaker and MLflow/Kubeflow on Kubernetes (EKS). Collaborate with cross-functional teams of UI and microservice engineers, designers, and data engineers to build full-stack GenAI experiences. Integrate GenAI solutions with enterprise platforms via API-based methods and standardized GenAI patterns. Establish and enforce validation procedures with evaluation frameworks, bias mitigation, safety protocols, and guardrails for production-ready deployment. Design and build robust ingestion pipelines that extract, chunk, enrich, and anonymize data from PDF, video, and audio sources for use in LLM-powered workflows, leveraging best practices like semantic chunking and privacy controls. Orchestrate multimodal pipelines using scalable frameworks (e.g., Apache Spark, PySpark) for automated ETL/ELT workflows appropriate for unstructured media. Implement embedding pipelines: map media content to vector representations using embedding models, and integrate with vector stores (AWS Knowledge Base/Elastic/MongoDB Atlas) to support RAG architectures. Qualifications: Experience in AI/ML, with applied GenAI or LLM-based solutions. Deep expertise in prompt engineering, fine-tuning, RAG, Graph RAG, vector databases (e.g., AWS Knowledge Base / Elastic), and multi-modal models. Proven experience with cloud-native AI development (AWS SageMaker, Amazon Bedrock, MLflow on EKS). Strong programming skills in Python and ML libraries (Transformers, LangChain, etc.). Deep understanding of GenAI system patterns, architectural best practices, and evaluation frameworks. Demonstrated ability to work in cross-functional agile teams.
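    To make the RAG pattern described above concrete, here is a deliberately framework-agnostic sketch: chunk documents, embed them, and retrieve the most similar chunks for a query. The embed function is a stand-in for whatever embedding model (Bedrock, OpenAI, sentence-transformers) a real system would call, and the chunking is naive rather than semantic.

```python
import numpy as np

def embed(texts):
    """Stand-in embedder: a real system would call an embedding model API.
    Here we just hash characters into a fixed-size vector so the sketch runs."""
    vecs = np.zeros((len(texts), 64))
    for i, t in enumerate(texts):
        for j, ch in enumerate(t.encode("utf-8")):
            vecs[i, (j + ch) % 64] += 1.0
    # L2-normalize so dot products behave like cosine similarity.
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

def chunk(text, size=200):
    """Naive fixed-size chunking; production systems would use semantic chunking."""
    return [text[i:i + size] for i in range(0, len(text), size)]

documents = ["...long source document text...", "...another document..."]
chunks = [c for d in documents for c in chunk(d)]
index = embed(chunks)                      # stands in for a vector store

def retrieve(query, k=3):
    scores = index @ embed([query])[0]     # cosine similarity against every chunk
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

# The retrieved context would then be placed into the LLM prompt.
context = "\n".join(retrieve("What does the policy say about refunds?"))
print(context)
```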
    $77k-110k yearly est. 2d ago
  • Data Engineer / Big data Engineer

    Pyramid Consulting, Inc. (4.1 company rating)

    Data scientist job in McLean, VA

    Immediate need for a talented Data Engineer / Big Data Engineer. This is a 12-month contract opportunity with long-term potential and is located in McLean, VA (Hybrid). Please review the job description below and contact me ASAP if you are interested. Job ID: 25-93504 Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location). Key Responsibilities: Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services. Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets. Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning. Develop backend and automation tools using Golang and/or Python as needed. Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch. Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges. Perform root-cause analysis and implement automation to prevent recurring issues. Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access. Ensure compliance with enterprise governance, data quality, and cloud security standards. Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality. Key Requirements and Technology Experience: Proficiency in Python with experience building scalable data pipelines or ETL processes. Strong hands-on experience with Spark/PySpark for distributed data processing. Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning. Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM). Experience with Golang for scripting, backend services, or performance-critical processes. Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems. Familiarity with CI/CD workflows, Git, and automated testing. Our client is a leader in the banking and financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration. Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
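    One of the recurring requirements above is window-function work over large datasets; the sketch below shows the common keep-the-latest-row-per-key pattern in PySpark. The column names and sample rows are hypothetical; the same logic maps to ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...) in Snowflake SQL.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedupe-latest").getOrCreate()

# Hypothetical input: one row per account update; keep only the latest row per account.
df = spark.createDataFrame(
    [("a1", "2024-01-01", 100.0), ("a1", "2024-02-01", 120.0), ("a2", "2024-01-15", 80.0)],
    ["account_id", "updated_at", "balance"],
)

# Rank rows within each account by recency, then keep rank 1.
w = Window.partitionBy("account_id").orderBy(F.col("updated_at").desc())
latest = (
    df.withColumn("rn", F.row_number().over(w))
      .filter(F.col("rn") == 1)
      .drop("rn")
)
latest.show()
```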
    $70-75 hourly 1d ago
  • Data Engineer

    The Ash Group

    Data scientist job in Falls Church, VA

    *** W2 Contract Only - No C2C - No 3rd Parties *** The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA. In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs. Compensation, Benefits, and Role Info Competitive pay rate of $65 per hour. Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting. Type: 12-month contract with potential extension or conversion. Location: On-site in Falls Church, VA. What You'll Be Doing Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources. Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance. Write and optimize complex, highly-performant SQL queries across large datasets (Redshift, Oracle, SQL Server). Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions. Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks. What We're Looking For 8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems. 5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark. Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift. Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance. Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran. If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure. #DataEngineer #DataEngineering #AWSEngineer #Redshift #ETL #PySpark #DataPipeline #Westminster #ColoradoJobs #Contract
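    As a rough sketch of the AWS Glue plus PySpark pipeline work this role describes (not the client's actual job script), the skeleton below reads from the Glue Data Catalog, applies a simple transformation, and writes to Redshift; the database, table, connection, and bucket names are all placeholders.

```python
import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog; database and table names are hypothetical.
source = glue_context.create_dynamic_frame.from_catalog(
    database="program_data", table_name="raw_telemetry"
)

# Drop rows with missing keys and stamp the load time using plain Spark.
df = (
    source.toDF()
          .dropna(subset=["record_id"])
          .withColumn("ingested_at", F.current_timestamp())
)

# Write the cleaned frame to Redshift; connection and table names are placeholders.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(df, glue_context, "cleaned"),
    catalog_connection="redshift-connection",
    connection_options={"dbtable": "analytics.telemetry_clean", "database": "dw"},
    redshift_tmp_dir="s3://example-bucket/temp/",
)
job.commit()
```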
    $65 hourly 1d ago
  • Azure Data Modeler

    Dexian

    Data scientist job in Washington, DC

    Azure Data Modeler - Budget Transformation Project Our client is embarking on a major budget transformation initiative and is seeking an experienced Azure Data Modeler to support data architecture, modeling, and migration activities. This role will play a critical part in designing and optimizing data structures as the organization transitions to SAP. Experience with SAP is preferred, but strong ERP data experience in any platform is also valuable. Responsibilities Design, develop, and optimize data models within the Microsoft Azure environment. Support data architecture needs across the budget transformation program. Partner with cross-functional stakeholders to enable the transition to SAP (or other ERP systems). Participate in data migration planning, execution, and validation efforts. Work collaboratively within SAFe Agile teams and support sprint activities. Provide off-hours support as needed for critical tasks and migration windows. Engage onsite in Washington, DC up to three days per week. Required Qualifications Strong hands-on expertise in data architecture and data model design. Proven experience working with Microsoft Azure (core requirement). Ability to work flexibly, including occasional off-hours support. Ability to be onsite in Washington, DC as needed (up to 3 days/week). Preferred Qualifications Experience with SAP ECC or exposure to SAP implementations. Experience with other major ERP systems (Oracle, Workday, etc.). SAFe Agile certification. Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support. Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ******************** Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
    $81k-111k yearly est. 3d ago
  • Data Scientist

    Elder Research (3.9 company rating)

    Data scientist job in Arlington, VA

    Job Title: Data Scientist LOCATION: ARLINGTON VA / WASHINGTON DC (DUE TO CUSTOMER REQUIREMENTS YOU MUST BE LOCATED IN THE GREATER WASHINGTON DC AREA) Workplace: Hybrid Clearance Required: Must have an IRS Public Trust w/ a Full Background Investigation Requisition Type: Pipeline. This is not a current opening but rather a talent pipeline for Data Analysts of all levels with an IRS Public Trust w/ background investigation interested in supporting the Government customer. When new IRS Data Scientist positions become available, this talent community will be the first place our recruiters look to fill the roles. Candidates with profiles in this talent community can also expect to receive regular updates on relevant new job opportunities. Be sure to also apply to any relevant current funded/awarded openings, if available. Position Overview: As a Data Scientist, you will work directly with clients, managers, and technical staff to understand business needs, develop technical plans, and deliver data-driven analytical solutions that solve client problems. You will primarily create and deploy predictive models from a wide variety of data sources and types using the latest mathematical and statistical methods and other emerging technologies. Position Requirements: Required Clearance: Must have an IRS Public Trust w/ a Full Background Investigation. Required Education: Bachelor of Science degree in a relevant field (statistics, business, computer science, economics, mathematics, analytics, data science, social sciences, etc.). Required Skills / Experience: Exploring, cleaning, and wrangling data to provide value-added insights and identify business problems suitable for Data Science solutions. Experience across the spectrum of designing, developing, testing, and implementing quantitative and qualitative Data Science solutions that are modular, maintainable, resilient to industry shifts, and platform-agnostic. Demonstrated experience using statistical and analytical software (including but not limited to Python, R, and SQL). Analyzing events across government, financial industries, law enforcement, and other similar data environments, prioritizing them by compliance and business risk, and displaying the results in evidence-driven monitoring and decision support tools. Experience in quantitative statistical approaches to anomaly detection to identify non-compliance risk, fraud, and cyber threats using data discovery, predictive analytics, trend analysis, assessment, and appropriate contemporary and emerging analytical techniques.
Ability to conduct rigorous quantitative data analysis on very large data sets to develop insights and actionable recommendations, drawing on previous experience developing strategies, performing assessments and gap analyses, and making actionable recommendations. Contribute to meetings and discussions with clients and co-workers to refine understanding of the business problem at hand. Try different predictive modeling approaches to identify the best fit for a given set of business understanding, available data, and project timeline. Write modular, understandable, reusable code within an iterative development process that includes team-based code review, client discussions, and end-user training. Apply statistical tests for robustness, sensitivity, and significance to test and validate supervised and unsupervised models. Prepare presentations, write reports (technical and non-technical), and work to communicate technical results to clients with varying levels of analytic sophistication. Ability to work autonomously in a collaborative, dynamic, cross-functional environment. Demonstrated business savvy with solid interpersonal and communication skills (written and verbal). Experience with design and delivery, with proficiency in gathering requirements and translating business requirements into technical specifications. Preferred Skills and Qualifications: Bachelor of Science degree in a relevant field (statistics, business, computer science, economics, mathematics, analytics, data science, social sciences, etc.). 1+ years of experience in data science, data analytics, or a related technical field. Prior computer programming experience, preferably in a language such as Python or R. Experience with data exploration, data munging, data wrangling, and model development in R or Python. Experience using version control (e.g., git, svn, Mercurial) and collaborative development tools. Basic understanding of relational database structure and SQL. Humble and willing to learn, teach, and share ideas. Experience engaging and interacting with clients, stakeholders, and subject matter experts (SMEs) to understand, gather, and document requirements. Comfortable learning new things and working outside of your comfort zone. Technical mindset: you are not afraid of math! Must currently possess a Public Trust clearance. Travel to and work on-site at clients, both local and non-local; the number of days at a client site varies depending on project requirements. Desired Skills: Advanced degree (MS or PhD) in a relevant field (e.g., statistics, computer science, business, mathematics, analytics, data science, engineering, physics, social sciences, management information systems, or decision science). Programming techniques (e.g., pair programming, code reviews). Experience with containerization and environment management (venv or conda). Experience with Natural Language Processing (NLP) and advanced text mining techniques. Experience with graph analytics and network analysis. Experience with one or more technologies such as R Shiny, Databricks, AWS, or Azure. Experience applying robust, established, and emerging quantitative and statistical techniques, with knowledge of the underlying theoretical and architectural frameworks in applied analytics and statistical analysis, to include: sampling considerations and survey design (construct validity, measurement bias, internal and external validity), statistical weighting techniques, approaches to outliers and missing data, exploratory data analysis, cross-sectional analysis, and longitudinal forecasting. Experience implementing data science processes in a remote, austere environment, to include using bash. Experience with business intelligence and data visualization platforms (Power BI, Tableau, etc.). Understanding of the data analytics lifecycle (e.g., CRISP-DM). Why apply to this position at Elder Research? Competitive salary and benefits. Important work: make a difference supporting U.S. national security. Job stability: Elder Research is not a typical government contractor; we hire you for a career, not just a contract. People-focused culture: we prioritize work-life balance and provide a supportive, positive, and collaborative work environment, as well as opportunities for professional growth and advancement. About Elder Research, Inc.: People Centered. Data Driven. Elder Research is a fast-growing consulting firm specializing in predictive analytics. Having been in the data mining business for almost 30 years, we pride ourselves on our ability to find creative, cutting-edge solutions to real-world problems. We work hard to provide the best value to our clients and allow each person to contribute their ideas and put their skills to use immediately. Our team members are passionate, curious, life-long learners. We value humility, servant-leadership, teamwork, and integrity. We seek to serve our clients and our teammates to the best of our abilities. In keeping with our entrepreneurial spirit, we want candidates who are self-motivated with an innate curiosity and strong teamwork. Elder Research believes in continuous learning and community: each week the entire company attends a Tech Talk and each office location provides lunch. Elder Research provides a supportive work environment with established parental, bereavement, and PTO policies. By prioritizing a healthy work-life balance, with reasonable hours, solid pay, low travel, and extremely flexible time off, Elder Research enables and encourages its employees to serve others and enjoy their lives. Elder Research, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status, and will not be discriminated against on the basis of disability. Elder Research is a Government contractor and many of our positions require US citizenship.
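    The requirements above repeatedly mention quantitative anomaly detection for non-compliance and fraud risk. As one common, hedged example of that kind of technique (scikit-learn's IsolationForest, chosen here for illustration rather than taken from the posting), the sketch below flags unusual records in synthetic transaction-style data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transaction-like data standing in for the compliance datasets described above.
df = pd.DataFrame({
    "amount": rng.lognormal(mean=4.0, sigma=1.0, size=5000),
    "filings_per_year": rng.poisson(4, size=5000),
    "days_late": rng.integers(0, 60, size=5000),
})

# Fit an unsupervised anomaly detector; contamination is the assumed share of outliers.
model = IsolationForest(contamination=0.01, random_state=0)
df["anomaly_flag"] = model.fit_predict(df[["amount", "filings_per_year", "days_late"]])

# -1 marks records flagged for analyst review; these would feed the monitoring tools.
flagged = df[df["anomaly_flag"] == -1]
print(f"{len(flagged)} records flagged for review out of {len(df)}")
```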
    $76k-111k yearly est. 6d ago
  • Data Scientist (FSP)

    Red Rock Government Services (3.7 company rating)

    Data scientist job in Herndon, VA

    Red Rock Government Services is a leading software engineering company recognized for its exceptional support to the intelligence community. With a proven track record of delivering innovative and mission-critical solutions, Red Rock specializes in developing secure, scalable, and cutting-edge technologies tailored to meet the complex needs of intelligence operations. The company's expertise in advanced analytics, cloud computing, and artificial intelligence enables it to empower agencies with enhanced decision-making capabilities and operational efficiency. Red Rock's commitment to excellence, agility, and collaboration solidifies its reputation as a trusted partner in safeguarding national security and advancing intelligence objectives. This position requires a current and active TS/SCI with Full Scope Polygraph security clearance. This position does not have the ability to sponsor candidates for clearance processing. RedRock is seeking a Data Scientist to join our team of diverse and qualified professionals. The role focuses on supporting the Sponsor's mission by delivering advanced language training to ensure proficiency across a broad spectrum of languages for the Sponsor's employees. Responsibilities: Works closely with the Sponsor to gather requirements and advise on AWS infrastructure design, development, and deployment of Cloud resources. Designs, tests, and implements log aggregation in support of Cloud and AWS systems. Designs, tests, and implements search and visualization infrastructure in support of AWS systems. Works with vendors to develop and deploy Cloud based solutions to the learning environment in AWS. Acts as a liaison between the Customer and Vendor Contacts to troubleshoot AWS when deploying new resources. Recommends new technologies for use in the cloud environment (AWS). Communicates IT requirements between management and technical entities. Qualifications: Experience in risk management and ability to identify project risks and facilitate the development and implementation of mitigation strategies Experience implementing and integrating AWS solutions. Experience in gathering requirements from vendor contacts and customers. Experience using Linux in the AWS environment. Experience with scripting and web programming technologies required to support web-based learning systems, such as PHP, PERL, Java, Jscript, or PowerShell. Experience deploying third party software products. Experience with Software Configuration Management (SCCM). Experience working with desktop and network hardware. Location: Herndon, VA Pay and Benefits: Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Paid Leave and Retirement. Commitment to Diversity: All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law.
    $78k-112k yearly est. 60d+ ago
  • Data Scientist - GEOINT Operations (TS/SCI)

    Xcellent Technology Solutions (3.6 company rating)

    Data scientist job in Springfield, VA

    Every hour you save in the collection cycle is an hour the adversary doesn't get to move, hide, or change the picture. That's the difference between having intelligence when it matters and having it after it's too late. This is your chance to be a part of the National Geospatial-Intelligence Agency Source Office, directly supporting CENTCOM operations. You'll be building AI-powered tools that don't just analyze data, but initiate action. Your code will recommend complex collection strategies to Source Strategy Officers, trigger automated GEOINT collection without waiting for human approval, and get results into analysts' hands fast enough to change the outcome. You'll be integrating AI into real, operational systems (JEMA, GIMS, RemoteView, ArcPro, etc.) and seeing exactly how your automation speeds up the fight for time. You'll watch the before-and-after difference your tools make and know you had a direct hand in changing the intelligence picture. If you've been waiting for the moment your AI actually drives how the U.S. collects intelligence, this is it. Join XTS today! Requirements: You have a current, active Top Secret / SCI clearance. You have a Master's degree, a Bachelor's degree plus 2 years, an Associate's degree plus 3 years, or 5+ years of relevant experience. Advanced Python skills, as you'll be writing automation and AI tools that have to run in live ops without failing. Experience developing AI tools that can integrate with platforms like JEMA and GIMS to recommend mission-ready GEOINT collection strategies. If you're ready to see your code move faster than the adversary, shape decisions in real time, and watch the difference you make in the fight for time, please send your resume directly to Lanchi Lai at [email protected] At XTS, we believe in taking care of our employees as much as we take care of our clients. As a veteran-owned company, we understand the importance of community, service, and fostering a culture where each team member can thrive. Our commitment to employee well-being is reflected in the comprehensive benefits and growth opportunities we offer. We offer tailored health care plans that fit your lifestyle, along with dental and vision coverage, paid time off (PTO), and a 401K with employer matching to secure your financial future. As we push forward in the rapidly evolving field of AI, XTS is committed to providing employees with tools and opportunities to stay ahead. We are proud to offer our GeoAI scholarship to help our employees further develop their skills and expertise in this cutting-edge field. We take pride in delivering elite workforces to the Intelligence Community, making a real-world impact on critical missions. Join us and experience a company that invests in your success and professional growth.
    $70k-100k yearly est. 60d+ ago
  • Data Scientist

    Square Peg Technologies (4.2 company rating)

    Data scientist job in Washington, DC

    Clearance Level Required: Requires a Top Secret (TS/SCI) Clearance with a Polygraph. The Mission: Square Peg Technologies is looking for data scientists who enjoy leveraging large sets of data to tell stories. In this role, you will apply modern machine and deep learning approaches to import, clean, transform, and model data that supports our nation's core intelligence decision-makers. You will be responsible for designing or presenting conclusions gained from analyzing data using statistical tools like Tableau, Python, R, and others. Basic Qualifications: Work as the lead data strategist, identifying and integrating new datasets that can be leveraged through our product capabilities, and work closely with the engineering team to strategize and execute the development of data products. Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries. Research and devise innovative statistical models for data analysis. Communicate findings to all stakeholders. Enable smarter business processes and implement analytics for meaningful insights. Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables. Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy. Analyze data for trends and patterns, and interpret data with a clear objective in mind. Implement analytical models into production by collaborating with software developers and machine learning engineers. Preferred Qualifications: Bachelor's degree in statistics, applied mathematics, or a related discipline. 2+ years of experience in data science. Proficiency in data mining, mathematics, and statistical analysis. Advanced pattern recognition and predictive modeling experience. Experience with Excel, PowerPoint, Tableau, SQL, and programming languages (e.g., Java/Python, SAS). Master's degree in statistics, applied math, or a related discipline. Square Peg Technologies Company Information: Square Peg Technologies is a boutique technology consulting firm that specializes in data solutions using the latest technologies in analytics, data science, data engineering, and data management. Our mission is to develop and integrate cutting-edge technology solutions that inspire our customers to push the fields of science and technology forward. Here at Square Peg, you will not get lost in a crowd. We value each person's contribution, celebrate wins together, and invest in the personal and professional development of our employees. Along with a strong positive culture that allows employees to thrive and grow, we also offer top-notch benefits including a large amount of vacation, annual bonuses, 401k matching, paid training, and 100% health care coverage. At Square Peg Tech, we aim to be at the forefront of technology and innovation. Our ideal candidate must be forward-leaning and eager to engage with customers, academia, and industry leaders on the future applications of AI/ML, as well as best practices in advanced analytics. With a firm footprint in the Intelligence Community, Square Peg Tech is looking to expand its capabilities into the Department of Defense, NASA, and other prominent government agencies.
    $103k-134k yearly est. 60d+ ago
  • Data Scientist - Expert

    Janus Soft (4.8 company rating)

    Data scientist job in Chantilly, VA

    REQUIRED SKILLS • Demonstrated experience with the ability to understand and automate customer workflows and business processes with AI solutions • Demonstrated experience in data structures, data modeling, and software architecture • Demonstrated in-depth experience and knowledge of machine learning tools and frameworks such as PyTorch, TensorFlow, etc. • Demonstrated experience with extensive math and computer science skills, with a deep understanding of probability, statistics, and algorithms • Demonstrated experience with AI/ML-specific programming languages and tooling, such as Python, Java, C++, Conda, etc. • Demonstrated experience training AI models, as well as integrating pre-trained models into dataflows and software architectures. DESIRED SKILLS • Demonstrated experience implementing secure AI solutions in the Intelligence Community • Demonstrated experience working with highly sensitive data sets • Demonstrated experience in leading AI/ML initiatives • Demonstrated experience and in-depth knowledge of implementing ETL (Extract/Transform/Load) pipelines • Demonstrated experience and in-depth knowledge in implementing solutions in Amazon Web Services • Demonstrated experience with very strong problem-solving and critical thinking skills • Demonstrated experience performing in a collaborative team environment.
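    One requirement above is integrating pre-trained models into dataflows. As a hedged, generic illustration (using the Hugging Face transformers pipeline API rather than any system named in the posting), the sketch below wraps a pre-trained text classifier as a batch annotation step.

```python
from transformers import pipeline

# Load a pre-trained text classifier (default checkpoint chosen by the library);
# a real dataflow would pin a vetted, locally mirrored model instead.
classifier = pipeline("text-classification")

def score_batch(records):
    """Integrate the model into a simple dataflow step: annotate each record."""
    texts = [r["text"] for r in records]
    for record, result in zip(records, classifier(texts)):
        record["label"] = result["label"]
        record["confidence"] = result["score"]
    return records

# Hypothetical records standing in for whatever upstream pipeline produces.
batch = [{"id": 1, "text": "Shipment arrived ahead of schedule."},
         {"id": 2, "text": "The sensor feed dropped out again."}]
print(score_batch(batch))
```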
    $87k-118k yearly est. 60d+ ago
  • Data Scientist with ML

    Kavaliro (4.2 company rating)

    Data scientist job in Reston, VA

    Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support. The candidate MUST have experience with Python, PyTorch, Flask (knowledge at minimum, with the ability to pick it up quickly), familiarity with REST APIs (at minimum), a statistics background, and a basic understanding of NLP. Desired skills for a candidate include experience performing R&D with natural language processing, deploying CNNs and LLMs or foundational models, deploying ML models on multimedia data, experience with Linux system administration (or bash), experience with Android configuration, and experience with embedded systems (Raspberry Pi). Required Skills and Demonstrated Experience: Demonstrated experience in Python, JavaScript, and R. Demonstrated experience employing machine learning and deep learning modules such as pandas, scikit-learn, TensorFlow, and PyTorch. Demonstrated experience with statistical inference, as well as building and understanding predictive models, using machine learning methods. Demonstrated experience with large-scale text analytics. Desired Skills: Demonstrated hands-on experience performing research or development with natural language processing and working with, deploying, and testing Convolutional Neural Networks (CNNs), large language models (LLMs), or foundational models. Demonstrated experience developing and deploying testing and verification methodologies to evaluate algorithm performance and identify strategies for improvement or optimization. Demonstrated experience deploying machine learning models on multimedia data, to include joint text, audio, video, hardware, and peripherals. Demonstrated experience with Linux system administration and associated scripting languages (bash). Demonstrated experience with Android configuration, software development, and interfacing. Demonstrated experience with embedded systems (Raspberry Pi). Develops and conducts independent testing and evaluation methods on research-grade algorithms in applicable fields. Reports results and provides documentation and guidance on working with the research-grade algorithms. Evaluates, integrates, and leverages internally hosted data science tools. Customizes research-grade algorithms to be optimized for memory and computational efficiency through quantizing, trimming layers, or custom methods. Location: Reston, Virginia. This position is onsite and there is no remote availability. Clearance: Active TS/SCI with Full Scope Polygraph. Applicants MUST hold permanent U.S. citizenship for this position in accordance with government contract requirements. Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take steps to ensure that people with disabilities are provided reasonable accommodations. Accordingly, if a reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
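    Since the posting asks for Flask plus PyTorch with REST API familiarity, here is a minimal model-serving sketch; the tiny untrained network, route name, and payload shape are invented for illustration, and a real service would load vetted saved weights instead.

```python
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in model: a tiny untrained network so the sketch is self-contained;
# a real service would restore saved weights with torch.load / load_state_dict.
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"features": [0.1, 0.2, 0.3, 0.4]}.
    features = request.get_json()["features"]
    with torch.no_grad():
        logits = model(torch.tensor([features], dtype=torch.float32))
        probs = torch.softmax(logits, dim=1).squeeze().tolist()
    return jsonify({"probabilities": probs})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```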
    $74k-105k yearly est. 3d ago
  • Senior Data Engineer

    Pyramid Consulting, Inc. (4.1 company rating)

    Data scientist job in McLean, VA

    Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in McLean, VA (Remote). Please review the job description below and contact me ASAP if you are interested. Job ID: 25-84666 Pay Range: $64 - $68/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location). Key Responsibilities: Demonstrate ability in implementing data warehouse solutions using modern data platforms such as Client, Databricks, or Redshift. Build data integration solutions between transaction systems and analytics platforms. Expand data integration solutions to ingest data from internal and external sources and to further transform it as per the business consumption needs. Develop tasks for a multitude of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI, and reporting. Fundamental understanding of building data products through data enrichment and ML. Act as a team player and share knowledge with the existing team members. Key Requirements and Technology Experience: Key skills: Python, AWS, Snowflake. Bachelor's degree in computer science or a related field. Minimum 5 years of experience in building data-driven solutions. At least 3 years of experience working with AWS services. Applicants must be authorized to work in the US without requiring employer sponsorship currently or in the future. U.S. FinTech does not offer H-1B sponsorship for this position. Expertise in real-time data solutions; good-to-have knowledge of stream processing, message-oriented platforms, and ETL/ELT tools. Strong scripting experience using Python and SQL. Working knowledge of foundational AWS compute, storage, networking, and IAM. Understanding of Gen AI models, prompt engineering, RAG, fine tuning, and pre-tuning will be a plus. Solid scripting experience in AWS using Lambda functions. Knowledge of CloudFormation templates preferred. Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Client. Experience in building data pipelines, with a related understanding of data ingestion and transformation of structured, semi-structured, and unstructured data across cloud services. Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions. Experience with one or more data integration tools such as Attunity (Qlik), AWS Glue ETL, Talend, Kafka, etc. Strong understanding of data security - authorization, authentication, encryption, and network security. Hands-on experience in using and extending machine learning frameworks and libraries, e.g., scikit-learn, PyTorch, TensorFlow, XGBoost, etc., preferred. Experience with the AWS SageMaker family of services or similar tools to develop machine learning models preferred. Strong written and verbal communication skills to facilitate meetings and workshops to collect data, functional, and technology requirements, document processes, data flows, gap analysis, and associated data to support data management/governance related efforts. Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures. Demonstrated ability to be self-directed with excellent organization, analytical, and interpersonal skills, and consistently meet or exceed deadline deliverables.
Strong understanding of the importance and benefits of good data quality, and the ability to champion results across functions. Our client is a leader in the financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration. Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
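    As a hedged illustration of the AWS Lambda and S3 scripting this posting calls for (not the client's actual pipeline), the sketch below shows an S3-triggered handler that reads a raw JSON object, drops rows missing a business key, and writes the cleaned file to a processed/ prefix; the bucket layout and field names are assumptions.

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Minimal S3-triggered Lambda: read a raw JSON object, filter it, and
    write the cleaned version to a 'processed/' prefix in the same bucket."""
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    rows = json.loads(body)

    # Drop rows missing the business key before they reach the warehouse loader.
    clean = [r for r in rows if r.get("transaction_id")]

    out_key = key.replace("raw/", "processed/", 1)
    s3.put_object(Bucket=bucket, Key=out_key, Body=json.dumps(clean).encode("utf-8"))
    return {"rows_in": len(rows), "rows_out": len(clean)}
```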
    $64-68 hourly 1d ago
  • Cloud Data Engineer- Databricks

    Infocepts (3.7 company rating)

    Data scientist job in McLean, VA

    Purpose: We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions. Key Result Areas and Activities: Design and implement robust, scalable data engineering solutions. Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI. Collaborate with analytics and AI teams to enable real-time and batch data workflows. Support and improve cloud-native data platforms (AWS, Azure, GCP). Ensure adherence to best practices in data modeling, warehousing, and governance. Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices. Implement and maintain workflow orchestration tools like Apache Airflow and dbt. Roles & Responsibilities Essential Skills 4+ years of experience in data engineering with a focus on scalable solutions. Strong hands-on experience with Databricks in a cloud environment. Proficiency in Spark and Python for data processing. Solid understanding of data modeling, data warehousing, and architecture principles. Experience working with at least one major cloud provider (AWS, Azure, or GCP). Familiarity with CI/CD pipelines and data workflow automation. Desirable Skills Direct experience with Unity Catalog and Mosaic AI within Databricks. Working knowledge of DevOps/DataOps principles in a data engineering context. Exposure to Apache Airflow, dbt, and modern data orchestration frameworks. Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus. Qualities: Able to consult, write, and present persuasively Able to work in a self-organized and cross-functional team Able to iterate based on new information, peer reviews, and feedback Able to work seamlessly with clients across multiple geographies Research focused mindset Excellent analytical, presentation, reporting, documentation and interactive skills "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
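    The posting mentions workflow orchestration with Apache Airflow; here is a minimal DAG sketch, assuming Airflow 2.x, with placeholder task bodies standing in for the extract, Databricks transformation, and publish steps; the DAG id and schedule are invented for the example.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw files from cloud storage")        # placeholder task body

def transform():
    print("running Databricks / Spark transformation")   # placeholder task body

def publish():
    print("refreshing curated tables for BI consumers")  # placeholder task body

with DAG(
    dag_id="daily_curation",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # older Airflow versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="publish", python_callable=publish)

    t1 >> t2 >> t3  # linear dependency chain: extract, then transform, then publish
```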
    $77k-105k yearly est. 1d ago
  • Senior Data Engineer

    Zillion Technologies, Inc. (3.9 company rating)

    Data scientist job in McLean, VA

    The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing. Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role. Experience with Angular and DevOps is a nice-to-have for this role. Must-Have Qualifications: PySpark/Python-based microservices, AWS EKS, Postgres SQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify. Responsibilities: Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project. New system: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest. Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Current system: Informatica, SAS, AutoSys, DB2. Write automated tests using Behave/Cucumber, based on the new microservices-based architecture. Promote top code quality and solve issues related to performance tuning and scalability. Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, and experience with SonarQube and Fortify. Able to communicate and engage with business teams, analyze the current business requirements (BRS documents), and create necessary data mappings. Strong skills and experience in reporting application development and data analysis preferred. Knowledge of Agile methodologies and technical documentation.
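    Because Pytest is a must-have here, a small sketch of the style of unit test involved follows; the normalize_balance function is a hypothetical stand-in for the posting's PySpark/Python transformation logic, and Behave/Cucumber would exercise the same behavior at the feature level.

```python
# test_transforms.py -- run with `pytest`; the function under test is a stand-in
# for the kind of Python transformation logic described in the posting.
import pytest

def normalize_balance(record: dict) -> dict:
    """Convert cents to dollars and reject records without an account id."""
    if not record.get("account_id"):
        raise ValueError("missing account_id")
    return {**record, "balance": record["balance_cents"] / 100}

def test_normalize_balance_converts_cents():
    out = normalize_balance({"account_id": "a1", "balance_cents": 2550})
    assert out["balance"] == 25.50

def test_normalize_balance_rejects_missing_id():
    with pytest.raises(ValueError):
        normalize_balance({"balance_cents": 100})
```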
    $77k-109k yearly est. 5d ago
  • Data Scientist (U.S. Intelligence Community)

    Elder Research (3.9 company rating)

    Data scientist job in Fort Meade, MD

    Job Title: Data Scientist (U.S. Intelligence Community) Workplace: On-site (hands-on access to high-impact data systems needed) Clearance Required: TS/SCI with Polygraph Why Join Elder Research * Mission-Driven Work: Support the U.S. Intelligence Community by building secure, scalable data systems that enable real-time insights. * Career Stability: We hire for careers, not contracts; our work is growing. * People-Focused Culture: We value work-life balance, collaboration, and professional growth. * Ownership Mindset: Every employee is a shareholder through our Employee Stock Ownership Plan (ESOP). * Competitive Pay & Benefits: * Typical salary range: $117k-$196k, depending on experience and contract labor category * 401(k) with employer contribution * Health, dental, and vision coverage * Paid parental leave and flexible PTO Ready to make an impact? Message our recruiting team directly at:
    $117k-196k yearly 60d+ ago
  • Data Scientist (FSP)

    Red Rock Government Services (3.7 company rating)

    Data scientist job in Herndon, VA

    Job Description Red Rock Government Services is a leading software engineering company recognized for its exceptional support to the intelligence community. With a proven track record of delivering innovative and mission-critical solutions, Red Rock specializes in developing secure, scalable, and cutting-edge technologies tailored to meet the complex needs of intelligence operations. The company's expertise in advanced analytics, cloud computing, and artificial intelligence enables it to empower agencies with enhanced decision-making capabilities and operational efficiency. Red Rock's commitment to excellence, agility, and collaboration solidifies its reputation as a trusted partner in safeguarding national security and advancing intelligence objectives. This position requires a current and active TS/SCI with Full Scope Polygraph security clearance. This position does not have the ability to sponsor candidates for clearance processing. RedRock is seeking a Data Scientist to join our team of diverse and qualified professionals. The role focuses on supporting the Sponsor's mission by delivering advanced language training to ensure proficiency across a broad spectrum of languages for the Sponsor's employees. Responsibilities: Works closely with the Sponsor to gather requirements and advise on AWS infrastructure design, development, and deployment of Cloud resources. Designs, tests, and implements log aggregation in support of Cloud and AWS systems. Designs, tests, and implements search and visualization infrastructure in support of AWS systems. Works with vendors to develop and deploy Cloud based solutions to the learning environment in AWS. Acts as a liaison between the Customer and Vendor Contacts to troubleshoot AWS when deploying new resources. Recommends new technologies for use in the cloud environment (AWS). Communicates IT requirements between management and technical entities. Qualifications: Experience in risk management and ability to identify project risks and facilitate the development and implementation of mitigation strategies Experience implementing and integrating AWS solutions. Experience in gathering requirements from vendor contacts and customers. Experience using Linux in the AWS environment. Experience with scripting and web programming technologies required to support web-based learning systems, such as PHP, PERL, Java, Jscript, or PowerShell. Experience deploying third party software products. Experience with Software Configuration Management (SCCM). Experience working with desktop and network hardware. Location: Herndon, VA Pay and Benefits: Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Paid Leave and Retirement. Commitment to Diversity: All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law.
    $78k-112k yearly est. 20d ago
  • Data Scientist - SME

    Janus Soft 4.8company rating

    Data scientist job in Chantilly, VA

    REQUIRED SKILLS
    • Demonstrated experience with the ability to understand and automate customer workflows and business processes with AI solutions
    • Demonstrated experience in data structures, data modeling, and software architecture
    • Demonstrated in-depth experience and knowledge of machine learning tools and frameworks such as PyTorch, TensorFlow, etc.
    • Demonstrated experience with extensive math and computer science skills, with a deep understanding of probability, statistics, and algorithms
    • Demonstrated experience in AI/ML-specific programming languages, such as Python, Java, C++, Conda, etc.
    • Demonstrated experience training AI models, as well as in integrating pre-trained models into dataflows and software architectures (see the sketch below)
    DESIRED SKILLS
    • Demonstrated experience implementing secure AI solutions in the Intelligence Community
    • Demonstrated experience working with highly sensitive data sets
    • Demonstrated experience in leading AI/ML initiatives
    • Demonstrated experience and in-depth knowledge of implementing ETL (Extract/Transform/Load) pipelines
    • Demonstrated experience and in-depth knowledge in implementing solutions in Amazon Web Services
    • Demonstrated experience with very strong problem solving and critical thinking skills
    • Demonstrated experience performing in a collaborative team environment.
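    Several of the skills above center on training AI models and wiring pre-trained models into dataflows. The sketch below shows the integration side at its most minimal, assuming PyTorch and torchvision are available; the model choice and the fake input batch are illustrative assumptions, not requirements from the posting.

    ```python
    """Illustrative only: loading a pre-trained model and running it on a batch,
    as one stage of a larger dataflow. Model and inputs are stand-ins."""
    import torch
    import torchvision

    # Load a pre-trained image classifier and switch to inference mode.
    weights = torchvision.models.ResNet18_Weights.DEFAULT
    model = torchvision.models.resnet18(weights=weights)
    model.eval()

    # Stand-in for a batch arriving from an upstream pipeline stage; a real
    # pipeline would apply weights.transforms() to actual images first.
    batch = torch.rand(4, 3, 224, 224)

    with torch.no_grad():
        logits = model(batch)
        probs = torch.softmax(logits, dim=1)

    top_prob, top_class = probs.max(dim=1)
    print(top_class.tolist(), top_prob.tolist())
    ```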
    $87k-118k yearly est. 60d+ ago
  • Data Engineer (Zero Trust)

    Kavaliro 4.2company rating

    Data scientist job in Fort Belvoir, VA

    Kavaliro is seeking a Zero Trust Security Architect / Data Engineer to support a mission-critical program by integrating secure architecture principles, strengthening data security, and advancing Zero Trust initiatives across the enterprise.
    Key Responsibilities
    Develop and implement program protection planning, including IT supply chain security, anti-tampering methods, and risk management aligned to DoD Zero Trust Architecture.
    Apply secure system design tools, automated analysis methods, and architectural frameworks to build resilient, least-privilege, continuously monitored environments.
    Integrate Zero Trust Data Pillar capabilities: data labeling, tagging, classification, encryption at rest/in transit, access policy definition, monitoring, and auditing (see the sketch below).
    Analyze and interpret data from multiple structured and unstructured sources to support decision-making and identify anomalies or vulnerabilities.
    Assess cybersecurity principles, threats, and vulnerabilities impacting enterprise data systems, including risks such as corruption, exfiltration, and denial-of-service.
    Support systems engineering activities, ensuring secure integration of technologies and alignment with Zero Trust operational objectives.
    Design and maintain secure network architectures that balance security controls, mission requirements, and operational tradeoffs.
    Generate queries, algorithms, and reports to evaluate data structures, identify patterns, and improve system integrity and performance.
    Ensure compliance with organizational cybersecurity requirements, particularly confidentiality, integrity, availability, authentication, and non-repudiation.
    Evaluate impacts of cybersecurity lapses and implement safeguards to protect mission-critical data systems.
    Structure, format, and present data effectively across tools, dashboards, and reporting platforms.
    Maintain knowledge of enterprise information security architecture and database systems to support secure data flow and system design.
    Requirements
    Active TS/SCI security clearance (required).
    Deep knowledge of Zero Trust principles (never trust, always verify; explicit authentication; least privilege; continuous monitoring).
    Experience with program protection planning, IT supply chain risk management, and anti-tampering techniques.
    Strong understanding of cybersecurity principles, CIA triad requirements, and data-focused threats (corruption, exfiltration, denial-of-service).
    Proficiency in secure system design, automated systems analysis tools, and systems engineering processes.
    Ability to work with structured and unstructured data, including developing queries, algorithms, and analytical reports.
    Knowledge of database systems, enterprise information security architecture, and data structuring/presentation techniques.
    Understanding of network design processes, security tradeoffs, and enterprise architecture integration.
    Strong ability to interpret data from multiple tools to support security decision-making.
    Familiarity with impacts of cybersecurity lapses on data systems and operational environments.
    Kavaliro is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.
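    The Data Pillar responsibilities above name labeling, tagging, classification, and encryption at rest. The sketch below is a loose illustration of two of those ideas in Python, assuming the third-party cryptography package; the record fields, labels, and inline key are hypothetical (a real deployment would pull keys from a managed KMS and enforce access through policy).

    ```python
    """Illustrative only: tagging a record with classification metadata and
    encrypting it at rest. All names and values are hypothetical."""
    import json
    from cryptography.fernet import Fernet

    # In practice the key would come from a managed KMS, not be generated inline.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Tag the record with classification metadata before it is persisted.
    record = {
        "payload": {"sensor_id": "A-17", "reading": 42.0},   # hypothetical data
        "tags": {"classification": "CUI", "owner": "mission-app"},
    }

    # Encrypt the serialized record at rest; store ciphertext, not plaintext.
    ciphertext = cipher.encrypt(json.dumps(record).encode("utf-8"))

    # Access is only granted by decrypting under an authorized policy decision.
    restored = json.loads(cipher.decrypt(ciphertext))
    assert restored["tags"]["classification"] == "CUI"
    ```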
    $85k-119k yearly est. 2d ago

Learn more about data scientist jobs

How much does a data scientist earn in Tysons Corner, VA?

The average data scientist in Tysons Corner, VA earns between $63,000 and $119,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Tysons Corner, VA

$86,000

What are the biggest employers of Data Scientists in Tysons Corner, VA?

The biggest employers of Data Scientists in Tysons Corner, VA are:
  1. Leidos
  2. Booz Allen Hamilton
  3. Capital One
  4. Amazon
  5. Peraton
  6. Deloitte
  7. Cymertek
  8. CACI International
  9. CGI Inc.
  10. Stanleyreid