Data Scientist
Data Scientist job at Apogee Integration
Job Description
Security Clearance: Active TS/SCI with FS Polygraph required
Seeking a Data Scientist to support an Intelligence Client.
What You'll Do
Assemble large, complex sets of data to meet functional and other requirements
Build models, test hypotheses, interpret, summarize, visualize, and succinctly report on data findings
Construct and perform complex database search queries in multiple databases using SQL and API interfaces
Curate and maintain data stores in support of metrics and evaluation
Design, build, test, maintain, and automate data collection/pipelines to optimize data delivery, and automate manual processes
Communicate and/or present product insights to technical and non-technical individuals
Required
Bachelor's Degree in Data Science, Machine Learning, Computer Science, Electrical Engineering, Physics, Statistics, Quantitative Finance, Econometrics, Economics, Mathematics, Analysis, Operations Research or other related programs, with 3+ years of experience (or 3 additional years of experience in lieu of degree)
Proficiency with open-source programming languages (e.g., Python, R, SQL) and machine learning toolkits (e.g., PyTorch, NumPy, Polars, scikit-learn, TensorFlow, pandas)
Proficiency using mathematical, statistical, or other data-driven analysis
Proficiency with APIs; both development/update of internal APIs and use of external APIs
Experience with data manipulation, analysis, analytic/business insight tools, and data visualization
Experience in applying qualitative and quantitative data analysis methods to analytics problems
Critical thinking and problem-solving skills for datasets which are unstructured
Ability or demonstrated experience obtaining structured and unstructured data from multiple sources, synthesizing it, and presenting the results effectively and concisely in written and graphic form to internal and external stakeholders
Ability or demonstrated experience interacting with customers to coordinate requirements and resolve data questions
Proficiency working independently or as a member of a team to research, organize, and analyze information
Must have strong interpersonal and communication skills
Desired
Experience with transforming, manipulating, and combining data using a programming language, such as Python
Experience with using git, or similar version control systems
Experience with extract, transform and load (ETL) development for data ingest pipelines
Experience with relational databases and related technologies such as MySQL, NiFi, Kafka, Elastic MapReduce (EMR), HBase, Elasticsearch, Splunk, and Spring
Experience with CI/CD pipelines
Ability or demonstrated experience researching and evaluating the latest emerging technologies and applying them
Experience providing analytical judgment and trend analysis based on research and comparisons with past products
Experience with the Sponsor's data handling procedures
Experience translating customer requirements into project or system specifications
Useful Additional Skills and Experience
M.S. or Ph.D. in a quantitative or STEM-related major
Experience with large scale ETL from multiple sources
Experience with AWS or cloud computing; experience developing and deploying code in a cloud-based environment.
Experience with Docker or containers
Experience with Databricks
Experience with Graph Databases (e.g. Neo4j, Neptune)
Experience with Graph Analytics
Experience with Kubernetes
Experience with Spark
Experience programming with distributed computing
Demonstrated experience working with sponsor business or mission data, sponsor applications, or sponsor database structures
What You'll Love About Apogee
Challenging work in support of US Intel Community - a Mission that Matters!
Access to our cool ApogeePlex facility
Support for new ideas & encouragement to take risks
Professional Development Assistance (PDA)
Wicked smart and collaborative coworkers
Regular interfacing with company leadership
401(k) with huge company match
Paid Time Off / Fixed & Floating Holidays
Medical, Dental, Vision
Health Savings Accounts / Dependent Care Flexible Spending Accounts
Life Insurance, Disability (Short- and Long-Term), Accidental Death and Dismemberment (AD&D)
Apogee's Mission
Be the PROVIDER of choice for government & commercial organizations with an unwavering commitment to responsiveness, accuracy, integrity, collaboration, and innovation
Be the EMPLOYER of choice committed to an open & transparent corporate atmosphere and progressive culture that attracts and empowers world-class professionals to explore cutting-edge technical solutions while fostering professional growth
Be the preferred SOURCE for cutting-edge Analytic Products, Systems & Software Engineering, Big Data Integration, IT and Business Services that directly contribute to customer success
Apogee Integration is an Equal Opportunity Employer
Data Scientist
McLean, VA jobs
Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support. The candidate MUST have experience designing and building data infrastructure, developing data pipelines, transforming and preparing data, ensuring data quality and security, and monitoring and optimizing systems. The candidate MUST have extensive experience with Python and AWS. Experience with SQL, multi-data source queries with database technologies (PostgreSQL, MySQL, RDS, etc.), NiFi, Git, Elasticsearch, Kibana, Jupyter Notebooks, NLP, AI, and any data visualization tools (Tableau, Kibana, Qlik, etc.) is desired.
Required Skills and Demonstrated Experience
Demonstrated experience with data engineering, to include designing and building data infrastructure, developing data pipelines, transforming/preparing data, ensuring data quality and security, and monitoring/optimizing systems.
Demonstrated experience with data management and integration, including designing and operating robust data layers for application development across local and cloud or web data sources.
Demonstrated work experience programming with Python
Demonstrated experience building scalable ETL and ELT workflows for reporting and analytics.
Demonstrated experience with general Linux computing and advanced bash scripting
Demonstrated experience with SQL.
Demonstrated experience constructing complex multi-data source queries with database technologies such as PostgreSQL, MySQL, Neo4J or RDS
Demonstrated experience processing data sources containing structured or unstructured data
Demonstrated experience developing data pipelines with NiFi to bring data into a central environment
Demonstrated experience delivering results to stakeholders through written documentation and oral briefings
Demonstrated experience using code repositories such as Git
Demonstrated experience using Elastic and Kibana
Demonstrated experience working with multiple stakeholders
Demonstrated experience documenting such artifacts as code, Python packages and methodologies
Demonstrated experience using Jupyter Notebooks
Demonstrated experience with machine learning techniques including natural language processing
Demonstrated experience explaining complex technical issues to more junior data scientists, in graphical, verbal, or written formats
Demonstrated experience developing tested, reusable and reproducible work
Work or educational background in one or more of the following areas: mathematics, statistics, hard sciences (e.g., Physics, Computational Biology, Astronomy, Neuroscience, etc.), computer science, data science, or business analytics
Desired Skills and Demonstrated Experience
Demonstrated experience with cloud services, such as AWS, as well as cloud data technologies and architecture.
Demonstrated experience using big data processing tools such as Apache Spark or Trino
Demonstrated experience with machine learning algorithms
Demonstrated experience with using container frameworks such as Docker or Kubernetes
Demonstrated experience with using data visualizations tools such as Tableau, Kibana or Apache Superset
Demonstrated experience creating learning objectives and creating teaching curriculum in technical or scientific fields
Location:
McLean, Virginia
This position is onsite and there is no remote availability.
Clearance:
TS/SCI with Full Scope Polygraph
Applicants MUST hold U.S. citizenship for this position in accordance with government contract requirements.
Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take the steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
Data Scientist with ML
Reston, VA jobs
Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support.
The candidate MUST have experience with Python and PyTorch; at least working knowledge of Flask (with the ability to pick it up quickly); familiarity with REST APIs; a statistics background; and a basic understanding of NLP.
Desired skills include experience performing R&D with natural language processing; deploying CNNs, LLMs, or foundation models; deploying ML models on multimedia data; experience with Linux system administration (or Bash); experience with Android configuration; and experience with embedded systems (Raspberry Pi).
Required Skills and Demonstrated Experience
Demonstrated experience in Python, JavaScript, and R.
Demonstrated experience employing machine learning and deep learning modules such as pandas, scikit-learn, TensorFlow, and PyTorch.
Demonstrated experience with statistical inference, as well as building and understanding predictive models, using machine learning methods.
Demonstrated experience with large-scale text analytics.
Desired Skills
Demonstrated hands-on experience performing research or development with natural language processing and working with, deploying, and testing Convolutional Neural Networks (CNNs), large language models (LLMs), or foundation models.
Demonstrated experience developing and deploying testing and verification methodologies to evaluate algorithm performance and identify strategies for improvement or optimization.
Demonstrated experience deploying machine learning models on multimedia data, to include joint text, audio, video, hardware, and peripherals.
Demonstrated experience with Linux System Administration and associated scripting languages (Bash)
Demonstrated experience with Android configuration, software development, and interfacing.
Demonstrated experience in embedded systems (Raspberry Pi)
Develops and conducts independent testing and evaluation methods on research-grade algorithms in applicable fields.
Reports results and provides documentation and guidance on working with the research-grade algorithms.
Evaluates, integrates, and leverages internally hosted data science tools.
Customizes research-grade algorithms to be optimized for memory and computational efficiency through quantization, trimming layers, or custom methods (an illustrative sketch follows this list)
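The posting leaves the optimization approach open; purely as a hedged illustration of one such technique (post-training dynamic quantization), the sketch below converts the Linear layers of a small stand-in PyTorch model to int8. The model, layer sizes, and names are assumptions for demonstration only, not details of the actual research-grade algorithms.

```python
# Minimal sketch only: dynamic quantization of a hypothetical stand-in model.
# Assumes PyTorch is installed; the architecture below is illustrative, not the
# customer's actual research-grade algorithm.
import torch
import torch.nn as nn


class TinyClassifier(nn.Module):
    """Hypothetical stand-in for a research-grade model."""

    def __init__(self, in_dim: int = 128, hidden: int = 256, classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = TinyClassifier().eval()

# Dynamic quantization stores Linear weights as int8 and quantizes activations
# on the fly, shrinking the memory footprint and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    sample = torch.randn(1, 128)
    print(quantized(sample).shape)  # expected: torch.Size([1, 10])
```

Layer trimming or other custom methods would be separate steps; this snippet only illustrates the quantization portion of the responsibility described above.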
Location:
Reston, Virginia
This position is onsite and there is no remote availability.
Clearance:
Active TS/SCI with Full Scope Polygraph
Applicants MUST hold U.S. citizenship for this position in accordance with government contract requirements.
Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take the steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
Machine Learning Data Scientist
Pittsburgh, PA jobs
Machine Learning Data Scientist
Length: 6 Month Contract to Start
* Please no agencies. Direct employees currently authorized to work in the United States - no sponsorship available.*
Job Description:
We are looking for a Data Scientist/Engineer with Machine Learning and strong skills in Python, time-series modeling, and SCADA/industrial data. In this role, you will build and deploy ML models for forecasting, anomaly detection, and predictive maintenance using high-frequency sensor and operational data.
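The role does not prescribe a specific method; as a minimal, hedged sketch of the kind of time-series anomaly detection described above, the snippet below flags sensor readings that deviate sharply from a rolling baseline. The signal, window length, and threshold are synthetic assumptions, not details of any particular SCADA historian.

```python
# Minimal sketch only: rolling z-score anomaly detection on simulated sensor data.
# Assumes NumPy and pandas are available; values and thresholds are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulate one day of 1 Hz sensor readings with a few injected spikes
# (a stand-in for data pulled from a SCADA historian).
readings = rng.normal(50.0, 1.0, 86_400)
readings[[10_000, 40_000, 70_000]] += 15.0  # hypothetical anomalies
signal = pd.Series(readings)

window = 300  # 5-minute rolling baseline at 1 Hz
rolling_mean = signal.rolling(window, min_periods=window).mean()
rolling_std = signal.rolling(window, min_periods=window).std()
z_score = (signal - rolling_mean) / rolling_std

# Flag points far outside recent behavior.
anomalies = signal[z_score.abs() > 6]
print(f"Flagged {len(anomalies)} of {len(signal)} readings as anomalous")
print(anomalies.head())
```

A production pipeline would replace the simulated series with historian data and tune the window and threshold per sensor; this only illustrates the shape of the approach.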
Essential Duties and Responsibilities:
Develop ML models for time-series forecasting and anomaly detection
Build data pipelines for SCADA/IIoT data ingestion and processing
Perform feature engineering and signal analysis on time-series data
Deploy models in production using APIs, microservices, and MLOps best practices
Collaborate with data engineers and domain experts to improve data quality and model performance
Qualifications:
Strong Python skills
Experience working with SCADA systems or industrial data historians
Solid understanding of time-series analytics and signal processing
Experience with cloud platforms and containerization (AWS/Azure/GCP, Docker)
POST-OFFER BACKGROUND CHECK IS REQUIRED. Digital Prospectors is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. Digital Prospectors affirms the right of all individuals to equal opportunity and prohibits any form of discrimination or harassment.
Come see why DPC has achieved:
4.9/5 Star Glassdoor rating and the only staffing company (< 1,000 employees) to be voted into the national Top 10 "Employee's Choice - Best Places to Work" by Glassdoor.
Voted "Best Staffing Firm to Temp/Contract For" seven times by Staffing Industry Analysts as well as a "Best Company to Work For" by Forbes, Fortune, and Inc. magazine.
As you are applying, please join us in fostering diversity, equity, and inclusion by completing the Invitation to Self-Identify form today!
*******************
Job #18135
Data Scientist - ML, Python
McLean, VA jobs
10+ years of experience required in Information Technology.
• Python Programming: At least 5 years of hands-on experience with Python, particularly in frameworks like FastAPI, Django, and Flask, and experience using AI frameworks.
• Access Control Expertise: Strong understanding of access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC).
• API and Connector Development: Experience in developing API connectors using Python for extracting and managing access control data from platforms like Azure, SharePoint, Java, .NET, WordPress, etc.
• AI and Machine Learning: Hands-on experience integrating AI into applications for automating tasks such as access control reviews and identifying anomalies.
• Cloud and Microsoft Technologies: Proficiency with Azure services and Microsoft Graph API, and experience integrating Python applications with Azure for access control reviews and reporting.
• Reporting and Visualization: Experience using reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports related to security and access control metrics.
• Communication Skills: Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time.
• PlainID: Experience or familiarity with PlainID platforms for identity and access management.
• Azure OpenAI: Familiarity with Azure OpenAI technologies and their application in access control and security workflows.
• Power BI: Experience with Microsoft Power BI for data visualization and reporting.
• Agile Methodologies: Experience working in Agile environments and familiarity with Scrum methodologies for delivering security solutions.
Senior Data Engineer
Bethlehem, PA jobs
Hybrid (Bethlehem, PA)
Contract
We're looking for a Senior Data Engineer to join our growing technology team and help shape the future of our enterprise data landscape. This is a hands-on, high-impact opportunity to make recommendations, build and evolve a modern data platform using Snowflake and cloud-based EDW Solutions.
How You'll Impact Results:
Drive the evolution and architecture of scalable, secure, cloud-native data platforms
Design, build, and maintain data models, pipelines, and integration patterns across the data lake, data warehouse, and consumption layers
Lead deployment of long-term data products and infuse data and analytics capabilities across business and IT
Optimize data pipelines and warehouse performance for accuracy, accessibility, and speed
Collaborate cross-functionally to deliver data, experimentation, and analytics solutions
Implement systems to monitor data quality and ensure reliability and availability of Production data for downstream users, leadership teams, and business processes
Recommend and implement best practices for query performance, storage, and resource efficiency
Test and clearly document data assets, pipelines, and architecture to support usability and scale
Engage across project phases and serve as a key contributor in strategic data architecture initiatives
Your Qualifications That Will Ensure Success:
Required:
10+ years of experience in Information Technology and Data Engineering, including professional database and data warehouse development
Advanced proficiency in SQL, data modeling, and performance tuning
Experience in system configuration, security administration, and performance optimization
Deep experience required with Snowflake and modern cloud data platforms (AWS, Azure, or GCP)
Familiarity with developing cloud data applications (AWS, Azure, Google Cloud) and/or standard CI/CD tools like Azure DevOps or GitHub
Strong analytical, problem-solving, and documentation skills
Proficiency with Microsoft Excel and common data analysis tools
Ability to troubleshoot technical issues and provide system support to non-technical users.
Preferred:
Experience integrating SAP ECC data into cloud-native platforms
Exposure to AI/ML, API development, or Boomi AtomSphere
Prior experience in consumer packaged goods (CPG), Food / Beverage industry, or manufacturing
Data Engineer
Richmond, VA jobs
Data Engineer - Distributed Energy Resources (DER)
Richmond, VA - Hybrid (1 week on - 1 week off)
12-month contract (Multiple Year Project)
$45-55/hr. depending on experience
We are hiring a Data Integration Engineer to join one of our Fortune 500 utilities partners in the Richmond area! In this role, you will support our client's rapidly growing Distributed Energy Resources (DER) and Virtual Power Plant (VPP) initiatives. You will be responsible for integrating data across platforms such as Salesforce, GIS, SAP, Oracle, and Snowflake to build our client's centralized asset tracking system for thermostats, EV chargers, solar assets, home batteries, and more.
In this role, you will map data, work with APIs, support Agile product squads, and help design system integrations that enable our client to manage customer energy assets and demand response programs at scale. This is a highly visible position on a brand-new product team with the chance to work on cutting-edge energy and utility modernization efforts. If you are interested, please apply!
MINIMUM QUALIFICATIONS:
3-5 years of experience in system integration, data engineering, or data warehousing and Bachelor's degree in Computer Science, Engineering, or related technical discipline.
Hands-on experience working with REST APIs and integrating enterprise systems.
Strong understanding of data structures, data types, and data mapping.
Familiarity with Snowflake or similar data warehousing platform.
Experience connecting data across platforms and/or integrating data from a variety of sources (e.g., SAP, Oracle, etc.)
Ability to work independently and solve problems in a fast-paced Agile environment.
Excellent communication skills with the ability to collaborate across IT, business, engineering, and product teams.
RESPONSIBILITIES:
Integrate and map data across Salesforce, GIS, Snowflake, SAP, Oracle, and other enterprise systems
Link distributed energy asset data (EV chargers, thermostats, solar, home batteries, etc.) into a unified asset tracking database
Support API-first integrations: consuming, analyzing, and working with RESTful services
Participate in Agile ceremonies and work through user stories in Jira
Collaborate with product owners, BAs, data analysts, architects, and engineers to translate requirements into actionable technical tasks
Support architecture activities such as identifying data sources, formats, mappings, and integration patterns
Help design and optimize integration workflows across new and existing platforms
Work within newly formed Agile product squads focused on VPP/Asset Tracking and Customer Segmentation
Troubleshoot integration issues and identify long-term solutions
Contribute to building net-new systems and tools as the client expands DER offerings
NICE TO HAVES:
Experience with Salesforce.
Experience working with GIS systems or spatial data.
Understanding of customer enrollment systems.
Jira experience.
WHAT'S IN IT FOR YOU…?
Joining our client provides you the opportunity to join a brand-new Agile product squad, work on high-impact energy modernization and DER initiatives, and gain exposure to new technologies and integration tools. This is a long-term contract with strong likelihood of extension in a stable industry and company.
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws.
Data Engineer (Zero Trust)
Fort Belvoir, VA jobs
Kavaliro is seeking a Zero Trust Security Architect / Data Engineer to support a mission-critical program by integrating secure architecture principles, strengthening data security, and advancing Zero Trust initiatives across the enterprise.
Key Responsibilities
Develop and implement program protection planning, including IT supply chain security, anti-tampering methods, and risk management aligned to DoD Zero Trust Architecture.
Apply secure system design tools, automated analysis methods, and architectural frameworks to build resilient, least-privilege, continuously monitored environments.
Integrate Zero Trust Data Pillar capabilities: data labeling, tagging, classification, encryption at rest/in transit, access policy definition, monitoring, and auditing.
Analyze and interpret data from multiple structured and unstructured sources to support decision-making and identify anomalies or vulnerabilities.
Assess cybersecurity principles, threats, and vulnerabilities impacting enterprise data systems, including risks such as corruption, exfiltration, and denial-of-service.
Support systems engineering activities, ensuring secure integration of technologies and alignment with Zero Trust operational objectives.
Design and maintain secure network architectures that balance security controls, mission requirements, and operational tradeoffs.
Generate queries, algorithms, and reports to evaluate data structures, identify patterns, and improve system integrity and performance.
Ensure compliance with organizational cybersecurity requirements, particularly confidentiality, integrity, availability, authentication, and non-repudiation.
Evaluate impacts of cybersecurity lapses and implement safeguards to protect mission-critical data systems.
Structure, format, and present data effectively across tools, dashboards, and reporting platforms.
Maintain knowledge of enterprise information security architecture and database systems to support secure data flow and system design.
Requirements
Active TS/SCI security clearance (required).
Deep knowledge of Zero Trust principles (never trust, always verify; explicit authentication; least privilege; continuous monitoring).
Experience with program protection planning, IT supply chain risk management, and anti-tampering techniques.
Strong understanding of cybersecurity principles, CIA triad requirements, and data-focused threats (corruption, exfiltration, denial-of-service).
Proficiency in secure system design, automated systems analysis tools, and systems engineering processes.
Ability to work with structured and unstructured data, including developing queries, algorithms, and analytical reports.
Knowledge of database systems, enterprise information security architecture, and data structuring/presentation techniques.
Understanding of network design processes, security tradeoffs, and enterprise architecture integration.
Strong ability to interpret data from multiple tools to support security decision-making.
Familiarity with impacts of cybersecurity lapses on data systems and operational environments.
Kavaliro is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.
Data Modeler
Philadelphia, PA jobs
Philadelphia, PA
Hybrid / Remote
Brooksource is seeking an experienced Data Modeler to support an enterprise data warehousing team responsible for designing and implementing information solutions across large operational and analytical systems. You'll work closely with data stewards, architects, and DBAs to understand business needs and translate them into high-quality logical and physical data models that align with enterprise standards.
Key Responsibilities
Build and maintain logical and physical data models for the Active Enterprise Data Warehouse (AEDW), operational systems, and data exchange processes.
Collaborate with data stewards and architects to capture and refine business requirements and translate them into scalable data structures.
Ensure physical models accurately implement approved logical models.
Partner with DBAs on schema design, change management, and database optimization.
Assess and improve existing data structures for performance, consistency, and scalability.
Document data definitions, lineage, relationships, and standards using ERwin or similar tools.
Participate in design reviews, data governance work, and data quality initiatives.
Support impact analysis for enhancements, new development, and production changes.
Adhere to enterprise modeling standards, naming conventions, and best practices.
Deliver high-quality modeling artifacts with minimal supervision.
Required Skills & Experience
5+ years as a Data Modeler, Data Architect, or similar role.
Strong expertise with ERwin or other modeling tools.
Experience supporting EDW, ODS, or large analytics environments.
Proficiency developing conceptual, logical, and physical data models.
Strong understanding of relational design, dimensional modeling, and normalization.
Hands-on experience with Oracle, SQL Server, PostgreSQL, or comparable databases.
Ability to translate complex business requirements into clear technical solutions.
Familiarity with data governance, metadata management, and data quality concepts.
Strong communication skills and ability to collaborate across technical and business teams.
Preferred Skills
Experience in healthcare or insurance data environments.
Understanding of ETL/ELT concepts and how data models impact integration workflows.
Exposure to cloud data platforms (AWS, Azure, GCP) or modern modeling approaches.
Knowledge of enterprise architecture concepts.
About the Team
You'll join a collaborative, fast-moving data warehousing team focused on building reliable, scalable information systems that support enterprise decision-making. This role is key in aligning business needs with the data structures that power core operations and analytics.
AWS Data Engineer
McLean, VA jobs
Responsibilities:
Design, build, and maintain scalable data pipelines using AWS Glue and Databricks.
Develop and optimize ETL/ELT processes using PySpark and Python.
Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
Ensure data quality, consistency, and reliability across systems.
Optimize performance of large-scale distributed data processing workflows.
Develop automation scripts and frameworks for data ingestion, transformation, and validation.
Follow best practices for data governance, security, and compliance.
Required Skills & Experience:
5-8 years of hands-on experience in Data Engineering.
Strong proficiency in Python and PySpark for data processing and transformation.
Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena.
Hands-on experience with Databricks for building and managing data pipelines.
Experience working with large-scale data systems and optimizing performance.
Solid understanding of data modeling, data lake architecture, and ETL design principles.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Expert Exploitation Specialist/Data Scientist (TS/SCI)
Springfield, VA jobs
About the Role
Culmen International is hiring Expert Exploitation Specialist/Data Scientists to provide support on-site at the National Geospatial-Intelligence Agency (NGA) in Springfield, VA.
The National Geospatial-Intelligence Agency (NGA) expects to deliver AOS Metadata Cataloging and Management Services to enhance product and asset management of content, enabling rapid creation of discoverable, modular, web-enabled, and visually enriched Geospatial Intelligence (GEOINT) products for intelligence producers in NGA and across the National System for Geospatial-Intelligence (NSG).
TALENT PIPELINE - Qualified applicants will be contacted as soon as funding for this position is secured.
What You'll Do in Your New Role
The Data Scientist will coordinate with our clients to understand questions and issues involving the client's datasets, then determine the best method and approach to create data-driven solutions within program guidelines. This position will be relied upon as a Subject Matter Expert (SME) and will be expected to lead or assist in developing automated processes and workflows, architect data science solutions, conduct analysis using available tools, remain adaptable to mission requirements, and identify patterns to help solve some of the complex problems that face the DoD and Intelligence Community (IC).
Work with large structured and unstructured data in a modeling and analytical environment to define and create streamlined processes for evaluating unique datasets and solving challenging intelligence issues
Lead and participate in the design of solutions and refinement of pre-existing processes
Work with Customer Stakeholders, Program Managers, and Product Owners to translate road map features into components/tasks, estimate timelines, identify resources, suggest solutions, and recognize possible risks
Use exploratory data analysis techniques to identify meaningful relationships, patterns, or trends from complex data
Combine applied mathematics, programming skills, analytical techniques, and data to provide impactful insights for decision makers
Research and implement optimization models, strategies, and methods to inform data management activities and analysis
Apply big data analytic tools to large, diverse sets of data to deliver impactful insights and assessments
Conduct peer reviews to improve quality of workflows, procedures, and methodologies
Help build high-performing teams; mentor team members providing development opportunities to increase their technical skills and knowledge
Required Qualifications
TS/SCI Clearance w/CI Poly Eligible
Minimum of 18 years of combined experience (a combination of years of experience and professional certifications/trainings can be used in lieu of a degree)
B.S. in a related field with graduate-level work
Expert proficiency in Python and other programming languages applicable to automation development.
Demonstrated experience designing and implementing workflow automation systems
Advanced experience with ETL (Extract, Transform, Load) processes for geospatial data
Expertise in integrating disparate systems through API development and implementation
Experience developing and deploying enterprise-scale automation solutions
Knowledge of NGA's Foundation GEOINT products, data types, and delivery methods
Demonstrated experience with database design, implementation, and optimization
Experience with digital media generation systems and automated content delivery platforms
Ability to analyze existing workflows and develop technical solutions to streamline processes
Knowledge of DLA systems and interfaces, particularly MEBS and WebFLIS
Expertise in data quality assurance and validation methodologies
Experience with geospatial data processing, transformation, and delivery automation
Proficiency with ArcGIS tools, GEODEC and ACCORD software systems
Understanding of cartographic principles and standards for CADRG/ECRG products
Strong analytical skills for identifying workflow inefficiencies and implementing solutions
Experience writing technical documentation, including SOPs, CONOPS, and system design
Desired Qualifications
Certification(s) in relevant automation technologies or programming languages
Experience with DevOps practices and CI/CD implementation
Knowledge of cloud-based automation solutions and their implementation in government environments
Experience with machine learning applications for GEOINT workflow optimization
Expertise in data analytics and visualization for workflow performance metrics
Understanding of NGA's enterprise architecture and integration points
Experience implementing RPA (Robotic Process Automation) solutions
Knowledge of secure coding practices and cybersecurity principles
Demonstrated expertise in digital transformation initiatives
Experience mentoring junior staff in automation techniques and best practices
Background in agile development methodologies
Understanding of human-centered design principles for workflow optimization
About the Company
Culmen International is committed to enhancing international safety and security, strengthening homeland defense, advancing humanitarian missions, and optimizing government operations. With experience in over 150 countries, Culmen supports our clients to accomplish critical missions in challenging environments.
Exceptional Medical/Dental/Vision Insurance; employee premiums are 100% paid by Culmen, and dependent coverage is available at a nominal rate (including same- or opposite-sex domestic partners)
401(k) - vested immediately with a 4% match
Life insurance and disability paid by the company
Supplemental Insurance Available
Opportunities for Training and Continuing Education
12 Paid Holidays
To learn more about Culmen International, please visit **************
At Culmen International, we are committed to creating and sustaining a workplace that upholds the principles of Equal Employment Opportunity (EEO). We believe in the importance of fair treatment and equal access to opportunities for all employees and applicants. Our commitment to these principles is unwavering across all our operations worldwide.
Junior Data Scientist [Python Experience] with Top Secret / SCI Clearance
Springfield, VA jobs
Description
Founded in 1989, CALNET, Inc. has become one of the fastest-growing privately held companies in the Technology, Intelligence Analysis, and Language Services consulting arena. Headquartered in Reston, VA, CALNET employees deliver true value to our customers by employing best practices, world-class technologies, and industry expertise in every project. CALNET is ISO 9001, ISO 20000, and CMMI Level III certified. We are currently searching for a talented Junior Data Scientist with Top Secret / SCI Clearance to work in Springfield, VA.
About the Job
We have an exciting opportunity for a Data Scientist with Top Secret / SCI Clearance. The positions will support a new team at NGA that does work very similar to some of our automated collection work. The team will be building automated multi-INT analytic models that detect activity of interest and task GEOINT collection. The qualifications are:
Must have:
Active TS/SCI clearance with the willingness to take a CI polygraph exam
Able to work on client site in Northern Virginia 40 hours a week (very limited option for telework)
Proficient with Python and experience with JEMA
Nice to have:
Experience with multiple intelligence types (SIGINT, OSINT, ELINT, GEOINT, MASINT, HUMINT)
Experience with Brewlytics, JEMA, ArcPro and/or other geospatial data analysis tools
Knowledge of GEOINT collection and associated NGA/NRO systems
Proficiency with common programming languages including R, SQL, HTML, and JavaScript
Experience analyzing geospatially enabled data
Experience providing embedded data science/automation support to analytic teams
Ability to learn new technologies, adapt to dynamic mission needs, and develop/test new analytic methodologies
Requirements
Must be capable of obtaining and holding a U.S. government Top Secret / SCI clearance. Prefer the candidate to already have a CI Polygraph.
Bachelor's degree in Computer Science, Computer Engineering, or a related field
1+ years of software development experience in common programming languages including Python, R, SQL, HTML, and JavaScript.
Experience developing AI Tools
U.S. Citizenship with Top Secret / SCI clearance is required for this position. CALNET, Inc. is an Equal Opportunity Employer. EEO/M/F/D/V
Mid-Level Data Scientist with TS/SCI Clearance
Springfield, VA jobs
Description
Founded in 1989, CALNET, Inc. has become one of the fastest-growing privately held companies in the Technology, Intelligence Analysis, and Language Services consulting arena. Headquartered in Reston, VA, CALNET employees deliver true value to our customers by employing best practices, world-class technologies, and industry expertise in every project. CALNET is ISO 9001, ISO 20000, ISO 27001, CMMI Level III for Services, and CMMC Level 2 certified. We are currently searching for a talented, professional Mid-Level Data Scientist with TS/SCI clearance to join our team to support NGA.
About the Job
The Senior Analytical Methodologist possesses skills that focus on GEOINT data enrichment and management. Applies expertise to exploit GEOINT data or information in order to develop advanced analytic processes, apply scientific approaches to test geospatial data for accuracy and precision, create automated services that support GEOINT creation and delivery, and/or structure and manage data for further use.
Requirements
7+ years of demonstrated experience using quantitative and qualitative techniques and analytic tools to solve complex problems, such as GIS, quantitative methods and data visualization, modeling, systems analysis, comparative analysis, and database development.
Bachelor's or master's degree in a computer science or information technology discipline.
Demonstrated knowledge and experience in data mining, cleansing and exploring spatial, temporal and non-spatial data in both structured and unstructured formats.
Demonstrated knowledge of programming languages (e.g. Python, Java, JavaScript, SQL), modeling software, spatial analysis tools and concepts, data mining methods, database structures and analytic information extraction and visualization.
Current, active TS/SCI with the ability to obtain a CI Poly.
This opportunity is located in Springfield, VA. To apply, go to ********************************. CALNET, Inc. offers a competitive base salary and a generous benefits package. This package includes medical, dental, vision, life, short- and long-term disability insurances, a 401(k) retirement savings plan, and generous leave time. CALNET, Inc. is an Equal Opportunity Employer; all qualified applicants are encouraged to apply. ************** EEO/M/F/D/V
Junior Data Scientist
Norfolk, VA jobs
Job Description
Located in Norfolk, VA. TS/SCI Clearance Required.
National Capitol Contracting is seeking a Junior Data Scientist to support the Naval Sea Systems Command (NAVSEA) and the Shipboard Maintenance Performance Group (SMPG) and help bring cutting edge data analytics to the Navy Fleet Shipyards.
Essential Duties and Responsibilities
Assist in the stand-up (both technical and functional) of the Data Analytics Lab located at NAVSEA Naval Shipyards, recommend software/hardware analytics and data management toolsets, and recommend functional responsibilities for the other SMDAL staff members.
Assist in the development of data procedures and policies, govern data assets, and improve the quality and access to data across the organization.
Analyze and streamline the processes and technology for using data to manage ship maintenance operations, and draw valuable insights from data through data science and business intelligence.
Assist in the implementation of industry best practices, selection of enabling tools and technologies, driving innovation, and contributing to new service offerings for our ship maintenance stakeholders.
Assist in the development of policies and procedures to ensure corporate data systems information protection, data governance, data quality, and data life cycle management.
Government Security Clearance Requirement
TS/SCI and willingness to undergo a CI and/or FS polygraph is required.
Minimum Requirements
Bachelor's degree in Computer Science, Data Science, Computational Science, or related field.
Ability to concisely and convincingly communicate (both verbally and written) with both technical and non-technical stakeholders.
Experience using data for descriptive, diagnostic, predictive, and prescriptive analytics.
Knowledge of big data technology infrastructure and environments.
Proficient in a broad range of data tools and technologies (e.g., Oracle, SQL, SAP IQ, etc.), visualization/analytics tools (e.g., Tableau, Qlik, MicroStrategy, SAP Business Objects, etc.), as well as programming languages (e.g., Python, R/RStudio, Java, C#, SQL, etc.)
Experience in developing machine learning models.
Results-driven, creative, and analytical.
Ability to work within cross-functional groups.
Preferred Qualifications
Experience with data warehouse and ETL processes.
Special Position Requirements
Environment/Physical Demands: Office Setting, sitting for long periods of time, and utilizing computer/screens for long periods of time.
NCC provides reasonable accommodations to qualified individuals with disabilities. If you are an applicant that requires a reasonable accommodation, please email us and reference the position in your email.
NCC is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, ethnicity, ancestry, color, sex, religion, creed, age, national origin, citizenship status, disability, medical condition, military and veteran status, marital status, sexual orientation or perceived sexual orientation, gender, gender identity, and gender expression, familial status, political affiliation, genetic information, or any other legally protected status or characteristic. E-Verify Employer. VEVRAA Federal Contractor.
Data Scientist - FBI HTOCU Support (Anticipated)
Arlington, VA jobs
Project/Team: FBI Hi-Tech Organized Crime Unit (HTOCU) Support
Employment Type: Full-Time (1920 hours/year)
Number of Vacancies: 1
Period of Performance: 07/22/2025 to 07/21/2026, with four 1-year option periods
Clearance Required: Top Secret with Counterintelligence (CI) Polygraph
*Note:
This position has not yet been funded. We are currently soliciting resumes from interested candidates in anticipation of contract award.
Background:
Navanti is seeking a highly analytical and mission-aligned Data Scientist to support the FBI's Hi-Tech Organized Crime Unit (HTOCU) at FBI Headquarters in Washington, DC. This position directly contributes to advanced criminal investigations and intelligence operations by unlocking insight from large and complex datasets.
The selected candidate will support two specialized teams: the Joint Criminal Opioid and Darknet Enforcement (JCODE) Team and the Mobile Encrypted Networks and Communications Exploitation (MENACE) Team. These teams rely on data fusion, enrichment, and visualization to investigate encrypted communications, darknet marketplaces, and other high-risk digital threats.
Work will be performed onsite and requires an active Top Secret clearance with Counterintelligence (CI) polygraph.
Core Responsibilities:
Perform data wrangling, normalization, and transformation on investigative datasets.
Create statistical models and dynamic visualizations that reveal patterns, anomalies, and leads.
Enable investigators and analysts by converting raw data into actionable formats.
Recommend and implement data architectures, storage frameworks, and retrieval pipelines.
Collaborate with technical and investigative stakeholders to define analytical requirements.
Document and present data methodologies, findings, and tools to technical and non-technical audiences.
Technical Requirements:
Bachelor's degree in Data Science, Statistics, Computer Science, or a related field.
Minimum of 3 years of experience in applied data science, ideally in federal, law enforcement, or intelligence environments.
Strong proficiency in:
Scripting languages (e.g., Python, R)
Data visualization tools and platforms
Relational and non-relational databases
Active Top Secret clearance with CI polygraph is required.
Preferred Qualifications:
Deep understanding of analytical techniques relevant to intelligence and investigative domains.
Experience with information assurance, anomaly detection, and statistical variance analysis.
Strong presentation skills; capable of translating technical insight into operational intelligence.
Familiarity with federal or law enforcement datasets, workflows, or investigative processes.
Data Scientist #1403
Arlington, VA jobs
Primary Responsibilities:
Leverage data analytics and subject matter expertise to collaboratively develop new analytic products, and provide our client's workforce with templates and reporting guidelines to enable them to recreate data products and visualizations in line with industry best practices.
Leverage technical and analytical expertise to explore and examine data from multiple disparate sources with the goal of discovering patterns and previously hidden insights, providing process efficiencies, or addressing a pressing business problem.
Support the development of new data analytic products and brainstorm solutions for better data metrics, visualizations, and improvements to existing data systems within the organization.
Communicate effectively with senior leaders to shape and develop analytical approaches to improve data quality, data goals and initiatives, as well as how to measure progress on those initiatives.
Effectively collaborate with other team members to solve strategic and data problems creatively, synthesizing new solutions from disparate inputs.
Support and facilitate working-level meetings and projects that further efforts to collaborate on creative solutions to improve our access to important data and support decision-makers across the Department in the near-term and long-term.
Effectively work with functional strategy analysts in the office and throughout the organization to develop measurable and useful metrics for strategic initiatives, with the intent to help our client become more data-driven
Identify and support key near-term actions to establish a foundation from which to augment our existing data infrastructure and analytic capabilities, protect data assets, and enable the organization to use and understand data.
Help improve data literacy within the organization, leveraging best practices for identifying and addressing data quality concerns as well as opportunities to train the broader workforce to better utilize the data they have.
Under general supervision, employee will be expected to exercise sound and independent judgment and consistently exercise discretion.
CLEARANCE REQUIREMENTS: Required active SECRET clearance with TS/SCI eligibility
EDUCATION REQUIREMENTS
Mandatory: Bachelor's degree in Data Science, Business Analytics, Computer Science, Economics, or related field.
EXPERIENCE REQUIREMENTS: 3+ years relating to data science/analysis
Required: Excellent Microsoft Office skills, with advanced skills in Excel
Required: Proficiency in Python and/or R statistical analysis packages.
Required: Demonstrated experience using SQL to merge and query large, complex datasets
Required: Demonstrated experience with business intelligence and analytics software platforms to include: Qlik, Tableau, Alteryx, Snowflake, etc.
Desired: Demonstrated experience using cloud platforms including AWS and Microsoft Azure
ADDITIONAL REQUIREMENTS:
To be eligible for employment, you must be fully vaccinated by January 4th, 2022, or your start date, whichever is later, except in limited circumstances where an employee is legally entitled to an accommodation. People are considered fully vaccinated for COVID-19 two weeks after they have received the second dose in a two-dose series, or two weeks after they have received a single-dose vaccine. If you think you are entitled to an accommodation, please advise your Sehlke liaison and they will consult our human resource leads for consideration.
Founded in 2011, Sehlke Consulting is headquartered in Arlington, VA - an Equal Opportunity Employer that values the strength diversity brings to the workplace. Individuals with Disabilities and Protected Veterans are encouraged to apply.
Life Sciences Marketing Analytics, Data Scientist
Philadelphia, PA jobs
JPA Health is a fully integrated marketing, communications and medical communications agency for clients ranging from emerging biotech to established pharmaceutical companies and public health organizations. We work exclusively within the health sector. We share our clients' commitment to making people healthier. In fact, some might say we are obsessed with improving and protecting lives. Check out our sizzle reel!
The Role
JPA is searching for a Life Sciences Marketing Analytics Data Scientist to join our Integrated Intelligence team. You'll blend analytics, applied statistics, and development to transform omnichannel marketing data into models, dashboards, and insights that fuel behavior change. You'll work across teams to design, measure, and optimize life sciences marketing programs across audiences and channels.
This position is a full-time, hybrid role reporting to any JPA office (i.e., Boston, Washington, DC, Philadelphia, New York City) 2 days per week and working remotely 3 days per week.
Applicants must be authorized to work in the United States without current or future need for visa sponsorship
The Responsibilities
Design and implement data pipelines to ingest, transform, and QA multi-source marketing data (paid, owned, earned, CRM, web, email, social, events, qualitative).
Build statistical models (e.g., regression, classification, forecasting, clustering) to identify performance drivers, segment audiences, and predict outcomes.
Develop fit-for-purpose marketing attribution and contribution approaches based on available data; quantify incrementality where feasible.
Create dashboards and automated reporting that translate complex results into simple, visual stories for technical and non-technical stakeholders.
Plan and analyze A/B and multivariate tests; estimate impact and uncertainty; recommend next-best actions and channel allocations (see the brief sketch after this list).
Partner with engagement specialists and performance analytics teams to translate insights into channel, message, and audience optimizations throughout the campaign lifecycle.
Document methods, code, and data dictionaries; champion data quality, reproducibility, and privacy-aware measurement.
Present findings to clients; manage timelines and priorities across concurrent projects.
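To make the testing and uncertainty-estimation responsibilities above more concrete, here is a minimal sketch of analyzing a hypothetical two-variant email test in Python; the variant names, counts, and conversion definition are illustrative assumptions, not JPA methodology or client data.

```python
# Minimal sketch: estimate lift and uncertainty for a hypothetical A/B email test.
# All counts and variant names are illustrative, not real campaign data.
import numpy as np
from scipy import stats

# Hypothetical results: (conversions, recipients) per variant
control = (420, 10_000)   # variant A
treat   = (505, 10_000)   # variant B

p_c = control[0] / control[1]
p_t = treat[0] / treat[1]
lift = (p_t - p_c) / p_c  # relative lift of B over A

# Two-proportion z-test under a pooled null hypothesis of no difference
p_pool = (control[0] + treat[0]) / (control[1] + treat[1])
se_null = np.sqrt(p_pool * (1 - p_pool) * (1 / control[1] + 1 / treat[1]))
z = (p_t - p_c) / se_null
p_value = 2 * stats.norm.sf(abs(z))

# 95% confidence interval for the absolute difference (unpooled standard error)
se_diff = np.sqrt(p_c * (1 - p_c) / control[1] + p_t * (1 - p_t) / treat[1])
ci = (p_t - p_c) + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_diff

print(f"conversion A={p_c:.3%}, B={p_t:.3%}, relative lift={lift:.1%}")
print(f"z={z:.2f}, p={p_value:.4f}, 95% CI for difference: [{ci[0]:.3%}, {ci[1]:.3%}]")
```

Multivariate tests typically extend this structure with a regression across variants and a multiple-comparison correction before any channel-allocation recommendation is made.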
About You
Our ideal candidate must have:
Bachelor's degree in a quantitative field (e.g., statistics, mathematics, economics, engineering) or equivalent experience.
3-5 years in life sciences marketing analytics, data science, or a related role within an agency or in-house team.
Experience with privacy-aware measurement techniques and working within regulated industries, especially life sciences.
Hands-on coding experience in Python, R, or SQL for data wrangling, analysis, and statistical modeling, including version control and code review practices.
Strong grasp of experimental design, statistical inference, and causal thinking; able to communicate assumptions and limitations clearly.
Experience building automated dashboards and reports; adept at data visualization and storytelling using tools like Tableau and Power BI.
Familiarity with omnichannel datasets and measurements across paid, owned, and earned media.
Comfortable working with imperfect data and designing solutions that scale.
Excellent communication skills; client-facing presence; collaborative approach with cross-functional partners.
What Makes Us Different
JPA Health offers you the opportunity to work with purpose as you achieve extraordinary results for our clients. You will elevate your career in an environment that thrives at the intersection of wellness, connection, and compassion. Our mission to help people live healthier lives begins with you.
Our approach prioritizes compassion to ensure you and your family flourish. We promote flexibility with adaptable work arrangements for a balanced personal and professional life. Our Collaboration Days are designed to strengthen relationships and enhance well-being. Respect, inclusion, camaraderie, and connection - this is the heart of our agency's ethos. We elevate each other. We work collaboratively. And we push ourselves to think bigger.
In addition, JPA Health offers:
Paid time off when you need it most: 20+ days PTO, 10 holidays, Sabbatical, bereavement & compassion leave, parental leave, civic duty, volunteer time and year-end office closure.
Unlimited access to LinkedIn Learning, internal webinars through JPA's Elevate Institute, tuition reimbursement, paid professional development, and paid learning and development time.
An impressive and comprehensive benefits package that supports you and your family's physical, mental and financial well-being.
Competitive pay and opportunities to advance. The anticipated starting pay for this role is between $107,000 and $120,000 annually, based on a variety of factors including but not limited to experience, qualifications, and location. You may also be eligible for performance-based bonuses. We review compensation annually and evaluate readiness for promotions every quarter.
At JPA Health, we are committed to fostering a culture of Diversity, Equity, and Inclusion (DEI). We believe that our strength lies in the diversity of our team, and we strive to create an environment where every individual feels valued, respected, and heard. We are dedicated to promoting equity in all aspects of our work, ensuring that all employees have equal access to opportunities and resources. We are inclusive, welcoming individuals of all races, genders, sexual orientations, religions, national origins, disabilities, and ages. Our commitment to DEI extends beyond our organization, influencing the work we do and the partnerships we build. We believe that by embracing DEI, we can drive innovation, enhance our services, and contribute to a healthier society.
We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. If you require an accommodation in order to apply for a position with JPA Health, please contact us for assistance at ******************.
Data Scientist (Cleared)
Job Title: DATA SCIENTIST
Workplace: Hybrid
Clearance Required: MUST HAVE AN ACTIVE SECRET CLEARANCE
Requisition Type: Pipeline. This is not a current opening but rather a talent pipeline for Data Scientists of all levels with active clearances who are interested in supporting the U.S. Government. When new cleared Data Scientist positions become available, this talent community will be the first place our recruiters look to fill the roles. Candidates with profiles in this talent community can also expect to receive regular updates on relevant new job opportunities. Be sure to also apply to any relevant current funded/awarded openings, if available.
Position Overview:
AI / Analytics Platform Data Scientists are responsible for leveraging AI and data analytics to detect, prevent, and investigate fraud and other crimes. Their job responsibilities include:
Develop and implement predictive models and machine learning algorithms to identify patterns.
Use AI/analytics platforms to analyze complex and large volumes of data from various sources to identify potential fraudulent activities.
Apply graph data science techniques to uncover hidden relationships and detect fraudulent networks.
Utilize Neo4j and other graph databases to model complex relationships in fraud/criminal investigations.
Develop fraud detection models leveraging network analytics, link prediction, and anomaly detection in transaction data (see the sketch after this list).
Collaborate with law enforcement, regulatory bodies, and other stakeholders to understand their data needs and translate them into data science projects.
Communicate complex quantitative analyses in a clear, precise, and actionable manner to non-technical stakeholders.
Keep abreast of the latest developments in AI, machine learning, and data science.
Ensure the proper implementation of data privacy regulations and best practices in data security within all AI and analytics projects.
Participate in the development of strategies and policies related to crime prevention and detection.
Assist in preparing reports and presentations for senior management and other stakeholders.
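As a rough sketch of the graph-based fraud work described in the responsibilities above, the snippet below projects a hypothetical transaction graph with Neo4j's Graph Data Science (GDS) library and ranks accounts by centrality for triage; the connection URI, credentials, node label, relationship type, and accountId property are placeholders, and the GDS plugin is assumed to be installed.

```python
# Minimal sketch: rank accounts in a hypothetical transaction graph for fraud triage.
# Connection details, labels, and properties below are placeholders, not a real system.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"          # placeholder
AUTH = ("neo4j", "example-password")   # placeholder

# Assumes a graph of (:Account)-[:SENT]->(:Account) and the GDS plugin installed.
PROJECT = "CALL gds.graph.project('tx', 'Account', 'SENT')"
PAGERANK = """
CALL gds.pageRank.stream('tx')
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).accountId AS accountId, score
ORDER BY score DESC LIMIT 20
"""

def top_hub_accounts():
    """Return the most central accounts in the hypothetical transaction network."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session() as session:
            session.run(PROJECT)            # build the in-memory projection
            result = session.run(PAGERANK)  # rank accounts by PageRank centrality
            return [record.data() for record in result]

if __name__ == "__main__":
    for row in top_hub_accounts():
        print(row["accountId"], round(row["score"], 3))
```

Centrality is only one starting point; in practice the same projection would also feed community detection, link prediction, and anomaly scoring before results are handed to investigators.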
Position Requirements:
Python (Jupyter Notebook)
AI/ML models for NLP
Entity Resolution
Link Analysis
Graph data science and Neo4j for fraud detection and AML
Data visualization and modeling
Clearance Required: Current Secret required
Must reside in the DC Metro area and be able to attend regularly scheduled in-person meetings
Required Education:
Bachelor's Degree in a quantitative discipline (e.g., statistics, mathematics, operations research, engineering, or computer science) with 3-6 years of relevant work experience.
Or, a Master's Degree with 2+ years of relevant work experience
Preferred Skills and Qualifications:
AWS foundational technologies
GenAI technologies
About Elder Research, Inc
People Centered. Data Driven
Elder Research is a fast-growing consulting firm specializing in predictive analytics. Having been in the data mining business for almost 30 years, we pride ourselves on our ability to find creative, cutting-edge solutions to real-world problems. We work hard to provide the best value to our clients and allow each person to contribute their ideas and put their skills to use immediately.
Our team members are passionate, curious, life-long learners. We value humility, servant-leadership, teamwork, and integrity. We seek to serve our clients and our teammates to the best of our abilities. In keeping with our entrepreneurial spirit, we want candidates who are self-motivated with an innate curiosity and strong teamwork.
Elder Research believes in continuous learning and community - each week the entire company attends a Tech Talk and each office location provides lunch. Elder Research provides a supportive work environment with established parental, bereavement, and PTO policies. By prioritizing a healthy work-life balance - with reasonable hours, solid pay, low travel, and extremely flexible time off - Elder Research enables and encourages its employees to serve others and enjoy their lives.
Elder Research, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
Elder Research is a Government contractor and many of our positions require US Citizenship.
Data Scientist
Job Title: Data Scientist
Location(s): Arlington, VA & Washington DC (DUE TO CUSTOMER REQUIREMENTS YOU MUST BE LOCATED IN THE GREATER WASHINGTON DC AREA)
Workplace: Hybrid
Clearance Required: Must have an IRS Public Trust with a Full Background Investigation
Requisition Type: Pipeline. This is not a current opening but rather a talent pipeline for Data Analysts of all levels with an IRS Public Trust with background investigation who are interested in supporting the Government customer. When new IRS Data Scientist positions become available, this talent community will be the first place our recruiters look to fill the roles. Candidates with profiles in this talent community can also expect to receive regular updates on relevant new job opportunities. Be sure to also apply to any relevant current funded/awarded openings, if available.
Position Overview:
As a Data Scientist, you will work directly with clients, managers, and technical staff to understand business needs, develop technical plans, and deliver data-driven analytical solutions that solve client problems. You will primarily create and deploy predictive models from a wide variety of data sources and types using the latest mathematical and statistical methods and other emerging technologies.
Position Requirements:
Required Clearance: Must have an IRS Public Trust with a Full Background Investigation
Required Education: Bachelor of Science degree in a relevant field (statistics, business, computer science, economics, mathematics, analytics, data science, social sciences, etc.)
Required Skills / Experience:
Exploring, cleaning, and wrangling data to provide value-added insights and identify business problems suitable for Data Science solutions
Experience across the spectrum of designing, developing, testing, and implementing quantitative and qualitative Data Science solutions that are modular, maintainable, resilient to industry shifts, and platform-agnostic
Demonstrated experience using statistical and analytical software (including but not limited to Python, R, and SQL)
Analyzing events across government, financial industry, law enforcement, and other similar data environments, prioritizing them by compliance and business risk, and displaying the results in evidence-driven monitoring and decision support tools.
Experience with quantitative statistical approaches to anomaly detection for identifying non-compliance risk, fraud, and cyber threats, using data discovery, predictive analytics, trend analysis, assessment, and appropriate contemporary and emerging analytical techniques (see the sketch after this list).
Ability to conduct rigorous quantitative analysis on very large data sets to develop insights and actionable recommendations, drawing on previous experience developing strategies, performing assessments and gap analyses, and making recommendations
Contribute to meetings and discussions with clients and co-workers to refine understanding of the business problem at hand
Trying different predictive modeling approaches to identify the best fit for a given set of business understanding, available data, and project timeline
Writing modular, understandable, re-usable code within an iterative development process that includes team-based code review, client discussions, and end-user training
Applying statistical tests for robustness, sensitivity, and significance to test and validate supervised and unsupervised models
Preparing presentations, writing reports (technical and non-technical), and working to communicate technical results to clients with varying levels of analytic sophistication
Ability to work autonomously in a collaborative, dynamic, cross-functional environment
Demonstrated business savvy with solid interpersonal and communication skills (written and verbal).
Experience with design and delivery, including proficiency in gathering requirements and translating business requirements into technical specifications.
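As a minimal, illustrative sketch of the unsupervised anomaly detection referenced in the required skills above, the snippet below scores a hypothetical feature table with scikit-learn's IsolationForest; the column names, contamination rate, and values are invented and do not represent any IRS dataset or prescribed workflow.

```python
# Minimal sketch: flag unusual records in a hypothetical feature table.
# Column names, the contamination rate, and the data itself are illustrative only.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical feature table; in practice this would come from SQL/ETL pipelines.
df = pd.DataFrame({
    "amount":            [120.0, 75.5, 99_000.0, 80.2, 110.9, 64.0],
    "filings_last_year": [4, 3, 41, 4, 5, 3],
    "days_since_last":   [30, 45, 1, 28, 33, 60],
})
features = ["amount", "filings_last_year", "days_since_last"]

# Unsupervised isolation forest: records that are easy to isolate score as anomalous.
model = IsolationForest(contamination=0.1, random_state=0)
df["flag"] = model.fit_predict(df[features])                  # -1 = anomalous, 1 = normal
df["anomaly_score"] = model.decision_function(df[features])   # lower = more anomalous

print(df.sort_values("anomaly_score").head())
```

Flags like these would typically feed evidence-driven monitoring and decision support tools rather than being acted on directly, with analysts reviewing the highest-risk records first.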
Preferred Skills and Qualifications:
Bachelor of Science degree in a relevant field (statistics, business, computer science, economics, mathematics, analytics, data science, social sciences, etc.)
1+ years of experience in data science, data analytics, or a related technical field
Prior computer programming experience, preferably in a language such as Python or R
Experience with data exploration, data munging, data wrangling, and model development in R or Python
Experience using version control (e.g., git, svn, Mercurial) and collaborative development
Basic understanding of relational database structure and SQL
Humble and willing to learn, teach, and share ideas
Experience engaging and interacting with clients, stakeholders and subject matter experts (SMEs) to understand, gather and document requirements
Comfortable learning new things and working outside of your comfort zone
Technical mindset: you are not afraid of math!
Must currently possess a Public Trust clearance
Travel to and work on-site at client locations, both local and non-local. The number of days at the client site varies depending on project requirements.
Desired Skills
Advanced degree (MS or PhD) in a relevant field (e.g., statistics, computer science, business, mathematics, analytics, data science, engineering, physics, social sciences, management information systems, or decision science)
Programming techniques (e.g. pair programming, code reviews)
Experience with containerization and environment management (venv or conda)
Experience with Natural Language Processing (NLP) and advanced text mining techniques
Experience with graph analytics and network analysis
Experience with one or more technologies such as R Shiny, Databricks, AWS, Azure
Experience applying robust, established, and emerging quantitative and statistical techniques, with knowledge of the underlying theoretical and architectural frameworks of applied analytics and statistical analysis, including: sampling considerations and survey design (construct validity, measurement bias, internal and external validity), statistical weighting techniques, approaches to outliers and missing data, exploratory data analysis, cross-sectional analysis, and longitudinal forecasting
Experience implementing data science processes in a remote, austere environment, including using bash
Experience with business intelligence and data visualization platforms (Power BI, Tableau, etc.,)
Understanding of the data analytics lifecycle (e.g. CRISP-DM)
About Elder Research, Inc
People Centered. Data Driven
Elder Research is a fast-growing consulting firm specializing in predictive analytics. Having been in the data mining business for almost 30 years, we pride ourselves on our ability to find creative, cutting-edge solutions to real-world problems. We work hard to provide the best value to our clients and allow each person to contribute their ideas and put their skills to use immediately.
Our team members are passionate, curious, life-long learners. We value humility, servant-leadership, teamwork, and integrity. We seek to serve our clients and our teammates to the best of our abilities. In keeping with our entrepreneurial spirit, we want candidates who are self-motivated with an innate curiosity and strong teamwork.
Elder Research believes in continuous learning and community - each week the entire company attends a Tech Talk and each office location provides lunch. Elder Research provides a supportive work environment with established parental, bereavement, and PTO policies. By prioritizing a healthy work-life balance - with reasonable hours, solid pay, low travel, and extremely flexible time off - Elder Research enables and encourages its employees to serve others and enjoy their lives.
Elder Research, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
Elder Research is a Government contractor and many of our positions require US Citizenship.