Data Scientist
Data scientist job in Columbia, MD
Data Scientist - Transit Data Focus | Columbia, MD (on-site/hybrid) | Contract (6 months)
Employment type: Contract
Duration: 6 Months
Justification: To manage and analyze customer databases, AVA (automated voice announcement), and schedule data for predictive maintenance and service planning.
Experience Level: 3-5 years
Job Responsibilities:
Collect, process, and analyze transit-related datasets including customer databases, AVA (automated voice announcement) logs, real-time vehicle data, and schedule data.
Develop predictive models and data-driven insights to support maintenance forecasting, service planning, and operational optimization.
Design and implement data pipelines to integrate, clean, and transform large, heterogeneous transit data sources.
Perform statistical analysis and machine learning to identify patterns, trends, and anomalies relevant to transit service performance and reliability.
Collaborate with transit planners, maintenance teams, and IT staff to translate data insights into actionable business strategies.
Monitor data quality and integrity; implement data validation and cleansing processes.
Technical Skills & Qualifications:
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Transportation Engineering, or a related quantitative field.
3-5 years of experience working as a data scientist or data analyst, preferably in a transit, transportation, or public sector environment.
Strong proficiency in Python or R for data analysis, statistical modeling, and machine learning.
Experience with SQL for database querying, manipulation, and data extraction.
Familiarity with transit data standards such as GTFS, AVL/CAD, APC (Automated Passenger Counters), and AVA systems (see the sketch after this list).
Experience with data visualization tools such as Power BI or equivalent.
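For context on the GTFS item above: GTFS feeds are plain CSV text files (stops.txt, trips.txt, stop_times.txt, etc.), so a first-pass schedule analysis needs little more than pandas. A minimal, illustrative sketch that computes median headway per stop from stop_times.txt; the file path is an assumption, not from the posting:

    import pandas as pd

    # stop_times.txt is a standard GTFS file with trip_id, arrival_time,
    # departure_time, stop_id, and stop_sequence columns.
    stop_times = pd.read_csv("gtfs/stop_times.txt").dropna(subset=["departure_time"])

    # GTFS times are HH:MM:SS strings and may exceed 24:00:00 for
    # overnight service, so convert to seconds since midnight by hand.
    def to_seconds(hms):
        h, m, s = (int(part) for part in hms.split(":"))
        return h * 3600 + m * 60 + s

    stop_times["dep_s"] = stop_times["departure_time"].map(to_seconds)

    # Headway = gap between consecutive departures at the same stop.
    stop_times = stop_times.sort_values(["stop_id", "dep_s"])
    stop_times["headway_s"] = stop_times.groupby("stop_id")["dep_s"].diff()

    print(stop_times.groupby("stop_id")["headway_s"].median().head())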
Senior Data Scientist
Data scientist job in McLean, VA
Purpose:
As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution.
Key Result Areas and Activities:
Design, build, and deploy AI/ML solutions using modern cloud and data platforms.
Lead data science projects across industries, ensuring alignment with business goals.
Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases.
Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark.
Communicate findings effectively through data visualizations and stakeholder presentations.
Drive adoption of MLOps practices for scalable and reliable model deployment.
Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing.
Roles & Responsibilities
Essential Skills
5+ years of experience in applied AI and machine/deep learning.
Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib (see the sketch after this list).
Proficiency in Python and SQL for model development and data manipulation.
Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI.
Familiarity with cloud platforms: AWS, Azure, and GCP.
Solid foundation in advanced statistical methods and probabilistic analysis.
Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management.
Experience with visualization tools like Tableau, Power BI, or similar.
Familiarity with ML workflow orchestration and MLOps practices.
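As a concrete illustration of the MLflow/PySpark/Spark MLlib stack listed above, here is a minimal sketch of training a Spark MLlib pipeline and logging it to MLflow; the Delta path, feature columns, and label name are hypothetical placeholders, not a prescribed workflow:

    import mlflow
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.format("delta").load("/mnt/silver/features")  # hypothetical path

    train, test = df.randomSplit([0.8, 0.2], seed=42)
    pipeline = Pipeline(stages=[
        # Feature and label column names are assumptions for this sketch.
        VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features"),
        LogisticRegression(featuresCol="features", labelCol="label"),
    ])

    with mlflow.start_run():
        model = pipeline.fit(train)
        auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
        mlflow.log_metric("test_auc", auc)       # track the evaluation metric
        mlflow.spark.log_model(model, "model")   # version the fitted pipeline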
Desirable Skills
Experience with LLMs (Large Language Models) and agentic AI workflows.
Familiarity with modern data platforms like Snowflake.
Exposure to real-time data processing in cloud-native environments.
Contributions to open-source AI projects or publications in data science communities.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field.
Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation, and interpersonal skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Senior Data Engineer - Data Intelligence
Data scientist job in Baltimore, MD
Hybrid Role
Job Title: Data Intelligence - Engineer, Data Sr
Project Type: Contract
Duration of the project: 12 Months
We are looking for candidates with 5+ years of experience in Ab Initio administration; this is an administrator role, not a developer role.
Must have 2+ years of experience with AWS, including working with Ab Initio in the AWS cloud.
Must have solid experience building, installing, and configuring Ab Initio.
Must have AWS EKS containerization experience; the role involves migrating Linux instances to AWS EKS.
Ab Initio Lead Infrastructure
This position is an Ab Initio administrator role, not a developer position. The Senior Ab Initio ETL Administrator is responsible for administering the ETL tool (Ab Initio) and migrating Ab Initio infrastructure to the cloud. The candidate will support the implementation of a data integration/data warehouse platform for data products on-premises and in the AWS cloud, including EKS containerization for Ab Initio.
6-8 years' experience
At least 6 years of experience with all tasks involved in administering the ETL tool (Ab Initio)
Experience managing migration or infrastructure-build projects without supervision
At least 6 years of experience with, and advanced knowledge of, the Ab Initio Graphical Development Environment (GDE), Metadata Hub, and Operational Console
Experience with Ab Initio, AWS EKS, S3, DynamoDB, MongoDB, PostgreSQL, RDS, and DB2
Experience creating big data ETL pipelines from on-premises systems to data factories, data lakes, and cloud storage such as EBS or S3
DevOps (CI/CD pipeline) experience; Jenkins preferred
Advanced knowledge of UNIX and SQL
Experience managing the Metadata Hub (MDH) and Operational Console and troubleshooting environmental issues that affect these components
Experience with scripting and automation, such as designing and developing automated ETL processes and architectures and unit testing ETL code
Experience working on break/fix and continuous-development items, including review and inspection of production changes
Perform code reviews of ETL code developed by the development team and guide issue resolution
Service-Oriented Architecture (SOA) knowledge and demonstrated knowledge of testing environments, processes, and best practices
Demonstrated experience working in an enterprise environment with cross-team interaction, collaboration, and policies
Strong testing skills
Excellent problem-solving skills
Strong analytical skills
Excellent verbal and written communications skills
Familiar with structured programming techniques
Must be able to perform assigned tasks with minimum supervision
Strong documentation skills
Experience working in an Agile environment is a plus
Software:
Applies and implements best practices for data auditing, scalability, reliability, and application performance.
AWS certification is a plus
Extensive UNIX (AIX) or Linux and scripting experience
Extensive SDLC experience with some development or systems programming experience
Ability to analyze and troubleshoot mid-tier/infrastructure issues.
Very strong verbal and written communication skills (Critical)
Ability to facilitate technical requirements gathering and design sessions
Collaborate and interpret business and technical needs
Excellent attention to detail and quality work products (Critical)
Strong customer service skills with internal and external customers (Critical)
Must be able to perform assigned tasks with minimum supervision (Critical)
Strong analytical and documentation skills
Excellent time management ability. (Critical)
Skills Preferred
Experience with DevOps or IaaS
AIX or Linux
LDAP
EIAM (Identity Access Management)
Ab Initio Admin and Architect
Data Engineer / Big data Engineer
Data scientist job in McLean, VA
Immediate need for a talented Data Engineer / Big Data Engineer. This is a 12-month contract opportunity with long-term potential, located in McLean, VA (hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93504
Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning (see the sketch after this list).
Develop backend and automation tools using Golang and/or Python as needed.
Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
Perform root-cause analysis and implement automation to prevent recurring issues.
Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
Ensure compliance with enterprise governance, data quality, and cloud security standards.
Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.
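To make the Snowflake window-function responsibility concrete, here is a minimal sketch using the snowflake-connector-python driver; the connection parameters, table, and columns are placeholders, and QUALIFY is Snowflake's idiom for filtering on a window function:

    import snowflake.connector  # pip install snowflake-connector-python

    # All connection parameters below are placeholders.
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="...",
        warehouse="ANALYTICS_WH", database="ANALYTICS", schema="PUBLIC",
    )

    # Rank each customer's orders by recency, then keep only the latest
    # order per customer via QUALIFY (Snowflake-specific shorthand).
    query = """
        SELECT customer_id, order_id, order_ts,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_ts DESC) AS rn
        FROM orders
        QUALIFY rn = 1
    """
    for row in conn.cursor().execute(query):
        print(row)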
Key Requirements and Technology Experience:
Proficiency in Python with experience building scalable data pipelines or ETL processes.
Strong hands-on experience with Spark/PySpark for distributed data processing.
Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
Experience with Golang for scripting, backend services, or performance-critical processes.
Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
Familiarity with CI/CD workflows, Git, and automated testing.
Our client is a leader in the banking and financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Senior Data Engineer
Data scientist job in Washington, DC
Job Title: Senior Data Engineer
Only U.S. citizens or green card holders (USC/GC) will be considered
Contract
The Senior Data Engineer will join a newly established Data Services team within a mission-driven nonprofit organization that delivers critical services to government customers. As the organization expands its use of data to drive decision-making across multiple business units, the Data Services team is building a modern data lakehouse platform to provide clean, timely, and accurate data. This role will play a foundational part in designing and operationalizing enterprise data pipelines and infrastructure that empower stakeholders with reliable insights.
Summary
The Senior Data Engineer supports the development and maintenance of a system-wide analytics platform that enables secure, scalable access to enterprise data. This role owns end-to-end engineering efforts across ingestion, transformation, orchestration, and delivery using Azure and Microsoft Fabric technologies. The engineer will develop and optimize ETL / data pipelines, implement medallion architecture patterns, and ensure enterprise data assets are structured, integrated, and governed to meet the needs of diverse business units and external parties.
Key Responsibilities
Data Pipeline Design & Development
• Design, develop, and implement end-to-end data ingestion and processing pipelines using Azure and Microsoft Fabric tools.
• Transform raw bronze data into silver (cleaned) and gold (curated, analytics-ready) layers following the Medallion architecture (see the sketch after this list).
• Develop code and tooling to process and transform data into enterprise data models.
• Implement existing ETL frameworks, patterns, and standards used across the enterprise.
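A minimal PySpark/Delta Lake sketch of the bronze-to-silver-to-gold flow referenced above; the table paths, column names, and cleaning rules are illustrative assumptions, not the organization's actual models:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Bronze: raw records, stored as-delivered.
    bronze = spark.read.format("delta").load("Tables/bronze_orders")

    # Silver: cleaned and conformed (duplicates dropped, types cast, nulls filtered).
    silver = (bronze
              .dropDuplicates(["order_id"])
              .withColumn("order_ts", F.to_timestamp("order_ts"))
              .filter(F.col("order_id").isNotNull()))
    silver.write.format("delta").mode("overwrite").save("Tables/silver_orders")

    # Gold: curated, analytics-ready aggregate for reporting.
    gold = silver.groupBy("customer_id").agg(F.count("*").alias("order_count"))
    gold.write.format("delta").mode("overwrite").save("Tables/gold_customer_orders")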
Orchestration, Automation & Operations
• Schedule, orchestrate, and monitor automated and semi-automated data pipelines to ensure reliability and quality.
• Build automated workflows supporting ingestion and transformation observability.
• Ensure technical correctness, timeliness, and high-quality delivery of data services.
Data Modeling, Integration & Governance
• Serve as a subject matter expert on enterprise data sources, structures, and definitions.
• Build and maintain data relationships, mappings, and linkages within the Enterprise Data Warehouse (EDW).
• Perform integration of data assets to support analytics and operational needs across multiple mission-driven departments.
• Create and manage large-scale data warehouses and lakehouse components to ensure efficient data access and retrieval.
Collaboration & Communication
• Partner with analysts, business units, and data consumers to support exploration and decision-making.
• Communicate clearly and effectively with technical and non-technical stakeholders, including senior leadership and customers.
• Champion continuous improvement, accountability, and evidence-based decision-making within the Data Services team.
Qualifications
• Minimum five years of experience working with cloud data platforms such as Azure, Snowflake, AWS Redshift, or Databricks.
• Minimum six years of experience in SQL-based data processing.
• Minimum five years of application development experience using Python.
• At least two years of experience developing ETL pipelines within Microsoft Fabric.
• Strong working knowledge of data warehousing and data lake concepts, including medallion or similar architectural patterns.
• Demonstrated ability to deliver high-quality work on schedule and uphold team accountability standards.
• Proven track record of clear and continuous communication across technical and business audiences.
• Commitment to process improvement and leveraging information to enhance organizational performance.
Technical Skills
• Python: Intermediate to advanced proficiency for data processing, automation, and pipeline development.
• SQL: Intermediate to advanced proficiency for transformations, modeling, and performance optimization.
• Experience with Azure Data Factory, Microsoft Fabric Data Engineering, Delta Lake, or similar technologies.
• Familiarity with orchestration frameworks, metadata management, and modern ETL/ELT patterns.
Lead Data Engineer
Data scientist job in Reston, VA
Imagine working at Intellibus to engineer platforms that impact billions of lives around the world. With your passion and focus we will accomplish great things together!
Our Platform Engineering Team is working to solve the Multiplicity Problem. We are trusted by some of the most reputable and established FinTech Firms. Recently, our team has spearheaded the Conversion & Go Live of apps that support the backbone of the Financial Trading Industry.
Are you a data enthusiast with a natural ability for analytics? We're looking for skilled Data/Analytics Engineers to fill multiple roles for our exciting new client. This is your chance to shine, demonstrating your dedication and commitment in a role that promises both challenge and reward.
What We Offer:
A dynamic environment where your skills will make a direct impact. The opportunity to work with cutting-edge technologies and innovative projects. A collaborative team that values your passion and focus.
We are looking for Engineers who can
Design, develop, and maintain data pipelines to ingest, transform, and load data from various sources into Snowflake.
Implement ETL (Extract, Transform, Load) processes using Snowflake features such as Snowpipe, Streams, and Tasks (see the sketch after this list).
Design and implement efficient data models and schemas within Snowflake to support reporting, analytics, and business intelligence needs.
Optimize data warehouse performance and scalability using Snowflake features like clustering, partitioning, and materialized views.
Integrate Snowflake with external systems and data sources, including on-premises databases, cloud storage, and third-party APIs.
Implement data synchronization processes to ensure consistency and accuracy of data across different systems.
Monitor and optimize query performance and resource utilization within Snowflake using query profiling, query optimization techniques, and workload management features.
Identify and resolve performance bottlenecks and optimize data warehouse configurations for maximum efficiency.
Work on Snowflake modeling (roles, databases, schemas) and ETL tools, with cloud-driven skills
Work on SQL performance measurement, query tuning, and database tuning
Work with the SQL language and cloud-based technologies
Set up the RBAC model at the infrastructure and data levels.
Work on data masking/encryption/tokenization and data wrangling/data pipeline orchestration (tasks).
Set up AWS S3/EC2, configure external stages, and configure SQS/SNS
Perform data integration, e.g., with MSK Kafka Connect and partners such as Delta Lake (Databricks)
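A sketch of the Streams/Tasks pattern named above, issued from Python via the Snowflake connector; the object names, warehouse, and schedule are assumptions. A stream captures changes on a raw table, and a task periodically merges them downstream only when the stream has data:

    import snowflake.connector  # pip install snowflake-connector-python

    # Connection parameters are placeholders.
    conn = snowflake.connector.connect(account="...", user="...", password="...")
    cur = conn.cursor()

    # Stream: change-data-capture on the raw landing table.
    cur.execute("CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders")

    # Task: runs on a schedule, but only when the stream has new rows.
    cur.execute("""
        CREATE OR REPLACE TASK load_orders
          WAREHOUSE = etl_wh
          SCHEDULE = '5 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
        AS INSERT INTO curated_orders
           SELECT order_id, customer_id, order_ts FROM raw_orders_stream
    """)

    cur.execute("ALTER TASK load_orders RESUME")  # tasks are created suspended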
Key Skills & Qualifications:
ETL - Experience with ETL processes for data integration.
SQL - Strong SQL skills for querying and data manipulation
Python - Strong command of Python, especially in AWS Boto3, JSON handling, and dictionary operations
Unix - Competent in Unix for file operations, searches, and regular expressions
AWS - Proficient with AWS services including EC2, Glue, S3, Step Functions, and Lambda for scalable cloud solutions
Database Modeling - Solid grasp of database design principles, including logical and physical data models, and change data capture (CDC) mechanisms.
Snowflake - Experienced in Snowflake for efficient data integration, utilizing features like Snowpipe, Streams, Tasks, and Stored Procedures.
Airflow - Fundamental knowledge of Airflow for orchestrating complex data workflows and setting up automated pipelines
Bachelor's degree in Computer Science, or a related field is preferred. Relevant work experience may be considered in lieu of a degree.
Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and stakeholders.
Proven leadership abilities, with experience mentoring junior developers and driving technical excellence within the team.
We work closely with
Data Wrangling
ETL
Talend
Jasper
Java
Python
Unix
AWS
Data Warehousing
Data Modeling
Database Migration
RBAC model
Data migration
Our Process
Schedule a 15 min Video Call with someone from our Team
4 Proctored GQ Tests (< 2 hours)
30-45 min Final Video Interview
Receive Job Offer
If you are interested in reaching out to us, please apply and our team will contact you within the hour.
Azure Data Modeler
Data scientist job in Washington, DC
Azure Data Modeler - Budget Transformation Project
Our client is embarking on a major budget transformation initiative and is seeking an experienced Azure Data Modeler to support data architecture, modeling, and migration activities. This role will play a critical part in designing and optimizing data structures as the organization transitions to SAP. Experience with SAP is preferred, but strong ERP data experience in any platform is also valuable.
Responsibilities
Design, develop, and optimize data models within the Microsoft Azure environment.
Support data architecture needs across the budget transformation program.
Partner with cross-functional stakeholders to enable the transition to SAP (or other ERP systems).
Participate in data migration planning, execution, and validation efforts.
Work collaboratively within SAFe Agile teams and support sprint activities.
Provide off-hours support as needed for critical tasks and migration windows.
Engage onsite in Washington, DC up to three days per week.
Required Qualifications
Strong hands-on expertise in data architecture and data model design.
Proven experience working with Microsoft Azure (core requirement).
Ability to work flexibly, including occasional off-hours support.
Ability to be onsite in Washington, DC as needed (up to 3 days/week).
Preferred Qualifications
Experience with SAP ECC or exposure to SAP implementations.
Experience with other major ERP systems (Oracle, Workday, etc.).
SAFe Agile certification.
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support.
Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Data Scientist
Data scientist job in Annapolis, MD
The primary role of the **Data Scientist** is to drive mission-focused insights from complex datasets. This role involves managing, modeling, and interpreting large-scale government data holdings to support decision-making and operational success. You will combine expertise in mathematics, statistics, computer science, and domain-specific knowledge to deliver actionable conclusions and communicate them effectively to both technical and non-technical audiences.
**Position Responsibilities**
+ Build and assess analytic models tailored to mission-specific needs
+ Communicate principal conclusions clearly, using mathematics, statistics, and computer science methods
+ Develop reproducible workflows and ensure data integrity
+ Develop qualitative and quantitative methods for exploring and assessing datasets in varying states of organization and cleanliness.
+ Organize, clean, and curate large datasets for analysis
+ Present complex technical findings in a way that is accessible to technical and non-technical stakeholders
**Experience**
+ Bachelor's degree in Mathematics, Applied Mathematics, Applied Statistics, Machine Learning, Data Science, Computer Science or related field or equivalent combination of education and experience.
+ Minimum 3 years' relevant work experience.
+ Experience with software development working with Python in a Unix environment.
+ Experience using the Unix command line.
+ Practical knowledge of Python machine learning and data visualization
+ Practical knowledge of data ETL, including loading data from SQL, CSV, JSON, Excel, etc.; web scraping (Beautiful Soup, Scrapy, etc.); and data wrangling/cleaning (see the sketch below)
+ Proficiency in statistical packages using any of the following: Python, R, Stata, SPSS, etc.
+ An active TS/SCI with polygraph
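As an illustration of the ETL and web-scraping item above, a small, self-contained sketch with pandas, requests, and Beautiful Soup; the URL, file paths, and column names are hypothetical:

    import pandas as pd
    import requests
    from bs4 import BeautifulSoup

    # Load tabular data from common formats (paths are placeholders).
    csv_df = pd.read_csv("data/records.csv")
    json_df = pd.read_json("data/records.json")

    # Scrape rows out of an HTML table (URL and columns are placeholders).
    html = requests.get("https://example.com/table-page", timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
            for tr in soup.find_all("tr") if tr.find_all("td")]
    scraped_df = pd.DataFrame(rows, columns=["col_a", "col_b"])

    # Basic wrangling: join sources and drop records missing the key.
    merged = csv_df.merge(json_df, on="id", how="left").dropna(subset=["id"])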
**Additional Requirements**
+ Experience using the Atlassian Tool Suite.
+ Experience with development of any of the following: Hadoop, Pig, MapReduce, or HDFS
+ Working knowledge of other object-oriented programming languages such as Java or C++
+ Working knowledge of front-end data visualization libraries (e.g., D3.js, Raphael.js)
**Physical Demands**
+ Ability to work in an office environment (Constant)
+ Required to sit and stand for long periods; talk, hear, and use hands and fingers to operate a computer and telephone keyboard (Frequent)
**Salary Range: $107,000 to $212,000**
The AV pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Determination of official compensation or salary relies on several factors including, but not limited to, level of position, job responsibilities, geographic location, scope of relevant work experience, educational background, certifications, contract-specific affordability, organizational requirements, alignment with local internal equity as well as alignment with market data.
**Clearance Level**
Top Secret with Poly
**ITAR Requirement:**
_This position requires access to information that is subject to compliance with the International Traffic in Arms Regulations ("ITAR") and/or the Export Administration Regulations ("EAR"). In order to comply with the requirements of the ITAR and/or the EAR, applicants must qualify as a U.S. person under the ITAR and the EAR, or a person to be approved for an export license by the governing agency whose technology comes under its jurisdiction. Please understand that any job offer that requires approval of an export license will be conditional on AeroVironment's determination that it will be able to obtain an export license in a time frame consistent with AeroVironment's business requirements. A "U.S. person" according to the ITAR definition is a U.S. citizen, U.S. lawful permanent resident (green card holder), or protected individual such as a refugee or asylee. See 22 CFR § 120.15. Some positions will require current U.S. citizenship due to contract requirements._
**Benefits**: AV offers an excellent benefits package including medical, dental, and vision coverage, a 401K with company matching, a 9/80 work schedule, and a paid holiday shutdown. For more information about our company benefit offerings please visit: ********************************* .
We also encourage you to review our company website at ******************** to learn more about us.
Principals only need apply. NO agencies please.
**Who We Are**
Based in California, AeroVironment (AVAV) is a global leader in unmanned aircraft systems (UAS) and tactical missile systems. Founded in 1971 by celebrated physicist and engineer, Dr. Paul MacCready, we've been at the leading edge of technical innovation for more than 45 years. Be a part of the team that developed the world's most widely used military drones and created the first submarine-launched reconnaissance drone, and has seven innovative vehicles that are part of the Smithsonian Institution's permanent collection in Washington, DC.
Join us today in developing the next generation of small UAS and tactical missile systems that will deliver more actionable intelligence to our customers so they can proceed with certainty - and succeed.
**What We Do**
Building on a history of technological innovation, AeroVironment designs, develops, produces, and supports an advanced portfolio of unmanned aircraft systems (UAS) and tactical missile systems. Agencies of the U.S. Department of Defense and allied military services use the company's hand-launched UAS to provide situational awareness to tactical operating units through real-time, airborne reconnaissance, surveillance, and target acquisition.
_We are proud to be an EEO/AA Equal Opportunity Employer, including disability/veterans. AeroVironment, Inc. is an Equal Employment Opportunity (EEO) employer and welcomes all qualified applicants. Qualified applicants will receive fair and impartial consideration without regard to race, sex, color, religion, national origin, age, disability, protected veteran status, genetic data, sexual orientation, gender identity or other legally protected status._
**ITAR**
U.S. Citizenship required
**About AV:**
**AV isn't for everyone. We hire the curious, the relentless, the mission-obsessed. The best of the best.**
We don't just build defense technology; we redefine what's possible. As the premier autonomous systems company in the U.S., AV delivers breakthrough capabilities across air, land, sea, space, and cyber. From AI-powered drones and loitering munitions to integrated autonomy and space resilience, our technologies shape the future of warfare and protect those who serve.
Founded by legendary innovator Dr. Paul MacCready, AV has spent over 50 years pushing the boundaries of what unmanned systems can do. Our heritage includes seven platforms in the Smithsonian, but we're not building history; we're building what's next.
**If you're ready to build technology that matters, with speed, scale, and purpose, there's no better place to do it than AV.**
**Careers at AeroVironment:** *****************************************
Data Scientist (U.S. Intelligence Community)
Data scientist job in Fort Meade, MD
Job Title: Data Scientist (U.S. Intelligence Community)
Workplace: On-site (hands-on access to high-impact data systems needed)
Clearance Required: TS/SCI with Polygraph
Why Join Elder Research
* Mission-Driven Work: Support the U.S. Intelligence Community by building secure, scalable data systems that enable real-time insights.
* Career Stability: We hire for careers, not contracts; our work is growing.
* People-Focused Culture: We value work-life balance, collaboration, and professional growth.
* Ownership Mindset: Every employee is a shareholder through our Employee Stock Ownership Plan (ESOP).
* Competitive Pay & Benefits:
* Typical salary range: $117k-$196K, depending on experience and contract labor category
* 401(k) with employer contribution
* Health, dental, and vision coverage
* Paid parental leave and flexible PTO
Ready to make an Impact? Message our recruiting team directly at:
Clinical Data Scientist
Data scientist job in Silver Spring, MD
DRT Strategies delivers expert management consulting and information technology (IT) solutions to large federal agencies, the U.S. Navy, state and local government and commercial clients in health care, technology, and financial services industries.
The three letters of our name, DRT, stand for Driving Resolution Together, which is the core philosophy on which the company was founded. That is, we collaborate with our clients to solve their most pressing challenges - together.
We are problem solvers dedicated to your success, combining Fortune 500 experience with small business responsiveness. We have established a reputation with our clients as a forward-thinking consulting firm with demonstrated success in implementing solutions that lead to meaningful results. Our world-class consultants unite people to work collaboratively to achieve project goals and make vision a reality.
Project Description:
This project is designed to support FDA CDER Office of Quality Surveillance (OQS) with developing and implementing modern and innovative techniques to achieve comprehensive surveillance and estimate the state of quality. The objective of this project is to conduct thorough research and implement efficient methods for ingesting publicly available regulatory data. The contractor will provide support for data extraction and analytics that are needed to enhance OQS's dossier program.
Job Summary:
The Clinical Data Scientist serves as a technical specialist leveraging expertise in data science, computer science, mathematics, and statistics to transform complex clinical data into actionable insights.
This role is responsible for designing and implementing advanced data modeling and data mining processes, creating algorithms, and developing predictive models to uncover hidden patterns within the data. Key analytical activities include techniques such as time-series analysis, trend analysis, and machine learning (including clustering, natural language processing, and outlier identification) to maximize data value, foster a proactive approach, and effectively identify high-risk facilities.
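As one hedged illustration of the outlier-identification piece (not FDA's actual method), an Isolation Forest can surface facilities whose quality metrics look anomalous; the file and column names below are invented for the sketch:

    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Hypothetical facility-level quality metrics.
    df = pd.read_csv("facility_metrics.csv")
    features = df[["defect_rate", "recall_count", "inspection_score"]]

    # Isolation Forest flags points that are easy to isolate (outliers);
    # contamination is the assumed share of anomalous facilities.
    model = IsolationForest(contamination=0.05, random_state=0)
    df["flag"] = model.fit_predict(features)  # -1 = outlier, 1 = inlier

    print(df.loc[df["flag"] == -1, "facility_id"])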
Responsibilities:
Data Extraction & Engineering:
Utilize data mining techniques and web scraping tools to extract valuable information from various sources.
Manage and process data effectively by applying knowledge of data types, formats, architecture, and pipeline optimization techniques.
Data Analysis & Interpretation:
Apply expertise in analytics and statistics (including BI, reporting, and data transformation) to analyze and interpret data, driving business insights.
Perform hands-on work with data analysis, validation, and quality assurance.
Strategy & Problem Solving (Technical Strategy):
Apply analytical and critical thinking skills to identify new issues, trends, and opportunities from data.
Research and identify root causes for data issues.
Identify gaps in the system and communicate them to solution project leads or program managers.
Operational Excellence & Delivery:
Ensure the reliability, efficiency, and quality of data services and pipelines.
Ensure the timely delivery of reports, manage conflicting priorities, and meet customer expectations.
Communication:
Prepare and deliver presentations.
Perform periodic check-ins for status and feedback.
Required Experience:
Minimum 5 years of experience in analytics, biopharmaceutical manufacturing, quality analytics, biotechnology R&D, or related research programs.
Technical Expertise:
Minimum 3 years of Full Stack Data Scientist experience, including proven ability to apply expertise in web scraping, time-series analysis, and machine learning techniques.
Required expertise in Python, R (including R Shiny), PySpark/Spark, and advanced SQL.
Strong Git and collaborative development experience.
Familiarity with a broad range of ML techniques and their implementation.
Preferred:
Databricks experience.
Analytical & Communication Skills:
Excellent analytical skills with an emphasis on data exploration and data wrangling.
Excellent written and verbal communication skills.
Preferred Experience:
FDA experience
PhD preferred
Biological science/pharmaceutical background.
Education & Training:
MS in a technical, quantitative field (e.g., Computer Science, Engineering, Statistics, or related).
Salary Range:
$95,000-$125,000
Salary commensurate with experience.
DRT Strategies, Inc. (DRT) follows the guidelines outlined by the Equal Employment Opportunity Commission (EEOC) to provide all employees and qualified applicants employment without regard to race, color, religion, sex (including pregnancy, childbirth, or related conditions, transgender status, and sexual orientation), national origin, age, genetic information, disability, protected veteran status, or any other protected characteristic under federal, state, or local law.
Reasonable accommodations for applicants and employees with disabilities will be provided. If a reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Human Resources by emailing ********************, or by dialing ************.
For additional information, please review the Know Your Rights: Workplace Discrimination is Illegal, E-Verify (English), E-Verify (Spanish). Right to Work (English), Right to Work (Spanish).
Please be aware of recruitment fraud where malicious individuals might pose as DRT Strategies. Only job postings and emails from drtstrategies.com are authentic and legitimate communications regarding DRT Strategies employment opportunities. Please contact Human Resources at ******************** if you believe you have received a fraudulent email.
Data Scientist TS/SCI (DC/NoVA)
Data scientist job in Washington, DC
Virtualitics, Inc. is an advanced analytics company that helps enterprises and governments make smarter business decisions, faster, with ready-to-use AI that can be understood by analysts and business leaders alike. Our AI platform allows organizations to rapidly process complex data into powerful multi-dimensional graph visualizations and predict future business outcomes with clear, explainable no-code AI modeling. Virtualitics puts AI into use across the enterprise with enhanced analytics that are easily integrated into the flow of work. Our patented technology is based on over 11 years of research at the California Institute of Technology and the NASA Jet Propulsion Laboratory.
The platform has been deployed across Department of Defense agencies, Federal Civilian agencies, and Fortune 500 companies. We're proud to have been recently named as one of the Air Force's "big bets" on emerging technology and Built In's top companies to work for in 2022!
Our fast-growing team is looking for a skilled, motivated and reliable Data Scientist TS/SCI to help grow our company.
What you'll do
Use the capabilities of the Virtualitics AI Platform to deploy production solutions for our customers.
Work with customer data and own the development of an AI solution from ideation to production
Leverage integrations with big data frameworks (e.g. Databricks) as needed to develop solutions for customers.
Your qualifications
Must have an active TS/SCI security clearance and be willing to work from an SCIF.
Must be located in or near Washington DC/Northern Virginia to be able to access relevant SCIFs.
Regular travel anticipated to client sites/military bases.
Degree in Computer Science.
3+ years of experience writing production-ready code in Python.
3+ years of experience with Python Data Stack: pandas, numpy, sklearn, TensorFlow, PyTorch, matplotlib, etc.
1 year of experience deploying a machine learning model in a production environment.
1-2 years of experience with SQL/NoSQL or other database systems (elasticsearch, graph databases, etc.).
3 years of experience with Git (or an alternative version control tool).
1 year of experience with Docker and/or Kubernetes.
A strong sense of ownership and accountability.
Excellent written and verbal communication skills in English
Pluses
Demonstrated experience in leading the development efforts for a project from a SCIF.
Experience with Cyber Analytics, PCAP and network monitoring, CVEs and Cyber Vulnerabilities, etc.
1 year of experience integrating applications with at least one of Spark, Dask, Snowpark or Kafka.
1 year of experience with technologies like task schedulers (e.g. Celery, Airflow, Prefect, etc.) and web-app development stacks (e.g. Flask/Django)
You will be required to obtain and maintain an interim and/or final security clearance. Failure to obtain and maintain the required level of clearance may result in the withdrawal of a job offer or removal.
What We Offer You
We are a team of ambitious individuals who are passionate about creating a revolutionary software company. At Virtualitics, you will have a significant career development opportunity and a chance to contribute to one of the fastest-growing startups in Southern California. Our benefits include highly competitive pay, equity, fully paid health, vision, and dental insurance for you + your dependents, and unlimited PTO.
Virtualitics is committed to building a company where every individual can bring their full impact and reach their fullest potential. Our mission is to build a diverse and inclusive environment where talented people of all cultures, ages, perspectives, opinions, education, backgrounds, races, gender identity, religions, orientations, abilities, and beliefs can grow and thrive with Virtualitics.
Data Scientist
Data scientist job in Fort Meade, MD
A data scientist will develop machine learning, data mining, statistical, and graph-based algorithms to analyze and make sense of datasets;
prototype several candidate algorithms and decide on a final model based on suitable performance metrics (see the sketch after this list);
build models or develop experiments to generate data when training or example datasets are unavailable;
generate reports and visualizations that summarize datasets and provide data-driven insights to customers;
partner with subject matter experts to translate manual data analysis into automated analytics;
implement prototype algorithms within production frameworks for integration into analyst workflows.
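A minimal sketch of the prototype-then-select workflow described above, using scikit-learn cross-validation on synthetic data; the models and metric are illustrative choices, not the program's prescribed ones:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a labeled training set.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Prototype several candidate models and compare on a suitable metric (F1 here).
    candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
    for model in candidates:
        scores = cross_val_score(model, X, y, cv=5, scoring="f1")
        print(type(model).__name__, round(scores.mean(), 3))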
A Bachelor's degree from an accredited college or university in a quantitative discipline (e.g., statistics, mathematics, operations research, engineering or computer science).
At least one year of experience analyzing datasets and developing analytics.
One year of experience programming with data analysis software such as R, Python, SAS, or MATLAB.
Experience in software development, cloud development, analyzing datasets, or developing descriptive, predictive, and prescriptive analytics preferred.
Not sure if you're a fit? Submit your resume and we will contact you via email to let you know!
Data Scientist
Data scientist job in Washington, DC
The Columbia Group (TCG), is a technical services support company which has a successful 50+ year history of providing the United States government with financial management support services, information technology, program management, acquisition, logistics management, engineering, and design. As a full-time employee of TCG, you will receive a comprehensive benefits package to include a quality health insurance program (medical, drug, vision and dental) for you and your family, group life insurance, group short and long-term disability insurance, a 401K profit sharing retirement plan, and professional development opportunities. You will accrue 'Paid Time Off' (PTO) to be used towards vacation or sick leave, in addition to receiving eleven (11) paid federal holidays.
We are seeking Junior to Senior Level Data Scientists to join our team in locations such as: Washington, DC Metropolitan and San Diego, CA. Upon award, the ideal candidate will:
Develop and implement a set of techniques or analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualization software. They may also perform the following duties:
Apply data mining, data modeling, natural language processing, and machine learning to extract and analyze information from large structured and unstructured datasets.
Visualize, interpret, and report data findings.
Create dynamic data reports.
Data Scientist (FSP)
Data scientist job in Herndon, VA
Job Description
Red Rock Government Services is a leading software engineering company recognized for its exceptional support to the intelligence community. With a proven track record of delivering innovative and mission-critical solutions, Red Rock specializes in developing secure, scalable, and cutting-edge technologies tailored to meet the complex needs of intelligence operations. The company's expertise in advanced analytics, cloud computing, and artificial intelligence enables it to empower agencies with enhanced decision-making capabilities and operational efficiency. Red Rock's commitment to excellence, agility, and collaboration solidifies its reputation as a trusted partner in safeguarding national security and advancing intelligence objectives.
This position requires a current and active TS/SCI with Full Scope Polygraph security clearance. This position does not have the ability to sponsor candidates for clearance processing.
Red Rock is seeking a Data Scientist to join our team of diverse and qualified professionals. The role focuses on supporting the Sponsor's mission by delivering advanced language training to ensure proficiency across a broad spectrum of languages for the Sponsor's employees.
Responsibilities:
Works closely with the Sponsor to gather requirements and advise on AWS infrastructure design, development, and deployment of Cloud resources.
Designs, tests, and implements log aggregation in support of Cloud and AWS systems.
Designs, tests, and implements search and visualization infrastructure in support of AWS systems.
Works with vendors to develop and deploy Cloud based solutions to the learning environment in AWS.
Acts as a liaison between the Customer and Vendor Contacts to troubleshoot AWS when deploying new resources.
Recommends new technologies for use in the cloud environment (AWS).
Communicates IT requirements between management and technical entities.
Qualifications:
Experience in risk management and ability to identify project risks and facilitate the development and implementation of mitigation strategies
Experience implementing and integrating AWS solutions.
Experience in gathering requirements from vendor contacts and customers.
Experience using Linux in the AWS environment.
Experience with scripting and web programming technologies required to support web-based learning systems, such as PHP, Perl, Java, JScript, or PowerShell.
Experience deploying third party software products.
Experience with Software Configuration Management (SCCM).
Experience working with desktop and network hardware.
Location: Herndon, VA
Pay and Benefits:
Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Paid Leave and Retirement.
Commitment to Diversity:
All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law.
Data Scientist Solution Architect / Data Analyst
Data scientist job in McLean, VA
Established in 1991, Collabera has been a leader in IT staffing for over 22 years and is one of the largest diversity IT staffing firms in the industry. As a half-billion-dollar IT company with more than 9,000 professionals across 30+ offices, Collabera offers comprehensive, cost-effective IT staffing and IT services. We provide services to Fortune 500 and mid-size companies to meet their talent needs with high-quality IT resources through Staff Augmentation, Global Talent Management, Value Added Services through CLASS (Competency Leveraged Advanced Staffing & Solutions), Permanent Placement Services, and Vendor Management Programs.
Collabera recognizes true potential of human capital and provides people the right opportunities for growth and professional excellence.
Job Description
Location: McLean, VA 22102
Duration: 6+ months (could go beyond)
Description:
• The candidate for this position will provide analytical support to the Data Science Division in the Cyber, Cloud and Data Science Service line.
• The successful candidate will support the enterprise through designing solutions for data collection, preparation, and model building to develop end-to-end analytic lifecycles to synthesize actionable information.
• The candidate will determine appropriate tools and methods for specific projects to design the analytics solution, either as a standalone system or as analytics embedded inside an overall solution.
• The candidate will be working in teams that include enterprise architects, intelligence analysts, data and visualization experts, software developers, and system engineers, and will have an excellent opportunity to broaden skills.
• The candidate must have experience in developing and deploying solutions for customers.
Applicant must have skills applicable to one or more of the following areas:
• Data wrangling, cleansing, and analytics
• The data science process
• Presenting work to both technical and non-technical audiences
• Statistical evaluation
• Machine Learning, Predictive Modeling
Qualifications
Applicant should have skills in one or more of the following areas:
• Machine learning technologies, including Natural Language Processing (NLP) - e.g., Jaro-Winkler, Damerau-Levenshtein, Metaphone, string manipulation, etc. (see the sketch after this list)
• R libraries: base, MASS, plyr, rpart, randomForest, maps/mapproj/rworldmap, zoo, adabag, animation, ggplot, igraph, jsonlite, mclust, pROC, hexbin
• Python libraries: numpy, scipy, matplotlib, scikit-learn, etc.
• SPSS, Oracle Data Miner, SAS Base, DataMiner, Dataflux, STAT
• Entity Resolution - Basis Technology Rosette Name Indexer (RNI), Global Name Recognition (GNR), Probabilistic Matching Engine, Trillium Software (TS) Quality
• Apache Hadoop 2.x, MapReduce, Elastic Search 1.4.x, Sqoop, Pig
• Familiarity with libraries such as ATS SSO, ATS-common framework, Highchart, Jersey, JTidy, one2team, iText, Spring/Spring STS, JSON, Network Markov Clustering, Topic Modeling Tool, Naïve Bayes, Apache Commons, Google's Guava, Apache Log4j, Open CSV, SecondString
• Working in interdisciplinary teams.
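For the string-matching NLP techniques named above (Jaro-Winkler, Damerau-Levenshtein, Metaphone), one common open-source implementation is the jellyfish Python library; a small sketch, noting that function names vary slightly across jellyfish versions:

    import jellyfish  # pip install jellyfish

    a, b = "Katherine", "Catherine"

    # Jaro-Winkler: similarity in [0, 1], boosting matches on common prefixes.
    print(jellyfish.jaro_winkler_similarity(a, b))

    # Damerau-Levenshtein: edit distance that also counts transpositions.
    print(jellyfish.damerau_levenshtein_distance(a, b))

    # Metaphone: phonetic codes; equal codes suggest names that sound alike.
    print(jellyfish.metaphone(a), jellyfish.metaphone(b))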
Additional Information
To discuss this in more detail, please contact:
Himanshu Prajapat
**********************************
************
Data Scientist
Data scientist job in Washington, DC
Clearance Level Required:
Requires a Top Secret (TS/SCI) Clearance with a Polygraph.
The Mission:
Square Peg Technologies is looking for data scientists who enjoy leveraging large sets of data to tell stories. In this role, you will apply modern machine learning and deep learning approaches to import, clean, transform, and model data that supports our nation's core intelligence decision-makers. You will be responsible for designing or presenting conclusions gained from analyzing data using statistical tools like Tableau, Python, R, and others.
Basic Qualifications
Work as the lead data strategist, identifying and integrating new datasets that can be leveraged through our product capabilities and work closely with the engineering team to strategize and execute the development of data products
Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
Research and devise innovative statistical models for data analysis
Communicate findings to all stakeholders
Enable smarter business processes and implement analytics for meaningful insights
Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
Analyze data for trends and patterns, and interpret data with a clear objective in mind
Implement analytical models into production by collaborating with software developers and machine learning engineers.
Preferred Qualifications
Bachelor's degree in statistics, applied mathematics, or related discipline
2+ years experience in data science
Proficiency in data mining, mathematics, and statistical analysis
Advanced pattern recognition and predictive modeling experience
Experience with Excel, PowerPoint, Tableau, SQL, and programming languages (e.g., Java/Python, SAS)
Master's degree in statistics, applied mathematics, or a related discipline
Square Peg Technologies Company Information
Square Peg Technologies is a boutique technology consulting firm that specializes in data solutions using the latest technologies in analytics, data science, data engineering, and data management. Our mission is to develop and integrate cutting-edge technology solutions that inspire our customers to push the fields of science and technology forward. Here at Square Peg, you will not get lost in a crowd. We value each person's contribution, celebrate wins together, and invest in the personal and professional development of our employees. Along with a strong positive culture that allows employees to thrive and grow, we also offer top-notch benefits including a large amount of vacation, annual bonuses, 401k matching, paid training, and 100% health care coverage.
At Square Peg Tech, we aim to be at the forefront of technology and innovation. Our ideal candidate must be forward-leaning and eager to engage with customers, academia, and industry leaders on the future applications of AI/ML, as well as best practices in advanced analytics. With a firm footprint in the Intelligence Community, Square Peg Tech is looking to expand its capabilities into the Department of Defense, NASA, and other prominent government agencies.
Senior Data Scientist - Consultant
Data scientist job in Washington, DC
The World Justice Project (WJP) is seeking a full-time Senior Data Scientist - Consultant to support its Data Analytics Unit. This position will contribute to WJP's global research on rule of law issues by supporting data analysis and report production processes. The role involves designing and implementing machine learning models, developing AI agents, and working with large language models (LLMs) to support a range of research projects.
WJP's global research and data team is based in Washington, DC, and this position will be remote.
About the World Justice Project
The World Justice Project (WJP) was launched in 2009 to promote justice and the rule of law globally. It is best known for its Rule of Law Index, the world's leading source on the performance of 143 countries in relation to absence of corruption, open government, civil and criminal justice, fundamental rights, and other rule of law standards. Drawing on the Index methodology, WJP also undertakes in-depth studies on particular countries and topics, such as access to justice, criminal justice, environmental governance, and corruption. WJP puts this data and research into action to strengthen the rule of law through convening, engagement, and support of a global network, including through the biennial World Justice Forum and World Justice Challenge. At a time when the rule of law is facing fundamental challenges throughout the world, WJP's mission to strengthen respect for justice, peace, and fundamental rights is more pressing than ever. For more information, please visit worldjusticeproject.org.
Responsibilities
Design and implement machine learning models and statistical analyses to support WJP research initiatives
Lead technical infrastructure modernization including containerization and cloud deployment strategies
Develop and maintain agentic AI systems and LLM-powered tools for data analysis workflows
Provide technical mentorship to other team members on advanced analytics techniques
Collaborate with research teams to translate complex data questions into analytical solutions
Qualifications
Technical Skills
Advanced proficiency in Python and R for statistical analysis and machine learning
Experience with LLMOps and deployment of large language model applications
SQL for database management and complex queries (bonus)
Git/GitHub for version control and collaborative development
Docker and containerization technologies
Understanding of cloud computing concepts (AWS, Azure, or GCP)
Basic HTML/CSS for report customization
Understanding of RESTful APIs and web scraping techniques
Preferred Qualifications
Strong experience with scikit-learn and other ML frameworks
Knowledge of agentic systems and AI workflow automation
Strong statistical background, including dimensionality reduction techniques (PCA, factor analysis, MDS, t-SNE); see the sketch after this list
Expertise in data imputation methods and handling missing data
Experience with latent variable modeling (SEM, IRT, mixture models)
Master's degree or PhD in Computer Science, Statistics, Data Science, or related field
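As a small illustration of the dimensionality-reduction item above, a scikit-learn sketch on synthetic data shaped like an index (e.g., 143 countries by 40 indicators); the data here is random and purely illustrative:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(143, 40))  # synthetic stand-in: countries x indicators

    X_std = StandardScaler().fit_transform(X)

    # PCA: linear reduction; explained_variance_ratio_ shows how much
    # structure each retained component captures.
    pca = PCA(n_components=5).fit(X_std)
    print(pca.explained_variance_ratio_.round(3))

    # t-SNE: nonlinear 2-D embedding, useful for visualizing clusters.
    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_std)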
How to Apply
To apply for this position, please visit our employment portal: ************************************************************
The following materials are required for this position:
Resume
Writing sample (may be in Spanish or English)
Contact information for three professional references
Please submit all application materials in one PDF document.
Applications will be reviewed on a rolling basis. Due to the high volume of applications received, we are only able to follow up directly with candidates selected for interviews. No phone calls please.
Senior Data and Wildfire Scientist
Data scientist job in Washington, DC
Job Title: Senior Data and Wildfire Scientist
Terms: Full Time 37.5 Hours Per Week
Reports to: Director of Climate Finance
About Us:
The American Forest Foundation (AFF) unlocks the power of family forests as a climate solution while helping thousands of landowners care for their land and improve forest health. Through its Family Forest Carbon Program, AFF expands access to the voluntary carbon market for family forest owners from all walks of life while producing high-quality forest carbon credits to help companies reach their net zero goals. The organization's strategic direction seeks to enable family forests to capture and store one billion tonnes of carbon by 2050.
We do not do this alone. AFF works across a broad coalition of conservation partners, corporations, and policymakers to equip family forest owners with financial and technical support to implement sustainable forest management practices on their lands and activate their forests as a critical tool in the fight against climate change.
AFF is in a mode of complex problem solving and rapid iteration. All teams at AFF are working to discover solutions to the planet's most pressing climate issues through developing and iterating viable and scalable models for impact. AFF is driven by values of our shared purpose, measurable and verifiable results, and learning for continuous improvement to tackle big conservation challenges.
About the Role:
The Senior Data and Wildfire Scientist will play a central role in strengthening the scientific and methodological foundations of AFF's programs, particularly the Fire Adapted Forests avoided emissions methodology and the Family Forest Carbon Program (FFCP). This position leverages expertise in data informatics, advanced statistical analysis, and remote sensing to build digital monitoring, reporting, and verification (dMRV) systems that ensure scientific credibility, market alignment, and scalability. The Senior Scientist will also support AFF's work to better quantify ecosystem co-benefits - including biodiversity, water, and resilience outcomes - alongside carbon benefits, providing holistic and credible accounting of conservation impact. This role sits within AFF's Innovation Team, at the intersection of exploratory, data-driven approaches, interdisciplinary science, methodology, and applied program delivery, helping to ensure AFF remains a leader in family forest climate solutions.
Responsibilities:
Provide leadership for technical and science-based development of new carbon practices, project types, and quantification of co-benefits (including biodiversity, water, and community resilience) to expand AFF's methodologies and program offerings.
Lead the development and implementation of risk quantification and mitigation strategies aligned with AFF's climate strategy, with a particular focus on market mechanisms to address wildfire risk and avoided emissions.
Design and refine digital monitoring, reporting, and verification (dMRV) frameworks that integrate remote sensing, GIS, informatics, and field data across AFF's practice areas.
Analyze large datasets using advanced statistical and geospatial tools to monitor carbon stocks, wildfire emissions avoidance, and ecological change over time (an illustrative sketch follows this list).
Conduct high-level research on forest carbon dynamics, including sequestration, avoided emissions, and forest health, with direct application to FFCP methodologies.
Collaborate with AFF's wildfire, carbon, and science teams to ensure scientific rigor in methodological applications and monitoring frameworks.
Support field teams in data collection protocols and provide QA/QC to ensure accuracy and consistency.
Prepare grant proposals, scientific papers, reports, and technical documentation to support funding and share outcomes.
Engage with external stakeholders - including researchers, standard-setting bodies, and market actors - to represent AFF and advance innovation in carbon accounting and dMRV.
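Purely as a hedged illustration of the geospatial analysis described in the list above, and not AFF's actual methodology: a minimal NDVI computation with rasterio and NumPy. The file name and band order (red = band 3, NIR = band 4) are assumptions that vary by sensor and product.

    import numpy as np
    import rasterio

    # Read red and near-infrared bands from a multiband GeoTIFF (band order assumed).
    with rasterio.open("scene.tif") as src:
        red = src.read(3).astype("float32")
        nir = src.read(4).astype("float32")

    # NDVI = (NIR - red) / (NIR + red), guarding against division by zero.
    ndvi = np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))
    print("mean NDVI:", float(np.nanmean(ndvi)))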
Candidate Requirements:
Ph.D. or equivalent experience in forest ecology, data science, remote sensing, or related field.
Demonstrated expertise in:
Remote sensing and GIS applications (satellite, aerial, LiDAR).
Advanced statistical analysis, informatics, and/or machine learning.
Forest carbon dynamics, wildfire risk modeling, and ecological monitoring.
Proven track record in methodological development for carbon accounting or environmental monitoring.
Familiarity with carbon markets, ecosystem monitoring, and MRV/dMRV systems.
Strong record of scientific publication and ability to communicate results to technical and non-technical audiences.
Ability to collaborate effectively across interdisciplinary teams and with external partners.
Core Competencies:
Agility - Thriving in volatile, uncertain, complex and ambiguous environments by anticipating and responding to changes with swift, focused and flexible actions.
Collaborating with Others - Working together with others in a cooperative and supportive manner to achieve shared goals.
Ensuring Accountability - Holding yourself and others to high standards of accountability, creating an infrastructure and transparent organization culture that supports and measures personal and organizational responsibility and accountability.
Problem Solving - Identifying problems and the solutions to them.
Job Function Competencies:
Scientific Rigor - Ability to design and implement methods that withstand peer and market scrutiny.
Data-Driven Analysis - Skilled at managing large, complex datasets and generating actionable insights.
Innovation - Applies creative approaches to methodological development and co-benefit quantification.
Communication - Able to translate technical outputs into accessible insights for funders, landowners, and partners.
What's attractive to the right candidate?
You have the unique and exciting opportunity to work with amazing colleagues, partners and landowners to help connect forest landowners with technical and financial assistance to better steward their land and achieve vital landscape-scale conservation impacts.
Your contributions and leadership will be appreciated. Our staff is passionate, dedicated and good at what they do, and will be deeply grateful for your efforts to create and preserve an environment that is both fun and fair.
We offer a flexible work environment that respects your life outside of work.
Salary is commensurate with experience.
AFF offers a comprehensive and competitive benefits package.
Contact us to apply
We know there are great candidates who may not check all these boxes, and we also know you might bring important skills that we haven't considered. If that applies to you, don't hesitate to apply and tell us about yourself.
****************************************************
We are committed to creating a diverse work environment and proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, veteran status or any other basis protected by law.
Cloud Data Engineer- Databricks
Data scientist job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune 500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI (an illustrative sketch follows this list).
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt.
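As a rough, non-authoritative sketch of the pipeline work described above: a small PySpark job that deduplicates raw events and writes a daily aggregate as a Delta table. The paths, schema, and aggregation are invented for illustration; on Databricks, Delta is the default table format.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-events").getOrCreate()

    raw = spark.read.json("/mnt/raw/events/")  # hypothetical landing path
    clean = (
        raw.dropDuplicates(["event_id"])                # assumed unique key
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
    )
    daily = clean.groupBy("event_date").agg(F.count("*").alias("events"))
    daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_events/")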
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research-focused mindset
Excellent analytical, presentation, reporting, documentation, and interpersonal skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Data Scientist
Data scientist job in Annapolis, MD
The primary role of the Data Scientist is to drive mission-focused insights from complex datasets. This role involves managing, modeling, and interpreting large-scale government data holdings to support decision-making and operational success. You will combine expertise in mathematics, statistics, computer science, and domain-specific knowledge to deliver actionable conclusions and communicate them effectively to both technical and non-technical audiences.
Position Responsibilities
Build and assess analytic models tailored to mission-specific needs
Communicate principal conclusions clearly, using mathematics, statistics, and computer science methods
Develop reproducible workflows and ensure data integrity
Develop qualitative and quantitative methods for exploring and assessing datasets in varying states of organization and cleanliness.
Organize, clean, and curate large datasets for analysis
Present complex technical findings in a way that is accessible to technical and non-technical stakeholders
Experience
Bachelor's degree in Mathematics, Applied Mathematics, Applied Statistics, Machine Learning, Data Science, Computer Science or related field or equivalent combination of education and experience.
3+ years of relevant work experience.
Experience with software development using Python in a Unix environment.
Experience using the Unix command line.
Practical knowledge in Python Machine Learning and Data Visualization
Practical knowledge of data ETL, including loading data from SQL, CSV, JSON, Excel, and similar sources; web scraping (Beautiful Soup, Scrapy, etc.); and data wrangling/cleaning (an illustrative sketch follows this list)
Proficiency in statistical packages using any of the following: Python, R, STATA, SPSS, etc.
An active TS/SCI with polygraph
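A minimal sketch of the ETL tasks referenced in the list above, assuming hypothetical files and a shared timestamp column: it loads the same logical dataset from SQL, CSV, and JSON sources and standardizes it with pandas.

    import json
    import sqlite3
    import pandas as pd

    # Load one logical dataset from three common sources (all file names hypothetical).
    sql_df = pd.read_sql("SELECT * FROM readings", sqlite3.connect("readings.db"))
    csv_df = pd.read_csv("readings.csv")
    with open("readings.json") as f:
        json_df = pd.json_normalize(json.load(f))

    # Standardize: combine, parse timestamps, drop unparseable rows and duplicates.
    combined = pd.concat([sql_df, csv_df, json_df], ignore_index=True)
    combined["timestamp"] = pd.to_datetime(combined["timestamp"], errors="coerce")
    combined = combined.dropna(subset=["timestamp"]).drop_duplicates()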
Additional Requirements
Experience using the Atlassian Tool Suite.
Experience with development of any of the following: Hadoop, Pig, MapReduce, or HDFS
Working knowledge of other object-oriented programming languages such as Java or C++
Working knowledge of front-end data visualization libraries (e.g., D3.js, Raphael.js)
Physical Demands
Ability to work in an office environment (Constant)
Required to sit and stand for long periods; talk, hear, and use hands and fingers to operate a computer and telephone keyboard (Frequent)
Salary Range: $107,000 to $212,000
The AV pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Determination of official compensation or salary relies on several factors including, but not limited to, level of position, job responsibilities, geographic location, scope of relevant work experience, educational background, certifications, contract-specific affordability, organizational requirements, alignment with local internal equity as well as alignment with market data.
Clearance Level
Top Secret with Poly
ITAR Requirement:
This position requires access to information that is subject to compliance with the International Traffic in Arms Regulations (“ITAR”) and/or the Export Administration Regulations (“EAR”). In order to comply with the requirements of the ITAR and/or the EAR, applicants must qualify as a U.S. person under the ITAR and the EAR, or a person to be approved for an export license by the governing agency whose technology comes under its jurisdiction. Please understand that any job offer that requires approval of an export license will be conditional on AeroVironment's determination that it will be able to obtain an export license in a time frame consistent with AeroVironment's business requirements. A “U.S. person” according to the ITAR definition is a U.S. citizen, U.S. lawful permanent resident (green card holder), or protected individual such as a refugee or asylee. See 22 CFR § 120.15. Some positions will require current U.S. Citizenship due to contract requirements.
Benefits: AV offers an excellent benefits package including medical, dental, vision, 401K with company matching, a 9/80 work schedule, and a paid holiday shutdown. For more information about our company benefit offerings please visit: **********************************
We also encourage you to review our company website at ******************** to learn more about us.
Principals only need apply. NO agencies please.
Who We Are
Based in California, AeroVironment (AVAV) is a global leader in unmanned aircraft systems (UAS) and tactical missile systems. Founded in 1971 by the celebrated physicist and engineer Dr. Paul MacCready, we've been at the leading edge of technical innovation for more than 45 years. Be a part of the team that developed the world's most widely used military drones, created the first submarine-launched reconnaissance drone, and has seven innovative vehicles in the Smithsonian Institution's permanent collection in Washington, DC.
Join us today in developing the next generation of small UAS and tactical missile systems that will deliver more actionable intelligence to our customers so they can proceed with certainty - and succeed.
What We Do
Building on a history of technological innovation, AeroVironment designs, develops, produces, and supports an advanced portfolio of unmanned aircraft systems (UAS) and tactical missile systems. Agencies of the U.S. Department of Defense and allied military services use the company's hand-launched UAS to provide situational awareness to tactical operating units through real-time, airborne reconnaissance, surveillance, and target acquisition.
We are proud to be an EEO/AA Equal Opportunity Employer, including disability/veterans. AeroVironment, Inc. is an Equal Employment Opportunity (EEO) employer and welcomes all qualified applicants. Qualified applicants will receive fair and impartial consideration without regard to race, sex, color, religion, national origin, age, disability, protected veteran status, genetic data, sexual orientation, gender identity or other legally protected status.
ITAR
U.S. Citizenship required