Delivery Consultant - Data Architect, AWS Professional Services
Amazon 4.7
Data engineer job in Arlington, VA
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team
at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS
solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer
success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.
You will lead customer-focused project teams as a technical leader and perform hands-on development of Data &
Analytics solutions with exceptional quality.
Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in
architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work
closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration
strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies,
and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best
practices, optimizing performance, and managing risks throughout the project.
The AWS Professional Services organization is a global team of experts that help customers realize their desired business
outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to
execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help
customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our
global specialty practices, which cover a variety of solutions, technologies, and industries.
Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Leading project teams in designing and implementing end-to-end, large-scale, complex, and secure Data & Analytics solutions on AWS, tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective Data & Analytics migration and modernization strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies, ensuring compliance with industry standards and governance while aligning data solutions with business strategies.
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts
About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Basic Qualifications
- 7+ years of technical specialist, design and architecture experience
- 5+ years of database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience
- 7+ years of consulting, design and implementation of serverless distributed solutions experience
- 5+ years of software development with object oriented language experience
- 3+ years of cloud based solution (AWS or equivalent), system, network and operating system experience
- 7+ years of external or internal customer facing, complex and large scale project management experience
- 5+ years of cloud architecture and solution implementation experience
- Bachelor's degree, or 7+ years of professional or military experience
Preferred Qualifications
- Advanced degree in a technology field, or AWS Professional-level certification
- Knowledge of AWS services including compute, storage, networking, security, databases, machine learning, and serverless technologies
- Knowledge of security and compliance standards including HIPAA and GDPR
- Experience in performance optimization and cost management for cloud environments
- Experience communicating technical concepts to diverse audiences in pre-sales environments
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $138,200/year in our lowest geographic market up to $239,000/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************* . This position will remain posted until filled. Applicants should apply via our internal or external career site.
$138.2k-239k yearly 6d ago
EY-Parthenon - Strategy and Execution - Growth Platforms - Data Scientist - Director
Ernst & Young Oman 4.7
Data engineer job in McLean, VA
Location: Atlanta, Boston, Chicago, Dallas, Denver, Detroit, Houston, Los Angeles, McLean, New York, Hoboken, Philadelphia, San Francisco, Seattle
At EY, we're all in to shape your future with confidence.
We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
EY-Parthenon - EY Growth Platforms - Data Scientist - Director
The opportunity
EY-Parthenon's unique combination of transformative strategy, transactions and corporate finance delivers real-world value - solutions that work in practice, not just on paper. Benefiting from EY's full spectrum of services, we've reimagined strategic consulting to work in a world of increasing complexity.
With deep functional and sector expertise, paired with innovative AI-powered technology and an investor mindset, we partner with CEOs, Boards, Private Equity and Governments every step of the way - enabling you to shape your future with confidence.
Within the EY-Parthenon service line, the EY Growth Platforms Data Scientist Director will collaborate with Business Leaders, AI/ML Engineers, Project Managers, and other team members to design, build, and scale innovative AI solutions that power strategic growth initiatives and create enterprise value for F500 clients.
Your key responsibilities
The EY Growth Platforms Data Scientist Director will play a critical role in building and scaling our multi-source data pipelines: sourcing, merging, and transforming the data assets that power high-visibility client engagements. This role will architect, clean, transform, and enrich data to power AI/ML-driven agents and dashboards, and will collaborate with business leaders and C-level executives, getting hands-on experience solving some of the most interesting and mission-critical business questions with data.
Skills and attributes for success
· Lead ingestion and ETL design for structured and semi-structured data (CSV, JSON, APIs, Flat Files).
· Understand schema, data quality, and transformation logic for multiple sources on a client-by-client basis, such as NAIC, NOAA, Google Trends, EBRI, Cannex, LIMRA, and internal client logs.
· Design normalization and joining pipelines across vertical domains (insurance + consumer + economic data).
· Build data access layers optimized for ML (feature stores, event streams, vector stores).
· Define and enforce standards for data provenance, quality checks, logging, and version control.
· Partner with AI/ML and Platform teams to ensure data is ML- and privacy-ready (HIPAA, SOC2, etc.).
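The pipeline work described above (ingesting CSV/JSON sources, normalizing schemas, and joining across domains) can be sketched in plain Python. This is a minimal illustration only; all file contents, field names, and the join key below are hypothetical, and a real engagement would use the cloud-native tooling named in the qualifications:

```python
import csv
import io
import json

def load_csv(text):
    """Parse a CSV source into a list of dicts (one per row)."""
    return list(csv.DictReader(io.StringIO(text)))

def load_json(text):
    """Parse a JSON source; expect a list of records."""
    return json.loads(text)

def normalize(record, field_map):
    """Rename source-specific fields to a shared target schema."""
    return {target: record.get(source) for target, source in field_map.items()}

def join_on(left, right, key):
    """Inner-join two record lists on a shared key."""
    index = {r[key]: r for r in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

# Two hypothetical sources with differing field names.
policies_csv = "policy_id,state\nP1,VA\nP2,MD\n"
rates_json = '[{"id": "P1", "rate": 0.03}, {"id": "P2", "rate": 0.05}]'

policies = [normalize(r, {"policy_id": "policy_id", "state": "state"})
            for r in load_csv(policies_csv)]
rates = [normalize(r, {"policy_id": "id", "rate": "rate"})
         for r in load_json(rates_json)]

merged = join_on(policies, rates, "policy_id")
```

The normalization step (mapping each source's field names onto one shared schema before joining) is the part the role's "semantic layer" responsibility generalizes.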
To qualify for the role you must have
· A bachelor's degree in Business, Statistics, Economics, Mathematics, Engineering, Computer Science, Analytics, or other related field and 5 years of related work experience; or a graduate degree and approximately 3 years of related work experience.
· Experience in data engineering or hybrid data science roles focused on pipeline scalability and schema management.
· Expertise in cloud-native data infrastructure (e.g., GCP/AWS, Snowflake, BigQuery, Databricks, Delta Lake).
· Strong SQL/Python/Scala proficiency and experience with orchestration tools (Airflow, dbt).
· Experience with merging and reconciling third-party data (public APIs, vendor flat files, dashboards).
· Comfort defining semantic layers and mapping unstructured/dirty datasets into usable models for AI/BI use.
· Basic understanding of ML/feature pipelines and downstream modeling needs.
· The ability and willingness to travel and work in excess of standard hours when necessary.
Ideally, you will have
· Experience working in a startup and/or management/strategy consulting.
· Knowledge of how to leverage AI tools in a business setting, including Microsoft Copilot.
· Collaborative, problem-solving, and growth-oriented mindset.
What we look for
We're interested in passionate leaders with strong vision and a desire to stay on top of trends in the Data Science and Big Data industry. If you have a genuine passion for helping businesses achieve the full potential of their data, this role is for you.
What we offer you
At EY, we'll develop you with future-focused skills and equip you with world-class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more.
We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $205,000 to $235,000. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
Join us in our team-led and leader-enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40-60% of the time over the course of an engagement, project or year.
Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.
Are you ready to shape your future with confidence? Apply today.
EY accepts applications for this position on an ongoing basis.
For those living in California, please click here for additional information.
EY focuses on high-ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1-800-EY-HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at **************************.
$73k-105k yearly est. 5d ago
Federal ML Engineer - Data Strategy & Production
Steampunk
Data engineer job in McLean, VA
An established industry player is seeking a seasoned Machine Learning Engineer to join a dynamic team focused on leveraging AI/ML technology for Federal use cases. In this role, you will collaborate with data scientists and engineers to design and develop innovative machine learning models, analyze large datasets, and optimize solutions for performance and scalability. This position offers a unique opportunity to contribute to impactful projects while working with cutting-edge technologies in a supportive environment that values your growth and contributions. If you are passionate about data and problem-solving, this role is perfect for you!
$79k-108k yearly est. 3d ago
Data Scientist
Indev
Data engineer job in Washington, DC
Data Scientist
Indev is seeking a Data Scientist with 6+ years of experience to support our AI/ML, advanced analytics, and data visualization initiatives. This role will be instrumental in analyzing, visualizing, and communicating data insights that drive key business decisions. The ideal candidate will have strong technical skills in Python, SQL, Tableau, and AWS Cloud, as well as excellent communication and presentation abilities.
This is a direct-hire, full-time position with salary and benefits. Indev provides a comprehensive benefits package, including Medical, Dental, Vision, 401k with match, Flexible Spending Account, and Paid Time Off (PTO)-including vacation and holiday pay.
Your future duties and responsibilities:
Data Analytics and Business Intelligence:
Analyze large and complex data sets to uncover insights and support decision-making.
Partner with stakeholders to define analytical needs and deliver actionable solutions.
Develop and maintain data models to support advanced analytics initiatives.
Data Visualization and Reporting:
Design and create interactive dashboards and reports using Tableau.
Present findings in a clear, concise manner tailored to both technical and non-technical audiences.
Support data storytelling through compelling visualizations that communicate trends and patterns.
Data Extraction and Management:
Write complex SQL queries for data extraction, transformation, and loading (ETL) tasks.
Work with relational and NoSQL databases (e.g., PostgreSQL, MySQL) to support data integrity and consistency.
Automate routine data processes using a variety of scripts and tools (e.g., Python, Airflow, SQL, AWS).
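As a rough illustration of the ETL tasks in this subsection, here is a minimal extract-transform-load pass using Python's built-in sqlite3. Table and column names are invented for the example; the actual role targets PostgreSQL/MySQL and AWS, with orchestration by tools like Airflow:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw staging table, as data might arrive from a source system
# (note the amounts land as text and need casting).
cur.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, region TEXT)")
cur.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, "10.50", "east"), (2, "7.25", "east"), (3, "3.00", "west")])

# Transform + Load: cast types and aggregate into a reporting table
# that a dashboard (e.g., Tableau) could read directly.
cur.execute("CREATE TABLE region_totals (region TEXT, total REAL)")
cur.execute("""
    INSERT INTO region_totals
    SELECT region, SUM(CAST(amount AS REAL))
    FROM raw_orders
    GROUP BY region
""")

totals = dict(cur.execute("SELECT region, total FROM region_totals").fetchall())
```

In an automated workflow, each of these steps would typically be a separate, retryable task rather than one script.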
Cloud Technology Utilization:
Leverage AWS Cloud services (e.g., S3, EC2, RDS, Lambda) for data processing, storage, and retrieval.
Guide and support cloud modernization of data processes, including advanced AI/ML, automation, and governance technologies.
Optimize cloud workflows for data access and analysis efficiency.
Collaboration and Communication:
Collaborate with cross-functional teams, including engineers, analysts, and business stakeholders.
Clearly articulate analytical concepts and results to a wide range of audiences.
Contribute to client-facing presentations and written reports.
Required qualifications to be successful in this role:
Bachelor's Degree in Technology, Business, Mathematics, Computer Science, Information Systems, or other related discipline from an accredited college, university, or academic institution in the United States. (Master's preferred)
6+ years of professional experience in AI/ML, data analysis, visualization, and reporting.
Proficiency in Python for data manipulation and automation.
Strong experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
Proficiency with Tableau for creating dashboards and visual reports.
Experience creating automated data workflows, analytics processes, and processing pipelines.
Advanced knowledge of data visualization and reporting using a variety of software and tools.
Familiarity with AWS Cloud environments and services for data processing and storage.
Familiarity with DevOps practices, Agile methodologies, and modern software development environments.
Familiarity with the entire software development process from end-to-end.
Excellent oral and written communication skills, with the ability to present complex information clearly.
Strong problem-solving and analytical thinking skills.
Existing and active DHS EOD/clearance (ICE preferred).
Additional preferred qualification:
Strong knowledge of change management processes and technologies (e.g., GitHub)
Experience modeling and analyzing large financial and operational data in cloud environments.
Demonstrated experience developing prototypes and proof-of-concept applications or software from end-to-end.
Knowledge of federal data governance standards and compliance frameworks.
Prior experience supporting federal government clients, especially DHS.
Existing and active DHS EOD/clearance (ICE or TSA preferred).
Clearance: Existing and active DHS EOD/clearance (ICE preferred).
About Us: At Indev, we're not just a company; we're a trailblazing force transforming the way technology solutions shape the future. As a dynamic player in the federal government sector, we're on a mission to empower agencies with cutting-edge, innovative technology solutions that drive innovation, efficiency, and progress. Our team thrives on collaboration, innovation, and embracing challenges head-on to create a meaningful impact on the world around us. Let's innovate. *************
$75k-106k yearly est. 2d ago
Principal Data Scientist: ML & Big Data for Card Products
Capital One 4.7
Data engineer job in McLean, VA
A leading financial services company seeks a Data Scientist to leverage advanced analytics and machine learning techniques. The role demands a strong background in Python, AWS, and statistical modeling, requiring a bachelor's degree in a quantitative field and relevant experience in data analytics. The candidate will work cross-functionally to translate complex data into actionable insights for customers. A competitive benefits package is offered.
$69k-89k yearly est. 5d ago
Lead Data Engineer
Ntech Workforce
Data engineer job in Reston, VA
Terms of Employment
• W2 Contract-to-Hire, 12 months
• This is a hybrid schedule in Reston, VA. Candidates must reside in Maryland, Washington, DC, or Virginia.
Role Purpose
The Lead Data Engineer is responsible for designing, building, deploying, and scaling robust data solutions with a primary focus on the AWS Cloud platform. You will lead a team in transforming the company's data platforms to deliver meaningful, data-driven insights while ensuring high reliability, automation, and performance.
Core Responsibilities:
• Data Integration & Pipeline Leadership (50%): Lead the team to design, implement, and monitor all aspects of Data Integration. Develop and enforce best practices for data management, ensuring optimal performance for complex data pipelines.
• AWS Infrastructure & Architecture (10%): Develop and maintain cloud-based infrastructure, including data warehouses (Redshift) and data lakes (S3). Model Data Warehouse solutions specifically tailored for cloud environments.
• Engineering Oversight (10%): Oversee the delivery of high-priority engineering initiatives, managing both long-term projects and ad-hoc ELT/ETL activities.
• Data Extraction & Transformation (10%): Create sophisticated data collection frameworks for structured and unstructured data. Apply advanced ETL/ELT techniques to connect large, diverse datasets.
• Analytics & Quality Control (5%): Execute quantitative analyses to translate data into actionable insights. Design and manage strict quality control procedures across multiple systems.
• Professional Development (5%): Maintain state-of-the-art practices by participating in professional societies and benchmarking new technologies.
Required Qualifications
Education: Bachelor's degree in Computer Science or IT. In lieu of a degree, 4 additional years of relevant experience are required.
Experience:
• At least 10 years of experience leading data engineering and cross-functional teams to implement scalable data platforms for optimal performance.
• Experience developing and updating ETL/ELT scripts, with hands-on experience in application development, relational database layout, and data modeling.
• At least 3 years of data engineering on the AWS platform, preferably leveraging S3, Kafka, Redshift, and other AWS services.
• ETL/data integration design and development experience (multiple projects) using Java/Scala and Spark, preferably on the Cloudera platform.
• Expert knowledge and understanding of SQL.
• Strong Java programming with enterprise-scale experience (using Git, Artifactory, and other DevOps tools).
• Expert knowledge and understanding of database design and implementation concepts.
• Expert knowledge and understanding of data exchange formats.
• Expert knowledge and understanding of data movement concepts.
• Strong technical, analytical, and problem-solving skills to troubleshoot and solve a variety of problems.
• Strong organizational and communication skills, written and verbal, with the ability to handle multiple priorities.
• Able to effectively provide direction to and lead technical teams.
nTech is an equal opportunity employer. All offers of employment are contingent upon pre-employment drug and background screenings. Only candidates who meet all of the above client requirements will be contacted by a recruiter.
$79k-108k yearly est. 1d ago
Staff Data Scientist
Visa 4.5
Data engineer job in Washington, DC
Visa is a world leader in payments and technology, with over 259 billion payments transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose - to uplift everyone, everywhere by being the best way to pay and be paid.
Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.
Job Description
Visa USA Inc., a Visa Inc. company, needs a Staff Data Scientist (multiple openings) in Washington, DC to:
Build and validate predictive models with advanced machine learning techniques and tools to drive business value, interpreting and presenting modeling and analytical results to non-technical audiences.
Conduct research using the latest and emerging modeling technologies and tools to solve new fraud detection business problems, such as merchant acceptance fraud detection, and/or improve existing production models' performance in fraud-related business contexts.
Improve the modeling process through MLOps and automation to drive efficiency and effectiveness.
Partner with a cross-functional team of Product Managers, Data Engineers, Software Engineers, and Platform Engineers to deploy models and/or model innovations into production.
Manage model risks in line with Visa Model Risk Management requirements.
Conduct modeling analysis to address internal and external clients' questions and requests.
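As a toy illustration of the model-validation side of these duties, the snippet below computes precision and recall for a binary fraud classifier's predictions. The labels are made up for the example; a production workflow would evaluate against held-out transaction data at much larger scale:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = fraud)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical ground truth vs. model predictions.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)
```

In fraud detection the precision/recall trade-off matters directly: false positives block legitimate transactions, while false negatives let fraud through.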
Position reports to the Washington, DC office and may allow for partial telecommuting.
Qualifications
Basic Qualifications:
Master's degree in Mathematics, Statistics, Computer Science, Engineering, Economics or related field and 6 years of experience in the job offered or in a Data Scientist or Data Analyst-related occupation.
Position requires the following:
Developing predictive models using Machine learning or Statistical Modeling (e.g. XGBoost).
Scripting in Python, Unix or Linux.
Using SQL for extracting and aggregating data.
Experience with Hadoop or related big data query engine such as PySpark or Scala Spark.
Experience with Model Risk Management.
Additional Information
Worksite: Washington, DC
This is a hybrid position. Hybrid employees can alternate time between both remote and office. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.
Travel Requirements: This position does not require travel.
Mental/Physical Requirements: This position will be performed in an office setting. The position will require the incumbent to sit and stand at a desk, communicate in person and by telephone, and frequently operate standard office equipment, such as telephones and computers.
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
U.S. APPLICANTS ONLY: The estimated salary range for a new hire into this position is $170,872.00 USD to $232,700.00 USD per year, which may include potential sales incentive payments (if applicable). Salary may vary depending on job-related factors which may include knowledge, skills, experience, and location. In addition, this position may be eligible for bonus and equity. Visa has a comprehensive benefits package for which this position may be eligible that includes Medical, Dental, Vision, 401 (k), FSA/HSA, Life Insurance, Paid Time Off, and Wellness Program.
$75k-101k yearly est. 2d ago
Data Engineer - Clearance Required
LMI Consulting, LLC 3.9
Data engineer job in Washington, DC
Job ID: 2025-12923 | Openings: 3 | Category: Data/Analytics | Benefit Type: Salaried High Fringe / Full-Time
We are seeking a Data Engineer to join our team. The Data Engineer will be responsible for designing, developing, and maintaining enterprise database systems in support of various government and defense projects. This role involves implementing ETL processes, managing low-latency application databases, and ensuring database performance and availability.
LMI is a new breed of digital solutions provider dedicated to accelerating government impact with innovation and speed. Investing in technology and prototypes ahead of need, LMI brings commercial-grade platforms and mission-ready AI to federal agencies at commercial speed.
Leveraging our mission-ready technology and solutions, proven expertise in federal deployment, and strategic relationships, we enhance outcomes for the government, efficiently and effectively. With a focus on agility and collaboration, LMI serves the defense, space, healthcare, and energy sectors-helping agencies navigate complexity and outpace change. Headquartered in Tysons, Virginia, LMI is committed to delivering impactful results that strengthen missions and drive lasting value.
Responsibilities
Oversee data architecture for large-scale APIs and web application back-end data stores.
Implement ETL processes to supply application data for usage in web applications.
Manage critical low latency application databases on various platforms.
Develop and maintain packages, scripts, and reusable components for system enhancements and interfaces.
Develop scripts to validate various data on systems and improve database performance.
Troubleshoot and correct code problems identified during ETL and refresh processes.
Develop, implement, and execute quality assurance programs and quality control standards.
Establish database backup/recovery strategy and implement automation of DBA utility functions.
Document database design, data definition language, and data migration strategy.
Engineer extensive database solutions and interfaces for enhanced data requests performance.
Qualifications
Minimum Qualifications:
5+ years of experience designing, developing, and maintaining enterprise database systems.
Strong understanding of SQL (MariaDB/MySQL, Postgres, MS SQL, or Oracle).
Demonstrated experience with ETL pipeline design and implementation using third-party vendor solutions (NNCompass, Pentaho) or developing custom solutions.
Experience with cloud database development and implementation (AWS preferred).
Robust understanding of data migration procedures for backup, restore, as well as schema migration.
Bachelor's degree in Computer Science, Data Science, or a related field. Master's degree preferred but not mandatory.
Ability to obtain and maintain a security clearance (Confidential/NAC required).
Disclaimer:
The salary range displayed represents the typical salary range for this position and is not a guarantee of compensation. Individual salaries are determined by various factors including, but not limited to location, internal equity, business considerations, client contract requirements, and candidate qualifications, such as education, experience, skills, and security clearances.
The salary range for this position is $148,776 to $207,000.
LMI is an Equal Opportunity Employer. LMI is committed to the fair treatment of all and to our policy of providing applicants and employees with equal employment opportunities. LMI recruits, hires, trains, and promotes people without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, pregnancy, disability, age, protected veteran status, citizenship status, genetic information, or any other characteristic protected by applicable federal, state, or local law. If you are a person with a disability needing assistance with the application process, please contact
Colorado Residents: In any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.
$148.8k-207k yearly 4d ago
Data Engineer
Crystal Management | CMIT
Data engineer job in Vienna, VA
CMiT is seeking an experienced Data Engineer to help lead the analysis, documentation, and migration of a ~2TB Oracle database environment to PostgreSQL. This role involves mapping complex data dependencies across multiple Oracle schemas hosted in legacy data centers and migrating them to PostgreSQL RDS in AWS environments.
This role offers the opportunity to play a pivotal role in a mission-critical database modernization initiative, working with enterprise-scale data systems while building expertise in both traditional and cloud-native database technologies.
Responsibilities
Perform comprehensive analysis of Oracle database structures across multiple schemas and systems
Map table relationships, constraints, and dependencies within the ~2TB environment
Document data flows, lineage, and integration points with legacy systems
Create detailed migration documentation and data dictionaries in Confluence
Design optimized PostgreSQL target schemas for performance and maintainability
Develop migration strategies for complex Oracle features (PL/SQL, packages, materialized views)
Create and execute data migration scripts with comprehensive validation procedures
Implement hybrid migration approaches for legacy data center and AWS environments
Design data synchronization strategies between on-premises and cloud environments
Implement secure data transfer mechanisms and connectivity solutions
Ensure data consistency and integrity across distributed PostgreSQL deployments
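The validation duties above can be sketched in code. The snippet below is a minimal, illustrative example of post-migration row-count validation between an Oracle source and a PostgreSQL target; the table names and counts are hypothetical, and in practice each dict would be populated via `SELECT COUNT(*)` queries against the respective databases (e.g., with python-oracledb and psycopg2).

```python
# Sketch: post-migration row-count validation. All table names and
# counts below are illustrative stand-ins for real query results.

def compare_row_counts(source_counts, target_counts):
    """Return (table, source_count, target_count) tuples that mismatch,
    including tables missing entirely from either side."""
    mismatches = []
    for table in sorted(set(source_counts) | set(target_counts)):
        src = source_counts.get(table)
        tgt = target_counts.get(table)
        if src != tgt:
            mismatches.append((table, src, tgt))
    return mismatches

# Hypothetical counts gathered from each environment:
oracle_counts = {"employees": 10_482, "orders": 2_310_775, "audit_log": 88_104_223}
postgres_counts = {"employees": 10_482, "orders": 2_310_775, "audit_log": 88_104_200}

for table, src, tgt in compare_row_counts(oracle_counts, postgres_counts):
    print(f"MISMATCH {table}: oracle={src} postgres={tgt}")
```

Row counts are only a first-pass check; a fuller validation pass would also compare checksums or sampled rows per table.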
Qualifications
Education/Certification Required
A minimum of a Bachelor's Degree. Experience or education requirements may be met through an equivalent number of combined years of education or experience.
Required Qualifications
5+ years Oracle database administration and development experience
3+ years hands-on PostgreSQL experience with advanced features and performance tuning
Experience with large-scale database migrations (multi-TB environments)
Expert-level SQL skills across both Oracle and PostgreSQL platforms
Proficiency in scripting languages (Python, Bash, PowerShell) for automation
Experience with database migration tools (AWS DMS, ora2pg, pgloader)
Knowledge of ETL/ELT tools and data pipeline frameworks
Hands-on experience with AWS database services (RDS, Aurora PostgreSQL)
Comfortable with a strong culture of documentation
Proficiency with Confluence for technical documentation, knowledge management, and knowledge sharing
Ability to create clear data flow diagrams and system architecture documentation
Experience with data lineage tools and dependency mapping
Proficiency with diagramming tools (Visio, PowerPoint, etc.)
Proven track record of successful Oracle to PostgreSQL migrations
Experience with legacy enterprise systems and data integration patterns
Understanding of compliance and audit requirements for data migrations
Strong collaboration skills with cross-functional teams
Ability to collaborate with team members to ensure client needs and expectations are met or exceeded. Comfortable working in a remote environment.
Demonstrates a passion for solving complex software challenges and enjoys collaborative teamwork.
Preferred Qualifications:
AWS certifications (Database Specialty, Solutions Architect)
PostgreSQL and/or Oracle certifications or equivalent expertise
Experience with data governance and master data management
Knowledge of performance monitoring tools (CloudWatch, pgAdmin, Oracle Enterprise Manager)
Ability to obtain DHS/TSA clearance
Clearance Required
Active TSA clearance required
$79k-108k yearly est. 4d ago
Data Engineer
Thinktek LLC
Data engineer job in Washington, DC
Who We Are: ThinkTek LLC is a fast-growing Certified SBA 8(a) and Service-Disabled Veteran-Owned Small Business (SDVOSB). We specialize in providing management and technology consulting services to support the business and technology modernization efforts of the Federal Government. ThinkTek was formed with the specific purpose of providing clients tailored solutions in Program & Project Management, Strategic Planning, and IT Services.
Position Description:
ThinkTek is seeking a Data Engineer with strong expertise in Microsoft Azure and experience integrating with ServiceNow (SNOW) platforms to support a Human Capital Management (HCM) modernization program for a large Federal agency. The selected candidate will provide technical leadership in designing secure, scalable data solutions that enhance interoperability between Azure-based systems, HCM platforms, and ServiceNow applications.
The ideal candidate will have deep hands-on experience in Azure Data Factory, Azure Synapse Analytics, Power BI, and SQL Server, combined with a working knowledge of ServiceNow data structures, CMDB, and API integrations.
Responsibilities:
Lead the design and implementation of enterprise data architecture to support Human Capital Management and related business systems.
Develop and maintain scalable data models, data warehouses, and migration strategies within the Microsoft Azure environment.
Architect and manage Azure SQL Database, Azure Data Lake, and Synapse pipelines to enable secure data flow across HCM and ServiceNow platforms.
Integrate data between Azure and ServiceNow, leveraging APIs, MID Servers, and RESTful web services.
Collaborate with HR, IT, and ServiceNow platform teams to define and implement data standards, mappings, and transformation logic.
Establish and monitor data governance, quality assurance, and compliance processes aligned with federal cybersecurity and privacy mandates (e.g., FISMA, FedRAMP).
Develop advanced analytics and reporting solutions using Power BI and Azure Analysis Services to support workforce planning, performance insights, and executive dashboards.
Serve as a technical advisor to ensure data architecture aligns with business goals, modernization priorities, and cloud best practices.
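The ServiceNow integration work described above typically goes through the Table API. The sketch below composes a Table API request URL for pulling CMDB records into an Azure-side pipeline; the instance name is hypothetical, while `sysparm_query`, `sysparm_fields`, and `sysparm_limit` are standard Table API parameters. The actual authenticated GET (basic auth or OAuth) is deliberately left out.

```python
# Sketch: composing a ServiceNow Table API URL. The instance name
# "example-instance" is a placeholder, not a real environment.
from urllib.parse import urlencode

def table_api_url(instance, table, query=None, fields=None, limit=100):
    """Build a ServiceNow Table API URL with encoded query parameters."""
    params = {"sysparm_limit": limit}
    if query:
        params["sysparm_query"] = query
    if fields:
        params["sysparm_fields"] = ",".join(fields)
    return (f"https://{instance}.service-now.com/api/now/table/{table}"
            f"?{urlencode(params)}")

url = table_api_url("example-instance", "cmdb_ci_server",
                    query="operational_status=1",
                    fields=["name", "ip_address", "sys_id"])
print(url)
```

Keeping the URL construction separate from the HTTP call makes the query logic easy to unit-test without touching a live instance.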
Required Qualifications:
Active Secret Clearance.
Bachelor's Degree or higher.
5+ years of experience leading enterprise data architecture, modeling, and migration efforts.
5+ years of hands-on experience with Azure SQL, Azure Data Factory, Azure Synapse, and Power BI.
Experience with ServiceNow data models, integrations, and CMDB architecture.
Strong SQL and database design proficiency, including stored procedures, APIs, and ETL workflows.
Experience supporting or integrating Human Capital Management or ERP systems (e.g., Oracle HCM, Workday, PeopleSoft, or SAP SuccessFactors).
Knowledge of federal data privacy, security, and compliance frameworks (FISMA, FedRAMP, NIST).
Preferred: Azure Solutions Architect, Azure Data Engineer, or ServiceNow Integration Specialist certification.
Why ThinkTek:
ThinkTek LLC is proud to be an Equal Opportunity Employer (EOE), making employment decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. We offer comprehensive benefits including medical, dental, and vision insurance for full-time employees, generous PTO, and a variety of paid leave options. Learn more about ThinkTek careers and benefits at ****************************
$80k-109k yearly est. 1d ago
Associate Data Engineer
Working America Associa 3.9
Data engineer job in Washington, DC
With over 5 million members, Working America mobilizes working people who don't have the benefit of a union at work to fight for good jobs and a fair economy. As the community affiliate of the AFL-CIO, we unite working people in urban and suburban communities around a shared economic agenda.
The Associate Data Engineer will support Working America's data integration work, including troubleshooting data issues, supporting interface data flows, and contributing to data validation and transformation efforts. Under the direction of the Manager of Data and Cloud Services, they will assist in documenting data structures, supporting audit readiness, and ensuring compliance with data governance standards.
This is a fixed-term position through Dec. 18, 2026.
Reports to: IT Manager of Data & Cloud Services
Duties and Responsibilities
• Support the design, development, and maintenance of data pipelines and database systems.
• Manage data quality, supporting data integration across internal and external systems, and maintaining metadata and data dictionaries.
• Write and update technical documentation for collaboration and knowledge sharing with team members.
• Support the development of reports and visualizations based on established report definitions.
• Other duties as assigned.
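The pipeline duties above can be sketched as a minimal extract-transform-load step. Everything here is illustrative: the source rows stand in for a CSV export or API response, and the target schema is hypothetical; a real pipeline would read from an external system rather than an in-memory list.

```python
# Sketch: a minimal ETL step using only the standard library.
import sqlite3

raw_rows = [  # extract: stand-in for an external data source
    {"member_id": "001", "state": " dc ", "joined": "2024-01-15"},
    {"member_id": "002", "state": "MD",   "joined": "2024-02-02"},
]

def transform(row):
    # Normalize whitespace and case so downstream reports group cleanly.
    return (row["member_id"], row["state"].strip().upper(), row["joined"])

conn = sqlite3.connect(":memory:")  # load target; a file path in practice
conn.execute("CREATE TABLE members (member_id TEXT, state TEXT, joined TEXT)")
conn.executemany("INSERT INTO members VALUES (?, ?, ?)",
                 (transform(r) for r in raw_rows))
states = [s for (s,) in conn.execute("SELECT state FROM members ORDER BY state")]
print(states)  # → ['DC', 'MD']
```

The same shape — extract, normalize per row, bulk-load, verify with a query — scales up to any relational target the role might use.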
Qualifications
• STEM-related Associate's degree preferred;
• 0-2 years of related work experience in data engineering or architecture required;
• Familiarity with data systems and infrastructures, pipelines, and data sets required;
• Strong command of relational databases and SQL, including using Extract, Transform, and Load (ETL) processes to populate a relational database, required;
• Familiarity with cloud infrastructure technologies;
• 0-2 years of Python strongly preferred;
• Familiarity with GitHub;
• Excellent listening, interpersonal, communication, and problem-solving skills;
• Ability to use good judgment, take initiative, and make recommendations in resolving problems;
• Ability to work long and irregular hours as necessary in response to technical difficulties.
Location: Washington, DC
Position
? Full-time
? FLSA Classification: Non-Exempt
The Step 1 salary is $94,209, with additional steps considered for candidates with experience and qualifications beyond what is required above. Includes medical insurance, paid time off, and 401(k) with employer contribution.
The position is a bargaining unit position, represented by the Washington-Baltimore NewsGuild, TNG-CWA, Local 32035.
Working America is an equal opportunity employer. Working America is committed to building a diverse workforce and encourages applications from women, people of color, LGBTQIA and other non-conforming individuals, and individuals with disabilities.
$94.2k yearly 1d ago
Enterprise Software Engineering Director & CoE Leader
Northrop Grumman Corp. (Au 4.7
Data engineer job in McLean, VA
A leading defense technology company is seeking a Corporate Software Engineering Director to shape the vision and execution of software engineering across the enterprise. This role requires extensive experience in software engineering management and systems engineering. Candidates must hold a Bachelor's or Master's degree in a STEM field and have a current secret-level security clearance. Duties include leading change management initiatives and developing a unified talent development framework. It is a full-time position based in McLean, Virginia.
$82k-104k yearly est. 5d ago
Lead Building Engineer - Facilities & HVAC Ops
Spectrum Management Ltd. 4.2
Data engineer job in Washington, DC
A leading facilities management company is looking for a Chief Building Engineer in Washington, D.C. This role requires a valid DC 3rd Class Engineers License and at least 5 years of experience in a similar environment. Responsibilities include performing maintenance and supervision of engineering staff while ensuring efficient building operations. The ideal candidate will possess strong HVAC knowledge and communication skills, and will play a key role in maintaining a safe and efficient workplace. Competitive compensation is offered.
$85k-110k yearly est. 1d ago
Software Engineer - R&D Systems
Novateur Research Solutions
Data engineer job in Ashburn, VA
Software Engineer - Entry to Mid-Level (R&D Systems)
Novateur stands for Innovation. We value creativity, vision, collaboration, and above all, ambition to innovate. Novateur Research Solutions is an R&D firm located in Northern Virginia, developing intelligent systems that push the boundaries of computer vision, AI, and large-scale learning.
We are looking for Software Engineers eager to build scalable systems and deploy machine learning models in real-world environments. You will work closely with our researchers and engineers to develop software for real-time perception, geospatial analytics, and distributed systems.
Responsibilities:
• Develop and deploy production-grade software in Python and C++.
• Build APIs, data pipelines, and visualization tools to support machine learning workflows.
• Collaborate with researchers to translate algorithms into efficient implementations.
• Contribute to system design, cloud deployment (AWS), and automation.
Requirements:
• BS or MS in Computer Science, Engineering, or a related field.
• Proficiency in modern programming and software engineering practices.
• Familiarity with Docker, Kubernetes, or AWS.
• Enthusiasm for learning and applying machine learning or computer vision methods.
• U.S. Citizen or Permanent Resident.
Why Novateur?
Join a team that values creativity and initiative. Our engineers have freedom to innovate, collaborate with top researchers, publish research in major scientific conferences, and see their ideas deployed in impactful applications.
Company Benefits:
Novateur offers competitive pay and benefits comparable to Fortune 500 companies that include a wide choice of healthcare options with generous company subsidy, 401(k) with generous employer match, paid holidays and paid time off increasing with tenure, and company paid short-term disability, long-term disability, and life insurance.
We offer a work environment which fosters individual thinking along with collaboration opportunities within and beyond Novateur. In return, we expect a high level of performance and passion to deliver enduring results for our clients.
$72k-99k yearly est. 3d ago
Plumbing Engineer
Arup 4.6
Data engineer job in Washington, DC
ReqID: NEW000207
**Joining Arup**
At Arup, you belong to an extraordinary collective - in which we encourage individuality to thrive. Our strength comes from how we respect, share and connect our diverse experiences, perspectives and ideas.
You will have the opportunity to do socially useful work that has meaning - to Arup, to your career, to our members and to the clients and communities we serve.
We are currently seeking a **Plumbing Engineer** to design plumbing and fire protection systems including domestic and specialty water systems, sanitary drainage and vent systems, storm drainage systems, natural gas distribution systems, specialty gas systems, plumbing and fire protection systems for data centers, other special plumbing systems, and fire suppression systems.
**The opportunity**
· Design plumbing and fire protection systems for buildings and structures.
· Design plumbing and fire protection systems for data centers.
· Preparation of drawings detailing the systems designed, including but not limited to plumbing schedules, floor plans and enlarged plans, details and riser diagrams.
· Prepare sketches and narratives for SD packages. Provide Revit drawings for DD deliverables and construction documents.
· System design calculations, code analysis and record-keeping.
· Select equipment for use in building systems.
· Preparation of written reports describing the systems proposed and alternatives.
· Prepare specifications tailored to project needs.
· Coordinate your work with the engineers of other disciplines.
· Attend meetings and communicate with clients to establish their needs and discuss design issues with them.
· Incorporate sustainable design methods in buildings based on client needs.
· Construction administration activities, including but not limited to site visits, review and respond to contractor submittals and RFIs; troubleshooting construction issues, and write site observation reports.
· Calling City Officials and Fire Marshal office as necessary for project delivery.
**Is this role right for you?**
· Bachelor's degree in mechanical, architectural, civil, or environmental engineering.
· Minimum 3 years' experience in the design and specification of plumbing and fire protection systems required.
· Familiarity with codes and standards, specifically UPC, IPC, IBC, ASPE standards, LEED water efficiency, TDSHS, FGI and NFPA.
· Experience designing and delivering a variety of building types, which include data centers.
· Ability to utilize Revit software to develop design drawings and 3D model of plumbing and piping systems is preferred.
· Ability to deliver quality work by project deadline within given time budget.
· Good communication skills essential for team-based working.
· Excellent planning and organization skills required for our fast-paced environment.
· Must be highly motivated, proactive and willing to take on new challenges.
· Knowledge of Bluebeam, Excel, Word and Navisworks.
**Other/Preferred:**
· Professional Engineering License preferred or ASPE CPD (Certified in Plumbing Design) in the US.
· Data center experience preferred
· Sound knowledge in plumbing engineering fundamentals.
· Certifications in Medical Gas design.
· LEED AP or WELL Certifications.
Please note that all applicants must apply directly via the job portal. If this role is not quite what you are looking for, but you are interested in other opportunities for a future with purpose, please sign up to our Talent Community, where you will be kept up to date with roles suitable for you to shape a better world.
**What we offer you**
At Arup, we care about each member's success, so we can grow together.
Guided by our values, we provide an attractive total reward package that recognizes the contribution of each of our members to our shared success. As well as competitive, fair and equitable pay, we offer a career in which all of our members can belong, grow and thrive - through benefits that support health and wellbeing, a wide range of learning opportunities and many possibilities to have an impact through the work they do
We are owned in trust on behalf of our members, giving us the freedom, with personal responsibility, to set our own direction and choose work that aligns with our purpose and adds to Arup's legacy. Our members collaborate on ambitious projects to deliver remarkable outcomes for our clients and communities. Profit Share is a key part of our reward, enabling members to share in the results of our collective efforts.
**Benefits at Work** - At Arup, we have a comprehensive and valuable benefits program that works for our employees and their families while keeping costs low. These benefits provide health and welfare security for you as well as paid time off for rest and renewal. As a member of Arup, everyone shares in our success through a global profit share scheme (payments are dependent on the firm's financial performance). Our Global Profit Share Plan (paid bi-annually) provides an opportunity for you to share in the success of the Firm. As a valued employee of Arup, you can also choose to participate in our 401(k) plan with up to 10% company match to help you save for your future.
**Flexible Working** - We believe that flexible arrangements create a more inclusive way of working that supports our diversity and the wellbeing of our people. Options for alternative schedules and the ability to work outside of the office for a portion of your workweek are available.
**Washington D.C. Hiring Range** - The good faith base salary hiring range for this job if performed in Washington D.C. is $83,100 - $97,800 per year. This range is commensurate with experience, educational background, and skill level. Benefits are not included in the base salary. Please note hiring ranges for candidates performing work outside of Washington D.C. will differ.
**Different People, Shared Values**
Arup is an equal opportunity employer that actively promotes and nurtures a diverse and inclusive workforce. We welcome applications from individuals of all backgrounds, regardless of age (within legal limits), gender identity or expression, marital status, disability, neurotype or mental health, race or ethnicity, faith or belief, sexual orientation, socioeconomic background, and whether you're pregnant or on family leave. We are an open environment that embraces diverse experiences, perspectives, and ideas - this drives our excellence.
Guided by our values and alignment with the UN Sustainable Development Goals, we create and contribute to equitable spaces and systems, while cultivating a sense of belonging for all. Our internal employee networks support our inclusive culture: from race, ethnicity and cross-cultural working to gender equity and LGBTQ+ and disability inclusion - we aim to create a space for you to express yourself and make a positive difference. Discover more about life at Arup.
**Our Application Process**
We will be reviewing our candidates for this position on a rolling basis. Once you have applied you will be evaluated and potentially moved on to the next round, at which point a member of the talent resourcing team will reach out to you directly.
**Arup is an equal opportunity employer.** All qualified applicants will receive consideration without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, spouse of a protected veteran, or disability.
**Stay safe online.** Recruitment fraud is on the rise, and Arup's name, logo, and branding have been used in fraudulent job advertisements. Arup will never ask for bank information as part of our recruitment process.
EOE-Protected Veterans/Disability
$83.1k-97.8k yearly 1d ago
Forward Deployed Engineer, Gov
Openai 4.2
Data engineer job in Washington, DC
About the team
The OpenAI for Government team is a dynamic, mission-driven group leveraging frontier AI to transform how governments achieve their missions. Our team works to empower public servants with secure, compliant AI tools (e.g., ChatGPT Enterprise, ChatGPT Gov) and mission-aligned deployments that meet government technical requirements with strong reliability and safety.
About the role
Forward Deployed Engineers (FDEs) lead complex deployments of frontier models in production. You will embed with our most strategic government and public sector customers-where model performance matters, delivery is urgent, and ambiguity is the default. You'll map their problems, structure delivery, and ship fast. This includes scoping, sequencing, and building full-stack solutions that create measurable value, while driving clarity across internal and external teams.
You will work directly with defense, intelligence, and federal stakeholders as their technical thought partner, guiding adoption, maximizing mission impact, and ensuring successful deployments at scale. Along the way, you'll identify reusable patterns, codify best practices, and share field signal that influences OpenAI's roadmap.
This role is based in Washington DC, Seattle or San Francisco. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. Travel up to 50% is required, including on-site work with customers.
In this role you will
Own technical delivery across multiple government deployments, from first prototype to stable production.
Deeply embed with public sector customers to design and build novel applications powered by OpenAI models.
Enable successful deployments across customer environments by delivering observable systems spanning infrastructure through applications.
Prototype and build full-stack systems using Python, JavaScript, or comparable stacks that deliver real mission impact.
Proactively guide customers on maximizing business and operational value from their applications.
Forge and manage relationships with customer leadership and stakeholders, ensuring successful deployment and scale.
Scope work, sequence delivery, and remove blockers early-making trade-offs between scope, speed, and quality.
Contribute directly in the code when clarity or momentum depends on it.
Codify working patterns into tools, playbooks, or building blocks others can use.
Share field feedback with Research and Product to influence model and product development.
Keep teams moving through clarity, judgment, and consistent follow-through.
You might thrive in this role if you
Bring 5+ years of engineering or technical deployment experience, ideally in customer-facing or government environments.
Hold an active TS/SCI clearance or equivalent.
Have scoped and delivered complex systems in fast-moving or ambiguous contexts.
Write and review production-grade code across frontend and backend, using Python, JavaScript, or similar stacks.
Are familiar with cloud deployment models (Azure, AWS), Kubernetes, Terraform, and related infrastructure.
Have experience building or deploying systems powered by LLMs or generative models, and understand how model behavior affects product experience.
Simplify complexity, make fast, sound decisions under pressure, and communicate clearly across technical and non-technical audiences.
Spot risks early, adjust without slowing down, and model calm judgment when stakes are high.
Are humble, collaborative, and eager to help others with empathy.
Operate with high horsepower, thrive in dynamic environments, and can ruthlessly prioritize across multiple projects.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's Aff…
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
$77k-108k yearly est. 4d ago
Splunk Engineer
Conviso Inc.
Data engineer job in Washington, DC
Conviso Inc is hiring a Senior Splunk Engineer with at least 6 years of Splunk experience to join our cybersecurity/IT operations team.
Role: Splunk Engineer
Active Secret Clearance Required
Certification Requirements: DoD 8570 IAT II (e.g., Security+), Splunk Certified Administrator or higher
Key Responsibilities
Splunk Platform Management: Install, configure, and maintain Splunk Enterprise, UBA, and SOAR in both on-premises and cloud/hybrid architectures; perform system upgrades, patching, and troubleshooting. Oracle Cloud experience is strongly preferred.
UBA and SOAR Optimization: Customize and fine-tune UBA models for behavioral analytics; configure playbooks, integrations, and automated actions within SOAR to accelerate threat response. Coordinate directly with on-prem/cloud infrastructure teams to maintain and deploy these modules.
Team Leadership & Mentoring: Supervise, mentor, and provide technical guidance to junior Splunk team members and peers; delegate tasks, review work quality, facilitate skills development, and foster a collaborative team environment in alignment with mission objectives.
Security and Compliance: Implement and maintain Splunk best practices in accordance with defense agency security policies, compliance requirements, and data retention standards. Experience with STIGs mandatory.
Incident Handling: Respond to incidents with appropriate logs and reports; proactively troubleshoot log and analytics abnormalities before they escalate.
Collaboration & Agile Delivery: Work within Agile project teams, attending ceremonies (stand-ups, sprints, retrospectives) and using Jira for ticketing, backlog tracking, and documentation.
Knowledge Sharing: Develop, update, and share technical documentation, standard operating procedures (SOPs), runbooks, and knowledge articles in alignment with agency practices. Work with many small, medium, and large teams to achieve agency and program objectives.
Log Management and Analysis: Aggregate and parse logs from diverse data sources; develop and maintain dashboards, reports, alerts, and custom searches to surface actionable intelligence.
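The alerting work described above usually boils down to parameterized SPL searches. The sketch below generates one such search — failed Windows logons (EventCode 4625) counted per host — from Python, the scripting language the role calls for. The index name and threshold defaults are hypothetical; the SPL itself uses standard `stats` and `where` syntax.

```python
# Sketch: parameterizing a Splunk alert search. Index name and
# thresholds are illustrative placeholders for a real deployment.
def failed_logon_search(index="wineventlog", threshold=5, window="-15m"):
    """Return an SPL query counting failed logons (EventCode 4625)
    per host over the window, keeping hosts above the threshold."""
    return (
        f"search index={index} EventCode=4625 earliest={window} "
        f"| stats count by host "
        f"| where count > {threshold}"
    )

spl = failed_logon_search(threshold=10)
print(spl)
```

Generating searches this way keeps thresholds and windows in version-controlled code rather than hand-edited saved searches, which fits the documentation and knowledge-sharing duties above.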
Technical Skills:
Mastery in deploying and managing Splunk Enterprise, UBA, SOAR, and other Splunk modules.
Experience using scripting (e.g., Python, Bash) for automation and data manipulation.
Mastery in designing and tuning Splunk searches, dashboards, alerts, and CIM compliance.
Experience with log sources common to defense/enterprise networks (Windows, Linux, network appliances, security devices).
Experience using Jira for workflow management and Agile methodologies for project delivery.
Key Attributes and Soft Skills
Skilled communicator, able to collaborate with IT, cybersecurity, and mission teams in written and verbal communications with a positive attitude and customer-first approach.
Proactive learner-stays current on Splunk and security operations best practices.
$72k-95k yearly est. 5d ago
Senior Software Engineer
Mantech 4.5
Data engineer job in Alexandria, VA
MANTECH seeks a motivated, career- and customer-oriented Senior Software Engineer to join our team in Alexandria, VA. In this position, you will act as a Software Engineer designing and implementing services and components for AI applications.
Responsibilities include but are not limited to:
Develops, implements, and maintains full-stack software solutions for AI-enabled applications.
Works closely with the Software Architect to understand project requirements and translate them into technical specifications.
Develops and integrates AI and ML capabilities on a cloud-hosted data platform that supports significant market adoption, high performance, and strict access control and governance.
Designs and builds prototype and proof of concept User Interfaces using modern UX/UI standards.
Stays current with advancements in AI, machine learning, and software engineering, incorporating best practices into the development process.
Performs system optimization, performance tuning and assists in capacity planning.
Performs proactive analysis of potential issues, risks and roadblocks during system implementation and operations.
Documents software designs, code, and processes to ensure maintainability, scalability, and knowledge sharing among team members.
Participates in code reviews and provides constructive feedback to peers to ensure code quality, adherence to coding standards, and knowledge transfer within the team.
Minimum Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
10 or more years (7 with Master's) of experience in software development, systems integration, data management, or related fields.
Experience with configuring and developing in a major public cloud such as Oracle Cloud (OCI), Azure, AWS, or GCP.
Experience in programming and/or scripting languages such as JavaScript, Python, C++, or Java.
Experience with software development frameworks and tools (e.g., TensorFlow, PyTorch, Flask, Docker).
Strong understanding of software engineering principles, including design patterns, data structures, and algorithms.
Experience with DevOps tools and techniques for building integrations with external systems, including Infrastructure as Code (IaC). Example tools include Jenkins, GitLab, Ansible, Terraform, and AWS CloudFormation templates.
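The IaC tools named above all ultimately produce declarative templates describing cloud resources. As a rough illustration (not tied to this role's actual stack), a minimal CloudFormation template for a single S3 bucket can be generated with the standard library alone; the logical ID "DataBucket" and the bucket name are illustrative:

```python
import json

def s3_bucket_template(bucket_name):
    """Build a minimal CloudFormation template (as a dict) for one S3 bucket.

    Real templates are usually authored in YAML and validated with the
    CloudFormation tooling before deployment; this sketch only shows the
    required top-level structure.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

print(json.dumps(s3_bucket_template("example-ai-artifacts"), indent=2))
```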
Preferred Qualifications:
Master's degree in Computer Science, Information Technology, or a related field.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Experience working with data ingest and transformation.
Ability to work effectively in a team-oriented environment.
Clearance Requirements:
Must possess a current and active TS/SCI
Physical Requirements:
The person in this position must be able to remain in a stationary position 50% of the time.
Occasionally move about inside the office to access file cabinets, office machinery, or to communicate with co-workers, management, and customers, via email, phone, and or virtual communication, which may involve delivering presentations.
$85k-108k yearly est. 1d ago
Cloud Engineer II
ITC Federal, Inc. 4.7
Data engineer job in Fairfax, VA
ID: 2026-1473 | Remote: No
JOB TITLE: Cloud Engineer II
POSITION INFORMATION: Full time
Candidate must obtain a DHS Public Trust clearance prior to starting the position.
BENEFITS: Health, Dental and Vision, 401(k), Tuition Reimbursement, Flexible Spending Account (FSA), 11 Paid Federal Holidays, 3 weeks' Paid Time Off
ITC Federal, LLC is an information technology and consulting company focused on servicing the needs of the Federal Government. ITC's mission is to apply earned expertise in DevSecOps, Cloud Computing, Federal Financial Systems, App Dev, and Cyber Security to assist our clients in achieving their mission. ITC is located in Fairfax, VA and offers an outstanding compensation and benefits plan and a challenging and rewarding professional work environment.
JOB OVERVIEW
Analyzes functional business requirements and implements specifications for functional business and technical needs. Identifies and fixes problems within existing systems, designs and implements new systems, enhances existing systems, and participates in continuous analysis for performance, process, and security improvements as well as the development and build of next-generation IT systems. Responsible for understanding customer needs and the realities of FedRAMP-authorized IT products, and for creating requirements that the architecture and engineering team can implement with COTS products. Must possess systems engineering experience in one or more areas, including computer languages, operating systems, or DevOps.
Responsibilities
Manage cloud-based IT system infrastructure and processes
Provide second-level problem diagnosis and resolution
Support the implementation, troubleshooting, and maintenance of customer IT systems
Handle competing work demands with flexibility and adaptability
Work with limited supervision; a self-starter with strong time-management skills
Solve problems independently with strong troubleshooting, decision-making, and analytical skills
Maintain accuracy, attention to detail, and strong organizational skills
Work and interact with others in a structured team environment
Communicate clearly, both verbally and in writing
Explain technical issues to nontechnical audiences
Operate master consoles to monitor the performance of networks and computer systems
Coordinate computer network access and use
Coordinate with other engineering groups to ensure proper operation of networks and subnets
Qualifications
REQUIRED:
Support cloud based systems in AWS and Azure
Requesting and installing secure certificates; enabling global security using an LDAP registry; setting up SSO using SPNEGO and SAML
Good UNIX shell scripting skills with some exposure to emerging DevOps tools such as Ansible
Develop, code, test, and debug new Ansible Playbooks and recipes or enhancements to existing software
Evaluate new products and technologies, and provide recommendations to enhance current environment
Perform Linux administration support activities
Advanced troubleshooting skills, including analyzing log and trace files, opening service requests, collecting documentation, working with vendors, and driving problem resolution.
Ability to create support documentation & written procedures for routine and non-routine tasks.
Ability to coordinate configuration changes, installs, & upgrades with application owners - including planning, scheduling, status reporting, & following company change control procedures.
Ability to lead in 3rd Party team coordination, communication, and audit review of environments installed and processes used.
Minimum of 3-4 years of experience in installation, configuration, and administration of UNIX/Linux-based systems
Ability to work with all levels of staff within and outside of IT and outside the organization.
A self-starter able to work independently but comfortable working in a team environment.
Good analytical and problem-solving skills.
Dependable and flexible when necessary.
Preferred - Certifications in AWS, Azure, Ansible, any of the DevOps stack
WORK ENVIRONMENT AND PHYSICAL DEMANDS: Candidate must be able to function in general office environment.
ITC Federal is an equal opportunity employer and will not discriminate against any applicant for employment on the basis of age, race, color, gender, national origin, religion, creed, disability, veteran status, marital status, sexual orientation, genetic information, military status, or sex, including pregnancy and childbirth or related medical conditions, or on any other basis prohibited by law.
$66k-89k yearly est. 5d ago
Software Engineer - Graph Technologies
Raft 3.9
Data engineer job in Chantilly, VA
This is a U.S. based position. All of the programs we support require U.S. citizenship to be eligible for employment. All work must be conducted within the continental U.S.
Who we are:
Raft is a customer-obsessed non-traditional small business with a purposeful focus on Distributed Data Systems, Platforms at Scale, and Complex Application Development, with headquarters in McLean, VA. Our range of clients includes innovative federal and public agencies leveraging design thinking, cutting-edge tech stacks, and the cloud-native ecosystem. We build digital solutions that impact the lives of millions of Americans.
About the role:
As a Software Engineer - Graph Technologies, you will be diving deeper into the software lifecycle, taking on more complex tasks and leading smaller project initiatives. You will not only write code but also be involved in design discussions, code reviews, and mentoring Associate and Standard Engineers, helping to maintain the high standards of our organization.
This role is contingent on contract award.
What we are looking for:
3+ years of hands-on experience as a software engineer
Design and develop graph-based applications using Neo4j, Amazon Neptune, or similar graph databases
Implement graph algorithms and analytics for complex data relationship analysis and visualization
Build and maintain scalable microservices and APIs supporting Intelligence Community requirements
Develop semantic data models and ontological frameworks, particularly Common Core Ontology implementations
Collaborate with analysts and stakeholders to translate mission requirements into technical solutions
Optimize graph queries and data structures for performance in high-security environments
Participate in agile development practices including code reviews, testing, and continuous integration
Support deployment and maintenance of applications in classified computing environments
Bachelor's degree in Computer Science, Engineering, Physics, Mathematics, or a related field
Graph Technology Experience - Hands-on experience with graph databases (Neo4j, Neptune, ArangoDB) and graph processing frameworks
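The responsibilities above center on implementing graph algorithms for relationship analysis. In production this role would query Neo4j or Amazon Neptune with Cypher or Gremlin; as a minimal language-agnostic sketch of the underlying idea (the entity names below are invented for illustration), breadth-first search finds the shortest chain of relationships between two entities:

```python
from collections import deque

# Hypothetical in-memory adjacency list standing in for a graph database.
EDGES = {
    "analyst": ["report_a", "report_b"],
    "report_a": ["source_x"],
    "report_b": ["source_x", "source_y"],
    "source_x": [],
    "source_y": [],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: shortest hop-count path between two entities."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no relationship chain connects the two entities

print(shortest_path(EDGES, "analyst", "source_y"))
# ['analyst', 'report_b', 'source_y']
```

A Cypher equivalent would be a `shortestPath` pattern match; the point of the sketch is only the traversal logic the role's "complex data relationship analysis" bullet refers to.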
Highly preferred:
Previous work supporting IC missions and familiarity with classified environments
Direct experience with National Reconnaissance Office programs, data systems, or mission requirements
Common Core Ontology (CCO) - Hands-on experience implementing CCO standards and semantic modeling frameworks
Container Technologies - Experience with Kubernetes orchestration and Helm chart development/management
Advanced Graph Skills - Experience with graph visualization tools, complex graph algorithms, or distributed graph processing
Cloud Platforms - Familiarity with AWS/Azure services, particularly graph and analytics offerings
Security+ or Similar - Current DoD 8570 baseline certification or equivalent
Advanced Degree - Master's in Computer Science, Data Science, or related field
Clearance Requirements:
Active Top Secret clearance with ability to obtain and maintain SCI
Preference for those with Polygraph
Work Type:
Onsite in Chantilly, VA
Ability to work from a SCIF at least 3 days a week
May require up to 25% travel
Salary Range:
$150,000 - $200,000
The determination of compensation is predicated upon a candidate's comprehensive experience, demonstrated skill, and proven abilities
What we will offer you:
Highly competitive salary
Fully covered healthcare, dental, and vision coverage
401(k) and company match
Take-as-you-need PTO + 11 paid holidays
Education & training benefits
Generous Referral Bonuses
And More!
Our Vision Statement:
We bridge the gap between humans and data through radical transparency and our obsession with the mission.
Our Customer Obsession:
We will approach every deliverable like it's a product. We will adopt a customer-obsessed mentality. As we grow, and our footprint becomes larger, teams and employees will treat each other not only as teammates but customers. We must live the customer-obsessed mindset, always. This will help us scale and it will translate to the interactions that our Rafters have with their clients and other product teams that they integrate with. Our culture will enable our success and set us apart from other companies.
How do we get there?
Public-sector modernization is critical for us to live in a better world. We, at Raft, want to innovate and solve complex problems. And, if we are successful, our generation and the ones that follow us will live in a delightful, efficient, and accessible world where out-of-the-box thinking and collaboration are the norm.
Raft's core philosophy is Ubuntu: I Am, Because We Are. We support our "nadi" by elevating the other Rafters. We work as a hyper-collaborative team where each team member brings a unique perspective, adding value that did not exist before. People make Raft special. We celebrate each other and our cognitive and cultural diversity. We are devoted to our practice of innovation and collaboration.
We're an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
How much does a data engineer earn in Leesburg, VA?
The average data engineer in Leesburg, VA earns between $69,000 and $124,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Leesburg, VA
$92,000
What are the biggest employers of Data Engineers in Leesburg, VA?
The biggest employers of Data Engineers in Leesburg, VA are: