Observability Engineer
Data engineer job in McLean, VA
Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people work together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.
To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, Asylee, or Refugee.
Note on Cleared Roles: If this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status.
Export Control/ITAR:
Certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3).
Please review the job details below.
This position requires an active U.S. Government Security Clearance at the TS/SCI level with required polygraph.
We are looking for a full-time Observability Engineer (OE) to gain deeper insight into complex systems and cloud-native environments. This role is part of our data collection and software development team, which ensures Vantor's services meet reliability and uptime standards appropriate to customers' needs. The environment calls for a fast rate of improvement while keeping an ever-watchful eye on capacity, performance, and cost.
The OE will have the mindset and the engineering approaches to understand "the what" and "the why". They will build monitoring solutions to gain visibility into operational problems, ensuring customer value and satisfaction are achieved. Their focus is to drive observability and monitoring for new and existing systems in order to provide systems insight and resolve application and infrastructure issues. The successful candidate has the breadth of knowledge to discover, implement, and collaborate with teammates on solutions for complex problems across the entire technology stack.
Responsibilities:
Define standards for monitoring the reliability, availability, maintainability and performance of sponsor-owned and operated systems.
Design and architect operational solutions for managing applications and infrastructure.
Drive service acceptance by adopting new processes into operations, developing new monitoring to expose risks, and automating repeatable actions.
Partner with service and product owners to establish key performance indicators to identify trends and achieve better outcomes.
Provide deep troubleshooting for production issues.
Engage with service owners to maximize a team's ability to quickly identify and remediate root-cause performance issues, ensuring rapid recovery from service interruptions.
Build and/or use tools to correlate disparate data sets in an efficient, automated way, helping teams quickly identify the root cause of issues and understand how different problems relate to each other.
Coordinate with the sponsor to support major incidents, large-scale deployments and SecOps user support.
Minimum Qualifications:
US citizenship required
Active/current TS/SCI with required polygraph
Bachelor's degree in computer science or related area of study
Minimum 5 years of experience
Working knowledge of Kubernetes (K8s), Docker, Helm, and automated deployment via pipelines (e.g., Concourse or Jenkins)
Familiarity with distributed version control systems such as Git
Experience with AWS cloud services
Experience with setting up monitoring and observability solutions across sponsor owned systems, tools and data feeds
Proficient in scripting with Python and Java
Willingness to work onsite full time
Ability and willingness to share on-call responsibilities
Advanced knowledge of Unix/Linux systems, with high comfort level at the command line
Preferred Qualifications:
Experience with other cloud services providers beyond AWS
Experience with CloudWatch or other monitoring tools inside of AWS
Familiarity with Prometheus/Grafana or other monitoring tools for ETL feeds, APIs, servers, C2S services, networks, and AI/ML capabilities
Good understanding of networking fundamentals
Organized with an ability to document and communicate ongoing work tasks and projects
Receptive to giving, receiving and implementing feedback in a highly collaborative environment
Understanding of Incident and Problem Management
Effectively prioritize work and encourage best practices in others
Meticulous and cautious with the ability to identify and consider all risks and balance those with performing the task efficiently
Experience with Root Cause Analysis (RCA)
Experience with ETL processes
Willingness to step in as a leader to address ongoing incidents and problems, while providing guidance to others in order to drive to a resolution
Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, the experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range.
The base pay for this position in California, Colorado, Hawaii, New Jersey, the Washington, DC metropolitan area, and all other states is:
$180,000.00 - $220,000.00
Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************
The application window is three days from the date the job is posted, and the posting will remain open until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date, and the posting will again remain open until a qualified candidate has been identified for hire.
The date of posting can be found on Vantor's Career page at the top of each job posting.
To apply, submit your application via Vantor's Career page.
EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
System Development Engineer II, DBS Relational ADC
Data engineer job in Herndon, VA
The Amazon Web Services team is innovating new ways of building massively scalable distributed systems and delivering the next generation of cloud computing with AWS offerings like RDS and Aurora. In 2013, AWS launched 280 services, but in 2016 alone we released nearly 1,000. We hold high standards for our computer systems and the services we deliver to our customers: our systems are highly secure, highly reliable, and highly available, all while functioning at massive scale; our employees are smart, passionate about the cloud, driven to serve customers, and fun to work with.
A successful engineer joining the team will do much more than write code and triage problems. They will work with Amazon's largest and most demanding customers to address specific needs across a full suite of services. They will dive deeply into technical issues and work diligently to improve the customer experience. The ideal candidate will...
- Be great fun to work with. Our company credo is "Work hard. Have fun. Make history". The right candidate will love what they do and instinctively know how to make work fun.
- Have strong Linux & Networking Fundamentals. The ideal candidate will have deep experience working with Linux, preferably in a large scale, distributed environment. You understand networking technology and how servers and networks inter-relate. You regularly take part in deep-dive troubleshooting and conduct technical post-mortem discussions to identify the root cause of complex issues.
- Love to code. Whether it's building tools in Java or solving complex system problems in Python, the ideal candidate will love using technology to solve problems. You have a solid understanding of software development methodology and know how to use the right tool for the right job.
- Think Big. The ideal candidate will build and deploy solutions across thousands of devices. You will strive to improve and streamline processes to allow for work on a massive scale.
This position requires that the candidate selected must currently possess and maintain an active TS/SCI security clearance with polygraph. The position further requires the candidate to opt into a commensurate clearance for each government agency for which they perform AWS work.
Key job responsibilities
- You design, implement, and deploy software components and features. You solve difficult problems, generating positive feedback.
- You have a solid understanding of design approaches (and how to best use them).
- You are able to work independently and with your team to deliver software successfully.
- Your work is consistently of a high quality (e.g., secure, testable, maintainable, low-defects, efficient, etc.) and incorporates best practices. Your team trusts your work.
- Your code reviews tend to be rapid and uneventful. You provide useful code reviews for changes submitted by others.
- You focus on operational excellence, constructively identifying problems and proposing solutions, taking on projects that improve your team's software, making it better and easier to maintain.
- You make improvements to your team's development and testing processes.
- You have established good working relationships with peers. You recognize discordant views and take part in constructive dialogue to resolve them.
- You are able to confidently train new teammates about your customers, what your team's software does, how it is constructed, tested, and operated, and how it fits into the bigger picture.
A day in the life
Engineers in this role will work on automation, development, and operations to support AWS machine learning services for US government customers. They will work in an agile environment, attend daily standup, and collaborate closely with teammates. They will work on exciting challenges at scale and tackle unsolved problems.
They will support the U.S. Intelligence Community and Defense agencies to implement innovative cloud computing solutions and solve unique technical problems.
About the team
Why AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Utility Computing (UC)
AWS Utility Computing (UC) provides product innovations - from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.
Work/Life Balance
We value work-life harmony. Achieving success at work should never require sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Mentorship and Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Diverse Experiences
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
BASIC QUALIFICATIONS
- Bachelor's degree in computer science or equivalent
- 3+ years of non-internship professional software development experience
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby
- Knowledge of systems engineering fundamentals (networking, storage, operating systems)
- 1+ years of experience designing or architecting new and existing systems (design patterns, reliability, and scaling)
- Current, active US Government Security Clearance of TS/SCI with Polygraph
PREFERRED QUALIFICATIONS
- Experience with PowerShell (preferred), Python, Ruby, or Java
- Experience working in an Agile environment using the Scrum methodology
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $116,300/year in our lowest geographic market up to $201,200/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
Engineer
Data engineer job in Washington, DC
Who We Are: We're powering a cleaner, brighter future. Exelon is leading the energy transformation, and we're calling all problem solvers, innovators, community builders and change makers. Work with us to deliver solutions that make our diverse cities and communities stronger, healthier and more resilient.
We're powered by purpose-driven people like you who believe in being inclusive and creative, and value safety, innovation, integrity and community service. We are a Fortune 200 company, 19,000 colleagues strong serving more than 10 million customers at six energy companies -- Atlantic City Electric (ACE), Baltimore Gas and Electric (BGE), Commonwealth Edison (ComEd), Delmarva Power & Light (DPL), PECO Energy Company (PECO), and Potomac Electric Power Company (Pepco).
In our relentless pursuit of excellence, we elevate diverse voices, fresh perspectives and bold thinking. And since we know transforming the future of energy is hard work, we provide competitive compensation, incentives, excellent benefits and the opportunity to build a rewarding career.
Are you in?
Primary Purpose:
Develops studies, plans, criteria, specifications, calculations, evaluations, design documents, performance assessments, integrated systems analyses, cost estimates, and budgets associated with the planning, design, licensing, construction, operation, and maintenance of Exelon's electric generation, transmission, distribution, gas, and telecommunication facilities/systems under the guidance of an experienced engineer. Provides consultation and recommendations within the Company, to other business units, and/or to customers as a result of studying company- or customer-owned systems, processes, equipment, vehicles, or facilities under an experienced engineer. Reviews financial data from budget and actual costs of projects under the guidance of an experienced engineer. The position may require extended hours for coverage during storms or other energy delivery emergencies.
Primary Duties:
Performs engineering assignments while exercising independent discretion under the guidance of an experienced engineer. (e.g. Collect data, perform complex analysis, interpret results, draw conclusions, and clearly present a recommendation to management)
Performs engineering tasks associated with large projects or a number of small projects. (e.g. Analyze and interpret the results of complex power flows and perform complex engineering tests, and analyze non-specific and ambiguous results)
May direct the engineering tasks associated with a large project or a number of small projects (e.g. Verify and validate studies, blueprints, or designs against accepted engineering principles and practices. Design high voltage transmission and distribution circuits, meeting all engineering standards and criteria)
Participate on teams and may lead teams.
Job Scope:
Provides technical assistance in support of senior engineers, managers and others.
Applies technical knowledge to help promote a safe work environment and to enhance customer satisfaction.
Minimum Qualifications:
Bachelor of Science degree in Engineering
2 - 4 years of professional engineering experience
Ability to analyze and interpret complex electrical and mechanical systems.
Knowledge and ability to apply problem solving approaches and engineering theory.
Knowledge of engineering designs, principles and practices.
General knowledge and experience with regulations, guides, standards, codes, methods, and practices necessary to perform assignments for a specific discipline, various installations, or services
Preferred Qualifications:
Engineer in Training License
Strong written and oral communication/presentation skills, report generation & technical writing skills
Interpersonal skills & the ability to collaborate with peers and managers
Consulting and needs assessment skills
Time, project management and multi-tasking skills
Benefits:
Annual salary will vary based on a candidate's skills, qualifications, experience, and other factors: $83,200.00/Yr. - $114,400.00/Yr.
Annual Bonus for eligible positions: 10%
401(k) match and annual company contribution
Medical, dental and vision insurance
Life and disability insurance
Generous paid time off options, including vacation, sick time, floating and fixed holidays, maternity leave and bonding/primary caregiver leave or parental leave
Employee Assistance Program and resources for mental and emotional support
Wellbeing programs such as tuition reimbursement, adoption and surrogacy assistance and fitness reimbursement
Referral bonus program
And much more
Note: Exelon-sponsored compensation and benefit programs may vary or not apply based on length of service, job grade, job classification or represented status. Eligibility will be determined by the written plan or program documents.
Data Engineer (Zero Trust)
Data engineer job in Fort Belvoir, VA
Kavaliro is seeking a Zero Trust Security Architect / Data Engineer to support a mission-critical program by integrating secure architecture principles, strengthening data security, and advancing Zero Trust initiatives across the enterprise.
Key Responsibilities
Develop and implement program protection planning, including IT supply chain security, anti-tampering methods, and risk management aligned to DoD Zero Trust Architecture.
Apply secure system design tools, automated analysis methods, and architectural frameworks to build resilient, least-privilege, continuously monitored environments.
Integrate Zero Trust Data Pillar capabilities: data labeling, tagging, classification, encryption at rest/in transit, access policy definition, monitoring, and auditing.
Analyze and interpret data from multiple structured and unstructured sources to support decision-making and identify anomalies or vulnerabilities.
Assess cybersecurity principles, threats, and vulnerabilities impacting enterprise data systems, including risks such as corruption, exfiltration, and denial-of-service.
Support systems engineering activities, ensuring secure integration of technologies and alignment with Zero Trust operational objectives.
Design and maintain secure network architectures that balance security controls, mission requirements, and operational tradeoffs.
Generate queries, algorithms, and reports to evaluate data structures, identify patterns, and improve system integrity and performance.
Ensure compliance with organizational cybersecurity requirements, particularly confidentiality, integrity, availability, authentication, and non-repudiation.
Evaluate impacts of cybersecurity lapses and implement safeguards to protect mission-critical data systems.
Structure, format, and present data effectively across tools, dashboards, and reporting platforms.
Maintain knowledge of enterprise information security architecture and database systems to support secure data flow and system design.
Requirements
Active TS/SCI security clearance (required).
Deep knowledge of Zero Trust principles (never trust, always verify; explicit authentication; least privilege; continuous monitoring).
Experience with program protection planning, IT supply chain risk management, and anti-tampering techniques.
Strong understanding of cybersecurity principles, CIA triad requirements, and data-focused threats (corruption, exfiltration, denial-of-service).
Proficiency in secure system design, automated systems analysis tools, and systems engineering processes.
Ability to work with structured and unstructured data, including developing queries, algorithms, and analytical reports.
Knowledge of database systems, enterprise information security architecture, and data structuring/presentation techniques.
Understanding of network design processes, security tradeoffs, and enterprise architecture integration.
Strong ability to interpret data from multiple tools to support security decision-making.
Familiarity with impacts of cybersecurity lapses on data systems and operational environments.
Kavaliro is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.
Data Engineer
Data engineer job in McLean, VA
Immediate need for a talented Data Engineer. This is a 12-month contract opportunity with long-term potential, located in McLean, VA (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93504
Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning.
Develop backend and automation tools using Golang and/or Python as needed.
Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
Perform root-cause analysis and implement automation to prevent recurring issues.
Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
Ensure compliance with enterprise governance, data quality, and cloud security standards.
Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.
Key Requirements and Technology Experience:
Key skills: Python, Spark/PySpark, Golang, Java, AWS (Glue, EC2, Lambda); ability to write and troubleshoot complex SQL queries against Snowflake tables.
Proficiency in Python with experience building scalable data pipelines or ETL processes.
Strong hands-on experience with Spark/PySpark for distributed data processing.
Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
Experience with Golang for scripting, backend services, or performance-critical processes.
Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
Familiarity with CI/CD workflows, Git, and automated testing.
Our client is a leading Banking and Financial Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Data Engineer
Data engineer job in Falls Church, VA
*** W2 Contract Only - No C2C - No 3rd Parties ***
The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA.
In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.
Compensation, Benefits, and Role Info
Competitive pay rate of $65 per hour.
Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting.
Type: 12-month contract with potential extension or conversion.
Location: On-site in Falls Church, VA.
What You'll Be Doing
Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources.
Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance.
Write and optimize complex, highly performant SQL queries across large datasets (Redshift, Oracle, SQL Server).
Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions.
Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.
What We're Looking For
8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems.
5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark.
Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift.
Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance.
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.
If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure.
Data Scientist
Data engineer job in Columbia, MD
Data Scientist - Transit Data Focus | Columbia, MD (On-site/Hybrid) | Contract (6 Months)
Data Scientist - Transit Data Focus
Employment type: Contract
Duration: 6 Months
Justification: To manage and analyze customer databases, AVA (automated voice announcement), and schedule data for predictive maintenance and service planning.
Experience Level: 3-5 years
Job Responsibilities:
Collect, process, and analyze transit-related datasets including customer databases, AVA (automated voice announcement) logs, real-time vehicle data, and schedule data.
Develop predictive models and data-driven insights to support maintenance forecasting, service planning, and operational optimization.
Design and implement data pipelines to integrate, clean, and transform large, heterogeneous transit data sources.
Perform statistical analysis and machine learning to identify patterns, trends, and anomalies relevant to transit service performance and reliability.
Collaborate with transit planners, maintenance teams, and IT staff to translate data insights into actionable business strategies.
Monitor data quality and integrity; implement data validation and cleansing processes.
Technical Skills & Qualifications:
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Transportation Engineering, or a related quantitative field.
3-5 years of experience working as a data scientist or data analyst, preferably in a transit, transportation, or public sector environment.
Strong proficiency in Python or R for data analysis, statistical modeling, and machine learning.
Experience with SQL for database querying, manipulation, and data extraction.
Familiarity with transit data standards such as GTFS, AVL/CAD, APC (Automated Passenger Counters), and AVA systems.
Experience with data visualization tools such as Power BI or equivalent.
Azure Data Modeler
Data engineer job in Washington, DC
Azure Data Modeler - Budget Transformation Project
Our client is embarking on a major budget transformation initiative and is seeking an experienced Azure Data Modeler to support data architecture, modeling, and migration activities. This role will play a critical part in designing and optimizing data structures as the organization transitions to SAP. Experience with SAP is preferred, but strong ERP data experience in any platform is also valuable.
Responsibilities
Design, develop, and optimize data models within the Microsoft Azure environment.
Support data architecture needs across the budget transformation program.
Partner with cross-functional stakeholders to enable the transition to SAP (or other ERP systems).
Participate in data migration planning, execution, and validation efforts.
Work collaboratively within SAFe Agile teams and support sprint activities.
Provide off-hours support as needed for critical tasks and migration windows.
Engage onsite in Washington, DC up to three days per week.
Required Qualifications
Strong hands-on expertise in data architecture and data model design.
Proven experience working with Microsoft Azure (core requirement).
Ability to work flexibly, including occasional off-hours support.
Ability to be onsite in Washington, DC as needed (up to 3 days/week).
Preferred Qualifications
Experience with SAP ECC or exposure to SAP implementations.
Experience with other major ERP systems (Oracle, Workday, etc.).
SAFe Agile certification.
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support.
Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Data Architect
Data engineer job in Arlington, VA
• Functions as the primary technical architect for data warehousing projects to solve business intelligence challenges
• Possesses deep technical expertise in database design, ETL (OWB/ODI), reporting, and analytics
• Previous consulting experience utilizing an agile delivery methodology
Position Requirements
• Must have expertise as both a solutions architect and an AI architect.
• 3+ years of experience with Azure ETL processing
• 3+ years of experience utilizing data warehousing methodologies and processes
• Strong conceptual, analytical, and decision-making skills
• Knowledge and Experience of dimensional modeling
• Strong knowledge of Azure Databricks
• Proficiency in creating PL/SQL packages
• Full SDLC and Data Modeling experience
• Ability to create both logical and physical data models
• Ability to tune databases for maximum performance
• Experience in Data Preparation: Data Profiling, Data Cleansing, and Data Auditing
• Ability to work with Business Analysts to create functional specifications and data
• Manages QA functions
• Develops unit, system, and integration test plans and manages execution
• Ability to write technical and end-user system documentation
• Excellent written and oral communication skills
• Experience transforming logical business requirements into appropriate schemas and models
• Ability to analyze and evaluate moderately to highly complex information systems, including interpreting artifacts such as Entity Relationship Diagrams, data dictionaries, record layouts, and logic flow diagrams
Cloud Data Engineer- Databricks
Data engineer job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt.
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation and interactive skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Data Architect
Data engineer job in Washington, DC
Job Title: Developer Premium I
Duration: 7 Months with long term extension
Hybrid Onsite: 4 days per week from Day 1, with a full transition to 100% onsite anticipated soon
Job Requirement:
Strong expertise in data architecture and data model design.
MS Azure (core requirement).
Experience with SAP ECC preferred.
SAFe Agile certification is a plus.
Ability to work flexibly, including off-hours, to support critical IT tasks and migration activities.
Educational Qualifications and Experience:
Bachelor's degree in Computer Science, Information Systems or in a related area of expertise.
Required number of years of proven experience in the specific technology/toolset as per Experience Matrix below for each Level.
Essential Job Functions:
Take functional specs and produce high quality technical specs
Take technical specs and produce completed and well tested programs which meet user satisfaction and acceptance, and precisely reflect the requirements - business logic, performance, and usability requirements
Conduct/attend requirements definition meetings with end-users and document system/business requirements
Conduct Peer Review on Code and Test Cases, prepared by other team members, to assess quality and compliance with coding standards
As required for the role, perform end-user demos of proposed solution and finished product, provide end user training and provide support for user acceptance testing
As required for the role, troubleshoot production support issues and find appropriate solutions within defined SLA to ensure minimal disruption to business operations
Ensure that Bank policies, procedures, and standards are factored into project design and development
As required for the role, install new release, and participate in upgrade activities
As required for the role, perform integration between on-prem systems, cloud systems, and third-party vendors
As required for the role, collaborate with different teams within the organization for infrastructure, integration, database administration support
Adhere to project schedules and report progress regularly
Prepare weekly status reports and participate in status meetings and highlight issues and constraints that would impact timely delivery of work program items
Find the appropriate tools to implement the project
Maintain knowledge of current industry standards and practices
As needed, interact and collaborate with Enterprise Architects (EA), Office of Information Security (OIS) to obtain approvals and accreditations
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Senior Data Engineer
Data engineer job in McLean, VA
The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing.
Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role.
Experience in Angular and DevOps are nice to haves for this role.
Must Have Qualifications: PySpark/Python based microservices, AWS EKS, Postgres SQL Database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube and Fortify.
Responsibilities:
Development of microservices based on Python, PySpark, AWS EKS, AWS Postgres for a data-oriented modernization project.
New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest
Perform System, functional and data analysis on the current system and create technical/functional requirement documents.
Current System: Informatica, SAS, AutoSys, DB2
Write automated tests using Behave/Cucumber, based on the new microservices-based architecture
Promote top code quality and solve issues related to performance tuning and scalability.
Strong skills in DevOps, Docker/container-based deployments to AWS EKS using Jenkins and experience with SonarQube and Fortify.
Able to communicate and engage with business teams and analyze the current business requirements (BRS documents) and create necessary data mappings.
Preferred strong skills and experience in reporting applications development and data analysis
Knowledge in Agile methodologies and technical documentation.
Lead Principal Data Solutions Architect
Data engineer job in Reston, VA
*****TO BE CONSIDERED, CANDIDATES MUST BE U.S. CITIZEN*****
***** TO BE CONSIDERED, CANDIDATES MUST BE LOCAL TO THE DC/MD/VA METRO AREA AND BE OPEN TO A HYBRID SCHEDULE IN RESTON, VA*****
Formed in 2011, Inadev is focused on its founding principle to build innovative customer-centric solutions incredibly fast, secure, and at scale. We deliver world-class digital experiences to some of the largest federal agencies and commercial companies. Our technical expertise and innovations are comprised of codeless automation, identity intelligence, immersive technology, artificial intelligence/machine learning (AI/ML), virtualization, and digital transformation.
POSITION DESCRIPTION:
Inadev is seeking a strong Lead Principal Data Solutions Architect. The primary focus will be on natural language processing (NLP), applying data mining techniques, performing statistical analysis, and building high-quality prediction systems.
PROGRAM DESCRIPTION:
This initiative focuses on modernizing and optimizing a mission-critical data environment within the immigration domain to enable advanced analytics and improved decision-making capabilities. The effort involves designing and implementing a scalable architecture that supports complex data integration, secure storage, and high-performance processing. The program emphasizes agility, innovation, and collaboration to deliver solutions that meet evolving stakeholder requirements while maintaining compliance with stringent security and governance standards.
RESPONSIBILITIES:
Leading system architecture decisions, ensuring technical alignment across teams, and advocating for best practices in cloud and data engineering.
Serve as a senior technical leader and trusted advisor, driving architectural strategy and guiding development teams through complex solution design and implementation
Serve as the lead architect and technical authority for enterprise-scale data solutions, ensuring alignment with strategic objectives and technical standards.
Drive system architecture design, including data modeling, integration patterns, and performance optimization for large-scale data warehouses.
Provide expert guidance to development teams on Agile analytics methodologies and best practices for iterative delivery.
Act as a trusted advisor and advocate for the government project lead, translating business needs into actionable technical strategies.
Oversee technical execution across multiple teams, ensuring quality, scalability, and security compliance.
Evaluate emerging technologies and recommend solutions that enhance system capabilities and operational efficiency.
NON-TECHNICAL REQUIREMENTS:
Must be a U.S. Citizen.
Must be willing to work a HYBRID schedule (2-3 days) in Reston, VA and at client locations in the Northern Virginia/DC/MD area as required.
Ability to pass a 7-year background check and obtain/maintain a U.S. Government Clearance
Strong communication and presentation skills.
Must be able to prioritize and self-start.
Must be adaptable/flexible as priorities shift.
Must be enthusiastic and have passion for learning and constant improvement.
Must be open to collaboration, feedback and client asks.
Must enjoy working with a vibrant team of outgoing personalities.
MANDATORY REQUIREMENTS/SKILLS:
Bachelor of Science degree in Computer Science, Engineering or related subject and at least 10 years of experience leading architectural design of enterprise-level data platforms, with significant focus on Databricks Lakehouse architecture.
Experience within the Federal Government, specifically DHS is preferred.
Must possess demonstrable experience with Databricks Lakehouse Platform, including Delta Lake, Unity Catalog for data governance, Delta Sharing, and Databricks SQL for analytics and BI workloads.
Must demonstrate deep expertise in Databricks Lakehouse architecture, medallion architecture (Bronze/Silver/Gold layers), Unity Catalog governance framework, and enterprise-level integration patterns using Databricks workflows and Auto Loader.
Knowledge of, and proven professional experience with, organizing the technical execution of Agile analytics using Databricks Repos, Jobs, and collaborative notebooks.
Expertise in Apache Spark on Databricks, including performance optimization, cluster management, Photon engine utilization, and Delta Lake optimization techniques (Z-ordering, liquid clustering, data skipping).
Proficiency in Databricks Unity Catalog for centralized data governance, metadata management, data lineage tracking, and access control across multi-cloud environments.
Experience with Databricks Delta Live Tables (DLT) for declarative ETL pipeline development and data quality management.
Certification in one or more: Databricks Certified Data Engineer Associate/Professional, Databricks Certified Solutions Architect, AWS, Apache Spark, or cloud platform certifications.
DESIRED REQUIREMENTS/SKILLS:
Expertise in ETL tools.
Advanced knowledge of cloud platforms (AWS preferred; Azure or GCP a plus).
Proficiency in SQL, PL/SQL, and performance tuning for large datasets.
Understanding of security frameworks and compliance standards in federal environments.
PHYSICAL DEMANDS:
Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions
Inadev Corporation does not discriminate against qualified individuals based on their status as protected veterans or individuals with disabilities and prohibits discrimination against all individuals based on their race, color, religion, sex, sexual orientation/gender identity, or national origin.
DevOps Engineer
Data engineer job in McLean, VA
The candidate should be able to drive implementation and improvement of tools and technologies for enterprise adoption in accordance with operational and security standards.
Practice and promote a Site Reliability Engineering (SRE) culture to improve and operate cloud platform offerings to the enterprise while working toward innovation, automation, and operational excellence.
Automation experience is a must for this position.
Ability to provide periodic 24x7 operational support and involvement in issue resolution is a must.
Must Have Qualifications:
Must have 5+ years of hands-on experience with AWS CloudFormation and Terraform. Automation through shell scripting and Python required (Ansible nice to have). 3+ years of experience with EKS and Kubernetes.
Technical expertise:
7+ years of overall information technology experience with an emphasis on integration and delivery of virtual/cloud platforms to enterprise applications.
At least 5 years of proven experience with AWS CloudFormation, Terraform, or similar tools.
3+ years of experience with engineering and supporting containerization technology (OpenShift, Kubernetes, AWS(ECS/EKS), etc.) at scale.
Experience in Python, Ansible and shell scripting to automate routine operation tasks.
Experience in Tetrate, Rancher, ArgoCD are highly preferred.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ***********************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Aishwarya Chandra
Email: ****************************************
Job ID: 25-53450
SharePoint Engineer
Data engineer job in Washington, DC
BlueWater Federal is looking for a SharePoint Engineer to support the Department of Energy in Washington, DC.
As the SharePoint Engineer, you will be responsible for designing, implementing, and maintaining SharePoint environments and solutions. This includes configuring sites, libraries, workflows, and web parts, ensuring system security, and supporting business processes through automation and integration.
Responsibilities
• Install, configure, and maintain SharePoint servers (on-premises and/or SharePoint Online).
• Monitor system performance, troubleshoot issues, and apply patches or updates.
• Manage permissions, security settings, and compliance requirements.
• Design and deploy SharePoint solutions, including custom workflows, forms, and web parts.
• Migrate data and content from legacy systems to SharePoint using scripts or third-party tools.
• Customize SharePoint sites to meet organizational needs.
• Collaborate with IT teams and IA.
• Provide technical support to end-users and site owners and create documentation
• Ensure adherence to security standards and organizational policies.
• Maintain knowledge of SharePoint best practices and emerging technologies.
Qualifications
• Bachelor's degree
• 10+ years of experience with SharePoint administration with a deep understanding of SharePoint Architecture, features and best practices.
• Must have an active Top Secret clearance with the ability to obtain a Q and SCI clearance
• Proficiency in PowerShell scripting for automation.
• Experience with migrating SharePoint versions on-premises or online (preferably using ShareGate)
• Experience with SharePoint components (Search, Taxonomy, Managed Metadata).
• Experience patching SharePoint servers to meet organizational security standards.
• Experience with HTML, CSS, JavaScript, REST API, and SQL is preferred
BlueWater Federal Solutions is proud to be an Equal Opportunity Employer. All qualified candidates will be considered without regard to race, color, religion, national origin, age, disability, sexual orientation, gender identity, status as a protected veteran, or any other characteristic protected by law. BlueWater Federal Solutions is a VEVRAA federal contractor and we request priority referral of veterans.
Senior Software Engineer
Data engineer job in Springfield, VA
Job Title: Senior Software Engineer
Security Clearance: Active TS/SCI (or SCI eligibility)
Omni Federal is a mid-size business focused on modern application development, cloud and data analytics for the Federal government. Our past performance is a mix of commercial and federal business that allows us to leverage the latest commercial technologies and processes and adapt them to the Federal government. Omni Federal designs, builds and operates data-rich applications leveraging advanced data modeling, machine learning and data visualization techniques to empower our customers to make better data-driven decisions.
We are seeking a strong Software Engineer to support an NGA project in Springfield, VA. This is an exciting modernization initiative where the NGA is embracing modern software development practices and using them to solve challenging missions and provide various capabilities for the NGA. This includes a modern technology stack, rapid prototyping in support of intelligence analysis products and capabilities, and a culture of innovation. Candidates must be passionate, energized and excited to work on modern architectures and solve challenging problems for our clients.
Required Skills:
BS or equivalent in Computer Science, Engineering, Mathematics, Information Systems or equivalent technical degree.
10+ years of experience in software engineering/development, or a related area that demonstrates the ability to successfully perform the duties associated with this work.
Experience in Java or Python enterprise application development
Experience building high performance applications in React.js
Web services architecture, design, and development
Experience in PostgreSQL database design
Experience working in AWS and utilizing specific AWS tooling (S3)
PAM Engineer
Data engineer job in Vienna, VA
Responsibilities
Operation of the Privileged Access Management (PAM) technologies, including accounts management, secrets management, and software and systems patching.
Lead projects to develop and deliver new security features and/or software updates.
Work with peers and stakeholders to implement and automate processes for administration and integration with external services.
Contribute to PAM Security Strategy, including discovery, gap analysis, onboarding, and contributing to short to long term delivery of services and service improvements.
Design, configure, and maintain PAM solutions for AIX, RHEL, Windows, and Mainframe systems.
Integrate the PAM solution with various technologies such as Service Now, Compute hosting, IGA, SIEM, and other solutions.
Provide security consultation on internal projects focusing on business needs, data transmission and identity security best practices.
Author and maintain documentation procedures, inventories, and diagrams for PAM systems and processes.
Monitor and respond to capacity and performance needs of the PAM infrastructure.
Provide regular reports to leadership regarding security, capacity, usage, and licensing.
Provide rotational on-call support for production PAM infrastructure systems and processes.
Qualifications
Bachelor's Degree in Information Technology, Computer Science or other related fields.
Industry certifications in cyber security or identity security attesting to broad knowledge of security best practices and design.
5-7+ years administering and maintaining Privileged Access Management (PAM) solutions, such as CyberArk, BeyondTrust, or Delinea.
Experience working in large security access system upgrades/projects using the Scaled Agile Framework (SAFe), Scrum or Kanban.
Significant experience working in a large IT organization with responsibility for supporting the technology and processes in the Privileged Access Management domain and controls program, preferably in a financial services organization.
Considerable experience with Identity and Access Management vendors like Microsoft, CyberArk, Saviynt, ServiceNow, RSA, etc.
Significant experience in working with all levels of staff, management, stakeholders, and vendors.
Significant experience administering tier zero identity infrastructure that provides AAA services such as Active Directory, Azure Active Directory, PKI, Federation Services, and RSA.
Advanced verbal and written communication skills.
Advanced research, analytical, and problem-solving skills.
Effective in producing desired results and achieving goals and objectives.
Practical skill presenting findings, conclusions, alternatives, and information clearly and concisely.
Experience in developing automated solutions and processes using PowerShell for Windows and BASH for UNIX/Linux.
Demonstrates an understanding of how PAM integrates with common resources such as Windows, Linux/UNIX, VMWare, Azure, SQL/Oracle/DB2 database systems, Network appliances, and Mainframe.
Familiar with change control processes (Production Discipline) to ensure up time and business continuity.
Other qualifications:
CyberArk certifications (Defender, Sentry, Guardian; listed in ascending order).
Solid experience building and deploying PSM and CPM connectors.
Scripting background for automation and Ansible (candidates should preferably not rely solely on AI or Google).
Experience with Credential Providers (AAM and CCP) Setup, Deployment, Support, Use.
PTA experience (nice to have).
Physical Server and OS platform expertise (nice to have).
CC Pace is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, national origin, age, disability, genetic information, or any other protected characteristic under federal, state, or local laws.
CC Pace are committed to employing only candidates who are legally authorized to work in the United States. For us to comply with the Immigration Reform and Control Act of 1986, all new employees, as a condition of employment, must complete the Employment Eligibility Verification Form I-9 and provide documentation that establishes identity and authorization to work. E-Verify will be used for employment verification as part of your onboarding process.
CC Pace values integrity throughout our hiring process. As part of our standard verification procedures, candidates will be asked to provide documentation confirming employment history, education, and work authorization.
Software Integrator
Data engineer job in Manassas, VA
Software Integrator - 100% On Site in Manassas, VA
Client is seeking to hire a Software Integrator to support the Acoustics Rapid COTS Insertion (ARCI) program.
Education:
Bachelor's degree in Computer/Electrical Engineering or Computer Science degree from an accredited university.
2+ years of experience.
Job Responsibilities:
Participate in software development lifecycle including software design, development, integration, test, and support for new and existing software products.
Designing, implementing, testing and debugging complex software applications
Support continuous integration/continuous delivery (CI/CD) and Agile-style development practices
Basic Qualifications:
Bachelor's degree in Computer/Electrical Engineering or Computer Science degree from an accredited university or equivalent related experience.
Experience with Linux Operating Systems
2+ years of related C, C++, and/or Java experience
Experience with inter-process communications and real time systems
Experience with configuration management software (e.g., Subversion and/or Git)
Backend Software Engineer
Data engineer job in Fort Belvoir, VA
As a back-end developer, you know that a good site or system needs the right combination of clean code, APIs, analytics, and infrastructure to develop a user-focused solution. We're looking for a back-end developer with the software engineering skills it takes to help identify potential risks, contribute to solution development, and create efficient and effective systems for our clients.
As a back-end developer, you'll use the latest architectural approaches and open-source frameworks and tools to help deliver solutions. Using your software engineering knowledge, you'll work with and learn from the development team to create custom tools, systems, and sites with consistent performance and scalability.
In this role, you'll make a mission-forward impact as you sharpen your skillset and grow your career. Work with us as we shape systems for the better.
Qualifications
Experience with programming languages such as Ruby, Python, C#, Java, or PowerShell
TS/SCI clearance
HS diploma or GED and 7+ years of experience as a Software Engineer, or Bachelor's degree and 3+ years of experience as a Software Engineer
Certified Secure Software Lifecycle Professional (CSSLP) Certification
Additional Qualifications
Experience working on multiple OS platforms, including Linux and Windows
Experience with the Windows Computing Environment (CE)
Linux CE Certification
DoD Approved 8570 - Information Assurance Technician (IAT) Level II Certification such as CCNA Security, CySA+, GICSP, GSEC, Security+ CE, CND, or SSCP Certification, or higher level IAT Certification
Clearance:
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information; TS/SCI clearance is required.
Compensation and Benefits
Salary Range: $100,000 - $140,000 MAX (Compensation is determined by various factors, including but not limited to location, work experience, skills, education, certifications, seniority, and business needs. This range may be modified in the future.)
Benefits: Gridiron offers a comprehensive benefits package including medical, dental, vision insurance, HSA, FSA, 401(k), disability & ADD insurance, life and pet insurance to eligible employees. Full-time and part-time employees working at least 30 hours per week on a regular basis are eligible to participate in Gridiron's benefits programs.
Gridiron IT Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status or disability status.
Gridiron IT is a Women Owned Small Business (WOSB) headquartered in the Washington, D.C. area that supports our clients' missions throughout the United States. Gridiron IT specializes in providing comprehensive IT services tailored to meet the needs of federal agencies. Our capabilities include IT Infrastructure & Cloud Services, Cyber Security, Software Integration & Development, Data Solution & AI, and Enterprise Applications. These capabilities are backed by Gridiron IT's experienced workforce and our commitment to ensuring we meet and exceed our clients' expectations.
Senior Data Engineer
Data engineer job in McLean, VA
Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in McLean, VA (Remote). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-84666
Pay Range: $64 - $68/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Demonstrated ability in implementing data warehouse solutions using modern data platforms such as Client, Databricks, or Redshift.
Build data integration solutions between transaction systems and analytics platforms.
Expand data integration solutions to ingest data from internal and external sources and further transform it to meet business consumption needs.
Develop solutions for a variety of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI, and reporting.
Fundamental understanding of building data products through data enrichment and ML.
Act as a team player and share knowledge with the existing team members.
Key Requirements and Technology Experience:
Key skills: Python, AWS, Snowflake
Bachelor's degree in computer science or a related field.
Minimum 5 years of experience building data-driven solutions.
At least 3 years of experience working with AWS services.
Applicants must be authorized to work in the US without requiring employer sponsorship currently or in the future. U.S. FinTech does not offer H-1B sponsorship for this position.
Expertise in real-time data solutions; knowledge of stream processing, message-oriented platforms, and ETL/ELT tools is good to have.
Strong scripting experience using Python and SQL.
Working knowledge of foundational AWS compute, storage, networking and IAM.
Understanding of generative AI models, prompt engineering, RAG, fine-tuning, and pre-training is a plus.
Solid scripting experience in AWS using Lambda functions.
Knowledge of CloudFormation templates preferred.
Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Client.
Experience building data pipelines, with an understanding of ingestion and transformation of structured, semi-structured, and unstructured data across cloud services.
Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions.
Experience with one or more data integration tools such as Attunity (Qlik), AWS Glue ETL, Talend, Kafka, etc.
Strong understanding of data security: authorization, authentication, encryption, and network security.
Hands-on experience using and extending machine learning frameworks and libraries, e.g., scikit-learn, PyTorch, TensorFlow, XGBoost, etc., preferred.
Experience with AWS SageMaker family of services or similar tools to develop machine learning models preferred.
Strong written and verbal communication skills to facilitate meetings and workshops; gather data, functional, and technology requirements; and document processes, data flows, gap analyses, and associated data to support data management/governance efforts.
Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures.
Demonstrated ability to be self-directed, with excellent organizational, analytical, and interpersonal skills, consistently meeting or exceeding deadlines and deliverables.
Strong understanding of the importance and benefits of good data quality, and the ability to champion results across functions.
Our client is a leader in the financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.