Engineer IV
Data engineer job in Charlotte, NC
Additional Information: Free parking & meals. Job Number: 25187549. Job Category: Engineering & Facilities. Location: Charlotte Marriott SouthPark, 2200 Rexford Road, Charlotte, North Carolina, United States, 28211. Schedule: Full Time. Located Remotely? N. Type: Non-Management
Pay Range: $26.78-$28.39 per hour
POSITION SUMMARY
Respond and attend to guest repair requests. Communicate with guests/customers to resolve maintenance issues. Perform preventive maintenance on tools and equipment, including cleaning and lubrication. Visually inspect tools, equipment, or machines. Carry equipment (e.g., tools, radio). Identify, locate, and operate all shut-off valves for equipment and utility shut-offs for buildings. Maintain maintenance inventory and requisition parts and supplies as needed. Ensure that each day's activities and any problems that occur are communicated to the other shifts using approved communication programs and standards.
Display thorough knowledge of building systems, emergency response, and building documentation, including reading standard blueprints and electrical schematics concerning plumbing and HVAC. Display advanced engineering operations skills and general mechanical ability. Display professional journeyman-level expertise in at least three of the following areas, with basic skills in the remaining: air conditioning and refrigeration, electrical, plumbing, carpentry and finish skills, mechanical, general building management, pneumatic/electronic systems and controls, and/or energy conservation. Display solid knowledge and skill in the safe use of hand and power tools and other materials required to perform repair and maintenance tasks.
Safely perform highly complex repairs of the physical property, electrical, plumbing and mechanical equipment, air conditioners, refrigeration, and pool heaters, ensuring all methods, materials, and practices meet company standards and local and national codes, with little or no supervision. Perform routine inspections of the entire property, noting safety hazards, lack of illumination, and down equipment (such as ice makers, fans, extractors, pumps), and take immediate corrective action. Inspect and repair all mechanical equipment including, but not limited to: appliances, HVAC, electrical and plumbing components; diagnose and repair boilers, pumps, and related components. Use the Lockout/Tagout system before performing any maintenance work.
Display thorough knowledge of maintenance contracts and vendors. Display advanced knowledge of engineering computer programs related to preventative maintenance, energy management, and other systems, including devices that interact with such programs. Perform advanced troubleshooting of hotel Mechanical, Electrical, and Plumbing (MEP) systems. Display the ability to train and mentor other engineers (e.g., Engineers I, II, and III) as necessary, supervise work in progress, and act in a supervisory role in the absence of supervisors and/or management. Display the ability to perform Engineer on Duty responsibilities, including readings and rounds.
Follow all company and safety and security policies and procedures; report any maintenance problems, safety hazards, accidents, or injuries; complete safety training and certifications; and properly store flammable materials. Ensure uniform and personal appearances are clean and professional, maintain confidentiality of proprietary information, and protect company assets. Welcome and acknowledge all guests according to company standards, anticipate and address guests' service needs, assist individuals with disabilities, and thank guests with genuine appreciation. Adhere to quality expectations and standards. Develop and maintain positive working relationships with others, support the team to reach common goals, and listen and respond appropriately to the concerns of other employees. Speak with others using clear and professional language.
Move, lift, carry, push, pull, and place objects weighing less than or equal to 50 pounds without assistance, and perform heavier lifting or movement tasks with assistance. Move up and down stairs, service ramps, and/or ladders. Reach overhead and below the knees, including bending, twisting, pulling, and stooping. Enter and locate work-related information using computers and/or point of sale systems. Perform other reasonable job duties as requested.
PREFERRED QUALIFICATIONS
Education: High school diploma or G.E.D. equivalent.
Certificate from a two-year technical diploma program in HVAC/refrigeration.
Related Work Experience: Extensive experience and training in general maintenance (advanced repairs), electrical or refrigeration, exterior and interior surface preparation and painting.
At least 2 years of hotel engineering/maintenance experience.
Supervisory Experience: No supervisory experience.
REQUIRED QUALIFICATIONS
License or Certification: Valid Driver's License
License or certification in refrigeration or electrical (earned, or currently working towards receiving)
Universal Chlorofluorocarbon (CFC) certification
Must meet applicable state and federal certification and/or licensing requirements.
At Marriott International, we are dedicated to being an equal opportunity employer, welcoming all and providing access to opportunity. We actively foster an environment where the unique backgrounds of our associates are valued and celebrated. Our greatest strength lies in the rich blend of culture, talent, and experiences of our associates. We are committed to non-discrimination on any protected basis, including disability, veteran status, or other basis protected by applicable law.
Marriott Hotels strive to elevate the art of hospitality, innovating at every opportunity while keeping the comfort of the oh-so-familiar all around the globe. As a host with Marriott Hotels, you will help keep the promise of “Wonderful Hospitality. Always.” by delivering thoughtful, heartfelt, forward-thinking service that upholds and builds upon this living legacy. With the name that's synonymous with hospitality the world over, we are proud to welcome you to explore a career with Marriott Hotels. In joining Marriott Hotels, you join a portfolio of brands with Marriott International. Be where you can do your best work, begin your purpose, belong to an amazing global team, and become the best version of you.
JW Marriott is part of Marriott International's luxury portfolio and consists of more than 100 beautiful properties in gateway cities and distinctive resort locations around the world. JW believes our associates come first. Because if you're happy, our guests will be happy. JW Marriott associates are confident, innovative, genuine, intuitive, and carry on the legacy of the brand's namesake and company founder, J. Willard Marriott. Our hotels offer a work experience unlike any other, where you'll be part of a community and enjoy true camaraderie with a diverse group of co-workers. JW creates opportunities for training, development, recognition and most importantly, a place where you can pursue your passions in a luxury environment with a focus on holistic well-being. Treating guests exceptionally starts with the way we take care of our associates. That's The JW Treatment™. In joining JW Marriott, you join a portfolio of brands with Marriott International. Be where you can do your best work, begin your purpose, belong to an amazing global team, and become the best version of you.
Observability Engineer
Data engineer job in McLean, VA
Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people are working together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can: Shape your own future, build the next big thing, and change the world.
To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, Asylee, or Refugee.
Note on Cleared Roles: If this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status.
Export Control/ITAR:
Certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3).
Please review the job details below.
This position requires an active U.S. Government Security Clearance at the TS/SCI level with required polygraph.
We are looking for a full-time Observability Engineer (OE) to gain deeper insights into complex systems and cloud-native environments. This role is part of our data collection and software development team that ensures Vantor's services meet reliability and uptime standards appropriate to customers' needs. The environment calls for a fast rate of improvement while keeping an ever-watchful eye on capacity, performance, and cost.
The OE will have the mindset and the engineering approaches to understand “the what” and “the why”. They will build monitoring solutions to gain visibility into operational problems, ensuring customer value and satisfaction are achieved. Their focus is to drive observability and monitoring for new and existing systems in order to provide systems insight and resolve application and infrastructure issues. The successful candidate has the breadth of knowledge to discover, implement, and collaborate with teammates on solutions for complex problems across the entire technology stack.
Responsibilities:
Define standards for monitoring the reliability, availability, maintainability and performance of sponsor-owned and operated systems.
Design and architect operational solutions for managing applications and infrastructure.
Drive service acceptance by adopting new processes into operations, developing new monitoring to expose risks, and automating repeatable actions.
Partner with service and product owners to establish key performance indicators to identify trends and achieve better outcomes.
Provide deep troubleshooting for production issues.
Engage with service owners to maximize the team's ability to identify and remediate root-cause performance issues quickly, ensuring rapid recovery from service interruptions.
Build and/or use tools to correlate disparate data sets in an efficient and automated way, helping teams quickly identify the root cause of issues and understand how different problems relate to each other (an illustrative sketch follows this list).
Coordinate with the sponsor to support major incidents, large-scale deployments and SecOps user support.
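As a rough illustration of the data-correlation responsibility above, the following minimal Python sketch aligns a latency metric export with application error counts on a shared time axis. The file names, column names, and one-minute bucket size are illustrative assumptions, not details from this posting.

```python
# Minimal sketch: correlate latency metrics with application error counts on a
# shared time axis to surface candidate root causes. File names, column names,
# and the one-minute bucket size are illustrative assumptions.
import pandas as pd

# Hypothetical exports: one from a metrics store, one from a log aggregator.
latency = pd.read_csv("latency_metrics.csv", parse_dates=["timestamp"])
errors = pd.read_csv("app_error_counts.csv", parse_dates=["timestamp"])

# Resample both series onto one-minute buckets so they can be joined directly.
latency_1m = latency.set_index("timestamp")["p99_latency_ms"].resample("1min").mean()
errors_1m = errors.set_index("timestamp")["error_count"].resample("1min").sum()

joined = pd.concat([latency_1m, errors_1m], axis=1).dropna()

# A simple correlation shows whether error spikes track latency spikes; real
# tooling would add service/host dimensions and alerting thresholds.
print(joined.corr())
print(joined.sort_values("error_count", ascending=False).head())
```

In practice the same join-on-time pattern generalizes to any two telemetry sources, which is why the role emphasizes automating it rather than doing it by hand during incidents.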
Minimum Qualifications:
US citizenship required
Active/current TS/SCI with required polygraph
Bachelor's degree in computer science or related area of study
Minimum 5 years of experience
Working knowledge of K8s, Docker, Helm and automated deployment via pipeline (e.g. Concourse or Jenkins)
Familiarity with distributed version control systems such as Git
Experience with AWS cloud services
Experience with setting up monitoring and observability solutions across sponsor owned systems, tools and data feeds
Proficient in scripting with Python and Java
Willingness to work onsite full time
Ability and willingness to share on-call responsibilities
Advanced knowledge of Unix/Linux systems, with high comfort level at the command line
Preferred Qualifications:
Experience with other cloud services providers beyond AWS
Experience with CloudWatch or other monitoring tools inside of AWS
Familiarity with Prometheus/Grafana or other monitoring tools for ETL feeds, APIs, servers, C2S services, networks and AI/ML capabilities
Good understanding of networking fundamentals
Organized with an ability to document and communicate ongoing work tasks and projects
Receptive to giving, receiving and implementing feedback in a highly collaborative environment
Understanding of Incident and Problem Management
Effectively prioritize work and encourage best practices in others
Meticulous and cautious with the ability to identify and consider all risks and balance those with performing the task efficiently
Experience with Root Cause Analysis (RCA)
Experience with ETL processes
Willingness to step in as a leader to address ongoing incidents and problems, while providing guidance to others in order to drive to a resolution
Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, the experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range.
The base pay for this position within California, Colorado, Hawaii, New Jersey, the Washington, DC metropolitan area, and for all other states is:
$180,000.00 - $220,000.00
Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************
The application window is three days from the date the job is posted, and the posting will remain open until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date, and the posting will again remain open until a qualified candidate has been identified for hire.
The date of posting can be found on Vantor's Career page at the top of each job posting.
To apply, submit your application via Vantor's Career page.
EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
System Development Engineer II, DBS Relational ADC
Data engineer job in Herndon, VA
System Development Engineer The Amazon Web Services team is innovating new ways of building massively scalable distributed systems and delivering the next generation of cloud computing with AWS offerings like RDS and Aurora. In 2013, AWS launched 280 services, but in 2016 alone we released nearly 1000. We hold high standards for our computer systems and the services we deliver to our customers: our systems are highly secure, highly reliable, highly available, all while functioning at massive scale; our employees are smart, passionate about the cloud, driven to serve customers, and fun to work with.
A successful engineer joining the team will do much more than write code and triage problems. They will work with Amazon's largest and most demanding customers to address specific needs across a full suite of services. They will dive deeply into technical issues and work diligently to improve the customer experience. The ideal candidate will...
- Be great fun to work with. Our company credo is "Work hard. Have fun. Make history". The right candidate will love what they do and instinctively know how to make work fun.
- Have strong Linux & Networking Fundamentals. The ideal candidate will have deep experience working with Linux, preferably in a large scale, distributed environment. You understand networking technology and how servers and networks inter-relate. You regularly take part in deep-dive troubleshooting and conduct technical post-mortem discussions to identify the root cause of complex issues.
- Love to code. Whether it's building tools in Java or solving complex system problems in Python, the ideal candidate will love using technology to solve problems. You have a solid understanding of software development methodology and know how to use the right tool for the right job.
- Think Big. The ideal candidate will build and deploy solutions across thousands of devices. You will strive to improve and streamline processes to allow for work on a massive scale.
This position requires that the candidate selected must currently possess and maintain an active TS/SCI security clearance with polygraph. The position further requires the candidate to opt into a commensurate clearance for each government agency for which they perform AWS work.
Key job responsibilities
- You design, implement, and deploy software components and features. You solve difficult problems, generating positive feedback.
- You have a solid understanding of design approaches (and how to best use them).
- You are able to work independently and with your team to deliver software successfully.
- Your work is consistently of a high quality (e.g., secure, testable, maintainable, low-defects, efficient, etc.) and incorporates best practices. Your team trusts your work.
- Your code reviews tend to be rapid and uneventful. You provide useful code reviews for changes submitted by others.
- You focus on operational excellence, constructively identifying problems and proposing solutions, taking on projects that improve your team's software, making it better and easier to maintain.
- You make improvements to your team's development and testing processes.
- You have established good working relationships with peers. You recognize discordant views and take part in constructive dialogue to resolve them.
- You are able to confidently train new teammates about your customers, what your team's software does, how it is constructed and tested, how it operates, and how it fits into the bigger picture.
A day in the life
Engineers in this role will work on automation, development, and operations to support AWS machine learning services for US government customers. They will work in an agile environment, attend daily standup, and collaborate closely with teammates. They will work on exciting challenges at scale and tackle unsolved problems.
They will support the U.S. Intelligence Community and Defense agencies to implement innovative cloud computing solutions and solve unique technical problems.
About the team
Why AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Utility Computing (UC)
AWS Utility Computing (UC) provides product innovations - from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Mentorship and Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Diverse Experiences
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
BASIC QUALIFICATIONS
- Bachelor's degree in computer science or equivalent
- 3+ years of non-internship professional software development experience
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby
- Knowledge of systems engineering fundamentals (networking, storage, operating systems)
- 1+ years of designing or architecting (design patterns, reliability and scaling) of new and existing systems experience
- Current, active US Government Security Clearance of TS/SCI with Polygraph
PREFERRED QUALIFICATIONS
- Experience with PowerShell (preferred), Python, Ruby, or Java
- Experience working in an Agile environment using the Scrum methodology
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $116,300/year in our lowest geographic market up to $201,200/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
Cybersecurity Engineer III **
Data engineer job in Virginia Beach, VA
SimVentions, consistently voted one of Virginia's Best Places to Work, is looking for an experienced cybersecurity professional to join our team! As a Cybersecurity Engineer III, you will play a key role in advancing cybersecurity operations by performing in-depth system hardening, vulnerability assessment, and security compliance activities in accordance with DoD requirements. The ideal candidate will have a solid foundation in cybersecurity practices and proven experience supporting both Linux and Windows environments across DoD networks. You will work collaboratively with Blue Team, Red Team, and other Cybersecurity professionals on overall cyber readiness defense and system accreditation efforts.
** Position is contingent upon award of contract, anticipated in December of 2025. **
Clearance:
An ACTIVE Secret clearance (IT Level II Tier 5 / Special-Sensitive Position) is required for this position. Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information. US Citizenship is required to obtain a clearance.
Requirements:
In-depth understanding of computer security, military system specifications, and DoD cybersecurity policies
Strong ability to communicate clearly and succinctly in written and oral presentations
Must possess one of the following DoD 8570.01-M IAT Level III baseline certifications:
CASP+ CE
CCNP Security
CISA
CISSP (Associate)
CISSP
GCED
GCIH
CCSP
Responsibilities:
Develop Assessment and Authorization (A&A) packages for various systems
Develop and maintain security documentation such as:
Authorization Boundary Diagram
System Hardware/Software/Information Flow
System Security Plan
Privacy Impact Assessment
e-Authentication
Implementation Plan
System Level Continuous Monitoring Plan
Ports, Protocols and Services Registration
Plan of Action and Milestones (POA&M)
Conduct annual FISMA assessments
Perform Continuous Monitoring of Authorized Systems
Generate and update test plans; conduct testing of the system components using the Assured Compliance Assessment Solution (ACAS) tool, implement Security Technical Implementation Guides (STIG), and conduct Information Assurance Vulnerability Management (IAVM) reviews
Perform automated ACAS scanning, STIG, SCAP checks (Evaluate STIG, Tenable Nessus, etc.) on various standalone and networked systems
Analyze cybersecurity test scan results and develop/assist with documenting open findings in the Plan of Action and Milestones (POA&M)
Analyze DISA Security Technical Implementation Guide test results and develop/assist with documenting open findings in the Plan of Action and Milestones
Preferred Skills and Experience:
A combined total of ten (10) years of full-time professional experience in all of the following functional areas:
Computer security, military system specifications, and DoD cybersecurity policies
National Cyber Range Complex (NCRC) Total Ship Computing Environment (TSCE) Program requirements and mission, ship install requirements, and protocols (preferred)
Risk Management Framework (RMF), and the implementation of Cybersecurity and IA boundary defense techniques and various IA-enabled appliances. Examples of these appliances and applications are firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), switches/routers, Cross Domain Solutions (CDS), eMASS, and Endpoint Security Solutions (ESS)
Performing STIG implementation
Performing vulnerability assessments with the ACAS tool
Remediating vulnerability findings to include implementing vendor patches on both Linux and Windows Operating systems
Education: Bachelor of Science in Information Systems, Bachelor of Science in Information Technology, Bachelor of Science in Computer Science, Bachelor of Science in Computer Engineering
Compensation:
Compensation at SimVentions is determined by a number of factors, including, but not limited to, the candidate's experience, education, training, security clearance, work location, skills, knowledge, and competencies, as well as alignment with our corporate compensation plan and contract specific requirements.
The projected annual compensation range for this position is $90,000 - $140,000 (USD). This estimate reflects the standard salary range for this position and is just one component of the total compensation package that SimVentions offers.
Benefits:
At SimVentions, we're committed to supporting the total well-being of our employees and their families. Our benefit offerings include comprehensive health and welfare plans to serve a variety of needs.
We offer:
Medical, dental, vision, and prescription drug coverage
Employee Stock Ownership Plan (ESOP)
Competitive 401(k) programs
Retirement and Financial Counselors
Health Savings and Health Reimbursement Accounts
Flexible Spending Accounts
Life insurance, short- & long-term disability
Continuing Education Assistance
Paid Time Off, Paid Holidays, Paid Leave (e.g., Maternity, Paternity, Jury Duty, Bereavement, Military)
Third Party Employee Assistance Program that offers emotional and lifestyle well-being services, to include free counseling
Supplemental Benefit Program
Why Work for SimVentions?:
SimVentions is about more than just being a place to work with other growth-oriented, technically exceptional experts. It's also a fun place to work. Our family-friendly atmosphere encourages our employee-owners to imagine, create, explore, discover, and do great things together.
Support Our Warfighters
SimVentions is a proud supporter of the U.S. military, and we take pride in our ability to provide relevant, game-changing solutions to our armed men and women around the world.
Drive Customer Success
We deliver innovative products and solutions that go beyond the expected. This means you can expect to work with a team that will allow you to grow, have a voice, and make an impact.
Get Involved in Giving Back
We believe a well-rounded company starts with well-rounded employees, which is why we offer diverse service opportunities for our team throughout the year.
Build Innovative Technology
SimVentions takes pride in its innovative and cutting-edge technology, so you can be sure that whatever project you work on, you will be having a direct impact on our customer's success.
Work with Brilliant People
We don't just hire the smartest people; we seek experienced, creative individuals who are passionate about their work and thrive in our unique culture.
Create Meaningful Solutions
We are trusted partners with our customers, who bring us challenging and meaningful requirements to solve.
Employees who join SimVentions will enjoy additional perks like:
Employee Ownership: Work with the best and help build YOUR company!
Family focus: Work for a team that recognizes the importance of family time.
Culture: Add to our culture of technical excellence and collaboration.
Dress code: Business casual, we like to be comfortable while we work.
Resources: Excellent facilities, tools, and training opportunities to grow in your field.
Open communication: Work in an environment where your voice matters.
Corporate Fellowship: Opportunities to participate in company sports teams and employee-led interest groups for personal and professional development.
Employee Appreciation: Multiple corporate events throughout the year, including Holiday Events, Company Picnic, Imagineering Day, and more.
Founding Partner of the FredNats Baseball team: Equitable distribution of tickets for every home game to be enjoyed by our employee-owners and their families from our private suite.
Food: We have a lot of food around here!
Data Engineer
Data engineer job in Richmond, VA
Data Engineer - Distributed Energy Resources (DER)
Richmond, VA - Hybrid (1 week on - 1 week off)
12-month contract (Multiple Year Project)
$45-55/hr. depending on experience
We are hiring a Data Integration Engineer to join one of our Fortune 500 utilities partners in the Richmond area! In this role, you will support our client's rapidly growing Distributed Energy Resources (DER) and Virtual Power Plant (VPP) initiatives. You will be responsible for integrating data across platforms such as Salesforce, GIS, SAP, Oracle, and Snowflake to build our client's centralized asset tracking system for thermostats, EV chargers, solar assets, home batteries, and more.
In this role, you will map data, work with APIs, support Agile product squads, and help design system integrations that enable our client to manage customer energy assets and demand response programs at scale. This is a highly visible position on a brand-new product team with the chance to work on cutting-edge energy and utility modernization efforts. If you are interested, please apply!
MINIMUM QUALIFICATIONS:
3-5 years of experience in system integration, data engineering, or data warehousing, and a Bachelor's degree in Computer Science, Engineering, or a related technical discipline.
Hands-on experience working with REST APIs and integrating enterprise systems.
Strong understanding of data structures, data types, and data mapping.
Familiarity with Snowflake or similar data warehousing platform.
Experience connecting data across platforms and/or integrating data from a variety of sources (e.g., SAP, Oracle).
Ability to work independently and solve problems in a fast-paced Agile environment.
Excellent communication skills with the ability to collaborate across IT, business, engineering, and product teams.
RESPONSIBILITIES:
Integrate and map data across Salesforce, GIS, Snowflake, SAP, Oracle, and other enterprise systems
Link distributed energy asset data (EV chargers, thermostats, solar, home batteries, etc.) into a unified asset tracking database
Support API-first integrations: consuming, analyzing, and working with RESTful services (see the sketch after this list)
Participate in Agile ceremonies and work through user stories in Jira
Collaborate with product owners, BAs, data analysts, architects, and engineers to translate requirements into actionable technical tasks
Support architecture activities such as identifying data sources, formats, mappings, and integration patterns
Help design and optimize integration workflows across new and existing platforms
Work within newly formed Agile product squads focused on VPP/Asset Tracking and Customer Segmentation
Troubleshoot integration issues and identify long-term solutions
Contribute to building net-new systems and tools as the client expands DER offerings
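To make the API-first integration and data-mapping duties above concrete, here is a minimal Python sketch that pulls device records from a hypothetical DER vendor REST API and maps them onto a unified asset-tracking schema. The endpoint, authentication, field names, and target columns are assumptions for illustration, not the client's actual specification.

```python
# Minimal sketch: pull device records from a (hypothetical) DER vendor REST API
# and map them into a unified asset-tracking schema. Endpoint, field names, and
# target columns are illustrative assumptions.
import requests
import pandas as pd

API_URL = "https://example-der-vendor.invalid/api/v1/devices"  # placeholder endpoint

resp = requests.get(API_URL, headers={"Authorization": "Bearer <token>"}, timeout=30)
resp.raise_for_status()
devices = resp.json()["devices"]

# Map vendor fields onto the unified asset model used for tracking.
FIELD_MAP = {
    "deviceId": "asset_id",
    "deviceType": "asset_type",    # thermostat, EV charger, battery, solar
    "customerRef": "customer_id",
    "installDate": "installed_on",
    "ratedKw": "capacity_kw",
}

records = [{target: d.get(source) for source, target in FIELD_MAP.items()} for d in devices]
assets = pd.DataFrame(records)
assets["installed_on"] = pd.to_datetime(assets["installed_on"], errors="coerce")

# From here the frame could be staged into Snowflake or another warehouse table.
print(assets.head())
```

The same map-then-stage pattern applies whether the source is Salesforce, SAP, GIS, or a vendor API; only the field map and the landing table change.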
NICE TO HAVES:
Experience with Salesforce.
Experience working with GIS systems or spatial data.
Understanding customer enrollment systems.
Jira experience.
WHAT'S IN IT FOR YOU…?
Joining our client provides you the opportunity to join a brand-new Agile product squad, work on high-impact energy modernization and DER initiatives, and gain exposure to new technologies and integration tools. This is a long-term contract with strong likelihood of extension in a stable industry and company.
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws.
Data Engineer (Zero Trust)
Data engineer job in Fort Belvoir, VA
Kavaliro is seeking a Zero Trust Security Architect / Data Engineer to support a mission-critical program by integrating secure architecture principles, strengthening data security, and advancing Zero Trust initiatives across the enterprise.
Key Responsibilities
Develop and implement program protection planning, including IT supply chain security, anti-tampering methods, and risk management aligned to DoD Zero Trust Architecture.
Apply secure system design tools, automated analysis methods, and architectural frameworks to build resilient, least-privilege, continuously monitored environments.
Integrate Zero Trust Data Pillar capabilities: data labeling, tagging, classification, encryption at rest/in transit, access policy definition, monitoring, and auditing (see the sketch after this list).
Analyze and interpret data from multiple structured and unstructured sources to support decision-making and identify anomalies or vulnerabilities.
Assess cybersecurity principles, threats, and vulnerabilities impacting enterprise data systems, including risks such as corruption, exfiltration, and denial-of-service.
Support systems engineering activities, ensuring secure integration of technologies and alignment with Zero Trust operational objectives.
Design and maintain secure network architectures that balance security controls, mission requirements, and operational tradeoffs.
Generate queries, algorithms, and reports to evaluate data structures, identify patterns, and improve system integrity and performance.
Ensure compliance with organizational cybersecurity requirements, particularly confidentiality, integrity, availability, authentication, and non-repudiation.
Evaluate impacts of cybersecurity lapses and implement safeguards to protect mission-critical data systems.
Structure, format, and present data effectively across tools, dashboards, and reporting platforms.
Maintain knowledge of enterprise information security architecture and database systems to support secure data flow and system design.
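As a rough sketch of two Data Pillar steps named above (labeling/tagging and encryption at rest), the Python example below attaches a classification label to a record and encrypts its sensitive payload. The labels, fields, and key handling are illustrative assumptions; a real deployment would use a managed KMS and a policy engine rather than a locally generated key.

```python
# Minimal sketch: label a record with its classification and encrypt the
# sensitive payload at rest. Labels, fields, and key handling are illustrative;
# production systems would use a managed KMS and policy-driven access checks.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # placeholder; normally retrieved from a KMS
cipher = Fernet(key)

record = {"subject_id": "A-1001", "payload": "mission coordination details"}

# Data labeling/tagging: the classification travels with the record.
labeled = {
    "classification": "CUI",
    "handling": ["encrypt-at-rest", "audit-access"],
    "subject_id": record["subject_id"],
}

# Encryption at rest: only ciphertext is persisted alongside the label.
labeled["payload_ciphertext"] = cipher.encrypt(record["payload"].encode()).decode()
print(json.dumps(labeled, indent=2))

# Never trust, always verify: decryption happens only after per-request policy
# checks succeed.
plaintext = cipher.decrypt(labeled["payload_ciphertext"].encode()).decode()
```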
Requirements
Active TS/SCI security clearance (required).
Deep knowledge of Zero Trust principles (never trust, always verify; explicit authentication; least privilege; continuous monitoring).
Experience with program protection planning, IT supply chain risk management, and anti-tampering techniques.
Strong understanding of cybersecurity principles, CIA triad requirements, and data-focused threats (corruption, exfiltration, denial-of-service).
Proficiency in secure system design, automated systems analysis tools, and systems engineering processes.
Ability to work with structured and unstructured data, including developing queries, algorithms, and analytical reports.
Knowledge of database systems, enterprise information security architecture, and data structuring/presentation techniques.
Understanding of network design processes, security tradeoffs, and enterprise architecture integration.
Strong ability to interpret data from multiple tools to support security decision-making.
Familiarity with impacts of cybersecurity lapses on data systems and operational environments.
Kavaliro is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.
Principal Big Data Engineer
Data engineer job in Durham, NC
Immediate need for a talented Principal Big Data Engineer. This is a 12+ month contract opportunity with long-term potential and is located in Durham, NC (onsite). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-94747
Pay Range: $63 - $73 /hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
We are seeking a highly motivated Data Engineer to join the Data Aggregation team.
Data Aggregation is a growing area, and we are looking for a skilled engineer to drive the design and development of industry-leading, external-facing API solutions.
These comprehensive API and data solutions will bring together retail, clearing, and custody capabilities to help external fintech partners with financial goal planning, investment advice, and financial projection capabilities, so they can better serve our clients and partner with them more efficiently to accomplish their financial objectives.
Key Requirements and Technology Experience:
Bachelor's or Master's degree in a technology-related field (e.g., Engineering, Computer Science) required, with 10 years of working experience
Big Data Processing: Apache Spark (EMR), Scala, distributed computing, performance optimization (illustrative PySpark sketch after this list)
Cloud & Infrastructure: AWS (S3, EMR, EC2, Lambda, Step Functions), multi-region DR strategy
Databases: Cassandra/YugaByte (NoSQL), Oracle, PostgreSQL, Snowflake
Data Pipeline: ETL design, API integration, batch processing
DevOps & CI/CD: Jenkins, Docker, Kubernetes, Terraform, Git
Monitoring & Observability: Splunk, Datadog APM, Grafana, CloudWatch
Orchestration: Control-M job scheduling, workflow automation
Financial domain experience
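To illustrate the kind of Spark-on-EMR batch processing listed above, here is a minimal PySpark sketch that reads raw account activity from S3, aggregates it, and writes partitioned Parquet back out. Bucket names, columns, and partitioning are assumptions for illustration only.

```python
# Minimal PySpark sketch: read raw account activity from S3, aggregate it per
# account and day, and write partitioned Parquet. Buckets and columns are
# illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("account-activity-aggregation").getOrCreate()

activity = spark.read.parquet("s3://example-raw-bucket/account_activity/")

daily = (
    activity
    .withColumn("activity_date", F.to_date("event_ts"))
    .groupBy("account_id", "activity_date")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partitioning by date keeps downstream API reads and reprocessing cheap.
daily.write.mode("overwrite").partitionBy("activity_date").parquet(
    "s3://example-curated-bucket/daily_account_activity/"
)
```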
Our client is a leader in the financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Data Engineer
Data engineer job in Falls Church, VA
*** W2 Contract Only - No C2C - No 3rd Parties ***
The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA.
In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.
Compensation, Benefits, and Role Info
Competitive pay rate of $65 per hour.
Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting.
Type: 12-month contract with potential extension or conversion.
Location: On-site in Falls Church, VA.
What You'll Be Doing
Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources (see the Glue sketch after this list).
Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance.
Write and optimize complex, highly-performant SQL queries across large datasets (Redshift, Oracle, SQL Server).
Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions.
Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.
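The following minimal AWS Glue job sketch illustrates the Glue/PySpark-to-Redshift pipeline work described above: read a catalog table, apply a field mapping, and load the result into a Redshift staging table through a Glue connection. The database, table, connection, and column names are hypothetical placeholders, not details of the client's programs.

```python
# Minimal AWS Glue job sketch: catalog read -> field mapping -> Redshift load.
# Database, table, connection, and column names are illustrative assumptions.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

source = glue_context.create_dynamic_frame.from_catalog(
    database="example_raw_db", table_name="flight_records"
)

mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("record_id", "string", "record_id", "string"),
        ("event_time", "string", "event_ts", "timestamp"),
        ("payload_size", "long", "payload_bytes", "long"),
    ],
)

# Write to Redshift through a pre-configured Glue connection; the COPY goes
# through the temporary S3 directory that Glue manages for the load.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=mapped,
    catalog_connection="example-redshift-conn",
    connection_options={"dbtable": "staging.flight_records", "database": "analytics"},
    redshift_tmp_dir="s3://example-glue-temp/redshift/",
)

job.commit()
```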
What We're Looking For
8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems.
5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark.
Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift.
Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance.
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.
If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure.
Senior Data Engineer
Data engineer job in Durham, NC
We are seeking an experienced Senior Big Data & Cloud Engineer to design, build, and deliver advanced API and data solutions that support financial goal planning, investment insights, and projection tools. This role is ideal for a seasoned engineer with 10+ years of hands-on experience in big data processing, distributed systems, cloud-native development, and end-to-end data pipeline engineering.
You will work across retail, clearing, and custody platforms, leveraging modern cloud and big data technologies to solve complex engineering challenges. The role involves driving technology strategy, optimizing large-scale data systems, and collaborating across multiple engineering teams.
Key Responsibilities
Design and develop large-scale data movement services using Apache Spark (EMR) or Spring Batch.
Build and maintain ETL workflows, distributed pipelines, and automated batch processes.
Develop high-quality applications using Java, Scala, REST, and SOAP integrations.
Implement cloud-native solutions leveraging AWS S3, EMR, EC2, Lambda, Step Functions, and related services (see the orchestration sketch after this list).
Work with modern storage formats and NoSQL databases to support high-volume workloads.
Contribute to architectural discussions and code reviews across engineering teams.
Drive innovation by identifying and implementing modern data engineering techniques.
Maintain strong development practices across the full SDLC.
Design and support multi-region disaster recovery (DR) strategies.
Monitor, troubleshoot, and optimize distributed systems using advanced observability tools.
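As a small sketch of the cloud-native orchestration responsibility above, the Python snippet below starts a Step Functions state machine that wraps a batch data-movement run and polls its status with boto3. The state machine ARN and input payload are hypothetical placeholders.

```python
# Minimal sketch: start a Step Functions execution wrapping a batch
# data-movement run and poll until it finishes. ARN and input are placeholders.
import json
import time
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:example-data-movement"
)

start = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({"run_date": "2025-01-15", "source": "retail_positions"}),
)

# Poll until the execution finishes; production code would prefer EventBridge
# notifications or a callback pattern over a busy loop.
while True:
    desc = sfn.describe_execution(executionArn=start["executionArn"])
    if desc["status"] != "RUNNING":
        print("Execution finished with status:", desc["status"])
        break
    time.sleep(30)
```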
Required Skills:
10+ years of experience in software/data engineering with strong big data expertise.
Proven ability to design and optimize distributed systems handling large datasets.
Strong communicator who collaborates effectively across teams.
Ability to drive architectural improvements and influence engineering practices.
Customer-focused mindset with commitment to delivering high-quality solutions.
Adaptable, innovative, and passionate about modern data engineering trends.
Data Scientist with GenAI and Python
Data engineer job in Charlotte, NC
Dexian is seeking a Data Scientist with GenAI and Python for an opportunity with a client located in Charlotte, NC.
Responsibilities:
Design, develop, and deploy GenAI models, including LLMs, GANs, and transformers, for tasks such as content generation, data augmentation, and creative applications
Analyze complex data sets to identify patterns, extract meaningful features, and prepare data for model training, with a focus on data quality for GenAI
Develop and refine prompts for LLMs, and optimize GenAI models for performance, efficiency, and specific use cases (see the retrieval sketch after this list)
Deploy GenAI models into production environments, monitor their performance, and implement strategies for continuous improvement and model governance
Work closely with cross-functional teams (e.g., engineering, product) to understand business needs, translate them into GenAI solutions, and effectively communicate technical concepts to diverse stakeholders
Stay updated on the latest advancements in GenAI and data science, and explore new techniques and applications to drive innovation within the organization
Utilize Python and its extensive libraries (e.g., scikit-learn, TensorFlow, PyTorch, Pandas, LangChain) for data manipulation, model development, and solution implementation
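To ground the prompt-development and Python responsibilities above, here is a minimal retrieval-augmented generation (RAG) sketch: embed a small document set, retrieve the passages most relevant to a question, and assemble a grounded prompt. The embedding model, corpus, and top-k value are illustrative assumptions, and the final LLM call is left as a stub since the posting allows either open-source or proprietary APIs.

```python
# Minimal RAG retrieval sketch: embed documents, retrieve the most relevant
# passages for a question, and build a grounded prompt. Model name, corpus, and
# top-k are illustrative assumptions; the LLM call itself is left as a stub.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Customers can dispute a card transaction within 60 days of the statement date.",
    "Wire transfers above $10,000 require an additional verification step.",
    "Mobile check deposits post by the next business day in most cases.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(documents, normalize_embeddings=True)

question = "How long do customers have to dispute a charge?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity reduces to a dot product.
scores = doc_vecs @ q_vec
top_context = [documents[i] for i in np.argsort(scores)[::-1][:2]]

prompt = (
    "Answer using only the context below.\n\n"
    "Context:\n- " + "\n- ".join(top_context) + f"\n\nQuestion: {question}\nAnswer:"
)
print(prompt)  # send to the chosen open-source or proprietary LLM API here
```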
Requirements:
Proven hands-on experience implementing GenAI projects using open-source LLMs (Llama, GPT OSS, Gemma, Mistral) and proprietary APIs (OpenAI, Anthropic)
Strong background in Retrieval-Augmented Generation (RAG) implementations
In-depth understanding of embedding models and their applications
Hands-on experience with Natural Language Processing (NLP) solutions on text data
Strong Python development skills; should be comfortable with Pandas and NumPy for data analysis and feature engineering
Experience building and integrating APIs (REST, FastAPI, Flask) for serving models
Fine-tuning and optimizing open-source LLMs/SLMs is a big plus
Knowledge of Agentic AI frameworks and Orchestration
Experience in ML and Deep Learning is an advantage
Familiarity with cloud platforms (AWS/Azure/GCP)
Experience working with Agile Methodology
Strong problem solving, analytical and interpersonal skills
Ability to work effectively in a team environment
Strong written and oral communication skills
Should have the ability to clearly express ideas
Dexian is a leading provider of staffing, IT, and workforce solutions with over 12,000 employees and 70 locations worldwide. As one of the largest IT staffing companies and the 2nd largest minority-owned staffing company in the U.S., Dexian was formed in 2023 through the merger of DISYS and Signature Consultants. Combining the best elements of its core companies, Dexian's platform connects talent, technology, and organizations to produce game-changing results that help everyone achieve their ambitions and goals.
Dexian's brands include Dexian DISYS, Dexian Signature Consultants, Dexian Government Solutions, Dexian Talent Development and Dexian IT Solutions. Visit ******************* to learn more.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
AWS Data Engineer
Data engineer job in Charlotte, NC
We are looking for a skilled and experienced AWS Data Engineer with 10+ years of experience to join our team. This role requires hands-on expertise in AWS serverless technologies, Big Data platforms, and automation tools. The ideal candidate will be responsible for designing scalable data pipelines, managing cloud infrastructure, and enabling secure, reliable data operations across marketing and analytics platforms.
Key Responsibilities:
Design, build, and deploy automated CI/CD pipelines for data and application workflows.
Analyze and enhance existing data pipelines for performance and scalability.
Develop semantic data models to support activation and analytical use cases.
Document data structures and metadata using Collibra or similar tools.
Ensure high data quality, availability, and integrity across platforms.
Apply SRE and DevSecOps principles to improve system reliability and security.
Manage security operations within AWS cloud environments.
Configure and automate applications on AWS instances.
Oversee all aspects of infrastructure management, including provisioning and monitoring.
Schedule and automate jobs using tools like Step Functions, Lambda, Glue, etc. (illustrative Lambda sketch below).
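As a small illustration of the job-automation responsibility above, the Python sketch below shows a Lambda handler (triggered on a schedule or by Step Functions) that starts a Glue job run via boto3 and returns the run id. The job name and arguments are hypothetical placeholders.

```python
# Minimal Lambda sketch: start a Glue job run and report the run id.
# The job name and arguments are illustrative assumptions.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start the nightly marketing-data Glue job and report the run id."""
    response = glue.start_job_run(
        JobName="example-marketing-ingest",
        Arguments={"--run_date": event.get("run_date", "latest")},
    )
    run_id = response["JobRunId"]
    print(f"Started Glue job run {run_id}")
    return {"statusCode": 200, "jobRunId": run_id}
```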
Required Skills & Experience:
Hands-on experience with AWS serverless technologies: Lambda, Glue, Step Functions, S3, RDS, DynamoDB, Athena, CloudFormation, CloudWatch Logs.
Proficiency in Confluent Kafka, Splunk, and Ansible.
Strong command of SQL and scripting languages: Python, R, Spark.
Familiarity with data formats: JSON, XML, Parquet, Avro.
Experience in Big Data engineering and cloud-native data platforms.
Functional knowledge of marketing platforms such as Adobe, Salesforce Marketing Cloud, and Unica/Interact (nice to have).
Preferred Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
AWS, Big Data, or DevOps certifications are a plus.
Experience working in hybrid cloud environments and agile teams.
Life at Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please get in touch with your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Snowflake Data Engineer
Data engineer job in Durham, NC
Experience in development and proficiency in SQL, with knowledge of Snowflake cloud computing environments
Knowledge of data warehousing concepts and metadata management
Experience with data modeling, data lakes, multi-dimensional models, and data dictionaries
Hands-on experience with Snowflake features like Time Travel and Zero-Copy Cloning (illustrative sketch below)
Experience in query performance tuning and cost optimization on a cloud data platform
Knowledge of Snowflake warehousing, architecture, processing, and administration; DBT; and pipelines
Hands-on experience with PL/SQL and Snowflake
Excellent personal communication, leadership, and organizational skills
Should be well versed in common design patterns
Knowledge of SQL databases is a plus
Hands-on Snowflake development experience is a must
Work with various cross-functional groups and tech leads from other tracks
Work closely with the team and guide them technically and functionally; must be a team player with a good attitude
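The minimal Python sketch below shows the two Snowflake features called out above, Time Travel and Zero-Copy Cloning, driven from the Snowflake Python connector. The account, credentials, warehouse, and table names are placeholders, not real values.

```python
# Minimal sketch: Snowflake Time Travel and Zero-Copy Cloning via the Python
# connector. Account, credentials, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

cur = conn.cursor()

# Time Travel: query the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM ORDERS AT(OFFSET => -3600)")
print("Row count one hour ago:", cur.fetchone()[0])

# Zero-Copy Cloning: create an instant, storage-free copy for testing.
cur.execute("CREATE OR REPLACE TABLE ORDERS_DEV CLONE ORDERS")

cur.close()
conn.close()
```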
Data Conversion Engineer
Data engineer job in Charlotte, NC
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer serves as a key member of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part of ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications (see the mapper sketch after this list)
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technical related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn database structure, business logic and combine all knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Manage multiple projects and conversion implementations
Proactively troubleshoot and solve problems with limited supervision
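As a rough illustration of the conversion-mapper work described above, the following Python sketch maps an incoming billing file to a target layout; the field names and formats are hypothetical and do not reflect Paymentus' actual specifications.

```python
# Illustrative sketch only: the incoming column names ("AcctNo", "Balance",
# "DueDt") and the target layout are hypothetical, not Paymentus' specification.
import csv
from datetime import datetime

TARGET_FIELDS = ["account_number", "amount_due", "due_date"]

def map_row(row: dict) -> dict:
    """Map one incoming billing record to the target layout."""
    return {
        "account_number": row["AcctNo"].strip(),
        "amount_due": f"{float(row['Balance']):.2f}",
        "due_date": datetime.strptime(row["DueDt"], "%m/%d/%Y").strftime("%Y-%m-%d"),
    }

def convert(src_path: str, dst_path: str) -> None:
    """Read the source extract and write the converted file."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=TARGET_FIELDS)
        writer.writeheader()
        for row in csv.DictReader(src):
            writer.writerow(map_row(row))

if __name__ == "__main__":
    convert("incoming_billing.csv", "converted_billing.csv")
```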
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication; ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front-end web interfaces (HTML5, JavaScript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
Cloud Data Engineer- Databricks
Data engineer job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt (see the sketch below).
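For orientation only, here is a minimal Apache Airflow sketch of the kind of orchestration mentioned above; the DAG id, schedule, and task bodies are illustrative assumptions rather than project specifics (Airflow 2.x style).

```python
# Hedged sketch of a daily orchestration DAG (Airflow 2.x style); the task
# bodies, DAG id, and schedule are assumptions for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from cloud storage")      # placeholder step

def transform():
    print("run Databricks / dbt transformations")   # placeholder step

def load():
    print("publish curated tables for analytics")   # placeholder step

with DAG(
    dag_id="daily_curated_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```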
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation and interactive skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Senior Data Engineer
Data engineer job in Charlotte, NC
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.
Our Challenge:
We are looking for a skilled senior data engineer with comprehensive experience in designing, developing, and maintaining scalable data solutions within the financial and regulatory domains. The ideal candidate has proven expertise in leading end-to-end data architectures, integrating diverse data sources, and ensuring data quality and accuracy.
Additional Information*
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $155k/year & benefits (see below).
Work location: New York City, NY (Hybrid, 3 days in a week)
The Role
Responsibilities:
Advanced proficiency in Python, SQL Server, Snowflake, Azure Databricks, and PySpark.
Strong understanding of relational databases, ETL processes, and data modeling.
Expertise in system design, architecture, and implementing robust data pipelines.
Hands-on experience with data validation, quality checks, and automation tools (Autosys, Control-M).
Skilled in Agile methodologies, SDLC processes, and CI/CD pipelines.
Effective communicator with the ability to collaborate with business analysts, users, and global teams.
Requirements:
Overall 10+ years of IT experience is required
Collaborate with business stakeholders to gather technical specifications and translate business requirements into technical solutions.
Develop and optimize data models and schemas for efficient data integration and analysis.
Lead application development involving Python, Pyspark, SQL, Snowflake and Databricks platforms.
Implement data validation procedures to maintain high data quality standards (a minimal PySpark sketch follows this list).
Strong experience in SQL (writing complex queries, joins, tables, etc.)
Conduct comprehensive testing (UT, SIT, UAT) alongside business and testing teams.
Provide ongoing support, troubleshooting, and maintenance in production environments.
Contribute to architecture and design discussions to ensure scalable, maintainable data solutions.
Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
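As a hedged illustration of the data-validation work listed above, here is a small PySpark sketch that checks a loaded dataset before it is published; the source path, column names, and rules are assumptions, not the client's actual data model.

```python
# Minimal data-quality sketch in PySpark; the source path, column names, and
# rules are illustrative assumptions, not the actual data model.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

trades = spark.read.parquet("/data/raw/trades")   # hypothetical source path

summary = trades.agg(
    F.count("*").alias("row_count"),
    F.sum(F.when(F.col("trade_id").isNull(), 1).otherwise(0)).alias("null_trade_ids"),
    F.sum(F.when(F.col("notional") < 0, 1).otherwise(0)).alias("negative_notionals"),
).collect()[0]

# Fail fast so a scheduler job (e.g., Autosys or Control-M) surfaces bad loads.
assert summary["null_trade_ids"] == 0, "trade_id must never be null"
assert summary["negative_notionals"] == 0, "notional must be non-negative"
print(f"Validated {summary['row_count']} rows")
```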
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Palantir Data Engineer
Data engineer job in Charlotte, NC
Build and maintain data pipelines and workflows in Palantir Foundry.
Design, train, and deploy ML models for classification, optimization, and forecasting use cases.
Apply feature engineering, data cleaning, and modeling techniques using Python, Spark, and ML libraries (see the sketch after this list).
Create dashboards and data applications using Slate or Streamlit to enable operational decision-making.
Implement generative AI use cases using large language models (GPT-4, Claude, etc.)
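Here is a minimal scikit-learn sketch of the classification modeling described above, using synthetic data; it stands in for, and does not represent, any Foundry dataset or production model.

```python
# Generic classification sketch using synthetic data; hyperparameters and
# features are placeholders, not a production Foundry model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Report precision/recall/F1 on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```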
AWS Data Engineer (Only W2)
Data engineer job in Charlotte, NC
Title: AWS Data Engineer
Experience: 10 years
Must Have Skills:
• Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services.
• Experience in Snowflake and Data Build Tool.
• Expertise in DBT, NodeJS, and Python.
• Expertise in Informatica, Power BI, databases, and Cognos.
Nice to Have Skills:
Detailed Job Description:
• Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services (a brief serverless sketch follows this section).
• Experience in Snowflake and Data Build Tool. Expertise in DBT, NodeJS, and Python.
• Expertise in Informatica, Power BI, databases, and Cognos.
• Proven experience in leading teams across locations.
• Knowledge of DevOps processes, Infrastructure as Code and their purpose.
• Good understanding of data warehouses, their purpose, and implementation
• Good communication skills.
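As a small illustration of the AWS serverless experience called for above, here is a hedged AWS Lambda sketch that reacts to an S3 event; the bucket layout and "curated/" prefix are placeholder assumptions, not the client's environment.

```python
# Hedged sketch of a serverless step triggered by an S3 event; the "curated/"
# prefix and the event wiring are assumptions, not the client's environment.
import json

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Copy an incoming landing-zone object into a curated prefix."""
    record = event["Records"][0]["s3"]          # standard S3 event payload
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    s3.copy_object(
        Bucket=bucket,
        Key=f"curated/{key.split('/')[-1]}",
        CopySource={"Bucket": bucket, "Key": key},
    )
    return {"statusCode": 200, "body": json.dumps({"copied": key})}
```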
Kindly share the resume in ******************
Senior Data Engineer
Data engineer job in McLean, VA
The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing.
Experience with tools such as Jenkins, SonarQube AND/OR Fortify are preferred for this role.
Experience in Angular and DevOps are nice to haves for this role.
Must Have Qualifications: PySpark/Python based microservices, AWS EKS, Postgres SQL Database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube and Fortify.
Responsibilities:
Development of microservices based on Python, PySpark, AWS EKS, AWS Postgres for a data-oriented modernization project.
New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest
Perform System, functional and data analysis on the current system and create technical/functional requirement documents.
Current System: Informatica, SAS, AutoSys, DB2
Write automated tests using Behave/Cucumber, based on the new microservices-based architecture (a brief testing sketch follows this list)
Promote top code quality and solve issues related to performance tuning and scalability.
Strong skills in DevOps, Docker/container-based deployments to AWS EKS using Jenkins and experience with SonarQube and Fortify.
Able to communicate and engage with business teams and analyze the current business requirements (BRS documents) and create necessary data mappings.
Preferred strong skills and experience in reporting applications development and data analysis
Knowledge in Agile methodologies and technical documentation.
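For illustration, here is a brief Pytest sketch of the unit-testing style referenced above; the function under test is a hypothetical helper, not code from the actual system.

```python
# Brief pytest sketch; normalize_amount is a hypothetical helper used only to
# show the parametrized unit-testing style, not code from the actual system.
import pytest

def normalize_amount(raw: str) -> float:
    """Strip display formatting such as '$1,234.50' down to a float."""
    return float(raw.replace("$", "").replace(",", ""))

@pytest.mark.parametrize(
    "raw, expected",
    [("$1,234.50", 1234.50), ("99", 99.0), ("$0.00", 0.0)],
)
def test_normalize_amount(raw, expected):
    assert normalize_amount(raw) == expected

def test_normalize_amount_rejects_garbage():
    with pytest.raises(ValueError):
        normalize_amount("not-a-number")
```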
Lead Principal Data Solutions Architect
Data engineer job in Reston, VA
*****TO BE CONSIDERED, CANDIDATES MUST BE U.S. CITIZENS*****
***** TO BE CONSIDERED, CANDIDATES MUST BE LOCAL TO THE DC/MD/VA METRO AREA AND BE OPEN TO A HYBRID SCHEDULE IN RESTON, VA*****
Formed in 2011, Inadev is focused on its founding principle to build innovative customer-centric solutions incredibly fast, secure, and at scale. We deliver world-class digital experiences to some of the largest federal agencies and commercial companies. Our technical expertise and innovations are comprised of codeless automation, identity intelligence, immersive technology, artificial intelligence/machine learning (AI/ML), virtualization, and digital transformation.
POSITION DESCRIPTION:
Inadev is seeking a strong Lead Principal Data Solutions Architect. The primary focus will be on natural language processing (NLP), applying data mining techniques, performing statistical analysis, and building high-quality prediction systems.
PROGRAM DESCRIPTION:
This initiative focuses on modernizing and optimizing a mission-critical data environment within the immigration domain to enable advanced analytics and improved decision-making capabilities. The effort involves designing and implementing a scalable architecture that supports complex data integration, secure storage, and high-performance processing. The program emphasizes agility, innovation, and collaboration to deliver solutions that meet evolving stakeholder requirements while maintaining compliance with stringent security and governance standards.
RESPONSIBILITIES:
Lead system architecture decisions, ensure technical alignment across teams, and advocate for best practices in cloud and data engineering.
Serve as a senior technical leader and trusted advisor, driving architectural strategy and guiding development teams through complex solution design and implementation
Serve as the lead architect and technical authority for enterprise-scale data solutions, ensuring alignment with strategic objectives and technical standards.
Drive system architecture design, including data modeling, integration patterns, and performance optimization for large-scale data warehouses.
Provide expert guidance to development teams on Agile analytics methodologies and best practices for iterative delivery.
Act as a trusted advisor and advocate for the government project lead, translating business needs into actionable technical strategies.
Oversee technical execution across multiple teams, ensuring quality, scalability, and security compliance.
Evaluate emerging technologies and recommend solutions that enhance system capabilities and operational efficiency.
NON-TECHNICAL REQUIREMENTS:
Must be a U.S. Citizen.
Must be willing to work a HYBRID schedule (2-3 days) in Reston, VA & client locations in the Northern Virginia/DC/MD area as required.
Ability to pass a 7-year background check and obtain/maintain a U.S. Government Clearance
Strong communication and presentation skills.
Must be able to prioritize and self-start.
Must be adaptable/flexible as priorities shift.
Must be enthusiastic and have passion for learning and constant improvement.
Must be open to collaboration, feedback and client asks.
Must enjoy working with a vibrant team of outgoing personalities.
MANDATORY REQUIREMENTS/SKILLS:
Bachelor of Science degree in Computer Science, Engineering or related subject and at least 10 years of experience leading architectural design of enterprise-level data platforms, with significant focus on Databricks Lakehouse architecture.
Experience within the Federal Government, specifically DHS is preferred.
Must possess demonstrable experience with Databricks Lakehouse Platform, including Delta Lake, Unity Catalog for data governance, Delta Sharing, and Databricks SQL for analytics and BI workloads.
Must demonstrate deep expertise in Databricks Lakehouse architecture, medallion architecture (Bronze/Silver/Gold layers), Unity Catalog governance framework, and enterprise-level integration patterns using Databricks workflows and Auto Loader.
Knowledge of and ability to organize technical execution of Agile Analytics using Databricks Repos, Jobs, and collaborative notebooks, proven by professional experience.
Expertise in Apache Spark on Databricks, including performance optimization, cluster management, Photon engine utilization, and Delta Lake optimization techniques (Z-ordering, liquid clustering, data skipping).
Proficiency in Databricks Unity Catalog for centralized data governance, metadata management, data lineage tracking, and access control across multi-cloud environments.
Experience with Databricks Delta Live Tables (DLT) for declarative ETL pipeline development and data quality management (a minimal DLT sketch follows this list).
Certification in one or more: Databricks Certified Data Engineer Associate/Professional, Databricks Certified Solutions Architect, AWS, Apache Spark, or cloud platform certifications.
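As a hedged sketch of the medallion-style Delta Live Tables work referenced above, the snippet below shows a Bronze-to-Silver step using Auto Loader and a DLT expectation; the landing path, table names, and rule are illustrative assumptions, and it only runs inside a Databricks DLT pipeline.

```python
# Hedged Delta Live Tables sketch of a Bronze-to-Silver medallion step; the
# landing path, table names, and expectation are illustrative assumptions.
# This runs inside a Databricks DLT pipeline, where `dlt` and `spark` are
# provided by the runtime.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw case events landed as-is (Bronze)")
def bronze_case_events():
    return (
        spark.readStream.format("cloudFiles")        # Auto Loader ingestion
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/case_events/")           # hypothetical landing path
    )

@dlt.table(comment="Cleaned, de-duplicated case events (Silver)")
@dlt.expect_or_drop("valid_case_id", "case_id IS NOT NULL")
def silver_case_events():
    return (
        dlt.read_stream("bronze_case_events")
        .withColumn("ingested_at", F.current_timestamp())
        .dropDuplicates(["case_id", "event_ts"])
    )
```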
DESIRED REQUIREMENTS/SKILLS:
Expertise in ETL tools.
Advanced knowledge of cloud platforms (AWS preferred; Azure or GCP a plus).
Proficiency in SQL, PL/SQL, and performance tuning for large datasets.
Understanding of security frameworks and compliance standards in federal environments.
PHYSICAL DEMANDS:
Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions
Inadev Corporation does not discriminate against qualified individuals based on their status as protected veterans or individuals with disabilities and prohibits discrimination against all individuals based on their race, color, religion, sex, sexual orientation/gender identity, or national origin.
Hadoop Developer
Data engineer job in Wilson, NC
Incedo (**************************) (formerly a part of $4Bn Group) is a technology solutions and services organization headquartered in the Bay Area, USA with a workforce across North America, South Africa and India (Gurgaon, Bangalore). We specialize in Data & Analytics and Product Engineering Services, with deep expertise in Financial Services, Life Science and Communication Engineering. Our key focus is on Emerging Technologies and Innovation. Our end-to-end capabilities span across Application Services, Infrastructure and Operations.
Position :
Hadoop Consultant
Location : Raleigh, NC
Duration: Contract/Full Time
Job Description:
· Bachelor's degree (IT/Computer Science preferred); Master's degree preferred (IT/Computer Science preferred) or equivalent experience
· 8-10 years of industry experience in analyzing source system data and data flows, working with structured and unstructured data, and delivering data and solution architecture designs.
· Experience with clustered/distributed computing systems, such as Hadoop/MapReduce, Spark/SparkR, Lucene/ElasticSearch, Storm, Cassandra, graph databases, and analytics notebooks like Jupyter, Zeppelin, etc.
· Experience building data pipelines for structured/unstructured, real-time/batch, and event/synchronous/asynchronous data using MQ, Kafka, and stream processing (a minimal streaming sketch follows this section).
· 5+ years of experience as a data engineer/solution architect designing and delivering large-scale distributed software systems, preferably in a large-scale global business, using open-source tools and big data technologies such as Cassandra, Hadoop, Hive, PrestoDB, Impala, HBase, Spark, Storm, Redis, Drill, etc.
· Strong hands-on programming experience with Java, Python, Scala, etc.
· Experience with SQL, NoSQL, relational database design, and methods for efficiently retrieving data for Time Series Analytics.
· Knowledge of data warehousing best practices, modeling techniques and processes, and complex data integration pipelines
· Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
· Excellent technical skills - able to effectively lead a technical team of business intelligence and big data developers as well as data analysts.
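As a minimal illustration of the Kafka/stream-processing experience above, here is a Spark Structured Streaming sketch that lands Kafka events as Parquet; the broker address, topic, and paths are placeholders, and the spark-sql-kafka connector package is assumed to be available.

```python
# Illustrative Spark Structured Streaming sketch reading from Kafka; the
# broker address, topic, and output/checkpoint paths are placeholders, and
# the spark-sql-kafka connector package is assumed to be on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_events_pipeline").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "source-events")
    .load()
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string"),
        F.col("timestamp"),
    )
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/bronze/source_events")
    .option("checkpointLocation", "/chk/source_events")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```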
Optional Skills:
· Experience in Machine Learning, Deep Learning, Data Science
· Experience with Cloud architecture & service like AWS, Azure
· Experience with Graph, Semantic Web, RDF Technologies
· Experience with Text Analytics using SOLR or ElasticSearch
Additional Information
All your information will be kept confidential according to EEO guidelines.