Observability Engineer
Data engineer job in McLean, VA
Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people work together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.
To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, Asylee, or Refugee.
Note on Cleared Roles: If this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status.
Export Control/ITAR:
Certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3).
Please review the job details below.
This position requires an active U.S. Government Security Clearance at the TS/SCI level with required polygraph.
We are looking for a full-time Observability Engineer (OE) to gain deeper insights into complex systems and cloud-native environments. This role is part of our data collection and software development team, which ensures Vantor's services meet reliability and uptime standards appropriate to customers' needs. The environment calls for a fast rate of improvement while keeping an ever-watchful eye on capacity, performance, and cost.
The OE will have the mindset and the engineering approaches to understand “the what” and “the why”. They will build monitoring solutions to gain visibility into operational problems, ensuring customer value and satisfaction are achieved. Their focus is to drive observability and monitoring for new and existing systems in order to provide systems insight and resolve application and infrastructure issues. The successful candidate has a breadth of knowledge to discover, implement, and collaborate with teammates on solutions to complex problems across the entire technology stack.
Responsibilities:
Define standards for monitoring the reliability, availability, maintainability and performance of sponsor-owned and operated systems.
Design and architect operational solutions for managing applications and infrastructure.
Drive service acceptance by adopting new processes into operations, developing new monitoring to expose risks, and automating repeatable actions.
Partner with service and product owners to establish key performance indicators to identify trends and achieve better outcomes.
Provide deep troubleshooting for production issues.
Engage with service owners to maximize the team's ability to identify and remediate root-cause performance issues quickly, ensuring rapid recovery from service interruptions.
Build and/or use tools to correlate disparate data sets in an efficient, automated way, helping teams quickly identify the root cause of issues and understand how different problems relate to each other.
Coordinate with the sponsor to support major incidents, large-scale deployments and SecOps user support.
Minimum Qualifications:
US citizenship required
Active/current TS/SCI with required polygraph
Bachelor's degree in computer science or related area of study
Minimum 5 years of experience
Working knowledge of K8s, Docker, Helm and automated deployment via pipeline (e.g. Concourse or Jenkins)
Familiarity with distributed version control systems such as Git
Experience with AWS cloud services
Experience with setting up monitoring and observability solutions across sponsor owned systems, tools and data feeds
Proficient in scripting with Python and Java
Willingness to work onsite full time
Ability and willingness to share on-call responsibilities
Advanced knowledge of Unix/Linux systems, with high comfort level at the command line
Preferred Qualifications:
Experience with other cloud services providers beyond AWS
Experience with CloudWatch or other monitoring tools inside of AWS
Familiarity with Prometheus/Grafana or other monitoring tools for ETL feeds, APIs, servers, C2S services, networks and AI/ML capabilities
Good understanding of networking fundamentals
Organized with an ability to document and communicate ongoing work tasks and projects
Receptive to giving, receiving and implementing feedback in a highly collaborative environment
Understanding of Incident and Problem Management
Effectively prioritize work and encourage best practices in others
Meticulous and cautious with the ability to identify and consider all risks and balance those with performing the task efficiently
Experience with Root Cause Analysis (RCA)
Experience with ETL processes
Willingness to step in as a leader to address ongoing incidents and problems, while providing guidance to others in order to drive to a resolution
Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, the experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets for the position should not expect to receive the upper end of the pay range.
The base pay for this position within California, Colorado, Hawaii, New Jersey, the Washington, DC metropolitan area, and for all other states is:
$180,000.00 - $220,000.00
Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************
The application window is three days from the date the job is posted, and the posting will remain open until a qualified candidate has been identified for hire. If the job is reposted for any reason, the window is three days from the repost date, and the posting will again remain open until a qualified candidate has been identified for hire.
The date of posting can be found on Vantor's Career page at the top of each job posting.
To apply, submit your application via Vantor's Career page.
EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
System Development Engineer II, DBS Relational ADC
Data engineer job in Herndon, VA
System Development Engineer The Amazon Web Services team is innovating new ways of building massively scalable distributed systems and delivering the next generation of cloud computing with AWS offerings like RDS and Aurora. In 2013, AWS launched 280 services, but in 2016 alone we released nearly 1000. We hold high standards for our computer systems and the services we deliver to our customers: our systems are highly secure, highly reliable, highly available, all while functioning at massive scale; our employees are smart, passionate about the cloud, driven to serve customers, and fun to work with.
A successful engineer joining the team will do much more than write code and triage problems. They will work with Amazon's largest and most demanding customers to address specific needs across a full suite of services. They will dive deeply into technical issues and work diligently to improve the customer experience. The ideal candidate will...
- Be great fun to work with. Our company credo is "Work hard. Have fun. Make history". The right candidate will love what they do and instinctively know how to make work fun.
- Have strong Linux & Networking Fundamentals. The ideal candidate will have deep experience working with Linux, preferably in a large scale, distributed environment. You understand networking technology and how servers and networks inter-relate. You regularly take part in deep-dive troubleshooting and conduct technical post-mortem discussions to identify the root cause of complex issues.
- Love to code. Whether it's building tools in Java or solving complex system problems in Python, the ideal candidate will love using technology to solve problems. You have a solid understanding of software development methodology and know how to use the right tool for the right job.
- Think Big. The ideal candidate will build and deploy solutions across thousands of devices. You will strive to improve and streamline processes to allow for work on a massive scale.
This position requires that the candidate selected must currently possess and maintain an active TS/SCI security clearance with polygraph. The position further requires the candidate to opt into a commensurate clearance for each government agency for which they perform AWS work.
Key job responsibilities
- You design, implement, and deploy software components and features. You solve difficult problems, generating positive feedback.
- You have a solid understanding of design approaches (and how to best use them).
- You are able to work independently and with your team to deliver software successfully.
- Your work is consistently of a high quality (e.g., secure, testable, maintainable, low-defects, efficient, etc.) and incorporates best practices. Your team trusts your work.
- Your code reviews tend to be rapid and uneventful. You provide useful code reviews for changes submitted by others.
- You focus on operational excellence, constructively identifying problems and proposing solutions, taking on projects that improve your team's software, making it better and easier to maintain.
- You make improvements to your team's development and testing processes.
- You have established good working relationships with peers. You recognize discordant views and take part in constructive dialogue to resolve them.
- You are able to confidently train new teammates about your customers, what your team's software does, how it is constructed and tested, how it operates, and how it fits into the bigger picture.
A day in the life
Engineers in this role will work on automation, development, and operations to support AWS machine learning services for US government customers. They will work in an agile environment, attend daily standup, and collaborate closely with teammates. They will work on exciting challenges at scale and tackle unsolved problems.
They will support the U.S. Intelligence Community and Defense agencies to implement innovative cloud computing solutions and solve unique technical problems.
About the team
Why AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Utility Computing (UC)
AWS Utility Computing (UC) provides product innovations - from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of your life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Mentorship and Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Diverse Experiences
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
BASIC QUALIFICATIONS
- Bachelor's degree in computer science or equivalent
- 3+ years of non-internship professional software development experience
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby
- Knowledge of systems engineering fundamentals (networking, storage, operating systems)
- 1+ years of designing or architecting (design patterns, reliability and scaling) of new and existing systems experience
- Current, active US Government Security Clearance of TS/SCI with Polygraph
PREFERRED QUALIFICATIONS
- Experience with PowerShell (preferred), Python, Ruby, or Java
- Experience working in an Agile environment using the Scrum methodology
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $116,300/year in our lowest geographic market up to $201,200/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ********************************************************. This position will remain posted until filled. Applicants should apply via our internal or external career site.
Cybersecurity Engineer III **
Data engineer job in Virginia Beach, VA
SimVentions, consistently voted one of Virginia's Best Places to Work, is looking for an experienced cybersecurity professional to join our team! As a Cybersecurity Engineer III, you will play a key role in advancing cybersecurity operations by performing in-depth system hardening, vulnerability assessment, and security compliance activities in accordance with DoD requirements. The ideal candidate will have a solid foundation in cybersecurity practices and proven experience supporting both Linux and Windows environments across DoD networks. You will work collaboratively with Blue Team, Red Team, and other Cybersecurity professionals on overall cyber readiness defense and system accreditation efforts.
** Position is contingent upon award of contract, anticipated in December of 2025. **
Clearance:
An ACTIVE Secret clearance (IT Level II Tier 5 / Special-Sensitive Position) is required for this position. Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information. US Citizenship is required to obtain a clearance.
Requirements:
In-depth understanding of computer security, military system specifications, and DoD cybersecurity policies
Strong ability to communicate clearly and succinctly in written and oral presentations
Must possess one of the following DoD 8570.01-M IAT Level III baseline certifications:
CASP+ CE
CCNP Security
CISA
CISSP (Associate)
CISSP
GCED
GCIH
CCSP
Responsibilities:
Develop Assessment and Authorization (A&A) packages for various systems
Develop and maintain security documentation such as:
Authorization Boundary Diagram
System Hardware/Software/Information Flow
System Security Plan
Privacy Impact Assessment
e-Authentication
Implementation Plan
System Level Continuous Monitoring Plan
Ports, Protocols and Services Registration
Plan of Action and Milestones (POA&M)
Conduct annual FISMA assessments
Perform Continuous Monitoring of Authorized Systems
Generate and update test plans; conduct testing of the system components using the Assured Compliance Assessment Solution (ACAS) tool, implement Security Technical Implementation Guides (STIG), and conduct Information Assurance Vulnerability Management (IAVM) reviews
Perform automated ACAS scanning, STIG, SCAP checks (Evaluate STIG, Tenable Nessus, etc.) on various standalone and networked systems
Analyze cybersecurity test scan results and develop/assist with documenting open findings in the Plan of Action and Milestones (POA&M)
Analyze DISA Security Technical Implementation Guide test results and develop/assist with documenting open findings in the Plan of Action and Milestones
Preferred Skills and Experience:
A combined total of ten (10) years of full-time professional experience in all of the following functional areas:
Computer security, military system specifications, and DoD cybersecurity policies
National Cyber Range Complex (NCRC) Total Ship Computing Environment (TSCE) Program requirements and mission, ship install requirements, and protocols (preferred)
Risk Management Framework (RMF), and the implementation of Cybersecurity and IA boundary defense techniques and various IA-enabled appliances. Examples of these appliances and applications are Firewalls, Intrusion Detection System (IDS), Intrusion Prevention System (IPS), Switch/Routers, Cross Domain Solutions (CDS), EMASS, and Endpoint Security Solution (ESS)
Performing STIG implementation
Performing vulnerability assessments with the ACAS tool
Remediating vulnerability findings to include implementing vendor patches on both Linux and Windows Operating systems
Education: Bachelor of Science in Information Systems, Bachelor of Science in Information Technology, Bachelor of Science in Computer Science, or Bachelor of Science in Computer Engineering
Compensation:
Compensation at SimVentions is determined by a number of factors, including, but not limited to, the candidate's experience, education, training, security clearance, work location, skills, knowledge, and competencies, as well as alignment with our corporate compensation plan and contract specific requirements.
The projected annual compensation range for this position is $90,000 - $140,000 (USD). This estimate reflects the standard salary range for this position and is just one component of the total compensation package that SimVentions offers.
Benefits:
At SimVentions, we're committed to supporting the total well-being of our employees and their families. Our benefit offerings include comprehensive health and welfare plans to serve a variety of needs.
We offer:
Medical, dental, vision, and prescription drug coverage
Employee Stock Ownership Plan (ESOP)
Competitive 401(k) programs
Retirement and Financial Counselors
Health Savings and Health Reimbursement Accounts
Flexible Spending Accounts
Life insurance, short- & long-term disability
Continuing Education Assistance
Paid Time Off, Paid Holidays, Paid Leave (e.g., Maternity, Paternity, Jury Duty, Bereavement, Military)
Third Party Employee Assistance Program that offers emotional and lifestyle well-being services, to include free counseling
Supplemental Benefit Program
Why Work for SimVentions?:
SimVentions is about more than just being a place to work with other growth-oriented, technically exceptional experts. It's also a fun place to work. Our family-friendly atmosphere encourages our employee-owners to imagine, create, explore, discover, and do great things together.
Support Our Warfighters
SimVentions is a proud supporter of the U.S. military, and we take pride in our ability to provide relevant, game-changing solutions to our armed men and women around the world.
Drive Customer Success
We deliver innovative products and solutions that go beyond the expected. This means you can expect to work with a team that will allow you to grow, have a voice, and make an impact.
Get Involved in Giving Back
We believe a well-rounded company starts with well-rounded employees, which is why we offer diverse service opportunities for our team throughout the year.
Build Innovative Technology
SimVentions takes pride in its innovative and cutting-edge technology, so you can be sure that whatever project you work on, you will be having a direct impact on our customer's success.
Work with Brilliant People
We don't just hire the smartest people; we seek experienced, creative individuals who are passionate about their work and thrive in our unique culture.
Create Meaningful Solutions
We are trusted partners with our customers, who bring us challenging and meaningful requirements to solve.
Employees who join SimVentions will enjoy additional perks like:
Employee Ownership: Work with the best and help build YOUR company!
Family focus: Work for a team that recognizes the importance of family time.
Culture: Add to our culture of technical excellence and collaboration.
Dress code: Business casual, we like to be comfortable while we work.
Resources: Excellent facilities, tools, and training opportunities to grow in your field.
Open communication: Work in an environment where your voice matters.
Corporate Fellowship: Opportunities to participate in company sports teams and employee-led interest groups for personal and professional development.
Employee Appreciation: Multiple corporate events throughout the year, including Holiday Events, Company Picnic, Imagineering Day, and more.
Founding Partner of the FredNats Baseball team: Equitable distribution of tickets for every home game to be enjoyed by our employee-owners and their families from our private suite.
Food: We have a lot of food around here!
NG & NGL Engineer
Data engineer job in South Shore, KY
An exciting career awaits you
At MPC, we're committed to being a great place to work - one that welcomes new ideas, encourages diverse perspectives, develops our people, and fosters a collaborative team environment.
MPLX Natural Gas & Natural Gas Liquids (NG & NGL) Operations is seeking an Operations Engineer. The Operations Engineer will provide engineering support, project management, technical stewardship and oversight to gathering, pipeline and compression assets.
This position will report to the Operations Engineering Manager and requires previous experience in the key areas of operations engineering, crude oil/natural gas/natural gas liquids gathering, pipelines and compression/pump facilities. This position will be focused on providing operations engineering support and the development/execution of gathering, pipeline, and compression related projects.
RESPONSIBILITIES:
1. Provides engineering support, technical stewardship, leadership, and oversight to Natural Gas & Natural Gas Liquids operations
2. Troubleshoots operational issues and optimizes processes utilizing sound engineering practices
3. Utilizes modern technical tools and software to perform engineering calculations and modeling
4. Develops, implements, and manages capital and expense projects for the business unit while adhering to budgets and project management processes; supervises contract personnel for project development and execution as required
5. Develops project processes, economic evaluations, scoping, costing, and approval documentation
6. Participates in Process Safety Management processes (PHAs, MOCs, etc.) as applicable
7. Partners with company and industry subject matter experts to maintain thorough knowledge and understanding of applicable DOT, OSHA, EPA, and other environmental safety regulations; ensures area of responsibility is compliant with all industry and company standards
MINIMUM QUALIFICATIONS:
Bachelor's degree in engineering from accredited college or university required
Engineer G&P I: Typically has 0-5 years of relevant experience.
Engineer G&P II: Typically has 4 or more years of relevant experience.
Engineer G&P III: Typically has 7 or more years of relevant experience
Engineer G&P Sr: Typically has 12 or more years of relevant experience
As an energy industry leader, our career opportunities fuel personal and professional growth.
Location:
South Shore, Kentucky
Additional locations:
Job Requisition ID:
00019728
Location Address:
2 MarkWest Dr
Education:
Bachelors (Required)
Employee Group:
Full time
Employee Subgroup:
Regular
Marathon Petroleum Company LP is an Equal Opportunity Employer and gives consideration for employment to qualified applicants without discrimination on the basis of race, color, religion, creed, sex, gender (including pregnancy, childbirth, breastfeeding or related medical conditions), sexual orientation, gender identity, gender expression, reproductive health decision-making, age, mental or physical disability, medical condition or AIDS/HIV status, ancestry, national origin, genetic information, military, veteran status, marital status, citizenship or any other status protected by applicable federal, state, or local laws. If you would like more information about your EEO rights as an applicant, click here.
If you need a reasonable accommodation for any part of the application process at Marathon Petroleum LP, please contact our Human Resources Department at ***************************************. Please specify the reasonable accommodation you are requesting, along with the job posting number in which you may be interested. A Human Resources representative will review your request and contact you to discuss a reasonable accommodation. Marathon Petroleum offers a total rewards program which includes, but is not limited to, access to health, vision, and dental insurance, paid time off, 401k matching program, paid parental leave, and educational reimbursement. Detailed benefit information is available at *****************************. The hired candidate will also be eligible for a discretionary company-sponsored annual bonus program.
Equal Opportunity Employer: Veteran / Disability
We will consider all qualified Applicants for employment, including those with arrest or conviction records, in a manner consistent with the requirements of applicable state and local laws. In reviewing criminal history in connection with a conditional offer of employment, Marathon will consider the key responsibilities of the role.
Data Engineer
Data engineer job in Falls Church, VA
*** W2 Contract Only - No C2C - No 3rd Parties ***
The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA.
In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.
Compensation, Benefits, and Role Info
Competitive pay rate of $65 per hour.
Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting.
Type: 12-month contract with potential extension or conversion.
Location: On-site in Falls Church, VA.
What You'll Be Doing
Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources.
Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance.
Write and optimize complex, highly-performant SQL queries across large datasets (Redshift, Oracle, SQL Server).
Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions.
Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.
What We're Looking For
8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems.
5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark.
Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift.
Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance.
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.
If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure.
Data Engineer
Data engineer job in Durham, NC
Immediate need for a talented Data Engineer. This is a 12+ month contract opportunity with long-term potential, located in Durham, NC (onsite). Please review the job description below and contact me ASAP if you are interested.
Job Diva ID: 25-95096
Pay Range: $65 - $70 /hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Key skills: SQL Stored Procedures
Aurora Postgres
AWS (S3, Glue, Lambda functions)
Java API Development
Basic Snowflake experience
Disaster Recovery
Bachelor's degree in Computer Science (or closely related)
5+ years of experience in application and REST API development using Java
Proven experience writing microservices with Java
Strong in managing API to database connections using different relational database drivers (Oracle, PostgreSQL, etc.)
Demonstrated experience developing, debugging and tuning complex SQL statements, PL/SQL packages and procedures
Hands-on experience building highly resilient, scalable, and efficient solutions using AWS services like Lambda, Glue, step functions, etc.
Hands-on experience with Aurora Postgres a big plus
Experience with DevOps or CI/CD Pipelines using Maven, Jenkins, Terraform, Github, Ansible, etc.
Experience in managing high-volume customer-facing application traffic for APIs
Knowledge of Messaging Technologies (Kafka, Kinesis, SNS, SQS)
Desire and ability to learn and implement new technologies
Keen ability to see complex challenges from multiple perspectives, and a willingness to solve them independently or with others
Knowledge of how to develop highly scalable distributed systems using Open-Source technologies
Proven knowledge of AWS via Associate, Professional, or Specialty Certification(s) a big plus
Ability to validate, monitor, and solve issues during development, testing, or in production
Excellent communication skills, both through written and verbal channels
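As a concrete illustration of the messaging-technology requirement above (Kafka, Kinesis, SNS, SQS), here is a minimal, hypothetical sketch of an idempotent consumer that deduplicates messages by ID, a pattern these roles commonly probe for. It uses plain Python rather than a real broker client, and the field names are made up:

```python
# Hypothetical sketch: idempotent message consumption with de-duplication.
# In a real system the messages would arrive from Kafka/Kinesis/SQS and the
# "seen" set would live in a durable store (a database table, Redis, etc.).

def consume(messages, seen_ids, handler):
    """Process each message at most once, keyed by its 'id' field."""
    processed = []
    for msg in messages:
        msg_id = msg["id"]
        if msg_id in seen_ids:
            continue  # duplicate delivery: skip it
        handler(msg)
        seen_ids.add(msg_id)  # record only after successful handling
        processed.append(msg_id)
    return processed

if __name__ == "__main__":
    out = []
    msgs = [{"id": 1, "body": "a"}, {"id": 2, "body": "b"}, {"id": 1, "body": "a"}]
    consume(msgs, set(), lambda m: out.append(m["body"]))
    print(out)  # the duplicate delivery of id 1 is handled once
```

Because most messaging systems deliver at-least-once, this kind of consumer-side dedupe is what turns them into effectively exactly-once processing.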
Our client is a leader in the financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Senior Data Engineer
Data engineer job in Durham, NC
We are seeking an experienced Senior Big Data & Cloud Engineer to design, build, and deliver advanced API and data solutions that support financial goal planning, investment insights, and projection tools. This role is ideal for a seasoned engineer with 10+ years of hands-on experience in big data processing, distributed systems, cloud-native development, and end-to-end data pipeline engineering.
You will work across retail, clearing, and custody platforms, leveraging modern cloud and big data technologies to solve complex engineering challenges. The role involves driving technology strategy, optimizing large-scale data systems, and collaborating across multiple engineering teams.
Key Responsibilities
Design and develop large-scale data movement services using Apache Spark (EMR) or Spring Batch.
Build and maintain ETL workflows, distributed pipelines, and automated batch processes.
Develop high-quality applications using Java, Scala, REST, and SOAP integrations.
Implement cloud-native solutions leveraging AWS S3, EMR, EC2, Lambda, Step Functions, and related services.
Work with modern storage formats and NoSQL databases to support high-volume workloads.
Contribute to architectural discussions and code reviews across engineering teams.
Drive innovation by identifying and implementing modern data engineering techniques.
Maintain strong development practices across the full SDLC.
Design and support multi-region disaster recovery (DR) strategies.
Monitor, troubleshoot, and optimize distributed systems using advanced observability tools.
Required Skills:
10+ years of experience in software/data engineering with strong big data expertise.
Proven ability to design and optimize distributed systems handling large datasets.
Strong communicator who collaborates effectively across teams.
Ability to drive architectural improvements and influence engineering practices.
Customer-focused mindset with commitment to delivering high-quality solutions.
Adaptable, innovative, and passionate about modern data engineering trends.
Data Scientist with GenAI and Python
Data engineer job in Charlotte, NC
Dexian is seeking a Data Scientist with GenAI and Python for an opportunity with a client located in Charlotte, NC.
Responsibilities:
Design, develop, and deploy GenAI models, including LLMs, GANs, and transformers, for tasks such as content generation, data augmentation, and creative applications
Analyze complex data sets to identify patterns, extract meaningful features, and prepare data for model training, with a focus on data quality for GenAI
Develop and refine prompts for LLMs, and optimize GenAI models for performance, efficiency, and specific use cases
Deploy GenAI models into production environments, monitor their performance, and implement strategies for continuous improvement and model governance
Work closely with cross-functional teams (e.g., engineering, product) to understand business needs, translate them into GenAI solutions, and effectively communicate technical concepts to diverse stakeholders
Stay updated on the latest advancements in GenAI and data science, and explore new techniques and applications to drive innovation within the organization
Utilize Python and its extensive libraries (e.g., scikit-learn, TensorFlow, PyTorch, Pandas, LangChain) for data manipulation, model development, and solution implementation
Requirements:
Proven hands-on experience implementing GenAI projects using open-source LLMs (Llama, GPT OSS, Gemma, Mistral) and proprietary APIs (OpenAI, Anthropic)
Strong background in Retrieval-Augmented Generation (RAG) implementations
In-depth understanding of embedding models and their applications
Hands-on experience with Natural Language Processing (NLP) solutions on text data
Strong Python development skills; should be comfortable with Pandas and NumPy for data analysis and feature engineering
Experience building and integrating APIs (REST, FastAPI, Flask) for serving models
Fine-tuning and optimizing open-source LLMs/SLMs is a big plus
Knowledge of Agentic AI frameworks and Orchestration
Experience in ML and Deep Learning is an advantage
Familiarity with cloud platforms (AWS/Azure/GCP)
Experience working with Agile Methodology
Strong problem solving, analytical and interpersonal skills
Ability to work effectively in a team environment
Strong written and oral communication skills, with the ability to express ideas clearly
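The Retrieval-Augmented Generation requirement above boils down to one core step: retrieving the documents most relevant to a query so they can be inserted into the LLM prompt as grounding context. Here is a toy, hypothetical sketch of that retrieval step; real systems use learned embedding models and a vector store, while this stand-in uses a bag-of-words vector and cosine similarity purely to show the mechanics:

```python
# Hypothetical sketch of the retrieval step in Retrieval-Augmented Generation.
# A bag-of-words term-frequency vector stands in for a learned embedding.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query; these would be
    inserted into the LLM prompt as grounding context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    docs = [
        "snowflake is a cloud data warehouse",
        "kafka is a distributed messaging system",
        "pytorch is a deep learning framework",
    ]
    print(retrieve("which framework is used for deep learning?", docs))
```

Swapping `embed` for a real embedding model (and the `sorted` call for a vector-store lookup) gives the production shape of a RAG pipeline.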
Dexian is a leading provider of staffing, IT, and workforce solutions with over 12,000 employees and 70 locations worldwide. As one of the largest IT staffing companies and the 2nd largest minority-owned staffing company in the U.S., Dexian was formed in 2023 through the merger of DISYS and Signature Consultants. Combining the best elements of its core companies, Dexian's platform connects talent, technology, and organizations to produce game-changing results that help everyone achieve their ambitions and goals.
Dexian's brands include Dexian DISYS, Dexian Signature Consultants, Dexian Government Solutions, Dexian Talent Development and Dexian IT Solutions. Visit ******************* to learn more.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
AWS Data Engineer
Data engineer job in Charlotte, NC
We are looking for a skilled and experienced AWS Data Engineer with 10+ years of experience to join our team. This role requires hands-on expertise in AWS serverless technologies, Big Data platforms, and automation tools. The ideal candidate will be responsible for designing scalable data pipelines, managing cloud infrastructure, and enabling secure, reliable data operations across marketing and analytics platforms.
Key Responsibilities:
Design, build, and deploy automated CI/CD pipelines for data and application workflows.
Analyze and enhance existing data pipelines for performance and scalability.
Develop semantic data models to support activation and analytical use cases.
Document data structures and metadata using Collibra or similar tools.
Ensure high data quality, availability, and integrity across platforms.
Apply SRE and DevSecOps principles to improve system reliability and security.
Manage security operations within AWS cloud environments.
Configure and automate applications on AWS instances.
Oversee all aspects of infrastructure management, including provisioning and monitoring.
Schedule and automate jobs using tools like Step Functions, Lambda, Glue, etc.
Required Skills & Experience:
Hands-on experience with AWS serverless technologies: Lambda, Glue, Step Functions, S3, RDS, DynamoDB, Athena, CloudFormation, CloudWatch Logs.
Proficiency in Confluent Kafka, Splunk, and Ansible.
Strong command of SQL and scripting languages: Python, R, Spark.
Familiarity with data formats: JSON, XML, Parquet, Avro.
Experience in Big Data engineering and cloud-native data platforms.
Functional knowledge of marketing platforms such as Adobe, Salesforce Marketing Cloud, and Unica/Interact (nice to have).
Preferred Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
AWS, Big Data, or DevOps certifications are a plus.
Experience working in hybrid cloud environments and agile teams.
Life at Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please get in touch with your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Data Scientist
Data engineer job in Chattanooga, TN
BUILT TO CONNECT
At Astec, we believe in the power of connection and the importance of building long-lasting relationships with our employees, customers and the communities we call home. With a team more than 4,000 strong, our employees are our #1 advantage. We invest in skills training and provide opportunities for career development to help you grow along with the business. We offer programs that support physical safety, as well as benefits and resources to enhance total health and wellbeing, so you can be your best at work and at home.
Our equipment is used to build the roads and infrastructure that connects us to each other and to the goods and services we use. We are an industry leader known for delivering innovative solutions that create value for our customers. As our industry evolves, we are using new technology and data like never before.
We're looking for creative problem solvers to build the future with us. Connect with us today and build your career at Astec.
LOCATION: Chattanooga, TN On-site / Hybrid (Role must report on-site regularly)
ABOUT THE POSITION
The Data Scientist will play a key role in establishing the analytical foundation of Astec Smart Services. This individual will lead efforts to build pipelines from source to cloud, define data workflows, build predictive models, and help guide the team's approach to turning data into customer value. He or she will work closely within Smart Services and cross-functionally to ensure insights are actionable and impactful. The role blends data architecture, data engineering, and data science to help build the Smart Services analytical foundation. This person will be instrumental in helping to build Astec's digital transformation and aftermarket strategy.
Deliverables & Responsibilities
Data Engineering:
Build and maintain robust data pipelines for ingestion, transformation, and storage.
Optimize ETL processes for scalability and performance.
Data Architecture:
Design and implement data models that support analytics and operational needs.
Define standards for data governance, security, and integration.
Data Science:
Develop predictive models and advanced analytics to support business decisions.
Apply statistical and machine learning techniques to large datasets.
Apply strong business acumen to understand decision drivers for internal and external customers
Collaborate with individuals and departments across the company to ensure insights are aligned with customer needs and drive value.
To be successful in this role, your experience and competencies are:
Bachelor's degree in data science, engineering, or related field. (Adv. degrees a plus.)
5+ years of experience in data science, including at least 3 years in industrial or operational environments.
Strong communication and project management skills are critical.
Proficiency in data pipeline tools (e.g., Spark, Airflow) and cloud platforms (Azure, AWS, GCP).
Strong understanding of data modeling principles and database technologies (SQL/NoSQL).
Hands-on experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and statistical analysis.
Ability to work across data architecture design and data science experimentation.
Programming: Python, SQL, and optionally Scala or Java.
Familiarity with distributed systems and big data technologies.
Strong communication skills for translating technical insights into business value.
Ability to work across technical, commercial, and customer-facing teams.
Supervisor and Leadership Expectations
This role will not have supervisory or managerial responsibilities.
This role will have program management responsibilities.
Our Culture and Values
Employees that become part of Astec embody the values below throughout their work.
Continuous devotion to meeting the needs of our customers
Honesty and integrity in all aspects of business
Respect for all individuals
Preserving entrepreneurial spirit and innovation
Safety, quality and productivity as means to ensure success
EQUAL OPPORTUNITY EMPLOYER
As an Equal Opportunity Employer, Astec does not discriminate on the basis of race, creed, color, religion, gender (sex), sexual orientation, gender identity, marital status, national origin, ancestry, age, disability, citizenship status, a person's veteran status or any other characteristic protected by law or executive order.
Snowflake Data Engineer
Data engineer job in Durham, NC
Proficiency in SQL and experience developing in Snowflake cloud computing environments
Knowledge of data warehousing concepts and metadata management
Experience with data modeling, data lakes, multi-dimensional models, and data dictionaries
Hands-on experience with Snowflake features like Time Travel and Zero-Copy Cloning
Experience in query performance tuning and cost optimization on a cloud data platform
Knowledge of Snowflake warehousing, architecture, processing, and administration; DBT; pipelines
Hands-on experience with PL/SQL on Snowflake
Excellent personal communication, leadership, and organizational skills
Should be well versed in common design patterns
Knowledge of SQL databases is a plus
Hands-on Snowflake development experience is a must
Work with various cross-functional groups and tech leads from other tracks
Work closely with the team and guide them technically and functionally; must be a team player with a good attitude
Data Conversion Engineer
Data engineer job in Charlotte, NC
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer serves as a key member of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part of ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensuring all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technical related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn database structure, business logic and combine all knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Management of multiple projects and conversion implementations
Ability to proactively troubleshoot and solve problems with limited supervision
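The "conversion mapper" responsibility above (interpreting incoming data and reshaping it to match a target specification) can be sketched very simply. The following is a hypothetical illustration, not Paymentus' actual format: the field names and target spec are invented, and a real mapper would be driven by per-client specifications with far more validation:

```python
# Hypothetical sketch of a conversion mapper: rename fields and coerce types
# from an incoming client record into a (made-up) target bill-payment spec.

# source field -> (target field, type coercion)
TARGET_SPEC = {
    "acct": ("account_number", str),
    "amt_due": ("amount_due", float),
    "due": ("due_date", str),
}

def convert_record(raw):
    """Map one raw record to the target spec, collecting per-field errors
    instead of failing the whole batch on the first bad value."""
    out, errors = {}, []
    for src, (dst, cast) in TARGET_SPEC.items():
        if src not in raw:
            errors.append(f"missing field: {src}")
            continue
        try:
            out[dst] = cast(raw[src])
        except (TypeError, ValueError):
            errors.append(f"bad value for {src}: {raw[src]!r}")
    return out, errors

if __name__ == "__main__":
    rec, errs = convert_record({"acct": 1001, "amt_due": "42.50", "due": "2025-01-31"})
    print(rec, errs)
```

Collecting errors per field rather than raising immediately is what makes it practical to report back a full list of problem records after a large conversion run.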
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication - uses formal and informal written and/or verbal channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front-end web interfaces (HTML5, JavaScript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
Cloud Data Engineer- Databricks
Data engineer job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune 500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt.
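Orchestration tools like Airflow and dbt, mentioned in the last responsibility above, ultimately reduce to one idea: run each task only after everything it depends on has run. A minimal sketch of that core mechanic, using Python's standard-library topological sorter and invented model names:

```python
# Hypothetical sketch: running data models in dependency order, the core idea
# behind DAG-based orchestrators like Airflow and dbt. Model names are made up.
from graphlib import TopologicalSorter

# each model lists the models it depends on (its upstream tables)
deps = {
    "stg_orders": set(),
    "stg_customers": set(),
    "fct_orders": {"stg_orders", "stg_customers"},
    "rpt_revenue": {"fct_orders"},
}

def run_order(dependencies):
    """Return an execution order in which every model runs after its inputs."""
    return list(TopologicalSorter(dependencies).static_order())

if __name__ == "__main__":
    print(run_order(deps))
```

In dbt the same graph is built automatically from `ref()` calls between models; in Airflow it is declared explicitly between tasks, but the scheduling logic is the same.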
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation and interactive skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
AWS Data Engineer (Only W2)
Data engineer job in Charlotte, NC
Title: AWS Data Engineer
Experience: 10 years
Must Have Skills:
• Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services.
• Experience in Snowflake and Data Build Tool.
• Expertise in DBT, NodeJS and Python.
• Expertise in Informatica, PowerBI, Database, Cognos.
Nice to Have Skills:
Detailed Job Description:
• Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services.
• Experience in Snowflake and Data Build Tool. Expertise in DBT, NodeJS and Python.
• Expertise in Informatica, PowerBI, Database, Cognos.
• Proven experience in leading teams across locations.
• Knowledge of DevOps processes, Infrastructure as Code and their purpose.
• Good understanding of data warehouses, their purpose, and implementation
• Good communication skills.
Kindly share your resume at ******************
Palantir Data Engineer
Data engineer job in Charlotte, NC
Build and maintain data pipelines and workflows in Palantir Foundry.
Design, train, and deploy ML models for classification, optimization, and forecasting use cases.
Apply feature engineering, data cleaning, and modeling techniques using Python, Spark, and ML libraries.
Create dashboards and data applications using Slate or Streamlit to enable operational decision-making.
Implement generative AI use cases using large language models (GPT-4, Claude, etc.)
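The feature-engineering responsibility above can be illustrated with two of its most basic steps, min-max scaling a numeric column and one-hot encoding a categorical one. This is a hypothetical, dependency-free sketch; in a Foundry pipeline the same transformations would typically run on Spark or pandas datasets:

```python
# Hypothetical sketch of two basic feature-engineering steps:
# min-max scaling a numeric column and one-hot encoding a categorical one.

def min_max_scale(values):
    """Scale numeric values into the range [0, 1]."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid dividing by zero for a constant column
    return [(v - lo) / span for v in values]

def one_hot(values):
    """Encode categories as 0/1 indicator vectors, one column per category."""
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values]

if __name__ == "__main__":
    print(min_max_scale([10, 20, 30]))   # -> [0.0, 0.5, 1.0]
    print(one_hot(["red", "blue", "red"]))
```

Both steps matter for the classification and forecasting models mentioned above: scaling keeps features on comparable ranges, and one-hot encoding turns categories into inputs a model can consume.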
Senior Data Engineer
Data engineer job in Charlotte, NC
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our Challenge:
We are looking for a skilled senior data engineer with comprehensive experience designing, developing, and maintaining scalable data solutions within the financial and regulatory domains, with proven expertise in leading end-to-end data architectures, integrating diverse data sources, and ensuring data quality and accuracy.
Additional Information
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $155k/year & benefits (see below).
Work location: New York City, NY (Hybrid, 3 days a week)
The Role
Responsibilities:
Advanced proficiency in Python, SQL Server, Snowflake, Azure Databricks, and PySpark.
Strong understanding of relational databases, ETL processes, and data modeling.
Expertise in system design, architecture, and implementing robust data pipelines.
Hands-on experience with data validation, quality checks, and automation tools (Autosys, Control-M).
Skilled in Agile methodologies, SDLC processes, and CI/CD pipelines.
Effective communicator with the ability to collaborate with business analysts, users, and global teams.
Requirements:
Overall 10+ years of IT experience is required
Collaborate with business stakeholders to gather technical specifications and translate business requirements into technical solutions.
Develop and optimize data models and schemas for efficient data integration and analysis.
Lead application development involving Python, PySpark, SQL, Snowflake, and Databricks platforms.
Implement data validation procedures to maintain high data quality standards.
Strong experience in SQL (writing complex queries, joins, tables, etc.)
Conduct comprehensive testing (UT, SIT, UAT) alongside business and testing teams.
Provide ongoing support, troubleshooting, and maintenance in production environments.
Contribute to architecture and design discussions to ensure scalable, maintainable data solutions.
Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
Data Engineer
Data engineer job in Charlotte, NC
C# Senior Developer | W2 only | Charlotte, NC (hybrid) | Mid level (5-7 years)
Role Description
A leading Japanese bank is in the process of driving a Digital Transformation across its Americas Division as it continues to modernize technology, strengthen its data-driven approach, and support future growth. As part of this initiative, the firm is seeking an experienced Data Engineer to support the design and development of a strategic enterprise data platform supporting Capital Markets and affiliated securities businesses.
This role will contribute to the development of a scalable, cloud-based data platform leveraging Azure technologies, supporting multiple business units across North America and global teams.
Role Objectives
Serve as a member of the Data Strategy team, supporting broker-dealer and swap-dealer entities across the Americas Division.
Participate in the active development of the enterprise data platform, beginning with the establishment of reference data systems for securities and pricing data, and expanding into additional data domains.
Collaborate closely with internal technology teams while adhering to established development standards and best practices.
Support the implementation and expansion of the strategic data platform on the bank's Azure Cloud environment.
Contribute technical expertise and solution design aligned with the overall Data Strategy roadmap.
Qualifications and Skills
Proven experience as a Data Engineer, with strong hands-on experience in Azure cloud environments.
Experience implementing solutions using:
Azure Cloud Services
Azure Data Factory
Azure Data Lake Gen2
Azure Databases
Azure Data Fabric
API Gateway management
Azure Functions
Strong experience with Azure Databricks.
Advanced SQL skills across relational and NoSQL databases.
Experience developing APIs using Python (FastAPI or similar frameworks).
Familiarity with DevOps and CI/CD pipelines (Git, Jenkins, etc.).
Strong understanding of ETL / ELT processes.
Experience within financial services, including exposure to financial instruments, asset classes, and market data, is a strong plus.
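The ETL/ELT distinction called out above can be sketched with Python's built-in sqlite3 module: in the ELT style, raw rows are landed first and the transformation happens inside the database in SQL. This is a minimal, hedged illustration only; the table and column names (raw_trades, trade_summary) are invented for the example, not drawn from the bank's actual platform.

```python
import sqlite3

# In-memory database stands in for the warehouse (illustrative only).
conn = sqlite3.connect(":memory:")

# Extract + Load: land the raw trade rows untransformed.
conn.execute("CREATE TABLE raw_trades (symbol TEXT, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO raw_trades VALUES (?, ?, ?)",
    [("AAPL", 100, 190.0), ("AAPL", 50, 191.0), ("MSFT", 10, 410.0)],
)

# Transform inside the database: aggregate raw rows into a curated table,
# computing total quantity and volume-weighted average price per symbol.
conn.execute(
    """CREATE TABLE trade_summary AS
       SELECT symbol,
              SUM(qty)                    AS total_qty,
              SUM(qty * price) / SUM(qty) AS vwap
       FROM raw_trades
       GROUP BY symbol"""
)

for row in conn.execute("SELECT * FROM trade_summary ORDER BY symbol"):
    print(row)
```

In a tool like Azure Data Factory or Databricks, the load and transform steps would be separate pipeline activities, but the shape of the work is the same: raw landing zone first, SQL transformation second.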
Senior Data Engineer
Data engineer job in McLean, VA
The candidate must have 5+ years of hands-on experience with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), and automation and regression testing.
Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role.
Experience with Angular and DevOps is a nice-to-have for this role.
Must-Have Qualifications: PySpark/Python-based microservices, AWS EKS, PostgreSQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify.
Responsibilities:
Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project.
New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest
Perform system, functional, and data analysis on the current system and create technical/functional requirements documents.
Current System: Informatica, SAS, AutoSys, DB2
Write automated tests using Behave/Cucumber against the new microservices-based architecture.
Promote high code quality and resolve issues related to performance tuning and scalability.
Apply strong DevOps skills, including Docker/container-based deployments to AWS EKS using Jenkins, along with SonarQube and Fortify.
Communicate and engage with business teams, analyze current business requirements (BRS documents), and create the necessary data mappings.
Strong skills and experience in reporting-application development and data analysis are preferred.
Knowledge of Agile methodologies and technical documentation.
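A Behave/Cucumber automation suite of the kind described above pairs a plain-language Gherkin feature file with Python step definitions. The scenario below is a hypothetical sketch; the feature, table, and step names are invented for illustration, not taken from the project:

```gherkin
Feature: Trade summary microservice
  Scenario: Aggregating trades by symbol
    Given the raw_trades table contains:
      | symbol | qty | price |
      | AAPL   | 100 | 190.0 |
      | AAPL   |  50 | 191.0 |
    When the summary job runs
    Then the trade_summary row for "AAPL" has total_qty 150
```

Each Given/When/Then line binds to a Python function decorated with Behave's @given/@when/@then decorators, while Pytest covers the same logic at the unit level, which is how the two frameworks listed in the must-have qualifications typically divide the work.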
Senior Data Architect
Data engineer job in Richmond, VA
The client is establishing an enterprise data management program. As a participant, this role will coordinate with business architecture, data architecture, and enterprise architecture staff, as well as data stewards and data custodians. The Business Data Management Architect will be responsible for leveraging expertise in data modeling and extensive data quality management to design and implement effective data management processes. This position requires defining and utilizing taxonomies for enhanced data organization, classification, and retrieval, contributing to improved metadata management. You will need to become familiar with VDOT's Business Capability Model and participate in developing and maturing an enterprise data model, enterprise data flows, and road maps. This position will require familiarity (or the development of familiarity) with the National Information Exchange Model; the Spatial Data Standards for Facilities, Infrastructure, and Environment; and other standards.
Qualifications:
• Minimum requirement: Prior Virginia Department of Transportation (VDOT) experience directly related to this role
• Extensive experience in data quality management, including establishing standards, monitoring, and continuous improvement
• Demonstrated experience with enterprise data programs at a similarly sized organization (private or public)
• Proven experience in data modeling
• Demonstrated ability to bridge the gap between business architecture and National Information Exchange Model (NIEM) standards
• Strong understanding of data governance principles and best practices
• Proficiency in metadata management, including taxonomies, and enhancing data quality
• Experience in overseeing the complete data lifecycle within a complex organizational structure
• Strong written and verbal communication skills
Skills:
• Extensive data modeling experience
• Advanced business data architecture experience
• Proficiency in metadata management, including taxonomies, and enhancing data quality
• Ability to bridge the gap between business architecture and National Information Exchange Model (NIEM) standards
• Ability to model data lifecycle within a complex organizational structure
Lead Principal Data Solutions Architect
Data engineer job in Reston, VA
*****TO BE CONSIDERED, CANDIDATES MUST BE U.S. CITIZENS*****
*****TO BE CONSIDERED, CANDIDATES MUST BE LOCAL TO THE DC/MD/VA METRO AREA AND BE OPEN TO A HYBRID SCHEDULE IN RESTON, VA*****
Formed in 2011, Inadev is focused on its founding principle to build innovative customer-centric solutions incredibly fast, secure, and at scale. We deliver world-class digital experiences to some of the largest federal agencies and commercial companies. Our technical expertise and innovations are comprised of codeless automation, identity intelligence, immersive technology, artificial intelligence/machine learning (AI/ML), virtualization, and digital transformation.
POSITION DESCRIPTION:
Inadev is seeking a strong Lead Principal Data Solutions Architect. The primary focus will be natural language processing (NLP), applying data mining techniques, performing statistical analysis, and building high-quality prediction systems.
PROGRAM DESCRIPTION:
This initiative focuses on modernizing and optimizing a mission-critical data environment within the immigration domain to enable advanced analytics and improved decision-making capabilities. The effort involves designing and implementing a scalable architecture that supports complex data integration, secure storage, and high-performance processing. The program emphasizes agility, innovation, and collaboration to deliver solutions that meet evolving stakeholder requirements while maintaining compliance with stringent security and governance standards.
RESPONSIBILITIES:
Lead system architecture decisions, ensure technical alignment across teams, and advocate for best practices in cloud and data engineering.
Serve as a senior technical leader and trusted advisor, driving architectural strategy and guiding development teams through complex solution design and implementation.
Serve as the lead architect and technical authority for enterprise-scale data solutions, ensuring alignment with strategic objectives and technical standards.
Drive system architecture design, including data modeling, integration patterns, and performance optimization for large-scale data warehouses.
Provide expert guidance to development teams on Agile analytics methodologies and best practices for iterative delivery.
Act as a trusted advisor and advocate for the government project lead, translating business needs into actionable technical strategies.
Oversee technical execution across multiple teams, ensuring quality, scalability, and security compliance.
Evaluate emerging technologies and recommend solutions that enhance system capabilities and operational efficiency.
NON-TECHNICAL REQUIREMENTS:
Must be a U.S. Citizen.
Must be willing to work a hybrid schedule (2-3 days on-site) in Reston, VA, and at client locations in the Northern Virginia/DC/MD area as required.
Ability to pass a 7-year background check and obtain/maintain a U.S. Government clearance.
Strong communication and presentation skills.
Must be able to prioritize and self-start.
Must be adaptable/flexible as priorities shift.
Must be enthusiastic and have passion for learning and constant improvement.
Must be open to collaboration, feedback and client asks.
Must enjoy working with a vibrant team of outgoing personalities.
MANDATORY REQUIREMENTS/SKILLS:
Bachelor of Science degree in Computer Science, Engineering, or a related subject, and at least 10 years of experience leading the architectural design of enterprise-level data platforms, with a significant focus on Databricks Lakehouse architecture.
Experience within the Federal Government, specifically DHS, is preferred.
Must possess demonstrable experience with Databricks Lakehouse Platform, including Delta Lake, Unity Catalog for data governance, Delta Sharing, and Databricks SQL for analytics and BI workloads.
Must demonstrate deep expertise in Databricks Lakehouse architecture, medallion architecture (Bronze/Silver/Gold layers), Unity Catalog governance framework, and enterprise-level integration patterns using Databricks workflows and Auto Loader.
Demonstrated professional experience organizing the technical execution of Agile analytics using Databricks Repos, Jobs, and collaborative notebooks.
Expertise in Apache Spark on Databricks, including performance optimization, cluster management, Photon engine utilization, and Delta Lake optimization techniques (Z-ordering, liquid clustering, data skipping).
Proficiency in Databricks Unity Catalog for centralized data governance, metadata management, data lineage tracking, and access control across multi-cloud environments.
Experience with Databricks Delta Live Tables (DLT) for declarative ETL pipeline development and data quality management.
Certification in one or more: Databricks Certified Data Engineer Associate/Professional, Databricks Certified Solutions Architect, AWS, Apache Spark, or cloud platform certifications.
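The medallion (Bronze/Silver/Gold) pattern named in the requirements above can be illustrated without a Databricks cluster. In this stdlib-only sketch, lists of dicts stand in for Delta tables, so every field and layer name here is illustrative rather than the program's actual schema: Bronze holds raw ingested records, Silver holds validated and typed records, and Gold holds a business-level aggregate.

```python
# Bronze: raw records exactly as ingested, including malformed rows.
bronze = [
    {"case_id": "A1", "status": "OPEN ", "days_open": "12"},
    {"case_id": "A2", "status": "closed", "days_open": "30"},
    {"case_id": None, "status": "open", "days_open": "x"},  # unparseable row
]

def to_silver(rows):
    """Silver: validated, typed records; bad rows are dropped here
    (a real pipeline would quarantine them for review)."""
    silver = []
    for r in rows:
        if r["case_id"] is None or not r["days_open"].strip().isdigit():
            continue
        silver.append({
            "case_id": r["case_id"],
            "status": r["status"].strip().lower(),
            "days_open": int(r["days_open"]),
        })
    return silver

def to_gold(rows):
    """Gold: business-level aggregate (average days open per status)."""
    totals = {}
    for r in rows:
        count, total = totals.get(r["status"], (0, 0))
        totals[r["status"]] = (count + 1, total + r["days_open"])
    return {status: total / count for status, (count, total) in totals.items()}

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)
```

On Databricks itself, each layer would typically be a Delta table, with Delta Live Tables or Auto Loader handling the Bronze ingestion and expectations enforcing the Silver-layer validation that this sketch does by hand.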
DESIRED REQUIREMENTS/SKILLS:
Expertise in ETL tools.
Advanced knowledge of cloud platforms (AWS preferred; Azure or GCP a plus).
Proficiency in SQL, PL/SQL, and performance tuning for large datasets.
Understanding of security frameworks and compliance standards in federal environments.
PHYSICAL DEMANDS:
Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Inadev Corporation does not discriminate against qualified individuals based on their status as protected veterans or individuals with disabilities and prohibits discrimination against all individuals based on their race, color, religion, sex, sexual orientation/gender identity, or national origin.