F5 Engineer
Requirements engineer job in Charlotte, NC
Job Title: F5 Engineer
Duration: 18 Month W2 Contract
Required Pay Scale: $60-$70/hour
Face-to-face interview required (one and done).
Senior Load balancer engineer required to build complex solutions for the client's applications
Responsibilities include:
- Perform various load balancer changes in accordance with project requirements.
- Build new configurations, such as new VIPs and/or WideIP setups.
- Modify existing setups and troubleshoot application traffic issues flowing through load balancers.
Main points of contact:
- Project managers
- Peers on the team for design peer review
- Escalations to team lead and manager when necessary
Expert level in F5 load balancing (LTM, GTM)
Expert in F5 iRules to control traffic flows and manipulate headers
Strong understanding of TCP/IP network communications
Need good experience with F5 (GTM, LTM, SSL offloading)
- L2/L3 support experience
- Good routing and switching experience
- Strong DNS experience
- Need a load balancing SME; good experience with load balancing options and features including OneConnect, persistence, SSL, and HTTP. Also need experience with C
- Good experience with iRules and/or iControl
- Experience with capacity/threshold management and workload management
Senior Load balancer engineer required to build complex solutions for Bank of America's applications
Expert level in F5 load balancing (LTM, GTM); Python needed
Understanding of programming languages including Python, Java, TCL, JavaScript, jQuery, and Perl
Strong communication skills to work with project managers and solution architects on deliverables
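Much of this VIP and pool build-out work is driven through the BIG-IP iControl REST API. The sketch below is a minimal illustration, not the client's actual tooling: the host, credentials, and object names are hypothetical, while the payload shapes follow the standard `/mgmt/tm/ltm` collections.

```python
import base64
import json
import urllib.request

def pool_payload(name, members):
    """JSON body for creating an LTM pool (POST /mgmt/tm/ltm/pool)."""
    return {
        "name": name,
        "monitor": "http",
        # pool members are conventionally named "address:port"
        "members": [{"name": m} for m in members],
    }

def vip_payload(name, destination, pool):
    """JSON body for creating a virtual server (POST /mgmt/tm/ltm/virtual)."""
    return {
        "name": name,
        "destination": destination,  # e.g. "10.1.20.10:443"
        "pool": pool,
        "profiles": [{"name": "tcp"}, {"name": "http"}],
    }

def icontrol_post(host, path, body, user, password):
    """POST a payload to a BIG-IP iControl REST collection (network call)."""
    req = urllib.request.Request(
        f"https://{host}/mgmt/tm{path}",
        data=json.dumps(body).encode(),
        method="POST",
    )
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    req.add_header("Content-Type", "application/json")
    return urllib.request.urlopen(req)  # requires a reachable BIG-IP
```

A new VIP build would chain the two payloads: create the pool first, then the virtual server that points at it.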
About Matlen Silver
Experience Matters. Let your experience be driven by our experience. For more than 40 years, Matlen Silver has delivered solutions for complex talent and technology needs to Fortune 500 companies and industry leaders. Led by hard work, honesty, and a trusted team of experts, we can say that Matlen Silver technology has created a solutions experience and legacy of success that is the difference in the way the world works.
Matlen Silver is an Equal Opportunity Employer and considers all applicants for all positions without regard to race, color, religion, gender, national origin, age, sexual orientation, veteran status, the presence of a non-job-related medical condition or disability, or any other legally protected status.
If you are a person with a disability needing assistance with the application or at any point in the hiring process, please contact us at email and/or phone at: ********************* // ************
At The Matlen Silver Group, Inc., W2 employees are eligible for the following benefits:
Health, vision, and dental insurance (single and family coverage)
401(k) plan (employee contributions only)
Jenkins Platform Engineer
Requirements engineer job in Charlotte, NC
Full-time
Charlotte, NC
Qualifications
You have operational experience with one or more of the following CI/CD tools: Jenkins, Octopus Deploy.
3+ years of experience managing, maintaining, and scaling a DevOps environment using Jenkins for a large number of deployments.
Experience with configuration management tools (Ansible, Puppet/Chef)
Experience automating infrastructure deployment
Demonstrated hands-on working knowledge of AWS and its services, including but not limited to EC2, S3, IAM, RDS, and Aurora, plus Python
Experience in Software-Defined-Networks, Software-Defined-Data-Centers, Virtual Private Clouds and other core network technology
Experienced with DevOps and GitOps
Experience with multiple cloud platforms (GCP or Azure) a plus
Proactive, enthusiastic and ambitious
Good team worker with excellent verbal and written communication skills
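Managing Jenkins at this scale usually includes driving it through its REST API. As a hedged sketch (the server URL, job name, and parameters below are invented), queuing a parameterized build uses Jenkins's standard `buildWithParameters` endpoint:

```python
import base64
import urllib.parse
import urllib.request

def build_trigger_url(base_url, job_name, params):
    """URL for Jenkins's standard buildWithParameters endpoint."""
    query = urllib.parse.urlencode(params)
    job = urllib.parse.quote(job_name)
    return f"{base_url.rstrip('/')}/job/{job}/buildWithParameters?{query}"

def trigger_build(base_url, job_name, params, user, api_token):
    """POST to Jenkins to queue a build (network call; needs a real server)."""
    req = urllib.request.Request(
        build_trigger_url(base_url, job_name, params), method="POST"
    )
    creds = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    return urllib.request.urlopen(req)
```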
Platform Engineer
Requirements engineer job in Charlotte, NC
We are seeking an experienced and highly skilled Platform Lead to join our technology team. The ideal candidate will have a strong background in cloud infrastructure, automation, and platform engineering, with deep expertise in AWS services (EC2, RDS, S3, Lambda, Load Balancer, EKS, Encryption, Secrets Manager), Terraform, Git, and Python. As Platform Lead, you will architect, implement, and manage scalable, secure, and reliable cloud platforms to support our business objectives.
Key Responsibilities:
Lead the design, implementation, and management of cloud infrastructure using AWS services such as EC2, RDS, S3, Lambda, Load Balancer, EKS, Encryption, and Secrets Manager.
Develop and maintain infrastructure as code solutions using Terraform.
Build automation scripts and tools in Python to streamline platform operations and deployments.
Oversee platform reliability, scalability, and security, ensuring best practices are followed.
Collaborate with software engineering, DevOps, and security teams to deliver robust cloud solutions.
Manage CI/CD pipelines and version control using Git.
Monitor, troubleshoot, and resolve platform issues, ensuring high availability and performance.
Document architecture, processes, and procedures for platform operations.
Mentor and guide team members, fostering a culture of technical excellence and continuous improvement.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
7+ years of professional experience in cloud platform engineering and development.
Extensive hands-on experience with AWS services: EC2, RDS, S3, Lambda, Load Balancer, EKS, Encryption, and Secrets Manager.
Proficiency in infrastructure as code using Terraform.
Strong programming skills in Python.
Experience with version control systems, especially Git.
Solid understanding of cloud security, encryption, and secrets management.
Proven track record in designing and operating scalable, reliable, and secure cloud platforms.
Excellent problem-solving skills, attention to detail, and ability to communicate effectively with stakeholders.
Experience with GenAI or AI/ML platforms is a plus.
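For the Secrets Manager piece of this stack, a minimal Python sketch follows; the secret name and field names are hypothetical, while `get_secret_value` is the real boto3 call.

```python
import json

def parse_secret_string(secret_string):
    """Secrets Manager stores key/value secrets as a JSON SecretString."""
    return json.loads(secret_string)

def get_db_credentials(secret_id, region="us-east-1"):
    # boto3 is the third-party AWS SDK; imported lazily so the
    # parsing helper above can be exercised offline.
    import boto3
    client = boto3.client("secretsmanager", region_name=region)
    resp = client.get_secret_value(SecretId=secret_id)
    return parse_secret_string(resp["SecretString"])
```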
Azure Databricks Engineer (Full-Time Only)
Requirements engineer job in Charlotte, NC
We are seeking a highly skilled and motivated Technical Team Lead with extensive experience in Azure Databricks to join our dynamic team. The ideal candidate will possess a strong technical background, exceptional leadership abilities, and a passion for driving innovative solutions. As a Technical Team Lead, you will be responsible for guiding a team of developers and engineers in the design, development, and implementation of data driven solutions that leverage Azure Databricks.
Regards,
Mamatha K.
Sr. Resource Specialist
***************
Email: ******************* / ***************
Dynatrace SRE Engineer
Requirements engineer job in Fort Mill, SC
We are looking for a Site Reliability Engineer with deep expertise in Dynatrace and a strong background in observability, automation, and cloud operations. This role focuses on designing and implementing highly reliable, scalable solutions while driving proactive monitoring and operational excellence.
Key Responsibilities:
• Lead the design and implementation of full-stack observability solutions with Dynatrace as the primary platform.
• Configure Dynatrace for application performance monitoring (APM), infrastructure monitoring, and intelligent alerting.
• Build advanced dashboards and integrate Dynatrace with event management systems to enable proactive incident prevention and root cause analysis.
• Collaborate with teams to optimize Dynatrace usage for AIOps-driven insights and automated anomaly detection.
• Provide oversight for production operations to maximize reliability and automation.
• Develop and evolve SRE best practices, runbooks, and tooling to ensure high availability and resilience.
• Implement data-driven operational strategies to improve decision-making and reduce MTTR.
• Hands-on experience with Dynatrace, Splunk, ELK, Grafana, Prometheus, and (future) ThousandEyes.
• Build and manage CI/CD pipelines and Infrastructure as Code (IaC) solutions using Terraform, Jenkins, TeamCity, Octopus, Bamboo, and U-Deploy across hybrid/multi-cloud environments.
• Develop and manage DevOps pipelines in AWS, Azure, and GCP using Terraform and cloud-native tooling.
• Strong developer background with the ability to understand application layers and infrastructure interactions.
• Define and document standard operating procedures, architecture diagrams, and system documentation using Jira, Confluence, and UML.
• Identify areas for process and efficiency improvement within Platform Services Operations; recommend and implement solutions.
• Drive automation initiatives across all operational processes.
• Proactively monitor system capacity and health indicators; provide analytics and forecasts for scaling.
Preferred Qualifications
• Expert-level experience with Dynatrace, including dashboard creation, alert configuration, and integration with other observability tools.
• Strong knowledge of AIOps, performance tuning, and proactive incident management.
• Familiarity with hybrid/multi-cloud environments and modern DevOps practices.
• Excellent problem-solving skills and ability to work in a fast-paced, collaborative environment.
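Custom metrics typically reach Dynatrace through the Metrics API v2 ingest endpoint, which accepts a plain-text line protocol. A rough sketch (tenant URL, token, and metric names are placeholders; the line shape follows the documented `key,dimensions value` format):

```python
import urllib.request

def metric_line(key, dimensions, value):
    """One line of Dynatrace metric line protocol: key,dim=val,... value"""
    dims = ",".join(f"{k}={v}" for k, v in sorted(dimensions.items()))
    return f"{key},{dims} {value}"

def ingest(tenant_url, api_token, lines):
    """POST metric lines to /api/v2/metrics/ingest (network call)."""
    req = urllib.request.Request(
        f"{tenant_url}/api/v2/metrics/ingest",
        data="\n".join(lines).encode(),
        method="POST",
    )
    req.add_header("Authorization", f"Api-Token {api_token}")
    req.add_header("Content-Type", "text/plain; charset=utf-8")
    return urllib.request.urlopen(req)
```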
Regards,
Vishwajeet Verma
Cloud Engineer
Requirements engineer job in Fort Mill, SC
Vaco is looking for a Cloud Engineer with 5+ years of experience to sit on-site 5 days a week in Ft. Mill, SC. Must be willing to work on a W2 basis; no sponsorship is offered.
Required skills:
-5+ years of experience as a Systems Engineer, with 3 of those years supporting Cloud Services
-Experience in Architecture, Design, and Deployment of Enterprise Applications and Systems
-Experience implementing cloud governance
-Azure certification is highly preferred
-Local to Ft. Mill, SC - role will require on-site interview and work on-site 5x week
-Able to work on a W2 basis directly for Vaco's client
Please apply directly to the posting for more information or to be considered for the role.
Determining compensation for this role (and others) at Vaco/Highspring depends upon a wide array of factors including but not limited to the individual's skill sets, experience and training, licensure and certifications, office location and other geographic considerations, as well as other business and organizational needs. With that said, as required by local law in geographies that require salary range disclosure, Vaco/Highspring notes the salary range for the role is noted in this job posting. The individual may also be eligible for discretionary bonuses, and can participate in medical, dental, and vision benefits as well as the company's 401(k) retirement plan.
Data Engineer
Requirements engineer job in Charlotte, NC
Job Title: Data Engineer / SQL Server Developer (7+ Years)
Client: Wells Fargo
Rate: $60/hr
Interview Process: Code Test + In-Person Interview
Job Description
Wells Fargo is seeking a Senior Data Engineer / SQL Server Developer (7+ years) who can work across both database development and QA automation functions. The ideal candidate will have strong SQL Server expertise along with hands-on experience in test automation tools.
Key Responsibilities
Design, develop, and optimize SQL Server database structures, queries, stored procedures, triggers, and ETL workflows.
Perform advanced performance tuning, query optimization, and troubleshooting of complex SQL issues.
Develop and maintain data pipelines ensuring data reliability, integrity, and high performance.
Build and execute automated test scripts using Selenium, BlazeMeter, or similar frameworks.
Perform both functional and performance testing across applications and data processes.
Support deployments in containerized ecosystems, ideally within OpenShift.
Collaborate with architecture, QA, DevOps, and application teams to ensure seamless delivery.
Required Skills
Primary:
7+ years of hands-on SQL Server development (T-SQL, stored procedures, performance tuning, ETL).
Secondary:
Experience working with OpenShift or other container platforms.
Testing / QA Automation:
Strong experience with test automation tools like Selenium, BlazeMeter, JMeter, or equivalent.
Ability to design automated functional and performance test suites.
Ideal Candidate Profile
Senior-level developer capable of taking ownership of both development and test automation deliverables.
Strong analytical and debugging skills across data engineering and testing disciplines.
Experience working in large-scale enterprise environments.
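The SQL Server side of this role often comes down to generating and tuning statements such as parameterized upserts. As an illustrative sketch (table and column names are invented), building a T-SQL MERGE with `?` placeholders in the style pyodbc expects:

```python
def upsert_sql(table, key_cols, data_cols):
    """Parameterized T-SQL MERGE (upsert) with ? placeholders."""
    all_cols = key_cols + data_cols
    src = ", ".join(f"? AS {c}" for c in all_cols)
    on = " AND ".join(f"t.{c} = s.{c}" for c in key_cols)
    update = ", ".join(f"t.{c} = s.{c}" for c in data_cols)
    insert_cols = ", ".join(all_cols)
    insert_vals = ", ".join(f"s.{c}" for c in all_cols)
    return (
        f"MERGE INTO {table} AS t "
        f"USING (SELECT {src}) AS s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {update} "
        f"WHEN NOT MATCHED THEN INSERT ({insert_cols}) VALUES ({insert_vals});"
    )
```

The generated statement would be executed with one parameter per `?`, keyed first by the key columns and then by the data columns.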
DevOps Engineer (Cloud & CI/CD)
Requirements engineer job in Charlotte, NC
Immediate need for a talented DevOps Engineer (Cloud & CI/CD). This is a 12-month opportunity with long-term potential, located in Charlotte, NC (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job Diva ID: 25-91664
Pay Range: $60 - $65 /hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, implement, and manage CI/CD pipelines to support continuous integration and continuous deployment.
Automate infrastructure provisioning using IaC tools such as Terraform, CloudFormation, or Ansible.
Manage cloud infrastructure on platforms like AWS, Azure, or GCP.
Monitor application performance using logging and monitoring tools (Prometheus, Grafana, ELK, Splunk).
Implement configuration management using Ansible, Puppet, or Chef.
Maintain and optimize containerization and orchestration (Docker, Kubernetes).
Ensure system security through compliance checks, patching, and vulnerability assessments.
Collaborate with development teams to troubleshoot issues across the entire stack.
Improve automation efficiency, deployment speed, and operational reliability.
Maintain documentation for systems, processes, and configurations.
Key Requirements and Technology Experience:
Key Skills: Docker, CI/CD, Cloud
Strong analytical, troubleshooting and problem solving skills
Jenkins or other CI system
Cloud engineering (cloud computing) experience with OpenShift, AWS, Azure, or GCP
Design and maintain CI/CD process and tools
Docker experience with Kubernetes container orchestration
Experience with YAML-based config management tools like Helm/Ansible
Setting up monitoring and alerts
Working with software developers and software engineers to ensure that development follows established processes and works as intended
Perform root cause analysis of production errors and resolve technical issues
Experience with Agile processes
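The Docker/Kubernetes and Helm items above largely amount to producing and templating manifests. A minimal sketch in plain Python (app name and image are placeholders; the structure is the standard `apps/v1` Deployment shape):

```python
def deployment_manifest(name, image, replicas=2, port=8080):
    """Build a standard apps/v1 Kubernetes Deployment as a dict,
    ready to serialize to YAML or JSON for kubectl/Helm tooling."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }]
                },
            },
        },
    }
```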
Our client is a leader in the banking industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Data Engineer - Charlotte, NC (2 Days Onsite) #25472
Requirements engineer job in Charlotte, NC
Our client is looking to hire a Data Engineer for a one year contract. This role will be onsite (Hybrid) Monday and Thursday in Charlotte, NC.
Must Haves:
2+ years working in Agile/SDLC delivery
Hands-on Python in production or pipeline work
Hands-on TensorFlow or PyTorch
Practical NLP experience
LLM / GenAI applied experience (At least one real build using LLMs: RAG, embeddings + vector DB, prompt workflows, evaluation, fine-tuning/LoRA, or deployment)
Data engineering fundamentals: clear ETL/ELT or data pipeline experience (lake/warehouse/API/streaming)
SQL + BI/reporting exposure (Can write real SQL and support dashboards/reports)
1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Bachelor's Degree
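The "embeddings + vector DB" requirement above boils down to similarity search over embedding vectors. A toy sketch of the retrieval step in a RAG build (the vectors and document names are tiny stand-ins for real model embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, k=1):
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(
        doc_vecs.items(),
        key=lambda kv: cosine(query_vec, kv[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in ranked[:k]]
```

In a production pipeline the brute-force scan would be replaced by a vector database index, but the ranking logic is the same.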
Data Conversion Engineer
Requirements engineer job in Charlotte, NC
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part in ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technology-related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn database structure, business logic and combine all knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Management of multiple projects and conversion implementations
Ability to proactively troubleshoot and solve problems with limited supervision
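A conversion mapper of the kind described above can be sketched as a small field-mapping layer. The incoming column names and target spec below are invented for illustration, not Paymentus' actual format:

```python
import csv
import io

# Hypothetical mapping from a client's export columns to target fields.
FIELD_MAP = {
    "AcctNum": "account_id",
    "CustName": "customer_name",
    "AmtDue": "amount_due",
}

def convert_row(row, field_map=FIELD_MAP):
    """Map one incoming record onto the target spec and normalize values."""
    out = {target: row.get(source, "").strip() for source, target in field_map.items()}
    out["amount_due"] = f"{float(out['amount_due'] or 0):.2f}"  # normalize currency
    return out

def convert_file(text, field_map=FIELD_MAP):
    """Convert a whole CSV export into a list of target-spec records."""
    reader = csv.DictReader(io.StringIO(text))
    return [convert_row(r, field_map) for r in reader]
```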
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication; ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front end web interfaces (HTML5, Javascript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
DevOps Engineer
Requirements engineer job in Charlotte, NC
DevOps Engineer | Direct Hire | Hybrid x2 day on-site | Charlotte, NC
Optomi, in partnership with a leading insurance organization, is seeking an accomplished Senior DevOps Engineer to join their team. This role offers the opportunity to leverage cloud technologies to accelerate value delivery to customers and drive innovation across the organization. The Senior DevOps Engineer will play a critical role in shaping and enhancing development practices by defining and implementing best practices, patterns, and automation strategies. This individual will lead efforts to design, improve, and sustain continuous integration and delivery pipelines while providing hands-on technical oversight to ensure projects align with organizational strategy, architecture, and methodologies. Acting as both a technical leader and trusted advisor, the Senior DevOps Engineer will bring thought leadership in modernization, technology advancement, and application lifecycle management, while also providing expert consulting, mentorship, and guidance to organizational leaders and development teams.
What the right candidate will enjoy!
Direct Hire full-time opportunity
Flexible hybrid schedule
Acting as a leader in modernization, technology advancement, and application lifecycle management
Driving efficient development practices and influencing best practices and patterns across teams
Experience of the right candidate:
Over 7 years of experience in applications development
More than 5 years of experience designing DevOps pipelines using tools and technologies including Azure DevOps, SonarQube, and YAML
In-depth knowledge of Azure services including but not limited to Azure Compute, Azure Storage, Azure Networking, Azure App Service, Logic Apps, VMSS, and Azure Security
Proficiency in Azure DevOps and building CI/CD pipelines, including Azure environment provisioning tasks
Experience with Infrastructure as Code (IaC) using tools such as Azure Resource Manager (ARM) templates, Terraform, Puppet, or Ansible
Experience with scripting languages such as Bicep, PowerShell, Bash, or Python
Demonstrated experience in cloud cost optimization, governance, and implementing FinOps practices
Strong leadership and influencing skills with the ability to drive change and foster a DevOps culture across teams
Experience designing and implementing disaster recovery strategies and high-availability architectures in cloud environments
Self-starter who is capable of working independently and making decisions when necessary/as applicable
Strong verbal, written, and interpersonal communication and the ability to communicate with audiences at varying technical levels
Preferred: Experience working in an Agile environment, preferably SAFe
Preferred: Azure certifications such as Azure Administrator Associate, Azure DevOps Engineer Expert
Preferred: Experience in Application Security / DevSecOps roles
Responsibilities of the right candidate:
Design and oversee the implementation of cloud-based architecture, networking, and containerization, utilizing Infrastructure-as-Code for automation and patterns
Lead the creation and deployment of CI/CD and other automation solutions, focusing on design patterns that emphasize reuse, scalability, performance, availability, and security
Develop and enhance process flows, release pipeline documentation, mockups, and other materials to convey technical details and their alignment with desired outcomes
Conduct technical evaluations of DevOps solutions, understand existing industry options, and design necessary custom system integrations
Serve as a strategic thinker, thought leader, internal consultant, advocate, mentor, and change agent for DevOps architecture within development teams
Measure and demonstrate the benefits and business value of DevOps improvements
Present innovative and complex solutions and ideas to participants at all levels, working both as a leader and an individual contributor
Identify customer, business, and technology needs through relationship building and communication with key stakeholders
Identify gaps and propose modernization opportunities that involve both process and technical/automation aspects of the SDLC
Debug and troubleshoot issues with new and existing CI/CD pipelines
Senior Data Engineer
Requirements engineer job in Charlotte, NC
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our Challenge:
We are looking for a skilled senior data engineer with comprehensive experience designing, developing, and maintaining scalable data solutions in the financial and regulatory domains. Proven expertise in leading end-to-end data architectures, integrating diverse data sources, and ensuring data quality and accuracy.
Additional Information
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $155k/year & benefits (see below).
Work location: New York City, NY (Hybrid, 3 days in a week)
The Role
Key skills:
Advanced proficiency in Python, SQL Server, Snowflake, Azure Databricks, and PySpark.
Strong understanding of relational databases, ETL processes, and data modeling.
Expertise in system design, architecture, and implementing robust data pipelines.
Hands-on experience with data validation, quality checks, and automation tools (Autosys, Control-M).
Skilled in Agile methodologies, SDLC processes, and CI/CD pipelines.
Effective communicator with the ability to collaborate with business analysts, users, and global teams.
Requirements:
Overall 10+ years of IT experience is required
Collaborate with business stakeholders to gather technical specifications and translate business requirements into technical solutions.
Develop and optimize data models and schemas for efficient data integration and analysis.
Lead application development involving Python, Pyspark, SQL, Snowflake and Databricks platforms.
Implement data validation procedures to maintain high data quality standards.
Strong experience in SQL (writing complex queries, joins, etc.)
Conduct comprehensive testing (UT, SIT, UAT) alongside business and testing teams.
Provide ongoing support, troubleshooting, and maintenance in production environments.
Contribute to architecture and design discussions to ensure scalable, maintainable data solutions.
Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
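The data-validation duties above amount to row-level quality gates before data lands in downstream models. A minimal, generic sketch (the column names and bounds are examples, not the client's rules):

```python
def validate_rows(rows, required, numeric_bounds=None):
    """Row-level data-quality checks: required fields present,
    numeric columns within configured bounds. Returns a list of
    (row_index, column, reason) failures."""
    numeric_bounds = numeric_bounds or {}
    failures = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                failures.append((i, col, "missing"))
        for col, (lo, hi) in numeric_bounds.items():
            v = row.get(col)
            if v is not None and not (lo <= v <= hi):
                failures.append((i, col, "out_of_range"))
    return failures
```

In a pipeline, a non-empty failure list would typically quarantine the batch or raise an alert rather than letting bad records load.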
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
DevOps Engineer
Requirements engineer job in Charlotte, NC
Charlotte, NC; 2 days onsite
Role Description:
In this DevOps role, you will be responsible for creating systems and tools that help the Software Development Teams build and deploy software, conduct automated testing, and provide level-3 production application support. In addition, you will be responsible for developing systems and tools that help the Software Development Teams provision, configure, and deploy virtual infrastructure, including HA components, load balancers, servers, workstations, databases, etc.
Role Objectives:
Participate in the full software development lifecycle (SDLC) and implement DevOps procedures to manage and support the CI/CD process, including automation of the build, test, and deploy pipelines and configuration management.
Employ best practices for designing automation processes and utilities that can be easily used by the development teams.
Design and develop a best practice release management process that employs separation of control and proper approvals.
Closely partner with the security and infrastructure teams to incorporate corporate standards into the CI/CD and provisioning processes.
Maintain the source control management system and integrate it with software build and deployment.
Own the build environment: resolve build issues and help coordinate complex software test environments and software releases.
Monitor application operational processes, escalating and facilitating failure resolution as appropriate.
Qualifications and Skills:
5+ years of professional experience working with the full software development lifecycle and designing/developing best-practice CI/CD pipelines: GitHub Actions, Ansible (IaC), Terraform/CloudFormation, K8s, test automation, static code analysis, Artifactory, and release management processes.
Proficient in at least two of the following: Windows batch/PowerShell, Bash, Python.
Knowledgeable about networking (TCP, UDP, ICMP, ARP, DNS, TLS, HTTP, SSH, NAT, firewalls, load balancing, etc.).
Strong experience managing and supporting Windows/Linux servers.
Good understanding of deployment of various platforms such as web/REST API, messaging bus/queue, application services, Microservices and Cloud Serverless components/managed platform.
Experience working with relational databases/SQL and NoSQL; other database technologies are a plus.
A curiosity concerning technology and the ability to learn new systems and tools quickly.
Excellent communication skills and the ability to work in a collaborative environment.
Desired Skills:
Experience with cloud solutions, e.g., Azure (VNet, Private Link, Blob Storage, Azure SQL, Web App, Data Factory, Client, AKS, ARO, SQL Server/Cosmos) or AWS (VPC, EC2, S3, Route53, ECS, EKS, RDS, ALB/NLB).
Experience with code-quality tooling (SonarQube, GitHub Enterprise Advanced Security/CodeQL, JFrog Artifactory + Xray).
Experience with containers and orchestration technologies (Docker, K8s, OpenShift).
Experience with application telemetry, monitoring and alerting solutions (Splunk, LogicMonitor, AWS CloudWatch, Azure Insight or similar).
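As a small illustration of the build/release tooling this role involves, here is a minimal version-bump helper of the kind such a team might maintain. This is a hedged sketch under assumed conventions (semver-only, no repo or CI integration); the function name and behavior are illustrative, not part of the posting.

```python
import re

def bump(version: str, part: str = "patch") -> str:
    """Return the next semantic version; raises on malformed input.

    Illustrative release-tooling helper: 'part' selects which semver
    component to increment (major/minor/patch).
    """
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", version)
    if not m:
        raise ValueError(f"not a semver string: {version!r}")
    major, minor, patch = map(int, m.groups())
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(bump("1.4.2"))           # → 1.4.3
print(bump("1.4.2", "minor"))  # → 1.5.0
```

In practice a helper like this would be wired into the release pipeline (tagging, artifact naming) rather than run standalone.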
Data Engineer
Senior Data Engineer
Charlotte - Hybrid (4 days onsite, 1 day remote)
6-month contract with likely extension
2 openings - $40-65/hr based on experience level
This position is for a Senior AWS Data Engineer working on a Master Data Management (MDM) project. The goal of the project is to create a single, trusted view of business data by cleaning up duplicate and inconsistent information from multiple sources. You'll be building scalable data pipelines on AWS, improving data quality, and working on advanced features like entity resolution and machine learning-assisted matching. It's a hands-on role where you'll own production-grade pipelines and work with large datasets. If you enjoy solving complex data challenges and making systems more efficient, this is a great fit.
Main Responsibilities:
• Build and maintain data pipelines on AWS
• Develop ETL jobs using AWS Glue (PySpark) and Amazon EMR
• Orchestrate workflows using Apache Airflow
• Support full and incremental data processing
• Implement data matching, deduplication, and entity resolution
• Monitor, troubleshoot, and support production pipelines
• Partner with analytics and business teams
Must Haves
• 6+ years of AWS data engineering experience
• Python and PySpark development
• Hands-on with AWS Glue and Amazon EMR
• Experience using Apache Airflow
• Strong SQL skills
• Experience working with large datasets
• Familiarity with ML concepts for data quality or matching
Nice to have
• Experience with entity resolution, fuzzy matching, or deduplication
• Experience with AWS Entity Resolution
• Experience in Business MDM programs
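The deduplication and entity-resolution work this MDM role describes can be sketched roughly as follows. This is a toy, pure-Python illustration, not the project's actual approach: the record fields, the 0.6 threshold, and the greedy first-seen-wins merge are all assumptions; production fuzzy matching would typically use blocking plus pairwise scoring in PySpark.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def deduplicate(records, threshold=0.6):
    """Greedy entity resolution: keep one 'golden' record per near-duplicate group.

    Each incoming record is compared against the golden records collected so
    far; if a name match clears the (illustrative) threshold, it is treated
    as the same entity and dropped.
    """
    golden = []
    for rec in records:
        match = next(
            (g for g in golden if similarity(g["name"], rec["name"]) >= threshold),
            None,
        )
        if match is None:
            golden.append(dict(rec))
    return golden

customers = [
    {"name": "Acme Corporation"},
    {"name": "ACME Corp."},   # near-duplicate of the first record
    {"name": "Globex Inc"},
]
print(len(deduplicate(customers)))  # → 2
```

The posting's "ML-assisted matching" would replace the `similarity` function here with a learned pairwise model; the surrounding merge logic stays conceptually the same.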
Data Engineer
W2 ONLY - NO CORP TO CORP - CONTRACT TO HIRE - NO VISA SPONSOR/TRANSFER - NO 3RD PARTY AGENCY CANDIDATES
Data Engineer
Serve as subject matter expert and/or technical lead for large-scale data products.
Drive end-to-end solution delivery across multiple platforms and technologies, leveraging ELT solutions to acquire, integrate, and operationalize data.
Partner with architects and stakeholders to define and implement pipeline and data product architecture, ensuring integrity and scalability.
Communicate risks and trade-offs of technology solutions to senior leaders, translating technical concepts for business audiences.
Build and enhance data pipelines using cloud-based architectures.
Design simplified data models for complex business problems.
Champion Data Engineering best practices across teams, implementing leading big data methodologies (AWS, Hadoop/EMR, Spark, Snowflake, Talend, Informatica) in hybrid cloud/on-prem environments.
Operate independently while fostering a collaborative, transformation-focused mindset.
Work effectively in a lean, fast-paced organization, leveraging Scaled Agile principles.
Promote code quality management, FinOps principles, automated testing, and environment management practices to deliver incremental customer value.
Qualifications
5+ years of data engineering experience.
2+ years developing and operating production workloads in cloud infrastructure.
Bachelor's degree in Computer Science, Data Science, Information Technology, or related field.
Hands-on experience with Snowflake (including SnowSQL, Snowpipe).
Expert-level skills in AWS services, Snowflake, Python, Spark (certifications are a plus).
Proficiency in ETL tools such as Talend and Informatica.
Strong knowledge of Data Warehousing (modeling, mapping, batch and real-time pipelines).
Experience with DataOps tools (GitHub, Jenkins, UDeploy).
Familiarity with P&C Commercial Lines business.
Knowledge of legacy tech stack: Oracle Database, PL/SQL, Autosys, Hadoop, stored procedures, Shell scripting.
Experience using Agile tools like Rally.
Excellent written and verbal communication skills to interact effectively with technical and non-technical stakeholders.
Lead Data Engineer
We are looking for a Lead Data Engineer with strong communication skills and hands-on experience across Snowflake, AWS, Python, PySpark, MongoDB, and IICS. This role requires a technical leader who can guide a small engineering team while also building and optimizing scalable data pipelines in a cloud environment.
Long-term Contract
Location: Charlotte, NC
4 Days Onsite
***Interviews are actively happening***
***If you are interested in this role, please share your updated resume to proceed further***
Responsibilities
Lead and mentor a team of data engineers in day-to-day project delivery
Design, build, and optimize ETL/ELT pipelines using AWS Glue, Python, PySpark, and Snowflake
Work with business and technical stakeholders, deliver updates, and ensure smooth communication
Develop and maintain data workflows using IICS (Informatica Intelligent Cloud Services)
Manage data ingestion from multiple sources, including MongoDB and AWS services
Perform data modeling, SQL scripting, and performance tuning in Snowflake
Support deployment, monitoring, and troubleshooting of data pipelines
Ensure best practices for code quality, documentation, and cloud data architecture
AWS Data Engineer (Only W2)
Title: AWS Data Engineer
Experience: 10 years
Must Have Skills:
• Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services.
• Experience in Snowflake and dbt (Data Build Tool).
• Expertise in Node.js and Python.
• Expertise in Informatica, Power BI, databases, and Cognos.
Nice to Have Skills:
Detailed Job Description:
• Proven experience in leading teams across locations.
• Knowledge of DevOps processes, Infrastructure as Code and their purpose.
• Good understanding of data warehouses, their purpose, and implementation
• Good communication skills.
Kindly share your resume at ******************
Big Data Engineer
Hello,
This is Shivam from Centraprise Global, working as a Talent Acquisition Lead.
I came across your profile in our resume database and wanted to reach out regarding a job opportunity. If interested, please reply with your updated resume, contact details, and the best time to discuss the opportunity.
Job Title: Hadoop // Big Data Engineer
Location: Charlotte, NC // New York City, NY (onsite)
Duration: Full Time
Job Description
Must Have Technical/Functional Skills
Primary Skills: Hadoop ecosystem (HDFS, Hive, Spark), PySpark, Python, Apache Kafka
Experience: Minimum 9 years
Roles & Responsibilities
Architectural Leadership:
Define end-to-end architecture for data platforms, streaming systems, and web applications.
Ensure alignment with enterprise standards, security, and compliance requirements.
Evaluate emerging technologies and recommend adoption strategies.
Data Engineering:
Design and implement data ingestion, transformation, and processing pipelines using Hadoop, PySpark, and related tools.
Optimize ETL workflows for large-scale datasets and real-time streaming.
Integrate Apache Kafka for event-driven architectures and messaging.
Application Development:
Build and maintain backend services using Python and microservices architecture.
Develop responsive, dynamic front-end applications using Angular.
Implement RESTful APIs and ensure seamless integration between components.
Collaboration & Leadership:
Work closely with product owners, business analysts, and DevOps teams.
Mentor junior developers and data engineers.
Participate in agile ceremonies, code reviews, and design discussions.
Required Skills & Qualifications:
Technical Expertise:
Strong experience with Hadoop ecosystem (HDFS, Hive, Spark).
Proficiency in PySpark for distributed data processing.
Advanced programming skills in Python.
Hands-on experience with Apache Kafka for real-time streaming.
Frontend development using Angular (TypeScript, HTML, CSS).
Architectural Skills:
Expertise in designing scalable, secure, and high-performance systems.
Familiarity with microservices, API design, and cloud-native architectures.
Additional Skills:
Knowledge of CI/CD pipelines, containerization (Docker/Kubernetes).
Exposure to cloud platforms (AWS, Azure, GCP).
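The Kafka-to-Spark ingestion and transformation flow described in the responsibilities above can be sketched in plain Python. This keeps the example self-contained; in the actual role the raw records would arrive on a Kafka topic and the aggregation would run as a PySpark job, and the event fields used here are illustrative assumptions.

```python
import json
from collections import Counter

# Toy stand-in for a Kafka topic: raw JSON event records, one per message.
RAW_EVENTS = [
    '{"user": "a", "action": "click"}',
    '{"user": "b", "action": "view"}',
    '{"user": "a", "action": "click"}',
    'not-json',  # malformed records are dropped (a dead-letter queue in prod)
]

def ingest(lines):
    """Parse raw JSON records, skipping malformed ones."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(events):
    """Aggregate: count actions per user (a groupBy/count in PySpark)."""
    return Counter((e["user"], e["action"]) for e in events)

counts = transform(ingest(RAW_EVENTS))
print(counts[("a", "click")])  # → 2
```

The same ingest/transform split maps onto the posting's stack: Kafka consumers feed ingestion, and the transform stage becomes a distributed PySpark aggregation.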
Thanks & Regards,
Shivam Gupta | Talent Acquisition Lead
Desk: ************ Ext. 732
Deployment Engineer
We are seeking a highly skilled and customer-focused IT Support Specialist to join our team. In this role, you will provide hands-on technical support, contribute to key technology initiatives, and ensure an exceptional support experience for users across the organization. The ideal candidate thrives in fast-paced, high-profile environments and brings both strong technical acumen and outstanding communication skills.
Key Responsibilities
Team Collaboration
Work closely with IT peers and business stakeholders to provide a cohesive and seamless support experience.
Share knowledge and contribute to internal documentation, playbooks, and training materials.
Participate in team meetings and actively support knowledge-sharing culture.
Technical Expertise
Hands-on experience supporting Windows 11.
Skilled in Microsoft 365 (M365) services and applications support.
Proficient in Active Directory (AD) administration.
Experience with System Center Configuration Manager (SCCM).
Familiarity with Azure Active Directory (Azure AD).
Experience managing mobile devices through Microsoft Intune.
Working knowledge of iOS device management and support.
Understanding of ITIL practices and frameworks.
Non-Technical Expertise
Proven track record supporting users in fast-paced, high-profile environments.
Excellent communication skills, with the ability to interact effectively and professionally at all levels, including with high-visibility individuals.
Strong analytical and problem-solving skills with a focus on root cause analysis and continuous improvement.
AWS Data Engineer
We are looking for a skilled and experienced AWS Data Engineer with 10+ years of experience to join our team. This role requires hands-on expertise in AWS serverless technologies, Big Data platforms, and automation tools. The ideal candidate will be responsible for designing scalable data pipelines, managing cloud infrastructure, and enabling secure, reliable data operations across marketing and analytics platforms.
Key Responsibilities:
Design, build, and deploy automated CI/CD pipelines for data and application workflows.
Analyze and enhance existing data pipelines for performance and scalability.
Develop semantic data models to support activation and analytical use cases.
Document data structures and metadata using Collibra or similar tools.
Ensure high data quality, availability, and integrity across platforms.
Apply SRE and DevSecOps principles to improve system reliability and security.
Manage security operations within AWS cloud environments.
Configure and automate applications on AWS instances.
Oversee all aspects of infrastructure management, including provisioning and monitoring.
Schedule and automate jobs using tools like Step Functions, Lambda, Glue, etc.
Required Skills & Experience:
Hands-on experience with AWS serverless technologies: Lambda, Glue, Step Functions, S3, RDS, DynamoDB, Athena, CloudFormation, CloudWatch Logs.
Proficiency in Confluent Kafka, Splunk, and Ansible.
Strong command of SQL and scripting languages: Python, R, Spark.
Familiarity with data formats: JSON, XML, Parquet, Avro.
Experience in Big Data engineering and cloud-native data platforms.
Functional knowledge of marketing platforms such as Adobe, Salesforce Marketing Cloud, and Unica/Interact (nice to have).
Preferred Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
AWS, Big Data, or DevOps certifications are a plus.
Experience working in hybrid cloud environments and agile teams.
Life at Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please get in touch with your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.