Senior Data Scientist
Senior data scientist job in McLean, VA
Purpose:
As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution.
Key Result Areas and Activities:
Design, build, and deploy AI/ML solutions using modern cloud and data platforms.
Lead data science projects across industries, ensuring alignment with business goals.
Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases.
Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark.
Communicate findings effectively through data visualizations and stakeholder presentations.
Drive adoption of MLOps practices for scalable and reliable model deployment.
Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing.
Roles & Responsibilities
Essential Skills
5+ years of experience in applied AI and machine/deep learning.
Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib.
Proficiency in Python and SQL for model development and data manipulation.
Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI.
Familiarity with cloud platforms: AWS, Azure, and GCP.
Solid foundation in advanced statistical methods and probabilistic analysis.
Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management.
Experience with visualization tools like Tableau, Power BI, or similar.
Familiarity with ML workflow orchestration and MLOps practices.
Desirable Skills
Experience with LLMs (Large Language Models) and agentic AI workflows.
Familiarity with modern data platforms like Snowflake.
Exposure to real-time data processing in cloud-native environments.
Contributions to open-source AI projects or publications in data science communities.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field.
Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation, and interpersonal skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Senior Data Scientist
Senior data scientist job in Charlotte, NC
About Us:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************
Role: Senior Data Scientist - Generative AI & Solution Architecture
Location: Charlotte, NC
Work Mode - Hybrid (2-3 days from client office)
Experience: 10+ Years
Job Description
We are looking for a Senior Data Scientist to lead the design and implementation of a Generative AI-driven Contract Risk Reporting System. This role involves building advanced RAG pipelines, a reporting engine, and an interactive chatbot. The ideal candidate combines deep technical expertise, AI/ML fundamentals, and the ability to translate legal contract language into actionable solution requirements, while engaging directly with clients.
Key Responsibilities:
Architect and deliver AI solutions for contract risk analysis and reporting.
Design and implement LLM-based RAG systems, reporting dashboards, and conversational interfaces.
Translate legal domain requirements into technical specifications.
Collaborate with clients for requirement gathering, solution validation, and presentations.
Ensure MLOps best practices, model monitoring, and Model Risk Management (MRM) compliance.
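To make the RAG responsibilities above concrete, here is a minimal sketch of the retrieval-and-prompt-assembly step. The bag-of-words similarity, the sample contract clauses, and the function names are all illustrative stand-ins; a production system would use dense embeddings, a vector database, and an LLM call rather than this toy scorer.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use a dense embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks against the query and keep the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # Assemble a grounded prompt: retrieved context first, then the question.
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

contracts = [
    "Termination clause: either party may exit with 30 days written notice.",
    "Indemnification: supplier holds client harmless for third-party claims.",
    "Payment terms: invoices are due net 45 from receipt.",
]
print(build_prompt("What is the notice period for termination?", contracts))
```

The retrieved termination clause lands at the top of the context because it shares the most query terms, which is the same ranking idea a vector store applies at scale.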
Required Skills:
Expertise in Generative AI, LLMs, RAG architectures, and NLP techniques.
Strong foundation in machine learning algorithms, model evaluation, and feature engineering.
Hands-on experience with MLOps pipelines, model governance, and risk controls.
Proficiency in Python, LangChain, Vector Databases, and Cloud platforms (AWS/Azure/GCP).
Ability to interpret legal contract language and map it to technical solutions.
Excellent communication, client engagement, and solution design skills.
Preferred:
Experience in legal tech, contract analytics, or risk management solutions.
Familiarity with prompt engineering, domain-specific fine-tuning, and LLM optimization.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Data scientist
Senior data scientist job in Reston, VA
Job title: Data scientist
Full-time
About Smart IT Frame:
At Smart IT Frame, we connect top talent with leading organizations across the USA. With over a decade of staffing excellence, we specialize in IT, healthcare, and professional roles, empowering both clients and candidates to grow together.
Note:
• In-person interview
Must Have:
• Data science
• Python
• SQL
• MLOps
• Risk Modelling
📩 Apply today or share profiles at ****************************
Principal Data Scientist with Gen AI
Senior data scientist job in McLean, VA
Title: Principal Data Scientist with Gen AI
Contract: W2
Exp: 10+
Duration: Long Term
Interview Mode: In-Person interview
Call Notes:
Looking for a Principal Data Scientist with a strong focus on Generative AI (GenAI) and expertise in Machine Learning who has transitioned into GenAI. Need someone with solid experience in RAG, Python (Jupyter), other software tooling, using agents in workflows, and a strong understanding of data.
Requires advanced proficiency in Prompt Engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, Gen AI patterns, evaluation frameworks, guardrails, data curation, and AWS cloud deployments.
Highly preferred: someone who has built AI agents, MCP, A2A, and Graph RAG solutions, and has deployed Gen AI applications to production.
Top Skills:
Machine Learning & Deep Learning - Required
GenAI - Required
Python - Required
RAG and/or Graph RAG - Required
MCP (Model Context Protocol) and A2A (Agent-to-Agent) is highly preferred
Job Description:
We are seeking a highly experienced **Principal Gen AI Scientist** with a strong focus on **Generative AI (GenAI)** to lead the design and development of cutting-edge AI Agents, Agentic Workflows and Gen AI Applications that solve complex business problems. This role requires advanced proficiency in Prompt Engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, Gen AI Patterns, Evaluation Frameworks, Guardrails, data curation, and AWS cloud deployments. You will serve as a hands-on Gen AI (data) scientist and critical thought leader, working alongside full stack developers, UX designers, product managers and data engineers to shape and implement enterprise-grade Gen AI solutions.
Key Responsibilities:
* Architect and implement scalable AI Agents, Agentic Workflows and GenAI applications to address diverse and complex business use cases.
* Develop, fine-tune, and optimize lightweight LLMs; lead the evaluation and adaptation of models such as Claude (Anthropic), Azure OpenAI, and open-source alternatives.
* Design and deploy Retrieval-Augmented Generation (RAG) and Graph RAG systems using vector databases and knowledge bases.
* Curate enterprise data using connectors integrated with AWS Bedrock's Knowledge Base/Elastic.
* Implement solutions leveraging MCP (Model Context Protocol) and A2A (Agent-to-Agent) communication.
* Build and maintain Jupyter-based notebooks using platforms like SageMaker and MLFlow/Kubeflow on Kubernetes (EKS).
* Collaborate with cross-functional teams of UI and microservice engineers, designers, and data engineers to build full-stack Gen AI experiences.
* Integrate GenAI solutions with enterprise platforms via API-based methods and GenAI standardized patterns.
* Establish and enforce validation procedures with Evaluation Frameworks, bias mitigation, safety protocols, and guardrails for production-ready deployment.
* Design & build robust ingestion pipelines that extract, chunk, enrich, and anonymize data from PDFs, video, and audio sources for use in LLM-powered workflows, leveraging best practices like semantic chunking and privacy controls.
* Orchestrate multimodal pipelines using scalable frameworks (e.g., Apache Spark, PySpark) for automated ETL/ELT workflows appropriate for unstructured media.
* Implement embedding pipelines: map media content to vector representations using embedding models, and integrate with vector stores (AWS KnowledgeBase/Elastic/Mongo Atlas) to support RAG architectures.
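The ingestion responsibilities above (chunking and anonymization ahead of an LLM workflow) can be sketched in a few lines. This is a simplified stand-in: fixed-size word windows substitute for semantic chunking, and a single email regex stands in for a full PII-redaction layer.

```python
import re

# Redaction pattern for email addresses; a real pipeline would cover more PII types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str) -> str:
    """Redact email addresses before the text reaches an LLM."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Fixed-size word chunks with overlap (a crude stand-in for semantic chunking)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

doc = anonymize("Escalations go to ops.lead@example.com per the runbook. " * 30)
pieces = chunk(doc)
print(len(pieces), "chunks; first starts:", pieces[0][:40])
```

Overlapping windows keep sentences that straddle a chunk boundary retrievable from at least one chunk, which is why most RAG ingestion pipelines chunk with overlap.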
Required Qualifications:
* 10+ years of experience in AI/ML, with 3+ years in applied GenAI or LLM-based solutions.
* Deep expertise in prompt engineering, fine-tuning, RAG, GraphRAG, vector databases (e.g., AWS KnowledgeBase / Elastic), and multi-modal models.
* Proven experience with cloud-native AI development (AWS SageMaker, Bedrock, MLFlow on EKS).
* Strong programming skills in Python and ML libraries (Transformers, LangChain, etc.).
* Deep understanding of Gen AI system patterns and architectural best practices, Evaluation Frameworks
* Demonstrated ability to work in cross-functional agile teams.
* A GitHub code repository link is required for each candidate. Please thoroughly vet candidates.
Preferred Qualifications:
* Published contributions or patents in AI/ML/LLM domains.
* Hands-on experience with enterprise AI governance and ethical deployment frameworks.
* Familiarity with CI/CD practices for ML Ops and scalable inference APIs.
Data Scientist
Senior data scientist job in Garner, NC
Accentuate Staffing is working with a client that is hiring an experienced Data Scientist to work on predictive analytics and join their data and AI team. This role combines advanced machine learning research with strategic business analytics to create scalable predictive solutions that drive efficiency and smarter decision-making across the enterprise.
The ideal candidate will bring a blend of technical expertise in machine learning, cloud platforms, and data engineering, alongside strong business acumen. You'll build and refine predictive models that help the company forecast sales, understand demand patterns, and make smarter operational decisions - from production planning to staffing and supply chain management.
Responsibilities:
Design and implement advanced predictive and machine learning models to support sales forecasting, demand planning, and strategic decision-making.
Build and maintain scalable ETL/ELT data pipelines that integrate structured and unstructured data from multiple business sources.
Experiment with AI techniques such as NLP, computer vision, and generative models to explore innovative applications across the organization.
Partner with business and IT teams to define analytics requirements, operationalize models, and integrate outputs into dashboards and reporting platforms.
Develop and manage self-service analytics dashboards using Power BI, SAP Analytics Cloud, or similar tools to deliver actionable insights.
Ensure data integrity, quality, and governance across predictive systems.
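The sales-forecasting responsibility above can be illustrated with the simplest possible baseline: a trailing moving average rolled forward. This is a hypothetical sketch, not the client's actual model; production demand forecasting would use seasonality-aware methods, but a baseline like this is the usual yardstick they must beat.

```python
def moving_average_forecast(history: list[float], window: int = 3,
                            horizon: int = 2) -> list[float]:
    """Forecast each future period as the mean of the trailing window,
    feeding each prediction back in for multi-step forecasts."""
    series = list(history)
    out = []
    for _ in range(horizon):
        pred = sum(series[-window:]) / window
        out.append(pred)
        series.append(pred)  # recursive multi-step: predictions extend the series
    return out

print(moving_average_forecast([100, 110, 120], window=3, horizon=2))
```

A reasonable design check before adopting any complex demand model is confirming it outperforms this baseline on held-out periods.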
Qualifications:
Degree in Data Science, Computer Science, Statistics, or a related field.
Experience in predictive analytics, data science, or AI engineering within a business setting.
Proficiency in Python, R, SQL, and experience with cloud-based ML platforms such as Azure ML, AWS, or GCP.
Hands-on experience with data pipeline technologies (Azure Data Factory, Spark, Hadoop) and business intelligence tools (Power BI, Tableau, or SAP Analytics Cloud).
Strong understanding of machine learning model lifecycle management, from design through deployment and monitoring.
Exceptional communication and stakeholder engagement skills, with the ability to translate technical work into business value.
Senior Data Scientist
Senior data scientist job in McLean, VA
Locals Only | In-Person Interview
Job Title: Data Scientist Specialist
We are seeking a highly experienced Principal Gen AI Scientist with a strong focus on Generative AI (GenAI) to lead the design and development of cutting-edge AI Agents, Agentic Workflows and Gen AI Applications that solve complex business problems. This role requires advanced proficiency in Prompt Engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, Gen AI Patterns, Evaluation Frameworks, Guardrails, data curation, and AWS cloud deployments. You will serve as a hands-on Gen AI (data) scientist and critical thought leader, working alongside full stack developers, UX designers, product managers and data engineers to shape and implement enterprise-grade Gen AI solutions.
Responsibilities:
Architect and implement scalable AI Agents, Agentic Workflows and GenAI applications to address diverse and complex business use cases.
Develop, fine-tune, and optimize lightweight LLMs; lead the evaluation and adaptation of models such as Claude (Anthropic), Azure OpenAI, and open-source alternatives.
Design and deploy Retrieval-Augmented Generation (RAG) and Graph RAG systems using vector databases and knowledge bases.
Curate enterprise data using connectors integrated with AWS Bedrock's Knowledge Base/Elastic.
Implement solutions leveraging MCP (Model Context Protocol) and A2A (Agent-to-Agent) communication.
Build and maintain Jupyter-based notebooks using platforms like AWS SageMaker and MLFlow/Kubeflow on Kubernetes (EKS).
Collaborate with cross-functional teams of UI and microservice engineers, designers, and data engineers to build full-stack Gen AI experiences.
Integrate GenAI solutions with enterprise platforms via API-based methods and GenAI standardized patterns.
Establish and enforce validation procedures with Evaluation Frameworks, bias mitigation, safety protocols, and guardrails for production-ready deployment.
Design & build robust ingestion pipelines that extract, chunk, enrich, and anonymize data from PDFs, video, and audio sources for use in LLM-powered workflows, leveraging best practices like semantic chunking and privacy controls.
Orchestrate multimodal pipelines using scalable frameworks (e.g., Apache Spark, PySpark) for automated ETL/ELT workflows appropriate for unstructured media.
Implement embedding pipelines: map media content to vector representations using embedding models, and integrate with vector stores (AWS Knowledge Base/Elastic/Mongo Atlas) to support RAG architectures.
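The guardrail and evaluation-framework responsibilities above can be sketched minimally as a denylist filter plus a tiny labeled eval harness. The blocked terms, sample responses, and function names are hypothetical; production guardrails typically combine classifiers, PII detectors, and policy models rather than substring checks.

```python
# Hypothetical denylist of terms a response must never leak.
BLOCKED = {"ssn", "password", "account number"}

def guardrail(response: str) -> tuple[bool, str]:
    """Return (allowed, reason); block responses containing denylisted terms."""
    lowered = response.lower()
    for term in BLOCKED:
        if term in lowered:
            return False, f"blocked: contains '{term}'"
    return True, "ok"

def evaluate(cases: list[tuple[str, bool]]) -> float:
    """Fraction of labeled responses where the guardrail decision matches the label."""
    hits = sum(guardrail(resp)[0] == expected for resp, expected in cases)
    return hits / len(cases)

cases = [
    ("Your balance is available in the app.", True),
    ("The customer's SSN is 123-45-6789.", False),
]
print(evaluate(cases))
```

Keeping a labeled case set like `cases` under version control is the core of an evaluation framework: every guardrail or prompt change reruns against it before deployment.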
Qualifications:
Experience in AI/ML, including applied GenAI or LLM-based solutions.
Deep expertise in prompt engineering, fine-tuning, RAG, GraphRAG, vector databases (e.g., AWS Knowledge Base / Elastic), and multi-modal models.
Proven experience with cloud-native AI development (AWS SageMaker, Amazon Bedrock, MLFlow on EKS).
Strong programming skills in Python and ML libraries (Transformers, LangChain, etc.).
Deep understanding of Gen AI system patterns and architectural best practices, Evaluation Frameworks.
Demonstrated ability to work in cross-functional agile teams.
Data Scientist Specialist
Senior data scientist job in McLean, VA
Job Title: Data Scientist Specialist
Duration: 45 minutes | 120 minutes
Interview Type: MS Teams - Video Mandatory| 2nd round on-site
Call notes:
Flexible on hands-on experience
10 years of experience not required
ML and Gen AI
2 years of Gen AI experience
Gen AI development
This role sits between Software Engineer and Data Scientist
Data Engineer backgrounds will be considered
BS/MS in AI or Data Science preferred
PhD not required
Data Scientist - Full Time Only
Senior data scientist job in Charlotte, NC
The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers.
Responsibilities
Analyze raw data: assessing quality, cleansing, structuring for downstream processing
Design accurate and scalable prediction algorithms
Collaborate with engineering team to bring analytical prototypes to production
Generate actionable insights for business improvements
Qualifications
Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.)
At least 1-2 years of experience in quantitative analytics or data modeling
Deep understanding of predictive modeling, machine-learning, clustering and classification techniques, and algorithms
Fluency in a programming language (Python, C, C++, Java, SQL)
Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
Snowflake Data Scientist (Need local to Charlotte, NC)
Senior data scientist job in Charlotte, NC
Job Title: Senior Snowflake Data Scientist
Long term Contract
For data scientists, additional required skills: AI/ML, RAG and LLM models, and Agentic AI experience.
The Senior Snowflake Data Scientist will lead the development, deployment, and operationalization of machine learning and statistical models that solve complex business problems and drive strategic decision-making. This role requires an expert blend of statistical rigor, advanced programming, and deep knowledge of leveraging Snowflake's ecosystem (e.g., Snowpark, Streamlit, external functions) for high-performance, in-warehouse data science.
Key Responsibilities
1. Advanced Modeling & Analysis
Model Development: Design, build, train, and validate sophisticated machine learning (ML) and statistical models (e.g., predictive, prescriptive, clustering, forecasting) to address key business challenges (e.g., customer churn, sales forecasting, risk modeling).
Feature Engineering: Utilize advanced SQL and Python/Snowpark to perform large-scale feature engineering, data transformation, and preparation directly within Snowflake, ensuring high data quality and low latency for modeling.
A/B Testing & Causal Inference: Design and analyze experiments (A/B tests) and employ causal inference techniques to measure the business impact of product features, strategies, and model outputs.
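The A/B testing responsibility above boils down to a two-proportion z-test on conversion rates. The sketch below uses only the standard library; the sample counts are made up for illustration, and a real analysis would also check power and multiple-comparison corrections.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10% vs 13% conversion on 2,000 users per arm.
z, p = two_proportion_z(200, 2000, 260, 2000)
print(f"z={z:.2f}, p={p:.4f}")
```

With these made-up counts the lift is significant at the 1% level; the same routine would run as a Snowflake Python UDF so the test executes in-warehouse.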
2. MLOps & Production Deployment
Operationalization: Lead the process of deploying trained models into production environments, utilizing Snowpark, Snowflake UDFs/UDTFs, and external functions for scalable inference and real-time scoring.
Pipeline Automation: Collaborate with Data Engineering to integrate ML pipelines into CI/CD workflows, ensuring models are automatically retrained and redeployed using tools like Airflow or orchestration platforms.
Monitoring: Establish and maintain robust monitoring for model performance (drift, bias, accuracy) and operational health within the Snowflake environment.
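The drift-monitoring responsibility above is commonly implemented with the Population Stability Index (PSI) between training-time and live score distributions. Here is a self-contained sketch; the bin count and the usual alert threshold (flag above roughly 0.2) are conventions, not requirements of this role.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and live scores.
    Bins are derived from the baseline's range; higher PSI means more drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def dist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        # Floor each share at a tiny value to avoid log(0) on empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]
live = [min(x + 0.3, 0.99) for x in baseline]  # simulated upward score drift
print(f"PSI={psi(baseline, live):.3f}")
```

In the Snowflake context described above, the same computation could run over score tables in SQL or Snowpark on a schedule, with alerts when PSI crosses the chosen threshold.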
3. Data Visualization & Storytelling
Insight Generation: Conduct deep-dive exploratory data analysis (EDA) using complex Snowflake SQL to uncover hidden patterns, opportunities, and risks.
Visualization & Communication: Effectively communicate complex analytical findings, model outputs, and recommendations to technical and non-technical stakeholders and senior leadership using compelling data storytelling and visualization tools (e.g., Tableau, Power BI, or Snowflake Streamlit).
4. Platform & Technical Leadership
Best Practices: Define and promote best practices for statistical rigor, ML coding standards, and efficient data processing within the Snowflake ecosystem.
Mentorship: Provide technical guidance and mentorship to junior data scientists and analysts on modeling techniques and leveraging Snowflake's data science features.
Innovation: Stay current with the latest features of the Snowflake Data Cloud (e.g., Generative AI/LLMs, Unistore, Data Sharing) and propose innovative ways to leverage them for business value.
Minimum Qualifications
MS or Ph.D. in a quantitative discipline (e.g., Statistics, Computer Science, Engineering, Economics, or Mathematics).
7+ years of progressive experience in Data Science, with at least 3+ years of hands-on experience building and deploying ML solutions in a cloud data warehouse environment, preferably Snowflake.
Expert proficiency in Python (including packages like scikit-learn, NumPy, Pandas) and writing scalable code for data processing.
Expert-level command of Advanced SQL for complex data manipulation and feature engineering.
Proven experience with Machine Learning algorithms and statistical modeling techniques.
Strong understanding of MLOps principles for model lifecycle management.
Preferred Skills & Certifications
Snowflake SnowPro Advanced: Data Scientist Certification.
Hands-on experience developing solutions using Snowpark (Python/Scala).
Experience building data apps/dashboards using Snowflake Streamlit.
Familiarity with cloud platforms and services (AWS Sagemaker, Azure ML, or GCP Vertex AI) integrated with Snowflake.
Experience with workflow orchestration tools (e.g., Apache Airflow, dbt).
Data Scientist - ML, Python
Senior data scientist job in McLean, VA
10+ years of experience required in Information Technology.
• Python Programming: At least 5 years of hands-on experience with Python, particularly in frameworks like FastAPI, Django, and Flask, and experience using AI frameworks.
• Access Control Expertise: Strong understanding of access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC).
• API and Connector Development: Experience in developing API connectors using Python for extracting and managing access control data from platforms like Azure, SharePoint, Java, .NET, WordPress, etc.
• AI and Machine Learning: Hands-on experience integrating AI into applications for automating tasks such as access control reviews and identifying anomalies.
• Cloud and Microsoft Technologies: Proficiency with Azure services, Microsoft Graph API, and experience integrating Python applications with Azure for access control reviews and reporting.
• Reporting and Visualization: Experience using reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports related to security and access control metrics.
• Communication Skills: Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time.
• PlainID: Experience or familiarity with PlainID platforms for identity and access management.
• Azure OpenAI: Familiarity with Azure OpenAI technologies and their application in access control and security workflows.
• Power BI: Experience with Microsoft Power BI for data visualization and reporting.
• Agile Methodologies: Experience working in Agile environments and familiarity with Scrum methodologies for delivering security solutions.
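The RBAC model named in the skills above is small enough to sketch directly: users hold roles, roles grant permissions, and an access check is a lookup across the two mappings. The users, roles, and permission names here are hypothetical examples, not any platform's actual schema.

```python
# Role -> permissions granted by that role.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

# User -> roles assigned to that user (a user may hold several).
USER_ROLES: dict[str, set[str]] = {
    "alice": {"admin"},
    "bob":   {"viewer"},
}

def has_permission(user: str, action: str) -> bool:
    """RBAC check: allow iff any of the user's roles grants the action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("bob", "write"))  # a viewer cannot write
```

ABAC extends this by evaluating attributes (department, resource sensitivity, time of day) in the check instead of fixed role sets, which is the distinction the posting is asking candidates to understand.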
Data Engineer
Senior data scientist job in Charlotte, NC
Job Title: Data Engineer / SQL Server Developer (7+ Years)
Client: Wells Fargo
Rate: $60/hr
Interview Process: Code Test + In-Person Interview
Job Description
Wells Fargo is seeking a Senior Data Engineer / SQL Server Developer (7+ years) who can work across both database development and QA automation functions. The ideal candidate will have strong SQL Server expertise along with hands-on experience in test automation tools.
Key Responsibilities
Design, develop, and optimize SQL Server database structures, queries, stored procedures, triggers, and ETL workflows.
Perform advanced performance tuning, query optimization, and troubleshooting of complex SQL issues.
Develop and maintain data pipelines ensuring data reliability, integrity, and high performance.
Build and execute automated test scripts using Selenium, BlazeMeter, or similar frameworks.
Perform both functional and performance testing across applications and data processes.
Support deployments in containerized ecosystems, ideally within OpenShift.
Collaborate with architecture, QA, DevOps, and application teams to ensure seamless delivery.
Required Skills
Primary:
7+ years of hands-on SQL Server development (T-SQL, stored procedures, performance tuning, ETL).
Secondary:
Experience working with OpenShift or other container platforms.
Testing / QA Automation:
Strong experience with test automation tools like Selenium, BlazeMeter, JMeter, or equivalent.
Ability to design automated functional and performance test suites.
Ideal Candidate Profile
Senior-level developer capable of taking ownership of both development and test automation deliverables.
Strong analytical and debugging skills across data engineering and testing disciplines.
Experience working in large-scale enterprise environments.
Data Engineer
Senior data scientist job in Richmond, VA
Role - Data Engineer
Work mode - Hybrid - This is a hybrid role (1 week onsite 1 week remote alternating)
Duration - 12 months
Must have skills - Talend, Oracle Exadata, Snowflake, Cloud Data Warehouse, Real-time Streaming, Kafka, Kinesis, ETL, Informatica, SQL, Stored Procedures, Table Design, SQL Query Optimization, ETL Performance Tuning, Data Loading, Oracle.
Required Proven experience with:
• Experience with ETL tools - Talend (this is what the team uses) or Informatica
• Experience with SQL, Stored Procedures and Table Design
• Experience in SQL Query optimization and ETL Data loading performance
• Experience in Snowflake Cloud Data warehouse strongly preferred
• Experience in shell scripting is preferred
• Experience in real time streaming technologies is preferred
Looking for someone with experience in ETL tools.
Dominion uses Talend, but experience with other tools like Informatica is fine.
Strong SQL skills are needed for data loading, transformations, and query optimization.
Experience with Oracle (Exadata) is required.
While Dominion uses Snowflake, AWS or GCP experience is also okay.
They are open to candidates with 5+ years of experience, as long as they're eager to learn.
Thanks and Regards,
Nikhil
Technical Recruiter
Email : *********************
Web: ******************
101, E, Park Blvd.-Suite 600, Plano, TX 75074, USA
Note:
SibiTalent Corp. is an equal opportunity staffing firm. We do not discriminate on the basis of race, caste, color, religion, gender, culture, visa status, or any other protected characteristic. All hiring decisions are made strictly based on qualifications, experience, and specific client requirements.
Data Conversion Engineer
Senior data scientist job in Charlotte, NC
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part in ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technical related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn database structure, business logic and combine all knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Management of multiple projects and conversion implementations
Ability to proactively troubleshoot and solve problems with limited supervision
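The conversion-mapper responsibility above (interpreting incoming client data and reshaping it to a target specification) can be sketched as a field-mapping plus normalization step. The column names, field map, and cents normalization here are hypothetical illustrations, not Paymentus' actual specification.

```python
import csv
import io

# Hypothetical mapping from one client's export columns to target spec fields.
FIELD_MAP = {
    "Acct No":   "account_id",
    "Cust Name": "customer_name",
    "Amt Due":   "amount_due_cents",
}

def convert_row(row: dict) -> dict:
    """Rename mapped columns, drop unmapped ones, and normalize the amount."""
    out = {FIELD_MAP[k]: v.strip() for k, v in row.items() if k in FIELD_MAP}
    # Normalize a dollars-and-cents string like "12.50" to integer cents.
    dollars, _, cents = out["amount_due_cents"].partition(".")
    out["amount_due_cents"] = int(dollars) * 100 + int(cents.ljust(2, "0"))
    return out

raw = "Acct No,Cust Name,Amt Due\n1001,Jane Doe,12.50\n"
rows = [convert_row(r) for r in csv.DictReader(io.StringIO(raw))]
print(rows[0])
```

Centralizing the mapping in a table like `FIELD_MAP` is what makes such mappers reusable: onboarding a new client format becomes a configuration change rather than new conversion code.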
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication; ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front-end web interfaces (HTML5, JavaScript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
AWS Data Engineer
AWS data engineer job in McLean, VA
Responsibilities:
Design, build, and maintain scalable data pipelines using AWS Glue and Databricks.
Develop and optimize ETL/ELT processes using PySpark and Python.
Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
Ensure data quality, consistency, and reliability across systems.
Optimize performance of large-scale distributed data processing workflows.
Develop automation scripts and frameworks for data ingestion, transformation, and validation.
Follow best practices for data governance, security, and compliance.
Required Skills & Experience:
5-8 years of hands-on experience in Data Engineering.
Strong proficiency in Python and PySpark for data processing and transformation.
Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena.
Hands-on experience with Databricks for building and managing data pipelines.
Experience working with large-scale data systems and optimizing performance.
Solid understanding of data modeling, data lake architecture, and ETL design principles.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Data Engineer
Data engineer job in Charlotte, NC
Senior Data Engineer
Charlotte - Hybrid (4 days onsite, 1 day remote)
6-month contract, extendable
2 openings - $40-65/hr based on experience level
This position is for a Senior AWS Data Engineer working on a Master Data Management (MDM) project. The goal of the project is to create a single, trusted view of business data by cleaning up duplicate and inconsistent information from multiple sources. You'll be building scalable data pipelines on AWS, improving data quality, and working on advanced features like entity resolution and machine learning-assisted matching. It's a hands-on role where you'll own production-grade pipelines and work with large datasets. If you enjoy solving complex data challenges and making systems more efficient, this is a great fit.
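The matching and deduplication work described above usually starts with pairwise string similarity between candidate records. The sketch below is purely illustrative - it uses only Python's standard-library difflib, not the project's actual stack (Glue, EMR, AWS Entity Resolution, or ML-assisted matching), and the customer names and 0.7 threshold are hypothetical:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized-string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def dedupe(records, threshold=0.7):
    """Greedy entity resolution: each record joins the first existing
    cluster whose representative it matches above the threshold,
    otherwise it starts a new cluster."""
    clusters = []  # list of (representative, members) pairs
    for rec in records:
        for rep, members in clusters:
            if similarity(rec, rep) >= threshold:
                members.append(rec)
                break
        else:
            clusters.append((rec, [rec]))
    return clusters

# Hypothetical source records with near-duplicate spellings.
customers = ["Acme Corp", "ACME Corporation", "Globex Inc", "acme corp."]
clusters = dedupe(customers)
for rep, members in clusters:
    print(rep, "->", members)
```

Production MDM pipelines typically replace the greedy pass with blocking plus a learned match model, but the cluster-by-similarity shape is the same.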
Main Responsibilities:
• Build and maintain data pipelines on AWS
• Develop ETL jobs using AWS Glue (PySpark) and Amazon EMR
• Orchestrate workflows using Apache Airflow
• Support full and incremental data processing
• Implement data matching, deduplication, and entity resolution
• Monitor, troubleshoot, and support production pipelines
• Partner with analytics and business teams
Must Haves
• 6+ years of AWS data engineering experience
• Python and PySpark development
• Hands-on with AWS Glue and Amazon EMR
• Experience using Apache Airflow
• Strong SQL skills
• Experience working with large datasets
• Familiarity with ML concepts for data quality or matching
Nice to have
• Experience with entity resolution, fuzzy matching, or deduplication
• Experience with AWS Entity Resolution
• Experience in Business MDM programs
Data Engineer
Data engineer job in Charlotte, NC
W2 ONLY - NO CORP TO CORP - CONTRACT TO HIRE - NO VISA SPONSOR/TRANSFER - NO 3RD PARTY AGENCY CANDIDATES
Data Engineer
Serve as subject matter expert and/or technical lead for large-scale data products.
Drive end-to-end solution delivery across multiple platforms and technologies, leveraging ELT solutions to acquire, integrate, and operationalize data.
Partner with architects and stakeholders to define and implement pipeline and data product architecture, ensuring integrity and scalability.
Communicate risks and trade-offs of technology solutions to senior leaders, translating technical concepts for business audiences.
Build and enhance data pipelines using cloud-based architectures.
Design simplified data models for complex business problems.
Champion Data Engineering best practices across teams, implementing leading big data methodologies (AWS, Hadoop/EMR, Spark, Snowflake, Talend, Informatica) in hybrid cloud/on-prem environments.
Operate independently while fostering a collaborative, transformation-focused mindset.
Work effectively in a lean, fast-paced organization, leveraging Scaled Agile principles.
Promote code quality management, FinOps principles, automated testing, and environment management practices to deliver incremental customer value.
Qualifications
5+ years of data engineering experience.
2+ years developing and operating production workloads in cloud infrastructure.
Bachelor's degree in Computer Science, Data Science, Information Technology, or related field.
Hands-on experience with Snowflake (including SnowSQL, Snowpipe).
Expert-level skills in AWS services, Snowflake, Python, Spark (certifications are a plus).
Proficiency in ETL tools such as Talend and Informatica.
Strong knowledge of Data Warehousing (modeling, mapping, batch and real-time pipelines).
Experience with DataOps tools (GitHub, Jenkins, UDeploy).
Familiarity with P&C Commercial Lines business.
Knowledge of legacy tech stack: Oracle Database, PL/SQL, Autosys, Hadoop, stored procedures, Shell scripting.
Experience using Agile tools like Rally.
Excellent written and verbal communication skills to interact effectively with technical and non-technical stakeholders.
AWS Data Engineer (Only W2)
AWS data engineer job in Charlotte, NC
Title: AWS Data Engineer
Experience: 10 years
Must Have Skills:
• Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services.
• Experience in Snowflake and dbt (Data Build Tool).
• Expertise in Node.js and Python.
• Expertise in Informatica, Power BI, databases, and Cognos.
Detailed Job Description:
• Proven experience in leading teams across locations.
• Knowledge of DevOps processes, Infrastructure as Code and their purpose.
• Good understanding of data warehouses, their purpose, and implementation
• Good communication skills.
Kindly share your resume at ******************.
Lead Data Engineer
Lead data engineer job in Charlotte, NC
We are looking for a Lead Data Engineer with strong communication skills and hands-on experience across Snowflake, AWS, Python, PySpark, MongoDB, and IICS. This role requires a technical leader who can guide a small engineering team while also building and optimizing scalable data pipelines in a cloud environment.
Long-term Contract
Location: Charlotte, NC
4 Days Onsite
***Interviews are actively happening***
***If you are interested in this role, please share your Updated resume to proceed further***
Responsibilities
Lead and mentor a team of data engineers in day-to-day project delivery
Design, build, and optimize ETL/ELT pipelines using AWS Glue, Python, PySpark, and Snowflake
Work with business and technical stakeholders, deliver updates, and ensure smooth communication
Develop and maintain data workflows using IICS (Informatica Intelligent Cloud Services)
Manage data ingestion from multiple sources, including MongoDB and AWS services
Perform data modeling, SQL scripting, and performance tuning in Snowflake
Support deployment, monitoring, and troubleshooting of data pipelines
Ensure best practices for code quality, documentation, and cloud data architecture
Big Data Engineer
Big data engineer job in Charlotte, NC
Hello,
This is Shivam from Centraprise Global working as Talent Acquisition Lead.
I came across your profile in our resume database and wanted to reach out regarding a job opportunity. If interested, please reply with your updated resume, contact details, and the best time to discuss the opportunity.
Job Title: Hadoop // Big Data Engineer
Location: Charlotte, NC // New York City, NY (onsite)
Duration: Full Time
Job Description
Must Have Technical/Functional Skills
Primary Skill: Hadoop ecosystem (HDFS, Hive, Spark), PySpark, Python, Apache Kafka
Experience: Minimum 9 years
Roles & Responsibilities
Architectural Leadership:
Define end-to-end architecture for data platforms, streaming systems, and web applications.
Ensure alignment with enterprise standards, security, and compliance requirements.
Evaluate emerging technologies and recommend adoption strategies.
Data Engineering:
Design and implement data ingestion, transformation, and processing pipelines using Hadoop, PySpark, and related tools.
Optimize ETL workflows for large-scale datasets and real-time streaming.
Integrate Apache Kafka for event-driven architectures and messaging.
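The ingestion and streaming responsibilities above follow a common consume-validate-transform pattern. As a minimal stand-in sketch (a real deployment would use a Kafka consumer rather than an in-memory queue, and the event fields `user_id` and `amount_cents` are hypothetical):

```python
import json
import queue

def process_events(inbox: "queue.Queue", sink: list) -> None:
    """Drain an in-memory event queue, validate and transform each
    message, and append the result to a sink (stand-in for a DB write)."""
    while True:
        try:
            raw = inbox.get_nowait()
        except queue.Empty:
            break
        event = json.loads(raw)
        if "user_id" not in event:  # basic data-quality gate
            continue
        # Transform: convert cents to a dollar amount.
        event["amount_usd"] = round(event.pop("amount_cents") / 100, 2)
        sink.append(event)

inbox = queue.Queue()
inbox.put(json.dumps({"user_id": 1, "amount_cents": 1999}))
inbox.put(json.dumps({"amount_cents": 500}))  # dropped: fails validation
processed = []
process_events(inbox, processed)
print(processed)
```

The same validate-then-transform gate is what scales out when the queue becomes a Kafka topic and the sink becomes a Hive table or stream processor.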
Application Development:
Build and maintain backend services using Python and microservices architecture.
Develop responsive, dynamic front-end applications using Angular.
Implement RESTful APIs and ensure seamless integration between components.
Collaboration & Leadership:
Work closely with product owners, business analysts, and DevOps teams.
Mentor junior developers and data engineers.
Participate in agile ceremonies, code reviews, and design discussions.
Required Skills & Qualifications:
Technical Expertise:
Strong experience with Hadoop ecosystem (HDFS, Hive, Spark).
Proficiency in PySpark for distributed data processing.
Advanced programming skills in Python.
Hands-on experience with Apache Kafka for real-time streaming.
Frontend development using Angular (TypeScript, HTML, CSS).
Architectural Skills:
Expertise in designing scalable, secure, and high-performance systems.
Familiarity with microservices, API design, and cloud-native architectures.
Additional Skills:
Knowledge of CI/CD pipelines, containerization (Docker/Kubernetes).
Exposure to cloud platforms (AWS, Azure, GCP).
Thanks & Regards,
Shivam Gupta | Talent Acquisition Lead
Desk: ************ Ext- 732
Data Engineer
Data engineer job in Falls Church, VA
*** W2 Contract Only - No C2C - No 3rd Parties ***
The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA.
In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.
Compensation, Benefits, and Role Info
Competitive pay rate of $65 per hour.
Medical, dental, vision, direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting.
Type: 12-month contract with potential extension or conversion.
Location: On-site in Falls Church, VA.
What You'll Be Doing
Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources.
Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance.
Write and optimize complex, highly performant SQL queries across large datasets (Redshift, Oracle, SQL Server).
Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions.
Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.
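The pipeline duties above reduce to extract, validate/transform, and load-then-query steps. A toy sketch under stated assumptions - Python's built-in sqlite3 stands in for Glue/PySpark writing to Redshift, and the table and column names are hypothetical:

```python
import sqlite3

# In-memory database as a self-contained stand-in for a warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL)")

# Extract: raw source rows arrive as untyped strings.
raw = [("1", "east", "120.50"), ("2", "west", "75.00"), ("3", "east", "bad")]

# Transform: cast types and drop rows that fail validation.
clean = []
for order_id, region, amount in raw:
    try:
        clean.append((int(order_id), region, float(amount)))
    except ValueError:
        continue  # a real pipeline would quarantine malformed rows

# Load, then query the warehouse table.
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"))
print(totals)
```

The cast-and-quarantine step is where the posting's "data quality and integrity" focus lives; at scale it becomes a PySpark transformation with a dead-letter path rather than a `try/except` loop.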
What We're Looking For
8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems.
5+ years of experience in building production-level ETL pipelines using AWS Glue and Python/PySpark.
Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift.
Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance.
Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.
If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure.