
Data engineer jobs in Huntersville, NC - 1,359 jobs

  • CNC Applications Engineer

    Unify Recruit

    Data engineer job in Charlotte, NC

    A leading manufacturing solutions provider is seeking a highly experienced CNC Applications Engineer with a strong background in part processing, Fanuc CNC controls, and CAM programming using Mastercam or Esprit. This role is crucial in supporting process development, machine optimization, and technical customer support for high-precision machining operations.

    Key Responsibilities:
    - Part Processing Development: Interpret part drawings and develop comprehensive machining strategies including tooling, workholding, and efficient operation sequencing.
    - CNC Programming & Optimization: Generate, modify, and refine CNC programs using G-code, M-code, and CAM-generated output, ensuring high accuracy and process efficiency.
    - Fanuc Controls Expertise: Troubleshoot, configure, and fine-tune programs and operations on Fanuc-controlled CNC equipment.
    - Technical Customer Support: Provide remote and on-site support to customers for troubleshooting, training, and process validation during installations and upgrades.
    - Machine Setup & Validation: Assist with machine setup, prove-outs, and first-article inspections to ensure part accuracy and adherence to specifications.
    - CAM Software Utilization: Create toolpaths and NC code using CAM software (preferably Mastercam or Esprit) for a variety of CNC machines, including multi-axis.
    - Process Improvement: Identify and implement opportunities to reduce cycle times, enhance surface finishes, and extend tool life through process refinement.
    - Documentation: Maintain detailed and accurate documentation of machining processes, programs, and tooling setups.

    Required Qualifications:
    - 5-10 years of experience in CNC machining, part processing, and programming.
    - Strong working knowledge of Fanuc CNC controls and G/M code.
    - Proficiency in CAM software (preferably Mastercam or Esprit).
    - Deep understanding of machining practices, tooling, speeds/feeds, and materials.
    - Ability to collaborate across teams and support customer-facing technical work.
    - Strong communication and documentation skills; able to train operators, programmers, or customers on processes and best practices.

    Preferred Qualifications:
    - Experience with multi-axis CNC machines and turning centers.
    - Exposure to automation and robotics in CNC manufacturing environments.
    - Degree or technical certification in Manufacturing Technology, CNC Programming, or Mechanical Engineering.

    Work Environment: Combination of office-based engineering and hands-on work on the shop floor. Occasional travel to customer sites for training, support, and machine setup.

    Benefits:
    - Competitive compensation and performance-based bonuses
    - Medical, Dental, and Vision insurance
    - Paid Time Off (PTO)
    - 401(k) with company match
    - Support for continuing education and training
    - Clear opportunities for career advancement in a fast-paced technical environment
    $80k-110k yearly est. 3d ago
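As a hedged illustration of the speeds/feeds math that underlies tooling work like this listing describes, here is the standard RPM and feed-rate calculation; the surface-speed and chip-load values in the example are illustrative, not from the posting:

```python
import math

def spindle_rpm(sfm, tool_diameter_in):
    # Standard machining formula: RPM = (SFM * 12) / (pi * tool diameter).
    # SFM is the cutting surface speed in surface feet per minute.
    return (sfm * 12) / (math.pi * tool_diameter_in)

def feed_rate_ipm(rpm, chip_load_in, flutes):
    # Linear feed in inches per minute = RPM * chip load per tooth * flute count.
    return rpm * chip_load_in * flutes

# Example (invented numbers): a 0.5" end mill at 400 SFM, 0.002" chip load, 4 flutes.
rpm = spindle_rpm(400, 0.5)
feed = feed_rate_ipm(rpm, 0.002, 4)
```

A CAM package such as Mastercam or Esprit performs this arithmetic internally when generating toolpaths; the formulas themselves are textbook machining practice.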

  • Java Software Engineer

    Incedo Inc. (4.2 company rating)

    Data engineer job in Fort Mill, SC

    Senior Software Engineer - AWS focus; AI experience is a must. The Senior Software Engineer will design, develop, and deploy scalable applications with a strong emphasis on AWS cloud solutions. This role involves building serverless architectures, containerized workloads, and automation frameworks while collaborating with product, business, InfoSec, and Data Architecture teams to deliver secure and efficient solutions.

    Responsibilities:
    - Develop and deploy AWS-based solutions, including Lambda, Step Functions, and containerized workloads on EKS.
    - Implement and automate Infrastructure as Code using Terraform.
    - Collaborate with cross-functional teams to translate business requirements into technical solutions.
    - Contribute to Generative AI initiatives using AWS Bedrock and assist in deploying AI agents.
    - Ensure adherence to architecture standards, coding best practices, and security guidelines.
    - Participate in all phases of the SDLC: requirements, design, implementation, testing, and deployment.
    - Maintain technical documentation and support knowledge sharing within the team.
    - Engage in Agile ceremonies and contribute to estimation, planning, and delivery.

    What are we looking for? We want engineers who thrive in a fast-paced environment, are team-oriented, and can deliver innovative solutions while maintaining high standards of quality and security.

    Requirements:
    - Strong AWS expertise: Lambda, Step Functions, EKS.
    - Terraform experience (must-have).
    - Proficiency in Java/Spring Boot (preferred) and Python.
    - 3+ years of experience with containerization (Docker, Kubernetes).
    - Familiarity with CI/CD pipelines and Git-based workflows.
    - Experience with microservices, RESTful APIs, and RDBMS (PostgreSQL).
    - Strong problem-solving and communication skills.

    Preferred / Good to have:
    - Experience with AWS Bedrock and AI agent deployment.
    - Familiarity with AI coding tools (Cursor, Copilot).
    - Workflow automation tools (e.g., Camunda) and Kafka for event streaming.
    - Knowledge of CloudFormation/SAM.
    - Agile methodology experience and ability to mentor junior developers.
    $64k-80k yearly est. 18h ago
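For readers unfamiliar with the Lambda/Step Functions stack this posting centers on, a Lambda function is just a small handler; a minimal sketch follows, where the event shape and field names (order_id, amount) are invented for illustration:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch: validate an input record and
    return a payload that a downstream Step Functions state could consume.
    The 'order_id'/'amount' fields are illustrative assumptions."""
    body = event.get("body")
    record = json.loads(body) if isinstance(body, str) else (body or {})
    missing = [k for k in ("order_id", "amount") if k not in record]
    if missing:
        # Reject malformed input with a 400 and list the absent fields.
        return {"statusCode": 400, "body": json.dumps({"missing": missing})}
    return {"statusCode": 200,
            "body": json.dumps({"order_id": record["order_id"],
                                "amount": record["amount"],
                                "status": "accepted"})}
```

In a real deployment this handler would be packaged and wired up via Terraform, with the statusCode/body response shape matching what API Gateway expects.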
  • Big Data/Hadoop Consultant

    Sonsoft (3.7 company rating)

    Data engineer job in Charlotte, NC

    Sonsoft, Inc. is a US-based corporation duly organized under the laws of the State of Georgia. Sonsoft Inc. is growing at a steady pace, specializing in Software Development, Software Consultancy, and Information Technology Enabled Services.

    Job Description:
    - At least 2 years of experience in the Big Data space.
    - At least 2 years of strong Hadoop experience - MapReduce/Hive/Pig/Sqoop/Oozie (must).
    - Hands-on experience with Java, APIs, and Spring (must).
    - Strong knowledge of and experience in Core Java, including concepts such as multi-threading, design patterns, and data structures.
    - Strong hands-on experience with multithreading, synchronization, executor and concurrency frameworks, and design patterns.
    - Knowledge of REST-based web services and JSON.
    - Strong knowledge of RDBMS/databases such as Oracle.
    - Basic knowledge of Unix shell scripting.
    - Experience in Hadoop: Pig, MapReduce, Sqoop, Hive, YARN.
    - Experience with file formats such as Avro and Apache Parquet.
    - Experience in Spark and Flume.
    - Good exposure to columnar NoSQL databases such as HBase.
    - End-to-end delivery experience on complex, high-volume, high-velocity projects.
    - Good experience with at least one scripting language, such as Scala or Python.
    - Good exposure to Big Data architectures and a very good understanding of the Big Data ecosystem, including framework-building experience on Hadoop.
    - Experience sizing and estimating large-scale Big Data projects.
    - Good database knowledge with SQL tuning experience.
    - Experience with Impala; good exposure to Splunk and Kafka.
    - Past experience with and exposure to ETL and data warehouse projects.
    - Cloudera/Hortonworks certified.
    - Experience with, and desire to work in, a global delivery environment.

    Qualifications
    Basic Qualifications:
    - Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
    - At least 4 years of experience in Information Technology.

    Additional Information
    Note: This is a full-time, permanent job opportunity. Only US Citizens, Green Card holders, and GC-EAD, H4-EAD, and L2-EAD candidates can apply. No OPT-EAD, H1B, or TN candidates, please. Please mention your visa status in your email or resume.
    $76k-105k yearly est. 60d+ ago
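The MapReduce model this listing emphasizes (map, shuffle, reduce) can be sketched without a cluster; a pure-Python word count showing the three phases a Hadoop job formalizes:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word, like a Hadoop Mapper.
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the grouped values per key, like a Hadoop Reducer.
    return {word: sum(counts) for word, counts in groups.items()}

def word_count(lines):
    return reduce_phase(shuffle(map_phase(lines)))
```

On a real cluster the same three roles are played by distributed tasks over HDFS splits; this sketch only illustrates the dataflow, not Hadoop's APIs.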
  • Applied AI Data Scientist

    Thestaffed

    Data engineer job in Dallas, NC

    Our client, a top-tier management consulting firm, is seeking a highly skilled Applied AI Data Scientist for a top-tier US bank.

    Responsibilities and Requirements:
    - Perform statistical analysis, clustering, and probability modeling to drive insights and inform AI-driven solutions.
    - Analyze graph-structured data to detect anomalies, extract probabilistic patterns, and support graph-based intelligence.
    - Build NLP pipelines with a focus on NER, entity resolution, ontology extraction, and scoring.
    - Contribute to AI/ML engineering efforts by developing, testing, and deploying data-driven models and services.
    - Apply MLOps fundamentals, including experiment tracking, metric monitoring, and reproducibility practices.
    - Collaborate with cross-functional teams to translate analytical findings into production-grade capabilities.
    - Prototype quickly, iterate efficiently, and help evolve data science best practices across the team.
    - Solid experience in statistical modeling, clustering techniques, and probability-based analysis.
    - Hands-on expertise in graph data analysis, including anomaly detection and distribution pattern extraction.
    - Strong NLP skills with practical experience in NER, entity/ontology extraction, and related evaluation methods.
    - An engineering-forward mindset with the ability to build, deploy, and optimize real-world solutions (not purely theoretical).
    - Working knowledge of MLOps basics, including experiment tracking and key model metrics.
    - Proficiency in Python and common data science/AI libraries.
    - Strong communication skills and the ability to work collaboratively in fast-paced, applied AI environments.
    $68k-94k yearly est. 31d ago
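As one concrete (and deliberately toy) reading of the graph anomaly-detection bullet above, degree-based outlier scoring flags unusually connected nodes; this sketch is an illustration under that assumption, not the client's actual method:

```python
from statistics import mean, stdev

def degree_outliers(edges, threshold=2.0):
    """Flag nodes whose degree sits more than `threshold` sample standard
    deviations above the mean degree. Edges are undirected (u, v) pairs."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    mu, sigma = mean(degree.values()), stdev(degree.values())
    if sigma == 0:
        return []  # perfectly regular graph: nothing stands out
    return sorted(n for n, d in degree.items() if (d - mu) / sigma > threshold)
```

Production systems would typically use richer structural features (clustering coefficients, motif counts, embeddings) rather than raw degree, but the z-score framing is the same.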
  • Data Scientist

    Tata Consulting Services (4.3 company rating)

    Data engineer job in Charlotte, NC

    Must-Have Technical/Functional Skills: Strong Python and machine learning skill set.

    An experienced Data Scientist to lead end-to-end AI/ML solution design and implementation across a range of business domains in financial services. You will be responsible for architecting robust, scalable, and secure data science solutions that drive innovation and competitive advantage in the BFSI sector. This includes selecting appropriate technologies, defining solution blueprints, ensuring production readiness, and mentoring cross-functional teams. You will work closely with stakeholders to identify high-value use cases and ensure seamless integration of models into business applications. Your deep expertise in machine learning, cloud-native architectures, MLOps practices, and financial domain knowledge will be essential to influence strategy and deliver transformative business impact.

    * Proficient in Python, scikit-learn, TensorFlow, PyTorch, HuggingFace.
    * Strong BFSI domain knowledge.
    * Experience with NLP, LLMs (GPT), and deep learning.
    * Hands-on with MLOps pipelines and tools.
    * Experience with graph analytics tools (Neo4j, TigerGraph, NetworkX).

    Roles & Responsibilities
    * Architect and drive the design, development, and deployment of scalable ML/AI solutions.
    * Lead data science teams through complete project lifecycles - from ideation to production.
    * Define standards, best practices, and governance for AI/ML solutioning and model management.
    * Collaborate with data engineering, MLOps, product, and business teams.
    * Oversee integration of data science models into production systems.
    * Evaluate and recommend ML tools, frameworks, and cloud-native solutions.
    * Guide feature engineering, data strategy, and feature store design.
    * Promote innovation with generative AI, reinforcement learning, and graph-based learning.
    * Knowledge of Spark, PySpark, Scala.
    * Experience leading CoEs or data science accelerators.
    * Open-source contributions or published research.

    TCS Employee Benefits Summary:
    * Discretionary Annual Incentive.
    * Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
    * Family Support: Maternal & Parental Leaves.
    * Insurance Options: Auto & Home Insurance, Identity Theft Protection.
    * Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
    * Time Off: Vacation, Time Off, Sick Leave & Holidays.
    * Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.

    Salary Range: $100,000-$130,000 a year
    $100k-130k yearly 39d ago
  • Senior DataOps Engineer

    Scout Motors

    Data engineer job in Charlotte, NC

    Here at Scout Motors, we're carrying forward the heritage of one of the most iconic American vehicles in history. A vehicle dating back to 1960. One that forged the path for future generations of rugged SUVs and trucks and will do so once again. But Scout is more than just a brand, it's a legacy steeped in a culture of exploration, caretaking, and hard work. The Scout brand is all about respect. Respect for the past and the future by taking an iconic American brand that hasn't been around for a while, electrifying it, digitizing it, and loading it with American innovation. Respect for communities by creating a company that stands for its people and its customers. Respect for both work and play, with vehicles that are equally at home at a camp site, a job site, or on a Tuesday commute. And respect for our customers by developing two powertrains that meet their requirements - an all-electric powertrain as well as the Harvester™ range extender powertrain, which includes a built-in gas-powered generator with an estimated 500 miles of combined range.

    At Scout Motors, we empower our talented, inclusive, and entrepreneurial teams to innovate. What makes a Scout employee? Someone who is a visionary and a leader, who seeks new paths and shares lessons learned. A knowledgeable doer who collaborates across the company to build better. A go-getter with unrivaled passion. Join us at Scout Motors and be part of shaping the future of transportation. If you're ready to drive change and make history, apply now!

    About the Team
    The Data Platform Team at Scout is dedicated to unlocking the full potential of data by building a secure, scalable, and distributed platform that enables real-time insights, drives informed decision-making, and fosters innovation across the organization. Our mission is to empower teams with actionable intelligence by streamlining data sharing, ensuring regulatory compliance, maintaining data integrity, and optimizing costs at every level.
    This role is focused on building the foundation for deploying AI-enabled use cases across company operations.

    What you'll do
    Become part of an iconic brand that is set to revolutionize the electric pick-up truck & rugged SUV marketplace by achieving the following:
    - Contribute to the design, implementation, and maintenance of the overall cloud infrastructure data platform using modern IaC (Infrastructure as Code) practices.
    - Work closely with software development and systems teams to build data integration solutions.
    - Design and build data models using tools such as Lucid, Talend, Erwin, or MySQL Workbench.
    - Define and enhance the enterprise data model to reflect relationships and dependencies.
    - Review application data systems to ensure adherence to data governance policies.
    - Design and build ETL/ELT infrastructure, automation, and solutions in Python to transform data as required.
    - Design and implement BI dashboards to visualize trends and forecasts.
    - Design and implement data infrastructure components, ensuring high availability, reliability, scalability, and performance.
    - Design, train, and deploy ML models.
    - Implement monitoring solutions to proactively identify and address potential issues.
    - Collaborate with security teams to ensure the data platform meets industry standards and compliance requirements.
    - Collaborate with cross-functional teams, including product managers, developers, and business partners, to ensure robust and reliable systems.

    Location & Travel Expectations: This role will be based out of the Scout Motors corporate headquarters in Charlotte, NC. This role may be remote to start but will transition to an in-office setting at the headquarters within 6 months of start date. This role is not eligible for remote work in New York City. The responsibilities of this role require 4-5 days attendance in office with in-person meetings and events regularly. Applicants should expect that the role will require the ability to convene with Scout colleagues in person and travel to participate in events on behalf of the company from time to time.

    What you'll bring
    We expect all Scout employees to have integrity, curiosity, and resourcefulness, and to strive to exhibit a positive attitude as well as a growth mindset. You'll be comfortable with change and flexible in a fast-paced, high-growth environment. You'll take a collaborative approach to achieve ambitious goals. Here's what else you'll bring:
    - Bachelor's degree in computer science, information technology, or a related field, or equivalent work experience.
    - 7+ years of hands-on experience as a DataOps Engineer in a manufacturing or automotive environment.
    - Experience with streaming and event-based architecture.
    - Experience implementing data lakehouse solutions using Databricks.
    - Experience with infrastructure as code (Terraform).
    - Proficiency in building data pipelines using languages such as Python and SQL.
    - Experience with AWS-based data services such as Glue, Kinesis, Firehose, or other comparable services.
    - Experience with structured, unstructured, and time-series databases.
    - Solid understanding of cloud data storage solutions such as RDS, DynamoDB, DocumentDB, Mongo, Cassandra, and Influx.
    - Several years of experience working with cloud platforms such as AWS and Azure.
    - Proven ability to develop and deploy scalable ML models, with hands-on experience designing, training, and deploying them.
    - Strong ability to extract actionable insights using ML techniques and to leverage ML algorithms for forecasting trends and decision-making.
    - Excellent problem-solving and troubleshooting skills. When a problem occurs, you run towards it, not away.
    - Effective communication and collaboration skills. You treat colleagues with respect. You have a desire for clean implementations but are also humble in discussing alternative solutions and options.
    What you'll gain
    The benefits of joining Scout include the chance to build products and a company from the ground up. This is a chance to create something new and lasting - with an iconic brand at its foundation. In addition, Scout provides competitive compensation and benefits to support your physical, mental, and financial wellbeing. Program specifics are detailed in company policies and employee benefit guides; select highlights:
    - Competitive insurance, including medical, dental, vision, and income protection plans.
    - 401(k) program with an employer match and immediate vesting.
    - Generous Paid Time Off, including 20 days planned PTO (as accrued), 40 hours of unplanned PTO, and 14 company or floating holidays, annually.
    - Up to 16 weeks of paid parental leave for biological and adoptive parents of all genders.
    - Paid leave for circumstances related to bereavement, jury duty, voting time, or military leave.

    Pay Transparency
    This is a full-time, exempt position eligible to receive a base salary and to participate in an annual performance bonus program. Final salary offered will be determined based on factors including but not limited to the candidate's skills and experience. The annual performance bonus program is preset and not candidate dependent.
    Initial base salary range = $140,000.00 - $170,000.00
    Initial California base salary range = $154,000.00 - $187,000.00
    Internal leveling code: IC8

    Notice to applicants:
    - Residing in San Francisco: Pursuant to the San Francisco Fair Chance Ordinance, Scout Motors will consider for employment qualified applicants with arrest and conviction records.
    - Residing in Los Angeles: Scout Motors will consider for employment qualified applicants with criminal histories in a manner consistent with the Los Angeles Fair Chance Initiative for Hiring Ordinance.
    - Residing in New York City: This role is not eligible for remote work in New York City.
Equal Opportunity Scout Motors is committed to employing a diverse workforce and is proud to be an Equal Opportunity Employer. Qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, gender expression, veteran status, disability, pregnancy, or any other characteristics protected by law. Scout Motors is committed to compliance with all applicable fair employment practice laws. If you require reasonable accommodation to complete a job application, pre-employment testing, or a job interview or to otherwise participate in the hiring process, please contact ScoutAccommodations@scoutmotors.com.
    $154k-187k yearly Auto-Apply 1d ago
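The "ETL (Python)" work in the Scout listing can be illustrated with a tiny extract-transform-load sketch; the CSV columns and the Wh-to-kWh conversion below are assumptions for the example, not Scout's actual schema:

```python
import csv
import io

def transform(raw_csv, threshold_wh):
    """Toy ETL step: extract rows from CSV text, transform (filter low
    readings and convert Wh to kWh), and load into a list of dicts.
    Column names ('vehicle_id', 'energy_wh') are invented for illustration."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    out = []
    for row in rows:
        wh = float(row["energy_wh"])
        if wh >= threshold_wh:  # drop readings below the cutoff
            out.append({"vehicle_id": row["vehicle_id"],
                        "energy_kwh": round(wh / 1000, 3)})
    return out
```

In a production pipeline the extract step would read from a service like Glue or Kinesis and the load step would write to a lakehouse table, but the shape of the transform is the same.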
  • Data Scientist

    Insight Global

    Data engineer job in Charlotte, NC

    Insight Global is seeking a Data Engineer for one of our large retail food service clients. This position will be within the BI & Analytics group supporting unattended retail solutions for clients and consumers across the US. This individual will be responsible for the following:
    - Develop the comprehensive knowledge required to support existing production models and forecast solutions.
    - Implement automated processes for efficiently generating and monitoring production forecasting models.
    - Design and create algorithms and machine-learning models; re-train and tune the models for efficiency and scalability in a cloud environment.
    - Develop ad hoc and statistical analyses to determine trends and significant relationships.
    - Develop and validate predictive and prescriptive models to identify opportunities for business improvement.
    - Identify unique opportunities and strategize new uses for data; actively engage with the business customer to identify measurable criteria.
    - Present and communicate complex analyses and content appropriate for the audience.
    - Collaborate and develop working relationships with IT, Data Engineering, and Analytics teams to understand and share best practices and interdependencies between teams, data sources, technologies, and platforms.
    - Be accountable for ensuring high-quality solutions are delivered within project timelines.

    We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances.
    If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to Human Resources Request Form (****************************************** Og4IQS1J6dRiMo). The EEOC "Know Your Rights" Poster is available here (***********************************************************************************************). To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ***************************************************.

    Skills and Requirements
    - Graduate studies or BS degree in Data Science, Computer Science, Mathematics, Statistics, or a related field.
    - 2+ years of experience in large, complex data analysis and predictive modeling in a corporate setting.
    - Experience designing and building processes with large, complex data sets from multiple data sources.
    - 2+ years of advanced proficiency in writing complex MS SQL, including aggregate analysis over large complex datasets, multiple joins, and query optimization.
    - Proficient coding abilities in the major programming languages, including R and Python.
    - Experience with end-to-end machine learning model development.
    - Skilled in best practices for data handling and imputation.
    - Familiar with a variety of ML modeling techniques, including the limitations and appropriate application of models such as forecasting, logistic regression, random forest, and gradient boosting.
    - Proficiency with MS Excel and pivot tables/charts.
    - Experience visualizing business insights with MS Power BI or other data visualization tools preferred.
    - Excellent verbal and written communication skills, with the ability to present complex topics effectively in a simplified and appropriate manner.
    - Inquisitive, energetic initiative-taker, able to work independently or collaboratively in a fast-paced team environment.
    - Experience with Snowflake and Snowpark preferred.
    - Experience working with Azure DevOps and Sprint/Agile projects preferred.
    - Experience with analysis within the retail industry is a plus.
    $68k-95k yearly est. 60d+ ago
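A self-contained sketch of the aggregate-over-joins SQL this role calls for, using sqlite3 so it runs anywhere; the vending-style tables and column names are invented for the example:

```python
import sqlite3

# Build a tiny in-memory schema: a dimension table and a fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE machines (machine_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE sales (machine_id INTEGER, amount REAL);
    INSERT INTO machines VALUES (1, 'East'), (2, 'East'), (3, 'West');
    INSERT INTO sales VALUES (1, 5.0), (1, 7.5), (2, 4.0), (3, 9.0);
""")

def revenue_by_region(conn):
    # Join the dimension table to the fact table, then aggregate per region.
    return conn.execute("""
        SELECT m.region, ROUND(SUM(s.amount), 2) AS revenue
        FROM sales AS s
        JOIN machines AS m ON m.machine_id = s.machine_id
        GROUP BY m.region
        ORDER BY revenue DESC
    """).fetchall()
```

The same join-then-aggregate shape carries over to MS SQL or Snowflake; only the dialect details (e.g., `TOP` vs `LIMIT`) change.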
  • Data Scientist

    Zone It Solutions

    Data engineer job in Charlotte, NC

    We are looking for a talented Data Scientist skilled in Python and SQL. In this role, you will analyze large datasets, develop predictive models, and derive actionable insights that will guide business decisions.

    Requirements
    - Proven experience as a Data Scientist or in a similar role, with a strong focus on data analysis and modeling.
    - Proficiency in programming languages, especially Python, and strong SQL skills for database management and querying.
    - Experience with statistical analysis techniques and data visualization tools (e.g., Tableau, Matplotlib, Seaborn).
    - Familiarity with machine learning frameworks and libraries (e.g., Scikit-learn, TensorFlow).
    - Strong analytical skills and the ability to work with large datasets to extract meaningful information.
    - Experience in data preprocessing, feature engineering, and model evaluation.
    - Excellent problem-solving abilities and strong communication skills to present findings effectively.
    - A degree in Computer Science, Mathematics, Statistics, or a related field is preferred.

    Benefits
    About Us: We specialize in Digital, ERP, and larger IT services. We offer flexible, efficient, and collaborative solutions to any organization that requires IT experts. Our agile, agnostic, and flexible solutions will help you source the IT expertise you need. If you are looking for new opportunities, send your profile to ***************************. Also follow our LinkedIn page for new job opportunities and more. Zone IT Solutions is an equal opportunity employer, and our recruitment process focuses on essential skills and abilities. We encourage applications from a diverse array of backgrounds, including individuals of various ethnicities, cultures, and linguistic backgrounds, as well as those with disabilities.
    $68k-95k yearly est. Auto-Apply 60d+ ago
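The model-evaluation requirement above can be made concrete with from-scratch confusion-matrix metrics; a minimal sketch (in practice a library such as scikit-learn provides these):

```python
def evaluate(y_true, y_pred):
    """Compute accuracy, precision, and recall for binary labels (0/1)
    by tallying the four confusion-matrix cells directly."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    n = len(y_true)
    return {"accuracy": (tp + tn) / n,
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0}
```

Precision and recall matter more than accuracy when classes are imbalanced, which is typical of the predictive-modeling problems this role describes.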
  • Data Scientist

    Techstarsgroup

    Data engineer job in Charlotte, NC

    Use your data science skills in the fight against chronic diseases. Our client operates a pioneering disease management platform, focusing on delivering value-based care tailored to individuals with chronic conditions. By seamlessly integrating human expertise with cutting-edge software and analytics, the platform actively engages patients soon after diagnosis, ensuring they receive the most appropriate care swiftly. It offers continuous support by employing targeted, evidence-based interventions, significantly enhancing patient outcomes and making a meaningful difference in their health journey.

    As a Data Scientist, you will become an integral part of our Data team, tasked with enhancing our analytical capabilities to support our overarching mission. Your role will involve modeling data and crafting visualizations to communicate with stakeholders, delivering insights that prompt actionable measures. A background in working with healthcare data, including claims, prior authorizations, and electronic health records, is crucial for success in this role. You will work closely with teams across Product, Engineering, Clinical, and Operations to pinpoint opportunities for improving outcomes and monitoring the effectiveness of our interventions for our members and clients. Your contributions will be pivotal in establishing strong relationships with providers by developing data products that support practice transformation efforts.

    Within three months, you will:
    - Acquire a comprehensive understanding of our data platform, contributing to the enhancement of our data models and pipelines.
    - Forge relationships with oncology practices and providers, showcasing our expertise in developing data and analytics products.
    - Collaborate with stakeholders to grasp their business needs and translate these into technical specifications, involving the creation of data models, pipelines, and analytics dashboards using tools such as Looker or RStudio. This may include utilizing dbt to construct models for analyzing medical claims data to identify value care utilization across oncology practices, or examining the dispersion of medical care among our members and its impact on out-of-pocket expenses.

    After six months, you will:
    - Assume responsibility for and lead the development of data models to assess the impact of our interventions with oncology practices, reporting outcomes both internally and externally.
    - Play a crucial role in enhancing our data infrastructure and devising a roadmap for a scalable and modular data architecture to accommodate our team's expansion.
    - Lead the development of utilization and quality metrics, becoming the primary contact for stakeholder inquiries. You will also utilize our data assets to identify business opportunities and strategic initiatives.

    Keys to success include:
    - Prioritizing our members. The mission of our organization, especially the experience of our members, is of utmost importance to you.
    - Being action-oriented. You have a knack for identifying and prioritizing the needs of your initiatives, ensuring that urgent and important tasks are addressed promptly.
    - Valuing diverse perspectives. You are humble, constantly seeking feedback, and keen on learning and sharing knowledge.
    - Relevant experience. You have experience handling large healthcare datasets, preferably within a health plan or a healthcare-focused technology startup with advanced data structures and pipelines. Expertise in medical claims, pharmacy claims, eligibility files, and other pertinent healthcare data is vital for creating data marts for reporting and analysis.
    - Technical proficiency. Your skills in analytics, data modeling, and data transformation are essential. While familiarity with dbt is preferred, we welcome candidates who are eager to learn it swiftly. Experience with Python or R and tools like Looker for data analysis and visualization is advantageous.
    - Effective communication. You are adept at expressing your ideas clearly to both technical and non-technical team members and stakeholders.
    - Comfort with ambiguity. You have a proven track record of navigating through challenges and finding solutions in uncertain situations, particularly in fast-paced environments and ambitious startups.
    $68k-95k yearly est. 60d+ ago
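A dbt model like the claims-utilization work described above ultimately compiles to plain SQL; this self-contained sqlite3 sketch over an invented claims table shows the shape of such a model (the schema and dollar figures are made up for illustration):

```python
import sqlite3

# Toy claims table standing in for a source model in a dbt project.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (member_id TEXT, service TEXT, paid REAL);
    INSERT INTO claims VALUES
      ('m1', 'ER', 900.0), ('m1', 'office', 120.0),
      ('m2', 'office', 100.0), ('m2', 'office', 80.0);
""")

def utilization(conn):
    # Per-member claim count and total paid, as a reporting mart might expose.
    return conn.execute("""
        SELECT member_id, COUNT(*) AS claim_ct, SUM(paid) AS total_paid
        FROM claims
        GROUP BY member_id
        ORDER BY member_id
    """).fetchall()
```

In dbt, the SELECT statement would live in its own model file and reference upstream models with `ref()`, but the aggregation logic is identical.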
  • Data Scientist

    Isolved HCM

    Data engineer job in Charlotte, NC

    Summary/objective
    We are seeking a highly skilled Data Scientist to focus on building and deploying predictive models that identify customer churn risk and upsell opportunities. This role will play a key part in driving revenue growth and retention strategies by leveraging advanced machine learning, statistical modeling, and large-scale data capabilities within Databricks.

    Why Join Us? Be at the forefront of using Databricks AI/ML capabilities to solve real-world business challenges. Directly influence customer retention and revenue growth through applied data science. Work in a collaborative environment where experimentation and innovation are encouraged.

    Core Job Duties:
    Model Development
    * Design, develop, and deploy predictive models for customer churn and upsell propensity using Databricks ML capabilities.
    * Evaluate and compare algorithms (e.g., logistic regression, gradient boosting, random forest, deep learning) to optimize predictive performance.
    * Incorporate feature engineering pipelines that leverage customer behavior, transaction history, and product usage data.
    Data Engineering & Pipeline Ownership
    * Build and maintain scalable data pipelines in Databricks (using PySpark, Delta Lake, and MLflow) to enable reliable model training and scoring.
    * Collaborate with data engineers to ensure proper data ingestion, transformation, and governance.
    Experimentation & Validation
    * Conduct A/B tests and back-testing to validate model effectiveness.
    * Apply techniques for model monitoring, drift detection, and retraining in production.
    Business Impact & Storytelling
    * Translate complex analytical outputs into clear recommendations for business stakeholders.
    * Partner with Product and Customer Success teams to design strategies that reduce churn, increase upsell, and improve customer retention KPIs.

    Minimum Qualifications:
    * Master's or PhD in Data Science, Statistics, Computer Science, or a related field (or equivalent industry experience).
    * 3+ years of experience building predictive models in a production environment.
    * Strong proficiency in Python (pandas, scikit-learn, PySpark) and SQL.
    * Demonstrated expertise using Databricks for: data manipulation and distributed processing with PySpark; building and managing models with MLflow; leveraging Delta Lake for efficient data storage and retrieval; and implementing scalable ML pipelines within Databricks' ML Runtime.
    * Experience with feature engineering for behavioral and transactional datasets.
    * Strong understanding of customer lifecycle analytics, including churn modeling and upsell/recommendation systems.
    * Ability to communicate results and influence decision-making across technical and non-technical teams.

    Preferred Qualifications:
    * Experience with cloud platforms (Azure Databricks, AWS, or GCP).
    * Familiarity with Unity Catalog for data governance and security.
    * Knowledge of deep learning frameworks (TensorFlow, PyTorch) within Databricks.
    * Exposure to MLOps best practices (CI/CD for ML, model versioning, monitoring).
    * Background in SaaS, subscription-based businesses, or customer analytics.

    Physical Demands: Prolonged periods of sitting at a desk and working on a computer. Must be able to lift up to 15 pounds.
    Travel Required: Limited
    Work Authorization: Employees must be legally authorized to work in the United States.
    FLSA Classification: Exempt
    Location: Any
    Effective Date: 9/16/2025

    About isolved
    isolved is a provider of human capital management (HCM) solutions that help organizations recruit, retain and elevate their workforce. More than 195,000 employers and 8 million employees rely on isolved's software and services to streamline human resource (HR) operations and deliver employee experiences that matter.
isolved People Cloud is a unified yet modular HCM platform with built-in artificial intelligence (AI) and analytics that connects HR, payroll, benefits, and workforce and talent management into a single solution that drives better business outcomes. Through the Sidekick Advantage, isolved also provides expert guidance, embedded services and an engaged community that empowers People Heroes to grow their companies and careers. Learn more at ******************* isolved is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. isolved is a progressive and open-minded meritocracy. If you are smart and good at what you do, come as you are. Visit ************************** for more information regarding our incredible culture and focus on our employee experience. Visit ************************* for a comprehensive list of our employee total rewards offerings.
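The churn-modeling duties above can be sketched in miniature as a propensity model. This is not isolved's actual stack (the posting names Databricks, PySpark, Delta Lake, and MLflow); it is a plain-Python logistic regression trained on synthetic data, with made-up feature names, shown only to illustrate what a churn-propensity score is.

```python
import math
import random

# Synthetic training data; the features (logins/week, tickets/month) and the
# churn rule are hypothetical, purely for illustration.
random.seed(0)

def make_row():
    logins = random.uniform(0.0, 10.0)   # logins per week
    tickets = random.uniform(0.0, 5.0)   # support tickets per month
    churned = 1 if (logins < 3.0 and tickets > 2.0) else 0
    # Scale features into [0, 1] so plain SGD stays numerically stable.
    return [logins / 10.0, tickets / 5.0], churned

data = [make_row() for _ in range(500)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train logistic regression with per-sample gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def churn_score(logins, tickets):
    """Churn propensity in [0, 1] for one customer (raw feature values)."""
    x = [logins / 10.0, tickets / 5.0]
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(churn_score(1.0, 4.5) > churn_score(9.0, 0.5))  # at-risk vs healthy
```

In a production version of this role, the same shape of model would be trained on Delta Lake tables and tracked with MLflow rather than hand-rolled.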
    $68k-95k yearly est. 17d ago
  • AI Data Scientist

    Nationmind LLC

    Data engineer job in Charlotte, NC

    NationMind LLC is a technology consulting firm focused on software development and QA testing services. We help clients build reliable, scalable applications with a strong emphasis on automation, performance, and quality. Our team works across industries, delivering solutions that drive innovation and operational efficiency. We are currently hiring skilled professionals for the AI Data Scientist role to join our growing team. Job title: AI Data Scientist Location: Charlotte, NC (Hybrid) Job description/Role Overview: We are looking for an experienced Data Scientist who can design, develop, and deploy advanced AI/ML models and data-driven solutions. The ideal candidate will have strong expertise in machine learning, deep learning, LLMs, and cloud-based data platforms, along with hands-on experience in data engineering, vector databases, and end-to-end deployment. Key Responsibilities: Model Development & Optimization: Build and fine-tune ML/DL models, including LLMs for NLP tasks; implement RAG (Retrieval-Augmented Generation) and agentic AI workflows for enterprise use cases; optimize models for performance, scalability, and cost efficiency. Data Engineering & Management: Design and maintain data pipelines for structured and unstructured data; work with vector databases (e.g., Pinecone, Milvus, Weaviate) for semantic search and embeddings; ensure data quality, governance, and compliance. Deployment & MLOps: Deploy models using Docker, Kubernetes, and cloud-native services (AWS, Azure, GCP); implement CI/CD pipelines for ML workflows and automated retraining; monitor model performance and drift using MLOps tools. Collaboration & Communication: Work closely with architects, engineers, and business stakeholders to translate requirements into solutions; present insights and recommendations using data visualization tools. Required Technical Skills: Programming: Python (Pandas, NumPy, Scikit-learn, PyTorch, TensorFlow), SQL. AI/ML Frameworks: LangChain, Hugging Face, LlamaIndex. Cloud Platforms: AWS SageMaker, Azure ML, GCP Vertex AI. Databases: SQL/NoSQL. 
Vector DBs: Pinecone, Milvus, Weaviate. Deployment: Docker, Kubernetes, Helm. MLOps Tools: MLflow, Kubeflow, Airflow. Visualization: Power BI, Tableau, Matplotlib, Plotly.
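The vector-database duties above boil down to nearest-neighbor search over embeddings. A minimal stdlib sketch, with made-up 3-dimensional vectors standing in for model-generated embeddings and an in-memory dict standing in for a vector DB such as Pinecone, Milvus, or Weaviate:

```python
import math

# Toy "vector store": doc id -> embedding. The ids and vectors are invented
# for illustration; real embeddings come from a model and live in a vector DB.
store = {
    "refund-policy": [0.9, 0.1, 0.0],
    "login-help":    [0.1, 0.9, 0.1],
    "pricing-tiers": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, k=2):
    """Return the k doc ids whose embeddings are closest to the query."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, store[d]), reverse=True)
    return ranked[:k]

print(semantic_search([0.85, 0.2, 0.05], k=1))  # prints ['refund-policy']
```

A RAG workflow adds one more step: the retrieved documents are stuffed into an LLM prompt as context before generation.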
    $68k-95k yearly est. 17d ago
  • Data Engineer

    Contact Government Services

    Data engineer job in Charlotte, NC

    Employment Type: Full-Time, Mid-level Department: Business Intelligence CGS is seeking a passionate and driven Data Engineer to support a rapidly growing Data Analytics and Business Intelligence platform focused on providing solutions that empower our federal customers with the tools and capabilities needed to turn data into actionable insights. The ideal candidate is a critical thinker and perpetual learner; excited to gain exposure and build skillsets across a range of technologies while solving some of our clients' toughest challenges. CGS brings motivated, highly skilled, and creative people together to solve the government's most dynamic problems with cutting-edge technology. To carry out our mission, we are seeking candidates who are excited to contribute to government innovation, appreciate collaboration, and can anticipate the needs of others. Here at CGS, we offer an environment in which our employees feel supported, and we encourage professional growth through various learning opportunities. Skills and attributes for success: * Complete development efforts across the data pipeline to store, manage, and provision data to data consumers. * Be an active, collaborative member of an Agile/Scrum team, following all Agile/Scrum best practices. * Write code to ensure the performance and reliability of data extraction and processing. * Support continuous process automation for data ingest. * Achieve technical excellence by advocating for and adhering to lean-agile engineering principles and practices such as API-first design, simple design, continuous integration, version control, and automated testing. * Work with program management and engineers to implement and document complex and evolving requirements. * Help cultivate an environment that promotes customer service excellence, innovation, collaboration, and teamwork. 
* Collaborate with others as part of a cross-functional team that includes user experience researchers and designers, product managers, engineers, and other functional specialists. Qualifications: * Must be a US Citizen. * Must be able to obtain a Public Trust Clearance. * 7+ years of IT experience, including experience in the design, management, and solutioning of large, complex data sets and models. * Experience developing data pipelines from many sources, spanning structured and unstructured data sets in a variety of formats. * Proficiency in developing ETL processes and performing test and validation steps. * Proficiency in manipulating data (Python, R, SQL, SAS). * Strong knowledge of big data analysis and storage tools and technologies. * Strong understanding of agile principles and the ability to apply them. * Strong understanding of CI/CD pipelines and the ability to apply them. * Experience with relational databases such as PostgreSQL. * Comfort working with version control systems such as Git. Ideally, you will also have: * Experience creating and consuming APIs. * Experience with DHS and knowledge of DHS standards a plus. * Candidates will be given special consideration for extensive experience with Python. * Ability to develop visualizations utilizing Tableau or PowerBI. * Experience in developing Shell scripts on Linux. * Demonstrated experience translating business and technical requirements into comprehensive data strategies and analytic solutions. * Demonstrated ability to communicate across all levels of the organization and communicate technical terms to non-technical audiences. Our Commitment: Contact Government Services (CGS) strives to simplify and enhance government bureaucracy through the optimization of human, technical, and financial resources. We combine cutting-edge technology with world-class personnel to deliver customized solutions that fit our client's specific needs. 
We are committed to solving the most challenging and dynamic problems. For the past seven years, we've been growing our government-contracting portfolio, and along the way, we've created valuable partnerships by demonstrating a commitment to honesty, professionalism, and quality work. Here at CGS we value honesty through hard work and self-awareness, professionalism in all we do, and delivering the best quality to our customers, maintaining those relationships for years to come. We care about our employees. Therefore, we offer a comprehensive benefits package: * Health, Dental, and Vision * Life Insurance * 401k * Flexible Spending Account (Health, Dependent Care, and Commuter) * Paid Time Off and Observance of State/Federal Holidays Contact Government Services, LLC is an Equal Opportunity Employer. Applicants will be considered without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Join our team and become part of government innovation! Explore additional job opportunities with CGS on our Job Board: ************************************* For more information about CGS please visit: ************************** or contact: Email: [email protected] #CJ $112,597.33 - $152,810.66 a year We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
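The pipeline duties above (ingest, ETL development, test and validation steps) can be sketched end to end with the standard library. The source data, column names, and validation rules below are hypothetical:

```python
import csv
import io
import sqlite3

# A minimal extract -> validate -> load sketch. A production pipeline would
# pull from real source systems; this inline CSV is invented for illustration.
raw = "id,amount,state\n1,19.99,NC\n2,,NC\n3,42.50,XX\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def validate(rows):
    """Split rows into (good, rejected), recording a reason per rejection."""
    good, rejected = [], []
    for r in rows:
        if not r["amount"]:
            rejected.append((r, "missing amount"))
        elif r["state"] not in {"NC", "SC", "VA"}:
            rejected.append((r, "unknown state"))
        else:
            good.append(r)
    return good, rejected

def load(rows):
    """Load validated rows into a target table (in-memory SQLite here)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER, amount REAL, state TEXT)")
    con.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(int(r["id"]), float(r["amount"]), r["state"]) for r in rows],
    )
    return con

good, rejected = validate(extract(raw))
con = load(good)
print(con.execute("SELECT COUNT(*) FROM orders").fetchone()[0])   # prints 1
print([reason for _, reason in rejected])  # ['missing amount', 'unknown state']
```

Keeping rejected rows with reasons, rather than silently dropping them, is what makes the validation step auditable.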
    $112.6k-152.8k yearly 60d+ ago
  • Senior Data Engineer - SQL & Reporting

    Stratacuity

    Data engineer job in Charlotte, NC

    Client: Financial Services Team: Investment Solutions, Banking, Lending and TR Job Title: Senior Data Engineer - SQL & Reporting x2 Contract Length: 18+ months Rate: $65-70/hr Top Requirements: * Database design experience using MS SQL Server - 7+ years * Stored procedures/complex queries (T-SQL, PL/SQL) - 7+ years * MS Reporting Services - 5+ years * MS Power BI - 3+ years * MS SSIS - 3+ years * Broadcom AutoSys - 3+ years * IBM Connect: Direct (NDM) - 3+ years * Knowledge of financial services, especially advisory products * Compliance and Certification Management Job Summary: In this contingent role, the Software Engineer will leverage extensive experience in database design and development to support complex software initiatives. The engineer will analyze, develop, and optimize database solutions, contribute to reporting and data visualization efforts, and collaborate with cross-functional teams to ensure scalable and efficient data systems that meet business needs. Day-to-Day Responsibilities: * Design, develop, and optimize database schemas and objects using MS SQL Server; * Create and maintain complex stored procedures and queries (T-SQL, PL/SQL); * Develop reports and dashboards using MS Reporting Services and Power BI; * Build and support ETL pipelines with MS SSIS; * Automate and schedule processes using Broadcom AutoSys and IBM Connect: Direct; * Collaborate with technical and business teams to gather requirements and deliver solutions; * Ensure compliance with financial industry standards and security protocols. Apex Systems is a world-class IT services company that serves thousands of clients across the globe. When you join Apex, you become part of a team that values innovation, collaboration, and continuous learning. We offer quality career resources, training, certifications, development opportunities, and a comprehensive benefits package. 
Our commitment to excellence is reflected in many awards, including ClearlyRated's Best of Staffing in Talent Satisfaction in the United States and Great Place to Work in the United Kingdom and Mexico. Apex uses a virtual recruiter as part of the application process. Click here for more details. Apex Benefits Overview: Apex offers a range of supplemental benefits, including medical, dental, vision, life, disability, and other insurance plans that offer an optional layer of financial protection. We offer an ESPP (employee stock purchase program) and a 401K program which allows you to contribute typically within 30 days of starting, with a company match after 12 months of tenure. Apex also offers a HSA (Health Savings Account on the HDHP plan), a SupportLinc Employee Assistance Program (EAP) with up to 8 free counseling sessions, a corporate discount savings program and other discounts. In terms of professional development, Apex hosts an on-demand training program, provides access to certification prep and a library of technical and leadership courses/books/seminars once you have 6+ months of tenure, and certification discounts and other perks to associations that include CompTIA and IIBA. Apex has a dedicated customer service team for our Consultants that can address questions around benefits and other resources, as well as a certified Career Coach. You can access a full list of our benefits, programs, support teams and resources within our 'Welcome Packet' as well, which an Apex team member can provide. Employee Type: Contract Location: Charlotte, NC, US Job Type: Date Posted: December 12, 2025 Similar Jobs * Senior Data Engineer - Snowflake & Azure * Senior Data Engineer * Data Engineer - Data Engineer II * HRX Reporting * MOSA PowerBi / Sql Sr CE
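The T-SQL and reporting work described in this role centers on parameterized, aggregating queries. As a portable stand-in for MS SQL Server, the sketch below uses Python's stdlib sqlite3; the table, columns, and data are invented:

```python
import sqlite3

# SQLite stands in for MS SQL Server purely so the sketch is self-contained;
# in the actual role this would be a T-SQL stored procedure. All names are
# hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE advisory_accounts (account_id INTEGER, product TEXT, aum REAL);
    INSERT INTO advisory_accounts VALUES
        (1, 'Managed Portfolio', 250000.0),
        (2, 'Managed Portfolio', 150000.0),
        (3, 'Self-Directed',      90000.0);
""")

def aum_by_product(con, min_aum):
    """Report total assets under management per product, above a threshold."""
    return con.execute(
        """
        SELECT product, SUM(aum) AS total_aum, COUNT(*) AS accounts
        FROM advisory_accounts
        WHERE aum >= ?
        GROUP BY product
        ORDER BY total_aum DESC
        """,
        (min_aum,),
    ).fetchall()

print(aum_by_product(con, 100000.0))
# [('Managed Portfolio', 400000.0, 2)]
```

The parameter placeholder (`?` here, `@min_aum` in T-SQL) is what keeps the report both reusable and safe from injection.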
    $65-70 hourly 3d ago
  • Data Engineer

    Carolina It Professionals 4.2company rating

    Data engineer job in Charlotte, NC

    Daily responsibilities: Develop Spark/BigQuery data pipelines in a GCP Dataproc cluster. Develop data models and schemas to support business requirements. Ensure data quality and reliability by developing automated checks. Optimize performance and ensure adherence to SLAs. Document and share learnings with the rest of the team. The candidate will be working on migrating workloads from an on-prem Hadoop cluster to GCP. The team is geographically spread, based mainly out of Charlotte, NC and Bangalore, India. The candidate should be able to communicate in a concise and clear manner to keep the team apprised of day-to-day progress. Required Skills: GCP/BigQuery experience - 1 year minimum. Scala/Python - 3+ years. SQL - 3+ years. Strong SQL and one or more programming languages (Scala/Python). Preferred Skills: Exposure to building a semantic layer in LookML. GCP certification. Master's degree in a related field.
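The "automated checks" responsibility above can be illustrated with simple column-level data-quality rules run against each batch. The thresholds and column names are hypothetical; on this team the equivalent logic would typically run inside a Spark or BigQuery job:

```python
# A sketch of automated data-quality checks: per-column null-rate rules
# evaluated against a pipeline batch. Columns and thresholds are invented.
def null_rate(rows, col):
    return sum(1 for r in rows if r.get(col) in (None, "")) / len(rows)

def run_checks(rows, rules):
    """rules: {column: max allowed null rate}. Returns a list of failures."""
    failures = []
    for col, max_nulls in rules.items():
        rate = null_rate(rows, col)
        if rate > max_nulls:
            failures.append(f"{col}: null rate {rate:.0%} exceeds {max_nulls:.0%}")
    return failures

batch = [
    {"customer_id": 1, "email": "a@x.com"},
    {"customer_id": 2, "email": None},
    {"customer_id": 3, "email": ""},
    {"customer_id": None, "email": "d@x.com"},
]
print(run_checks(batch, {"customer_id": 0.30, "email": 0.30}))
# ['email: null rate 50% exceeds 30%']
```

Failing the batch (or quarantining it) when the list is non-empty is what turns these checks into an SLA guardrail.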
    $84k-111k yearly est. 60d+ ago
  • Data Engineer - Kafka

    Delhaize America 4.6company rating

    Data engineer job in Salisbury, NC

    Ahold Delhaize USA, a division of global food retailer Ahold Delhaize, is part of the U.S. family of brands, which includes five leading omnichannel grocery brands - Food Lion, Giant Food, The GIANT Company, Hannaford and Stop & Shop. Our associates support the brands with a wide range of services, including Finance, Legal, Sustainability, Commercial, Digital and E-commerce, Technology and more. Primary Purpose: The Data Engineer II contributes to expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. They will contribute to our data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They engage through the entire lifecycle of a project, from data mapping, data pipelines, and data modeling to, finally, data consumption. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. They will learn to optimize or even re-design our company's data architecture to support our next generation of products and data initiatives. They can take on smaller projects from start to finish, work on problems of moderate scope where analysis of situations or data requires a review of a variety of factors, and trace issues to their source. They develop solutions to a variety of problems of moderate scope and complexity. Our flexible/hybrid work schedule includes 3 in-person days at one of our core locations and 2 remote days. Our core office locations include Salisbury, NC, Chicago, IL, and Quincy, MA. Applicants must be currently authorized to work in the United States on a full-time basis. Duties & Responsibilities: * Solves simple to moderate application errors, resolves application problems, following up promptly with all appropriate customers and IT personnel. * Reviews and contributes to QA test plans and supports the QA team during test execution. 
* Participates in developing streaming data applications (Kafka), data transformations, and data pipelines. * Ensures change control and change management procedures are followed within the program/project as they relate to requirements. * Interprets requirement documents and contributes to creating functional design documents as part of the data development life cycle. * Documents all phases of work, including gathering requirements, architectural diagrams, and other program technical specifications, using current specified design standards for new or revised solutions. * Relates information from various sources to draw logical conclusions. * Conducts unit testing on data streams. * Conducts data lineage and impact analysis as a part of the change management process. * Conducts data analysis (SQL, Excel, Data Discovery, etc.) on legacy systems and new data sources. * Creates source-to-target data mappings for data pipelines and integration activities. * Assists in identifying the impact of proposed application development/enhancement projects. * Performs data profiling and process analysis to understand key source systems and uses knowledge of application features and functions to assess the scope and impact of business needs. * Implements and maintains data governance policies and procedures to ensure data quality, security, and compliance. * Ensures operational stability of a 24/7/365 grocery retail environment by providing technical support, system monitoring, and issue resolution, which may be required during off-hours, weekends, and holidays as needed. 
Qualifications: * Bachelor's degree in Computer Science or a technical field; equivalent training, certifications, or experience will be considered * 3 or more years of equivalent experience in a relevant job or field of technology Preferred Qualifications: * Master's degree in a relevant field of study preferred; additional training or certifications in a relevant field of study preferred * Experience in Agile teams and/or a Product/Platform-based operating model. * Experience in retail or grocery preferred. * Experience with Kafka. #DICEJobs #LI-hybrid #LI-SS1 Salary Range: $101,360 - $152,040 Actual compensation offered to a candidate may vary based on their unique qualifications and experience, internal equity, and market conditions. Final compensation decisions will be made in accordance with company policies and applicable laws. At Ahold Delhaize USA, we provide services to one of the largest portfolios of grocery companies in the nation, and we're actively seeking top talent. Our team shares a common motivation to drive change, take ownership and enable our brands to better care for their customers. We thrive on supporting great local grocery brands and their strategies. Our associates are the heartbeat of our organization. We are committed to offering a welcoming work environment where all associates can succeed and thrive. Guided by our values of courage, care, teamwork, integrity (and even a little humor), we are dedicated to being a great place to work. We believe in collaboration, curiosity, and continuous learning in all that we think, create and do. While building a culture where personal and professional growth are just as important as business growth, we invest in our people, empowering them to learn, grow and deliver at all levels of the business.
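The Kafka duties in this posting follow a consume-transform-produce pattern. The sketch below simulates that loop with in-memory queues so it stays self-contained; a real implementation would use a Kafka client library against broker-hosted topics, and the topic and field names here are invented:

```python
import json
from collections import deque

# In-memory stand-ins for two Kafka topics. A real pipeline would use a Kafka
# client (e.g. a consumer polling "raw_sales" and a producer writing
# "store_totals"); these names and records are hypothetical.
raw_sales = deque()
store_totals = deque()

def produce(topic, record):
    topic.append(json.dumps(record))  # Kafka messages are serialized bytes

def consume_transform_produce():
    """Drain raw events, aggregate per store, emit one summary per store."""
    totals = {}
    while raw_sales:
        event = json.loads(raw_sales.popleft())
        totals[event["store"]] = totals.get(event["store"], 0.0) + event["amount"]
    for store, total in sorted(totals.items()):
        produce(store_totals, {"store": store, "total": total})

produce(raw_sales, {"store": "Salisbury", "amount": 12.50})
produce(raw_sales, {"store": "Charlotte", "amount": 8.00})
produce(raw_sales, {"store": "Salisbury", "amount": 4.25})
consume_transform_produce()
print([json.loads(m) for m in store_totals])
# [{'store': 'Charlotte', 'total': 8.0}, {'store': 'Salisbury', 'total': 16.75}]
```

In the real streaming version, the aggregation state would be windowed and checkpointed rather than recomputed per drain.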
    $101.4k-152k yearly 31d ago
  • Data Conversion Engineer

    Paymentus 4.5company rating

    Data engineer job in Charlotte, NC

    Summary/Objective Are you looking to work at a high-growth, innovative, and purpose-driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest-growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1,700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded. Essential Functions/Responsibilities The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part of ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform. 
Develop data conversion procedures using SQL, Java, and Linux scripting Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications Develop new specifications to satisfy new customers and products Serve as the primary point of contact/driver for all technical related conversion activities Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates Maintain and update technical conversion documentation to share with internal and external clients and partners Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills Adapt and creatively solve encountered problems under high stress and tight deadlines Learn database structure and business logic, and combine all knowledge to improve processes. Be flexible. Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication Management of multiple projects and conversion implementations Ability to proactively troubleshoot and solve problems with limited supervision Education and Experience B.S. degree in Computer Science or comparable experience Strong knowledge of Linux and the command line interface Exceptional SQL skills Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.) 
Familiarity with various online banking applications and understanding of third-party integrations is a plus Effective written and verbal communication skills Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues Communication; ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels Strong attention to detail Proficiency with raw data, analytics, or data reporting tools Nice to Have Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries Experience with front end web interfaces (HTML5, Javascript, CSS3) Cloud technologies (AWS, GCP, Azure) Work Environment This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones. Physical Demands This role requires sitting or standing at a computer workstation. Position Type/Expected Hours of Work This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand. Travel No travel required for this role. Other Duties Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice. EEO Statement Paymentus is an equal opportunity employer. 
We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment. Reasonable Accommodation Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
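The conversion-mapper duties above amount to reshaping legacy biller records onto a target schema and normalizing types along the way. The field names on both sides of this sketch are hypothetical, not Paymentus' actual specification:

```python
# A sketch of a data-conversion "mapper": rename incoming fields to a target
# spec and normalize their values. Source and target field names are invented.
MAPPING = {
    "CUST_NO": "account_number",
    "CUST_NAME": "account_holder",
    "BAL_DUE": "amount_due",
}

def convert_row(source_row):
    """Map one legacy record onto the target schema, normalizing types."""
    out = {target: source_row[src] for src, target in MAPPING.items()}
    out["amount_due"] = round(float(out["amount_due"]), 2)      # numeric, 2 dp
    out["account_number"] = out["account_number"].strip().zfill(10)  # fixed width
    return out

legacy = {"CUST_NO": " 4521 ", "CUST_NAME": "J. Rivera", "BAL_DUE": "103.5"}
print(convert_row(legacy))
# {'account_number': '0000004521', 'account_holder': 'J. Rivera', 'amount_due': 103.5}
```

In practice each new biller gets its own `MAPPING` (or a small transform function), which is what the posting means by developing and updating conversion mappers per customer.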
    $82k-114k yearly est. 43d ago
  • Data Platforms Engineer

    Resideo

    Data engineer job in Charlotte, NC

    ADI is seeking a passionate Data Platforms Engineer to design, develop, and maintain our enterprise data platforms. As the Engineer of Data Platforms, you will be part of a talented engineering and operations team focused on building and supporting secure, performant, and scalable next-gen data platforms that include Snowflake, Databricks, MS Fabric, PowerBI, DBT, Airflow, Azure data services, SQL Server databases and modern AI/ML technologies. This role will help ensure our data infrastructure meets both present and future needs while adhering to governance, compliance and operational excellence. **JOB DUTIES:** + Platform Development - end-to-end engineering of data platforms, ensuring platform reliability, scalability, and extensibility that evolves with business goals and customer needs + Build reusable frameworks and automation using Snowflake Cloud, GitHub, Azure Data Factory, DBT, or similar tools. + Maintain critical infrastructure, including OLTP databases, near real-time data pipelines, batch processing systems and frameworks that make data infrastructure accessible for engineering teams + Administer, configure, maintain and support data, analytics and cloud data platforms + Manage and administer data security and protection to protect the databases against threats or unauthorized access. 
+ Contribute to data modeling, partitioning, indexing, clustering, and performance optimization efforts in Snowflake and other data platforms, following established best practices + Resource monitoring, cost optimization, and data validation and quality checks built into the data pipeline + Maintain run books and knowledge bases for the platform operations team + Ensure seamless integration of data systems across the company, collaborating with data engineering and analytics teams to align solutions with business needs + Perform after-hours support for critical production systems with occasional, scheduled maintenance or release deployments, and provide on-call support as needed **YOU MUST HAVE:** + 2+ years' overall experience designing, deploying, managing and supporting RDBMS, Data Warehouse systems, Data Lake, and BI solutions, including working with platforms such as SQL Server, Snowflake Cloud, Databricks, DBT, ADF (Azure Data Factory) and other Azure services + Advanced SQL proficiency + Proficiency in Snowflake Cloud advanced capabilities + Hands-on public cloud experience, including experience with technologies and tools such as Continuous Integration, Continuous Deployment, Configuration Management, and Provisioning Automation with tools such as GitHub + Understanding of security practices and certificate management + Experience with orchestration tools **WE VALUE:** + Certifications in cloud technologies, data management, or data engineering (e.g., Azure Data Engineer, Snowflake SnowPro Core/Advanced) + Understanding of enterprise data management principles + Experience with AI/ML technologies and MLOps capabilities and tools + Experience with Data Platforms administration #LI-MH2 #LI-HYBRID Resideo Technologies has announced its intention to spin off ADI Global Distribution and establish it as a separate, publicly traded company. 
Under this plan, ADI will continue its role as a leading global wholesale distributor serving commercial and residential markets, while Resideo will retain its manufacturing and product-solutions business. Upon separation, both companies will operate independently to better serve their respective markets and customers. The spin-off is currently targeted for completion in the second half of 2026, subject to customary conditions. Resideo is a $6.76 billion global manufacturer, developer, and distributor of technology-driven sensing and control solutions that help homeowners and businesses stay connected and in control of their comfort, security, energy use, and smart living. We focus on the professional channel, serving over 100,000 contractors, installers, dealers, and integrators across the HVAC, security, fire, electrical, and home comfort markets. Our products are found in more than 150 million residential and commercial spaces worldwide, with tens of millions of new devices sold annually. Trusted brands like Honeywell Home, First Alert, and Resideo power connected living for over 12.8 million customers through our Products & Solutions segment. Our ADI | Snap One segment spans 200+ stocking locations in 17 countries, offering a catalog of over 500,000 products from more than 1,000 manufacturers. With a global team of more than 14,000 employees, we offer the opportunity to make a real impact in a fast-growing, purpose-driven industry. Learn more at ************************ At Resideo, we bring together diverse individuals to build the future of homes. Resideo is an equal opportunity employer. Qualified applicants will be considered without regard to age, race, creed, color, national origin, ancestry, marital status, affectional or sexual orientation, gender identity or expression, disability, nationality, sex, religion, or veteran status. For more information on applicable U.S. 
equal employment regulations, refer to the ****************************************************************************************************************************************************** If you require a reasonable accommodation to apply for a job, please use the Contact Us form for assistance.
    $77k-103k yearly est. 60d+ ago
  • Quantexa Data Engineer

    SMBC

    Data engineer job in Charlotte, NC

    SMBC Group is a top-tier global financial group. Headquartered in Tokyo and with a 400-year history, SMBC Group offers a diverse range of financial services, including banking, leasing, securities, credit cards, and consumer finance. The Group has more than 130 offices and 80,000 employees worldwide in nearly 40 countries. Sumitomo Mitsui Financial Group, Inc. (SMFG) is the holding company of SMBC Group, which is one of the three largest banking groups in Japan. SMFG's shares trade on the Tokyo, Nagoya, and New York (NYSE: SMFG) stock exchanges.

    In the Americas, SMBC Group has a presence in the US, Canada, Mexico, Brazil, Chile, Colombia, and Peru. Backed by the capital strength of SMBC Group and the value of its relationships in Asia, the Group offers a range of commercial and investment banking services to its corporate, institutional, and municipal clients. It connects a diverse client base to local markets and the organization's extensive global network. The Group's operating companies in the Americas include Sumitomo Mitsui Banking Corp. (SMBC), SMBC Nikko Securities America, Inc., SMBC Capital Markets, Inc., SMBC MANUBANK, JRI America, Inc., SMBC Leasing and Finance, Inc., Banco Sumitomo Mitsui Brasileiro S.A., and Sumitomo Mitsui Finance and Leasing Co., Ltd.

    **JOB SUMMARY**

    The Quantexa Data Engineer supports SMBC's AML Market applications and data services.

    Purpose: design, build, and maintain scalable data pipelines and shared libraries that ensure data availability, quality, reliability, and regulatory compliance for Anti-Money Laundering obligations.

    Scope and impact: enterprise-scale AML data engineering on Azure (Data Factory Gen2, Databricks), Elasticsearch, and MS SQL Server, enabling faster time to market through reusable capabilities, improving platform health and performance, and reducing compliance risk by strengthening data integrity and security.

    Reporting structure: reports to AML Market Technology leadership and partners closely with Technology, Product, and Business stakeholders.

    **PRINCIPAL DUTIES AND RESPONSIBILITIES**

    + Design, develop, and optimize robust data pipelines and ETL processes for AML data domains.
    + Build reusable libraries and shared capabilities to accelerate delivery across Data Engineering teams.
    + Evolve platform capabilities and maintain overall platform health, performance, and reliability.
    + Collaborate with cross-functional stakeholders to translate data requirements into solutions.
    + Ensure data quality, integrity, and security across all data systems and workflows.
    + Monitor, troubleshoot, and continuously improve data workflows, SLAs, and performance.
    + Automate validation, testing, and deployment processes to increase delivery speed and consistency.
    + Develop and integrate services using Spark and Scala, Java, or Python; implement RESTful APIs with JSON; support Tomcat-hosted services and Elasticsearch indexing/queries.
    + Apply Agile practices; manage work using Jira and documentation using Confluence.
    + Operate within DevOps tooling and pipelines (Git, Azure DevOps, Jenkins).
    + Stay current with emerging data engineering technologies and trends; propose improvements.
    + Communicate proactively with business users, support teams, vendors, and stakeholders to ensure high customer satisfaction.

    **POSITION SPECIFICATIONS**

    **Required Qualifications**

    + Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    + 7+ years in data engineering or closely related roles.

    **Technical Skills**

    + Highly proficient in Scala, Python, and SQL; hands-on with Spark; experience with Apache Airflow.
    + Azure cloud data platforms: Azure Data Factory Gen2, Azure Databricks, Azure-hosted databases.
    + Elasticsearch experience; RESTful APIs with JSON; Tomcat application services.
    + Advanced SQL on relational databases, preferably MS SQL Server and Azure-hosted databases.
    + Strong ETL tooling knowledge and data modeling skills.
    + DevOps lifecycle experience: Git, Azure DevOps, Jenkins.
    + Agile methodologies; Jira and Confluence proficiency.
    + Linux/Unix environment familiarity.

    **Professional Competencies**

    + Strong analytical and diagnostic skills for troubleshooting complex systems.
    + Ability to design scalable data architectures and shared frameworks that improve time to market.
    + Clear, proactive communication with technical and non-technical stakeholders.
    + Focus on data quality, integrity, security, and SLA adherence.

    Our positions are open to all, regardless of ethnicity, gender, sexual orientation, accessibility needs, cultural or social background, or any other characteristic, and they are always open to people with disabilities. EOE, including Disability/Veterans.
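The duties above call for automated data-quality validation before records flow downstream. As a rough, hypothetical illustration (the field names and rules below are invented, not SMBC's actual AML schema), a pre-load check in a pipeline might look like:

```python
from datetime import datetime

# Hypothetical required fields for an AML transaction record (assumption).
REQUIRED_FIELDS = {"txn_id", "account_id", "amount", "currency", "booked_at"}

def validate_record(record: dict) -> list:
    """Return a list of data-quality violations for one transaction record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    if not isinstance(record["amount"], (int, float)) or record["amount"] <= 0:
        errors.append("amount must be a positive number")
    if len(str(record["currency"])) != 3:
        errors.append("currency must be a 3-letter ISO code")
    try:
        datetime.fromisoformat(record["booked_at"])
    except (TypeError, ValueError):
        errors.append("booked_at must be an ISO-8601 timestamp")
    return errors

def split_valid_invalid(records):
    """Partition records so only clean rows continue to indexing/loading."""
    valid, invalid = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            invalid.append((rec, errs))  # route to a quarantine/review path
        else:
            valid.append(rec)
    return valid, invalid
```

In practice these checks would run inside the Spark or Databricks job rather than in plain Python, but the pattern of validating, quarantining, and only then loading is the same.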
    $77k-103k yearly est. 22d ago
  • Data Platforms Engineer

    Resideo Technologies, Inc.

    Data engineer job in Charlotte, NC

    ADI is seeking a passionate Data Platforms Engineer to design, develop, and maintain our enterprise data platforms. As the Data Platforms Engineer, you will be part of a talented engineering and operations team focused on building and supporting secure, performant, and scalable next-generation data platforms spanning Snowflake, Databricks, MS Fabric, Power BI, DBT, Airflow, Azure data services, SQL Server, and modern AI/ML technologies. This role will help ensure our data infrastructure meets both present and future needs while adhering to governance, compliance, and operational excellence.

    JOB DUTIES:

    * Platform development: end-to-end engineering of data platforms, ensuring platform reliability, scalability, and extensibility that evolve with business goals and customer needs
    * Build reusable frameworks and automation using Snowflake, GitHub, Azure Data Factory, DBT, or similar tools
    * Maintain critical infrastructure, including OLTP databases, near real-time data pipelines, batch processing systems, and frameworks that make data infrastructure accessible for engineering teams
    * Administer, configure, maintain, and support data, analytics, and cloud data platforms
    * Manage and administer data security to protect databases against threats and unauthorized access
    * Contribute to data modeling, partitioning, indexing, clustering, and performance optimization efforts in Snowflake and other data platforms, following established best practices
    * Build resource monitoring, cost optimization, data validation, and quality checks into the data pipeline
    * Maintain run books and knowledge bases for the platform operations team
    * Ensure seamless integration of data systems across the company, collaborating with data engineering and analytics teams to align solutions with business needs
    * Perform after-hours support for critical production systems, with occasional scheduled maintenance or release deployments, and provide on-call support as needed

    YOU MUST HAVE:

    * 2+ years' overall experience designing, deploying, managing, and supporting RDBMS, data warehouse, data lake, and BI solutions on platforms such as SQL Server, Snowflake, Databricks, DBT, ADF (Azure Data Factory), and other Azure services
    * Advanced SQL proficiency
    * Proficiency in advanced Snowflake capabilities
    * Hands-on public cloud experience, including continuous integration, continuous deployment, configuration management, and provisioning automation with tools such as GitHub
    * Understanding of security practices and certificate management
    * Experience with orchestration tools

    WE VALUE:

    * Certifications in cloud technologies, data management, or data engineering (e.g., Azure Data Engineer, Snowflake SnowPro Core/Advanced)
    * Understanding of enterprise data management principles
    * Experience with AI/ML technologies and MLOps capabilities and tools
    * Experience with data platform administration

    #LI-MH2 #LI-HYBRID
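The resource-monitoring and cost-optimization duty can be sketched in miniature. The warehouse names and budgets below are invented; in a real Snowflake deployment the credit-usage figures would come from the ACCOUNT_USAGE views (e.g. WAREHOUSE_METERING_HISTORY), and the alert would feed a dashboard or pager rather than a return value:

```python
def flag_over_budget(credits_used, budgets, default_budget=10.0):
    """Return {warehouse: overage_in_credits} for warehouses over budget.

    credits_used: {warehouse_name: credits consumed today}
    budgets:      {warehouse_name: daily credit budget}; warehouses not
                  listed fall back to default_budget (an assumption here).
    """
    overages = {}
    for warehouse, used in credits_used.items():
        budget = budgets.get(warehouse, default_budget)
        if used > budget:
            overages[warehouse] = round(used - budget, 2)
    return overages
```

A check like this, run on a schedule by an orchestrator such as Airflow, is one simple way to make cost monitoring part of the pipeline itself rather than an after-the-fact report.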
    $77k-103k yearly est. 60d+ ago
  • Data Engineer

    CapB Infotek

    Data engineer job in Charlotte, NC

    We immediately need a few Data Engineers with Spark/Java and Ab Initio and/or Informatica. The position is based out of Charlotte and can be done remotely.

    Requirements:

    * 5+ years' experience with ETL development tools: Spark/Java (preferred), Python, Ab Initio, Informatica
    * Advanced experience and demonstrated proficiency in core Java and Spark
    * Experience with Apache Beam, Cloud Dataflow, Cloud Composer, and BigQuery
    * 5+ years' experience in big data technologies, distributed multi-tier application development, database design, data processing, and data warehousing
    * Strong experience with SQL queries and stored procedures; data profiling, data analysis, and data validation skills
    * Deep DBMS experience (incl. Oracle, Teradata, and SQL Server) and advanced understanding of data warehousing ETL concepts (esp. change data capture)
    * Strong experience with database design, architecture best practices, normalization, dimensional modeling, etc.
    * Experience developing complex UNIX shell scripts
    * Experience with Git or similar source code versioning tools and coding standards
    * Experience with a scheduling tool such as Autosys or similar
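The change-data-capture concept named in the requirements can be illustrated with a snapshot diff. Real CDC tooling (Ab Initio, Informatica, or log-based capture) reads database change logs instead of comparing full snapshots; this is only a minimal, hypothetical sketch of what "detecting inserts, updates, and deletes" means:

```python
def cdc_diff(before, after):
    """Diff two table snapshots keyed by primary key.

    before/after: {primary_key: row} where rows are comparable values
    (e.g. tuples). Returns (inserts, updates, deletes) as dicts.
    """
    inserts = {k: v for k, v in after.items() if k not in before}
    deletes = {k: v for k, v in before.items() if k not in after}
    updates = {k: after[k] for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return inserts, updates, deletes
```

Downstream, each bucket maps to a different warehouse action: inserts append, updates merge into the target (often as a new slowly-changing-dimension version), and deletes either remove rows or set an end-date flag.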
    $77k-103k yearly est. 60d+ ago

Learn more about data engineer jobs

How much does a data engineer earn in Huntersville, NC?

The average data engineer in Huntersville, NC earns between $67,000 and $118,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Huntersville, NC

$89,000