
Data scientist jobs in Oak Ridge, TN

- 2,886 jobs
  • Data Scientist (LLM & Financial data)

    Talent Groups (4.2 company rating)

    Data scientist job in Charlotte, NC

    Job Responsibilities: Analyze large financial datasets to extract insights and support business decisions. Develop, implement, and evaluate machine learning models and algorithms tailored to banking and finance use cases (e.g., risk modeling, fraud detection, customer segmentation). Apply and fine-tune large language models (LLMs) for tasks such as document analysis, customer communication, and regulatory compliance. Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions. Communicate findings and recommendations through reports, dashboards, and presentations. Work with data engineers to ensure data quality and pipeline reliability.
    $76k-111k yearly est. 1d ago
  • Data Scientist

    Coforge

    Data scientist job in Atlanta, GA

    Role: Data Scientist
    Mode of Hire: Full Time

    Key Responsibilities
    • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
    • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
    • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
    • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
    • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
    • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration.
    • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.

    Skills Required:
    • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
    • 5+ years of experience in a data science, ML engineering, or analytics role.
    • Strong programming skills in SQL, Python, and ML techniques.
    • Experience with Azure Cloud, Databricks, and/or Snowflake.
    • Experience building and deploying machine learning models in production environments. Hands-on experience with Databricks, including SparkML and MLflow integration.
    • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
    • Experience with tools such as Git, MLflow, Docker, and workflow schedulers.
    • Ability to communicate complex technical work to non-technical stakeholders.
    • Experience with scalable model training environments and distributed computing.

    Preferred Qualifications
    • Master's degree in a quantitative or technical discipline.
    • Experience in financial services, fintech, or enterprise B2B analytics.
    • Knowledge of A/B testing, causal inference, and statistical experimentation.
    • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 4d ago
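The data wrangling and feature engineering work described in the listing above can be sketched in miniature. This is a hypothetical, stdlib-only example of cleaning raw records and deriving a model-ready feature; all field names and values are invented for illustration:

```python
from datetime import datetime

# Hypothetical raw records as they might arrive from an API or data lake:
# inconsistent casing, missing values, and string-typed numbers.
raw = [
    {"customer": "ACME ", "signup": "2023-01-15", "monthly_spend": "120.50"},
    {"customer": "beta corp", "signup": "2024-06-01", "monthly_spend": None},
    {"customer": "Gamma", "signup": "2022-11-30", "monthly_spend": "87.00"},
]

def clean(record, as_of=datetime(2025, 1, 1)):
    """Normalize one record and engineer a tenure feature (days since signup)."""
    spend = float(record["monthly_spend"]) if record["monthly_spend"] else 0.0
    tenure_days = (as_of - datetime.strptime(record["signup"], "%Y-%m-%d")).days
    return {
        "customer": record["customer"].strip().title(),
        "monthly_spend": spend,
        "tenure_days": tenure_days,  # engineered feature for downstream models
    }

features = [clean(r) for r in raw]
```

In a real pipeline this step would run inside Databricks or a scheduled workflow, but the shape of the work (normalize, impute, derive) is the same.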
  • Data Scientist

    Astec Digital

    Data scientist job in Chattanooga, TN

    BUILT TO CONNECT At Astec, we believe in the power of connection and the importance of building long-lasting relationships with our employees, customers and the communities we call home. With a team more than 4,000 strong, our employees are our #1 advantage. We invest in skills training and provide opportunities for career development to help you grow along with the business. We offer programs that support physical safety, as well as benefits and resources to enhance total health and wellbeing, so you can be your best at work and at home. Our equipment is used to build the roads and infrastructure that connects us to each other and to the goods and services we use. We are an industry leader known for delivering innovative solutions that create value for our customers. As our industry evolves, we are using new technology and data like never before. We're looking for creative problem solvers to build the future with us. Connect with us today and build your career at Astec. LOCATION: Chattanooga, TN On-site / Hybrid (Role must report on-site regularly) ABOUT THE POSITION The Data Scientist will play a key role in establishing the analytical foundation of Astec Smart Services. This individual will lead efforts to build pipelines from source to cloud, define data workflows, build predictive models, and help guide the team's approach to turning data into customer value. He or she will work closely within Smart Services and cross-functionally to ensure insights are actionable and impactful. The role blends Data architecture, data engineering, and data science to help build Smart Services analytical foundation. This person will be instrumental in helping to build Astec's digital transformation and aftermarket strategy. Deliverables & Responsibilities Data Engineering: Build and maintain robust data pipelines for ingestion, transformation, and storage. Optimize ETL processes for scalability and performance. 
Data Architecture: Design and implement data models that support analytics and operational needs. Define standards for data governance, security, and integration. Data Science: Develop predictive models and advanced analytics to support business decisions. Apply statistical and machine learning techniques to large datasets. Bring strong business acumen to understand decision drivers for internal and external customers. Collaborate with individuals and departments across the company to ensure insights are aligned with customer needs and drive value. To be successful in this role, your experience and competencies are: Bachelor's degree in data science, engineering, or a related field. (Adv. degrees a plus.) 5+ years of experience in data science, including at least 3 years in industrial or operational environments. Strong communication and project management skills are critical. Proficiency in data pipeline tools (e.g., Spark, Airflow) and cloud platforms (Azure, AWS, GCP). Strong understanding of data modeling principles and database technologies (SQL/NoSQL). Hands-on experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and statistical analysis. Ability to work across data architecture design and data science experimentation. Programming: Python, SQL, and optionally Scala or Java. Familiarity with distributed systems and big data technologies. Strong communication skills for translating technical insights into business value. Ability to work across technical, commercial, and customer-facing teams. Supervisor and Leadership Expectations: This role will not have supervisory or managerial responsibilities. This role will have program management responsibilities. Our Culture and Values: Employees who become part of Astec embody the values below throughout their work.
Continuous devotion to meeting the needs of our customers Honesty and integrity in all aspects of business Respect for all individuals Preserving entrepreneurial spirit and innovation Safety, quality and productivity as means to ensure success EQUAL OPPORTUNITY EMPLOYER As an Equal Opportunity Employer, Astec does not discriminate on the basis of race, creed, color, religion, gender (sex), sexual orientation, gender identity, marital status, national origin, ancestry, age, disability, citizenship status, a person's veteran status or any other characteristic protected by law or executive order.
    $68k-94k yearly est. 4d ago
  • Junior Data Scientist (TS/SCI)

    Take2 Consulting, LLC (3.7 company rating)

    Data scientist job in Springfield, VA

    We are seeking a junior-level Data Science professional with a strong academic foundation and early hands-on experience to join our team as an Exploitation Specialist. The ideal candidate will hold a bachelor's degree in a data science-related field and bring internship or project experience that demonstrates curiosity, initiative, and a willingness to learn from senior team members. This role is a great opportunity for someone eager to grow their technical skill set while supporting a high-impact mission.

    Required Qualifications: Active TS/SCI clearance with the willingness to obtain a CI polygraph. Ability to work onsite in Northern Virginia, 40 hours per week (telework options are extremely limited). Proficiency with Python and SQL.

    Preferred Qualifications: Familiarity with GEOINT collection and related NGA/NRO systems. Experience with additional programming languages such as R, JavaScript, HTML, and CSS. Understanding of object-oriented programming. Experience using visualization tools such as Grafana, Tableau, or Kibana. Ability to quickly learn new technologies, adapt to evolving mission requirements, and support the development/testing of new analytic methodologies.
    $64k-83k yearly est. 1d ago
  • Data Scientist

    Parker's Kitchen (4.2 company rating)

    Data scientist job in Savannah, GA

    We are looking for a Data Scientist with expertise in optimization and forecasting to help improve how we manage labor, staffing, and operational resources across our retail locations. This role is critical in building models and decision-support tools that ensure the right people, in the right place, at the right time - balancing customer service, efficiency, and cost. You will work closely with Operations, Finance, and Store Leadership teams to deliver practical solutions that improve labor planning, scheduling, and demand forecasting. The right candidate will be confident, resourceful, and excited to own both the technical and business-facing aspects of applying data science in a fast-paced retail environment. Responsibilities Build and maintain forecasting models (time-series, machine learning, and statistical) for sales and transactions. Develop and deploy optimization models (linear/mixed-integer programming, heuristics, simulation) to improve workforce scheduling and labor allocation. Partner with operations and finance to translate forecasts into actionable staffing and labor plans that reduce costs while maintaining service levels. Build dashboards and automated tools to track forecast accuracy, labor KPIs, and staffing effectiveness. Provide insights and “what-if” scenario modeling to support strategic workforce and budget planning. Knowledge, Skills, And Abilities Strong foundation in forecasting techniques (time-series models, regression, machine learning) and optimization methods (linear/mixed-integer programming, heuristics, simulation). Proficiency in Python or R for modeling and analysis, along with strong SQL skills for working with large-scale datasets. Knowledge of statistics, probability, and applied mathematics to support predictive and prescriptive modeling. Experience building and deploying predictive models, optimization tools, and decision-support solutions that drive measurable business outcomes. 
Strong data storytelling and visualization skills using tools such as Power BI, Tableau, or Looker. Ability to translate analytical outputs into clear, actionable recommendations for non-technical stakeholders. Strong collaboration skills with the ability to partner cross-functionally with Operations, Finance, and Store Leadership to drive adoption of data-driven approaches. Ability to work independently and resourcefully, combining technical depth with practical problem-solving to deliver results in a fast-paced environment. Education And Requirements Required: Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Industrial Engineering, Operations Research, or related field. Minimum 2-3 years of professional experience in Data Science or a related area. Strong skills in time-series forecasting (e.g., ARIMA, Prophet, ML-based approaches). Proficiency in optimization techniques (linear programming, integer programming). Strong Python or R programming skills. SQL expertise for large, complex datasets. Strong communication skills with the ability to partner with business stakeholders. Preferred Experience in Retail, Restaurant, and/or Convenience Stores a plus. Experience with cloud platforms (Snowflake, AWS, GCP, Azure). Knowledge of BI tools (Tableau, Power BI, Looker). Physical Requirements Prolonged periods sitting/standing at a desk and working on a computer Must be able to lift up to 50 pounds Parker's is an equal opportunity employer committed to hiring a diverse workforce and sustaining an inclusive culture. Parker's does not discriminate on the basis of disability, veteran status or any other basis protected under federal, state, or local laws.
    $73k-100k yearly est. 2d ago
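The forecasting-to-staffing loop this role describes can be illustrated with a deliberately tiny sketch. All numbers are invented; a production system would use ARIMA/Prophet-class models and a mixed-integer solver rather than this stdlib-only naive seasonal forecast with a ceiling-division staffing rule:

```python
from statistics import mean

# Hypothetical transactions per day, keyed by weekday.
history = {
    "Mon": [310, 295, 320], "Tue": [280, 275, 290], "Sat": [510, 530, 495],
}

TRANSACTIONS_PER_CLERK = 80  # assumed service rate per clerk per shift
MIN_STAFF = 2                # floor to keep the store serviceable

def forecast(day):
    # Naive seasonal forecast: each weekday's demand = mean of its history.
    return mean(history[day])

def staff_needed(day):
    # Ceiling division on integer demand, never below the staffing floor.
    demand = int(forecast(day))
    return max(MIN_STAFF, -(-demand // TRANSACTIONS_PER_CLERK))

plan = {day: staff_needed(day) for day in history}
```

Even this toy version shows the core translation the posting asks for: a demand forecast becomes an explicit, auditable staffing number per day.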
  • Senior Data Scientist

    Harnham

    Data scientist job in Charlottesville, VA

    Senior / Principal Data Scientist Employment Type: 12 Month Contract-to-hire About Us Our client is an engineering consultancy delivering advanced testing, analytics, and consulting services to organizations worldwide. Our mission is to leverage cutting-edge technology and data-driven insights to improve safety and performance in sports and related industries. Role Overview We are seeking a Senior / Principal Data Scientist to lead modeling and analytics initiatives across diverse domains, including: Computer Vision: Detect and interpret events from game video. Sensor Data Analysis: Event detection from athlete-worn devices. Generative AI: Develop solutions to deliver actionable insights to end users. You will design and implement machine learning models, build simulation-based tools, and create dashboards that provide injury risk assessments and mitigation strategies. This role involves research, algorithm development, and presenting findings to executive stakeholders. Key Responsibilities Develop and implement advanced machine learning models and algorithms. Build analytical tools and dashboards using frameworks like Shiny, Dash, or Streamlit. Collaborate with cross-functional teams to deliver actionable insights. Present research and analytics to leadership and clients. Qualifications Education: MS or PhD in Computer Science, Statistics, or related STEM field. Strong understanding of machine learning theory and structure. Proficiency in Python or R and experience with common ML libraries. Experience with SQL/NoSQL databases and distributed processing. 5+ years of post-graduate experience in engineering, consulting, or leadership roles. Preferred: Background in biomechanics or sports analytics. Publications or conference contributions in relevant fields. Experience with professional sports organizations. Consulting experience with executive-level presentations. 
Ideal Candidate You are passionate about applying data science to enhance athlete safety and performance. You thrive in collaborative environments and enjoy building innovative solutions that make a real-world impact.
    $80k-111k yearly est. 2d ago
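The sensor-data event detection mentioned in the Harnham role can be sketched as a minimal threshold-crossing detector. The signal values are synthetic and the thresholds invented; real athlete-worn-device pipelines would filter, calibrate, and debounce far more carefully:

```python
def detect_events(samples, threshold=8.0, min_gap=3):
    """Return indices where the signal crosses above `threshold`,
    ignoring crossings within `min_gap` samples of the previous event."""
    events, last = [], -min_gap
    for i, value in enumerate(samples):
        if value >= threshold and i - last >= min_gap:
            events.append(i)
            last = i
    return events

# Synthetic accelerometer magnitudes (g): two impact-like spikes.
signal = [1.0, 1.2, 9.5, 9.1, 1.1, 1.0, 1.3, 8.4, 1.2]
impacts = detect_events(signal)
```

The `min_gap` refractory window keeps one physical impact from being counted twice when consecutive samples exceed the threshold.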
  • Senior Data Scientist

    Marathon TS

    Data scientist job in Knoxville, TN

    This person is responsible for extracting insights and building predictive models from data. Analyzes curated data, builds ML models, and validates them with experiments. The DS researches and defines business problems and translates them into analytical or ML tasks. Conducts exploratory data analysis to identify patterns, correlations, and anomalies. They will be in charge of designing, training, and evaluating statistical, ML, or AI models. Works with SMEs to determine use cases, model requirements, and alternative solutions. Defines requirements and specifications from which Data Specialists build data structures. Works closely with Data Engineers and customers to ensure correct implementation. Required Experience: Bachelor's in Mathematics, Operations Research, Business, Economics, or a related field preferred. Minimum 7+ years designing and implementing mathematical and statistical models. Strong experience with Python/R, PyTorch, Scikit-Learn, Jupyter, visualization, and A/B testing. SQL (Oracle, MySQL, or Hive). Oracle DB and Hadoop experience preferred. Ability to work independently and under deadlines. Strong communication and problem-solving skills. Clearance Requirement: Must be a US Citizen with an active DoD Secret security clearance. Marathon TS is committed to the development of a creative, diverse and inclusive work environment. In order to provide equal employment and advancement opportunities to all individuals, employment decisions at Marathon TS will be based on merit, qualifications, and abilities. Marathon TS does not discriminate against any person because of race, color, creed, religion, sex, national origin, disability, age or any other characteristic protected by law (referred to as "protected status").
    $70k-96k yearly est. 1d ago
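The A/B testing experience this posting asks for boils down to comparisons like a two-proportion z-test. A stdlib-only sketch with made-up conversion counts (the normal CDF is built from `math.erf`):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF: Phi(x) = 0.5*(1+erf(x/sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 200/1000 conversions; variant B: 150/1000 (hypothetical counts).
z, p = two_proportion_z(200, 1000, 150, 1000)
```

With these counts the difference is comfortably significant at the usual 0.05 level; in practice one would also pre-register the sample size and check assumptions before reading the p-value.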
  • Senior Data Scientist

    Infocepts (3.7 company rating)

    Data scientist job in McLean, VA

    Purpose: As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution. Key Result Areas and Activities: Design, build, and deploy AI/ML solutions using modern cloud and data platforms. Lead data science projects across industries, ensuring alignment with business goals. Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases. Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark. Communicate findings effectively through data visualizations and stakeholder presentations. Drive adoption of MLOps practices for scalable and reliable model deployment. Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing. Roles & Responsibilities Essential Skills 5+ years of experience in applied AI and machine/deep learning. Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib. Proficiency in Python and SQL for model development and data manipulation. Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI. Familiarity with cloud platforms: AWS, Azure, and GCP. Solid foundation in advanced statistical methods and probabilistic analysis. Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management. Experience with visualization tools like Tableau, Power BI, or similar. Familiarity with ML workflow orchestration and MLOps practices. Desirable Skills Experience with LLMs (Large Language Models) and agentic AI workflows. 
Familiarity with modern data platforms like Snowflake. Exposure to real-time data processing in cloud-native environments. Contributions to open-source AI projects or publications in data science communities. Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field. Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus. Qualities: Able to consult, write, and present persuasively Able to work in a self-organized and cross-functional team Able to iterate based on new information, peer reviews, and feedback Able to work seamlessly with clients across multiple geographies Research focused mindset Excellent analytical, presentation, reporting, documentation and interactive skills "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
    $78k-108k yearly est. 2d ago
  • Senior Data Scientist

    LTIMindtree

    Data scientist job in Charlotte, NC

    About Us: LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************

    Role: Senior Data Scientist - Generative AI & Solution Architecture
    Location: Charlotte, NC
    Work Mode: Hybrid (2-3 days from client office)
    Experience: 10+ Years

    Job Description: We are looking for a Senior Data Scientist to lead the design and implementation of a Generative AI-driven Contract Risk Reporting System. This role involves building advanced RAG pipelines, a reporting engine, and an interactive chatbot. The ideal candidate combines deep technical expertise, AI/ML fundamentals, and the ability to interpret legal contract language into actionable solution requirements, while engaging directly with clients.

    Key Responsibilities: Architect and deliver AI solutions for contract risk analysis and reporting. Design and implement LLM-based RAG systems, reporting dashboards, and conversational interfaces. Translate legal domain requirements into technical specifications. Collaborate with clients for requirement gathering, solution validation, and presentations. Ensure MLOps best practices, model monitoring, and Model Risk Management (MRM) compliance.
Required Skills: Expertise in Generative AI, LLMs, RAG architectures, and NLP techniques. Strong foundation in machine learning algorithms, model evaluation, and feature engineering. Hands-on experience with MLOps pipelines, model governance, and risk controls. Proficiency in Python, LangChain, Vector Databases, and Cloud platforms (AWS/Azure/GCP). Ability to interpret legal contract language and map it to technical solutions. Excellent communication, client engagement, and solution design skills. Preferred: Experience in legal tech, contract analytics, or risk management solutions. Familiarity with prompt engineering, domain-specific fine-tuning, and LLM optimization. LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
    $77k-107k yearly est. 5d ago
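The RAG pipelines this role centers on pair a retriever with an LLM. The retrieval half can be sketched with stdlib-only cosine similarity over bag-of-words vectors; the toy contract clauses below are invented, and a production system would use embedding models and a vector database instead:

```python
from collections import Counter
from math import sqrt

# Hypothetical contract clauses standing in for an indexed document store.
clauses = [
    "termination for convenience requires thirty days written notice",
    "liability is capped at fees paid in the prior twelve months",
    "either party may assign this agreement with prior written consent",
]

def vec(text):
    # Bag-of-words term counts as a crude stand-in for an embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm

def retrieve(query, docs, k=1):
    q = vec(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)
    return ranked[:k]  # top-k clauses would be stuffed into the LLM prompt

best = retrieve("what is the cap on liability", clauses)[0]
```

In the full pipeline, the retrieved clauses become context for the LLM's answer, which is what grounds the chatbot's responses in the actual contract text.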
  • Data Engineer

    Sharp Decisions (4.6 company rating)

    Data scientist job in Charlotte, NC

    C# Senior Developer. RESOURCE TYPE: W2 Only. Charlotte, NC - Hybrid. Mid (5-7 Years).

    Role Description: A leading Japanese bank is in the process of driving a Digital Transformation across its Americas Division as it continues to modernize technology, strengthen its data-driven approach, and support future growth. As part of this initiative, the firm is seeking an experienced Data Engineer to support the design and development of a strategic enterprise data platform supporting Capital Markets and affiliated securities businesses. This role will contribute to the development of a scalable, cloud-based data platform leveraging Azure technologies, supporting multiple business units across North America and global teams.

    Role Objectives: Serve as a member of the Data Strategy team, supporting broker-dealer and swap-dealer entities across the Americas Division. Participate in the active development of the enterprise data platform, beginning with the establishment of reference data systems for securities and pricing data, and expanding into additional data domains. Collaborate closely with internal technology teams while adhering to established development standards and best practices. Support the implementation and expansion of the strategic data platform on the bank's Azure Cloud environment. Contribute technical expertise and solution design aligned with the overall Data Strategy roadmap.

    Qualifications and Skills: Proven experience as a Data Engineer, with strong hands-on experience in Azure cloud environments. Experience implementing solutions using Azure Cloud Services, Azure Data Factory, Azure Data Lake Gen2, Azure Databases, Azure Data Fabric, API Gateway management, and Azure Functions. Strong experience with Azure Databricks. Advanced SQL skills across relational and NoSQL databases. Experience developing APIs using Python (FastAPI or similar frameworks). Familiarity with DevOps and CI/CD pipelines (Git, Jenkins, etc.). Strong understanding of ETL / ELT processes.
Experience within financial services, including exposure to financial instruments, asset classes, and market data, is a strong plus.
    $78k-101k yearly est. 2d ago
  • Data Conversion Engineer

    Paymentus (4.5 company rating)

    Data scientist job in Charlotte, NC

    Summary/Objective Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded. About the Role The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part in ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform. 
Responsibilities Develop data conversion procedures using SQL, Java and Linux scripting Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications Develop new specifications to satisfy new customers and products Serve as the primary point of contact/driver for all technical related conversion activities Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates Maintain and update technical conversion documentation to share with internal and external clients and partners Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills Adapt and creatively solve encountered problems under high stress and tight deadlines Learn database structure, business logic and combine all knowledge to improve processes Be flexible Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication Management of multiple projects and conversion implementations Ability to proactively troubleshoot and solve problems with limited supervision Qualifications B.S. Degree in Computer Science or comparable experience Strong knowledge of Linux and the command line interface Exceptional SQL skills Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.) 
Familiarity with various online banking applications and understanding of third-party integrations is a plus Effective written and verbal communication skills Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues Communication; ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels Strong attention to detail Proficiency with raw data, analytics, or data reporting tools Preferred Skills Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries Experience with front end web interfaces (HTML5, Javascript, CSS3) Cloud technologies (AWS, GCP, Azure) Work Environment This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones. Physical Demands This role requires sitting or standing at a computer workstation for extended periods of time. Position Type/Expected Hours of Work This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand. Travel No travel is required for this position. Other Duties Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice. Equal Opportunity Statement Paymentus is an equal opportunity employer. 
We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment. Reasonable Accommodation Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
    $82k-114k yearly est. 2d ago
  • AWS Data Engineer (Only W2)

    Ampstek

    Data scientist job in Charlotte, NC

Title: AWS Data Engineer Experience: 10 years Must Have Skills: • Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services. • Experience in Snowflake and Data Build Tool (DBT). • Expertise in DBT, NodeJS, and Python. • Expertise in Informatica, Power BI, databases, and Cognos. Detailed Job Description: • Proven experience in leading teams across locations. • Knowledge of DevOps processes, Infrastructure as Code, and their purpose. • Good understanding of data warehouses, their purpose, and implementation. • Good communication skills. Kindly share the resume at ******************
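By way of illustration only, the staging-layer work a DBT/Snowflake role like this involves can be sketched in plain Python; the column names and the latest-record dedup rule below are invented for the example:

```python
# Hypothetical sketch: the kind of column renaming and deduplication a dbt
# staging model performs, written in plain Python to make the logic explicit.

def stage_orders(raw_rows):
    """Rename raw columns and keep only the latest record per order id."""
    latest = {}
    for row in raw_rows:
        rec = {
            "order_id": row["ORDER_ID"],        # invented source column names
            "amount": float(row["AMT"]),
            "updated_at": row["UPDATED_AT"],
        }
        cur = latest.get(rec["order_id"])
        if cur is None or rec["updated_at"] > cur["updated_at"]:
            latest[rec["order_id"]] = rec
    return sorted(latest.values(), key=lambda r: r["order_id"])

raw = [
    {"ORDER_ID": 1, "AMT": "10.5", "UPDATED_AT": "2024-01-01"},
    {"ORDER_ID": 1, "AMT": "12.0", "UPDATED_AT": "2024-01-02"},
    {"ORDER_ID": 2, "AMT": "7.25", "UPDATED_AT": "2024-01-01"},
]
staged = stage_orders(raw)
```

In practice this logic would live in a dbt SQL model materialized in Snowflake; the Python form is just to show the transformation shape.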
    $77k-103k yearly est. 4d ago
  • Palantir Data Engineer

    Keylent Inc.

    Data scientist job in Charlotte, NC

Build and maintain data pipelines and workflows in Palantir Foundry. Design, train, and deploy ML models for classification, optimization, and forecasting use cases. Apply feature engineering, data cleaning, and modeling techniques using Python, Spark, and ML libraries. Create dashboards and data applications using Slate or Streamlit to enable operational decision-making. Implement generative AI use cases using large language models (GPT-4, Claude, etc.).
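As a hedged, Foundry-agnostic sketch of the classification work described above, here is a minimal nearest-centroid classifier in pure Python; the feature values and labels are made up, and a real pipeline would use Spark ML or a library such as scikit-learn:

```python
# Illustrative only: a nearest-centroid classifier of the kind one might
# prototype before moving to Spark ML. Data and labels are invented.

def fit_centroids(rows, labels):
    """Compute the per-class mean feature vector."""
    sums, counts = {}, {}
    for x, y in zip(rows, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

X = [[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.9, 1.1]]
y = ["low", "low", "high", "high"]
model = fit_centroids(X, y)
```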
    $77k-103k yearly est. 1d ago
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data scientist job in Atlanta, GA

Job Description: GCP Data Engineer Rate: $50/hr. on W2 (No C2C) We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices. Responsibilities Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform. Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer. Write efficient and maintainable Python code to support data ingestion, transformation, and automation. Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling. Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions. Ensure data quality, governance, and reliability across all pipelines. Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets. Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows. Must-Have Skills Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning). Proficiency in Python for data engineering and pipeline automation. Hands-on experience with Cloud Data Fusion for ETL/ELT development. Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices. Solid understanding of cloud data architecture and distributed processing. Good-to-Have Skills Experience with Vertex AI for ML pipeline integration or model deployment. Familiarity with Dataproc (Spark/Hadoop) for large-scale processing. 
Knowledge of CI/CD workflows, Git, and DevOps best practices. Experience with Cloud Logging/Monitoring tools.
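The star-schema modeling this posting calls out boils down to fact-to-dimension lookups. A hedged sketch in plain Python with invented table and column names follows; in BigQuery this would be a SQL JOIN with GROUP BY:

```python
# Illustration only: joining a fact table to a dimension table, the
# star-schema lookup a BigQuery SQL join expresses. Names are invented.

dim_customer = {
    101: {"customer_name": "Acme", "region": "US-East"},
    102: {"customer_name": "Globex", "region": "EU-West"},
}
fact_sales = [
    {"customer_id": 101, "amount": 250.0},
    {"customer_id": 102, "amount": 90.0},
    {"customer_id": 101, "amount": 60.0},
]

def revenue_by_region(facts, dim):
    """Aggregate fact rows by a dimension attribute (here, region)."""
    totals = {}
    for row in facts:
        region = dim[row["customer_id"]]["region"]
        totals[region] = totals.get(region, 0.0) + row["amount"]
    return totals
```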
    $50 hourly 3d ago
  • Lead Azure Databrick Engineer

    Syren

    Data scientist job in Atlanta, GA

****************Individual Contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time************ An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions. Must Have Skills: Experience migrating from other platforms to Databricks. Proficiency in Databricks and Azure Cloud, Databricks Asset Bundles; holistic vision of the data strategy. Proficiency in data streaming and data modeling. Experience in architecting at least two large-scale big data projects. Strong understanding of data scaling and its complexities. Data archiving and purging mechanisms. Job Requirements • Degree in computer science or equivalent preferred • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks. • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques. • Working knowledge and experience with "Cloud Architectures" (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure Cloud computing • Should have architected solutions for Cloud environments such as Microsoft Azure and/or GCP • Experience with debugging and performance tuning in distributed environments • Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy • Experience dealing with structured and unstructured data. • Must have Python and PySpark experience. • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 4d ago
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data scientist job in Atlanta, GA

Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that: Address specific business challenges, integrate processes, and create great experiences Connect our work to shared goals that propel WestRock forward in the Digital Age Imagine how technology can advance the way we work by using disruptive technology We are looking for forward-thinking technologists that can accelerate our focus areas such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology. As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies. How you will impact WestRock: Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data. Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs. Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance. Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling. Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies. Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform. 
What you need to succeed: Education: Bachelor's degree in computer science or similar At least 6 years of strong Data Engineering experience Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development. Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines. Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing. Proven experience deploying data solutions on AWS or Azure, with strong understanding of cloud-native services. Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment. Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment. Good communication skills, with the ability to collaborate effectively across technical and non-technical teams. Good analytical and troubleshooting abilities. What we offer: Corporate culture based on integrity, respect, accountability and excellence Comprehensive training with numerous learning and development opportunities An attractive salary reflecting skills, competencies and potential A career with a global packaging company where Sustainability, Safety and Inclusion are business drivers and foundational elements of the daily work.
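The workflow-orchestration side of this role rests on dependency ordering between tasks. A simplified sketch (no cycle detection, task names invented) of the topological ordering a scheduler like Airflow derives from a DAG:

```python
# Illustration only: topological ordering of pipeline tasks, the dependency
# resolution an orchestrator performs. No cycle detection, for brevity.

def topo_order(deps):
    """deps maps task -> set of upstream tasks it waits on."""
    order, done = [], set()

    def visit(task):
        if task in done:
            return
        for up in sorted(deps.get(task, ())):  # run upstreams first
            visit(up)
        done.add(task)
        order.append(task)

    for task in sorted(deps):
        visit(task)
    return order

pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
}
```

In Airflow the same dependencies would be declared with operators and the `>>` operator; the ordering logic itself is what the sketch shows.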
    $75k-100k yearly est. 1d ago
  • Data Engineer - OrcaWorks AI

    Orcaworks.Ai

    Data scientist job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred) About OrcaWorks AI At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure. Key Responsibilities Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models. Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa. Implement data validation, transformation, and storage solutions using modern ETL frameworks. Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training. Support database management, SQLModel, and data governance practices across services. Required Skills & Qualifications Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering. Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks. Hands-on experience with Airbyte, Prefect, and DBT. Familiarity with search and indexing systems like Vespa or ElasticSearch. Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration. Strong understanding of data security and applied AI workflows.
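As an illustration of the data-validation responsibility above, here is a minimal row-level validation step in plain Python; the required fields are assumptions made for the example:

```python
# Hedged sketch: a row-level validation step of the kind an ETL pipeline
# applies before loading. The required fields are invented examples.

def validate_rows(rows, required=("id", "email")):
    """Split rows into valid records and rejects annotated with missing fields."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required if not row.get(f)]
        if missing:
            rejected.append({"row": row, "missing": missing})
        else:
            valid.append(row)
    return valid, rejected

batch = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},
]
ok, bad = validate_rows(batch)
```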
    $75k-100k yearly est. 5d ago
  • Senior Data Engineer

    Zillion Technologies, Inc. 3.9company rating

    Data scientist job in McLean, VA

The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing. Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role. Experience in Angular and DevOps is a nice to have for this role. Must Have Qualifications: PySpark/Python based microservices, AWS EKS, Postgres SQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify. Responsibilities: Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project. New system: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest. Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Current system: Informatica, SAS, AutoSys, DB2. Write automated tests using Behave/Cucumber, based on the new microservices-based architecture. Promote top code quality and solve issues related to performance tuning and scalability. Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, plus experience with SonarQube and Fortify. Able to communicate and engage with business teams, analyze the current business requirements (BRS documents), and create necessary data mappings. Preferred: strong skills and experience in reporting application development and data analysis. Knowledge of Agile methodologies and technical documentation.
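A minimal example of the Pytest-style unit testing this role emphasizes; the function under test is invented for illustration:

```python
# Illustration only: a small transformation function plus a Pytest-style
# test for it. The SSN-normalization rule is a made-up example.

def normalize_ssn(raw):
    """Strip non-digits and format as NNN-NN-NNNN; reject bad lengths."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) != 9:
        raise ValueError("expected 9 digits")
    return f"{digits[0:3]}-{digits[3:5]}-{digits[5:9]}"

def test_normalize_ssn():
    # Pytest discovers test_* functions and runs bare asserts like this.
    assert normalize_ssn("123 45 6789") == "123-45-6789"
```

Run with `pytest` on the file containing both functions; Behave/Cucumber would cover the same logic at the behavior level with Gherkin scenarios.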
    $77k-109k yearly est. 1d ago
  • Data Engineer

    A2C 4.7company rating

    Data scientist job in Alpharetta, GA

5 days onsite in Alpharetta, GA. Skills required: Python, data pipelines, data analysis, data modeling. Must have solid cloud experience. AI/ML. Strong problem-solving skills. Strong communication skills. A problem solver with the ability to analyze and research complex issues and problems, and propose actionable solutions and/or strategies. Solid understanding and hands-on experience with major cloud platforms. Experience in designing and implementing data pipelines. Must have experience with one of the following: GCP, AWS, or Azure - MUST have the drive to learn GCP.
    $77k-106k yearly est. 3d ago
  • Data Engineer

    Wissen Technology

    Data scientist job in Alpharetta, GA

Data Engineer, Wissen Technology, Alpharetta, GA The Wissen team continues to expand its footprint in Canada & the USA. More openings to come as we continue to grow the team! Please read below for a brilliant career opportunity. Role: Data Engineer Title: Associate Location: Alpharetta, GA (Day 1 Onsite) - Fulltime Mode of Work: Onsite Required experience: 1+ years Position Overview: We are seeking a detail-oriented and motivated Data Engineer with 1+ years of experience to support the design, development, and maintenance of data pipelines and data workflows. The ideal candidate should have strong SQL and Python skills, with a solid understanding of ETL concepts. Key Responsibilities: Assist in building, maintaining, and optimizing ETL pipelines. Work with structured and unstructured data to support analytics and reporting needs. Collaborate with senior engineers and cross-functional teams to ensure data availability and quality. Perform data validation, cleaning, and transformation tasks. Monitor data pipeline performance and troubleshoot issues. Support implementation of best practices in data engineering and data management. Mandatory Skills: Strong proficiency in SQL. Good understanding of Python for data manipulation and scripting. Knowledge of basic ETL concepts and data workflow fundamentals. Benefits: 401K. Healthcare insurance for you and your family (medical, dental, vision). Short/long term disability insurance. Life insurance. Accidental death & disability insurance. 3 weeks of paid time off. Support and fee coverage for immigration needs. Remote office set-up stipend. Support for industry certifications. Additional cash incentives. Preferred Skills: Understanding of Hadoop ecosystem basics. Familiarity with Kafka for real-time data streaming. We are: A high-end technical consulting firm built and run by highly qualified technologists. 
Our workforce consists of 5000+ highly skilled professionals, with leadership from Wharton, MIT, IITs, IIMs, and NITs and decades of experience at Goldman Sachs, Morgan Stanley, MSCI, Deutsche Bank, Credit Suisse, Verizon, British Telecom, ISRO etc. Without any external funding or investments, Wissen Technology has grown its revenues by 100% every other year since it started as a subsidiary of Wissen Group in 2015. We have a global presence with offices in the US, India, UK, Australia, Mexico, and Canada. You are: A true tech or domain ninja. Or both. Comfortable working in a quickly growing profitable startup, have a “can do” attitude and are willing to take on any task thrown your way. You will: · Develop and promote the company's culture of engineering excellence. · Define, develop and deliver solutions at a top tier investment bank or another esteemed client. Perform other duties as needed Your Education and Experience: · We value candidates who can execute on our vision and help us build an industry-leading organization. Rather than requiring a specific certification or number of years of experience, we ask that you be able to demonstrate your skills in the areas outlined above. · Graduate-level degree in computer science, engineering, or related technical field Wissen embraces diversity and is an equal opportunity employer. We are committed to building a team that represents a variety of backgrounds, skills, and abilities. We believe that the more inclusive our team is, the better our work will be. All qualified applicants, including but not limited to LGBTQ+, Minorities, Females, the Disabled, and Veterans, are encouraged to apply. About Wissen Technology: The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. 
We help clients build world-class products. We empower businesses with a dynamic portfolio of services and accelerators tailored to today's digital demands and based on future ready technology stack. Our services include Industry Leading Custom Software Development, AI-Driven Software Engineering, Generative AI & Machine Learning, Real-Time Data Analytics & Insights, Interactive Data Visualization & Decision Intelligence, Intelligent Process Automation, Multi-Cloud & Hybrid Cloud Strategies, Cross-Platform Mobile Experiences, CI/CD-Powered Agile DevOps, Automated Quality Engineering, and cutting-edge integrations. Certified as a Great Place to Work for five consecutive years (2020-2025) and recognized as a Top 20 AI/ML vendor by CIO Insider, Wissen Group has delivered multimillion-dollar projects for over 20 Fortune 500 companies. Wissen Technology delivers exceptional value on mission-critical projects through thought leadership, ownership, and reliable, high-quality, on-time delivery. Our industry-leading technical expertise stems from the talented professionals we attract. Committed to fostering their growth and providing top-tier career opportunities, Wissen ensures an outstanding experience and value for our clients and employees. We serve diverse industries, including Banking, E-commerce, Telecom, Healthcare, Manufacturing, and Energy. 
Job Type: Full-time Schedule: Monday to Friday
    $75k-100k yearly est. 1d ago

Learn more about data scientist jobs

How much does a data scientist earn in Oak Ridge, TN?

The average data scientist in Oak Ridge, TN earns between $60,000 and $110,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Oak Ridge, TN

$81,000