Post job

Data scientist jobs in Johns Creek, GA

- 378 jobs
All
Data Scientist
Data Engineer
Senior Data Scientist
  • Data Scientist

    Coforge

    Data scientist job in Atlanta, GA

    Role: Data Scientist
    Mode of Hire: Full Time

    Key Responsibilities
    • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
    • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
    • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
    • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
    • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
    • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration.
    • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.

    Skills Required:
    • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
    • 5+ years of experience in a data science, ML engineering, or analytics role.
    • Strong programming skills in SQL, Python, and ML techniques.
    • Experience with Azure Cloud, Databricks, and/or Snowflake.
    • Experience building and deploying machine learning models in production environments, including hands-on experience with Databricks, SparkML, and MLflow integration.
    • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
    • Experience with tools such as Git, MLflow, Docker, and workflow schedulers.
    • Ability to communicate complex technical work to non-technical stakeholders.
    • Experience with scalable model training environments and distributed computing.

    Preferred Qualifications
    • Master's degree in a quantitative or technical discipline.
    • Experience in financial services, fintech, or enterprise B2B analytics.
    • Knowledge of A/B testing, causal inference, and statistical experimentation.
    • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 3d ago
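The data wrangling and feature engineering work the posting above describes (ingest, clean, derive model-ready features) can be illustrated with a minimal stdlib-only sketch; the record fields and derived features here are hypothetical, not from the posting:

```python
from datetime import date

# Hypothetical raw records, as they might arrive from an API or SQL extract.
raw = [
    {"customer_id": "C1", "signup": "2023-01-15", "orders": "12", "spend": "340.50"},
    {"customer_id": "C2", "signup": "2024-06-01", "orders": "",   "spend": "0"},
]

def engineer_features(record, today=date(2025, 1, 1)):
    """Clean one raw record and derive model-ready features."""
    orders = int(record["orders"] or 0)          # impute missing counts as 0
    spend = float(record["spend"])
    y, m, d = map(int, record["signup"].split("-"))
    tenure_days = (today - date(y, m, d)).days   # derived feature
    return {
        "customer_id": record["customer_id"],
        "tenure_days": tenure_days,
        "orders": orders,
        "avg_order_value": spend / orders if orders else 0.0,
    }

features = [engineer_features(r) for r in raw]
print(features[0]["avg_order_value"])  # 28.375
```

In a real pipeline these transforms would typically run in pandas or SparkML rather than loops, but the shape of the work (parsing, imputation, derived columns) is the same.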
  • Data Engineer

    Clear Point Consultants (4.6 company rating)

    Data scientist job in Atlanta, GA

    No C2C. We're looking for a hands-on Data Engineer to help build, scale, and fine-tune real-time data systems using Kafka, AWS, and a modern data stack. In this role, you'll work deeply with streaming data, ETL, distributed systems, and PostgreSQL to power analytics, product innovation, and AI-driven use cases. You'll also get to work with AI/ML frameworks, automation, and MLOps tools to support advanced modeling and a highly responsive data platform.

    What You'll Do
    • Design and build real-time streaming pipelines using Kafka, Confluent Schema Registry, and ZooKeeper
    • Build and manage cloud-based data workflows using AWS services like Glue, EMR, EC2, and S3
    • Optimize and maintain PostgreSQL and other databases with strong schema design, advanced SQL, and performance tuning
    • Integrate AI and ML frameworks (TensorFlow, PyTorch, Hugging Face) into data pipelines for training and inference
    • Automate data quality checks, feature generation, and anomaly detection using AI-powered monitoring and observability tools
    • Partner with ML engineers to deploy, monitor, and continuously improve machine learning models in both batch and real-time pipelines using tools like MLflow, SageMaker, Airflow, and Kubeflow
    • Experiment with vector databases and retrieval-augmented generation (RAG) pipelines to support GenAI and LLM initiatives
    • Build scalable, cloud-native, event-driven architectures that power AI-driven data products

    What You Bring
    • Bachelor's degree in Computer Science, Engineering, Math, or a related technical field
    • 3+ years of hands-on data engineering experience with Kafka (Confluent or open-source) and AWS
    • Experience with automated data quality, monitoring, and observability tools
    • Strong SQL skills and solid database fundamentals with PostgreSQL and both traditional and NoSQL databases
    • Proficiency in Python, Scala, or Java for pipeline development and AI integrations
    • Experience with synthetic data generation, vector databases, or GenAI-powered data products
    • Hands-on experience integrating ML models into production data pipelines using frameworks like PyTorch or TensorFlow and MLOps tools such as Airflow, MLflow, SageMaker, or Kubeflow
    $98k-127k yearly est. 2d ago
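The produce/consume pattern behind the streaming pipelines this posting describes can be sketched with the standard library alone. This is not the Kafka client API; a `queue.Queue` stands in for a topic, and the sentinel-based shutdown is an artifact of the sketch (real Kafka topics are unbounded streams):

```python
import queue
import threading

# A stdlib stand-in for a Kafka topic: thread-safe, FIFO, bounded.
topic = queue.Queue(maxsize=100)
SENTINEL = object()
results = []

def producer():
    for i in range(5):
        topic.put({"event_id": i, "value": i * 10})  # analogous to producer.send()
    topic.put(SENTINEL)  # end-of-stream marker, for the sketch only

def consumer():
    while True:
        msg = topic.get()
        if msg is SENTINEL:
            break
        results.append(msg["value"] + 1)  # trivial per-event transform

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [1, 11, 21, 31, 41]
```

The essential properties carry over: producers and consumers are decoupled, backpressure comes from the bounded buffer, and per-event transforms happen on the consumer side.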
  • Data Engineer

    A2C (4.7 company rating)

    Data scientist job in Alpharetta, GA

    5 days onsite in Alpharetta, GA

    Skills required:
    • Python
    • Data pipelines
    • Data analysis
    • Data modeling
    • Solid cloud experience (must have)
    • AI/ML
    • Strong problem-solving skills
    • Strong communication skills

    A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies. Solid understanding of, and hands-on experience with, major cloud platforms. Experience in designing and implementing data pipelines. Must have experience with one of the following: GCP, AWS, or Azure, and must have the drive to learn GCP.
    $77k-106k yearly est. 2d ago
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data scientist job in Atlanta, GA

    Job Description: GCP Data Engineer
    Rate: $50/hr. on W2 (No C2C)

    We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices.

    Responsibilities
    • Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform.
    • Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer.
    • Write efficient and maintainable Python code to support data ingestion, transformation, and automation.
    • Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling.
    • Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions.
    • Ensure data quality, governance, and reliability across all pipelines.
    • Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets.
    • Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.

    Must-Have Skills
    • Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning).
    • Proficiency in Python for data engineering and pipeline automation.
    • Hands-on experience with Cloud Data Fusion for ETL/ELT development.
    • Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, Pub/Sub.
    • Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices.
    • Solid understanding of cloud data architecture and distributed processing.

    Good-to-Have Skills
    • Experience with Vertex AI for ML pipeline integration or model deployment.
    • Familiarity with Dataproc (Spark/Hadoop) for large-scale processing.
    • Knowledge of CI/CD workflows, Git, and DevOps best practices.
    • Experience with Cloud Logging/Monitoring tools.
    $50 hourly 2d ago
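The star-schema modeling and SQL rollups this posting calls for look essentially the same in any SQL dialect. BigQuery itself can't run in a local sketch, so this example uses the stdlib `sqlite3` module with a hypothetical one-fact, one-dimension schema to show the shape of a typical warehouse query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical star schema: one fact table keyed to one dimension table.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "paper"), (2, "packaging")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 75.0)])

# Typical warehouse rollup: join fact to dimension, aggregate by attribute.
rows = cur.execute("""
    SELECT d.category, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('packaging', 75.0), ('paper', 150.0)]
```

In BigQuery the performance-tuning concerns shift to partitioning, clustering, and scanned-byte costs, but the modeling pattern (narrow facts, descriptive dimensions, join-then-aggregate) is the same.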
  • Data Engineer - OrcaWorks AI

    OrcaWorks AI

    Data scientist job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred)

    About OrcaWorks AI
    At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.

    Key Responsibilities
    • Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models.
    • Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa.
    • Implement data validation, transformation, and storage solutions using modern ETL frameworks.
    • Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
    • Support database management, SQLModel, and data governance practices across services.

    Required Skills & Qualifications
    • Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
    • Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
    • Hands-on experience with Airbyte, Prefect, and dbt.
    • Familiarity with search and indexing systems like Vespa or ElasticSearch.
    • Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
    • Strong understanding of data security and applied AI workflows.
    $75k-100k yearly est. 4d ago
  • Lead Azure Databricks Engineer

    Syren

    Data scientist job in Atlanta, GA

    Individual contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time.

    An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.

    Must-Have Skills:
    • Experience migrating from other platforms to Databricks
    • Proficiency in Databricks and Azure Cloud, including Databricks Asset Bundles, and a holistic vision of data strategy
    • Proficiency in data streaming and data modeling
    • Experience architecting at least two large-scale big data projects
    • Strong understanding of data scaling and its complexities
    • Data archiving and purging mechanisms

    Job Requirements
    • Degree in computer science or equivalent preferred
    • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks
    • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques
    • Working knowledge of and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing
    • Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP
    • Experience with debugging and performance tuning in distributed environments
    • Strong analytical skills, with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
    • Experience dealing with structured and unstructured data
    • Must have Python and PySpark experience
    • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 3d ago
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data scientist job in Atlanta, GA

    Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
    • Address specific business challenges, integrate processes, and create great experiences
    • Connect our work to shared goals that propel WestRock forward in the Digital Age
    • Imagine how technology can advance the way we work by using disruptive technology

    We are looking for forward-thinking technologists who can accelerate our focus areas, such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology.

    As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.

    How you will impact WestRock:
    • Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
    • Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs.
    • Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
    • Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
    • Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
    • Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.

    What you need to succeed:
    • Bachelor's degree in computer science or similar
    • At least 6 years of strong data engineering experience
    • Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development
    • Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines
    • Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing
    • Proven experience deploying data solutions on AWS or Azure, with a strong understanding of cloud-native services
    • Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment
    • Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment
    • Good communication skills, with the ability to collaborate effectively across technical and non-technical teams
    • Good analytical and troubleshooting abilities

    What we offer:
    • Corporate culture based on integrity, respect, accountability, and excellence
    • Comprehensive training with numerous learning and development opportunities
    • An attractive salary reflecting skills, competencies, and potential
    • A career with a global packaging company where Sustainability, Safety, and Inclusion are business drivers and foundational elements of the daily work
    $75k-100k yearly est. 5d ago
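The workflow orchestration this posting describes (Airflow-style DAGs of dependent tasks) reduces to topological ordering of a dependency graph. A minimal sketch with the stdlib `graphlib` module, using hypothetical task names rather than real Airflow operators:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline dependencies, Airflow-style:
# each task maps to the set of tasks that must finish first.
dag = {
    "extract_erp": set(),
    "extract_mes": set(),
    "transform": {"extract_erp", "extract_mes"},
    "load_ontology": {"transform"},
    "publish_report": {"load_ontology"},
}

# A valid execution order; an orchestrator would also run
# independent tasks (the two extracts) in parallel.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Airflow adds scheduling, retries, and backfills on top, but the core contract is the same: a task runs only after all of its upstream dependencies have succeeded.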
  • SAP Data Engineer

    IDR, Inc. (4.3 company rating)

    Data scientist job in Atlanta, GA

    IDR is seeking an SAP Data Engineer to join one of our top clients for an opportunity in Atlanta, GA. This role involves designing, building, and optimizing data pipelines and architecture to support advanced analytics and business intelligence in a dynamic enterprise environment.

    Position Overview for the SAP Data Engineer:
    • Develop and optimize ETL/ELT pipelines from SAP sources and other enterprise data sources
    • Design, build, and maintain data architecture supporting analytics and BI initiatives
    • Deep knowledge of SAP technologies including BW/4HANA, S/4HANA, ECC, BusinessObjects, and SAC
    • Experience with Celonis data engineering and enterprise DataOps practices
    • Collaborate with cross-functional teams to ensure data governance, security, and performance

    Requirements for the SAP Data Engineer:
    • 5-8+ years of hands-on experience in data engineering with SAP BW/4HANA, SAP ECC/S4, BusinessObjects, SAC, and Celonis
    • Strong knowledge of data integration techniques such as ODP, SLT, ABAP, SQL, and CDS views
    • Experience developing and customizing BI extraction processes using ABAP
    • Familiarity with SAP Datasphere, hybrid architecture, and SAP BPC
    • Understanding of process excellence frameworks, Celonis EMS, DataOps, CI/CD practices, Snowflake, and Tableau

    What's in it for you?
    • Competitive compensation package
    • Full benefits: medical, vision, dental, and more!
    • Opportunity to get in with an industry-leading organization

    Why IDR?
    • 25+ years of proven industry experience in 4 major markets
    • Employee Stock Ownership Program
    • Dedicated Engagement Manager who is committed to you and your success
    • Medical, dental, vision, and life insurance
    • ClearlyRated's Best of Staffing Client and Talent Award winner 12 years in a row
    $74k-99k yearly est. 1d ago
  • ETL Databricks Data Engineer

    Capgemini (4.5 company rating)

    Data scientist job in Atlanta, GA

    We are seeking an ETL Databricks Data Engineer to join our team and help build robust, scalable data solutions. This role involves designing and maintaining data pipelines, optimizing ETL processes, and collaborating with cross-functional teams to ensure data integrity and accessibility.

    Key Responsibilities
    • Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
    • Create and optimize Python scripts for data transformation, automation, and integration tasks.
    • Develop and fine-tune SQL queries for data extraction, transformation, and loading.
    • Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
    • Ensure data integrity, security, and compliance with organizational standards.
    • Participate in code reviews and contribute to best practices in data engineering.

    Required Skills & Qualifications
    • 5 years of professional experience in data engineering or related roles.
    • Strong proficiency in Databricks (including Spark-based data processing).
    • Advanced programming skills in Python.
    • Expertise in SQL for querying and data modeling.
    • Familiarity with Azure Cloud and Azure Data Factory (ADF).
    • Understanding of ETL frameworks, data governance, and performance tuning.
    • Knowledge of CI/CD practices and version control tools (e.g., Git).
    • Exposure to BI tools such as Power BI or Tableau for data visualization.

    Life at Capgemini
    Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
    • Flexible work
    • Healthcare including dental, vision, mental health, and well-being programs
    • Financial well-being programs such as 401(k) and Employee Share Ownership Plan
    • Paid time off and paid holidays
    • Paid parental leave
    • Family building benefits like adoption assistance, surrogacy, and cryopreservation
    • Social well-being benefits like subsidized back-up child/elder care and tutoring
    • Mentoring, coaching, and learning programs
    • Employee Resource Groups
    • Disaster relief

    Disclaimer
    Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law.

    This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact. Click the following link for more information on your rights as an Applicant: **************************************************************************
    $77k-99k yearly est. 2d ago
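The "ensure data integrity" responsibility in ETL work above usually means validating rows before loading and quarantining the rest. A small stdlib sketch of that pattern; the field names and rules are hypothetical:

```python
# Minimal sketch of an ETL step with data-quality gates (rules are hypothetical).
def validate(row):
    """Return a list of rule violations for one row; an empty list means clean."""
    issues = []
    if row.get("order_id") is None:
        issues.append("missing order_id")
    if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
        issues.append("bad amount")
    return issues

def etl(rows):
    """Split extracted rows into loadable records and a reject pile."""
    loaded, rejected = [], []
    for row in rows:
        problems = validate(row)
        if problems:
            rejected.append((row, problems))  # quarantine for later review
        else:
            loaded.append({**row, "amount_cents": round(row["amount"] * 100)})
    return loaded, rejected

loaded, rejected = etl([
    {"order_id": 1, "amount": 19.99},
    {"order_id": None, "amount": 5.0},
    {"order_id": 3, "amount": -2.0},
])
print(len(loaded), len(rejected))  # 1 2
```

At Databricks/Spark scale the same gates become DataFrame filters or expectations, and the reject pile becomes a quarantine table, but the load-or-quarantine split is the core of the pattern.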
  • Data Engineer

    Synechron (4.4 company rating)

    Data scientist job in Alpharetta, GA

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our Challenge
    Join our data-driven enterprise and lead the design of scalable and high-performance big data solutions. You will craft architectures that handle vast volumes of data, optimize pipeline performance, and incorporate advanced governance and AI-powered processing to unlock actionable insights.

    Additional Information
    The base salary for this position varies based on geography and other factors. In accordance with law, the base salary for this role if filled within Alpharetta, GA is $120K - 125K/year & benefits (see below).

    The Role
    Responsibilities:
    • Design, build, and maintain scalable big data architectures supporting enterprise analytics and operational needs.
    • Develop, implement, and optimize data pipelines using Apache Airflow, Databricks, and other relevant technologies to ensure reliable data flow and process automation.
    • Manage and enhance data workflows for batch and real-time processing, ensuring efficiency and scalability.
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into robust data solutions.
    • Implement data governance, security, and compliance best practices to protect sensitive information.
    • Explore integrating AI/ML techniques into data pipelines, leveraging Databricks and other AI tools for predictive analytics and automation.
    • Develop monitoring dashboards and alert systems to ensure pipeline health and performance.
    • Stay current with emerging big data and cloud technologies, recommending best practices to improve system performance and scalability.

    Requirements:
    • 5+ years of proven experience in big data architecture design, including distributed storage and processing frameworks such as Hadoop, Spark, and Databricks.
    • Strong expertise in performance tuning for large-scale data systems.
    • Hands-on experience with Apache Airflow for workflow orchestration.
    • Proficiency in SQL for managing and querying large databases.
    • Extensive experience with Python for scripting, automation, and data processing workflows.
    • Experience working with cloud platforms (Azure, AWS, or GCP) preferred.

    Preferred, but not required:
    • Deep understanding of data governance and security frameworks to safeguard sensitive data.
    • Experience integrating AI/ML models into data pipelines using Databricks MLflow or similar tools.
    • Knowledge of containerization (Docker, Kubernetes) is a plus.

    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity & paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (varies by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5,000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.
    $120k-125k yearly 3d ago
  • Staff Data Scientist - Underwriting and Operations Analytics

    Travelers Insurance Company (4.4 company rating)

    Data scientist job in Alpharetta, GA

    **Who Are We?**
    Taking care of our customers, our communities and each other. That's the Travelers Promise. By honoring this commitment, we have maintained our reputation as one of the best property casualty insurers in the industry for over 170 years. Join us to discover a culture that is rooted in innovation and thrives on collaboration. Imagine loving what you do and where you do it.

    **Job Category** Data Science

    **Compensation Overview**
    The annual base salary range provided for this position is a nationwide market range and represents a broad range of salaries for this role across the country. The actual salary for this position will be determined by a number of factors, including the scope, complexity and location of the role; the skills, education, training, credentials and experience of the candidate; and other conditions of employment. As part of our comprehensive compensation and benefits program, employees are also eligible for performance-based cash incentive awards.

    **Salary Range** $161,400.00 - $266,300.00

    **Target Openings** 1

    **What Is the Opportunity?**
    As a Staff Data Scientist, you will build complex models that solve key business problems to support underwriting, risk control, and business operations. This may include the use of the most advanced technical tools in the data science practice, allowing you to develop sophisticated solutions that enhance risk segmentation, streamline decision-making processes, and drive operational excellence across these critical business functions.

    **What Will You Do?**
    + Lead business or technical projects focused on the design or development of analytical solutions.
    + Lead development of community best practices in AI/machine learning, statistical techniques, and coding.
    + Establish a practice/process of sharing expertise with the community through discussions, presentations, or peer reviews.
    + Begin to challenge conventional thinking where appropriate.
    + Anticipate potential objections and persuade peers, technical and business leaders to adopt a different point of view.
    + Guide technical strategy of teams through own technical expertise.
    + Set and manage expectations with business partners for multiple projects, generate ideas and build consensus, and be aware of potential conflicts.
    + Communicate analysis, insights, and results to team, peers, and business partners.
    + Partner with cross-functional teams and leaders to support the successful execution of data science strategies.
    + Be a mentor or resource for less experienced analytic talent, onboard new employees and interns, and provide support for recruiting and talent assessment efforts.
    + Collaborate with Sr Staff Data Scientists on various training and skill development initiatives, including delivering training to the analytics community.
    + Perform other duties as assigned.

    **What Will Our Ideal Candidate Have?**
    + Subject matter expertise in modeling/research/analytics or actuarial science required
    + Subject matter expertise in value creation and business model concepts
    + Subject matter expertise in multiple statistical software programs
    + Ability to develop highly complex models, interpret model results, and recommend adjustments
    + Expertise in advanced statistics underlying data models
    + Ability to apply emerging statistical procedures to complex work
    + Subject matter expertise in 3-5 of the following: regression, classification, machine vision, natural language processing, deep learning, and statistical modeling

    **What Is a Must Have?**
    + Master's degree in Statistics, Mathematics, Decision Sciences, Actuarial Science, or a related analytical STEM field plus five years of experience, or any suitable and equivalent combination of education and work experience.
    + Heavy concentration in mathematics, including statistics and programming, business intelligence/analytics, as well as data science tools and research using large data sets. Additional verification of specific coursework will be required.

    **What Is in It for You?**
    + **Health Insurance**: Employees and their eligible family members - including spouses, domestic partners, and children - are eligible for coverage from the first day of employment.
    + **Retirement**: Travelers matches your 401(k) contributions dollar-for-dollar up to your first 5% of eligible pay, subject to an annual maximum. If you have student loan debt, you can enroll in the Paying it Forward Savings Program. When you make a payment toward your student loan, Travelers will make an annual contribution into your 401(k) account. You are also eligible for a Pension Plan that is 100% funded by Travelers.
    + **Paid Time Off**: Start your career at Travelers with a minimum of 20 days Paid Time Off annually, plus nine paid company Holidays.
    + **Wellness Program**: The Travelers wellness program is comprised of tools, discounts and resources that empower you to achieve your wellness goals and caregiving needs. In addition, our mental health program provides access to free professional counseling services, health coaching and other resources to support your daily life needs.
    + **Volunteer Encouragement**: We have a deep commitment to the communities we serve and encourage our employees to get involved. Travelers has a Matching Gift and Volunteer Rewards program that enables you to give back to the charity of your choice.

    **Employment Practices**
    Travelers is an equal opportunity employer. We value the unique abilities and talents each individual brings to our organization and recognize that we benefit in numerous ways from our differences. In accordance with local law, candidates seeking employment in Colorado are not required to disclose dates of attendance at or graduation from educational institutions. If you are a candidate and have specific questions regarding the physical requirements of this role, please send us an email (*******************) so we may assist you.

    Travelers reserves the right to fill this position at a level above or below the level included in this posting. To learn more about our comprehensive benefit programs please visit ******************************************************** .
    $71k-93k yearly est. 29d ago
  • Lead Data Scientist

    Smarsh (4.6 company rating)

    Data scientist job in Atlanta, GA

    Who are we? Smarsh empowers its customers to manage risk and unleash intelligence in their digital communications. Our growing community of over 6500 organizations in regulated industries counts on Smarsh every day to help them spot compliance, legal or reputational risks in 80+ communication channels before those risks become regulatory fines or headlines. Relentless innovation has fueled our journey to consistent leadership recognition from analysts like Gartner and Forrester, and our sustained, aggressive growth has landed Smarsh in the annual Inc. 5000 list of fastest-growing American companies since 2008. Summary As a Lead Data Scientist (NLP & Financial Compliance) at Smarsh, you will spearhead the development of state-of-the-art natural language processing (NLP) and large language model (LLM) solutions that power next-generation compliance and surveillance systems. You'll work on highly specialized problems at the intersection of natural language processing, communications intelligence, financial supervision, and regulatory compliance, where unstructured data from emails, chats, voice transcripts, and trade communications hold the keys to uncovering misconduct and risk. The role will involve working with other Senior Data Scientists and mentoring Associate Data Scientists in analyzing complex data, generating insights, and creating solutions as needed across a variety of tools and platforms. This role demands both technical excellence in NLP modeling and a deep understanding of financial domain behavior-including insider trading, market manipulation, off-channel communications, MNPI, bribery, and other supervisory risk areas. The ideal candidate for this position will possess the ability to perform both independent and team-based research and generate insights from large data sets with a hands-on/can do attitude of servicing/managing day to day data requests and analysis. 
This role also offers a unique opportunity to get exposure to many problems and solutions associated with taking machine learning and analytics research to production. On any given day, you will have the opportunity to interface with business leaders, machine learning researchers, data engineers, platform engineers, data scientists and many more, enabling you to level up in true end-to-end data science proficiency.

How will you contribute?

* Collect, analyze, and interpret small and large datasets to uncover meaningful insights that support the development of statistical methods and machine learning algorithms
* Lead the design, training, and deployment of NLP and transformer-based models for financial surveillance and supervisory use cases (e.g., misconduct detection, market abuse, trade manipulation, insider communication)
* Develop machine learning models and other analytics following established workflows, while also looking for optimization and improvement opportunities
* Perform data annotation and quality review
* Conduct exploratory data analysis and model fail-state analysis
* Contribute to model governance, documentation, and explainability frameworks aligned with internal and regulatory AI standards
* Guide clients and prospects through machine learning model and analytic fine-tuning/development processes
* Provide guidance to junior team members on model development and EDA
* Work with Product Manager(s) to intake project/product requirements and translate these into technical tasks within the team's tooling, techniques and procedures
* Pursue continued self-led personal development

What will you bring?
* Strong understanding of financial markets, compliance, surveillance, supervision, or regulatory technology
* Experience with one or more data science and machine/deep learning frameworks and tools, including scikit-learn, H2O, Keras, PyTorch, TensorFlow, pandas, NumPy, caret, and the tidyverse
* Command of data science and statistics principles (regression, Bayes, time series, clustering, P/R, AUROC, exploratory data analysis, etc.)
* Strong knowledge of key programming concepts (e.g., split-apply-combine, data structures, object-oriented programming)
* Solid statistics knowledge (hypothesis testing, ANOVA, chi-square tests, etc.)
* Knowledge of NLP transfer learning, including word embedding models (GloVe, fastText, word2vec) and transformer models (BERT, SBERT, Hugging Face models, GPT-x, etc.)
* Experience with natural language processing toolkits such as NLTK, spaCy, and NVIDIA NeMo
* Knowledge of microservices architecture and continuous delivery concepts in machine learning, and related technologies such as Helm, Docker, and Kubernetes
* Familiarity with deep learning techniques for NLP
* Familiarity with LLMs, including tools such as Ollama and LangChain
* Excellent verbal and written skills
* Proven collaborator, thriving on teamwork

Preferred Qualifications

* Master's or PhD in Computer Science, Applied Math, Statistics, or a scientific field
* Familiarity with cloud computing platforms (AWS, GCP, Azure)
* Experience with automated supervision/surveillance/compliance tools

About our culture Smarsh hires lifelong learners with a passion for innovating with purpose, humility and humor. Collaboration is at the heart of everything we do. We work closely with the most popular communications platforms and the world's leading cloud infrastructure platforms. We use the latest in AI/ML technology to help our customers break new ground at scale. We are a global organization that values diversity, and we believe that providing opportunities for everyone to be their authentic self is key to our success. 
Smarsh leadership, culture, and commitment to developing our people have all garnered Comparably.com Best Places to Work Awards. Come join us and find out what the best work of your career looks like.
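The transformer and embedding-based surveillance work described in the Smarsh posting above can be illustrated with a toy similarity check. The sketch below is a minimal, dependency-free stand-in: it builds hand-rolled TF-IDF vectors and scores messages by cosine similarity to a known-risk message. A production system would use learned sentence embeddings (e.g., SBERT); the example messages and function names here are hypothetical.

```python
# Toy similarity-based message flagging. Real surveillance pipelines use
# learned sentence embeddings; this hand-rolled TF-IDF is illustration only.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors (term -> weight dicts) for a tiny corpus."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

messages = [
    "let's move this conversation to my personal phone",  # known-risk example
    "quarterly report attached for review",
    "call me on my personal cell about the trade",
]
vecs = tfidf_vectors(messages)
# Score every message against the known-risk message (index 0).
risk_scores = [cosine(vecs[0], v) for v in vecs]
```

Messages sharing risk-indicative vocabulary ("my personal ...") score higher than unrelated traffic, which is the basic intuition behind embedding-based alert generation.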
    $64k-86k yearly est. 1d ago
  • Data Scientist

    Jerry 4.0 company rating

    Data scientist job in Atlanta, GA

    You could be a data scientist anywhere. Why us? * Join a pre-IPO startup with capital, traction and runway ($240M funded | 60X revenue growth in 5 years | $2T market size) * Partner with our VP of Data Science & Analytics, Armando La Rocca (ex-BCG, Better) * Disrupt a massive market and take us to a $10B business in the next few years * Be immersed in a talent-dense environment and greatly accelerate your career growth About the opportunity: Jerry is looking for a Data Scientist to join our growing team! We hit a huge milestone in early 2024 by achieving profitability and have ambitious goals for the next few years - scale from 5M to 50M customers and become a $10B business. As a data scientist, you will play a key role in championing data-driven decisions across the company's most critical business initiatives (including carrier growth and development, building new products from 0-1, etc.). As part of the data science & analytics team, you will leverage advanced machine learning models to conduct in-depth analyses and extract insights that will shape our growth strategies. Jerry is building the first super app to make car ownership affordable and accessible - insurance, buy/sell, registration, loans, safety, repairs, parking, etc - a $2T market in the U.S. We started with insurance in 2019, and since then we've launched loan refinancing, driving insights, repair marketplace, car diagnostics, and a GenAI-powered chatbot & voicebot. We have amassed over 5M customers, raised $240MM in funding, scaled our revenue 60X and our team to 225 across 6 countries. Here's what an ex-BCG teammate has to say about joining Jerry: "The biggest difference between my current role at Jerry and my past life at BCG is being able to feel the direct impact my work has on the business, every day. In consulting, I rarely felt like my work was having any impact on the world, despite advising the largest global corporations. 
Here, I can see that my work is moving the needle on key company metrics in a matter of weeks. When I say "key," I mean it: I am not talking about secondary KPIs that no one pays attention to, but business-critical metrics that the CEO and the rest of the C-suite look at on a daily basis (think sales, retention, pricing, etc). The benefit of being a digital-first company is that we are sitting on a treasure trove of data. This means we almost never have to make a decision that's not backed by a robust, statistically significant analysis. While data science can be a support function in some other places (this was something I wanted to avoid), data scientists at Jerry are at the core of the business. I have also appreciated the professional growth opportunities. I have had flexibility to work on the skills I wanted to develop, be it technical or soft skills. I have found continuous learning to be one of my most important drivers of happiness at work. Getting the space and latitude to work on projects that I am most interested in was all the more important for this reason." 
How you will make an impact: * Partner with marketing, product, and business development teams to integrate customer performance insights into user and partner acquisition strategies * Lead the design, execution, and analysis of A/B experiments on new and existing features, extracting key insights to inform product and business strategies * Define, understand, and test levers to drive profitable and scalable user acquisition and partnership growth * Identify opportunities to automate manual processes and optimize operational efficiency Preferred experience: * Bachelor's degree in a quantitatively or intellectually rigorous discipline * 1+ years of experience from a top-tier economic consulting firm (Analysis Group, Cornerstone Research, Charles River Associates preferred) OR 1+ years of management consulting experience from a top firm (McKinsey, Bain, and Boston Consulting Group preferred) OR relevant experience in data science/business analysis * High level of comfort with SQL or Python Who you are: * You have a framework for problem solving and live by first principles * You are comfortable communicating with audiences varying from front-line employees to the company's C-suite * You set a very high bar for yourself and for your team, and you are constantly pushing that bar higher in the pursuit of excellence While we appreciate your interest and application, only applicants under consideration will be contacted. Jerry.ai is proud to be an Equal Employment Opportunity employer. We prohibit discrimination based on race, religion, color, national origin, sex, pregnancy, reproductive health decisions or related medical conditions, sexual orientation, gender identity, gender expression, age, veteran status, disability, genetic information, or other characteristics protected by applicable local, state or federal laws. Jerry.ai is committed to providing reasonable accommodations for individuals with disabilities in our job application process. 
If you need assistance or an accommodation due to a disability, please contact us at ******************* The successful candidate's starting pay will fall within the pay range listed on this job posting, determined based on job-related factors including, but not limited to, skills, experience, qualifications, work location, and market conditions. Ranges are market-dependent and may be modified in the future. In addition to base salary, the compensation may include opportunities for equity grants. We offer a comprehensive benefits package to regular employees, including health, dental, and vision coverage, paid time off, paid parental leave, 401(K) plan with employer matching, and wellness benefits, among others. Equity opportunities may also be part of your total rewards package. Part-time, contract, or freelance roles may not be eligible for certain benefits. About Jerry.ai: Jerry.ai is America's first and only super app to radically simplify car ownership. We are redefining how people manage owning a car, one of their most expensive and time-consuming assets. Backed by artificial intelligence and machine learning, Jerry.ai simplifies and automates owning and maintaining a car while providing personalized services for all car owners' needs. We spend every day innovating and improving our AI-powered app to provide the best possible experience for our customers. From car insurance and financing to maintenance and safety, Jerry.ai does it all. We are the #1 rated and most downloaded app in our category with a 4.7 star rating in the App Store. We have more than 5 million customers - and we're just getting started. Jerry.ai was founded in 2017 by serial entrepreneurs and has raised more than $240 million in financing. Join our team and work with passionate, curious and egoless people who love solving real-world problems. Help us build a revolutionary product that's disrupting a massive market.
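The A/B experimentation work in the Jerry posting above usually comes down to comparing conversion rates between a control and a treatment group. A minimal sketch, assuming a two-proportion z-test is an appropriate readout; the function name and the example counts are illustrative, not from any Jerry system.

```python
# Two-proportion z-test for an A/B conversion-rate comparison.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-tail p-value via the error function (no SciPy needed).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical experiment: 4.8% control vs 5.6% treatment conversion.
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
```

With these made-up counts the lift is significant at the conventional 5% level; in practice the readout would also report confidence intervals and check the experiment's power.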
    $59k-86k yearly est. 3d ago
  • Data Scientist

    Tata Consulting Services 4.3 company rating

    Data scientist job in Atlanta, GA

    Must Have Technical/Functional Skills
    * Proficiency in Python, SQL, and R programming languages.
    * Hands-on experience with Domo, SAS, and web development languages.
    * Strong knowledge of cloud platforms (AWS, Azure) and big data ecosystems (Hadoop).
    * Expertise in machine learning techniques and statistical modeling.
    * Excellent data visualization and storytelling skills.
    * Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or related field.
    Preferred Skills
    * Experience with advanced analytics and big data tools.
    * Familiarity with NLP, deep learning frameworks, and modern visualization libraries.
    * Strong problem-solving and communication skills.
    Roles & Responsibilities
    * Design and implement predictive models and machine learning algorithms to solve business problems.
    * Perform feature selection using techniques such as PCA, Lasso, and Elastic Net.
    * Develop and deploy models using algorithms like Linear & Logistic Regression, SVM, Decision Trees, XGBoost, HistGradientBoosting, and LightGBM.
    * Utilize cloud technologies (AWS, Azure, Hadoop) for scalable data solutions.
    * Create interactive dashboards and visualizations using tools like Domo, Tableau, Power BI, D3.js, and Matplotlib.
    * Communicate insights through compelling data storytelling for diverse stakeholders.
    * Collaborate with cross-functional teams to integrate data solutions into business processes.
    Salary Range: $100,000-$125,000 a year #LI-KR3
    TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. 
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
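The TCS posting above names embedded feature-selection methods such as PCA, Lasso, and Elastic Net. As a simpler, dependency-free stand-in, the sketch below ranks features by absolute Pearson correlation with the target, a basic filter-style method rather than the embedded methods named in the posting; the feature names and data are invented for illustration.

```python
# Filter-style feature ranking by absolute Pearson correlation with the target.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_features(features, target):
    """features: dict of name -> values; returns names sorted by |r|, best first."""
    scores = {name: abs(pearson(vals, target)) for name, vals in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

features = {
    "tenure": [1, 2, 3, 4, 5, 6],   # strongly related to the target
    "noise":  [5, 1, 4, 2, 6, 3],   # essentially unrelated
}
target = [10, 20, 31, 39, 50, 61]
ranking = rank_features(features, target)
```

Filter methods like this are cheap first passes; Lasso and Elastic Net go further by selecting features jointly inside the model fit.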
    $100k-125k yearly 3d ago
  • Data Scientist

    Charter Global 4.0 company rating

    Data scientist job in Atlanta, GA

    Charter Global is a global Information Technology and Solutions provider with a proven track record of more than 19 years in gaining customer confidence. We support our customer needs in Asia-Pacific, Europe and the Americas (North and Latin). Our customers are in a variety of vertical industries: Energy, Finance, Healthcare, Government, Retail, Fulfillment Logistics, Manufacturing and Telecom. Charter Global has honed the processes, methodologies, tools and functions to deliver software services via a Global Delivery Model which blends local and remote resources to get the job done consistently. Here are the job details: Title: Data Scientist Location: Atlanta, GA / Dallas, TX Duration: 12+ Months Job Description: · Oversees and performs end-to-end data-based research. Designs data mining solutions to be implemented and executed with alignment to the planned scope, design coverage and needs/uses, leveraging knowledge and a broad understanding of E2E business processes and requirements. · Defines the data analytics research plan, scope and resources required to meet the objectives of his/her area of ownership (segments of the project/engagement). This includes: analyzing the data and identifying data sources; determining appropriate types of data analytics to be performed; selecting and applying data analysis techniques to ensure that recommendations and predictions provide an adequate level of confidence based on defined coverage criteria; and designing repeatable, verifiable and traceable analytical models. · Identifies and analyzes new data analytic directions and their potential business impact to determine the proper prioritization of data analytics activities based on business needs and analytics ROI. · Identifies data sources, oversees the data collection process and designs the data structure in collaboration with data experts (BI or big-data) and subject matter and business experts. 
Ensures that data used in the data analysis activities are of the highest quality. · Constructs data models (algorithms and formulas) for required business needs and predictions. · Works closely with content experts and managers to pinpoint queries, map data, and validate results. Maintains and enhances the professional and business relationship with internal stakeholders. · Facilitates presentations on how to use the data. Offers practical solutions to be used by the end decision makers. · Presents results, including the preparation of patents and white papers and facilitating presentations during conferences. · Effectively participates in and/or leads formal and informal reviews with stakeholders, applying knowledge and experience and providing insights from a professional perspective. · Prepares results in such a way that they are convenient to use. Provides professional support for team members and keeps them informed of assignment/project status. Provides training and support for other team members. · Explores and examines data from multiple sources. The Data Scientist sifts through all incoming data with the goal of discovering previously hidden insights, which, in turn, can provide a competitive advantage or address a pressing business problem. A Data Scientist does not simply collect and report on data, but also looks at it from many angles, determines what it means, and then recommends ways to apply the data. · Participates in technical designs of the wrapped product, which will run the model. Ensures that the model is properly used, and that data inputs are supplied correctly and in a timely manner. Verifies that the correct data population is being processed. Ensures that the output of the model is used correctly and that business objectives are being targeted. · Participates in the development stages of the wrapped product, including design, development, QA, implementation and production. 
This is particularly critical when the Development and Implementation teams find it difficult to understand and evaluate the model's objectives and benefits. Critical Experiences: · Master's degree in mathematics, statistics, computer science or a related field; PhD preferred · 5 or more years of relevant quantitative and qualitative research and analytics experience · Solid knowledge of statistical techniques · The ability to come up with solutions to loosely defined business problems by leveraging pattern detection over potentially large datasets · Strong programming skills (such as Hadoop MapReduce or other big data frameworks, Java) and statistical modeling (such as SAS or R) · Experience using machine learning algorithms · High proficiency in the use of statistical packages · Proficiency in statistical analysis, quantitative analytics, forecasting/predictive analytics, multivariate testing, and optimization algorithms · Strong communication and interpersonal skills · Knowledge of telecommunications and of the subject area being investigated is an advantage.
    $64k-87k yearly est. 60d+ ago
  • Data Scientist

    GE Aerospace 4.8 company rating

    Data scientist job in Alpharetta, GA

    As a Data Scientist, you will work in teams addressing statistical, machine learning, and artificial intelligence problems in a commercial technology and consultancy development environment. You will be part of a data science or cross-disciplinary team driving AI business solutions involving large, complex data sets. Potential application areas include time series forecasting, machine learning regression and classification, root cause analysis (RCA), simulation and optimization, large language models, and computer vision. The ideal candidate will be responsible for developing and deploying machine learning models in production environments. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with data engineers, analysts, and other stakeholders. Roles and Responsibilities: * Design, develop, and deploy machine learning models and algorithms under guidance from senior team members * Develop, verify, and validate analytics to address customer needs and opportunities. * Work in technical teams in development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics. * Develop and maintain pipelines for Retrieval-Augmented Generation (RAG) and Large Language Models (LLM). * Ensure efficient data retrieval and augmentation processes to support LLM training and inference. * Participate in Data Science Workouts to shape Data Science opportunities and identify opportunities to use data science to create customer value. * Perform exploratory and targeted data analyses using descriptive statistics and other methods. * Work with data engineers on data quality assessment, data cleansing, data analytics, and model productionization * Generate reports, annotated code, and other projects artifacts to document, archive, and communicate your work and outcomes. * Communicate methods, findings, and hypotheses with stakeholders. 
Minimum Qualifications: * Bachelor's Degree in Computer Science or "STEM" Majors (Science, Technology, Engineering and Math) with a minimum of 2+ years of experience * 2 years of proficiency in Python. * Familiarity with statistical machine learning techniques as applied to business problems * Strong analytical and problem-solving skills. * Strong communication and collaboration skills. * Note: Military experience is equivalent to professional experience Eligibility Requirement: * Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job. Desired Characteristics: * Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their machine learning services. * Experience with handling unstructured data, including images, videos, and text * Ability to work in a fast-paced, dynamic environment. * Experience with data preprocessing and augmentation tools. * Demonstrated experience applying critical thinking and problem-solving * Demonstrated experience working in team settings in various roles * Strong presentation and communications skills Note: To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years' experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, the JDs should focus on the substantive level of experience required for the role and a minimum number of years should NOT be used. This Job Description is intended to provide a high level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager. This role requires access to U.S. export-controlled information. 
Therefore, employment will be contingent upon the ability to prove that you meet the status of a U.S. Person as one of the following: U.S. lawful permanent resident, U.S. Citizen, have been granted asylee or refugee status (i.e., a protected individual under the Immigration and Naturalization Act, 8 U.S.C. 1324b(a)(3)). Additional Information GE Aerospace offers a great work environment, professional development, challenging careers, and competitive compensation. GE Aerospace is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. GE Aerospace will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditioned upon the successful completion of a drug screen (as applicable). Relocation Assistance Provided: No
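The GE Aerospace posting above mentions maintaining Retrieval-Augmented Generation (RAG) pipelines. The toy sketch below shows only the retrieval-and-prompt-assembly step, assuming keyword overlap as the scoring function; a real pipeline would use vector embeddings and an LLM, and the documents and function names here are invented.

```python
# Toy RAG retrieval step: score documents by keyword overlap with the query,
# then assemble a grounded prompt from the top matches.
def retrieve(query, docs, k=2):
    """Return the k documents sharing the most words with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Build a prompt that constrains the model to the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Fan blade inspections are due every 400 flight hours.",
    "The cafeteria opens at 7am.",
    "Engine vibration limits are defined in the maintenance manual.",
]
prompt = build_prompt("When are fan blade inspections due?", docs)
```

The prompt that reaches the LLM contains only the retrieved context, which is what lets RAG systems ground generation in curated documents rather than the model's parametric memory.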
    $67k-88k yearly est. 36d ago
  • Data Scientist

    Alston & Bird's Antitrust Group 4.9 company rating

    Data scientist job in Atlanta, GA

    THE FIRM As a leading international law firm, we are dedicated to excellence through impactful communication, collaboration, and community involvement. Our company culture has earned us recognition as one of the "100 Best Companies to Work For" for 26 consecutive years. This honor, along with many others, highlights our commitment to innovation and professional development. At Alston & Bird LLP, our foundation is made of trust, reliability, and compassion. JOB DESCRIPTION We are in search of a visionary Data Scientist to join our Practice Innovation team-a group dedicated to transforming how legal services are delivered through cutting-edge technology. In this role, you'll lead the charge in deploying enterprise AI platforms like Microsoft Copilot Studio and Azure OpenAI, helping to craft smarter, data-driven legal strategies. If you're passionate about legal tech, fluent in data science, and excited by the possibilities of AI, we'd love to meet you. ESSENTIAL DUTIES Design, develop, and maintain predictive models to support administrative and legal departments. Configure and deploy AI solutions using Copilot Studio, Azure OpenAI, and other Microsoft 365 AI tools to enhance legal workflows. Collaborate with attorneys, innovation leads, and IT to identify opportunities for automation and data-driven decision-making. Build and maintain dashboards and analytics tools for internal stakeholders (e.g., attorney development, DEI, legal operations). Evaluate third-party legal tech tools and assist in integration with firm systems. Support AI governance, including ethical use, data privacy, and compliance with firm policies. SKILLS NEEDED TO BE SUCCESSFUL Knowledge of legal data types (e.g., timekeeping, matter metadata, court filings). Understanding of AI governance, model evaluation, and responsible AI practices. Strong communication skills and ability to translate technical concepts for non-technical stakeholders. 
Preferred: experience working in or with law firms or legal departments. EDUCATION AND EXPERIENCE Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or related field preferred. 1-3+ years of experience in a data science or machine learning role, preferably in a professional services or legal environment. Proficiency in Python, SQL, and data visualization tools (e.g., Power BI, Tableau). Experience with Copilot Studio, Azure OpenAI, or similar enterprise AI platforms. Familiarity with natural language processing (NLP), large language models (LLMs), and prompt engineering. EQUAL OPPORTUNITY EMPLOYER Alston & Bird LLP is an Equal Opportunity Employer and does not discriminate on the basis of any status protected under federal, state, or local law. Applicants will be considered in our recruiting, hiring, and promoting practices regardless of their sex, race, age, religion, color, national origin, ancestry, physical disability, mental disability, medical condition (associated with cancer, a history of cancer, or genetic characteristics), HIV/AIDS status, genetic information, marital status, sexual orientation, gender, gender identity, gender expression, military and veteran status, or other category protected under the law. The statements contained in this position description are not necessarily all-inclusive, additional duties and responsibilities may be assigned, and requirements may vary from time to time. Professional business references and a background screening will be required for all final applicants selected for a position. If you need assistance or an accommodation due to a disability you may contact *************************. Alston & Bird is not currently accepting resumes from agencies for this position. 
If you are a recruiter, search firm, or employment agency, you will not be compensated in any way for your referral of a candidate even if Alston & Bird hires the candidate.
    $68k-91k yearly est. 60d+ ago
  • Senior Data Scientist, Navista

    Cardinal Health 4.4 company rating

    Data scientist job in Atlanta, GA

    At Navista, our mission is to empower community oncology practices to deliver patient-centered cancer care. Navista, a Cardinal Health company, is an oncology practice alliance co-created with oncologists and practice leaders that offers advanced support services and technology to help practices remain independent and thrive. True to our name, our experienced team is passionate about helping oncology practices navigate the future. We are seeking an innovative and highly skilled **Senior Data Scientist** with specialized expertise in Generative AI (GenAI), Large Language Models (LLMs), and Agentic Systems to join the Navista - Data & Advanced Analytics team supporting the growth of our Navista Application Suite and the Integrated Oncology Network (IoN). In this critical role, you will be at the forefront of designing, developing, and deploying advanced AI solutions that leverage the power of generative models and intelligent agents to transform our products and operations. You will be responsible for pushing the boundaries of what's possible, from foundational research to production-ready applications, working with diverse datasets and complex problem spaces, particularly within the oncology domain. The ideal candidate will possess a deep theoretical understanding and practical experience in building, fine-tuning, and deploying LLMs, as well as architecting and implementing agentic frameworks. You will play a key role in shaping our AI strategy, mentoring junior team members, and collaborating with cross-functional engineering and product teams to bring groundbreaking AI capabilities to life, including developing predictive models from complex, often unstructured, oncology data. **_Responsibilities_** + **Research & Development:** Lead the research, design, and development of novel Generative AI models and algorithms, including but not limited to LLMs, diffusion models, GANs, and VAEs, to address complex business challenges. 
+ **LLM Expertise:** Architect, fine-tune, and deploy Large Language Models for various applications such as natural language understanding, generation, summarization, question-answering, and code generation, with a focus on extracting insights from unstructured clinical and research data. + **Agentic Systems Design:** Design and implement intelligent agentic systems capable of autonomous decision-making, planning, reasoning, and interaction within complex environments, leveraging LLMs as core components. + **Predictive Modeling:** Develop and deploy advanced predictive models and capabilities using both structured and unstructured data, particularly within the oncology space, to forecast outcomes, identify trends, and support clinical or commercial decision-making. + **Prompt Engineering & Optimization:** Develop advanced prompt engineering strategies and techniques to maximize the performance and reliability of LLM-based applications. + **Data Strategy for GenAI:** Work with data engineers to define and implement data collection, preprocessing, and augmentation strategies specifically tailored for training and fine-tuning generative models and LLMs, including techniques for handling and enriching unstructured oncology data (e.g., clinical notes, pathology reports). + **Model Evaluation & Deployment:** Develop robust evaluation metrics and methodologies for generative models, agentic systems, and predictive models. Oversee the deployment, monitoring, and continuous improvement of these models in production environments. + **Collaboration & Leadership:** Collaborate closely with machine learning engineers, software engineers, and product managers to integrate AI solutions into our products. Provide technical leadership and mentorship to junior data scientists. + **Innovation & Thought Leadership:** Stay abreast of the latest advancements in GenAI, LLMs, and agentic AI research. 
Proactively identify new opportunities and technologies that can enhance our capabilities and competitive advantage. + **Ethical AI:** Ensure the responsible and ethical development and deployment of AI systems, addressing potential biases, fairness, and transparency concerns. **_Qualifications_** + 8-12 years of experience as a Data Scientist or Machine Learning Engineer, with a significant focus on deep learning and natural language processing, preferred + Bachelor's degree in a related field, or equivalent work experience, preferred + Proven hands-on experience with Generative AI models (e.g., Transformers, GANs, VAEs, Diffusion Models) and their applications. + Extensive experience working with Large Language Models (LLMs), including fine-tuning, prompt engineering, RAG (Retrieval Augmented Generation), and understanding various architectures (e.g., GPT, Llama, BERT, T5). + Demonstrated experience in designing, building, and deploying agentic systems or multi-agent systems, including concepts like planning, reasoning, and tool use. + Strong experience working with unstructured data, particularly in the oncology domain (e.g., clinical notes, pathology reports, genomic data, imaging reports), and extracting meaningful features for analysis. + Demonstrated ability to create and deploy predictive capabilities and models from complex datasets, including those with unstructured components. + Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow. + Experience with relevant libraries and tools (e.g., Hugging Face Transformers, LangChain, LlamaIndex). + Strong understanding of machine learning fundamentals, statistical modeling, and experimental design. + Experience with at least one cloud platform (GCP, Azure) for training and deploying large-scale AI models. + Excellent problem-solving skills, with the ability to tackle complex, ambiguous problems and drive solutions. 
+ Strong communication and presentation skills, capable of explaining complex concepts
+ Experience in the healthcare or life sciences industry, specifically with oncology data and research, highly preferred
+ Experience with MLOps practices for deploying and managing large-scale AI models, highly preferred
+ Familiarity with distributed computing frameworks (e.g., Spark, Dask), highly preferred
+ Experience contributing to open-source AI projects, highly preferred

**_What is expected of you and others at this level_**

+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues

**Anticipated salary range:** $123,400 - $176,300

**Bonus eligible:** Yes

**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.

+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before pay day with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs

**Application window anticipated to close:** 02/15/2026 *If interested in this opportunity, please submit your application as soon as possible.

The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills, and an evaluation of internal pay equity.

\#LI-Remote

_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._

_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._

_To read and review this privacy notice, click_ here (***************************************************************************************************************************
    $123.4k-176.3k yearly 2d ago
  • Data Engineer w/ Python & SQL

    A2C 4.7 company rating

    Data scientist job in Alpharetta, GA

    We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.

    **Must Haves**
    + A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies.
    + Solid understanding of and hands-on experience with major cloud platforms.
    + Experience designing and implementing data pipelines.
    + Strong Python, SQL, and GCP skills.

    **Responsibilities**
    + Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer.
    + Develop and tune BigQuery models, queries, and ingestion processes.
    + Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
    + Ensure data governance, security, and reliable pipeline operations.
    + Collaborate with data science teams and support Vertex AI-based ML workflows.

    **Must-Have**
    + 3-5+ years of data engineering experience.
    + Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
    + Solid ETL/ELT and data modeling experience.

    **Nice-to-Have**
    + GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
    $77k-106k yearly est. 3d ago
  • Staff Data Scientist

    GE Aerospace 4.8 company rating

    Data scientist job in Alpharetta, GA

    As a Staff Data Scientist, you will work in teams addressing statistical, machine learning, and artificial intelligence problems in a commercial technology and consultancy development environment. You will be part of a data science or cross-disciplinary team driving AI business solutions involving large, complex data sets. Potential application areas include time series forecasting, machine learning regression and classification, root cause analysis (RCA), simulation and optimization, large language models, and computer vision. The ideal candidate will be responsible for developing and deploying machine learning models in production environments. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with data engineers, analysts, and other stakeholders.

**Roles and Responsibilities:**

+ Design, develop, and deploy machine learning models and algorithms.
+ Understand business problems and identify opportunities to implement data science solutions.
+ Develop, verify, and validate analytics to address customer needs and opportunities.
+ Work in technical teams on the development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics.
+ Develop and maintain pipelines for Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs).
+ Ensure efficient data retrieval and augmentation processes to support LLM training and inference.
+ Utilize semantic and ontology technologies to enhance data integration and retrieval. Ensure data is semantically enriched to support advanced analytics and machine learning models.
+ Participate in Data Science Workouts to shape data science opportunities and identify ways to use data science to create customer value.
+ Perform exploratory and targeted data analyses using descriptive statistics and other methods.
+ Work with data engineers on data quality assessment, data cleansing, data analytics, and model productionization.
+ Generate reports, annotated code, and other project artifacts to document, archive, and communicate your work and outcomes.
+ Communicate methods, findings, and hypotheses with stakeholders.

**Minimum Qualifications:**

+ Bachelor's degree from an accredited university or college with a minimum of **3** years of professional experience OR an associate's degree with a minimum of **5** years of professional experience
+ 3 years of proficiency in Python (mandatory)
+ 2 years of experience with machine learning frameworks and deploying models into production environments
+ **Note:** Military experience is equivalent to professional experience

**Eligibility Requirement:**

+ Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job.

**Desired Characteristics:**

+ Strong analytical and problem-solving skills.
+ Excellent communication and collaboration abilities.
+ Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud, Databricks) and their machine learning services.
+ Experience handling unstructured data, including images, videos, and text.
+ Understanding of computer vision techniques and tools.
+ Ability to work in a fast-paced, dynamic environment.
+ Experience with data preprocessing and augmentation tools.
+ Demonstrated expertise in critical thinking and problem-solving methods.
+ Demonstrated skill in defining and delivering customer value.
+ Demonstrated expertise working in team settings in various roles.
+ Demonstrated expertise in presentation and communication skills.
+ Experience with deep learning and neural networks.
+ Knowledge of data governance and compliance standards.
**Note:** To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years of experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, job descriptions should focus on the substantive level of experience required for the role and a minimum number of years should NOT be used.

This job description is intended to provide a high-level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.

_This role requires access to U.S. export-controlled information. Therefore, for applicants who are not U.S. lawful permanent residents, U.S. Citizens, or have been granted asylee or refugee status (i.e., not a protected individual under the Immigration and Naturalization Act, 8 U.S.C. 1324b(a)(3), otherwise known as a U.S. Person), employment will be contingent on the ability to obtain authorization for access to U.S. export-controlled information from the U.S. Government._

**Additional Information**

GE Aerospace offers a great work environment, professional development, challenging careers, and competitive compensation. GE Aerospace is an Equal Opportunity Employer (****************************************************************************************** . Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. GE Aerospace will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditioned upon the successful completion of a drug screen (as applicable).
**Relocation Assistance Provided:** Yes
    $67k-88k yearly est. 60d+ ago

Learn more about data scientist jobs

How much does a data scientist earn in Johns Creek, GA?

The average data scientist in Johns Creek, GA earns between $58,000 and $106,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Johns Creek, GA

$78,000

What are the biggest employers of Data Scientists in Johns Creek, GA?

The biggest employers of Data Scientists in Johns Creek, GA are:
  1. General Electric
  2. ADP
  3. The Travelers Companies
  4. VitalChek
  5. RELX
  6. Deposco
  7. Adpcareers
  8. Blueprint30 LLC