Data Scientist
Data scientist job in Atlanta, GA
Role: Data Scientist
Mode Of Hire: Full Time
Key Responsibilities
• Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
• Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
• Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
• Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
• Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
• Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration and environment configuration.
• Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.
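As a rough illustration of the model-development and MLflow tracking workflow the responsibilities above describe (a minimal sketch only; the dataset, hyperparameters, and experiment name are illustrative assumptions, not details from this posting):

```python
# Minimal sketch: train a classifier and track it with MLflow.
# Dataset, hyperparameters, and experiment name are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("demo-propensity-model")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "learning_rate": 0.05, "max_depth": 3}
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_params(params)                 # record hyperparameters
    mlflow.log_metric("test_auc", auc)        # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")  # log the fitted model artifact
```

A logged run like this is what the CI/CD, monitoring, and retraining practices named above would then version and promote.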
Skills Required:
• Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
• 5+ years of experience in a data science, ML engineering, or analytics role.
• Strong programming skills in SQL and Python, along with solid command of ML techniques.
• Experience with Azure Cloud, Databricks, and/or Snowflake.
• Experience building and deploying machine learning models in production environments. Hands-on experience with Databricks, including SparkML, and MLflow integration.
• Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
• Experience with tools such as Git, MLflow, Docker and workflow schedulers.
• Ability to communicate complex technical work to non-technical stakeholders.
• Experience with scalable model training environments and distributed computing.
Preferred Qualifications
• Master's degree in a quantitative or technical discipline.
• Experience in financial services, fintech, or enterprise B2B analytics.
• Knowledge of A/B testing, causal inference, and statistical experimentation.
• Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
Data Engineer
Data scientist job in Atlanta, GA
No C2C
We're looking for a hands-on Data Engineer to help build, scale, and fine-tune real-time data systems using Kafka, AWS, and a modern data stack. In this role, you'll work deeply with streaming data, ETL, distributed systems, and PostgreSQL to power analytics, product innovation, and AI-driven use cases. You'll also get to work with AI/ML frameworks, automation, and MLOps tools to support advanced modeling and a highly responsive data platform.
What You'll Do
Design and build real-time streaming pipelines using Kafka, Confluent Schema Registry, and Zookeeper
Build and manage cloud-based data workflows using AWS services like Glue, EMR, EC2, and S3
Optimize and maintain PostgreSQL and other databases with strong schema design, advanced SQL, and performance tuning
Integrate AI and ML frameworks (TensorFlow, PyTorch, Hugging Face) into data pipelines for training and inference
Automate data quality checks, feature generation, and anomaly detection using AI-powered monitoring and observability tools
Partner with ML engineers to deploy, monitor, and continuously improve machine learning models in both batch and real-time pipelines using tools like MLflow, SageMaker, Airflow, and Kubeflow
Experiment with vector databases and retrieval-augmented generation (RAG) pipelines to support GenAI and LLM initiatives
Build scalable, cloud-native, event-driven architectures that power AI-driven data products
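For context only, a bare-bones produce/consume round trip with the confluent-kafka Python client might look like the sketch below; the broker address, topic name, and event payload are assumptions, not details from this posting:

```python
# Minimal sketch of a Kafka produce/consume round trip using confluent-kafka.
# Broker address, topic name, and event payload are illustrative assumptions.
import json
from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"        # assumed local broker
TOPIC = "clickstream-events"     # hypothetical topic

producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="user-123", value=json.dumps({"page": "/pricing", "ms": 412}))
producer.flush()                 # block until delivery is confirmed

consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "analytics-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(10.0)        # wait up to 10 seconds for a record
if msg is not None and msg.error() is None:
    event = json.loads(msg.value())
    print(f"consumed {msg.key()}: {event}")  # downstream: land in S3/PostgreSQL, enrich, etc.
consumer.close()
```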
What You Bring
Bachelor's degree in Computer Science, Engineering, Math, or a related technical field
3+ years of hands-on data engineering experience with Kafka (Confluent or open-source) and AWS
Experience with automated data quality, monitoring, and observability tools
Strong SQL skills and solid database fundamentals with PostgreSQL and both traditional and NoSQL databases
Proficiency in Python, Scala, or Java for pipeline development and AI integrations
Experience with synthetic data generation, vector databases, or GenAI-powered data products
Hands-on experience integrating ML models into production data pipelines using frameworks like PyTorch or TensorFlow and MLOps tools such as Airflow, MLflow, SageMaker, or Kubeflow
W2 Opportunity // GCP Data Engineer // Atlanta, GA
Data scientist job in Atlanta, GA
Job Description: GCP Data Engineer
Rate: $50/hr. on W2 (No C2C)
We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices.
Responsibilities
Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform.
Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer.
Write efficient and maintainable Python code to support data ingestion, transformation, and automation.
Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling.
Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions.
Ensure data quality, governance, and reliability across all pipelines.
Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets.
Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.
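As a small, hedged illustration of the BigQuery-from-Python work described above (project, dataset, and table names are placeholders, not details from this posting):

```python
# Minimal sketch: run a parameterized BigQuery query from Python.
# Project, dataset, and table names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT order_date, SUM(order_total) AS daily_revenue
    FROM `my-project.sales.orders`          -- hypothetical table
    WHERE order_date >= @start_date
    GROUP BY order_date
    ORDER BY order_date
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01")]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.order_date, row.daily_revenue)
```

In a production pipeline, a query like this would typically be wrapped in a Cloud Composer task or triggered by a Pub/Sub-driven Cloud Function.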
Must-Have Skills
Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning).
Proficiency in Python for data engineering and pipeline automation.
Hands-on experience with Cloud Data Fusion for ETL/ELT development.
Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub.
Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices.
Solid understanding of cloud data architecture and distributed processing.
Good-to-Have Skills
Experience with Vertex AI for ML pipeline integration or model deployment.
Familiarity with Dataproc (Spark/Hadoop) for large-scale processing.
Knowledge of CI/CD workflows, Git, and DevOps best practices.
Experience with Cloud Logging/Monitoring tools.
Data Engineer
Data scientist job in Alpharetta, GA
5 days onsite in Alpharetta, GA
Skills required:
Python
Data Pipeline
Data Analysis
Data Modeling
Must have solid Cloud experience
AI/ML
Strong problem-solving skills
Strong communication skills
A problem solver with the ability to analyze and research complex issues and propose actionable solutions and strategies.
Solid understanding of and hands-on experience with major cloud platforms.
Experience in designing and implementing data pipelines.
Must have experience with at least one of the following: GCP, AWS, or Azure, and the drive to learn GCP.
ETL Databricks Data Engineer
Data scientist job in Atlanta, GA
We are seeking an ETL Databricks Data Engineer to join our team and help build robust, scalable data solutions. This role involves designing and maintaining data pipelines, optimizing ETL processes, and collaborating with cross-functional teams to ensure data integrity and accessibility.
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
Create and optimize Python scripts for data transformation, automation, and integration tasks.
Develop and fine-tune SQL queries for data extraction, transformation, and loading.
Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
Ensure data integrity, security, and compliance with organizational standards.
Participate in code reviews and contribute to best practices in data engineering.
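To make the Databricks/PySpark responsibilities above concrete, here is a minimal ETL sketch; the paths, column names, and target table are illustrative assumptions rather than details of this role:

```python
# Minimal PySpark sketch of a batch ETL step: read raw JSON, apply quality rules,
# aggregate, and write a Delta table. Paths and names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()  # provided for you on Databricks

raw = spark.read.json("/mnt/raw/orders/")               # hypothetical landing path
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)                # basic data-quality rule
       .withColumn("order_date", F.to_date("order_ts"))
)
daily = clean.groupBy("order_date").agg(F.sum("order_total").alias("daily_revenue"))
daily.write.mode("overwrite").format("delta").saveAsTable("analytics.daily_revenue")
```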
Required Skills & Qualifications
5 years of professional experience in data engineering or related roles.
Strong proficiency in Databricks (including Spark-based data processing).
Advanced programming skills in Python.
Expertise in SQL for querying and data modeling.
Familiarity with Azure Cloud and Azure Data Factory (ADF).
Understanding of ETL frameworks, data governance, and performance tuning.
Knowledge of CI/CD practices and version control tools (e.g., Git).
Exposure to BI tools such as Power BI or Tableau for data visualization.
Life at Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
• Flexible work
• Healthcare including dental, vision, mental health, and well-being programs
• Financial well-being programs such as 401(k) and Employee Share Ownership Plan
• Paid time off and paid holidays
• Paid parental leave
• Family building benefits like adoption assistance, surrogacy, and cryopreservation
• Social well-being benefits like subsidized back-up child/elder care and tutoring
• Mentoring, coaching and learning programs
• Employee Resource Groups
• Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Data Engineer - OrcaWorks AI
Data scientist job in Atlanta, GA
Experience Level: Entry-level (Master's preferred)
About OrcaWorks AI
At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.
Key Responsibilities
Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models.
Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa.
Implement data validation, transformation, and storage solutions using modern ETL frameworks.
Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
Support database management, SQLModel, and data governance practices across services.
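As a hedged sketch of the orchestration side of these responsibilities, a small Prefect flow might be structured as below; the task bodies are placeholders for the Airbyte, validation, and ElasticSearch/Vespa steps named above:

```python
# Minimal Prefect sketch of an ingest -> validate -> load flow.
# Task bodies, record shape, and the indexing target are illustrative placeholders.
from prefect import flow, task

@task
def extract() -> list[dict]:
    # placeholder for an Airbyte sync or API pull
    return [{"doc_id": 1, "text": "example record"}]

@task
def validate(records: list[dict]) -> list[dict]:
    return [r for r in records if r.get("text")]   # drop empty documents

@task
def load(records: list[dict]) -> None:
    print(f"indexing {len(records)} records")      # placeholder for an ElasticSearch/Vespa bulk index

@flow
def ingest_documents():
    load(validate(extract()))

if __name__ == "__main__":
    ingest_documents()
```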
Required Skills & Qualifications
Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
Hands-on experience with Airbyte, Prefect, and DBT.
Familiarity with search and indexing systems like Vespa or ElasticSearch.
Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
Strong understanding of data security and applied AI workflows.
Lead Data Engineer - Palantir Foundry
Data scientist job in Atlanta, GA
Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
Address specific business challenges, integrate processes, and create great experiences
Connect our work to shared goals that propel WestRock forward in the Digital Age
Imagine how technology can advance the way we work by using disruptive technology
We are looking for forward thinking technologists that can accelerate our focus areas such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology.
As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.
How you will impact WestRock:
Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs.
Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.
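For orientation only, a daily Airflow DAG of the kind referenced above might be skeletoned as follows (Airflow 2.4+ assumed; the dag_id and task bodies are illustrative placeholders, not WestRock specifics):

```python
# Minimal Airflow DAG sketch for a daily extract -> transform -> load sequence.
# Airflow 2.4+ assumed; dag_id and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull operational data")

def transform():
    print("apply Spark/Foundry transforms")

def load():
    print("publish curated tables")

with DAG(
    dag_id="plant_operations_daily",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```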
What you need to succeed:
Education: Bachelor's degree in computer science or similar
At least 6 years of strong Data Engineering experience
Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development.
Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines.
Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing.
Proven experience deploying data solutions on AWS or Azure, with strong understanding of cloud-native services.
Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment.
Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment.
Good communication skills, with the ability to collaborate effectively across technical and non-technical teams.
Good analytical and troubleshooting abilities.
What we offer:
Corporate culture based on integrity, respect, accountability and excellence
Comprehensive training with numerous learning and development opportunities
An attractive salary reflecting skills, competencies and potential
A career with a global packaging company where Sustainability, Safety and Inclusion are business drivers and foundational elements of the daily work.
Lead Azure Databrick Engineer
Data scientist job in Atlanta, GA
Individual contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time.
An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.
Must Have Skills:
Experience migrating from other platforms to Databricks
Proficiency in Databricks and Azure Cloud, including Databricks Asset Bundles, with a holistic vision of the data strategy
Proficiency in Data Streaming and Data Modeling
Experience in architecting at least two large-scale big data projects
Strong understanding of data scaling and its complexities
Experience with data archiving and purging mechanisms
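As a rough, hedged illustration of the data streaming skill set listed above, a Databricks Structured Streaming job with Auto Loader might look like the sketch below; the landing path, schema/checkpoint locations, and target table are assumptions, not details from this posting:

```python
# Minimal sketch of a Spark Structured Streaming job on Databricks using Auto Loader.
# Paths, checkpoint/schema locations, and the target table are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("cloudFiles")                         # Databricks Auto Loader
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events_schema/")
         .load("/mnt/landing/events/")                             # hypothetical landing path
)
hourly = (
    events.withColumn("event_time", F.to_timestamp("event_ts"))
          .withWatermark("event_time", "1 hour")
          .groupBy(F.window("event_time", "1 hour"), "event_type")
          .count()
)
(hourly.writeStream.format("delta")
       .outputMode("append")
       .option("checkpointLocation", "/mnt/checkpoints/events_hourly/")
       .toTable("silver.events_hourly"))                           # streaming write to a Delta table
```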
Job Requirements
• Degree in computer science or equivalent preferred
• Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks.
• 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques.
• Working knowledge and experience with "Cloud Architectures" (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure Cloud computing
• Should have architected solutions for Cloud environments such as Microsoft Azure and/or GCP
• Experience with debugging and performance tuning in distributed environments
• Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
• Experience dealing with structured and unstructured data.
• Must have Python, PySpark experience.
• Experience in ML and/or graph analysis is a plus
Data Engineer
Data scientist job in Alpharetta, GA
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our Challenge
Join our data-driven enterprise and lead the design of scalable and high-performance big data solutions. You will craft architectures that handle vast volumes of data, optimize pipeline performance, and incorporate advanced governance and AI-powered processing to unlock actionable insights.
Additional Information
The base salary for this position varies based on geography and other factors. In accordance with law, the base salary for this role if filled within Alpharetta, GA is $120K - 125K/year & benefits (see below).
The Role
Responsibilities:
Design, build, and maintain scalable big data architectures supporting enterprise analytics and operational needs.
Develop, implement, and optimize data pipelines using Apache Airflow, Databricks, and other relevant technologies to ensure reliable data flow and process automation.
Manage and enhance data workflows for batch and real-time processing, ensuring efficiency and scalability.
Collaborate with data scientists, analysts, and business stakeholders to translate requirements into robust data solutions.
Implement data governance, security, and compliance best practices to protect sensitive information.
Explore integrating AI/ML techniques into data pipelines, leveraging Databricks and other AI tools for predictive analytics and automation.
Develop monitoring dashboards and alert systems to ensure pipeline health and performance.
Stay current with emerging big data and cloud technologies, recommending best practices to improve system performance and scalability.
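As one small, hypothetical example of the pipeline-health monitoring mentioned above (thresholds, column names, and the alert destination are assumptions, not details of this role):

```python
# Minimal sketch of a data-quality/health check that a monitoring dashboard or alert
# system could surface. Thresholds and column names are illustrative assumptions.
import pandas as pd

def check_daily_load(df: pd.DataFrame, min_rows: int = 1_000) -> list[str]:
    """Return alert messages for a daily batch load."""
    alerts = []
    if len(df) < min_rows:
        alerts.append(f"row count {len(df)} below threshold {min_rows}")
    null_rate = df["order_total"].isna().mean()
    if null_rate > 0.01:
        alerts.append(f"order_total null rate {null_rate:.2%} above 1%")
    return alerts

daily_load = pd.DataFrame({"order_total": [10.0, None, 25.5]})
for alert in check_daily_load(daily_load):
    print("ALERT:", alert)   # in practice, route to Slack/PagerDuty or a dashboard
```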
Requirements:
5+ years of proven experience in Big Data architecture design, including distributed storage and processing frameworks such as Hadoop, Spark, and Databricks.
Strong expertise in performance tuning for large-scale data systems.
Hands-on experience with Apache Airflow for workflow orchestration.
Proficiency in SQL for managing and querying large databases.
Extensive experience with Python for scripting, automation, and data processing workflows.
Experience working with cloud platforms (Azure, AWS, or GCP) is preferable.
Preferred, but not required:
Deep understanding of data governance and security frameworks to safeguard sensitive data.
Experience with integrating AI/ML models into data pipelines using Databricks MLflow or similar tools.
Knowledge of containerization (Docker, Kubernetes) is a plus
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture
Staff Data Scientist - Underwriting and Operations Analytics
Data scientist job in Alpharetta, GA
**Who Are We?**
Taking care of our customers, our communities and each other. That's the Travelers Promise. By honoring this commitment, we have maintained our reputation as one of the best property casualty insurers in the industry for over 170 years. Join us to discover a culture that is rooted in innovation and thrives on collaboration. Imagine loving what you do and where you do it.
**Job Category**
Data Science
**Compensation Overview**
The annual base salary range provided for this position is a nationwide market range and represents a broad range of salaries for this role across the country. The actual salary for this position will be determined by a number of factors, including the scope, complexity and location of the role; the skills, education, training, credentials and experience of the candidate; and other conditions of employment. As part of our comprehensive compensation and benefits program, employees are also eligible for performance-based cash incentive awards.
**Salary Range**
$161,400.00 - $266,300.00
**Target Openings**
1
**What Is the Opportunity?**
As a Staff Data Scientist, you will build complex models that solve key business problems to support underwriting, risk control, and business operations. This may include the use of the most advanced technical tools in the data science practice, allowing you to develop sophisticated solutions that enhance risk segmentation, streamline decision-making processes, and drive operational excellence across these critical business functions.
**What Will You Do?**
+ Lead business or technical projects focused on the design or development of analytical solutions.
+ Lead development of community best practices in AI/Machine Learning, statistical techniques, and coding.
+ Establish a practice/process of sharing expertise with the community through discussions, presentations, or peer reviews.
+ Begin to challenge conventional thinking where appropriate.
+ Anticipate potential objections and persuade peers, technical and business leaders to adopt a different point of view.
+ Guide technical strategy of teams through own technical expertise.
+ Set and manage expectations with business partners for multiple projects, generate ideas and build consensus, and be aware of potential conflicts.
+ Communicate analysis, insights, and results to team, peers, business partners.
+ Partner with cross-functional teams and leaders to support the successful execution of data science strategies.
+ Be a mentor or resource for less experienced analytic talent, onboard new employees and interns, and provide support for recruiting and talent assessment efforts.
+ Collaborate with Sr Staff Data Scientist on various training and skill development initiatives, including delivering training to the analytics community.
+ Perform other duties as assigned.
**What Will Our Ideal Candidate Have?**
+ Subject matter expertise in modeling/ research/ analytics or actuarial required
+ Subject matter expertise in value creation and business model concepts
+ Subject matter expertise in multiple statistical software programs
+ Ability to develop highly complex models, interpret model results and recommend adjustments
+ Expertise in advanced statistics underlying data models
+ Ability to apply emerging statistical procedures to complex work
+ Subject matter expertise in 3-5 of the following: Regression, Classification, Machine Vision, Natural Language Processing, Deep Learning and Statistical modeling.
**What is a Must Have?**
+ Master's degree in Statistics, Mathematics, Decision Sciences, Actuarial Science or related analytical STEM field plus five years of experience or any suitable and equivalent combination of education and work experience.
+ Heavy concentration in mathematics, including statistics and programming, business intelligence/analytics, as well as data science tools and research using large data sets. Additional verification of specific coursework will be required.
**What Is in It for You?**
+ **Health Insurance** : Employees and their eligible family members - including spouses, domestic partners, and children - are eligible for coverage from the first day of employment.
+ **Retirement:** Travelers matches your 401(k) contributions dollar-for-dollar up to your first 5% of eligible pay, subject to an annual maximum. If you have student loan debt, you can enroll in the Paying it Forward Savings Program. When you make a payment toward your student loan, Travelers will make an annual contribution into your 401(k) account. You are also eligible for a Pension Plan that is 100% funded by Travelers.
+ **Paid Time Off:** Start your career at Travelers with a minimum of 20 days Paid Time Off annually, plus nine paid company Holidays.
+ **Wellness Program:** The Travelers wellness program is comprised of tools, discounts and resources that empower you to achieve your wellness goals and caregiving needs. In addition, our mental health program provides access to free professional counseling services, health coaching and other resources to support your daily life needs.
+ **Volunteer Encouragement:** We have a deep commitment to the communities we serve and encourage our employees to get involved. Travelers has a Matching Gift and Volunteer Rewards program that enables you to give back to the charity of your choice.
**Employment Practices**
Travelers is an equal opportunity employer. We value the unique abilities and talents each individual brings to our organization and recognize that we benefit in numerous ways from our differences.
In accordance with local law, candidates seeking employment in Colorado are not required to disclose dates of attendance at or graduation from educational institutions.
If you are a candidate and have specific questions regarding the physical requirements of this role, please send us an email (*******************) so we may assist you.
Travelers reserves the right to fill this position at a level above or below the level included in this posting.
To learn more about our comprehensive benefit programs please visit ******************************************************** .
Applied Data Scientist
Data scientist job in Atlanta, GA
Why Jerry.ai
* Join a profitable pre-IPO startup with capital, traction, and runway ($240M funded | 60X revenue growth in 5 years | $2T market size)
* Work closely with brilliant leaders and teammates from companies like Amazon, Better, LinkedIn, McKinsey, BCG, Bain
* Disrupt a massive market and take us to a $10B business in the next few years
* Our growth is driven by forward-thinking technology: Jerry.ai is getting mentioned in many conversations about our use of GenAI, such as this Forbes article
* Be immersed in a talent-dense environment and greatly accelerate your career growth
* Impact millions of users' experience with car maintenance and auto insurance
About the opportunity
We are looking for a detail-obsessed Applied Data Scientist to join our growing team. The ideal candidate is passionate about the foundational layer of all data analysis: ensuring our data is clean, accurate, and reliable. You are a detective at heart, driven by a deep curiosity to understand complex data systems, and someone who finds immense satisfaction in transforming messy, ambiguous datasets into pristine assets that drive critical business decisions.
This role is perfect for a scientist who wants to own the entire journey: from taking raw application data to generating clean inputs, to building models and delivering tangible value in real-world applications. You will be a central point of contact, communicating with engineers, product managers, and business analysts to ensure data integrity from collection to analysis.
How you will make an impact
* Model Ownership: Participate in the full modeling lifecycle, from statistical analysis and experimentation to building, validating, and iterating on machine learning models that address critical business challenges
* Data Quality: Own the data foundation by preparing, cleaning and transforming raw, complex data into high-quality features for modeling. Proactively identify and handle missing values, outliers, and inconsistencies
* Problem Investigation: Investigate data discrepancies (tracking bugs, ETL errors, definitional issues) and design automated frameworks to ensure data accuracy
* Cross-Functional Collaboration: Act as a strategic liaison, collaborating with data engineering and product teams to drive the data strategy and definition of our centralized feature store, ensuring it becomes the 'single source of truth' for all ML models
* Documentation: Create and maintain clear, authoritative documentation for data sources, cleaning processes, and variable definitions
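As a minimal sketch of the cleaning and feature-preparation work described above (the column names, outlier rule, and fill strategy are illustrative assumptions, not Jerry.ai specifics):

```python
# Minimal sketch: normalize categories, fill missing values, and cap outliers in pandas.
# Column names, the fill strategy, and the outlier rule are illustrative assumptions.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "quote_amount": [120.0, np.nan, 95.0, 10_000.0],   # one missing value, one outlier
    "state": ["GA", "ga", "CA", None],
})

clean = raw.copy()
clean["state"] = clean["state"].str.upper().fillna("UNKNOWN")            # normalize categories
clean["quote_amount"] = clean["quote_amount"].fillna(clean["quote_amount"].median())
cap = clean["quote_amount"].quantile(0.99)
clean["quote_amount"] = clean["quote_amount"].clip(upper=cap)            # cap extreme outliers

assert clean.isna().sum().sum() == 0, "cleaned frame should have no missing values"
print(clean)
```

Checks like the final assertion are the kind of rule an automated validation framework would run on every load.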
Who You Are
* Obsessed with Details: You have an exceptional eye for detail and a low tolerance for errors. You believe that accuracy and precision are non-negotiable
* A Strategic Thinker: You can see the bigger picture and are passionate about building robust systems and processes that will stand the test of time
* Proactive & Persistent: You don't wait for problems to find you. You actively seek out data quality issues and are persistent in seeing an investigation through to its resolution.
* Curious & Adaptive: You are inherently curious and possess a strong desire to understand how things work. You're comfortable with ambiguity and skilled at breaking down complex problems
Requirements
* Education: Bachelor's degree (PhD preferred) in a quantitative field (Statistics, Physics, Mathematics, etc.)
* Programming: Strong proficiency in Python (Pandas/NumPy) and SQL for complex querying and data manipulation
* Data Quality: Hands-on experience with data cleaning techniques and data validation frameworks
* Tooling: Familiarity with data visualization tools to help identify and communicate data issues
While we appreciate your interest and application, only applicants under consideration will be contacted.
Jerry.ai is proud to be an Equal Employment Opportunity employer. We prohibit discrimination based on race, religion, color, national origin, sex, pregnancy, reproductive health decisions or related medical conditions, sexual orientation, gender identity, gender expression, age, veteran status, disability, genetic information, or other characteristics protected by applicable local, state or federal laws.
Jerry.ai is committed to providing reasonable accommodations for individuals with disabilities in our job application process. If you need assistance or an accommodation due to a disability, please contact us at *******************
The successful candidate's starting pay will fall within the pay range listed on this job posting, determined based on job-related factors including, but not limited to, skills, experience, qualifications, work location, and market conditions. Ranges are market-dependent and may be modified in the future. In addition to base salary, the compensation may include opportunities for equity grants.
We offer a comprehensive benefits package to regular employees, including health, dental, and vision coverage, paid time off, paid parental leave, 401(K) plan with employer matching, and wellness benefits, among others. Equity opportunities may also be part of your total rewards package. Part-time, contract, or freelance roles may not be eligible for certain benefits.
About Jerry.ai:
Jerry.ai is America's first and only super app to radically simplify car ownership. We are redefining how people manage owning a car, one of their most expensive and time-consuming assets.
Backed by artificial intelligence and machine learning, Jerry.ai simplifies and automates owning and maintaining a car while providing personalized services for all car owners' needs. We spend every day innovating and improving our AI-powered app to provide the best possible experience for our customers. From car insurance and financing to maintenance and safety, Jerry.ai does it all.
We are the #1 rated and most downloaded app in our category with a 4.7 star rating in the App Store. We have more than 5 million customers - and we're just getting started.
Jerry.ai was founded in 2017 by serial entrepreneurs and has raised more than $240 million in financing.
Join our team and work with passionate, curious and egoless people who love solving real-world problems. Help us build a revolutionary product that's disrupting a massive market.
Data Scientist
Data scientist job in Atlanta, GA
Must Have Technical/Functional Skills
Proficiency in Python, SQL, and R programming languages. Hands-on experience with Domo, SAS, and web development languages. Strong knowledge of cloud platforms (AWS, Azure) and big data ecosystems (Hadoop). Expertise in machine learning techniques and statistical modeling.
Excellent data visualization and storytelling skills.
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or related field.
Preferred Skills
Experience with advanced analytics and big data tools.
Familiarity with NLP, deep learning frameworks, and modern visualization libraries.
Strong problem-solving and communication skills.
Roles & Responsibilities
* Design and implement predictive models and machine learning algorithms to solve business problems.
* Perform feature selection using techniques such as PCA, Lasso, and Elastic Net.
* Develop and deploy models using algorithms like Linear & Logistic Regression, SVM, Decision Trees, XGBoost, Hist Gradient Boosting, and LightGBM.
* Utilize cloud technologies (AWS, Azure, Hadoop) for scalable data solutions.
* Create interactive dashboards and visualizations using tools like Domo, Tableau, PowerBI, D3.js, and Matplotlib.
* Communicate insights through compelling data storytelling for diverse stakeholders.
* Collaborate with cross-functional teams to integrate data solutions into business processes.
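As a hedged illustration of the feature-selection-plus-boosting workflow listed above (synthetic data and hyperparameters are assumptions; only scikit-learn is used here):

```python
# Minimal sketch: L1 (Lasso-style) feature selection feeding a histogram gradient-boosting model.
# Synthetic data and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=2_000, n_features=50, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    # keep only features with non-zero L1-penalized coefficients
    ("select", SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    ("model", HistGradientBoostingClassifier(max_iter=200)),
])
pipeline.fit(X_train, y_train)
print("held-out accuracy:", round(pipeline.score(X_test, y_test), 3))
print("features kept:", int(pipeline.named_steps["select"].get_support().sum()))
```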
Salary Range: $100,000-$125,000 a year
#LI-KR3
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Scientist
Data scientist job in Atlanta, GA
Charter Global is a global Information Technology and Solutions provider with more than 19 years of proven track record in gaining customer confidence. We support our customer needs in Asia-Pacific, Europe and the Americas (North and Latin). Our customers are in a variety of vertical industries: Energy, Finance, Healthcare,
Government, Retail, Fulfillment Logistics, Manufacturing and Telecom.
Charter Global has honed the processes, methodologies, tools and functions to deliver software services via a Global Delivery Model which blends local and remote resources to get the job done, consistently.
Here are the job details:
Title: Data Scientist
Location: Atlanta, GA / Dallas, TX
Duration: 12+ Months
Job Description:
· Oversees and performs end-to-end, data-based research. Designs data mining solutions to be implemented and executed in alignment with the planned scope, design coverage, and needs/uses, leveraging knowledge and a broad understanding of E2E business processes and requirements.
· Defines the data analytics research plan, scope, and resources required to meet the objectives of his/her area of ownership (segments of the project/engagement). This includes: analyzing the data and identifying data sources; determining appropriate types of data analytics to be performed; selecting and applying data analysis techniques to ensure that recommendations and predictions provide an adequate level of confidence based on defined coverage criteria; and designing repeatable, verifiable, and traceable analytical models.
· Identifies and analyzes new data analytics directions and their potential business impact to determine the proper prioritization of data analytics activities based on business needs and analytics ROI.
· Identifies data sources, oversees the data collection process and designs the data structure in collaboration with data experts (BI or big-data) and subject matter and business experts. Ensures that data used in the data analysis activities are of the highest quality.
· Constructs data models (algorithms and formulas) for required business needs and predictions.
· Works closely with content experts and managers to pinpoint queries, map data, and validate results. Maintains and enhances the professional and business relationship with internal stakeholders.
· Facilitates presentations on how to use the data. Offers practical solutions to be used by the end decision makers.
· Presents results, including the preparation of patents and white papers and facilitating presentations during conferences.
· Effectively participates in and/or leads formal and informal reviews with stakeholders, applying knowledge and experience and providing insights from a professional perspective.
· Prepares results in such a way that they will be convenient to use. Provides professional support for team members and keeps them informed of assignment/project status. Provides training and support for other team members.
· Explores and examines data from multiple sources. The Data Scientist sifts through all incoming data with the goal of discovering previously hidden insights, which, in turn, can provide a competitive advantage or address a pressing business problem. A Data Scientist does not simply collect and report on data, but also looks at it from many angles, determines what it means, and then recommends ways to apply the data.
· Participates in technical designs of the wrapped product which will run the model. Ensures that the model is properly used and that data inputs are supplied correctly and in a timely manner. Verifies that the correct data population is being processed. Ensures that the output of the model is used correctly and that business objectives are being targeted.
· Participates in the development stages of the wrapped product, including design, development, QA, implementation, and production. This is particularly critical when the Development and Implementation teams find it difficult to understand and evaluate the model's objectives and benefits.
Critical Experiences:
· Master's in mathematics, statistics, computer science, or a related field; Ph.D. preferred
· 5 or more years of relevant quantitative and qualitative research and analytics experience
· Solid knowledge of statistical techniques
· The ability to come up with solutions to loosely defined business problems by leveraging pattern detection over potentially large datasets
· Strong programming skills (e.g., Java, Hadoop MapReduce, or other big data frameworks) and statistical modeling skills (e.g., SAS or R)
· Experience using machine learning algorithms
· High proficiency in the use of statistical packages
· Proficiency in statistical analysis, quantitative analytics, forecasting/predictive analytics, multivariate testing, and optimization algorithms
· Strong communication and interpersonal skills
· Knowledge of telecommunications and of the subject area being investigated is an advantage.
Additional Information
Technical Expertise
Senior Data Scientist
Data scientist job in Atlanta, GA
About Oversight
Oversight is the world's leading provider of AI-based spend management and risk mitigation solutions for large enterprises. Based in Atlanta, GA, Oversight works with many of the world's most innovative companies and government agencies to digitally transform their spend audit and financial control processes.
Oversight's AI-powered platform works across our customers' financial systems to continuously monitor and analyze all spend transactions for fraud, waste, and misuse. With a consolidated, consistent view of risk across their enterprise, customers can prevent financial loss and optimize spend while strengthening the controls that improve compliance. Learn More.
Position Overview: Job Purpose
Oversight Systems is a leading provider of cloud-based artificial intelligence solutions automating and analyzing financial payment transactions to identify fraud, non-compliant purchases, and wasteful spending. Oversight analyzes over $2 trillion in expenditures annually at Fortune 5000 companies and government agencies worldwide.
We are looking for an enthusiastic and talented Senior Data Scientist to join our team. You will collaborate with a strong team of data scientists, product managers, and developers on designing and building AI solutions to expand our analytic platform and spend risk products.
Responsibilities
Lead development projects, engaging with project stakeholders, collecting high-level requirements, testing different machine learning models, and designing and delivering systems with machine learning at their core
Design and maintain machine learning pipelines: data ingestion, feature engineering, modeling (including ensemble methods), prediction, explanation, deployment, and diagnosing overfitting
Research and develop analysis and optimization methods to improve the quality of Oversight's user-facing analytics
Identify and present opportunities to leverage advanced analytical methods to solve difficult, non-routine business problems
Provide technical direction, mentoring, training, and guidance to team members
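For illustration only, the "diagnosing overfitting" step in such a pipeline often amounts to comparing training scores against cross-validated scores, as in this hedged sketch (synthetic data and model settings are assumptions):

```python
# Minimal sketch: fit an ensemble and flag overfitting by comparing the training
# score against cross-validated scores. Data and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3_000, n_features=30, random_state=1)

model = RandomForestClassifier(n_estimators=300, random_state=1)
cv_scores = cross_val_score(model, X, y, cv=5)   # accuracy by default for classifiers
model.fit(X, y)
train_score = model.score(X, y)

print(f"train accuracy: {train_score:.3f}")
print(f"5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
# a large gap between the training score and the CV score is a classic overfitting
# signal; constrain the model (e.g., max_depth, min_samples_leaf) if it appears
```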
Skills
Applied experience with machine learning on large datasets
Strong SQL experience in processing and analyzing data
Strong quantitative analytical experience, including hands-on use of statistics, regression modeling, decision trees, random forest, support vector machine, kernel-based methods, clustering, and similar methods
Ability to analyze a wide variety of data: structured and unstructured, observational, and experimental, to drive system designs and product implementations
Knowledge of probability and statistics, including experimental design, predictive modeling, optimization, and causal inference
Understanding of the different machine learning approaches and techniques and able to research and test new ones
Experience articulating and translating business questions and using statistical techniques to arrive at an answer using available data
Demonstrated leadership and self-direction
Hands-on experience with Linux distributions
Strong communication skills, data presentation skills, and ability to collaborate with other teams
Experience in both R&D environment and software product development is a plus
Experience
Must possess a minimum of 3 years work experience using Python
Education
Bachelor's degree in Computer Science with Data Science Emphasis and 5 years of Data Science experience (emphasis in Machine Learning)
Data Scientist
Data scientist job in Alpharetta, GA
As a Data Scientist, you will work in teams addressing statistical, machine learning, and artificial intelligence problems in a commercial technology and consultancy development environment. You will be part of a data science or cross-disciplinary team driving AI business solutions involving large, complex data sets. Potential application areas include time series forecasting, machine learning regression and classification, root cause analysis (RCA), simulation and optimization, large language models, and computer vision. The ideal candidate will be responsible for developing and deploying machine learning models in production environments. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with data engineers, analysts, and other stakeholders.
**Roles and Responsibilities** :
+ Design, develop, and deploy machine learning models and algorithms under guidance from senior team members
+ Develop, verify, and validate analytics to address customer needs and opportunities.
+ Work in technical teams in development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics.
+ Develop and maintain pipelines for Retrieval-Augmented Generation (RAG) and Large Language Models (LLM).
+ Ensure efficient data retrieval and augmentation processes to support LLM training and inference.
+ Participate in Data Science Workouts to shape Data Science opportunities and identify opportunities to use data science to create customer value.
+ Perform exploratory and targeted data analyses using descriptive statistics and other methods.
+ Work with data engineers on data quality assessment, data cleansing, data analytics, and model productionization
+ Generate reports, annotated code, and other project artifacts to document, archive, and communicate your work and outcomes.
+ Communicate methods, findings, and hypotheses with stakeholders.
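As a hedged, self-contained sketch of the retrieval step in a RAG pipeline of the kind mentioned above (the corpus, query, and prompt format are illustrative assumptions; a production system would use learned embeddings and a vector store rather than TF-IDF):

```python
# Minimal sketch of RAG-style retrieval: index a few documents, pull the most
# relevant ones for a query, and assemble a grounded prompt for an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Turbine blade inspection intervals are defined in the maintenance manual.",
    "Fuel nozzle replacement requires a borescope inspection first.",
    "Paint specifications for ground equipment are unrelated to engine maintenance.",
]
query = "When should turbine blades be inspected?"

vectorizer = TfidfVectorizer().fit(docs)
scores = cosine_similarity(vectorizer.transform([query]), vectorizer.transform(docs))[0]
top_k = scores.argsort()[::-1][:2]                 # indices of the two best matches

context = "\n".join(docs[i] for i in top_k)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)   # this prompt would then be passed to the LLM for grounded generation
```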
**Minimum Qualifications:**
+ Bachelor's Degree in Computer Science or "STEM" Majors (Science, Technology, Engineering and Math) with a minimum of 2+ years of experience
+ 2 years of proficiency in Python.
+ Familiarity with statistical machine learning techniques as applied to business problems
+ Strong analytical and problem-solving skills.
+ Strong communication and collaboration skills.
+ **Note:** Military experience is equivalent to professional experience
Eligibility Requirement:
+ Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job.
**Desired Characteristics:**
+ Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their machine learning services.
+ Experience with handling unstructured data, including images, videos, and text
+ Ability to work in a fast-paced, dynamic environment.
+ Experience with data preprocessing and augmentation tools.
+ Demonstrated experience applying critical thinking and problem-solving
+ Demonstrated experience working in team settings in various roles
+ Strong presentation and communications skills
**Note:**
To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years' experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, the JDs should focus on the substantive level of experience required for the role and a minimum number of years should NOT be used.
This Job Description is intended to provide a high level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.
_This role requires access to U.S. export-controlled information. Therefore, employment will be contingent upon the ability to prove that you meet the status of a U.S. Person as one of the following: U.S. lawful permanent resident, U.S. Citizen, have been granted asylee or refugee status (i.e., a protected individual under the Immigration and Naturalization Act, 8 U.S.C. 1324b(a)(3))._
**Additional Information**
GE Aerospace offers a great work environment, professional development, challenging careers, and competitive compensation. GE Aerospace is an Equal Opportunity Employer (****************************************************************************************** . Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
GE Aerospace will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditioned upon the successful completion of a drug screen (as applicable).
**Relocation Assistance Provided:** No
GE Aerospace is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
Data Scientist
Data scientist job in Atlanta, GA
THE FIRM
As a leading international law firm, we are dedicated to excellence through impactful communication, collaboration, and community involvement. Our company culture has earned us a place on the "100 Best Companies to Work For" list for 26 consecutive years. This honor, along with many others, highlights our commitment to innovation and professional development. At Alston & Bird LLP, our foundation is made of trust, reliability, and compassion.
JOB DESCRIPTION
We are in search of a visionary Data Scientist to join our Practice Innovation team, a group dedicated to transforming how legal services are delivered through cutting-edge technology. In this role, you'll lead the charge in deploying enterprise AI platforms like Microsoft Copilot Studio and Azure OpenAI, helping to craft smarter, data-driven legal strategies. If you're passionate about legal tech, fluent in data science, and excited by the possibilities of AI, we'd love to meet you.
ESSENTIAL DUTIES
Design, develop, and maintain predictive models to support administrative and legal departments.
Configure and deploy AI solutions using Copilot Studio, Azure OpenAI, and other Microsoft 365 AI tools to enhance legal workflows.
Collaborate with attorneys, innovation leads, and IT to identify opportunities for automation and data-driven decision-making.
Build and maintain dashboards and analytics tools for internal stakeholders (e.g., attorney development, DEI, legal operations).
Evaluate third-party legal tech tools and assist in integration with firm systems.
Support AI governance, including ethical use, data privacy, and compliance with firm policies.
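As one hedged example of wiring a workflow to an Azure OpenAI deployment (openai Python client v1+ assumed; the endpoint, deployment name, API version, and prompt are placeholders, not firm systems):

```python
# Minimal sketch of calling an Azure OpenAI chat deployment from Python.
# Endpoint, deployment name, API version, and prompt are illustrative placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],   # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                              # assumed API version
)

response = client.chat.completions.create(
    model="legal-summarizer",   # hypothetical deployment name
    messages=[
        {"role": "system", "content": "Summarize matter descriptions in two sentences."},
        {"role": "user", "content": "Matter: contract dispute over software licensing terms."},
    ],
)
print(response.choices[0].message.content)
```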
SKILLS NEEDED TO BE SUCCESSFUL
Knowledge of legal data types (e.g., timekeeping, matter metadata, court filings).
Understanding of AI governance, model evaluation, and responsible AI practices.
Strong communication skills and ability to translate technical concepts for non-technical stakeholders.
Preferred: experience working in or with law firms or legal departments.
EDUCATION AND EXPERIENCE
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or related field preferred.
1-3+ years of experience in a data science or machine learning role, preferably in a professional services or legal environment.
Proficiency in Python, SQL, and data visualization tools (e.g., Power BI, Tableau).
Experience with Copilot Studio, Azure OpenAI, or similar enterprise AI platforms.
Familiarity with natural language processing (NLP), large language models (LLMs), and prompt engineering.
EQUAL OPPORTUNITY EMPLOYER
Alston & Bird LLP is an Equal Opportunity Employer and does not discriminate on the basis of any status protected under federal, state, or local law. Applicants will be considered regardless of their sex, race, age, religion, color, national origin, ancestry, physical disability, mental disability, medical condition (associated with cancer, a history of cancer, or genetic characteristics), HIV/AIDS status, genetic information, marital status, sexual orientation, gender, gender identity, gender expression, military and veteran status, or any other category protected under the law, in relation to our recruiting, hiring, and promoting practices.
The statements contained in this position description are not necessarily all-inclusive, additional duties and responsibilities may be assigned, and requirements may vary from time to time.
Professional business references and a background screening will be required for all final applicants selected for a position.
If you need assistance or an accommodation due to a disability you may contact *************************.
Alston & Bird is not currently accepting resumes from agencies for this position. If you are a recruiter, search firm, or employment agency, you will not be compensated in any way for your referral of a candidate even if Alston & Bird hires the candidate.
Senior Data Scientist
Data scientist job in Atlanta, GA
**What Data Science contributes to Cardinal Health**
The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytics products, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets that integrate multiple systems.
This role will support the Major Rugby business unit, a legacy supplier of multi-source, generic pharmaceuticals for over 60 years. Major Rugby provides over 1,000 high-quality, Rx, OTC and vitamin, mineral and supplement products to the acute, retail, government and consumer markets. This role will focus on leveraging advanced analytics, machine learning, and optimization techniques to solve complex challenges related to demand forecasting, inventory optimization, logistics efficiency and risk mitigation. Our goal is to uncover insights and drive meaningful deliverables to improve decision making and business outcomes.
**Responsibilities:**
+ Leads the design, development, and deployment of advanced analytics and machine learning models to solve complex business problems
+ Collaborates cross-functionally with product, engineering, operations, and business teams to identify opportunities for data-driven decision-making
+ Translates business requirements into analytical solutions and delivers insights that drive strategic initiatives
+ Develops and maintains scalable data science solutions, ensuring reproducibility, performance, and maintainability
+ Evaluates and implements new tools, frameworks, and methodologies to enhance the data science toolkit
+ Drives experimentation and A/B testing strategies to optimize business outcomes
+ Mentors junior data scientists and contributes to the development of a high-performing analytics team
+ Ensures data quality, governance, and compliance with organizational and regulatory standards
+ Stays current with industry trends, emerging technologies, and best practices in data science and AI
+ Contributes to the development of internal knowledge bases, documentation, and training materials
**Qualifications:**
+ 8-12 years of experience in data science, analytics, or a related field (preferred)
+ Advanced degree (Master's or Ph.D.) in Data Science, Computer Science, Engineering, Operations Research, Statistics, or a related discipline preferred
+ Strong programming skills in Python and SQL
+ Proficiency in data visualization tools such as Tableau or Looker, with a proven ability to translate complex data into clear, actionable business insights
+ Deep understanding of machine learning, statistical modeling, predictive analytics, and optimization techniques
+ Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is highly desirable
+ Excellent communication and storytelling skills, with the ability to influence stakeholders and present findings to both technical and non-technical audiences
+ Experience in Supervised and Unsupervised Machine Learning, including Classification, Forecasting, Anomaly Detection, Pattern Detection, and Text Mining, using a variety of techniques such as Decision Trees, Time Series Analysis, Bagging and Boosting algorithms, Neural Networks, Deep Learning, and Natural Language Processing (NLP); see the illustrative forecasting sketch after this list.
+ Experience with PyTorch or other deep learning frameworks
+ Strong understanding of RESTful APIs and/or data streaming is a big plus
+ Experience with modern version control (GitHub, Bitbucket) required
+ Hands-on experience with containerization (Docker, Kubernetes, etc.)
+ Experience with product discovery and design thinking
+ Experience with Gen AI
+ Experience with supply chain analytics is preferred
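The forecasting and boosting experience called out above can be illustrated with a small, self-contained example. This is not Cardinal Health code; it is a minimal sketch, assuming a synthetic daily demand series and made-up column names, of a gradient-boosted one-step-ahead forecast built from lag and calendar features.

```python
# Illustrative only -- a minimal demand-forecasting sketch with lag features
# and a gradient-boosted regressor. The data and column names are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
dates = pd.date_range("2022-01-01", periods=730, freq="D")
demand = 100 + 10 * np.sin(2 * np.pi * dates.dayofyear / 365) + rng.normal(0, 5, len(dates))
df = pd.DataFrame({"date": dates, "units": demand})

# Lag and calendar features for a one-step-ahead forecast.
for lag in (1, 7, 28):
    df[f"lag_{lag}"] = df["units"].shift(lag)
df["dayofweek"] = df["date"].dt.dayofweek
df = df.dropna()

# Time-ordered split: train on the past, evaluate on the most recent 90 days.
features = [c for c in df.columns if c.startswith("lag_")] + ["dayofweek"]
train, test = df.iloc[:-90], df.iloc[-90:]

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(train[features], train["units"])
pred = model.predict(test[features])
print(f"MAE on holdout: {mean_absolute_error(test['units'], pred):.2f}")
```

In practice the same pattern would be extended with richer features (promotions, pricing, seasonality) and time-series-aware cross-validation before any production use.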
**Anticipated salary range:** $123,400 - $176,300
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before payday with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 12/02/2025. If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
\#LI-Remote
\#LI-AP4
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
Data Engineer w/ Python & SQL
Data scientist job in Alpharetta, GA
We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.
MUST HAVES
A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies.
Solid understanding of and hands-on experience with major cloud platforms.
Experience designing and implementing data pipelines.
Strong Python, SQL, and GCP skills.
Responsibilities
Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer (see the pipeline sketch after this list).
Develop and tune BigQuery models, queries, and ingestion processes.
Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
Ensure data governance, security, and reliable pipeline operations.
Collaborate with data science teams and support Vertex AI-based ML workflows.
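As a rough illustration of the streaming work described above, here is a minimal sketch, assuming placeholder project, topic, bucket, and table names, of an Apache Beam pipeline that reads JSON events from Pub/Sub and appends them to BigQuery; on GCP it would run on the Dataflow runner.

```python
# Illustrative only -- a minimal Pub/Sub -> parse -> BigQuery streaming sketch.
# Project, topic, bucket, and table names below are placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",             # swap for "DirectRunner" to test locally
    project="my-gcp-project",            # placeholder
    region="us-east1",
    temp_location="gs://my-bucket/tmp",  # placeholder
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-gcp-project/topics/events")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-gcp-project:analytics.events",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```

A real deployment would layer schema management, dead-letter handling, data quality checks, and monitoring on top of this skeleton.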
Must-Have
3-5+ years of data engineering experience.
Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
Solid ETL/ELT and data modeling experience.
Nice-to-Have
GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
Data Scientist
Data scientist job in Atlanta, GA
Charter Global is a global Information Technology and Solutions provider with a proven track record of more than 19 years in gaining customer confidence. We support our customer needs in Asia-Pacific, Europe, and the Americas (North and Latin). Our customers span a variety of vertical industries: Energy, Finance, Healthcare, Government, Retail, Fulfillment Logistics, Manufacturing, and Telecom.
Charter Global has honed the processes, methodologies, tools, and functions to deliver software services via a Global Delivery Model that blends local and remote resources to get the job done consistently.
Here is the Job details:
Title: Data Scientist
Location: Atlanta, GA / Dallas, TX
Duration: 12+ Months
Job Description:
· Oversees and performs end-to-end, data-based research. Designs data mining solutions to be implemented and executed in alignment with the planned scope, design coverage, and needs/uses, leveraging knowledge and a broad understanding of end-to-end (E2E) business processes and requirements.
· Defines the data analytics research plan, scope, and resources required to meet the objectives of his/her area of ownership (segments of the project/engagement). This includes:
- Analyzing the data and identifying data sources
- Determining appropriate types of data analytics to be performed
- Selecting and applying data analysis techniques to ensure that recommendations and predictions provide an adequate level of confidence based on defined coverage criteria
- Designing repeatable, verifiable, and traceable analytical models
· Identifies and analyzes new data analytics directions and their potential business impact to determine the proper prioritization of data analytics activities based on business needs and analytics ROI.
· Identifies data sources, oversees the data collection process and designs the data structure in collaboration with data experts (BI or big-data) and subject matter and business experts. Ensures that data used in the data analysis activities are of the highest quality.
· Constructs data models (algorithms and formulas) for required business needs and predictions.
· Works closely with content experts and managers to pinpoint queries, map data, and validate results. Maintains and enhances the professional and business relationship with internal stakeholders.
· Facilitates presentations on how to use the data. Offers practical solutions to be used by the end decision makers.
· Presents results, including the preparation of patents and white papers and facilitating presentations during conferences.
· Effectively participates in and/or leads formal and informal reviews with stakeholders, applying knowledge and experience and providing insights from a professional perspective.
· Prepares results in such a way that they are convenient to use. Provides professional support for team members and keeps them informed of assignment/project status. Provides training and support for other team members.
· Explores and examines data from multiple sources. The Data Scientist sifts through all incoming data with the goal of discovering previously hidden insights, which, in turn, can provide a competitive advantage or address a pressing business problem. A Data Scientist does not simply collect and report on data, but also looks at it from many angles, determines what it means, and then recommends ways to apply the data.
· Participates in technical designs of the wrapped product that will run the model. Ensures that the model is properly used and that data inputs are supplied correctly and in a timely manner. Verifies that the correct data population is being processed. Ensures that the output of the model is used correctly and that business objectives are being targeted.
· Participates in the development stages of the wrapped product, including design, development, QA, implementation, and production. This is particularly critical when the Development and Implementation teams find it difficult to understand and evaluate the model's objectives and benefits.
Critical Experiences:
· Master's degree in mathematics, statistics, computer science, or a related field; Ph.D. preferred
· 5 or more years of relevant quantitative and qualitative research and analytics experience
· Solid knowledge of statistical techniques
· The ability to come up with solutions to loosely defined business problems by leveraging pattern detection over potentially large datasets
· Strong programming skills (such as Java, Hadoop MapReduce, or other big data frameworks) and statistical modeling skills (such as SAS or R)
· Experience using machine learning algorithms
· High proficiency in the use of statistical packages
· Proficiency in statistical analysis, quantitative analytics, forecasting/predictive analytics, multivariate testing, and optimization algorithms
· Strong communication and interpersonal skills
· Knowledge of telecommunications and of the subject area being investigated is an advantage.
Additional Information
Technical Expertise
Staff Data Scientist
Data scientist job in Alpharetta, GA
As a Staff Data Scientist, you will work in teams addressing statistical, machine learning, and artificial intelligence problems in a commercial technology and consultancy development environment. You will be part of a data science or cross-disciplinary team driving AI business solutions involving large, complex data sets. Potential application areas include time series forecasting, machine learning regression and classification, root cause analysis (RCA), simulation and optimization, large language models, and computer vision. The ideal candidate will be responsible for developing and deploying machine learning models in production environments. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with data engineers, analysts, and other stakeholders.
**Roles and Responsibilities**:
+ Design, develop, and deploy machine learning models and algorithms
+ Understand business problems and identify opportunities to implement data science solutions.
+ Develop, verify, and validate analytics to address customer needs and opportunities.
+ Work in technical teams in development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics.
+ Develop and maintain pipelines for Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs); a minimal retrieval sketch follows this list.
+ Ensure efficient data retrieval and augmentation processes to support LLM training and inference.
+ Utilize semantic and ontology technologies to enhance data integration and retrieval. Ensure data is semantically enriched to support advanced analytics and machine learning models.
+ Participate in Data Science Workouts to shape Data Science opportunities and identify opportunities to use data science to create customer value.
+ Perform exploratory and targeted data analyses using descriptive statistics and other methods.
+ Work with data engineers on data quality assessment, data cleansing, data analytics, and model productionization
+ Generate reports, annotated code, and other project artifacts to document, archive, and communicate your work and outcomes.
+ Communicate methods, findings, and hypotheses with stakeholders.
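To make the RAG responsibility above concrete, here is a minimal sketch of the retrieval step only, using TF-IDF cosine similarity as a stand-in for a proper vector store; the documents, the query, and the `retrieve` helper are made up for illustration and do not represent GE Aerospace's pipeline.

```python
# Illustrative only -- the retrieval step of a RAG pipeline, with TF-IDF
# similarity standing in for an embedding model plus vector database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Turbine blade inspection intervals are defined in the maintenance manual.",
    "Root cause analysis traces a failure back to its originating condition.",
    "Time series forecasting predicts future demand from historical signals.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors).ravel()
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages would then be prepended to the LLM prompt ("augmentation").
print(retrieve("why did the component fail?"))
```

In a production pipeline the passages would instead be embedded with a dedicated model, stored in a vector database, and injected into the LLM prompt at inference time.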
**Minimum Qualifications:**
+ Bachelor's degree from an accredited university or college with a minimum of **3** years of professional experience, OR an associate's degree with a minimum of **5** years of professional experience
+ 3 years of proficiency in Python (mandatory).
+ 2 years' experience with machine learning frameworks and deploying models into production environments
+ **Note:** Military experience is equivalent to professional experience
**Eligibility Requirement:**
+ Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job.
**Desired Characteristics:**
+ Strong analytical and problem-solving skills.
+ Excellent communication and collaboration abilities.
+ Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud, Databricks) and their machine learning services.
+ Experience with handling unstructured data, including images, videos, and text.
+ Understanding of computer vision techniques and tools
+ Ability to work in a fast-paced, dynamic environment.
+ Experience with data preprocessing and augmentation tools.
+ Demonstrated expertise in critical thinking and problem-solving methods
+ Demonstrated skill in defining and delivering customer value.
+ Demonstrated expertise working in team settings in various roles
+ Demonstrated expertise in presentation and communications skills.
+ Experience with deep learning and neural networks.
+ Knowledge of data governance and compliance standards.
**Note:**
To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years' experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, the JDs should focus on the substantive level of experience required for the role and a minimum number of years should NOT be used.
This Job Description is intended to provide a high level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.
_This role requires access to U.S. export-controlled information. Therefore, for applicants who are not U.S. lawful permanent residents, U.S. Citizens, or have been granted asylee or refugee status (i.e., not a protected individual under the Immigration and Naturalization Act, 8 U.S.C. 1324b(a)(3), otherwise known as a U.S. Person), employment will be contingent on the ability to obtain authorization for access to U.S. export-controlled information from the U.S. Government._
**Additional Information**
GE Aerospace offers a great work environment, professional development, challenging careers, and competitive compensation. GE Aerospace is an Equal Opportunity Employer (****************************************************************************************** . Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
GE Aerospace will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditioned upon the successful completion of a drug screen (as applicable).
**Relocation Assistance Provided:** Yes