
Data scientist jobs in Albany, GA

- 1,516 jobs
  • Data Scientist

    Coforge

    Data scientist job in Atlanta, GA

    Role: Data Scientist
    Mode of Hire: Full Time

    Key Responsibilities
    • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
    • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
    • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
    • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
    • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
    • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration.
    • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.

    Skills Required:
    • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
    • 5+ years of experience in a data science, ML engineering, or analytics role.
    • Strong programming skills in SQL and Python, plus a solid command of ML techniques.
    • Experience with Azure Cloud, Databricks, and/or Snowflake.
    • Experience building and deploying machine learning models in production environments, including hands-on experience with Databricks, SparkML, and MLflow integration.
    • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
    • Experience with tools such as Git, MLflow, Docker, and workflow schedulers.
    • Ability to communicate complex technical work to non-technical stakeholders.
    • Experience with scalable model training environments and distributed computing.

    Preferred Qualifications
    • Master's degree in a quantitative or technical discipline.
    • Experience in financial services, fintech, or enterprise B2B analytics.
    • Knowledge of A/B testing, causal inference, and statistical experimentation.
    • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 3d ago
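The "data wrangling & feature engineering" step this posting describes can be sketched in plain Python; the record fields, cleaning rules, and bucketing feature below are hypothetical illustrations, not from the posting.

```python
import math

def clean_records(rows):
    """Drop rows with missing amounts and normalize merchant names."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue
        cleaned.append({
            "merchant": row["merchant"].strip().lower(),
            "amount": float(row["amount"]),
        })
    return cleaned

def add_features(rows):
    """Add a log-scale amount bucket as an engineered feature."""
    for row in rows:
        row["amount_bucket"] = int(math.log10(max(row["amount"], 1)))
    return rows

raw = [
    {"merchant": "  Acme ", "amount": "120.50"},
    {"merchant": "Globex", "amount": None},   # dropped: missing amount
    {"merchant": "Initech", "amount": "9.99"},
]
features = add_features(clean_records(raw))
```

In a production pipeline the same clean/derive steps would run inside the Snowflake or Databricks jobs the posting mentions; the structure, not the toolchain, is the point here.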
  • Data Scientist

    The Energy Authority (4.1 company rating)

    Data scientist job in Jacksonville, FL

    Hybrid - Jacksonville, FL

    The Energy Authority is a public power-owned, nonprofit corporation with offices in Jacksonville, Florida and Bellevue (Seattle), Washington. TEA provides public power utilities with access to advanced resources and technology systems so they can respond competitively in the changing energy markets. Through partnership with TEA, utilities benefit from an experienced organization that is singularly focused on deriving the maximum value of their assets from the market.

    Quantitative Analytics & Research Analyst

    The energy landscape is evolving, and so are the challenges our clients face. TEA's Quantitative Analytics & Research team is looking for an innovative and analytical mind to help transform complex market trends into cutting-edge solutions. If you're passionate about machine learning, optimization, and statistical modeling and thrive in a fast-paced environment where curiosity meets impact, this is your opportunity to make a difference!

    Why You'll Love This Role:
    • Build models that drive the future of energy markets and empower clients to make smarter, data-driven decisions.
    • Work alongside a collaborative team of data scientists and industry experts who thrive on tackling complex challenges.
    • Explore new ideas, experiment with innovative solutions, and see your work deliver impact across a wide range of clients.

    What You'll Do:
    • Clean, transform, and validate structured and unstructured data to ensure data quality and reliability.
    • Maintain existing machine learning and optimization models, updating them as needed.
    • Research and experiment with new methods to enhance current models and develop new solutions.
    • Collaborate with cross-functional teams to maintain data pipelines.
    • Collaborate with cross-functional teams to understand business requirements and deliver solutions effectively.
    • Document workflows, methodologies, and data sources to ensure transparency and reproducibility.

    What You Bring to the Team:
    • Passion for analytics and problem-solving, with a drive to find meaningful insights in complex data.
    • Strong skills in machine learning, predictive modeling, and optimization techniques such as regression, neural networks, and feature engineering.
    • Solid understanding of electricity markets, including economic dispatch, marginal pricing, and market clearing rules.
    • Proficiency in R or Python, with working knowledge of web development (HTML, CSS, JavaScript, R Shiny), stochastic approaches, and database queries.

    Your Growth Path at TEA:
    • Level 1 Data Scientist: Operate and maintain models, gaining hands-on experience with TEA's tools and industry functions.
    • Level 2 Data Scientist: Design and refine analytical models, assess their effectiveness, and contribute to cutting-edge solutions.
    • Level 3 Data Scientist: Provide strategic insights, manage projects, and support TEA's business development initiatives.
    • Level 4 Senior Data Scientist: Serve as a subject matter expert, mentor team members, and lead high-impact initiatives.

    What You Need to Succeed:
    • Bachelor's and/or Master's in Data Science, Computer Science, Engineering, or another relevant field.
    • Industry experience analyzing data and maintaining and developing models is preferred.
    • Proficiency in Python or R.
    • Strong foundation in statistics, machine learning, and optimization.
    • Background in energy and power systems is preferred.
    • Eagerness to learn, receive feedback, and grow technical depth.
    • Strong problem-solving and analytical thinking.
    • Good communication skills; ability to understand requirements and present results to stakeholders.

    Tools & Technology You'll Work With:
    • Microsoft Word, Excel/Visual Basic, and other advanced analysis tools.
    • Large-scale utility simulation software such as PROMOD, PLEXOS, AuroraXMP, GE MAPS, Dayzer, PSS/E, and PowerWorld.
    • Decision analysis techniques such as decision trees and influence diagrams.

    Compensation & Benefits:
    • Competitive Salary: Based on experience, level, and location.
    • Incentives: Annual target short-term incentive rate of 17.5% of base salary based on corporate, team, and individual performance.
    • Benefits: Paid Time Off starting at 17 days per year, plus nine paid holidays. 401(a) employer contribution and 457(b) deferred compensation retirement savings plans. Medical, dental, vision, life, AD&D, and disability insurance. Health savings and flexible spending accounts.
    • A great work-life balance and visibility to senior executives in a supportive company culture.

    TEA Values
    TEA employees share a common sense of purpose. When TEA accomplishes its mission, the result is improved quality of life for the citizens and businesses of the communities our clients serve. TEA employees exceed the expectations of those they serve, deliver services with the highest standards of fair, honest, and ethical behavior, set the standard for service and expertise in our industry, embody a spirit of collaboration, and embrace TEA's founding entrepreneurial spirit by seizing opportunities to deliver value. If you are self-motivated, driven to deliver excellence, and passionate about your career, TEA is the perfect place for you. It's YOUR Future. It's OUR Future.
    $71k-93k yearly est. 1d ago
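The "economic dispatch" and "marginal pricing" concepts the TEA posting names can be illustrated with a minimal merit-order sketch: dispatch the cheapest generators first until load is met, and the cost of the last unit dispatched sets the clearing price. The generator names, capacities, and costs below are invented for illustration.

```python
def dispatch(generators, load):
    """generators: list of (name, capacity_mw, cost_per_mwh) tuples.
    Returns (schedule, marginal_price) for a merit-order dispatch."""
    schedule = {}
    marginal_price = 0.0
    remaining = load
    # Dispatch in ascending cost order (the "merit order").
    for name, capacity, cost in sorted(generators, key=lambda g: g[2]):
        if remaining <= 0:
            break
        mw = min(capacity, remaining)
        schedule[name] = mw
        marginal_price = cost  # the marginal unit's cost clears the market
        remaining -= mw
    return schedule, marginal_price

gens = [("coal", 300, 25.0), ("gas", 200, 40.0), ("peaker", 100, 90.0)]
schedule, price = dispatch(gens, load=400)
# coal runs at its full 300 MW, gas covers the remaining 100 MW,
# so the market clears at the gas unit's $40/MWh.
```

Real markets add network constraints, ramp limits, and clearing rules on top of this, which is where the mixed-integer and stochastic methods in the posting come in.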
  • Applied Data Scientist Regulatory and Statistical

    Moffitt Cancer Center (4.9 company rating)

    Data scientist job in Tampa, FL

    Shape the Future of Predictive Medicine. One-Year Project. Innovators Wanted!

    Are you driven by curiosity, energized by ambiguity, and passionate about transforming healthcare? Dr. Ciara Freeman at the esteemed Moffitt Cancer Center is searching for a bold, entrepreneurial-minded Applied Data Scientist - Regulatory and Statistical for a dynamic, one-year project to help build the first regulatory-grade AI models that predict therapy response in multiple myeloma. You'll partner with a physician-scientist PI and a data-engineering team to prototype, validate, and document predictive models designed for clinical use. This is hands-on translational ML: fast iteration, real impact, auditable results. Your models will form the core of clinically actionable, auditable AI systems that change how we treat cancer.

    Ideal Candidate:
    • Expert in Python (scikit-learn, XGBoost, PyTorch/TensorFlow).
    • Skilled in survival or clinical modeling; thrives where rigor meets speed.
    • A startup thinker with a thirst for discovery.
    • Thrives in fast-paced, risk-friendly environments.
    • A problem solver who sees challenges as opportunities.
    • A team player eager to see their ideas put into action.

    Responsibilities:
    • Develop and validate multimodal survival and risk-stratification models (clinical + omics + imaging).
    • Collaborate with engineers to define and extract features.
    • Perform calibration, bias analysis, and explainability work (SHAP, PDPs, model cards).
    • Translate results into clinician-friendly insights and contribute to IP and regulatory filings.

    Credentials & Qualifications:
    • Master's degree in Computer Science, Data Science, Biostatistics, or a related quantitative field with seven (7) years of applied statistical or machine learning model development experience in healthcare, biotech, or regulated environments; or
    • PhD with five (5) years of applied statistical or machine learning model development experience in healthcare, biotech, or regulated environments.
    • Familiarity with Snowflake or modern data-engineering workflows preferred.

    Join a project that's not just about data - it's about revolutionizing patient care. Help us bridge the gap between today's personalized medicine and tomorrow's predictive breakthroughs. If you're ready to take risks, drive results, and change the future of medicine, apply today!

    Moffitt Cancer Center proudly stands as a Comprehensive Cancer Center designated by the National Cancer Institute (NCI) in the vibrant city of Tampa, Florida. This dynamic city is an exceptional choice for those seeking exciting opportunities in a rapidly growing metropolitan area. With its flourishing economy and rich cultural diversity, the Tampa Bay region masterfully combines urban elegance with breathtaking natural beauty. Discover why countless individuals have chosen to make Tampa their home and experience firsthand what makes it one of the fastest-growing metropolitan cities in the United States.
    $64k-86k yearly est. 1d ago
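The survival-modeling skill this posting asks for rests on estimators like Kaplan-Meier, which handle censored follow-up (patients still event-free when observation ends). A bare-bones sketch, with invented durations and event flags, looks like this:

```python
def kaplan_meier(durations, events):
    """Return [(time, survival_prob)] at each observed event time.
    events: 1 = event observed (e.g., relapse), 0 = censored."""
    n_at_risk = len(durations)
    surv = 1.0
    curve = []
    for t in sorted(set(durations)):
        # events at time t
        d = sum(1 for dur, e in zip(durations, events) if dur == t and e == 1)
        if d > 0:
            surv *= 1 - d / n_at_risk  # product-limit update
            curve.append((t, surv))
        # everyone with duration == t (event or censored) leaves the risk set
        n_at_risk -= sum(1 for dur in durations if dur == t)
    return curve

# Hypothetical follow-up in months; 0-flagged patients are censored.
durations = [3, 5, 5, 8, 10, 12]
events    = [1, 1, 0, 1, 0, 0]
curve = kaplan_meier(durations, events)
```

Production work would use a vetted library (e.g., lifelines or scikit-survival) plus the calibration and SHAP-based explainability steps the posting lists; this only shows the censoring logic.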
  • Data Scientist

    Tata Consultancy Services (4.3 company rating)

    Data scientist job in Sunrise, FL

    Data Scientist (Python / PostgreSQL / PyTest)

    Must-Have Technical/Functional Skills
    We are seeking a Data Scientist with strong expertise in Python, PostgreSQL, and PyTest to join our team. The role involves building advanced data models, performing statistical analysis, and implementing automated testing frameworks for enterprise-grade solutions.

    Roles & Responsibilities
    • Develop and deploy data science models and algorithms using Python.
    • Perform data extraction, transformation, and analysis leveraging PostgreSQL and Pandas.
    • Design and implement robust automated test suites using PyTest for data pipelines and models.
    • Conduct exploratory data analysis (EDA) and feature engineering for predictive modeling.
    • Collaborate with stakeholders to define data-driven solutions and business insights.
    • Optimize SQL queries and database structures for performance and scalability.
    • Integrate models into production systems with CI/CD pipelines and testing gates.
    • Ensure data quality, security, and compliance with enterprise standards.
    • Document methodologies and workflows, and maintain reproducibility of experiments.
    • Provide production support and troubleshoot data-related issues.

    Required Qualifications
    • 5+ years of experience in Data Science and Python programming.
    • Strong knowledge of statistical modeling, machine learning techniques, and data visualization.
    • Expertise in PostgreSQL (schema design, query optimization, indexing).
    • Proficiency in PyTest for automated testing and CI/CD integration.
    • Hands-on experience with Pandas and NumPy for data manipulation.
    • Familiarity with Git-based workflows and CI/CD tools.
    • Strong problem-solving skills and ability to communicate technical concepts clearly.

    Preferred Skills
    • Experience with SQLAlchemy, Airflow, or Prefect for workflow orchestration.
    • Knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker).
    • Exposure to big data technologies (Spark, Hadoop) and streaming platforms (Kafka).
    • Prior experience in the BFSI domain.

    Keywords: Data Science, Python, Pandas, PostgreSQL, PyTest, Machine Learning, CI/CD, SQL Performance, Airflow, Docker, Kafka.

    Interested candidates, please share your updated resume at *******************
    $49k-64k yearly est. 1d ago
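PyTest-style checks for a pipeline transform, as this posting describes, can be as simple as plain `test_*` functions with bare asserts; the `normalize_amounts` function and its rules below are hypothetical examples, not TCS code.

```python
def normalize_amounts(rows):
    """Convert string amounts to floats; reject negative values."""
    out = []
    for r in rows:
        amount = float(r["amount"])
        if amount < 0:
            raise ValueError(f"negative amount: {amount}")
        out.append({**r, "amount": amount})
    return out

# PyTest discovers test_* functions automatically; plain asserts are enough,
# and the runner reports rich diffs on failure.
def test_converts_string_amounts():
    rows = normalize_amounts([{"id": 1, "amount": "10.5"}])
    assert rows[0]["amount"] == 10.5

def test_rejects_negative_amounts():
    # With pytest installed, `with pytest.raises(ValueError):` is the
    # idiomatic form; try/except keeps this sketch stdlib-only.
    try:
        normalize_amounts([{"id": 2, "amount": "-3"}])
    except ValueError:
        return
    raise AssertionError("negative amount should be rejected")
```

Running `pytest` against a file of such tests is what a CI/CD testing gate for a data pipeline typically executes.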
  • Data Scientist

    Parker's Kitchen (4.2 company rating)

    Data scientist job in Savannah, GA

    We are looking for a Data Scientist with expertise in optimization and forecasting to help improve how we manage labor, staffing, and operational resources across our retail locations. This role is critical in building models and decision-support tools that ensure the right people, in the right place, at the right time - balancing customer service, efficiency, and cost. You will work closely with Operations, Finance, and Store Leadership teams to deliver practical solutions that improve labor planning, scheduling, and demand forecasting. The right candidate will be confident, resourceful, and excited to own both the technical and business-facing aspects of applying data science in a fast-paced retail environment.

    Responsibilities
    • Build and maintain forecasting models (time-series, machine learning, and statistical) for sales and transactions.
    • Develop and deploy optimization models (linear/mixed-integer programming, heuristics, simulation) to improve workforce scheduling and labor allocation.
    • Partner with Operations and Finance to translate forecasts into actionable staffing and labor plans that reduce costs while maintaining service levels.
    • Build dashboards and automated tools to track forecast accuracy, labor KPIs, and staffing effectiveness.
    • Provide insights and "what-if" scenario modeling to support strategic workforce and budget planning.

    Knowledge, Skills, and Abilities
    • Strong foundation in forecasting techniques (time-series models, regression, machine learning) and optimization methods (linear/mixed-integer programming, heuristics, simulation).
    • Proficiency in Python or R for modeling and analysis, along with strong SQL skills for working with large-scale datasets.
    • Knowledge of statistics, probability, and applied mathematics to support predictive and prescriptive modeling.
    • Experience building and deploying predictive models, optimization tools, and decision-support solutions that drive measurable business outcomes.
    • Strong data storytelling and visualization skills using tools such as Power BI, Tableau, or Looker.
    • Ability to translate analytical outputs into clear, actionable recommendations for non-technical stakeholders.
    • Strong collaboration skills, with the ability to partner cross-functionally with Operations, Finance, and Store Leadership to drive adoption of data-driven approaches.
    • Ability to work independently and resourcefully, combining technical depth with practical problem-solving to deliver results in a fast-paced environment.

    Education and Requirements
    Required:
    • Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Industrial Engineering, Operations Research, or a related field.
    • Minimum 2-3 years of professional experience in Data Science or a related area.
    • Strong skills in time-series forecasting (e.g., ARIMA, Prophet, ML-based approaches).
    • Proficiency in optimization techniques (linear programming, integer programming).
    • Strong Python or R programming skills.
    • SQL expertise for large, complex datasets.
    • Strong communication skills, with the ability to partner with business stakeholders.
    Preferred:
    • Experience in retail, restaurant, and/or convenience stores a plus.
    • Experience with cloud platforms (Snowflake, AWS, GCP, Azure).
    • Knowledge of BI tools (Tableau, Power BI, Looker).

    Physical Requirements
    • Prolonged periods sitting/standing at a desk and working on a computer.
    • Must be able to lift up to 50 pounds.

    Parker's is an equal opportunity employer committed to hiring a diverse workforce and sustaining an inclusive culture. Parker's does not discriminate on the basis of disability, veteran status, or any other basis protected under federal, state, or local laws.
    $73k-100k yearly est. 1d ago
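As a minimal illustration of the forecasting work in this role, here is a single-exponential-smoothing baseline in plain Python, the kind of simple model an ARIMA or Prophet forecast would be benchmarked against. The daily transaction counts are invented.

```python
def ses_forecast(series, alpha=0.5):
    """Single exponential smoothing: return the one-step-ahead forecast.
    alpha weights recent observations; higher alpha reacts faster."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical daily transaction counts for one store.
daily_transactions = [100, 120, 110, 130]
forecast = ses_forecast(daily_transactions, alpha=0.5)
```

A staffing plan would then feed such forecasts into the scheduling optimization (linear/mixed-integer programming) the posting describes, converting expected demand into labor hours.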
  • Senior Data Scientist

    PangeaTwo (3.6 company rating)

    Data scientist job in Birmingham, AL

    We're seeking a Contract-to-Hire Senior Data Scientist to lead and collaborate with a multidisciplinary team in designing and developing innovative analytical products and solutions using Machine Learning, NLP, and Deep Learning. This role is ideal for someone who thrives in ambiguity, enjoys solving complex problems, and can translate business needs into measurable outcomes.

    What You'll Do
    • Partner with business leaders to understand needs and define measurable goals
    • Gather requirements, build project plans, manage deadlines, and communicate updates
    • Analyze large structured and unstructured datasets
    • Build, evaluate, implement, and maintain predictive models
    • Present results to both technical and non-technical stakeholders
    • Deploy models and monitor ongoing performance and data accuracy
    • Contribute ideas, stay current with industry trends, and support team development

    Lead-Level Opportunities Include:
    • Driving data science strategy and overseeing project delivery
    • Providing technical mentorship and leadership to the team
    • Promoting innovation and exploring emerging tech, tools, and methodologies

    What We're Looking For
    • Bachelor's degree in Applied Mathematics, Statistics, Computer Science, Data Science, or a related field
    • 3-6 years of relevant experience (advanced degrees may reduce required experience)
    • Strong skills in machine learning, statistical modeling, and data analysis
    • Proficiency in Python or R
    • Experience with large datasets, preprocessing, and feature engineering
    • Prior management experience
    • Experience with transfer learning
    • Experience building and deploying deep learning solutions
    • Strong communication skills and ability to present complex concepts clearly
    • Experience in life insurance or a related domain is a plus
    • Ability to independently manage projects end-to-end

    Qualifications
    • Master's or PhD
    • Industry experience in similar roles
    • Publications or patents in data science or ML
    • Experience collaborating across technical and business teams
    • Familiarity with software engineering best practices and version control
    • Relevant certifications (AWS ML Specialty, Google Data Engineer, etc.)

    Rooted in Birmingham. Focused on You.
    We're a local recruiting firm based right here in Birmingham. We partner with top companies across the city, from large corporations to fast-growing startups, and we'd love to meet you for coffee to talk about your career goals. Whether you're actively searching or just exploring, we're here to guide you through the entire process, from resume tips to interview coaching. At our clients' request, only individuals with required experience will be considered. Please note: if you have recently submitted your resume to a PangeaTwo posting, your qualifications will be considered for other open opportunities. Your resume will never be submitted to a client without your prior knowledge and consent to do so.
    $79k-109k yearly est. 2d ago
  • Electronic Data Interchange Consultant

    Thorndale Partners

    Data scientist job in Birmingham, AL

    DETAILS: EDI CONSULTANT/TRAINER
    • Title: EDI Consultant
    • Length: 3-6 months for the first project, with extensions from there; multiple projects possible
    • Compensation: Hourly, DOE
    • Location: Meadowbrook, AL (Birmingham); can be remote, but must visit onsite for a couple of weeks on the front end and occasionally as needed down the road

    OVERVIEW: EDI CONSULTANT/TRAINER
    This individual will plan, develop, and implement the EDI operations and strategy roadmap for the organization, and train and mentor a small team.

    RESPONSIBILITIES: EDI CONSULTANT/TRAINER
    • Manage mapping and administration for TrustedLink/OpenText/BizManager for iSeries/AS400.
    • Mentor a small team of resources to assist in EDI operations.
    • Oversee the design, development, testing, deployment, and maintenance of the EDI systems, applications, and integrations; must be strong with TrustedLink and BizManager for iSeries.
    • Develop and document specifications.
    • Monitor and evaluate the EDI system's performance, availability, security, and compliance, and initiate corrective actions as needed.
    • Ensure that the EDI systems adhere to industry standards, best practices, and regulatory requirements.
    • Resolve complex EDI issues and provide technical support and guidance to users.
    • Establish and maintain effective relationships with internal and external stakeholders, such as business units, IT departments, vendors, and trading partners.

    MINIMUM REQUIREMENTS: EDI CONSULTANT/TRAINER
    • Experience with AS400/iSeries and RPG development and data files.
    • Strong experience with OpenText, TrustedLink, and BizManager for iSeries.
    • 2+ years of leadership experience training and leading a small team.
    • 10+ years of experience in EDI systems development, implementation, and management.
    • Extensive knowledge and expertise in EDI standards, formats, protocols, and technologies, such as ANSI X12, EDIFACT, XML, AS2, FTP, VAN communication protocols, etc.
    $73k-101k yearly est. 3d ago
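The ANSI X12 standard central to this role structures an interchange as segments (commonly terminated by "~") made of elements (commonly separated by "*"); in real interchanges the delimiters are declared in the ISA envelope. A toy parser, with an illustrative 850 (purchase order) fragment, might look like:

```python
def parse_x12(raw, seg_term="~", elem_sep="*"):
    """Split a raw X12 fragment into a list of segments,
    each a list of element strings."""
    segments = []
    for seg in raw.strip().split(seg_term):
        if seg:
            segments.append(seg.split(elem_sep))
    return segments

# Hypothetical fragment: ST opens transaction set 850 (purchase order),
# BEG carries the PO header, SE closes the set.
sample = "ST*850*0001~BEG*00*SA*PO123**20240101~SE*3*0001~"
segments = parse_x12(sample)
# segments[0] is ['ST', '850', '0001']
```

Mapping tools like TrustedLink and BizManager automate exactly this segment/element translation between X12 and internal AS400 data files; the sketch just makes the underlying structure visible.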
  • Data Engineer (Mid & Senior)

    Hruckus

    Data scientist job in Huntsville, AL

    Veteran-Owned Firm Seeking Data Engineers for an Onsite Assignment in Huntsville, AL

    My name is Stephen Hrutka. I lead a Veteran-Owned management consulting firm in Washington, DC. We specialize in Technical and Cleared Recruiting for the Department of Defense (DoD), the Intelligence Community (IC), and other advanced defense agencies. At HRUCKUS, we support fellow Veteran-Owned businesses by helping them recruit for positions across organizations such as the VA, SBA, HHS, DARPA, and other leading-edge R&D-focused defense agencies.

    We seek to fill Data Engineer roles supporting the FBI in Huntsville, AL. The ideal candidate will possess an active Top Secret security clearance and 5+ to 8+ years of experience in data engineering or database development. They should have strong hands-on experience with ETL tools (e.g., Informatica, Talend, Pentaho, AWS Glue, or custom Java ETL frameworks) and be proficient in SQL and at least one major RDBMS (Oracle or PostgreSQL). If you're interested, I'll gladly provide more details about the role and discuss your qualifications further.

    Thanks,
    Stephen M Hrutka
    Principal Consultant
    HRUCKUS LLC

    Executive Summary: HRUCKUS is seeking Mid-Level and Senior-Level Data Engineers with a Top Secret security clearance for roles supporting the FBI in Huntsville, AL.

    Job Description: We are seeking Data Engineers (Senior and Mid-Level) to support secure, mission-critical data environments within a classified cloud infrastructure. These roles are fully onsite in Huntsville, AL, and require an active Top Secret clearance. The ideal candidates will have strong experience with ETL development, data migration, Java-based data pipelines, and relational/NoSQL databases (Oracle, PostgreSQL, MongoDB), along with exposure to AWS cloud services and Agile/Scrum methodologies.

    Responsibilities:
    • Design, develop, and maintain ETL workflows to extract, transform, and load large-scale structured and unstructured datasets.
    • Develop data migration solutions between legacy and modern systems using SQL, Java, and cloud-native tools.
    • Implement data integration frameworks leveraging AWS services such as Glue, Lambda, S3, RDS, Redshift, and Kinesis.
    • Develop automation scripts using Python, Shell, or Bash for deployment, data validation, and maintenance tasks.
    • Maintain and enhance data pipelines for real-time and batch data processing.
    • Support data quality, metadata management, and governance activities.
    • Participate in Agile/Scrum sprints, contributing to design, code reviews, testing, and documentation.
    • Troubleshoot and resolve data-related issues across on-premises and AWS environments.

    Qualifications:
    • Active Top Secret clearance.
    • Bachelor's degree in Computer Science, IT, or a related field.
    • Mid-Level: 5+ years of professional experience in data engineering or database development.
    • Senior-Level: 8+ years of professional experience in data engineering or database development.
    • Strong hands-on experience with ETL tools (e.g., Informatica, Talend, Pentaho, AWS Glue, or custom Java ETL frameworks).
    • Proficiency in SQL and at least one major RDBMS (Oracle or PostgreSQL).
    • Experience with data migration projects and data quality validation.
    • Proficiency in Java or Python for data processing and automation.
    • Experience working with cloud technologies, preferably AWS (RDS, S3, Lambda, Redshift, Glue).
    • Working knowledge of Linux/Unix environments and shell scripting.
    • Experience in an Agile/Scrum development environment.
    • Excellent problem-solving, analytical, and communication skills.

    Details:
    • Job Title: Mid-Level and Senior-Level Data Engineer
    • Location: Redstone Arsenal, Huntsville, AL 35898
    • Security Clearance: Top Secret
    • Salary: Up to $150,000 per year (based on experience)
    • Job Type: Full-time, Onsite

    Benefits:
    • Paid Time Off (PTO): 3 weeks of PTO (including sick leave). Unused PTO is paid out at the end of the year.
    • Holidays: 2 floating holidays and 8 public holidays per year.
    • Health & Dental Insurance: The company covers 50% of employee health and dental insurance (dependents may be added at an extra cost). Coverage becomes effective after 30 days.
    • Life Insurance: Standard Short-Term Disability (STD), Long-Term Disability (LTD), and life insurance at no cost to full-time employees.
    • 401(k) Program: Eligible after 90 days, with a 4% company match and immediate vesting.
    • Profit Sharing: Employees can participate in the company's profit-sharing program without requiring personal contributions.
    • Commuting and Parking: No reimbursement for commuting or parking expenses.
    $150k yearly 4d ago
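The extract-transform-load pattern this posting centers on can be sketched with the stdlib sqlite3 module standing in for Oracle/PostgreSQL and AWS Glue; the table names, columns, and cleaning rules are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Legacy source table with messy data.
conn.execute("CREATE TABLE legacy_users (name TEXT, email TEXT)")
conn.executemany("INSERT INTO legacy_users VALUES (?, ?)",
                 [(" Alice ", "ALICE@EXAMPLE.COM"), ("Bob", None)])

# Extract: pull raw rows from the legacy system.
rows = conn.execute("SELECT name, email FROM legacy_users").fetchall()

# Transform: trim names, lowercase emails, drop rows failing validation.
clean = [(n.strip(), e.lower()) for n, e in rows if e is not None]

# Load: write validated rows into the modern schema.
conn.execute("CREATE TABLE users (name TEXT NOT NULL, email TEXT NOT NULL)")
conn.executemany("INSERT INTO users VALUES (?, ?)", clean)

migrated = conn.execute("SELECT name, email FROM users").fetchall()
```

A real migration adds batching, reconciliation counts, and data-quality checks on both sides, but the extract/transform/load phases keep this shape whatever the engine.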
  • Senior Data Engineer / Data Modeler (Spark, Cloud, Databricks & Modeling)

    Employvision Inc. (3.7 company rating)

    Data scientist job in Hollywood, FL

    No visa candidates, please. MUST BE ONSITE 3-4 DAYS PER WEEK IN MIAMI, FL.

    We are seeking a skilled Data Engineer to design and build analytics solutions that drive meaningful business value. In this role, you will collaborate closely with data teams, stakeholders, and leadership to ensure technical solutions align with business goals. You will create scalable data architectures, robust data models, and modern engineering pipelines that support the full data lifecycle. The ideal candidate is a strong database designer with hands-on experience in SQL, data modeling, and Databricks. You should be comfortable gathering requirements, managing timelines, collaborating with offshore teams, and interacting directly with clients. This role requires 4 days per week onsite in Miami; no travel required.

    Key Responsibilities
    • Design, build, and maintain scalable on-prem and cloud-based data architectures.
    • Translate business requirements into clear, actionable technical specifications.
    • Optimize data flows, pipelines, and integrations for performance and scalability.
    • Develop and implement data engineering solutions using Databricks (AWS, Azure, or Google Cloud Platform).
    • Lead end-to-end pipeline development, including ingestion, transformation, and storage.
    • Ensure data quality, governance, security, and integrity across the lifecycle.
    • Work directly with clients to understand needs and deliver tailored solutions.
    • Provide guidance and mentorship to junior engineers and team members.
    • Clearly communicate complex data concepts to technical and non-technical audiences.
    • Manage stakeholder expectations and maintain strong client relationships.
    • Stay current with modern data engineering, cloud, and analytics technologies.

    Skills & Attributes for Success
    • Strong analytical, decision-making, and problem-solving abilities.
    • Expertise in cloud architecture and modern data engineering concepts.
    • Experience with data integration, modeling, and security best practices.
    • Ability to handle complex business requirements and legacy system landscapes.
    • Excellent communication and relationship-building skills.

    Required Qualifications
    • Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
    • 5+ years of data engineering experience focused on cloud-based solutions.
    • Hands-on expertise with Databricks and Spark for large-scale data processing.
    • Strong programming skills in Python, Scala, and/or SQL.
    • Deep experience in data modeling, ETL development, and data warehousing.
    • Proven success delivering at least two end-to-end data engineering projects, such as building a data lake on Databricks integrating multiple sources for BI analytics, or developing real-time pipelines using Databricks and cloud-native services.
    • Ability to work independently, lead tasks, meet deadlines, and manage client communication.

    Senior Project Expertise (Preferred for Consulting-Level Roles)
    • Ability to connect technical solutions to broader business strategies.
    • Experience managing multiple concurrent projects and deliverables.
    • Skilled in stakeholder engagement at all levels, including executives.
    • Change management exposure in data transformation initiatives.
    • Ability to identify risks and define mitigation strategies.
    • Experience contributing to architectural decision-making and technical leadership.
    • Strong documentation, reporting, and client-facing communication.

    Nice to Have
    • Experience with data quality frameworks and semantic layers.
    • Familiarity with AWS, Azure, or Google Cloud Platform data services.
    • Understanding of data governance, privacy, and compliance standards.
    • Exposure to machine learning tools and frameworks.
    $79k-113k yearly est. 22h ago
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data scientist job in Atlanta, GA

    Job Description: GCP Data Engineer
    Rate: $50/hr. on W2 (No C2C)
    We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices.
    Responsibilities:
    • Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform.
    • Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer.
    • Write efficient and maintainable Python code to support data ingestion, transformation, and automation.
    • Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling.
    • Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions.
    • Ensure data quality, governance, and reliability across all pipelines.
    • Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets.
    • Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.
    Must-Have Skills:
    • Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning).
    • Proficiency in Python for data engineering and pipeline automation.
    • Hands-on experience with Cloud Data Fusion for ETL/ELT development.
    • Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, Pub/Sub.
    • Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices.
    • Solid understanding of cloud data architecture and distributed processing.
    Good-to-Have Skills:
    • Experience with Vertex AI for ML pipeline integration or model deployment.
    • Familiarity with Dataproc (Spark/Hadoop) for large-scale processing.
    • Knowledge of CI/CD workflows, Git, and DevOps best practices.
    • Experience with Cloud Logging/Monitoring tools.
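The event-driven pattern this listing describes (Pub/Sub messages validated before landing in BigQuery) boils down to a small validation-and-normalization step. Below is a minimal, illustrative sketch only: the field names and schema are invented assumptions, not from the posting, and the GCP plumbing (the Pub/Sub trigger and BigQuery client) is omitted so the logic stays self-contained.

```python
import json
from datetime import datetime, timezone

# Hypothetical schema -- event_id, user_id, event_type are assumed
# field names for illustration, not from the job posting.
REQUIRED_FIELDS = {"event_id", "user_id", "event_type"}


def normalize_event(raw: bytes) -> dict:
    """Validate and normalize one event payload before it is loaded.

    Rejects records missing required keys and stamps an ingestion time,
    mirroring the kind of data-quality gate the listing describes.
    """
    record = json.loads(raw)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record missing required fields: {sorted(missing)}")
    return {
        "event_id": str(record["event_id"]),
        "user_id": str(record["user_id"]),
        "event_type": str(record["event_type"]).lower(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
```

In a real deployment this function would sit inside a Cloud Function subscribed to the topic, with the returned dict streamed into BigQuery; here it is just the testable core.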
    $50 hourly 2d ago
  • Lead Data Engineer

    Selby Jennings

    Data scientist job in Tampa, FL

    A leading Investment Management Firm is looking to bring on a Lead Data Engineer to join its team in Tampa, Denver, Memphis, or Southfield. This is an excellent chance to work alongside industry leaders while being both hands-on and helping lead the team.
    Key Responsibilities:
    • Project Oversight: Direct end-to-end software development activities, from initial requirements through deployment, ensuring projects meet deadlines and quality standards.
    • Database Engineering: Architect and refine SQL queries, stored procedures, and schema designs to maximize efficiency and scalability within Oracle environments.
    • Performance Tuning: Evaluate system performance and apply strategies to enhance data storage and retrieval processes.
    • Data Processing: Utilize tools like Pandas and Spark for data wrangling, transformation, and analysis.
    • Python Solutions: Develop and maintain Python-based applications and automation workflows.
    • Pipeline Automation: Implement and manage continuous integration and delivery pipelines using Jenkins and similar technologies to optimize build, test, and release cycles.
    • Team Development: Guide and support junior engineers, promoting collaboration and technical growth.
    • Technical Documentation: Create and maintain comprehensive documentation for all development initiatives.
    Core Skills:
    • Experience: Over a decade in software engineering, with deep expertise in Python and Oracle database systems.
    • Technical Knowledge: Strong command of SQL, Oracle, Python, Spark, Jenkins, Kubernetes, Pandas, and modern CI/CD practices.
    • Optimization Expertise: Skilled in database tuning and applying best practices for performance.
    • Leadership Ability: Proven track record in managing teams and delivering complex projects.
    • Analytical Strength: Exceptional problem-solving capabilities with a data-centric mindset.
    • Communication: Clear and effective written and verbal communication skills.
    • Education: Bachelor's degree in Computer Science, Engineering, or equivalent professional experience.
    Preferred Qualifications:
    • Certifications: Professional credentials in Oracle, Python, Kubernetes, or CI/CD technologies.
    • Agile Background: Hands-on experience with Agile or Scrum frameworks.
    • Cloud Platforms: Familiarity with AWS, Azure, or Google Cloud services.
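The database tuning this role centers on (refining queries and indexes so aggregates scan less data) can be illustrated with a covering index: an index that contains every column a query touches, so the aggregate is answered from the index alone. A minimal sketch with SQLite standing in for Oracle; the `trades` table and its rows are invented for illustration, but the tuning idea carries over.

```python
import sqlite3

# SQLite stands in for Oracle here so the sketch is self-contained.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER, price REAL)"
)
con.executemany(
    "INSERT INTO trades (symbol, qty, price) VALUES (?, ?, ?)",
    [("AAPL", 100, 190.0), ("AAPL", 50, 191.0), ("MSFT", 200, 410.0)],
)

# Covering index: symbol leads for the GROUP BY, and qty/price ride along
# so the aggregate below never needs to touch the base table.
con.execute("CREATE INDEX ix_trades_symbol ON trades (symbol, qty, price)")

rows = con.execute(
    "SELECT symbol, SUM(qty), AVG(price) FROM trades "
    "GROUP BY symbol ORDER BY symbol"
).fetchall()
```

In Oracle the same idea applies, with `EXPLAIN PLAN` confirming an index-only access path instead of a full table scan.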
    $72k-99k yearly est. 1d ago
  • Data Engineer - OrcaWorks AI

    Orcaworks.Ai

    Data scientist job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred)
    About OrcaWorks AI: At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.
    Key Responsibilities:
    • Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models.
    • Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa.
    • Implement data validation, transformation, and storage solutions using modern ETL frameworks.
    • Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
    • Support database management, SQLModel, and data governance practices across services.
    Required Skills & Qualifications:
    • Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
    • Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
    • Hands-on experience with Airbyte, Prefect, and DBT.
    • Familiarity with search and indexing systems like Vespa or ElasticSearch.
    • Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
    • Strong understanding of data security and applied AI workflows.
    $75k-100k yearly est. 4d ago
  • Lead Azure Databrick Engineer

    Syren

    Data scientist job in Atlanta, GA

    Individual contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time.
    An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.
    Must-Have Skills:
    • Experience migrating from other platforms to Databricks
    • Proficiency in Databricks and Azure Cloud, including Databricks Asset Bundles; holistic vision of the data strategy
    • Proficiency in data streaming and data modeling
    • Experience architecting at least two large-scale big data projects
    • Strong understanding of data scaling and its complexities
    • Data archiving and purging mechanisms
    Job Requirements:
    • Degree in computer science or equivalent preferred
    • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks
    • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques
    • Working knowledge and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing
    • Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP
    • Experience with debugging and performance tuning in distributed environments
    • Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
    • Experience dealing with structured and unstructured data
    • Must have Python and PySpark experience
    • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 3d ago
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data scientist job in Atlanta, GA

    Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
    • Address specific business challenges, integrate processes, and create great experiences
    • Connect our work to shared goals that propel WestRock forward in the Digital Age
    • Imagine how technology can advance the way we work by using disruptive technology
    We are looking for forward-thinking technologists who can accelerate our focus areas, such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology. As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.
    How you will impact WestRock:
    • Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
    • Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs.
    • Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
    • Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
    • Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
    • Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.
    What you need to succeed:
    • Education: Bachelor's degree in computer science or similar
    • At least 6 years of strong data engineering experience
    • Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development
    • Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines
    • Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing
    • Proven experience deploying data solutions on AWS or Azure, with a strong understanding of cloud-native services
    • Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment
    • Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment
    • Good communication skills, with the ability to collaborate effectively across technical and non-technical teams
    • Good analytical and troubleshooting abilities
    What we offer:
    • Corporate culture based on integrity, respect, accountability, and excellence
    • Comprehensive training with numerous learning and development opportunities
    • An attractive salary reflecting skills, competencies, and potential
    • A career with a global packaging company where Sustainability, Safety, and Inclusion are business drivers and foundational elements of the daily work
    $75k-100k yearly est. 22h ago
  • Data Engineer

    Flybridge Staffing

    Data scientist job in Palm Beach Gardens, FL

    Flybridge Staffing is currently searching for a Data Engineer for a client located in the Palm Beach Gardens area. This is a direct-hire position with a hybrid schedule of 2 days remote. This person will design systems that supply high-performance datasets for advanced analytics.
    Experience:
    • BA degree and 5+ years of data engineering experience
    • Strong experience building ETL data pipelines for on-premises SQL Server 2017 or newer
    • Deep understanding of the development of data pipelines with either SSIS or Python
    • Broad experience with SQL Server, including Columnstore indexes
    • Extensive experience using SSMS and T-SQL to create and maintain SQL Server tables, views, functions, stored procedures, and user-defined table types
    • Experience with data modeling, indexes, temporal tables, CLR, and Service Broker
    • Experience partitioning tables and indexes, and improving performance with Query Analyzer
    • Experience writing C#, PowerShell, and Python
    • Experience with source control integration with GitHub, Bitbucket, and Azure DevOps
    • Experience working in an Agile and Kanban SDLC
    • Experience with cloud-based data management solutions such as Snowflake and Redshift
    • Python experience is a plus; libraries such as Pandas, NumPy, csv, traceback, JSON, PyODBC, and math are nice to have
    • Experience writing design documentation such as ERDs, data flow diagrams, and process flow diagrams
    • Experience with open-source database engines such as ClickHouse, ArcticDB, and PostgreSQL is a plus
    Responsibilities:
    • Collaborate effectively with stakeholders, project managers, software engineers, data analysts, QA analysts, DBAs, and data engineers.
    • Build and maintain data pipelines based on functional and non-functional requirements.
    • Proactively seek out information and overcome obstacles to deliver projects efficiently.
    • Ensure that data pipelines incorporate best practices related to performance, scaling, extensibility, fault tolerance, instrumentation, and maintainability.
    • Ensure that data pipelines are kept simple and not over-engineered.
    • Produce and maintain design and operational documentation.
    • Analyze complex data problems and engineer elegant solutions.
    ****NO SPONSORSHIP AVAILABLE**** US Citizen, GC, EAD only, please. If your background aligns with the above details and you would like to learn more, please submit your resume to jobs@flybridgestaffing.com or on our website, www.flybridgestaffing.com, and one of our recruiters will be in touch with you ASAP. Follow us on LinkedIn to keep up with all our latest job openings and referral program.
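The core pipeline task this role describes (parse a file feed, validate each row, bulk-load into SQL Server) can be sketched in miniature. Assumptions: the CSV layout is invented, and sqlite3 stands in for SQL Server so the example runs anywhere; with pyodbc against SQL Server, the `executemany` bulk-insert pattern shown here is the same.

```python
import csv
import io
import sqlite3

# An inline stand-in for a file feed; a real pipeline would read from disk
# or a landing zone.
RAW_FEED = io.StringIO("order_id,amount\n1001,250.00\n1002,13.75\n")

con = sqlite3.connect(":memory:")  # sqlite3 standing in for SQL Server
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")

# Validate-then-load: reject bad rows before anything touches the table,
# keeping the pipeline simple rather than over-engineered.
rows = []
for rec in csv.DictReader(RAW_FEED):
    amount = float(rec["amount"])
    if amount < 0:
        raise ValueError(f"negative amount in order {rec['order_id']}")
    rows.append((int(rec["order_id"]), amount))

con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
loaded = con.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
```

The same validate-then-bulk-insert shape also maps onto SSIS data flows, with the validation step expressed as a derived-column or conditional-split transform.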
    $71k-98k yearly est. 4d ago
  • Data Engineer w/ Python & SQL

    A2C 4.7company rating

    Data scientist job in Alpharetta, GA

    We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.
    Responsibilities:
    • Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer.
    • Develop and tune BigQuery models, queries, and ingestion processes.
    • Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
    • Ensure data governance, security, and reliable pipeline operations.
    • Collaborate with data science teams and support Vertex AI-based ML workflows.
    Must-Haves:
    • Strong Python, SQL, and GCP skills.
    • 3-5+ years of data engineering experience.
    • Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
    • Solid ETL/ELT and data modeling experience.
    • A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies.
    • Solid understanding of and hands-on experience with major cloud platforms.
    • Experience designing and implementing data pipelines.
    Nice-to-Have:
    • GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
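One listed responsibility, implementing data quality checks, often reduces to simple threshold rules evaluated before a load is published downstream. A hedged sketch under assumed names: the `user_id` column and the 10% default threshold are illustrative choices, not from the posting, and in practice the same rule would usually run as SQL against BigQuery rather than in-process.

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    null_count = sum(1 for r in rows if r.get(column) is None)
    return null_count / len(rows)


def check_quality(rows: list[dict], column: str, max_null_rate: float = 0.10) -> None:
    """Fail the pipeline stage when the null rate exceeds the threshold."""
    rate = null_rate(rows, column)
    if rate > max_null_rate:
        raise ValueError(f"{column}: null rate {rate:.0%} exceeds {max_null_rate:.0%}")
```

Wired into an orchestrator such as Composer, a raised exception fails the task and blocks downstream consumers from seeing the bad batch.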
    $77k-106k yearly est. 3d ago
  • Sr. Data Engineer (SQL+Python+AWS)

    SGS Technologie 3.5company rating

    Data scientist job in Saint Petersburg, FL

    Looking for a Sr. Data Engineer (SQL + Python + AWS) for a 12+ month contract (potential extension; may convert to full-time), hybrid in St. Petersburg, FL 33716, with a direct financial client. W2 only, for US citizens or green card holders.
    Notes from the Hiring Manager:
    • Setting up Python environments and data structures to support the Data Science/ML team.
    • No prior data science or machine learning experience required.
    • Role involves building new data pipelines and managing file-loading connections.
    • Strong SQL skills are essential.
    • Contract-to-hire position.
    • Hybrid role based in St. Pete, FL (33716) only.
    Duties: This role is building and maintaining data pipelines that connect Oracle-based source systems to AWS cloud environments to provide well-structured data for analysis and machine learning in AWS SageMaker. It includes working closely with data scientists to deliver scalable data workflows as a foundation for predictive modeling and analytics.
    • Develop and maintain data pipelines to extract, transform, and load data from Oracle databases and other systems into AWS environments (S3, Redshift, Glue, etc.).
    • Collaborate with data scientists to ensure data is prepared, cleaned, and optimized for SageMaker-based machine learning workloads.
    • Implement and manage data ingestion frameworks, including batch and streaming pipelines.
    • Automate and schedule data workflows using AWS Glue, Step Functions, or Airflow.
    • Develop and maintain data models, schemas, and cataloging processes for discoverability and consistency.
    • Optimize data processes for performance and cost efficiency.
    • Implement data quality checks, validation, and governance standards.
    • Work with DevOps and security teams to comply with RJ standards.
    Required Skills:
    • Strong proficiency with SQL and hands-on experience working with Oracle databases.
    • Experience designing and implementing ETL/ELT pipelines and data workflows.
    • Hands-on experience with AWS data services such as S3, Glue, Redshift, Lambda, and IAM.
    • Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.).
    • Solid understanding of data modeling, relational databases, and schema design.
    • Familiarity with version control, CI/CD, and automation practices.
    • Ability to collaborate with data scientists to align data structures with model and analytics requirements.
    Preferred Skills:
    • Experience integrating data for use in AWS SageMaker or other ML platforms.
    • Exposure to MLOps or ML pipeline orchestration.
    • Familiarity with data cataloging and governance tools (AWS Glue Catalog, Lake Formation).
    • Knowledge of data warehouse design patterns and best practices.
    • Experience with data orchestration tools (e.g., Apache Airflow, Step Functions).
    • Working knowledge of Java is a plus.
    Education: B.S. in Computer Science, MIS, or a related degree and a minimum of five (5) years of related experience, or a combination of education, training, and experience.
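On the SageMaker side, the data-preparation work this role mentions often means reshaping records into the CSV layout SageMaker's built-in algorithms expect: no header row, with the target label in the first column. A small illustrative helper; the record field names are assumptions, and the S3 upload (e.g., via boto3) is left out so the sketch stays self-contained.

```python
import csv
import io


def to_sagemaker_csv(records: list[dict], label: str, features: list[str]) -> str:
    """Render records as headerless, label-first CSV for SageMaker training.

    SageMaker built-in algorithms read CSV training data with the target in
    column 0 and no header row; this helper enforces that layout.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    for rec in records:
        writer.writerow([rec[label]] + [rec[f] for f in features])
    return buf.getvalue()
```

The returned string would then be written to S3 (for example with `boto3`'s `put_object`) and referenced as a training channel.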
    $71k-91k yearly est. 3d ago
  • Data Engineer

    Wissen Technology

    Data scientist job in Alpharetta, GA

    Data Engineer, Wissen Technology, Alpharetta, GA. The Wissen team continues to expand its footprint in Canada & the USA. More openings to come as we continue to grow the team! Please read below for a brilliant career opportunity.
    Role: Data Engineer
    Title: Associate
    Location: Alpharetta, GA (Day 1 Onsite) - Full-time
    Mode of Work: Onsite
    Required Experience: 1+ years
    Position Overview: We are seeking a detail-oriented and motivated Data Engineer with 1+ years of experience to support the design, development, and maintenance of data pipelines and data workflows. The ideal candidate should have strong SQL and Python skills, with a solid understanding of ETL concepts.
    Key Responsibilities:
    • Assist in building, maintaining, and optimizing ETL pipelines.
    • Work with structured and unstructured data to support analytics and reporting needs.
    • Collaborate with senior engineers and cross-functional teams to ensure data availability and quality.
    • Perform data validation, cleaning, and transformation tasks.
    • Monitor data pipeline performance and troubleshoot issues.
    • Support implementation of best practices in data engineering and data management.
    Mandatory Skills:
    • Strong proficiency in SQL.
    • Good understanding of Python for data manipulation and scripting.
    • Knowledge of basic ETL concepts and data workflow fundamentals.
    Preferred Skills:
    • Understanding of Hadoop ecosystem basics.
    • Familiarity with Kafka for real-time data streaming.
    Benefits:
    • 401K
    • Healthcare insurance for you and your family (medical, dental, vision)
    • Short/long-term disability insurance
    • Life insurance
    • Accidental death & disability insurance
    • 3 weeks of paid time off
    • Support and fee coverage for immigration needs
    • Remote office setup stipend
    • Support for industry certifications
    • Additional cash incentives
    We are: A high-end technical consulting firm built and run by highly qualified technologists.
Our workforce consists of 5000+ highly skilled professionals, with leadership from Wharton, MIT, IITs, IIMs, and NITs and decades of experience at Goldman Sachs, Morgan Stanley, MSCI, Deutsche Bank, Credit Suisse, Verizon, British Telecom, ISRO etc. Without any external funding or investments, Wissen Technology has grown its revenues by 100% every other year since it started as a subsidiary of Wissen Group in 2015. We have a global presence with offices in the US, India, UK, Australia, Mexico, and Canada. You are: A true tech or domain ninja. Or both. Comfortable working in a quickly growing profitable startup, have a “can do” attitude and are willing to take on any task thrown your way. You will: · Develop and promote the company's culture of engineering excellence. · Define, develop and deliver solutions at a top tier investment bank or another esteemed client. Perform other duties as needed Your Education and Experience: · We value candidates who can execute on our vision and help us build an industry-leading organization. Rather than requiring a specific certification or number of years of experience, we ask that you be able to demonstrate your skills in the areas outlined above. · Graduate-level degree in computer science, engineering, or related technical field Wissen embraces diversity and is an equal opportunity employer. We are committed to building a team that represents a variety of backgrounds, skills, and abilities. We believe that the more inclusive our team is, the better our work will be. All qualified applicants, including but not limited to LGBTQ+, Minorities, Females, the Disabled, and Veterans, are encouraged to apply. About Wissen Technology: The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. 
We help clients build world-class products. We empower businesses with a dynamic portfolio of services and accelerators tailored to today's digital demands and based on future ready technology stack. Our services include Industry Leading Custom Software Development, AI-Driven Software Engineering, Generative AI & Machine Learning, Real-Time Data Analytics & Insights, Interactive Data Visualization & Decision Intelligence, Intelligent Process Automation, Multi-Cloud & Hybrid Cloud Strategies, Cross-Platform Mobile Experiences, CI/CD-Powered Agile DevOps, Automated Quality Engineering, and cutting-edge integrations. Certified as a Great Place to Work for five consecutive years (2020-2025) and recognized as a Top 20 AI/ML vendor by CIO Insider, Wissen Group has delivered multimillion-dollar projects for over 20 Fortune 500 companies. Wissen Technology delivers exceptional value on mission-critical projects through thought leadership, ownership, and reliable, high-quality, on-time delivery. Our industry-leading technical expertise stems from the talented professionals we attract. Committed to fostering their growth and providing top-tier career opportunities, Wissen ensures an outstanding experience and value for our clients and employees. We serve diverse industries, including Banking, E-commerce, Telecom, Healthcare, Manufacturing, and Energy. 
    Job Type: Full-time
    Schedule: Monday to Friday
    $75k-100k yearly est. 22h ago
  • Data Architect

    Zensar Technologies 4.3company rating

    Data scientist job in Sunrise, FL

    JD: 14+ years of overall IT experience with expertise in the data landscape - data warehouse, data lake, etc.
    • Hands-on experience in Big Data and the Hadoop ecosystem; strong skills in SQL, Python, or Spark
    • Proficient in data warehousing concepts and customer data management (Customer 360)
    • Experience with the GCP platform - Dataflow, Dataproc, Kubernetes containers, etc.
    • Expertise in deep data exploration and data analysis
    • Excellent communication and interpersonal skills
    $77k-102k yearly est. 2d ago
  • Claims Data Engineer

    Nationsbenefits

    Data scientist job in Plantation, FL

    NationsBenefits is recognized as one of the fastest growing companies in America and a Healthcare Fintech provider of supplemental benefits, flex cards, and member engagement solutions. We partner with managed care organizations to provide innovative healthcare solutions that drive growth, improve outcomes, reduce costs, and bring value to their members. Through our comprehensive suite of innovative supplemental benefits, fintech payment platforms, and member engagement solutions, we help health plans deliver high-quality benefits to their members that address the social determinants of health and improve member health outcomes and satisfaction. Our compliance-focused infrastructure, proprietary technology systems, and premier service delivery model allow our health plan partners to deliver high-quality, value-based care to millions of members. We offer a fulfilling work environment that attracts top talent and encourages all associates to contribute to delivering premier service to internal and external customers alike. Our goal is to transform the healthcare industry for the better! We provide career advancement opportunities from within the organization across multiple locations in the US, South America, and India.
    Position Summary: We are seeking a seasoned EDI 837 Claims Data Engineer to design, develop, and maintain data pipelines that process healthcare claims in compliance with HIPAA and ANSI X12 standards. This role requires deep expertise in Electronic Data Interchange (EDI), particularly the 837 transaction set, and will be pivotal in ensuring accurate, timely, and secure claims data exchange across payers, providers, clearinghouses, state agencies, and CMS.
    Key Responsibilities:
    EDI Development & Integration
    • Design, build, and maintain pipelines for processing 837 healthcare claim transactions.
    • Implement and support EDI workflows across multiple trading partners.
    • Ensure compliance with HIPAA regulations and ANSI X12 standards.
    Data Engineering
    • Develop ETL processes to transform, validate, and load claims data into enterprise data warehouses.
    • Optimize data flows for scalability, reliability, and performance.
    • Collaborate with analysts and stakeholders to ensure claims data accuracy.
    • Write and optimize SQL queries, stored procedures, and scripts for validation and reporting.
    Monitoring & Troubleshooting
    • Monitor EDI transactions for errors, rejections, and compliance issues.
    • Troubleshoot and resolve data mapping, translation, and connectivity problems.
    • Perform root cause analysis and implement corrective actions.
    Collaboration
    • Work closely with business analysts, QA teams, and IT operations to support claims processing.
    • Partner with healthcare domain experts to align technical solutions with business needs.
    Required Skills & Qualifications:
    • 5+ years of experience in healthcare data engineering or claims integration.
    • Strong expertise with EDI 837 transactions and healthcare claims processing.
    • Proven experience with Medicaid and Medicare data exchanges between state agencies and CMS.
    • Hands-on experience with Databricks, SSIS, and SQL Server.
    • Knowledge of HIPAA compliance, CMS reporting requirements, and interoperability standards.
    • Strong problem-solving skills and ability to work in cross-functional teams.
    • Excellent communication and documentation skills.
    Preferred Skills:
    • Experience with Azure cloud platforms.
    • Familiarity with other EDI transactions (835, 270/271, 276/277).
    • Exposure to data governance frameworks and security best practices.
    • Background in data warehousing and healthcare analytics.
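For readers unfamiliar with the 837 format the listing centers on: an X12 transaction is a string of segments typically terminated by `~`, with elements separated by `*`, and in the CLM segment the second element (CLM02) carries the total claim charge amount. A toy extractor sketch follows; the sample string is invented, and a real feed needs a full parser that honors the interchange envelope and the separator characters declared in the ISA segment.

```python
def claim_charges(x12: str) -> list[float]:
    """Pull the total charge amount (CLM02) from each claim in an 837 string.

    Assumes the common "~" segment terminator and "*" element separator;
    production parsers read these from the ISA envelope instead.
    """
    charges = []
    for segment in x12.split("~"):
        elements = segment.strip().split("*")
        if elements[0] == "CLM":
            charges.append(float(elements[2]))
    return charges
```

A monitoring job of the kind this role describes might run a check like this against inbound files and flag claims whose charge totals disagree with the warehouse before they are forwarded to a trading partner.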
    $71k-98k yearly est. 1d ago

Learn more about data scientist jobs

How much does a data scientist earn in Albany, GA?

The average data scientist in Albany, GA earns between $57,000 and $106,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Albany, GA

$77,000
Job type you want
Full Time
Part Time
Internship
Temporary