
Data scientist jobs in Saint Simons, GA

- 1,322 jobs
  • Data Scientist

    Coforge

    Data scientist job in Atlanta, GA

    Role: Data Scientist Mode of Hire: Full Time Key Responsibilities • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows. • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans. • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow. • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows. • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice (see the sketch after this listing). • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration. • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis. Skills Required: • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field. • 5+ years of experience in a data science, ML engineering, or analytics role. • Strong programming skills in SQL and Python, and a solid command of ML techniques. • Experience with Azure Cloud, Databricks, and/or Snowflake. • Experience building and deploying machine learning models in production environments, with hands-on experience in Databricks, including SparkML and MLflow integration. • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing. • Experience with tools such as Git, MLflow, Docker, and workflow schedulers. • Ability to communicate complex technical work to non-technical stakeholders. • Experience with scalable model training environments and distributed computing. Preferred Qualifications • Master's degree in a quantitative or technical discipline. • Experience in financial services, fintech, or enterprise B2B analytics. • Knowledge of A/B testing, causal inference, and statistical experimentation. • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 3d ago
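The Coforge listing above centers on training models with scikit-learn or XGBoost and deploying them through MLflow, Docker, and CI/CD. As a rough illustration of the MLflow half of that workflow, here is a minimal sketch, assuming the `mlflow` and `scikit-learn` packages and a local tracking setup; the experiment name, parameters, and synthetic data are all invented.

```python
# Hypothetical sketch: train a boosted classifier and log it with MLflow.
# Dataset and experiment name are placeholders, not from the posting.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
    # Logging the model lets a CI/CD job retrieve it and package it later.
    mlflow.sklearn.log_model(model, "model")
```

From here, a CI/CD job could pull the logged model out of the MLflow registry and bake it into a Docker image, which is the deployment pattern the posting describes.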
  • Data Scientist

    The Energy Authority (4.1 company rating)

    Data scientist job in Jacksonville, FL

    Hybrid - Jacksonville, FL. The Energy Authority is a public power-owned, nonprofit corporation with offices in Jacksonville, Florida and Bellevue (Seattle), Washington. TEA provides public power utilities with access to advanced resources and technology systems so they can respond competitively in the changing energy markets. Through partnership with TEA, utilities benefit from an experienced organization that is singularly focused on deriving maximum value from their assets in the market. Quantitative Analytics & Research Analyst: The energy landscape is evolving, and so are the challenges our clients face. TEA's Quantitative Analytics & Research team is looking for an innovative and analytical mind to help transform complex market trends into cutting-edge solutions. If you're passionate about machine learning, optimization, and statistical modeling and thrive in a fast-paced environment where curiosity meets impact, this is your opportunity to make a difference! Why You'll Love This Role: Build models that drive the future of energy markets and empower clients to make smarter, data-driven decisions. Work alongside a collaborative team of data scientists and industry experts who thrive on tackling complex challenges. Explore new ideas, experiment with innovative solutions, and see your work deliver impact across a wide range of clients. What You'll Do: Clean, transform, and validate structured and unstructured data to ensure data quality and reliability. Maintain existing machine learning and optimization models, updating them as needed (a toy dispatch-optimization sketch follows this listing). Research and experiment with new methods to enhance current models as well as develop new solutions. Collaborate with cross-functional teams to maintain data pipelines, understand business requirements, and deliver solutions effectively. Document workflows, methodologies, and data sources to ensure transparency and reproducibility. What You Bring to the Team: Passion for analytics and problem-solving with a drive to find meaningful insights in complex data. Strong skills in machine learning, predictive modeling, and optimization techniques such as regression, neural networks, and feature engineering. Solid understanding of electricity markets, including economic dispatch, marginal pricing, and market clearing rules. Proficiency in R or Python, with a working knowledge of web development (HTML, CSS, JavaScript, R Shiny), stochastic methods, and database queries. Your Growth Path at TEA: Level 1 Data Scientist: Operate and maintain models, gaining hands-on experience with TEA's tools and industry functions. Level 2 Data Scientist: Design and refine analytical models, assess their effectiveness, and contribute to cutting-edge solutions. Level 3 Data Scientist: Provide strategic insights, manage projects, and support TEA's business development initiatives. Level 4 Senior Data Scientist: Serve as a subject matter expert, mentor team members, and lead high-impact initiatives. What You Need to Succeed: Bachelor's and/or Master's degree in Data Science, Computer Science, Engineering, or another relevant field. Industry experience analyzing data and maintaining and developing models is preferred. Proficiency in Python or R. Strong foundation in statistics, machine learning, and optimization. Background in energy and power systems is preferred. Eagerness to learn, receive feedback, and grow technical depth. Strong problem-solving and analytical thinking. Good communication skills, with the ability to understand requirements and present results to stakeholders. Tools & Technology You'll Work With: Microsoft Word, Excel/Visual Basic, and other advanced analysis tools. Large-scale utility simulation software such as PROMOD, PLEXOS, AuroraXMP, GE Maps, Dayzer, PSS/e, and PowerWorld. Decision analysis techniques such as decision trees and influence diagrams. Compensation & Benefits: Competitive salary based on experience, level, and location. Incentives: Annual target short-term incentive rate of 17.5% of base salary based on corporate, team, and individual performance. Benefits: Paid Time Off starting at 17 days per year, plus nine paid holidays. 401(a) employer contribution and 457(b) deferred compensation retirement savings plans. Medical, dental, vision, life, AD&D, and disability insurance. Health savings and flexible spending accounts. A great work-life balance and visibility to senior executives in a supportive company culture. TEA Values: TEA employees share a common sense of purpose. When TEA accomplishes its mission, the result is improved quality of life for the citizens and businesses of the communities our clients serve. TEA employees exceed the expectations of those they serve, deliver services with the highest standards of fair, honest, and ethical behavior, set the standard for service and expertise in our industry, embody a spirit of collaboration, and embrace TEA's founding entrepreneurial spirit by seizing opportunities to deliver value. If you are self-motivated, driven to deliver excellence, and passionate about your career, TEA is the perfect place for you. It's YOUR Future. It's OUR Future.
    $71k-93k yearly est. 1d ago
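The TEA posting above expects familiarity with economic dispatch and market clearing. Economic dispatch is, at its core, a small optimization problem: choose each generator's output to meet demand at minimum cost. Here is a toy sketch, assuming `scipy` is available; the three generators' costs and capacities and the 500 MW load are invented numbers.

```python
# Illustrative sketch of a toy economic-dispatch linear program.
import numpy as np
from scipy.optimize import linprog

cost = np.array([18.0, 25.0, 40.0])          # $/MWh per generator (hypothetical)
capacity = np.array([200.0, 250.0, 300.0])   # MW upper limits (hypothetical)
demand = 500.0                               # MW system load to be met

# Minimize cost . output  subject to  sum(output) == demand, 0 <= output <= capacity.
res = linprog(
    c=cost,
    A_eq=np.ones((1, 3)),
    b_eq=[demand],
    bounds=list(zip(np.zeros(3), capacity)),
)

print("dispatch (MW):", res.x)
print("total cost ($/h):", res.fun)
```

Here the two cheapest units run at capacity (200 and 250 MW) and the 40 $/MWh unit covers the remaining 50 MW, so that unit sets the marginal price, which is the concept the posting calls marginal pricing.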
  • Applied Data Scientist Regulatory and Statistical

    Moffitt Cancer Center (4.9 company rating)

    Data scientist job in Tampa, FL

    Shape the Future of Predictive Medicine. One-Year Project. Innovators Wanted! Are you driven by curiosity, energized by ambiguity, and passionate about transforming healthcare? Dr. Ciara Freeman at the esteemed Moffitt Cancer Center is searching for a bold, entrepreneurial-minded Applied Data Scientist - Regulatory and Statistical for a dynamic, one-year project to help build the first regulatory-grade AI models that predict therapy response in multiple myeloma. You'll partner with a physician-scientist PI and data-engineering team to prototype, validate, and document predictive models designed for clinical use. This is hands-on translational ML - fast iteration, real impact, auditable results. Your models will form the core of clinically actionable, auditable AI systems that change how we treat cancer. Ideal Candidate: An expert in Python (scikit-learn, XGBoost, PyTorch/TensorFlow), skilled in survival or clinical modeling, who thrives where rigor meets speed. Startup thinkers with a thirst for discovery. Individuals who thrive in fast-paced, risk-friendly environments. Problem solvers who see challenges as opportunities. Team players eager to see their ideas put into action. Responsibilities: Develop and validate multimodal survival and risk-stratification models (clinical + omics + imaging); a minimal survival-modeling sketch follows this listing. Collaborate with engineers to define and extract features. Perform calibration, bias analysis, and explainability (SHAP, PDPs, model cards). Translate results into clinician-friendly insights and contribute to IP and regulatory filings. Credentials & Qualifications: Master's degree in Computer Science, Data Science, Biostatistics, or a related quantitative field with seven (7) years of applied statistical or machine learning model development experience in healthcare, biotech, or regulated environments; or a PhD with five (5) years of such experience. Familiarity with Snowflake or modern data-engineering workflows preferred. Join a project that's not just about data - it's about revolutionizing patient care. Help us bridge the gap between today's personalized medicine and tomorrow's predictive breakthroughs. If you're ready to take risks, drive results, and change the future of medicine, apply today! Moffitt Cancer Center proudly stands as a Comprehensive Cancer Center designated by the National Cancer Institute (NCI) in the vibrant city of Tampa, Florida. This dynamic city is an exceptional choice for those seeking exciting opportunities in a rapidly growing metropolitan area. With its flourishing economy and rich cultural diversity, the Tampa Bay region masterfully combines urban elegance with breathtaking natural beauty. Discover why countless individuals have chosen to make Tampa their home and experience firsthand what makes it one of the fastest-growing metropolitan cities in the United States.
    $64k-86k yearly est. 1d ago
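The Moffitt listing above asks for survival and risk-stratification modeling. As a minimal illustration of what that looks like in Python, here is a sketch assuming the `lifelines` library; the toy cohort, column names, and covariates are invented and far too small for real inference.

```python
# Minimal survival-modeling sketch with a Cox proportional-hazards model.
# All data below is fabricated for illustration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followup": [6, 12, 18, 24, 30, 36, 9, 15],   # duration column
    "progressed":      [1, 0, 1, 0, 1, 0, 1, 0],          # event indicator
    "age":             [61, 55, 70, 63, 66, 59, 72, 50],
    "biomarker":       [1.2, 0.4, 2.1, 1.9, 0.3, 0.6, 1.8, 0.5],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="progressed")
cph.print_summary()  # hazard ratios with confidence intervals, auditable output
```

In the actual project the covariates would come from clinical, omics, and imaging features, and outputs like hazard ratios would feed the calibration, bias analysis, and model-card documentation the posting lists.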
  • Data Scientist

    Tata Consultancy Services (4.3 company rating)

    Data scientist job in Sunrise, FL

    Data Scientist (Python / PostgreSQL / PyTest) Must-Have Technical/Functional Skills: We are seeking a Data Scientist with strong expertise in Python, PostgreSQL, and PyTest to join our team. The role involves building advanced data models, performing statistical analysis, and implementing automated testing frameworks for enterprise-grade solutions. Roles & Responsibilities · Develop and deploy data science models and algorithms using Python. · Perform data extraction, transformation, and analysis leveraging PostgreSQL and Pandas. · Design and implement robust automated test suites using PyTest for data pipelines and models (see the sketch after this listing). · Conduct exploratory data analysis (EDA) and feature engineering for predictive modeling. · Collaborate with stakeholders to define data-driven solutions and business insights. · Optimize SQL queries and database structures for performance and scalability. · Integrate models into production systems with CI/CD pipelines and testing gates. · Ensure data quality, security, and compliance with enterprise standards. · Document methodologies and workflows, and maintain reproducibility of experiments. · Provide production support and troubleshoot data-related issues. Required Qualifications · 5+ years of experience in Data Science and Python programming. · Strong knowledge of statistical modeling, machine learning techniques, and data visualization. · Expertise in PostgreSQL (schema design, query optimization, indexing). · Proficiency in PyTest for automated testing and CI/CD integration. · Hands-on experience with Pandas and NumPy for data manipulation. · Familiarity with Git-based workflows and CI/CD tools. · Strong problem-solving skills and ability to communicate technical concepts clearly. Preferred Skills · Experience with SQLAlchemy, Airflow, or Prefect for workflow orchestration. · Knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker). · Exposure to big data technologies (Spark, Hadoop) and streaming platforms (Kafka). · Prior experience in the BFSI domain. Keywords: Data Science, Python, Pandas, PostgreSQL, PyTest, Machine Learning, CI/CD, SQL Performance, Airflow, Docker, Kafka. Interested candidates, please share your updated resume with *******************
    $49k-64k yearly est. 1d ago
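The TCS listing above pairs pandas-based modeling with PyTest-driven quality gates. The usual pattern is to keep transformations as pure functions and pin their behavior with small unit tests; here is a sketch, where `normalize_amounts` and its column names are invented for illustration.

```python
# Hypothetical sketch: a pure pandas transformation plus PyTest tests for it.
import pandas as pd
import pytest


def normalize_amounts(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing amounts and convert cents to dollars."""
    out = df.dropna(subset=["amount_cents"]).copy()
    out["amount_usd"] = out["amount_cents"] / 100.0
    return out


def test_normalize_amounts_drops_nulls_and_converts():
    raw = pd.DataFrame({"amount_cents": [1250, None, 99]})
    result = normalize_amounts(raw)
    assert list(result["amount_usd"]) == pytest.approx([12.50, 0.99])
    assert result["amount_cents"].notna().all()


def test_normalize_amounts_empty_frame():
    raw = pd.DataFrame({"amount_cents": pd.Series([], dtype="float64")})
    assert normalize_amounts(raw).empty
```

Saved as a `test_*.py` file and run with `pytest`, checks like these become the testing gate in the CI/CD pipelines the posting describes.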
  • Data Scientist

    Parker's Kitchen (4.2 company rating)

    Data scientist job in Savannah, GA

    We are looking for a Data Scientist with expertise in optimization and forecasting to help improve how we manage labor, staffing, and operational resources across our retail locations. This role is critical in building models and decision-support tools that ensure the right people, in the right place, at the right time - balancing customer service, efficiency, and cost. You will work closely with Operations, Finance, and Store Leadership teams to deliver practical solutions that improve labor planning, scheduling, and demand forecasting. The right candidate will be confident, resourceful, and excited to own both the technical and business-facing aspects of applying data science in a fast-paced retail environment. Responsibilities Build and maintain forecasting models (time-series, machine learning, and statistical) for sales and transactions. Develop and deploy optimization models (linear/mixed-integer programming, heuristics, simulation) to improve workforce scheduling and labor allocation (a toy scheduling sketch follows this listing). Partner with operations and finance to translate forecasts into actionable staffing and labor plans that reduce costs while maintaining service levels. Build dashboards and automated tools to track forecast accuracy, labor KPIs, and staffing effectiveness. Provide insights and “what-if” scenario modeling to support strategic workforce and budget planning. Knowledge, Skills, and Abilities Strong foundation in forecasting techniques (time-series models, regression, machine learning) and optimization methods (linear/mixed-integer programming, heuristics, simulation). Proficiency in Python or R for modeling and analysis, along with strong SQL skills for working with large-scale datasets. Knowledge of statistics, probability, and applied mathematics to support predictive and prescriptive modeling. Experience building and deploying predictive models, optimization tools, and decision-support solutions that drive measurable business outcomes. Strong data storytelling and visualization skills using tools such as Power BI, Tableau, or Looker. Ability to translate analytical outputs into clear, actionable recommendations for non-technical stakeholders. Strong collaboration skills with the ability to partner cross-functionally with Operations, Finance, and Store Leadership to drive adoption of data-driven approaches. Ability to work independently and resourcefully, combining technical depth with practical problem-solving to deliver results in a fast-paced environment. Education and Requirements Required: Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Industrial Engineering, Operations Research, or a related field. Minimum 2-3 years of professional experience in Data Science or a related area. Strong skills in time-series forecasting (e.g., ARIMA, Prophet, ML-based approaches). Proficiency in optimization techniques (linear programming, integer programming). Strong Python or R programming skills. SQL expertise for large, complex datasets. Strong communication skills with the ability to partner with business stakeholders. Preferred: Experience in retail, restaurant, and/or convenience stores is a plus. Experience with cloud platforms (Snowflake, AWS, GCP, Azure). Knowledge of BI tools (Tableau, Power BI, Looker). Physical Requirements Prolonged periods sitting/standing at a desk and working on a computer. Must be able to lift up to 50 pounds. Parker's is an equal opportunity employer committed to hiring a diverse workforce and sustaining an inclusive culture.
Parker's does not discriminate on the basis of disability, veteran status or any other basis protected under federal, state, or local laws.
    $73k-100k yearly est. 1d ago
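The Parker's Kitchen posting above combines demand forecasting with labor optimization. The optimization half often reduces to a small integer program: choose staffing levels that cover forecast demand at minimum cost. Here is a toy sketch, assuming the PuLP library; the hours, demand forecast, and wage are invented.

```python
# Toy staffing-optimization sketch as a small integer program.
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

hours = list(range(8, 20))                                  # store open 8:00-20:00
demand = {h: 2 if h < 11 or h > 17 else 4 for h in hours}   # forecast staff needed
wage = 15.0                                                 # $/hour (hypothetical)

prob = LpProblem("labor_plan", LpMinimize)
staff = {h: LpVariable(f"staff_{h}", lowBound=0, cat="Integer") for h in hours}

prob += lpSum(wage * staff[h] for h in hours)   # objective: minimize labor cost
for h in hours:
    prob += staff[h] >= demand[h]               # cover forecast demand each hour

prob.solve()
print({h: int(value(staff[h])) for h in hours})
```

A production model would add shift lengths, breaks, and availability constraints, and the `demand` dictionary would come from the forecasting models (ARIMA, Prophet, or ML-based) the posting lists.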
  • Data Architect

    Agile Resources, Inc.

    Data scientist job in Atlanta, GA

    Note: 100% onsite is required for the first six months. Employment Type: Permanent / Direct Hire / Full-time. Salary: Up to $180,000 (depending on experience) + bonus. The Role: We're seeking a highly skilled and hands-on Data Architect to lead the design, implementation, and ongoing evolution of our enterprise-grade data systems. This role is crucial for building scalable, secure, and intelligent data infrastructure that supports core analytics, operational excellence, and future AI initiatives. Success requires a seasoned technologist who can seamlessly integrate cloud-native services with traditional data warehousing to create a modern, unified data platform. What You'll Do: Architecture & Strategy: Lead the design and implementation of modern data platforms, including data lakes, data warehouses, and lakehouse architectures, to enable a single source of truth for the enterprise. Data Modeling & Integration: Architect unified data models that support both modular monoliths and microservices-based platforms. Design and optimize high-volume, low-latency streaming/batch ETL/ELT pipelines. Technical Leadership: Drive the technical execution across the entire data lifecycle. Build and optimize core data processing scripts using Spark and Python (a brief Spark sketch follows this listing). Governance & Quality: Define and enforce standards for data governance, metadata management, and data observability across distributed systems. Implement automated data lineage tracking, schema evolution, and data quality monitoring. Cloud Infrastructure: Configure and manage cloud-native data services, including core data storage and event ingestion infrastructure. Required Experience: Experience: 10+ years of proven experience in enterprise data architecture and engineering. Core Platform Expertise: Strong, hands-on experience with the Azure data ecosystem, including Azure Data Lake Storage (ADLS), Azure Synapse Analytics (or equivalent cloud DW), and Azure Purview (or equivalent data catalog). Processing: Deep expertise in Databricks (or Apache Spark) for ETL/ELT pipeline implementation, using Delta Lake and SQL Server (or equivalent RDBMS). Coding & Scripting: Strong proficiency in Python, Spark, and advanced SQL. Data Governance: Hands-on experience implementing data lineage tracking and data quality monitoring (e.g., using Great Expectations or dbt). Preferred Skills: Semantic Technologies: Hands-on experience developing ontology frameworks using OWL, RDF, and SPARQL to enable semantic interoperability. Advanced AI Data: Experience integrating structured/unstructured data into knowledge graphs and vector databases. Streaming/Telemetry: Experience developing and maintaining semantic telemetry pipelines using services like Azure Event Hubs or Kafka. Emerging Concepts: Exposure to linked data ecosystems, data mesh, or data fabric concepts.
    $180k yearly 1d ago
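The Data Architect listing above calls for core data processing scripts in Spark and Python over Delta Lake. Here is a hedged sketch of a bronze-to-silver batch transform; the paths and column names are hypothetical, and outside Databricks the `delta` format requires the delta-lake package and Spark session configuration.

```python
# Hedged sketch of a Spark batch transform; all paths and columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_silver").getOrCreate()

orders = spark.read.format("delta").load("/lake/bronze/orders")  # hypothetical path

silver = (
    orders
    .filter(F.col("status").isNotNull())          # drop incomplete records
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])                 # enforce one row per order
)

# Overwrite the cleaned silver table for downstream analytics.
silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")
```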
  • Data Architect

    Integris Group (4.0 company rating)

    Data scientist job in Orlando, FL

    (Orlando, FL) Business Challenge The company is in the midst of an AI transformation, creating exciting opportunities for growth. At the same time, they are leading a Salesforce modernization and integrating the systems and data of their recent acquisition. To support these initiatives, they are bringing in a Senior Data Architect/Engineer to establish enterprise standards for application and data architecture, partnering closely with the Solutions Architect and Tech Leads. Role Overview The Senior Data Architect/Engineer leads the design, development, and evolution of enterprise data architecture, while contributing directly to the delivery of robust, scalable solutions. This position blends strategy and hands-on engineering, requiring deep expertise in modern data platforms, pipeline development, and cloud-native architecture. You will: Define architectural standards and best practices. Evaluate and implement new tools. Guide enterprise data initiatives. Partner with data product teams, engineers, and business stakeholders to build platforms supporting analytics, reporting, and AI/ML workloads. Day-to-Day Responsibilities Lead the design and documentation of scalable data frameworks: data lakes, warehouses, streaming architectures, and Azure-native data platforms. Build and optimize secure, high-performing ETL/ELT pipelines, data APIs, and data models. Develop solutions that support analytics, advanced reporting, and AI/ML use cases. Recommend and standardize modern data tools, frameworks, and architectural practices. Mentor and guide team members, collaborating across business, IT, and architecture groups. Partner with governance teams to ensure data quality, lineage, security, and stewardship. Desired Skills & Experience 10+ years of progressive experience in Data Engineering and Architecture. Strong leadership experience, including mentoring small distributed teams (currently 4 people: 2 onshore, 2 offshore; team growing to 6). Deep knowledge of the Azure ecosystem (Data Lake, Synapse, SQL DB, Data Factory, Databricks). Proven expertise with ETL pipelines (including 3rd-party/vendor integrations). Strong SQL and data modeling skills; familiarity with star/snowflake schemas and other approaches. Hands-on experience creating data APIs (a small data-API sketch follows this listing). Solid understanding of metadata management, governance, security, and data lineage. Programming experience with SQL, Python, Spark. Familiarity with containerized compute/orchestration frameworks (Docker, Kubernetes) is a plus. Experience with Salesforce data models, MDM tools, and streaming platforms (Kafka, Event Hub) is preferred. Excellent problem-solving, communication, and leadership skills. Education: Bachelor's degree in Computer Science, Information Systems, or related field (Master's preferred). Azure certifications in Data Engineering or Solution Architecture strongly preferred. Essential Duties & Time Allocation Data Architecture Leadership - Define enterprise-wide strategies and frameworks (35%) Engineering & Delivery - Build and optimize ETL/ELT pipelines, APIs, and models (30%) Tooling & Standards - Evaluate new tools and support adoption of modern practices (15%) Mentorship & Collaboration - Mentor engineers and align stakeholders (10%) Governance & Quality - Embed stewardship, lineage, and security into architecture (10%)
    $84k-119k yearly est. 3d ago
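Among its day-to-day responsibilities, the Integris posting above lists building data APIs. One lightweight way to expose curated warehouse data is a small read-only HTTP service; here is a sketch assuming FastAPI and Pydantic, with an in-memory dict standing in for the warehouse and every endpoint and field name invented.

```python
# Illustrative sketch of a small read-only data API; all names are invented.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="metrics-api")

class DailyMetric(BaseModel):
    date: str
    active_users: int

# Stand-in for a warehouse query result.
_FAKE_WAREHOUSE = {
    "2024-01-01": DailyMetric(date="2024-01-01", active_users=1200),
    "2024-01-02": DailyMetric(date="2024-01-02", active_users=1340),
}

@app.get("/metrics/{date}", response_model=DailyMetric)
def get_metric(date: str) -> DailyMetric:
    # A real implementation would query Synapse/SQL DB instead of a dict.
    metric = _FAKE_WAREHOUSE.get(date)
    if metric is None:
        raise HTTPException(status_code=404, detail="no metrics for that date")
    return metric
```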
  • Data Architect

    Radiant Digital (4.1 company rating)

    Data scientist job in Tampa, FL

    Data Architecture & Modeling Design and maintain enterprise-level logical, conceptual, and physical data models. Define data standards, naming conventions, metadata structures, and modeling best practices. Ensure scalability, performance, and alignment of data models with business requirements. Data Governance & Quality Implement and enforce data governance principles and policies. Define data ownership, stewardship, data lineage, and lifecycle management. Lead initiatives to improve data quality, consistency, and compliance. Enterprise Data Management Develop enterprise data strategies, including data integration, master data management (MDM), and reference data frameworks. Define and oversee the enterprise data architecture blueprint. Ensure alignment between business vision and data technology roadmaps.
    $83k-118k yearly est. 1d ago
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data scientist job in Atlanta, GA

    Job Description: GCP Data Engineer. Rate: $50/hr. on W2 (No C2C). We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices. Responsibilities Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform. Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer. Write efficient and maintainable Python code to support data ingestion, transformation, and automation. Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling (see the BigQuery sketch after this listing). Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions. Ensure data quality, governance, and reliability across all pipelines. Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets. Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows. Must-Have Skills Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning). Proficiency in Python for data engineering and pipeline automation. Hands-on experience with Cloud Data Fusion for ETL/ELT development. Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices. Solid understanding of cloud data architecture and distributed processing. Good-to-Have Skills Experience with Vertex AI for ML pipeline integration or model deployment. Familiarity with Dataproc (Spark/Hadoop) for large-scale processing. Knowledge of CI/CD workflows, Git, and DevOps best practices. Experience with Cloud Logging/Monitoring tools.
    $50 hourly 2d ago
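The GCP Data Engineer listing above leans heavily on BigQuery SQL from Python. Here is a minimal sketch of a parameterized query job, assuming the `google-cloud-bigquery` client and ambient credentials; the project, dataset, and table names are placeholders.

```python
# Minimal sketch of a parameterized BigQuery query job.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

sql = """
    SELECT order_date, SUM(revenue) AS revenue
    FROM `my_project.sales.orders`      -- hypothetical table
    WHERE order_date >= @start_date
    GROUP BY order_date
    ORDER BY order_date
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
    ]
)

for row in client.query(sql, job_config=job_config).result():
    print(row["order_date"], row["revenue"])
```

Query parameters like `@start_date` keep pipeline SQL safe to reuse across scheduled runs instead of string-formatting dates into the query.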
  • Senior Python Data Engineer (Banking)

    Ittconnect

    Data scientist job in Miami, FL

    ITTConnect is seeking a Sr. Data Engineer with experience in Banking / Financial Services for a direct-hire, full-time position with a client that is a large financial institution. The position is hybrid. Requirements: 10+ years of experience with software development. 3+ years of experience with data engineering. Hands-on Python coding experience, with knowledge of DataOps and on-premises environments. Strong understanding of Python and its applicability within data tools, including pandas and related libraries. Airflow: DAG creation, workflow maintenance, integration with dbt-core (a minimal DAG sketch follows this listing). dbt: Development and maintenance of models and macros in dbt-core (not dbt Cloud); experience migrating SQL code into dbt. Git: Layered deployment structure aligned with the Infrastructure team. SQL Server: Advanced knowledge of SQL Server, including tuning, performance evaluation, and optimization. Medallion Architecture: Understanding of Medallion architecture operations; implementation skills are not required, but operational familiarity is expected. Highly desirable: previous experience working for financial services / banks, and fluency in Portuguese or Spanish. Bachelor's degree in Information Technology or related field.
    $70k-98k yearly est. 2d ago
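The ITTConnect listing above asks specifically for Airflow DAG creation integrated with dbt-core. A common minimal pattern is a DAG that shells out to the dbt CLI; here is a sketch assuming Airflow 2.x and a dbt project on the worker's filesystem, with the paths, schedule, and profile location all hypothetical.

```python
# Hedged sketch: a daily Airflow DAG that runs and then tests a dbt-core project.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # `schedule=` in newer Airflow releases
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run --profiles-dir .",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/project && dbt test --profiles-dir .",
    )
    dbt_run >> dbt_test  # only test after the models build successfully
```

Some teams later swap the BashOperator for dedicated dbt operators, but the run-then-test ordering shown here is the core of most dbt-on-Airflow deployments.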
  • Lead Data Engineer

    Selby Jennings

    Data scientist job in Tampa, FL

    A leading investment management firm is looking to bring on a Lead Data Engineer to join its team in Tampa, Denver, Memphis, or Southfield. This is an excellent chance to work alongside industry leaders while staying hands-on and helping lead the team. Key Responsibilities Project Oversight: Direct end-to-end software development activities, from initial requirements through deployment, ensuring projects meet deadlines and quality standards. Database Engineering: Architect and refine SQL queries, stored procedures, and schema designs to maximize efficiency and scalability within Oracle environments. Performance Tuning: Evaluate system performance and apply strategies to enhance data storage and retrieval processes. Data Processing: Utilize tools like Pandas and Spark for data wrangling, transformation, and analysis. Python Solutions: Develop and maintain Python-based applications and automation workflows. Pipeline Automation: Implement and manage continuous integration and delivery pipelines using Jenkins and similar technologies to optimize build, test, and release cycles. Team Development: Guide and support junior engineers, promoting collaboration and technical growth. Technical Documentation: Create and maintain comprehensive documentation for all development initiatives. Core Skills Experience: Over a decade in software engineering, with deep expertise in Python and Oracle database systems. Technical Knowledge: Strong command of SQL, Oracle, Python, Spark, Jenkins, Kubernetes, Pandas, and modern CI/CD practices. Optimization Expertise: Skilled in database tuning and applying best practices for performance. Leadership Ability: Proven track record in managing teams and delivering complex projects. Analytical Strength: Exceptional problem-solving capabilities with a data-centric mindset. Communication: Clear and effective written and verbal communication skills. Education: Bachelor's degree in Computer Science, Engineering, or equivalent professional experience. Preferred Qualifications Certifications: Professional credentials in Oracle, Python, Kubernetes, or CI/CD technologies. Agile Background: Hands-on experience with Agile or Scrum frameworks. Cloud Platforms: Familiarity with AWS, Azure, or Google Cloud services.
    $72k-99k yearly est. 1d ago
  • Lead Azure Databrick Engineer

    Syren

    Data scientist job in Atlanta, GA

    Individual contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time. An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions. Must-Have Skills: Experience migrating from other platforms to Databricks. Proficiency in Databricks and Azure Cloud, Databricks Asset Bundles, and a holistic vision of the data strategy. Proficiency in data streaming and data modeling. Experience architecting at least two large-scale big data projects. Strong understanding of data scaling and its complexities. Data archiving and purging mechanisms. Job Requirements • Degree in computer science or equivalent preferred • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks. • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques. • Working knowledge and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing • Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP • Experience with debugging and performance tuning in distributed environments • Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy • Experience dealing with structured and unstructured data. • Must have Python and PySpark experience. • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 3d ago
  • Data Engineer - OrcaWorks AI

    Orcaworks.Ai

    Data scientist job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred) About OrcaWorks AI At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure. Key Responsibilities Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models (a small Prefect flow sketch follows this listing). Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa. Implement data validation, transformation, and storage solutions using modern ETL frameworks. Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training. Support database management, SQLModel, and data governance practices across services. Required Skills & Qualifications Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering. Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks. Hands-on experience with Airbyte, Prefect, and dbt. Familiarity with search and indexing systems like Vespa or ElasticSearch. Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration. Strong understanding of data security and applied AI workflows.
    $75k-100k yearly est. 4d ago
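The OrcaWorks listing above names Prefect for pipeline orchestration. Here is a sketch of a small flow in the Prefect 2 style, with stub extract/transform/load tasks; the source URL and the `active` field are invented, and a real load step would write to a warehouse or a search index such as Elasticsearch.

```python
# Sketch of a small Prefect flow; the URL and record shape are invented.
import httpx
from prefect import flow, task

@task(retries=2)
def extract(url: str) -> list[dict]:
    return httpx.get(url, timeout=30).json()

@task
def transform(records: list[dict]) -> list[dict]:
    return [r for r in records if r.get("active")]

@task
def load(records: list[dict]) -> int:
    # A real task would write to a warehouse or an index instead of printing.
    print(f"loaded {len(records)} records")
    return len(records)

@flow(log_prints=True)
def sync_accounts(url: str = "https://example.com/api/accounts"):
    load(transform(extract(url)))

if __name__ == "__main__":
    sync_accounts()
```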
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data scientist job in Atlanta, GA

    Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that: Address specific business challenges, integrate processes, and create great experiences. Connect our work to shared goals that propel WestRock forward in the Digital Age. Imagine how technology can advance the way we work through disruptive technology. We are looking for forward-thinking technologists who can accelerate our focus areas, such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology. As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, and analyze and evaluate business use cases for data engineering problems, helping design and develop processing solutions with cloud-based ETL technologies. How you will impact WestRock: Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data. Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs. Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance. Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling. Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies. Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform. What you need to succeed: Education: Bachelor's degree in computer science or similar. At least 6 years of strong data engineering experience. Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development. Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines. Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing. Proven experience deploying data solutions on AWS or Azure, with a strong understanding of cloud-native services. Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment. Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment. Good communication skills, with the ability to collaborate effectively across technical and non-technical teams. Good analytical and troubleshooting abilities. What we offer: A corporate culture based on integrity, respect, accountability, and excellence. Comprehensive training with numerous learning and development opportunities. An attractive salary reflecting skills, competencies, and potential. A career with a global packaging company where Sustainability, Safety, and Inclusion are business drivers and foundational elements of the daily work.
    $75k-100k yearly est. 5d ago
  • GCP Data Architect with 14+ years (Day 1 onsite)

    M3Bi-A Zensar Company

    Data scientist job in Sunrise, FL

    12-14 years of overall IT experience with expertise in the data landscape - data warehouses, data lakes, etc. Hands-on experience in the Big Data and Hadoop ecosystem; strong skills in SQL, Python, or Spark. Proficient in data warehousing concepts and customer data management (Customer 360). Experience with the GCP platform - Dataflow, Dataproc, Kubernetes containers, etc. Expertise in deep data exploration and data analysis. Excellent communication and interpersonal skills.
    $78k-110k yearly est. 3d ago
  • SAP Data Engineer

    IDR, Inc. (4.3 company rating)

    Data scientist job in Atlanta, GA

    IDR is seeking an SAP Data Engineer to join one of our top clients for an opportunity in Atlanta, GA. This role involves designing, building, and optimizing data pipelines and architecture to support advanced analytics and business intelligence in a dynamic enterprise environment. Position Overview for the SAP Data Engineer: Develop and optimize ETL/ELT pipelines from SAP sources and other enterprise data sources Design, build, and maintain data architecture supporting analytics and BI initiatives Deep knowledge of SAP technologies including BW/4HANA, S/4HANA, ECC, BusinessObjects, and SAC Experience with Celonis data engineering and enterprise DataOps practices Collaborate with cross-functional teams to ensure data governance, security, and performance Requirements for the SAP Data Engineer: 5 - 8+ years of hands-on experience in Data Engineering with SAP BW/4HANA, SAP ECC/S4, BusinessObjects, SAC, and Celonis Strong knowledge of data integration techniques such as ODP, SLT, ABAP, SQL, and CDS views Experience developing and customizing BI extraction processes using ABAP Familiarity with SAP Datasphere, hybrid architecture, and SAP BPC Understanding of process excellence frameworks, Celonis EMS, DataOps, CI/CD practices, Snowflake, and Tableau What's in it for you? Competitive compensation package Full Benefits; Medical, Vision, Dental, and more! Opportunity to get in with an industry leading organization Why IDR? 25+ Years of Proven Industry Experience in 4 major markets Employee Stock Ownership Program Dedicated Engagement Manager who is committed to you and your success Medical, Dental, Vision, and Life Insurance ClearlyRated's Best of Staffing Client and Talent Award winner 12 years in a row
    $74k-99k yearly est. 1d ago
  • Data Engineer w/ Python & SQL

    A2C (4.7 company rating)

    Data scientist job in Alpharetta, GA

    We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows. MUST HAVES A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies. Solid understanding of and hands-on experience with major cloud platforms. Experience in designing and implementing data pipelines. Must have strong Python, SQL, and GCP skills. Responsibilities Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, Composer. Develop and tune BigQuery models, queries, and ingestion processes. Implement IaC (Terraform), CI/CD, monitoring, and data quality checks. Ensure data governance, security, and reliable pipeline operations. Collaborate with data science teams and support Vertex AI-based ML workflows. Must-Have 3-5+ years of data engineering experience. Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub). Solid ETL/ELT and data modeling experience. Nice-to-Have GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
    $77k-106k yearly est. 3d ago
  • Sr. Data Engineer (SQL+Python+AWS)

    SGS Technologie (3.5 company rating)

    Data scientist job in Saint Petersburg, FL

    We are looking for a Sr. Data Engineer (SQL+Python+AWS) for a 12+ month contract (potential extension, or may convert to full-time), hybrid at St. Petersburg, FL 33716, with a direct financial client; W2 only, for US citizens or green card holders. Notes from the Hiring Manager: • Setting up Python environments and data structures to support the Data Science/ML team. • No prior data science or machine learning experience required. • Role involves building new data pipelines and managing file-loading connections. • Strong SQL skills are essential. • Contract-to-hire position. • Hybrid role based in St. Pete, FL (33716) only. Duties: This role involves building and maintaining data pipelines that connect Oracle-based source systems to AWS cloud environments, to provide well-structured data for analysis and machine learning in AWS SageMaker. It includes working closely with data scientists to deliver scalable data workflows as a foundation for predictive modeling and analytics (a small staging sketch follows this listing). • Develop and maintain data pipelines to extract, transform, and load data from Oracle databases and other systems into AWS environments (S3, Redshift, Glue, etc.). • Collaborate with data scientists to ensure data is prepared, cleaned, and optimized for SageMaker-based machine learning workloads. • Implement and manage data ingestion frameworks, including batch and streaming pipelines. • Automate and schedule data workflows using AWS Glue, Step Functions, or Airflow. • Develop and maintain data models, schemas, and cataloging processes for discoverability and consistency. • Optimize data processes for performance and cost efficiency. • Implement data quality checks, validation, and governance standards. • Work with DevOps and security teams to comply with RJ standards. Skills: Required: • Strong proficiency with SQL and hands-on experience working with Oracle databases. • Experience designing and implementing ETL/ELT pipelines and data workflows. • Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM. • Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.). • Solid understanding of data modeling, relational databases, and schema design. • Familiarity with version control, CI/CD, and automation practices. • Ability to collaborate with data scientists to align data structures with model and analytics requirements. Preferred: • Experience integrating data for use in AWS SageMaker or other ML platforms. • Exposure to MLOps or ML pipeline orchestration. • Familiarity with data cataloging and governance tools (AWS Glue Catalog, Lake Formation). • Knowledge of data warehouse design patterns and best practices. • Experience with data orchestration tools (e.g., Apache Airflow, Step Functions). • Working knowledge of Java is a plus. Education: B.S. in Computer Science, MIS, or a related degree and a minimum of five (5) years of related experience, or a combination of education, training, and experience.
    $71k-91k yearly est. 3d ago
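The SGS listing above describes staging Oracle data into S3 for Glue and SageMaker, using pandas and boto3. Here is a hedged sketch of just the S3-staging step; the bucket, key, and dataframe are placeholders, the Oracle extract is stubbed out, and `to_parquet` assumes pyarrow or fastparquet is installed.

```python
# Hedged sketch: stage a pandas dataframe in S3 as Parquet for downstream Glue jobs.
import io

import boto3
import pandas as pd

def stage_to_s3(df: pd.DataFrame, bucket: str, key: str) -> None:
    """Write a dataframe to S3 as Parquet, a common handoff format for Glue."""
    buf = io.BytesIO()
    df.to_parquet(buf, index=False)  # requires pyarrow or fastparquet
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buf.getvalue())

# In the real pipeline this frame would come from Oracle via pyodbc/oracledb.
trades = pd.DataFrame({"trade_id": [1, 2], "notional": [1_000.0, 2_500.0]})
stage_to_s3(trades, bucket="my-staging-bucket", key="trades/2024-01-01.parquet")
```

Parquet in S3 is a convenient handoff format because Glue crawlers and SageMaker training jobs can both read it directly.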
  • Data Architect

    Zensar Technologies (4.3 company rating)

    Data scientist job in Sunrise, FL

    JD: 14+ years of overall IT experience with expertise in the data landscape - data warehouses, data lakes, etc. Hands-on experience in the Big Data and Hadoop ecosystem; strong skills in SQL, Python, or Spark. Proficient in data warehousing concepts and customer data management (Customer 360). Experience with the GCP platform - Dataflow, Dataproc, Kubernetes containers, etc. Expertise in deep data exploration and data analysis. Excellent communication and interpersonal skills.
    $77k-102k yearly est. 2d ago
  • Claims Data Engineer

    Nationsbenefits

    Data scientist job in Plantation, FL

    NationsBenefits is recognized as one of the fastest growing companies in America and a Healthcare Fintech provider of supplemental benefits, flex cards, and member engagement solutions. We partner with managed care organizations to provide innovative healthcare solutions that drive growth, improve outcomes, reduce costs, and bring value to their members. Through our comprehensive suite of innovative supplemental benefits, fintech payment platforms, and member engagement solutions, we help health plans deliver high-quality benefits to their members that address the social determinants of health and improve member health outcomes and satisfaction. Our compliance-focused infrastructure, proprietary technology systems, and premier service delivery model allow our health plan partners to deliver high-quality, value-based care to millions of members. We offer a fulfilling work environment that attracts top talent and encourages all associates to contribute to delivering premier service to internal and external customers alike. Our goal is to transform the healthcare industry for the better! We provide career advancement opportunities from within the organization across multiple locations in the US, South America, and India. Position Summary: We are seeking a seasoned EDI 837 Claims Data Engineer to design, develop, and maintain data pipelines that process healthcare claims in compliance with HIPAA and ANSI X12 standards. This role requires deep expertise in Electronic Data Interchange (EDI), particularly the 837 transaction set (a toy segment-parsing sketch follows this listing), and will be pivotal in ensuring accurate, timely, and secure claims data exchange across payers, providers, clearinghouses, state agencies, and CMS. Key Responsibilities EDI Development & Integration Design, build, and maintain pipelines for processing 837 healthcare claim transactions. Implement and support EDI workflows across multiple trading partners. Ensure compliance with HIPAA regulations and ANSI X12 standards. Data Engineering Develop ETL processes to transform, validate, and load claims data into enterprise data warehouses. Optimize data flows for scalability, reliability, and performance. Collaborate with analysts and stakeholders to ensure claims data accuracy. Write and optimize SQL queries, stored procedures, and scripts for validation and reporting. Monitoring & Troubleshooting Monitor EDI transactions for errors, rejections, and compliance issues. Troubleshoot and resolve data mapping, translation, and connectivity problems. Perform root cause analysis and implement corrective actions. Collaboration Work closely with business analysts, QA teams, and IT operations to support claims processing. Partner with healthcare domain experts to align technical solutions with business needs. Required Skills & Qualifications 5+ years of experience in healthcare data engineering or claims integration. Strong expertise with EDI 837 transactions and healthcare claims processing. Proven experience with Medicaid and Medicare data exchanges between state agencies and CMS. Hands-on experience with Databricks, SSIS, and SQL Server. Knowledge of HIPAA compliance, CMS reporting requirements, and interoperability standards. Strong problem-solving skills and ability to work in cross-functional teams. Excellent communication and documentation skills. Preferred Skills Experience with Azure cloud platforms. Familiarity with other EDI transactions (835, 270/271, 276/277). Exposure to data governance frameworks and security best practices.
Background in data warehousing and healthcare analytics.
    $71k-98k yearly est. 1d ago
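The NationsBenefits listing above revolves around the X12 837 claim format. An 837 file is a string of segments separated by a terminator (often `~`), each split into elements by a separator (often `*`). Here is a toy sketch using only the standard library; the two-segment sample is illustrative, not a valid or complete claim.

```python
# Toy sketch of splitting an X12 837 payload into segments and elements.
# The sample is a fabricated fragment, not a validated claim.
SAMPLE_837 = (
    "ISA*00*          *00*          *ZZ*SENDER*ZZ*RECEIVER~"
    "CLM*PATIENT123*1250***11:B:1~"
)

def parse_segments(payload: str, seg_sep: str = "~", elem_sep: str = "*"):
    """Yield (segment_id, elements) pairs from a raw X12 string."""
    for raw in filter(None, payload.strip().split(seg_sep)):
        seg_id, *elements = raw.split(elem_sep)
        yield seg_id, elements

for seg_id, elements in parse_segments(SAMPLE_837):
    if seg_id == "CLM":
        # CLM01 = claim submitter's identifier, CLM02 = total charge amount.
        print("claim:", elements[0], "charge:", elements[1])
```

Production pipelines use full EDI translators rather than string splitting, since delimiters are declared in the ISA header and envelopes must be validated, but the segment/element structure shown here is the foundation.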

Learn more about data scientist jobs

How much does a data scientist earn in Saint Simons, GA?

The average data scientist in Saint Simons, GA earns between $57,000 and $107,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Saint Simons, GA

$78,000