
Data scientist jobs in Columbus, GA

827 jobs
  • Data Scientist

    Coforge

    Data scientist job in Atlanta, GA

    Role: Data Scientist
    Mode of Hire: Full Time

    Key Responsibilities
    • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
    • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
    • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
    • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
    • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
    • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration.
    • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.

    Skills Required
    • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
    • 5+ years of experience in a data science, ML engineering, or analytics role.
    • Strong programming skills in SQL, Python, and ML techniques.
    • Experience with Azure Cloud, Databricks, and/or Snowflake.
    • Experience building and deploying machine learning models in production environments, including hands-on experience with Databricks, SparkML, and MLflow integration.
    • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
    • Experience with tools such as Git, MLflow, Docker, and workflow schedulers.
    • Ability to communicate complex technical work to non-technical stakeholders.
    • Experience with scalable model training environments and distributed computing.

    Preferred Qualifications
    • Master's degree in a quantitative or technical discipline.
    • Experience in financial services, fintech, or enterprise B2B analytics.
    • Knowledge of A/B testing, causal inference, and statistical experimentation.
    • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 1d ago
  • Senior Data Scientist

    Seneca Resources (4.6 company rating)

    Data scientist job in Hoover, AL

    Sr. Data Scientist
    Clearance Requirements: None
    Contract to Hire
    Pay Rate: $70-$80 per hour

    We are seeking a highly skilled and innovative Senior Data Scientist to design advanced analytical solutions that drive strategic decision-making and business performance. This role will serve as a key contributor in shaping data strategy, developing predictive models, and delivering actionable insights that influence product, engineering, and business initiatives. Top candidates will bring deep expertise in statistical modeling, machine learning, data mining, and large-scale analytics, paired with the ability to translate complex findings into clear, meaningful outcomes for stakeholders. This is an excellent opportunity for a motivated data scientist who thrives in dynamic, research-oriented environments and enjoys building impactful data-driven solutions.

    Core Responsibilities:
    • Lead data strategy efforts by identifying, evaluating, and integrating new datasets to enhance product capabilities.
    • Partner closely with engineering, product, and design teams to understand business needs and build scalable data solutions.
    • Develop, test, and refine statistical and machine learning models to support predictive analytics and operational improvements.
    • Source, collect, and preprocess structured and unstructured datasets for modeling and experimentation.
    • Conduct exploratory data analysis to uncover trends, patterns, and opportunities that align with business priorities.
    • Build algorithms for large-scale data mining, including model validation, optimization, and error analysis.
    • Deploy models into production environments in partnership with software developers and ML engineers.
    • Communicate insights and recommendations clearly to technical and non-technical stakeholders.
    • Stay current with emerging tools, technologies, and best practices in data science and AI/ML.

    Required Skills/Education:
    • 7+ years of experience in data science, machine learning, or advanced analytics.
    • Strong expertise in statistical analysis, data mining, pattern recognition, and predictive modeling.
    • Proficiency in SQL, Python, Java, SAS, and data visualization tools such as Tableau and Excel.
    • Experience designing, validating, and deploying analytical or ML models in production environments.
    • Ability to manage multiple projects in a fast-paced, research-focused environment.
    • Excellent communication skills with the ability to simplify complex concepts for diverse audiences.

    Preferred Qualifications:
    • Bachelor's degree in Statistics, Applied Mathematics, Computer Science, or a related field (or equivalent experience).
    • 2+ years of project management or technical leadership experience.
    • Professional certifications in data science, machine learning, cloud technologies, or related areas.
    • Experience with big data platforms (Spark, Hadoop) or cloud ML tools (AWS, Azure, GCP).

    About Seneca Resources
    At Seneca Resources, we are more than just a staffing and consulting firm; we are a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we provide opportunities that help professionals grow their careers while making an impact. When you work with Seneca, you're choosing a company that invests in your success, celebrates your achievements, and connects you to meaningful work with leading organizations nationwide. We take the time to understand your goals and match you with roles that align with your skills and career path. Our consultants and contractors enjoy competitive pay, comprehensive health, dental, and vision coverage, 401(k) retirement plans, and the support of a dedicated team who will advocate for you every step of the way. Seneca Resources is proud to be an Equal Opportunity Employer, committed to fostering a diverse and inclusive workplace where all qualified individuals are encouraged to apply.
    $70-80 hourly 1d ago
  • Senior Data Scientist

    PangeaTwo (3.6 company rating)

    Data scientist job in Birmingham, AL

    We're seeking a Contract-to-Hire Senior Data Scientist to lead and collaborate with a multidisciplinary team in designing and developing innovative analytical products and solutions using Machine Learning, NLP, and Deep Learning. This role is ideal for someone who thrives in ambiguity, enjoys solving complex problems, and can translate business needs into measurable outcomes.

    What You'll Do
    • Partner with business leaders to understand needs and define measurable goals
    • Gather requirements, build project plans, manage deadlines, and communicate updates
    • Analyze large structured and unstructured datasets
    • Build, evaluate, implement, and maintain predictive models
    • Present results to both technical and non-technical stakeholders
    • Deploy models and monitor ongoing performance and data accuracy
    • Contribute ideas, stay current with industry trends, and support team development

    Lead-Level Opportunities Include:
    • Driving data science strategy and overseeing project delivery
    • Providing technical mentorship and leadership to the team
    • Promoting innovation and exploring emerging tech, tools, and methodologies

    What We're Looking For
    • Bachelor's degree in Applied Mathematics, Statistics, Computer Science, Data Science, or a related field
    • 3-6 years of relevant experience (advanced degrees may reduce required experience)
    • Strong skills in machine learning, statistical modeling, and data analysis
    • Proficiency in Python or R
    • Experience with large datasets, preprocessing, and feature engineering
    • Prior management experience
    • Experience with transfer learning
    • Experience building and deploying deep learning solutions
    • Strong communication skills and the ability to present complex concepts clearly
    • Experience in life insurance or a related domain is a plus
    • Ability to independently manage projects end-to-end

    Preferred Qualifications
    • Master's or PhD
    • Industry experience in similar roles
    • Publications or patents in data science or ML
    • Experience collaborating across technical and business teams
    • Familiarity with software engineering best practices and version control
    • Relevant certifications (AWS ML Specialty, Google Data Engineer, etc.)

    Rooted in Birmingham. Focused on You.
    We're a local recruiting firm based right here in Birmingham. We partner with top companies across the city, from large corporations to fast-growing startups, and we'd love to meet you for coffee to talk about your career goals. Whether you're actively searching or just exploring, we're here to guide you through the entire process, from resume tips to interview coaching. At our clients' request, only individuals with the required experience will be considered. Please note: if you have recently submitted your resume to a PangeaTwo posting, your qualifications will be considered for other open opportunities. Your resume will never be submitted to a client without your prior knowledge and consent.
    $79k-109k yearly est. 5d ago
  • Electronic Data Interchange Consultant

    Thorndale Partners

    Data scientist job in Birmingham, AL

    DETAILS: EDI CONSULTANT/TRAINER
    Title: EDI Consultant
    Length: 3-6 months for the first project, with possible extensions across multiple projects
    Compensation: Hourly, DOE
    Location: Meadowbrook, AL (Birmingham); can be remote, but must be onsite for a couple of weeks at the start and occasionally as needed thereafter

    OVERVIEW: EDI CONSULTANT/TRAINER
    This individual will plan, develop, and implement the EDI operations and strategy roadmap for the organization, and will train and mentor a small team.

    RESPONSIBILITIES: EDI CONSULTANT/TRAINER
    • Manage mapping and administration for TrustedLink/OpenText/BizManager for iSeries/AS400.
    • Mentor a small team of resources to assist in EDI operations.
    • Oversee the design, development, testing, deployment, and maintenance of the EDI systems, applications, and integrations; must be strong with TrustedLink and BizManager for iSeries.
    • Develop and document specifications.
    • Monitor and evaluate the EDI system's performance, availability, security, and compliance, and initiate corrective actions as needed.
    • Ensure that the EDI systems adhere to industry standards, best practices, and regulatory requirements.
    • Resolve complex EDI issues and provide technical support and guidance to users.
    • Establish and maintain effective relationships with internal and external stakeholders, such as business units, IT departments, vendors, and trading partners.

    MINIMUM REQUIREMENTS: EDI CONSULTANT/TRAINER
    • Experience with AS400/iSeries, RPG development, and data files.
    • Strong experience with OpenText, TrustedLink, and BizManager for iSeries.
    • 2+ years of leadership experience training and leading a small team.
    • 10+ years of experience in EDI systems development, implementation, and management.
    • Extensive knowledge and expertise in EDI standards, formats, protocols, and technologies, such as ANSI X12, EDIFACT, XML, AS2, FTP, and VAN communication protocols.
    $73k-101k yearly est. 1d ago
  • Data Scientist

    Parker's Kitchen (4.2 company rating)

    Data scientist job in Savannah, GA

    We are looking for a Data Scientist with expertise in optimization and forecasting to help improve how we manage labor, staffing, and operational resources across our retail locations. This role is critical in building models and decision-support tools that ensure the right people, in the right place, at the right time, balancing customer service, efficiency, and cost. You will work closely with Operations, Finance, and Store Leadership teams to deliver practical solutions that improve labor planning, scheduling, and demand forecasting. The right candidate will be confident, resourceful, and excited to own both the technical and business-facing aspects of applying data science in a fast-paced retail environment.

    Responsibilities
    • Build and maintain forecasting models (time-series, machine learning, and statistical) for sales and transactions.
    • Develop and deploy optimization models (linear/mixed-integer programming, heuristics, simulation) to improve workforce scheduling and labor allocation.
    • Partner with operations and finance to translate forecasts into actionable staffing and labor plans that reduce costs while maintaining service levels.
    • Build dashboards and automated tools to track forecast accuracy, labor KPIs, and staffing effectiveness.
    • Provide insights and "what-if" scenario modeling to support strategic workforce and budget planning.

    Knowledge, Skills, and Abilities
    • Strong foundation in forecasting techniques (time-series models, regression, machine learning) and optimization methods (linear/mixed-integer programming, heuristics, simulation).
    • Proficiency in Python or R for modeling and analysis, along with strong SQL skills for working with large-scale datasets.
    • Knowledge of statistics, probability, and applied mathematics to support predictive and prescriptive modeling.
    • Experience building and deploying predictive models, optimization tools, and decision-support solutions that drive measurable business outcomes.
    • Strong data storytelling and visualization skills using tools such as Power BI, Tableau, or Looker.
    • Ability to translate analytical outputs into clear, actionable recommendations for non-technical stakeholders.
    • Strong collaboration skills with the ability to partner cross-functionally with Operations, Finance, and Store Leadership to drive adoption of data-driven approaches.
    • Ability to work independently and resourcefully, combining technical depth with practical problem-solving to deliver results in a fast-paced environment.

    Education and Requirements
    Required:
    • Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Industrial Engineering, Operations Research, or a related field.
    • Minimum 2-3 years of professional experience in Data Science or a related area.
    • Strong skills in time-series forecasting (e.g., ARIMA, Prophet, ML-based approaches).
    • Proficiency in optimization techniques (linear programming, integer programming).
    • Strong Python or R programming skills.
    • SQL expertise for large, complex datasets.
    • Strong communication skills with the ability to partner with business stakeholders.

    Preferred:
    • Experience in Retail, Restaurant, and/or Convenience Stores a plus.
    • Experience with cloud platforms (Snowflake, AWS, GCP, Azure).
    • Knowledge of BI tools (Tableau, Power BI, Looker).

    Physical Requirements
    • Prolonged periods sitting/standing at a desk and working on a computer.
    • Must be able to lift up to 50 pounds.

    Parker's is an equal opportunity employer committed to hiring a diverse workforce and sustaining an inclusive culture. Parker's does not discriminate on the basis of disability, veteran status, or any other basis protected under federal, state, or local laws.
    $73k-100k yearly est. 4d ago
  • Data Architect

    Agile Resources, Inc.

    Data scientist job in Atlanta, GA

    Note: Initial 100% onsite required for the first six months.
    Employment Type: Permanent / Direct Hire / Full-time
    Salary: Up to $180,000 (depending on experience) + bonus

    The Role: We're seeking a highly skilled and hands-on Data Architect to lead the design, implementation, and ongoing evolution of our enterprise-grade data systems. This role is crucial for building scalable, secure, and intelligent data infrastructure that supports core analytics, operational excellence, and future AI initiatives. Success requires a seasoned technologist who can seamlessly integrate cloud-native services with traditional data warehousing to create a modern, unified data platform.

    What You'll Do:
    • Architecture & Strategy: Lead the design and implementation of modern data platforms, including Data Lakes, Data Warehouses, and Lakehouse architectures, to enable a single source of truth for the enterprise.
    • Data Modeling & Integration: Architect unified data models that support both modular monoliths and microservices-based platforms. Design and optimize high-volume, low-latency streaming/batch ETL/ELT pipelines.
    • Technical Leadership: Drive technical execution across the entire data lifecycle. Build and optimize core data processing scripts using Spark and Python.
    • Governance & Quality: Define and enforce standards for data governance, metadata management, and data observability across distributed systems. Implement automated data lineage tracking, schema evolution, and data quality monitoring.
    • Cloud Infrastructure: Configure and manage cloud-native data services, including core data storage and event ingestion infrastructure.

    Required Experience:
    • Experience: 10+ years of proven experience in enterprise data architecture and engineering.
    • Core Platform Expertise: Strong, hands-on experience with the Azure Data Ecosystem, including Azure Data Lake Storage (ADLS), Azure Synapse Analytics (or an equivalent cloud data warehouse), and Azure Purview (or an equivalent data catalog).
    • Processing: Deep expertise in Databricks (or Apache Spark) for ETL/ELT pipeline implementation, using Delta Lake and SQL Server (or an equivalent RDBMS).
    • Coding & Scripting: Strong proficiency in Python, Spark, and advanced SQL.
    • Data Governance: Hands-on experience implementing data lineage tracking and data quality monitoring (e.g., using Great Expectations or dbt).

    Preferred Skills:
    • Semantic Technologies: Hands-on experience developing ontology frameworks using OWL, RDF, and SPARQL to enable semantic interoperability.
    • Advanced AI Data: Experience integrating structured/unstructured data into knowledge graphs and vector databases.
    • Streaming/Telemetry: Experience developing and maintaining semantic telemetry pipelines using services like Azure Event Hubs or Kafka.
    • Emerging Concepts: Exposure to linked data ecosystems, data mesh, or data fabric concepts.
    $180k yearly 4d ago
  • Data Engineer

    Clear Point Consultants (4.6 company rating)

    Data scientist job in Atlanta, GA

    No C2C

    We're looking for a hands-on Data Engineer to help build, scale, and fine-tune real-time data systems using Kafka, AWS, and a modern data stack. In this role, you'll work deeply with streaming data, ETL, distributed systems, and PostgreSQL to power analytics, product innovation, and AI-driven use cases. You'll also get to work with AI/ML frameworks, automation, and MLOps tools to support advanced modeling and a highly responsive data platform.

    What You'll Do
    • Design and build real-time streaming pipelines using Kafka, Confluent Schema Registry, and ZooKeeper
    • Build and manage cloud-based data workflows using AWS services like Glue, EMR, EC2, and S3
    • Optimize and maintain PostgreSQL and other databases with strong schema design, advanced SQL, and performance tuning
    • Integrate AI and ML frameworks (TensorFlow, PyTorch, Hugging Face) into data pipelines for training and inference
    • Automate data quality checks, feature generation, and anomaly detection using AI-powered monitoring and observability tools
    • Partner with ML engineers to deploy, monitor, and continuously improve machine learning models in both batch and real-time pipelines using tools like MLflow, SageMaker, Airflow, and Kubeflow
    • Experiment with vector databases and retrieval-augmented generation (RAG) pipelines to support GenAI and LLM initiatives
    • Build scalable, cloud-native, event-driven architectures that power AI-driven data products

    What You Bring
    • Bachelor's degree in Computer Science, Engineering, Math, or a related technical field
    • 3+ years of hands-on data engineering experience with Kafka (Confluent or open-source) and AWS
    • Experience with automated data quality, monitoring, and observability tools
    • Strong SQL skills and solid database fundamentals with PostgreSQL and both traditional and NoSQL databases
    • Proficiency in Python, Scala, or Java for pipeline development and AI integrations
    • Experience with synthetic data generation, vector databases, or GenAI-powered data products
    • Hands-on experience integrating ML models into production data pipelines using frameworks like PyTorch or TensorFlow and MLOps tools such as Airflow, MLflow, SageMaker, or Kubeflow
    $98k-127k yearly est. 23h ago
  • Data Engineer (Mid & Senior)

    Hruckus

    Data scientist job in Huntsville, AL

    Veteran-Owned Firm Seeking Data Engineers for an Onsite Assignment in Huntsville, AL

    My name is Stephen Hrutka. I lead a Veteran-Owned management consulting firm in Washington, DC. We specialize in technical and cleared recruiting for the Department of Defense (DoD), the Intelligence Community (IC), and other advanced defense agencies. At HRUCKUS, we support fellow Veteran-Owned businesses by helping them recruit for positions across organizations such as the VA, SBA, HHS, DARPA, and other leading-edge R&D-focused defense agencies.

    We seek to fill Data Engineer roles supporting the FBI in Huntsville, AL. The ideal candidate will possess an active Top-Secret security clearance and 5+ years (Mid-Level) to 8+ years (Senior-Level) of experience in data engineering or database development. They should have strong hands-on experience with ETL tools (e.g., Informatica, Talend, Pentaho, AWS Glue, or custom Java ETL frameworks) and be proficient in SQL and at least one major RDBMS (Oracle or PostgreSQL). If you're interested, I'll gladly provide more details about the role and discuss your qualifications further.

    Thanks,
    Stephen M. Hrutka
    Principal Consultant
    HRUCKUS LLC

    Executive Summary: HRUCKUS is seeking Mid-Level and Senior-Level Data Engineers with Top-Secret security clearances for roles supporting the FBI in Huntsville, AL.

    Job Description: We are seeking Data Engineers (Senior and Mid-Level) to support secure, mission-critical data environments within a classified cloud infrastructure. These roles are fully onsite in Huntsville, AL, and require an active Top-Secret clearance. The ideal candidates will have strong experience with ETL development, data migration, Java-based data pipelines, and relational/NoSQL databases (Oracle, PostgreSQL, MongoDB), along with exposure to AWS cloud services and Agile/Scrum methodologies.

    Responsibilities:
    • Design, develop, and maintain ETL workflows to extract, transform, and load large-scale structured and unstructured datasets.
    • Develop data migration solutions between legacy and modern systems using SQL, Java, and cloud-native tools.
    • Implement data integration frameworks leveraging AWS services such as Glue, Lambda, S3, RDS, Redshift, and Kinesis.
    • Develop automation scripts using Python, Shell, or Bash for deployment, data validation, and maintenance tasks.
    • Maintain and enhance data pipelines for real-time and batch data processing.
    • Support data quality, metadata management, and governance activities.
    • Participate in Agile/Scrum sprints, contributing to design, code reviews, testing, and documentation.
    • Troubleshoot and resolve data-related issues across on-premises and AWS environments.

    Qualifications:
    • Active Top-Secret clearance
    • Bachelor's degree in Computer Science, IT, or a related field
    • Mid-Level: 5+ years of professional experience in data engineering or database development
    • Senior-Level: 8+ years of professional experience in data engineering or database development
    • Strong hands-on experience with ETL tools (e.g., Informatica, Talend, Pentaho, AWS Glue, or custom Java ETL frameworks)
    • Proficiency in SQL and at least one major RDBMS (Oracle or PostgreSQL)
    • Experience with data migration projects and data quality validation
    • Proficiency in Java or Python for data processing and automation
    • Experience working with cloud technologies, preferably AWS (RDS, S3, Lambda, Redshift, Glue)
    • Working knowledge of Linux/Unix environments and shell scripting
    • Experience in an Agile/Scrum development environment
    • Excellent problem-solving, analytical, and communication skills

    Details:
    Job Title: Mid-Level and Senior-Level Data Engineer
    Location: Redstone Arsenal, Huntsville, AL 35898
    Security Clearance: Top-Secret clearance
    Salary: Up to $150,000 per year (based on experience)
    Job Type: Full-time, onsite

    Benefits:
    • Paid Time Off (PTO): 3 weeks of PTO (including sick leave). Unused PTO is paid out at the end of the year.
    • Holidays: 2 floating holidays and 8 public holidays per year.
    • Health & Dental Insurance: The company covers 50% of employee health and dental insurance (dependents may be added at extra cost). Coverage becomes effective after 30 days.
    • Life Insurance: Standard Short-Term Disability (STD), Long-Term Disability (LTD), and life insurance at no cost to full-time employees.
    • 401(k) Program: Eligible after 90 days, with a 4% company match and immediate vesting.
    • Profit Sharing: Employees can participate in the company's profit-sharing program without requiring personal contributions.
    • Commuting and Parking: No reimbursement for commuting or parking expenses.
    $150k yearly 2d ago
  • Technical Data Architect

    Oldcastle Infrastructure (4.3 company rating)

    Data scientist job in Atlanta, GA

    Exempt

    Oldcastle Infrastructure™, a CRH company, is the leading provider of utility infrastructure solutions for the water, energy, and communications markets throughout North America. We're more than just a manufacturer of precast concrete, polymer concrete, or plastic products. We're a trusted and strategic partner to engineers, contractors, distributors, specifiers, and more. With our network of more than 80 manufacturing facilities and more than 4,000 employees, we're leading the industry with innovation and a safety-first mindset.

    Job Summary
    Oldcastle Infrastructure (OI), as part of CRH's Infrastructure Products Group (IPG), is a global manufacturing leader of utility infrastructure products. Our goal is to be the most efficient producer of engineered systems and our customers' strategic partner of choice. A crucial part of OI's journey is the investment in new digital tools, including a new ERP. With a modern, common platform, OI will unlock the benefits of its scale, deliver a better customer experience, and build a foundation for continuous process improvement. The Technical Data Architect is a senior role accountable for defining, governing, and delivering the data architecture strategy required to migrate enterprise data from legacy systems into SAP S/4HANA and Salesforce CPQ. This role ensures that data models, migration approaches, and governance structures support end-to-end business processes and regulatory compliance, while delivering high-quality, reconciled, and auditable data into the template. The architect will partner with the business data management team, program management office, functional process owners, and system integrators to ensure a seamless transition with minimal disruption to operations.

    Job Location
    This role will work hybrid out of our office in the Sandy Springs, GA area.

    Job Responsibilities
    Data Architecture & Modeling
    • Design target SAP S/4HANA data models and mapping rules from legacy systems.
    • Validate functional data alignment for Finance (FI/CO), Sales & Distribution (SD), Materials Management (MM), and Production Planning (PP).
    • Leverage the CRH IPG Data Dictionary, data management, and ETL migration tools to support the cleansing and data migration processes.
    • Provide technical capabilities to support data quality and data reconciliations for master data subjects.

    ERP Data Migration
    • Collaborate with the business Master Data team on the legacy data migration by supporting the technical requirements for Customers, Vendors, BOMs, Products, and other master data subjects.
    • Define extraction, transformation, load, and reconciliation processes, with automation where possible.

    Master Data Management
    • Partner with the business Master Data team to align on the governance model, ownership, and ongoing stewardship processes for core data subjects.
    • Define and support the data migration testing strategy, including mock loads, trial conversions, and dress rehearsals.
    • Partner with the business master data team and users for validation and sign-off at each migration stage.
    • Design cutover sequencing for data loads, ensuring minimal downtime.
    • Coordinate with functional leads and the PMO on entry/exit criteria and contingency planning for go-live events related to data quality readiness.

    Job Requirements
    • 5-8+ years of experience working in data architecture in the manufacturing industry.
    • Proven track record in delivering large-scale data migrations (CPQ, OTC, Finance, Supply Chain, Manufacturing, P2P).
    • Hands-on experience with ETL/migration tools (SAP Data Services, Informatica, etc.).
    • Strong knowledge of data governance, master data management, and audit/compliance processes.
    • Process improvement knowledge gained while working in an organization undergoing a significant operational culture shift.
    • Creation and improvement of processes that demonstrate ease of doing business internally and externally.
    • Development and implementation of process adherence and data quality adoption metrics.
    • Comfortable operating in an environment of ambiguity and fast change.
    • Strong interpersonal and organizational influencing skills.
    • Ability to communicate in a simple, articulate, thoughtful manner to varying audience levels.
    • Innovative spirit to work cross-functionally in developing improvement ideas.
    • A pleasant, likeable manner while accomplishing challenging results.
    • Bachelor's degree in computer science or a related technical discipline.
    • SAP Technical Certifications in Master Data/Data Services/MDG (preferred).
    • PMP Certification (preferred).

    What CRH Offers You
    • Highly competitive base pay
    • Comprehensive medical, dental, and disability benefits programs
    • Group retirement savings program
    • Health and wellness programs
    • An inclusive culture that values opportunity for growth, development, and internal promotion

    About CRH
    CRH has a long and proud heritage. We are a collection of hundreds of family businesses, regional companies, and large enterprises that together form the CRH family. CRH operates in a decentralized, diversified structure that allows you to work in a small company environment while having the career opportunities of a large international organization. If you're up for a rewarding challenge, we invite you to take the first step and apply today! Once you click apply now, you will be brought to our official employment application. Please complete your online profile and it will be sent to the hiring manager. Our system allows you to view and track your status 24 hours a day. Thank you for your interest!

    Oldcastle Infrastructure, a CRH Company, is an Affirmative Action and Equal Opportunity Employer. EOE/Vet/Disability. CRH is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, status as a protected veteran, or any other characteristic protected under applicable federal, state, or local law.
    $84k-113k yearly est. 4d ago
  • International Logistics Data Quality Analyst

    Log-Net, Inc.

    Data scientist job in Atlanta, GA

    Log-Net, Inc. provides the finest trade, transportation, and logistics information technology systems via the internet. We currently have a unique opportunity in the areas of international logistics process improvement and trading partner management. Your primary roles would be to support the ongoing deployment of a global logistics solution and to facilitate the management of all supplier interfaces between our clients, including coordinating and monitoring activity with the trading partners, managing the EDI processes, ensuring quality control standards are maintained, and setting and maintaining agreed-upon service-level standards. If you enjoy a challenging, fast-paced, demanding environment with customer interaction, we invite you to take a closer look at this opportunity.

    Responsibilities
    • Understand the day-to-day issues that our business faces, which can be better understood with data
    • Be the point of contact for EDI and reporting around EDI
    • Successfully onboard and be the point of contact for new trade partners in LOG-NET
    • Compile and analyze data related to business EDI issues
    • Fully understand LOG-NET's platform and its capabilities
    • Conduct analysis, presentations, and reports on the supply chain outlook
    • Develop carrier scorecard reports and analysis
    • Handle integrated logistics communication with trade partners and carriers
    • Coordinate client problem solving and troubleshooting
    • Self-starter with technical aptitude and an attitude toward learning

    Qualifications
    • Bachelor's or Master's degree in Logistics, Supply Chain, Statistics, Applied Mathematics, Physics, or equivalent experience preferred
    • EDI or API integration experience preferred
    • 0-1 years of analytical experience
    • Experience with data analytics, SQL, and BigQuery preferred
    • Experience in international supply chain preferred
    • Authorization to work in the United States is required for this position

    SALARY RANGE: USD 50,000-70,000
    $58k-82k yearly est. 2d ago
  • GIS Data Analyst

    Aptonet

    Data scientist job in Atlanta, GA

    We are seeking a highly skilled GIS Data Analyst to support the creation, maintenance, and quality assurance of critical geospatial infrastructure data for enterprise-level engineering initiatives. This role plays a key part in ensuring the accuracy and reliability of GIS data used across the organization, including compliance-related programs and operational analytics. The ideal candidate is a hands-on GIS professional with strong analytical skills, advanced geospatial editing experience, and the ability to interpret field data into accurate digital representations. PTC (Positive Train Control) and rail experience are preferred but not required.

    Key Responsibilities
    • Create, modify, and quality-check geospatial infrastructure data for engineering and business operations
    • Utilize GIS tools to ensure accurate topology and track geometry representation
    • Convert field-collected spatial data into a validated digital rail network aligned with organizational standards
    • Review, approve, and promote data change sets submitted by GIS/CAD technicians
    • Conduct regular inventory analysis, including track mileage, asset counts, and spatial measurements
    • Collaborate with engineering sub-groups and business partners to support enterprise GIS initiatives
    • Contribute to the preparation, assembly, and deployment of geospatial data to support compliance programs and corporate systems
    • Support continuous improvement by recommending cost-saving initiatives leveraging GIS technologies
    • Assist senior GIS staff with additional GIS responsibilities as needed

    Required Skills & Qualifications
    • Advanced proficiency with GIS software, ideally Esri tools (ArcGIS Pro, Desktop, geodatabase editing, topology management)
    • Strong analytical, problem-solving, and data quality assurance capabilities
    • Ability to interpret engineering drawings, field data, and spatial reference materials
    • Familiarity working with infrastructure or utility network datasets
    • Excellent communication and collaboration skills
    • Bachelor's degree required: GIS, Computer Science, Software Engineering, IT, Geography, or a related field

    Preferred Qualifications (Nice to Have)
    • Exposure to railroad infrastructure or linear transportation networks
    • Experience supporting Positive Train Control (PTC) data models or compliance initiatives
    • Working knowledge of CAD-to-GIS workflows
    • Experience with enterprise GIS deployments in large-scale organizations

    Soft Skills
    • Detail-oriented data stewardship mindset
    • Ability to make informed decisions and manage competing priorities
    • Strong teamwork and communication in a technical environment
    $58k-82k yearly est. 3d ago
  • Financial Data Analyst

    Genpact (4.4 company rating)

    Data scientist job in Alpharetta, GA

    Ready to build the future with AI? At Genpact, we don't just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

    Inviting applications for the role of Financial Data Analyst in Alpharetta, GA.

    Role: Financial Data Analyst
    Location: Alpharetta, GA 30005 (3 days per week in office)
    Hiring Type: Full-time with Genpact + benefits

    Responsibilities
    • Define and execute the product roadmap for AI tooling and data integration initiatives, driving products from concept to launch in a fast-paced, Agile environment.
    • Translate business needs and product strategy into detailed requirements and user stories.
    • Collaborate with engineering, data, and AI/ML teams to design and implement data connectors that enable seamless access to internal and external financial datasets.
    • Partner with data engineering teams to ensure reliable data ingestion, transformation, and availability for analytics and AI models.
    • Evaluate and onboard new data sources, ensuring accuracy, consistency, and completeness of fundamental and financial data.
    • Continuously assess opportunities to enhance data coverage, connectivity, and usability within AI and analytics platforms.
    • Monitor and analyze product performance post-launch to drive ongoing optimization and inform future investments.
    • Facilitate alignment across stakeholders, including engineering, research, analytics, and business partners, ensuring clear communication and prioritization.

    Minimum qualifications
    • Bachelor's degree in Computer Science, Finance, or a related discipline; MBA/Master's degree desired.
    • 5+ years of experience in a similar role.
    • Strong understanding of fundamental and financial datasets, including company financials, market data, and research data.
    • Proven experience in data integration, particularly using APIs, data connectors, or ETL frameworks to enable AI or analytics use cases.
    • Familiarity with AI/ML data pipelines, model lifecycle, and related tooling.
    • Experience working with cross-functional teams in an Agile environment.
    • Strong analytical, problem-solving, and communication skills with the ability to translate complex concepts into actionable insights.
    • Prior experience in financial services, investment banking, or research domains.
    • Excellent organizational and stakeholder management abilities with a track record of delivering data-driven products.

    Preferred qualifications
    • Deep understanding of Python, SQL, or similar scripting languages
    • Knowledge of cloud data platforms (AWS, GCP, or Azure) and modern data architectures (data lakes, warehouses, streaming)
    • Familiarity with AI/ML platforms
    • Understanding of data governance, metadata management, and data security best practices in financial environments
    • Experience with API standards (REST, GraphQL) and data integration frameworks
    • Demonstrated ability to partner with engineering and data science teams to operationalize AI initiatives

    Why join Genpact?
    • Lead AI-first transformation: build and scale AI solutions that redefine industries
    • Make an impact: drive change for global enterprises and solve business challenges that matter
    • Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
    • Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
    • Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build
    • Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

    Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

    Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
    $69k-84k yearly est. 23h ago
  • ETL Databricks Data Engineer

    Capgemini (4.5 company rating)

    Data scientist job in Atlanta, GA

    We are seeking an ETL Databricks Data Engineer to join our team and help build robust, scalable data solutions. This role involves designing and maintaining data pipelines, optimizing ETL processes, and collaborating with cross-functional teams to ensure data integrity and accessibility.

    Key Responsibilities
    • Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
    • Create and optimize Python scripts for data transformation, automation, and integration tasks.
    • Develop and fine-tune SQL queries for data extraction, transformation, and loading.
    • Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
    • Ensure data integrity, security, and compliance with organizational standards.
    • Participate in code reviews and contribute to best practices in data engineering.

    Required Skills & Qualifications
    • 5 years of professional experience in data engineering or related roles.
    • Strong proficiency in Databricks (including Spark-based data processing).
    • Advanced programming skills in Python.
    • Expertise in SQL for querying and data modeling.
    • Familiarity with Azure Cloud and Azure Data Factory (ADF).
    • Understanding of ETL frameworks, data governance, and performance tuning.
    • Knowledge of CI/CD practices and version control tools (e.g., Git).
    • Exposure to BI tools such as Power BI or Tableau for data visualization.

    Life at Capgemini
    Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
    • Flexible work
    • Healthcare including dental, vision, mental health, and well-being programs
    • Financial well-being programs such as 401(k) and Employee Share Ownership Plan
    • Paid time off and paid holidays
    • Paid parental leave
    • Family building benefits like adoption assistance, surrogacy, and cryopreservation
    • Social well-being benefits like subsidized back-up child/elder care and tutoring
    • Mentoring, coaching, and learning programs
    • Employee Resource Groups
    • Disaster Relief

    Disclaimer
    Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law. This is a general description of the duties, responsibilities, and qualifications required for this position. Physical, mental, sensory, or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact. Click the following link for more information on your rights as an applicant: **************************************************************************
    $77k-99k yearly est. 23h ago
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data scientist job in Atlanta, GA

    Job Description: GCP Data Engineer
    Rate: $50/hr on W2 (No C2C)

    We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices.

    Responsibilities
    • Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform.
    • Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer.
    • Write efficient and maintainable Python code to support data ingestion, transformation, and automation.
    • Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling.
    • Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions.
    • Ensure data quality, governance, and reliability across all pipelines.
    • Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets.
    • Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.

    Must-Have Skills
    • Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning).
    • Proficiency in Python for data engineering and pipeline automation.
    • Hands-on experience with Cloud Data Fusion for ETL/ELT development.
    • Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub.
    • Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices.
    • Solid understanding of cloud data architecture and distributed processing.

    Good-to-Have Skills
    • Experience with Vertex AI for ML pipeline integration or model deployment.
    • Familiarity with Dataproc (Spark/Hadoop) for large-scale processing.
    • Knowledge of CI/CD workflows, Git, and DevOps best practices.
    • Experience with Cloud Logging/Monitoring tools.
    $50 hourly 23h ago
  • Data Governance Analyst

    HD Supply (4.6 company rating)

    Data scientist job in Atlanta, GA

    Responsible for implementing the day-to-day needs of the data governance and data quality program. Participate in recommending and implementing policies and procedures for data governance approved by the Data Governance Council and Data Governance team. Identify data quality opportunities and drive compliance with data governance and quality initiatives. Ensure data governance opportunities are identified and addressed throughout the project life cycle.

    Job-Specific Responsibilities and Preferred Qualifications
    Preferred Qualifications (Remote East Coast Role)
    • Strong expertise in SQL with the ability to write, optimize, and troubleshoot complex queries for data extraction, analysis, and reporting.
    • Proficiency in creating, testing, and troubleshooting regular expressions (regex) for text parsing, validation, and pattern-based data manipulation.
    • Experience with eCommerce analytics, including analyzing performance metrics and delivering actionable insights to improve business outcomes.
    • Hands-on experience with data visualization tools (e.g., Tableau, Adobe Analytics) and preparing clear reports and presentations for leadership and cross-functional teams.
    • Familiarity with cloud-based data platforms such as Snowflake, AWS Redshift, or Google BigQuery; Python for data manipulation and automation is a plus.
    • Demonstrated ability to support data governance initiatives by applying policies, standards, and dashboards to monitor and improve data quality.
    • Strong analytical and problem-solving skills with exceptional attention to detail, accuracy, and data integrity.

    Major Tasks, Responsibilities, and Key Accountabilities
    • Participates in the execution and implementation of approved data definitions, policies, standards, data access processes, and dashboard statistics.
    • Supports requests to change configurations, use, or design of data elements for a specific area of influence.
    • Conducts testing and user acceptance of new system functionality.
    • Analyzes and identifies data sources and data redundancy, and implements processes to remediate data issues and/or data clean-up efforts.
    • Supports governance principles, policies, and stewardship within the business.
    • Assists in the development and distribution of the data quality dashboard.
    • Scopes, resources, and manages data quality initiatives.
    • Participates in the review of all system enhancements and new technologies for data needs, use, and redundancies.

    Nature and Scope
    • Demonstrates skill in data analysis techniques by resolving missing/incomplete information and inconsistencies/anomalies in more complex research/data.
    • Nature of work requires increasing independence; receives guidance only on unusual, complex problems or issues.
    • Work review typically involves periodic review of output by a supervisor and/or direct customers of the process.
    • May provide general guidance/direction to or train junior-level support or professional personnel.

    Work Environment
    • Located in a comfortable indoor area. Any unpleasant conditions would be infrequent and not objectionable.
    • Frequent periods are spent standing or sitting in the same location with some opportunity to move about. Occasionally there may be a requirement to stoop or lift light material or equipment (typically less than 8 pounds).
    • Typically requires overnight travel less than 10% of the time.

    Education and Experience
    Typically requires a BS/BA in a related discipline. Generally 2-5 years of experience in a related field, OR an MS/MA and generally 2-4 years of experience in a related field. Certification is required in some areas.
    $65k-94k yearly est. 2d ago
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data scientist job in Atlanta, GA

    Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
    • Address specific business challenges, integrate processes, and create great experiences
    • Connect our work to shared goals that propel WestRock forward in the Digital Age
    • Imagine how technology can advance the way we work by using disruptive technology

    We are looking for forward-thinking technologists who can accelerate our focus areas, such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology.

    As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. This role helps define and analyze business requirements for enterprise-scale reports, analyzes and evaluates business use cases for data engineering problems, and helps design and develop processing solutions with cloud-based ETL technologies.

    How you will impact WestRock:
    • Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
    • Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs.
    • Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
    • Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
    • Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
    • Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.

    What you need to succeed:
    • Education: Bachelor's degree in computer science or similar
    • At least 6 years of strong data engineering experience
    • Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development
    • Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines
    • Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing
    • Proven experience deploying data solutions on AWS or Azure, with a strong understanding of cloud-native services
    • Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment
    • Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment
    • Good communication skills, with the ability to collaborate effectively across technical and non-technical teams
    • Good analytical and troubleshooting abilities

    What we offer:
    • Corporate culture based on integrity, respect, accountability, and excellence
    • Comprehensive training with numerous learning and development opportunities
    • An attractive salary reflecting skills, competencies, and potential
    • A career with a global packaging company where Sustainability, Safety, and Inclusion are business drivers and foundational elements of the daily work
    $75k-100k yearly est. 3d ago
  • Data Engineer - OrcaWorks AI

    OrcaWorks AI

    Data scientist job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred)

    About OrcaWorks AI
    At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.

    Key Responsibilities
    • Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models.
    • Integrate data from multiple structured and unstructured sources into unified and queryable layers using Elasticsearch or Vespa.
    • Implement data validation, transformation, and storage solutions using modern ETL frameworks.
    • Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
    • Support database management, SQLModel, and data governance practices across services.

    Required Skills & Qualifications
    • Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
    • Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
    • Hands-on experience with Airbyte, Prefect, and dbt.
    • Familiarity with search and indexing systems like Vespa or Elasticsearch.
    • Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
    • Strong understanding of data security and applied AI workflows.
    $75k-100k yearly est. 2d ago
  • Lead Azure Databrick Engineer

    Syren

    Data scientist job in Atlanta, GA

    Note: Individual contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time.

    An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.

    Must-Have Skills:
    • Experience migrating from other platforms to Databricks
    • Proficiency in Databricks and Azure Cloud, including Databricks Asset Bundles, with a holistic vision of the data strategy
    • Proficiency in data streaming and data modeling
    • Experience architecting at least two large-scale big data projects
    • Strong understanding of data scaling and its complexities
    • Data archiving and purging mechanisms

    Job Requirements
    • Degree in computer science or equivalent preferred
    • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure Cloud and Databricks
    • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques
    • Working knowledge and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing
    • Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP
    • Experience with debugging and performance tuning in distributed environments
    • Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
    • Experience dealing with structured and unstructured data
    • Must have Python and PySpark experience
    • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 1d ago
  • Data Engineer

    Synechron 4.4 company rating

    Data scientist job in Alpharetta, GA

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our Challenge
    Join our data-driven enterprise and lead the design of scalable, high-performance big data solutions. You will craft architectures that handle vast volumes of data, optimize pipeline performance, and incorporate advanced governance and AI-powered processing to unlock actionable insights.

    Additional Information
    The base salary for this position varies based on geography and other factors. In accordance with law, the base salary for this role if filled within Alpharetta, GA is $120K - 125K/year & benefits (see below).

    The Role
    Responsibilities:
    • Design, build, and maintain scalable big data architectures supporting enterprise analytics and operational needs.
    • Develop, implement, and optimize data pipelines using Apache Airflow, Databricks, and other relevant technologies to ensure reliable data flow and process automation (see the Airflow sketch following this listing).
    • Manage and enhance data workflows for batch and real-time processing, ensuring efficiency and scalability.
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into robust data solutions.
    • Implement data governance, security, and compliance best practices to protect sensitive information.
    • Explore integrating AI/ML techniques into data pipelines, leveraging Databricks and other AI tools for predictive analytics and automation.
    • Develop monitoring dashboards and alert systems to ensure pipeline health and performance.
    • Stay current with emerging big data and cloud technologies, recommending best practices to improve system performance and scalability.

    Requirements:
    • 5+ years of proven experience in big data architecture design, including distributed storage and processing frameworks such as Hadoop, Spark, and Databricks.
    • Strong expertise in performance tuning for large-scale data systems.
    • Hands-on experience with Apache Airflow for workflow orchestration.
    • Proficiency in SQL for managing and querying large databases.
    • Extensive experience with Python for scripting, automation, and data processing workflows.
    • Experience working with cloud platforms (Azure, AWS, or GCP) is preferable.

    Preferred, but not required:
    • Deep understanding of data governance and security frameworks to safeguard sensitive data.
    • Experience integrating AI/ML models into data pipelines using Databricks MLflow or similar tools.
    • Knowledge of containerization (Docker, Kubernetes) is a plus.

    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity & paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (vary by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5,000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.
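    To make the orchestration requirement concrete, here is a minimal sketch of an Apache Airflow DAG (assuming Airflow 2.4+ for the `schedule` argument) wiring a daily extract step into a transform step. The DAG id and task bodies are hypothetical placeholders, not Synechron's pipelines.

    ```python
    # Minimal daily batch DAG; task logic is placeholder print statements.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(ds: str, **context) -> None:
        # Placeholder: pull the day's partition from the source system.
        print(f"extracting partition {ds}")

    def transform(ds: str, **context) -> None:
        # Placeholder: apply business transformations to the same partition.
        print(f"transforming partition {ds}")

    with DAG(
        dag_id="daily_batch_pipeline",      # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task      # transform runs only after extract succeeds
    ```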
    $120k-125k yearly 1d ago
  • Data Engineer w/ Python & SQL

    A2C 4.7 company rating

    Data scientist job in Alpharetta, GA

    We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.

    Responsibilities
    • Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer.
    • Develop and tune BigQuery models, queries, and ingestion processes (see the sketch following this listing).
    • Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
    • Ensure data governance, security, and reliable pipeline operations.
    • Collaborate with data science teams and support Vertex AI-based ML workflows.

    Must-Have
    • Strong Python, SQL, and GCP skills.
    • 3-5+ years of data engineering experience.
    • Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
    • Solid ETL/ELT and data modeling experience.
    • A problem solver, able to analyze and research complex issues and propose actionable solutions and/or strategies.
    • Solid understanding of, and hands-on experience with, major cloud platforms.
    • Experience designing and implementing data pipelines.

    Nice-to-Have
    • GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
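    As a small, hedged illustration of the BigQuery side of this role, the sketch below runs a parameterized aggregation with the google-cloud-bigquery client. The project, dataset, and table names are invented, and running it requires GCP credentials.

    ```python
    # Parameterized BigQuery query sketch; all resource names are hypothetical.
    import datetime

    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")  # placeholder project

    query = """
        SELECT user_id, COUNT(*) AS events
        FROM `my-analytics-project.app.events`    -- hypothetical table
        WHERE event_date >= @min_date
        GROUP BY user_id
        ORDER BY events DESC
        LIMIT 10
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("min_date", "DATE", datetime.date(2024, 1, 1)),
        ]
    )

    # Query parameters guard against SQL injection and make the job reusable.
    for row in client.query(query, job_config=job_config).result():
        print(row.user_id, row.events)
    ```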
    $77k-106k yearly est. 1d ago

Learn more about data scientist jobs

How much does a data scientist earn in Columbus, GA?

The average data scientist in Columbus, GA earns between $57,000 and $105,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Columbus, GA

$77,000
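
For readers who want to sanity-check these figures: the quoted $77,000 average is not the simple midpoint of the $57,000-$105,000 range (which would be $81,000); it sits close to the geometric mean of the endpoints. The snippet below shows the arithmetic; whether a geometric mean is actually how the figure was derived is an assumption, not something documented here.

```python
# Back-of-the-envelope check of the quoted Columbus, GA salary figures.
low, high = 57_000, 105_000
arithmetic_mean = (low + high) / 2     # 81000.0
geometric_mean = (low * high) ** 0.5   # about 77363, near the quoted $77,000
print(round(arithmetic_mean), round(geometric_mean))
```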

What are the biggest employers of Data Scientists in Columbus, GA?

The biggest employers of Data Scientists in Columbus, GA are:
  1. Maximus
  2. Molina Healthcare