
Data engineer jobs in Riverdale, GA

- 1,729 jobs
  • Epidemiology Data Scientist

    Guidehouse 3.7 company rating

    Data engineer job in Atlanta, GA

    Job Family: Data Science Consulting
    Travel Required: Up to 10%
    Clearance Required: Ability to Obtain Public Trust

    What You Will Do:
    • Lead quantitative analysis and modeling of real-world data, including CMS, commercial claims, and clinical health data, to generate actionable insights for federal public health.
    • Lead analyses that leverage real-world data to estimate population-level prevalence and incidence of chronic disease. (An illustrative sketch follows this listing.)
    • Design, implement, and refine machine learning and statistical models (e.g., regression, clustering, causal inference) for surveillance and research purposes.
    • Develop and maintain data pipelines and dashboards for large-scale health datasets using R, Python, and SQL.
    • Use Databricks and Snowflake for scalable data processing and analytics.
    • Collaborate with epidemiologists, clinicians, and public health experts to interpret findings and inform CDC policy and strategy.
    • Prepare scientific reports, presentations, and publications for both technical and non-technical audiences.
    • Ensure data quality, documentation, and reproducibility across all analytic workflows.
    • Support onboarding and training of new team members as needed.

    What You Will Need:
    • Advanced degree (MS or PhD) in Data Science, Epidemiology, Public Health, Biostatistics, or a related field.
    • Minimum five (5) years of experience in health data science and epidemiology, including deep experience analyzing CMS or commercial claims data, clinical data, or other large health datasets to generate population-level epidemiological estimates.
    • Strong proficiency in R, Python (including libraries such as Pandas, NumPy, and Scikit-learn), and SQL, including manipulation of large datasets.
    • Demonstrated experience with statistical modeling, machine learning, and data visualization.
    • Experience with Databricks and/or Snowflake.
    • Strong background in population-level public health, with experience in scientific writing and presentation.
    • Excellent communication and collaboration skills.
    • Ability to work independently in a fast-paced environment to produce high-quality deliverables on time.

    What Would Be Nice To Have:
    • Experience with generative AI, NLP, or advanced analytics in healthcare settings.
    • Deep experience in biostatistics and statistical platforms, including SAS and R.
    • Familiarity with public health surveillance systems, federal health programs, or similar public-sector environments.
    • Experience in chronic disease surveillance, preferably neurological conditions.
    • Prior consulting work with multi-disciplinary teams and stakeholder engagement.

    The annual salary range for this position is $102,000.00-$170,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer: Guidehouse offers a comprehensive total rewards package that includes competitive compensation and a flexible benefits package, reflecting our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance; Personal and Family Sick Time & Company Paid Holidays; eligibility for a discretionary variable incentive bonus; Parental Leave and Adoption Assistance; 401(k) Retirement Plan; Basic Life & Supplemental Life; Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts; Short-Term & Long-Term Disability; Student Loan PayDown; Tuition Reimbursement, Personal Development & Learning Opportunities; Skills Development & Certifications; Employee Referral Program; Corporate Sponsored Events & Community Outreach; Emergency Back-Up Childcare Program; Mobility Stipend.

    About Guidehouse: Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinances of Los Angeles and San Francisco.

    If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.

    All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.

    Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $102k-170k yearly Auto-Apply 1d ago
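    A minimal sketch of the kind of population-level prevalence estimate this role describes, in the Python/Pandas stack the posting names. All file, column, and numeric values here are invented for illustration; real CMS/claims work involves enrollment denominators, continuous-eligibility rules, and weighting that this omits.

      import pandas as pd

      # Hypothetical claims extract; file and column names are invented.
      claims = pd.read_csv("claims_extract.csv")  # person_id, diagnosis_code, service_date

      # People with at least one qualifying diagnosis in the period (example ICD-10 codes).
      CHRONIC_CODES = {"G30.0", "G30.1", "G30.9"}
      cases = claims.loc[claims["diagnosis_code"].isin(CHRONIC_CODES), "person_id"].nunique()

      population = 2_500_000  # assumed enrolled population for the same period

      prevalence_per_100k = cases / population * 100_000
      print(f"Crude period prevalence: {prevalence_per_100k:.1f} per 100,000")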
  • Data Architect with low latency

    New York Technology Partners 4.7 company rating

    Data engineer job in Atlanta, GA

    Role: Data Architect with low latency
    Duration: Long-Term Contract

    We're seeking a seasoned Lead Software Engineer to architect, build, and scale real-time data processing platforms that power event-driven applications and analytics. You'll lead the design of streaming microservices, govern data quality and lineage, and mentor engineers while partnering with product, platform, and security stakeholders to deliver resilient, low-latency systems. (An illustrative consumer-side sketch follows this listing.)

    Responsibilities:
    • Own design and delivery of high-throughput, low-latency streaming solutions using technologies like Confluent Kafka, Apache Flink, Hazelcast, Kafka Streams, Kafka Connect, and Schema Registry.
    • Design and implement microservices and event-driven systems with robust ETL/ELT pipelines for real-time ingestion, enrichment, and delivery.
    • Establish distributed caching and in-memory data grid patterns (e.g., Redis, Hazelcast) to optimize read/write performance and session/state management.
    • Define and operationalize event gateways/event grids for event routing, fan-out, and reliable delivery.
    • Lead data governance initiatives: standards for metadata, lineage, classifications, retention, access controls, and compliance (PII/PCI/SOX/GDPR as applicable).
    • Drive CI/CD best practices (pipelines, automated testing, progressive delivery) to enable safe, frequent releases; champion DevSecOps and "shift-left" testing.
    • Set SLOs/SLAs, track observability (tracing, metrics, logs), and optimize performance at scale (throughput, backpressure, state, checkpointing).
    • Work with Security, Platform, and Cloud teams on networking, IAM, secrets, certificates, and cost optimization.
    • Mentor engineers, conduct design reviews, and enforce coding standards and reliability patterns.
    • Guide the platform and delivery roadmap.

    Required Qualifications:
    • 10+ years in software engineering; 5+ years designing large-scale real-time or event-driven platforms.
    • Expert with Confluent Kafka (brokers, partitions, consumer groups, Schema Registry, Kafka Connect), Flink (DataStream/Table API, stateful ops, checkpointing), Hazelcast, and/or Kafka Streams.
    • Strong in ETL/ELT design, streaming joins/windows, exactly-once semantics, and idempotent processing.
    • Experience with microservices (Java/Python), REST/gRPC, protobuf/Avro, and contract-first development.
    • Hands-on with distributed caching and in-memory data grids; performance tuning and eviction strategies.
    • Cloud experience in one or more platforms (Azure/AWS/GCP); containers, Docker, Kubernetes.
    • Experience with production-grade CI/CD (Jenkins, Bamboo, Harness, or similar) and Infrastructure as Code (Terraform/Helm).
    • Robust observability (Prometheus/Grafana/OpenTelemetry, Splunk/ELK, or similar) and resilience patterns (circuit breakers, retries, DLQs).
    • Practical data governance: metadata catalogs, lineage, encryption, RBAC.
    • Excellent communication; ability to lead design, influence stakeholders, and guide cross-functional delivery.
    • Core competencies include Architectural Thinking, Systems Design, Operational Excellence, Security & Compliance, Team Leadership, and Stakeholder Management.

    Nice to Have:
    • Experience with CDC, custom Kafka Connect connectors, Flink SQL, Beam.
    • Streaming ML or feature store integration (online/offline consistency).
    • Multi-region/disaster recovery for streaming platforms.
    • Experience with zero-downtime migrations, blue/green, and canary deployments.
    $93k-121k yearly est. 4d ago
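    A minimal consumer-side sketch of the idempotent-processing requirement above, using the confluent-kafka Python client with manual offset commits plus de-duplication on an event ID. Broker address, topic, and field names are assumed for illustration; a production design would keep the seen-ID set in a durable store (e.g., Redis or Hazelcast, as the listing suggests) rather than in memory.

      import json
      from confluent_kafka import Consumer

      def process(event):
          print("enriching", event)  # placeholder for real enrichment/delivery logic

      consumer = Consumer({
          "bootstrap.servers": "localhost:9092",  # assumed broker address
          "group.id": "enrichment-service",
          "enable.auto.commit": False,            # commit only after successful processing
          "auto.offset.reset": "earliest",
      })
      consumer.subscribe(["orders"])              # hypothetical topic

      seen_ids = set()  # in-memory here for brevity; durable store in production

      while True:
          msg = consumer.poll(1.0)
          if msg is None or msg.error():
              continue
          event = json.loads(msg.value())
          if event["event_id"] not in seen_ids:   # idempotency: skip duplicates on redelivery
              process(event)
              seen_ids.add(event["event_id"])
          consumer.commit(message=msg)            # at-least-once delivery + dedup ≈ effectively once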
  • SAP Data Engineer

    IDR, Inc. 4.3 company rating

    Data engineer job in Atlanta, GA

    IDR is seeking an SAP Data Engineer to join one of our top clients for an opportunity in Atlanta, GA. This role involves designing, building, and optimizing data pipelines and architecture to support advanced analytics and business intelligence in a dynamic enterprise environment.

    Position Overview for the SAP Data Engineer:
    • Develop and optimize ETL/ELT pipelines from SAP sources and other enterprise data sources
    • Design, build, and maintain data architecture supporting analytics and BI initiatives
    • Apply deep knowledge of SAP technologies, including BW/4HANA, S/4HANA, ECC, BusinessObjects, and SAC
    • Apply experience with Celonis data engineering and enterprise DataOps practices
    • Collaborate with cross-functional teams to ensure data governance, security, and performance

    Requirements for the SAP Data Engineer:
    • 5-8+ years of hands-on experience in Data Engineering with SAP BW/4HANA, SAP ECC/S4, BusinessObjects, SAC, and Celonis
    • Strong knowledge of data integration techniques such as ODP, SLT, ABAP, SQL, and CDS views
    • Experience developing and customizing BI extraction processes using ABAP
    • Familiarity with SAP Datasphere, hybrid architecture, and SAP BPC
    • Understanding of process excellence frameworks, Celonis EMS, DataOps, CI/CD practices, Snowflake, and Tableau

    What's in it for you?
    • Competitive compensation package
    • Full benefits: Medical, Vision, Dental, and more!
    • Opportunity to get in with an industry-leading organization

    Why IDR?
    • 25+ years of proven industry experience in 4 major markets
    • Employee Stock Ownership Program
    • Dedicated Engagement Manager who is committed to you and your success
    • Medical, Dental, Vision, and Life Insurance
    • ClearlyRated's Best of Staffing Client and Talent Award winner 12 years in a row
    $74k-99k yearly est. 22h ago
  • Data Engineer w/ Python & SQL

    A2C 4.7 company rating

    Data engineer job in Alpharetta, GA

    We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows. (An illustrative pipeline sketch follows this listing.)

    Must Haves:
    • A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies
    • Solid understanding of, and hands-on experience with, major cloud platforms
    • Experience designing and implementing data pipelines
    • Strong Python, SQL, and GCP skills

    Responsibilities:
    • Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer
    • Develop and tune BigQuery models, queries, and ingestion processes
    • Implement IaC (Terraform), CI/CD, monitoring, and data quality checks
    • Ensure data governance, security, and reliable pipeline operations
    • Collaborate with data science teams and support Vertex AI-based ML workflows

    Must-Have:
    • 3-5+ years of data engineering experience
    • Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub)
    • Solid ETL/ELT and data modeling experience

    Nice-to-Have:
    • GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s
    $77k-106k yearly est. 2d ago
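    As a rough illustration of the Dataflow work above, a minimal Apache Beam streaming sketch: read JSON events from Pub/Sub, count them per user in one-minute windows, and append the counts to BigQuery. Project, subscription, table, and field names are all invented; running on Dataflow itself requires the usual --runner/--project pipeline options.

      import json
      import apache_beam as beam
      from apache_beam.options.pipeline_options import PipelineOptions

      opts = PipelineOptions(streaming=True)

      with beam.Pipeline(options=opts) as p:
          (
              p
              | "Read" >> beam.io.ReadFromPubSub(subscription="projects/my-proj/subscriptions/events")
              | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
              | "Window" >> beam.WindowInto(beam.window.FixedWindows(60))  # 1-minute windows
              | "KeyByUser" >> beam.Map(lambda e: (e["user_id"], 1))
              | "Count" >> beam.CombinePerKey(sum)
              | "ToRow" >> beam.Map(lambda kv: {"user_id": kv[0], "events": kv[1]})
              | "Write" >> beam.io.WriteToBigQuery(
                  "my-proj:analytics.user_event_counts",
                  schema="user_id:STRING,events:INTEGER",
                  write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
              )
          )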
  • ETL Databricks Data Engineer

    Capgemini 4.5 company rating

    Data engineer job in Atlanta, GA

    We are seeking an ETL Databricks Data Engineer to join our team and help build robust, scalable data solutions. This role involves designing and maintaining data pipelines, optimizing ETL processes, and collaborating with cross-functional teams to ensure data integrity and accessibility. (An illustrative pipeline sketch follows this listing.)

    Key Responsibilities
    • Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
    • Create and optimize Python scripts for data transformation, automation, and integration tasks.
    • Develop and fine-tune SQL queries for data extraction, transformation, and loading.
    • Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
    • Ensure data integrity, security, and compliance with organizational standards.
    • Participate in code reviews and contribute to best practices in data engineering.

    Required Skills & Qualifications
    • 5 years of professional experience in data engineering or related roles.
    • Strong proficiency in Databricks (including Spark-based data processing).
    • Advanced programming skills in Python.
    • Expertise in SQL for querying and data modeling.
    • Familiarity with Azure Cloud and Azure Data Factory (ADF).
    • Understanding of ETL frameworks, data governance, and performance tuning.
    • Knowledge of CI/CD practices and version control tools (e.g., Git).
    • Exposure to BI tools such as Power BI or Tableau for data visualization.

    Life at Capgemini
    Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
    • Flexible work
    • Healthcare including dental, vision, mental health, and well-being programs
    • Financial well-being programs such as 401(k) and Employee Share Ownership Plan
    • Paid time off and paid holidays
    • Paid parental leave
    • Family building benefits like adoption assistance, surrogacy, and cryopreservation
    • Social well-being benefits like subsidized back-up child/elder care and tutoring
    • Mentoring, coaching and learning programs
    • Employee Resource Groups
    • Disaster Relief

    Disclaimer
    Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law.

    This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.

    Click the following link for more information on your rights as an Applicant: **************************************************************************
    $77k-99k yearly est. 1d ago
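    For flavor, a minimal PySpark sketch of the kind of Databricks ETL step this listing describes: deduplicate and type a raw feed, then write it to a Delta table. Paths, table, and column names are invented for illustration.

      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

      raw = spark.read.json("/mnt/raw/orders/")   # hypothetical landing path

      clean = (
          raw.dropDuplicates(["order_id"])
             .withColumn("order_ts", F.to_timestamp("order_ts"))
             .filter(F.col("amount") > 0)         # basic integrity rule
      )

      # Delta Lake keeps the target table ACID-compliant and time-travelable.
      clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")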
  • Data Scientist

    Coforge

    Data engineer job in Atlanta, GA

    Role: Data Scientist
    Mode of Hire: Full Time

    Key Responsibilities
    • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
    • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
    • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow. (An illustrative training-and-tracking sketch follows this listing.)
    • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
    • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
    • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration.
    • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.

    Skills Required:
    • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
    • 5+ years of experience in a data science, ML engineering, or analytics role.
    • Strong programming skills in SQL, Python, and ML techniques.
    • Experience with Azure Cloud, Databricks, and/or Snowflake.
    • Experience building and deploying machine learning models in production environments, with hands-on experience with Databricks, including SparkML and MLflow integration.
    • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
    • Experience with tools such as Git, MLflow, Docker, and workflow schedulers.
    • Ability to communicate complex technical work to non-technical stakeholders.
    • Experience with scalable model training environments and distributed computing.

    Preferred Qualifications
    • Master's degree in a quantitative or technical discipline.
    • Experience in financial services, fintech, or enterprise B2B analytics.
    • Knowledge of A/B testing, causal inference, and statistical experimentation.
    • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 2d ago
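    A minimal train-and-track sketch of the scikit-learn/MLflow loop the posting describes, with synthetic data standing in for real features; metric and artifact names are invented.

      import mlflow
      import mlflow.sklearn
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=5_000, random_state=42)  # stand-in for real features
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

      with mlflow.start_run():
          model = GradientBoostingClassifier().fit(X_tr, y_tr)
          auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
          mlflow.log_metric("test_auc", auc)        # tracked for monitoring/retraining decisions
          mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment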
  • Data Analytics & AI Delivery Lead

    Futran Solutions 3.9 company rating

    Data engineer job in Alpharetta, GA

    Hello, hope you are doing well.

    Job Title: Data Analytics & AI Delivery Lead
    Experience: 10+ years

    Essential Duties and Responsibilities:
    We are seeking a highly motivated and experienced Delivery Leader to oversee the execution of strategic Analytics and AI projects. This individual will be responsible for ensuring on-time, high-quality delivery of initiatives across data and analytics programs, managing project organization, and serving as a critical bridge between business stakeholders and technical teams. This role requires strong delivery expertise across agile and waterfall methodologies, deep experience in stakeholder engagement, and familiarity with the Azure ecosystem, including tools like Synapse, Microsoft Fabric, Power BI, and Azure DevOps. The ideal candidate brings a track record of leading complex, cross-functional data initiatives with confidence, clarity, and consistency.

    Key Responsibilities:
    • Lead end-to-end delivery of Analytics and AI projects, from initiation through implementation and transition to operations.
    • Partner closely with business stakeholders to align project goals with strategic objectives and ensure value realization.
    • Drive project planning activities, including scoping, scheduling, resource planning, risk management, and reporting.
    • Facilitate agile ceremonies or stage-gate processes based on project needs (Scrum or Waterfall).
    • Serve as a liaison between technical teams and business functions to translate requirements and remove roadblocks.
    • Ensure strong governance and compliance with delivery standards, documentation, and stakeholder communication.
    • Track key metrics such as scope, budget, timeline, and benefit realization to ensure successful project outcomes.
    • Contribute to the continuous improvement of delivery practices within the analytics and AI domain.

    Your Qualifications
    Required:
    • Bachelor's degree in Information Systems, Computer Science, Engineering, or a related field (Master's preferred).
    • 8+ years of experience in project/program management with a focus on data, analytics, or AI initiatives.
    • Demonstrated ability to manage complex, multi-disciplinary programs using Agile, Waterfall, and hybrid methodologies.
    • Experience delivering projects involving Azure Data Services, including Synapse Analytics, Power BI, Microsoft Fabric, and Azure DevOps.
    • Strong stakeholder management skills, with a proven ability to influence, communicate, and collaborate at all levels of the organization.
    • Knowledge of data governance, data modeling, and AI/ML deployment frameworks is a plus.
    • PMP, Scrum Master, or SAFe certification is advantageous.

    Preferred:
    • Strong analytical mindset with the ability to understand and interpret technical details.
    • Exceptional organizational skills and attention to detail.
    • Comfortable managing ambiguity and driving clarity in evolving project environments.
    • Proactive problem-solver with a focus on outcomes and team empowerment.
    • Familiarity with tools like Jira, Azure Boards, or similar project tracking platforms.

    Thanks & Regards,
    Mittapalli Lalith Kumar
    Senior Technical Recruiter
    ***************************
    $93k-127k yearly est. 4d ago
  • Data Architect

    Agile Resources, Inc.

    Data engineer job in Atlanta, GA

    Note: 100% onsite required for the first six months.
    Employment Type: Permanent / Direct Hire / Full-time
    Salary: Up to $180,000 (depending on experience) + bonus

    The Role:
    We're seeking a highly skilled and hands-on Data Architect to lead the design, implementation, and ongoing evolution of our enterprise-grade data systems. This role is crucial for building scalable, secure, and intelligent data infrastructure that supports core analytics, operational excellence, and future AI initiatives. Success requires a seasoned technologist who can seamlessly integrate cloud-native services with traditional data warehousing to create a modern, unified data platform.

    What You'll Do:
    • Architecture & Strategy: Lead the design and implementation of modern data platforms, including Data Lakes, Data Warehouses, and Lakehouse architectures, to enable a single source of truth for the enterprise.
    • Data Modeling & Integration: Architect unified data models that support both modular monoliths and microservices-based platforms. Design and optimize high-volume, low-latency streaming/batch ETL/ELT pipelines.
    • Technical Leadership: Drive technical execution across the entire data lifecycle. Build and optimize core data processing scripts using Spark and Python.
    • Governance & Quality: Define and enforce standards for data governance, metadata management, and data observability across distributed systems. Implement automated data lineage tracking, schema evolution, and data quality monitoring. (An illustrative quality-gate sketch follows this listing.)
    • Cloud Infrastructure: Configure and manage cloud-native data services, including core data storage and event ingestion infrastructure.

    Required Experience:
    • Experience: 10+ years of proven experience in enterprise data architecture and engineering.
    • Core Platform Expertise: Strong, hands-on experience with the Azure data ecosystem, including Azure Data Lake Storage (ADLS), Azure Synapse Analytics (or an equivalent cloud DW), and Azure Purview (or an equivalent data catalog).
    • Processing: Deep expertise in Databricks (or Apache Spark) for ETL/ELT pipeline implementation, using Delta Lake and SQL Server (or an equivalent RDBMS).
    • Coding & Scripting: Strong proficiency in Python, Spark, and advanced SQL.
    • Data Governance: Hands-on experience implementing data lineage tracking and data quality monitoring (e.g., using Great Expectations or dbt).

    Preferred Skills:
    • Semantic Technologies: Hands-on experience developing ontology frameworks using OWL, RDF, and SPARQL to enable semantic interoperability.
    • Advanced AI Data: Experience integrating structured/unstructured data into Knowledge Graphs and Vector Databases.
    • Streaming/Telemetry: Experience developing and maintaining semantic telemetry pipelines using services like Azure Event Hubs or Kafka.
    • Emerging Concepts: Exposure to linked data ecosystems, data mesh, or data fabric concepts.
    $180k yearly 22h ago
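    The listing names Great Expectations or dbt for data quality monitoring; the underlying idea can be sketched framework-free in PySpark as a gate that fails the pipeline before bad data propagates. The path, key column, and thresholds are invented for illustration.

      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.getOrCreate()
      df = spark.read.format("delta").load("/mnt/silver/customers")  # hypothetical table

      total = df.count()
      null_keys = df.filter(F.col("customer_id").isNull()).count()
      dupes = total - df.dropDuplicates(["customer_id"]).count()

      # Fail fast so downstream consumers never see a bad load.
      assert null_keys == 0, f"{null_keys} rows missing customer_id"
      assert dupes / max(total, 1) < 0.001, f"duplicate key rate too high ({dupes}/{total})"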
  • ML Engineer with Time-Series Data Experience

    Techstar Group 3.7 company rating

    Data engineer job in Atlanta, GA

    Role: ML Engineer with time-series data experience
    Location: Hybrid in Atlanta, GA (locals preferred)
    Rate: $58/hr on C2C; any visa

    • Model Development: Design, build, train, and optimize ML/DL models for time-series forecasting, prediction, anomaly detection, and causal inference. (An illustrative anomaly-detection sketch follows this listing.)
    • Data Pipelines: Create robust data pipelines for collection, preprocessing, feature engineering, and labeling of large-scale time-series data.
    • Scalable Systems: Architect and implement scalable AI/ML infrastructure and MLOps pipelines (CI/CD, monitoring) for production deployment.
    • Collaboration: Work with data engineers, software developers, and domain experts to integrate AI solutions.
    • Performance: Monitor, troubleshoot, and optimize model performance, ensuring robustness and real-world applicability.

    • Languages & Frameworks: Good understanding of the AWS ecosystem, Python (Pandas, NumPy), PyTorch, TensorFlow, Scikit-learn, PySpark.
    • ML/DL Expertise: Strong grasp of time-series models (ARIMA, Prophet, deep learning), anomaly detection, and predictive analytics.
    • Data Handling: Experience with large datasets, feature engineering, and scalable data processing.
    $58 hourly 3d ago
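    A minimal sketch of the time-series anomaly detection named above: a rolling z-score in Pandas, flagging points more than three standard deviations from a trailing mean. The file, column, and window values are invented; in practice an ARIMA/Prophet-style model would replace the rolling baseline.

      import numpy as np
      import pandas as pd

      # Hypothetical sensor series indexed by timestamp.
      s = pd.read_csv("sensor.csv", parse_dates=["ts"], index_col="ts")["value"]

      window = 60  # e.g., one hour of minutely readings
      z = (s - s.rolling(window).mean()) / s.rolling(window).std()

      anomalies = s[np.abs(z) > 3]  # points > 3 sigma from the rolling mean
      print(anomalies.head())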
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data engineer job in Atlanta, GA

    Job Description: GCP Data Engineer
    Rate: $50/hr on W2 (No C2C)

    We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices. (An illustrative BigQuery sketch follows this listing.)

    Responsibilities
    • Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform.
    • Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer.
    • Write efficient and maintainable Python code to support data ingestion, transformation, and automation.
    • Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling.
    • Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions.
    • Ensure data quality, governance, and reliability across all pipelines.
    • Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets.
    • Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.

    Must-Have Skills
    • Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning).
    • Proficiency in Python for data engineering and pipeline automation.
    • Hands-on experience with Cloud Data Fusion for ETL/ELT development.
    • Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, Pub/Sub.
    • Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices.
    • Solid understanding of cloud data architecture and distributed processing.

    Good-to-Have Skills
    • Experience with Vertex AI for ML pipeline integration or model deployment.
    • Familiarity with Dataproc (Spark/Hadoop) for large-scale processing.
    • Knowledge of CI/CD workflows, Git, and DevOps best practices.
    • Experience with Cloud Logging/Monitoring tools.
    $50 hourly 1d ago
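    For flavor, a minimal sketch of the Python-plus-BigQuery work above, using the google-cloud-bigquery client. Project, dataset, and column names are invented; result() blocks until the query job completes.

      from google.cloud import bigquery

      client = bigquery.Client(project="my-proj")  # hypothetical project

      sql = """
          SELECT DATE(event_ts) AS day, COUNT(*) AS events
          FROM `my-proj.analytics.raw_events`
          WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
          GROUP BY day
          ORDER BY day
      """

      for row in client.query(sql).result():
          print(row.day, row.events)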
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data engineer job in Atlanta, GA

    Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
    • Address specific business challenges, integrate processes, and create great experiences
    • Connect our work to shared goals that propel WestRock forward in the Digital Age
    • Imagine how technology can advance the way we work by using disruptive technology

    We are looking for forward-thinking technologists who can accelerate our focus areas, such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology.

    As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.

    How you will impact WestRock:
    • Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
    • Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs. (An illustrative Airflow sketch follows this listing.)
    • Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
    • Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
    • Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
    • Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.

    What you need to succeed:
    • Education: Bachelor's degree in computer science or similar
    • At least 6 years of strong Data Engineering experience
    • Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development
    • Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines
    • Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing
    • Proven experience deploying data solutions on AWS or Azure, with a strong understanding of cloud-native services
    • Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment
    • Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment
    • Good communication skills, with the ability to collaborate effectively across technical and non-technical teams
    • Good analytical and troubleshooting abilities

    What we offer:
    • Corporate culture based on integrity, respect, accountability, and excellence
    • Comprehensive training with numerous learning and development opportunities
    • An attractive salary reflecting skills, competencies, and potential
    • A career with a global packaging company where Sustainability, Safety, and Inclusion are business drivers and foundational elements of the daily work
    $75k-100k yearly est. 4d ago
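    A minimal Airflow sketch of the orchestration pattern this role describes: two dependent tasks on a daily schedule. The DAG ID and callables are invented; on Airflow versions before 2.4 the schedule argument is spelled schedule_interval.

      from datetime import datetime

      from airflow import DAG
      from airflow.operators.python import PythonOperator

      def extract():
          ...  # placeholder for a Foundry/warehouse pull

      def transform():
          ...  # placeholder for triggering a Spark transformation

      with DAG(
          dag_id="plant_telemetry_daily",   # hypothetical name
          start_date=datetime(2024, 1, 1),
          schedule="@daily",
          catchup=False,
      ) as dag:
          t1 = PythonOperator(task_id="extract", python_callable=extract)
          t2 = PythonOperator(task_id="transform", python_callable=transform)
          t1 >> t2                          # transform runs only after extract succeeds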
  • Lead Azure Databricks Engineer

    Syren

    Data engineer job in Atlanta, GA

    ****************Individual Contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time************

    An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.

    Must-Have Skills:
    • Experience migrating from other platforms to Databricks
    • Proficiency in Databricks and Azure Cloud, Databricks Asset Bundles, and a holistic vision of data strategy
    • Proficiency in data streaming and data modeling
    • Experience architecting at least two large-scale big data projects
    • Strong understanding of data scaling and its complexities
    • Data archiving and purging mechanisms (an illustrative sketch follows this listing)

    Job Requirements
    • Degree in computer science or equivalent preferred
    • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks
    • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques
    • Working knowledge of and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing
    • Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP
    • Experience with debugging and performance tuning in distributed environments
    • Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
    • Experience dealing with structured and unstructured data
    • Must have Python and PySpark experience
    • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 2d ago
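    A minimal sketch of the archiving-and-purging mechanism named above, against a Delta table in Databricks (table name and retention figures are invented): delete rows past retention, then VACUUM to reclaim the underlying files.

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()  # Databricks supplies a Delta-enabled session

      # Hypothetical policy: purge rows older than two years, then reclaim storage.
      spark.sql("DELETE FROM events WHERE event_date < date_sub(current_date(), 730)")
      spark.sql("VACUUM events RETAIN 168 HOURS")  # drops unreferenced files older than 7 days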
  • Data Engineer - OrcaWorks AI

    Orcaworks.Ai

    Data engineer job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred)

    About OrcaWorks AI
    At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.

    Key Responsibilities
    • Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models. (An illustrative Prefect sketch follows this listing.)
    • Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa.
    • Implement data validation, transformation, and storage solutions using modern ETL frameworks.
    • Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
    • Support database management, SQLModel, and data governance practices across services.

    Required Skills & Qualifications
    • Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
    • Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
    • Hands-on experience with Airbyte, Prefect, and dbt.
    • Familiarity with search and indexing systems like Vespa or ElasticSearch.
    • Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
    • Strong understanding of data security and applied AI workflows.
    $75k-100k yearly est. 3d ago
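    A minimal Prefect sketch of the pipeline pattern this posting describes: tasks composed into a flow, with a simple retry policy. The data and destinations are stand-ins; a real pipeline would read an Airbyte-landed dataset and write to an ElasticSearch/Vespa index.

      from prefect import flow, task

      @task(retries=3)  # simple retry policy for flaky sources
      def extract() -> list[dict]:
          return [{"id": 1, "value": "raw"}]  # stand-in for an Airbyte-landed dataset

      @task
      def transform(records: list[dict]) -> list[dict]:
          return [{**r, "value": r["value"].upper()} for r in records]

      @task
      def load(records: list[dict]) -> None:
          print(f"loaded {len(records)} records")  # stand-in for an index write

      @flow
      def pipeline():
          load(transform(extract()))

      if __name__ == "__main__":
          pipeline()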
  • Technical Data Architect

    Oldcastle Infrastructure 4.3 company rating

    Data engineer job in Atlanta, GA

    Exempt

    Oldcastle Infrastructure™, a CRH company, is the leading provider of utility infrastructure solutions for the water, energy, and communications markets throughout North America. We're more than just a manufacturer of precast concrete, polymer concrete, or plastic products. We're a trusted and strategic partner to engineers, contractors, distributors, specifiers, and more. With our network of more than 80 manufacturing facilities and more than 4,000 employees, we're leading the industry with innovation and a safety-first mindset.

    Job Summary
    Oldcastle Infrastructure (OI), as part of CRH's Infrastructure Products Group (IPG), is a global manufacturing leader of utility infrastructure products. Our goal is to be the most efficient producer of engineered systems and our customers' strategic partner of choice. A crucial part of OI's journey is the investment in new digital tools, including a new ERP. With a modern, common platform, OI will unlock the benefits of its scale, deliver a better customer experience, and build a foundation for continuous process improvement.

    The Technical Data Architect is a senior role accountable for defining, governing, and delivering the data architecture strategy required to migrate enterprise data from legacy systems into SAP S/4HANA and Salesforce CPQ. This role ensures that data models, migration approaches, and governance structures support end-to-end business processes and regulatory compliance, while delivering high-quality, reconciled, and auditable data into the template. The architect will partner with the business data management team, program management office, functional process owners, and system integrators to ensure a seamless transition with minimal disruption to operations.

    Job Location
    This role will work hybrid out of our office in the Sandy Springs, GA area.

    Job Responsibilities
    Data Architecture & Modeling
    • Design target SAP S/4HANA data models and mapping rules from legacy systems.
    • Validate functional data alignment for Finance (FI/CO), Sales & Distribution (SD), Materials Management (MM), and Production Planning (PP).
    • Leverage the CRH IPG Data Dictionary, Data Management, and ETL migration tools to support the cleansing and data migration processes.
    • Provide technical capabilities to support data quality and data reconciliations for master data subjects. (An illustrative reconciliation sketch follows this listing.)

    ERP Data Migration
    • Collaborate with the business Master Data team on the legacy data migration by supporting the technical requirements for Customers, Vendors, BOMs, Products, and other master data subjects.
    • Define extraction, transformation, load, and reconciliation processes, with automation where possible.

    Master Data Management
    • Partner with the Business Master Data team to align on the governance model, ownership, and ongoing stewardship processes for core data subjects.
    • Define and support the data migration testing strategy, including mock loads, trial conversions, and dress rehearsals.
    • Partner with the business master data team and users for validation and sign-off at each migration stage.
    • Design cutover sequencing for data loads, ensuring minimal downtime.
    • Coordinate with functional leads and the PMO on entry/exit criteria and contingency planning for go-live events related to data quality readiness.

    Job Requirements
    • 5-8+ years of experience working in Data Architecture in the manufacturing industry
    • Proven track record in delivering large-scale data migrations (CPQ, OTC, Finance, Supply Chain, Manufacturing P2P)
    • Hands-on experience with ETL/migration tools (SAP Data Services, Informatica, etc.)
    • Strong knowledge of data governance, master data management, and audit/compliance processes
    • Process improvement knowledge gained while working in an organization undergoing a significant operational culture shift
    • Creation and improvement of processes that demonstrate ease of doing business internally and externally
    • Development and implementation of process adherence and data quality adoption metrics
    • Comfortable operating in an environment of ambiguity and fast change
    • Strong interpersonal and organizational influencing skills
    • Ability to communicate in a simple, articulate, thoughtful manner to varying audience levels
    • Innovative spirit to work cross-functionally in developing improvement ideas
    • A pleasant, likeable manner while accomplishing challenging results
    • Bachelor's degree in computer science or a related technical discipline
    • SAP Technical Certifications in Master Data/Data Services/MDG (preferred)
    • PMP Certification (preferred)

    What CRH Offers You
    • Highly competitive base pay
    • Comprehensive medical, dental, and disability benefits programs
    • Group retirement savings program
    • Health and wellness programs
    • An inclusive culture that values opportunity for growth, development, and internal promotion

    About CRH
    CRH has a long and proud heritage. We are a collection of hundreds of family businesses, regional companies, and large enterprises that together form the CRH family. CRH operates in a decentralized, diversified structure that allows you to work in a small company environment while having the career opportunities of a large international organization.

    If you're up for a rewarding challenge, we invite you to take the first step and apply today! Once you click apply now, you will be brought to our official employment application. Please complete your online profile and it will be sent to the hiring manager. Our system allows you to view and track your status 24 hours a day. Thank you for your interest!

    Oldcastle Infrastructure, a CRH Company, is an Affirmative Action and Equal Opportunity Employer. EOE/Vet/Disability. CRH is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, status as a protected veteran, or any other characteristic protected under applicable federal, state, or local law.
    $84k-113k yearly est. 22h ago
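    A minimal sketch of the migration reconciliation idea above: compare row counts, key sets, and a financial control total between a legacy extract and the S/4HANA load. File, key, and tolerance values are invented; real reconciliations run per data object with documented sign-off.

      import pandas as pd

      legacy = pd.read_csv("legacy_customers.csv")  # hypothetical extracts, keyed the same way
      target = pd.read_csv("s4_customers.csv")

      checks = {
          "row_count_match": len(legacy) == len(target),
          "key_set_match": set(legacy["customer_id"]) == set(target["customer_id"]),
          "balance_sum_match": abs(legacy["balance"].sum() - target["balance"].sum()) < 0.01,
      }
      for name, ok in checks.items():
          print(f"{name}: {'PASS' if ok else 'FAIL'}")  # evidence for audit sign-off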
  • Data Engineer

    Synechron 4.4 company rating

    Data engineer job in Alpharetta, GA

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our Challenge
    Join our data-driven enterprise and lead the design of scalable and high-performance big data solutions. You will craft architectures that handle vast volumes of data, optimize pipeline performance, and incorporate advanced governance and AI-powered processing to unlock actionable insights.

    Additional Information
    The base salary for this position varies based on geography and other factors. In accordance with law, the base salary for this role if filled within Alpharetta, GA is $120K-125K/year & benefits (see below).

    The Role
    Responsibilities:
    • Design, build, and maintain scalable big data architectures supporting enterprise analytics and operational needs.
    • Develop, implement, and optimize data pipelines using Apache Airflow, Databricks, and other relevant technologies to ensure reliable data flow and process automation.
    • Manage and enhance data workflows for batch and real-time processing, ensuring efficiency and scalability.
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into robust data solutions.
    • Implement data governance, security, and compliance best practices to protect sensitive information.
    • Explore integrating AI/ML techniques into data pipelines, leveraging Databricks and other AI tools for predictive analytics and automation.
    • Develop monitoring dashboards and alert systems to ensure pipeline health and performance.
    • Stay current with emerging big data and cloud technologies, recommending best practices to improve system performance and scalability.

    Requirements:
    • 5+ years of proven experience in big data architecture design, including distributed storage and processing frameworks such as Hadoop, Spark, and Databricks.
    • Strong expertise in performance tuning for large-scale data systems.
    • Hands-on experience with Apache Airflow for workflow orchestration.
    • Proficiency in SQL for managing and querying large databases.
    • Extensive experience with Python for scripting, automation, and data processing workflows.
    • Experience working with cloud platforms (Azure, AWS, or GCP) preferable.

    Preferred, but not required:
    • Deep understanding of data governance and security frameworks to safeguard sensitive data.
    • Experience with integrating AI/ML models into data pipelines using Databricks MLflow or similar tools.
    • Knowledge of containerization (Docker, Kubernetes) is a plus.

    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity and paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (varies by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5,000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.
    $120k-125k yearly 2d ago
  • Senior Data Architect

    Visionaire Partners 4.1 company rating

    Data engineer job in Atlanta, GA

    Long-term opportunity with a rapidly growing company!

    RESPONSIBILITIES:
    • Own end-to-end data architecture for enterprise SaaS platforms, including both OLTP and analytical serving layers
    • Design and operate solutions across Azure SQL DB/MI, Azure Databricks with Delta Lake, ADLS Gen2, and Synapse Analytics / Microsoft Fabric
    • Partner with analytics teams on Power BI semantic models, including performance optimization and row-level security (RLS)
    • Define and implement Information Lifecycle Management (ILM): hot/warm/cold tiers, 2-year OLTP retention, archive/nearline, and a BI mirror that enables rich analytics without impacting production workloads
    • Engineer ERP/SAP financial interfaces for idempotency, reconciliation, and traceability; design rollback/de-dup strategies and financial journal integrity controls
    • Govern schema evolution/DbVersions to prevent cross-customer regressions while achieving performance gains
    • Establish data SLOs (freshness, latency, correctness) mapped to customer SLAs; instrument monitoring/alerting and drive continuous improvement

    This is a direct-hire opportunity in Atlanta. Work onsite the first 5-6 months, then transition to a hybrid schedule of 3 days in the office with 2 days remote (flex days).

    REQUIRED SKILLS:
    • 10+ years of experience in data or database engineering
    • 5-8+ years owning data or database architecture for customer-facing SaaS or analytics platforms at enterprise scale
    • Proven experience operating at multi-terabyte scale (5+ TB) with measurable improvements in performance, reliability, and cost
    • Strong expertise with Azure data technologies
    • Advanced SQL skills, including query optimization, indexing, partitioning, CDC, caching, and schema versioning
    • Experience designing audit-ready, SLA-driven data platforms
    • Strong background in ERP/SAP data integrations, particularly financial data
    • Bachelor's degree

    PREFERRED SKILLS:
    • Power BI performance modeling (RLS, composite models, incremental refresh, DAX optimization)
    • Modular monolith/microservices experience
    • Semantic tech (ontology/knowledge graphs), vector stores, and agentic AI orchestration experience

    Must be authorized to work in the US. Sponsorships are not available.
    $100k-137k yearly est. 3d ago
  • Senior Data Architect

    Mtech Systems 3.5 company rating

    Data engineer job in Dunwoody, GA

    At MTech Systems, our company mission is to increase yield in protein production to help feed the growing world population without compromising animal welfare or damaging the planet. We aim to create software that delivers real-time data to the entire supply chain, allowing producers to get better insight into what is happening on their farms and what they can do to responsibly improve production.

    MTech Systems is a prominent provider of tools for managing performance in live animal protein production. For over 30 years, MTech Systems has provided cutting-edge enterprise data solutions for all aspects of the live poultry operations cycle. We provide our customers with solutions in Business Intelligence, Live Production Accounting, Production Planning, and Remote Data Management, all through an integrated system. Our applications can currently be found running businesses on six continents in over 50 countries. MTech has built an international reputation for equipping our customers with the power to utilize comprehensive data to maximize profitability.

    With over 250 employees globally, MTech Systems currently has main offices in Mexico, the United States, and Brazil, with additional resources in key markets around the world. MTech Systems USA's headquarters is based in Atlanta, Georgia and has approximately 90 team members in a casual, collaborative environment. Our work culture here is based on a commitment to helping our clients feed the world, resulting in a flexible and rewarding atmosphere. We are committed to maintaining a work culture that enhances collaboration, provides robust development tools, offers training programs, and allows for direct access to senior and executive management.

    Job Summary
    MTech builds customer-facing SaaS and analytics products used by global enterprise customers. You will own the database/data platform architecture that powers these products, driving performance, reliability, auditability, and cost efficiency at multi-tenant, multi-terabyte scale. Success is measured in hard outcomes: fewer P1s/support tickets, faster queries, bullet-proof ERP/SAP integrations, SLO compliance tied to SLAs, and audit-ready evidence.

    Responsibilities and Duties
    Architecture & Design
    • Own the end-to-end data architecture for enterprise SaaS (OLTP + analytical serving), including Azure SQL/MI, Databricks/Delta Lake, ADLS, Synapse/Fabric, and collaboration on Power BI semantic models (RLS, performance).
    • Define and implement Information Lifecycle Management (ILM): hot/warm/cold tiers, 2-year OLTP retention, archive/nearline, and a BI mirror that enables rich analytics without impacting production workloads.
    • Engineer ERP/SAP financial interfaces for idempotency, reconciliation, and traceability; design rollback/de-dup strategies and financial journal integrity controls.
    • Govern schema evolution/DbVersions to prevent cross-customer regressions while achieving performance gains.
    • Establish data SLOs (freshness, latency, correctness) mapped to customer SLAs; instrument monitoring/alerting and drive continuous improvement. (An illustrative freshness check follows this listing.)

    Operations & Observability
    • Build observability for pipelines and interfaces (logs/metrics/traces, lineage, data quality gates) and correlate application telemetry (e.g., Stackify/Retrace) with DB performance for rapid root-cause analysis.
    • Create incident playbooks (reprocess, reconcile, rollback) and drive MTTR down across data incidents.

    Collaboration & Leadership
    • Lead the DBA/DB engineering function (standards, reviews, capacity planning, HA/DR, on-call, performance/availability SLOs) and mentor data engineers.
    • Partner with Product/Projects/BI to shape domain models that meet demanding customer reporting (e.g., Tyson Matrix) and planning needs without compromising OLTP.

    Required Qualifications
    • 15+ years in data/database engineering; 5-8+ years owning data/DB architecture for customer-facing SaaS/analytics at enterprise scale.
    • Proven results at multi-terabyte scale (≥5 TB) with measurable improvements (P1 reduction, MTTR, query latency, cost/performance).
    • Expertise in Azure SQL/MI, Databricks/Delta Lake, ADLS, Synapse/Fabric; deep SQL, partitioning/indexing, query plans, CDC, caching, schema versioning.
    • Audit & SLA readiness: implemented controls/evidence to satisfy SOC 1 Type 2 (or equivalent) and run environments to SLOs linked to SLAs.
    • ERP/SAP data interface craftsmanship: idempotent, reconciled, observable financial integrations.
    • ILM/archival and BI mirror design for queryable archives/analytics without OLTP impact.

    Preferred Skills
    • Power BI performance modeling (RLS, composite models, incremental refresh, DAX optimization).
    • Modular monolith/microservices experience (plus, not required).
    • Semantic tech (ontology/knowledge graphs), vector stores, and agentic AI orchestration experience (advantage, not required).

    EEO Statement
    Integrated into our shared values is MTech's commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. MTech aims to maintain a global, inclusive workplace where every person is regarded fairly, appreciated for their uniqueness, advanced according to their accomplishments, and encouraged to fulfill their highest potential. We believe in understanding and respecting differences among all people. Every individual at MTech has an ongoing responsibility to respect and support a globally diverse environment.
    $92k-123k yearly est. 22h ago
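    A minimal sketch of the freshness SLO check described above: compare the newest load timestamp in the BI mirror against a lag budget and page if it is exceeded. The connection string, table, column, threshold, and alert hook are all invented for illustration.

      import os
      from datetime import datetime, timedelta, timezone

      import pyodbc  # assuming an Azure SQL source; any DB-API driver works similarly

      def alert(message: str) -> None:
          print("ALERT:", message)  # placeholder for the real paging/alerting hook

      conn = pyodbc.connect(os.environ["SQL_CONN"])  # assumed to be configured in the environment
      newest = conn.execute("SELECT MAX(load_ts) FROM bi_mirror.orders").fetchone()[0]

      budget = timedelta(minutes=30)  # hypothetical: mirror must lag production by < 30 minutes
      lag = datetime.now(timezone.utc) - newest.replace(tzinfo=timezone.utc)
      if lag > budget:
          alert(f"BI mirror stale by {lag}")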
  • Senior Angular Developer

    Firstpro 360 4.5 company rating

    Data engineer job in Norcross, GA

    About the Role
    We're seeking a Senior Angular Developer to lead the migration of our applications from Angular 14 to Angular 19+ and architect state management solutions using NgRx.

    Key Responsibilities
    • Lead Angular 14 to 19+ migration initiatives
    • Design and implement NgRx state management (Store, Effects, Entity)
    • Build scalable, performant enterprise Angular applications
    • Mentor developers on Angular best practices and reactive programming
    • Conduct code reviews and establish coding standards

    Required Skills
    • 5+ years Angular experience (versions 2+)
    • Hands-on experience migrating Angular applications across major versions
    • Expert knowledge of NgRx state management patterns
    • Strong proficiency in TypeScript and RxJS
    • Experience with modern Angular features (standalone components, signals)
    • Unit and E2E testing experience (Jest, Cypress, or similar)

    Nice to Have
    • Angular 19 feature experience
    • Angular Material or component library expertise
    • Server-side rendering (Angular Universal)
    • Micro-frontend architecture knowledge
    $91k-118k yearly est. 3d ago
  • Software Engineer

    Insight Global

    Data engineer job in Atlanta, GA

    DevOps Software Engineer
    Type: Hybrid in Atlanta, GA - onsite 4x a month
    Duration: 6-month contract to hire

    About the Role
    The DevOps Software Engineer works within the client's Software Development group as part of an Agile Scrum or Kanban team. In this role, you will design, code, test, automate, and support high-performing software and delivery pipelines. You serve as a technical expert on the systems you build and maintain, collaborating with teammates, business partners, and stakeholders to deliver scalable solutions aligned with the client's technical vision.

    Software Engineering Responsibilities
    • Build, maintain, and optimize CI/CD pipelines.
    • Manage branching and release strategies (feature, release, hotfix, etc.).
    • Implement automation across build, deployment, and operational processes.
    • Design, develop, test, and document applications based on business requirements.
    • Build standalone and multi-tiered applications using modern development practices.
    • Deliver web and desktop applications for assigned projects.

    Required Qualifications
    • 3+ years of experience in software development (coding, debugging, testing, troubleshooting).
    • Hands-on experience building and maintaining CI/CD pipelines.
    • Experience managing branching and release strategies.
    • Strong focus on automation across development and operational workflows.
    • Azure DevOps, GitHub, Git, Visual Studio, or similar tooling.
    • Web Services, JSON, XML, CSS, HTML.
    • C#, JavaScript, SQL.

    Contract/Contract-to-Hire Roles:
    Compensation: $40/hr to $50/hr. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
    Conversion Salary: $95,000 - $115,000
    $95k-115k yearly 2d ago
  • UI/UX Engineer

    Charter Global 4.0 company rating

    Data engineer job in Atlanta, GA

    📅 Contract: 6+ Months

    About the Role:
    We're seeking a UI/UX Engineer to design and develop intuitive, user-centric web interfaces that deliver exceptional digital experiences. This role blends creative design, front-end development, and data-driven insights to optimize usability and performance.

    Key Responsibilities:
    • Design and implement responsive, accessible web interfaces using modern frameworks (React, Angular, Vue).
    • Collaborate with business and technical teams to translate requirements into functional UI/UX solutions.
    • Utilize analytics tools (Google Analytics, Tag Manager, Hotjar) to track user behavior and improve engagement.
    • Conduct A/B testing and generate insights for UX enhancements and conversion optimization.
    • Ensure compliance with accessibility standards (WCAG) and SEO best practices.

    Requirements:
    • 5+ years in UI/UX design and front-end development.
    • Proficiency in HTML5, CSS3, JavaScript, and design tools (Figma, Adobe XD, Sketch).
    • Experience with CMS platforms, version control (Git), and Agile methodologies.
    • Strong understanding of analytics, accessibility, and user-centered design principles.
    • Bachelor's degree in Design, HCI, or a related field.

    Regards,
    Ashish Lal | Talent Acquisition Manager
    Charter Global Inc | *****************************
    Email: *************************
    LinkedIn: ASHISH K LAL | LinkedIn
    One Glenlake Parkway | Suite 525 | Atlanta, GA 30328
    $67k-85k yearly est. 22h ago

Learn more about data engineer jobs

How much does a data engineer earn in Riverdale, GA?

The average data engineer in Riverdale, GA earns between $65,000 and $114,000 annually. This compares with the national range for data engineers of $80,000 to $149,000.

Average data engineer salary in Riverdale, GA

$86,000