
Data engineer jobs in Roswell, GA

1,684 jobs
  • Senior Data Engineer

    Quantum World Technologies Inc. (4.2 company rating)

    Data engineer job in Alpharetta, GA

    Role: Senior Data Engineer. Job Type: Full-time only. Visa: Independent visa holders only.
    Job Description: Hands-on experience in building and optimizing data processing applications using Java and Python, ensuring high performance and scalability of data pipelines. Advanced knowledge of Apache Spark to handle large-scale data processing tasks, including the development and optimization of complex Spark applications for efficient data transformation (a hedged PySpark sketch follows this listing). Comprehensive understanding of Hadoop, HDFS, and cloud Big Data technologies, with hands-on experience in managing and processing vast amounts of data effectively.
    $80k-110k yearly est. 1d ago
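
    The Spark transformation work this listing describes can be pictured with a minimal, hedged PySpark sketch. The input file, column names, and output path below are hypothetical illustrations, not details from the posting.

        # Minimal PySpark aggregation sketch (hypothetical input, columns, and output).
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

        # Read a CSV of events; "events.csv" and its columns are illustrative.
        df = spark.read.option("header", True).csv("events.csv")

        # Count events per calendar day, a typical transform-and-aggregate step.
        daily = (
            df.withColumn("event_date", F.to_date("event_ts"))
              .groupBy("event_date")
              .agg(F.count("*").alias("event_count"))
        )

        daily.write.mode("overwrite").parquet("daily_counts/")
        spark.stop()
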
  • Data Scientist

    Coforge

    Data engineer job in Atlanta, GA

    Role: Data Scientist. Mode of Hire: Full Time.
    Key Responsibilities:
    • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
    • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
    • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow (see the sketch after this listing).
    • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
    • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
    • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration.
    • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.
    Skills Required:
    • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
    • 5+ years of experience in a data science, ML engineering, or analytics role.
    • Strong SQL, Python, and ML programming skills.
    • Experience with Azure Cloud, Databricks, and/or Snowflake.
    • Experience building and deploying machine learning models in production environments, including hands-on experience with Databricks, SparkML, and MLflow integration.
    • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
    • Experience with tools such as Git, MLflow, Docker, and workflow schedulers.
    • Ability to communicate complex technical work to non-technical stakeholders.
    • Experience with scalable model training environments and distributed computing.
    Preferred Qualifications:
    • Master's degree in a quantitative or technical discipline.
    • Experience in financial services, fintech, or enterprise B2B analytics.
    • Knowledge of A/B testing, causal inference, and statistical experimentation.
    • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 2d ago
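
    As a hedged illustration of the model-building and MLflow tracking workflow this listing names, here is a minimal scikit-learn plus MLflow sketch; the toy dataset and metric are stand-ins, not details from the posting.

        # Hedged sketch: train a classifier and log it with MLflow.
        import mlflow
        import mlflow.sklearn
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Public toy dataset as a stand-in for real business data.
        X, y = load_breast_cancer(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

        with mlflow.start_run():
            model = GradientBoostingClassifier().fit(X_train, y_train)
            auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
            mlflow.log_metric("test_auc", auc)        # track the validation metric
            mlflow.sklearn.log_model(model, "model")  # version the fitted model
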
  • ETL Databricks Data Engineer

    Capgemini (4.5 company rating)

    Data engineer job in Atlanta, GA

    We are seeking an ETL Databricks Data Engineer to join our team and help build robust, scalable data solutions. This role involves designing and maintaining data pipelines, optimizing ETL processes, and collaborating with cross-functional teams to ensure data integrity and accessibility.
    Key Responsibilities:
    • Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
    • Create and optimize Python scripts for data transformation, automation, and integration tasks.
    • Develop and fine-tune SQL queries for data extraction, transformation, and loading.
    • Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
    • Ensure data integrity, security, and compliance with organizational standards.
    • Participate in code reviews and contribute to best practices in data engineering.
    Required Skills & Qualifications:
    • 5 years of professional experience in data engineering or related roles.
    • Strong proficiency in Databricks (including Spark-based data processing).
    • Advanced programming skills in Python.
    • Expertise in SQL for querying and data modeling.
    • Familiarity with Azure Cloud and Azure Data Factory (ADF).
    • Understanding of ETL frameworks, data governance, and performance tuning.
    • Knowledge of CI/CD practices and version control tools (e.g., Git).
    • Exposure to BI tools such as Power BI or Tableau for data visualization.
    Life at Capgemini: Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
    • Flexible work
    • Healthcare including dental, vision, mental health, and well-being programs
    • Financial well-being programs such as 401(k) and Employee Share Ownership Plan
    • Paid time off and paid holidays
    • Paid parental leave
    • Family building benefits like adoption assistance, surrogacy, and cryopreservation
    • Social well-being benefits like subsidized back-up child/elder care and tutoring
    • Mentoring, coaching, and learning programs
    • Employee Resource Groups
    • Disaster relief
    Disclaimer: Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law. This is a general description of the duties, responsibilities, and qualifications required for this position. Physical, mental, sensory, or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
    $77k-99k yearly est. 1d ago
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data engineer job in Atlanta, GA

    Job Description: GCP Data Engineer. Rate: $50/hr on W2 (no C2C).
    We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices.
    Responsibilities:
    • Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform.
    • Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer.
    • Write efficient and maintainable Python code to support data ingestion, transformation, and automation.
    • Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling (see the sketch after this listing).
    • Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions.
    • Ensure data quality, governance, and reliability across all pipelines.
    • Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets.
    • Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.
    Must-Have Skills:
    • Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning).
    • Proficiency in Python for data engineering and pipeline automation.
    • Hands-on experience with Cloud Data Fusion for ETL/ELT development.
    • Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, Pub/Sub.
    • Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices.
    • Solid understanding of cloud data architecture and distributed processing.
    Good-to-Have Skills:
    • Experience with Vertex AI for ML pipeline integration or model deployment.
    • Familiarity with Dataproc (Spark/Hadoop) for large-scale processing.
    • Knowledge of CI/CD workflows, Git, and DevOps best practices.
    • Experience with Cloud Logging/Monitoring tools.
    $50 hourly 1d ago
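
    For context on the BigQuery development this listing centers on, here is a minimal, hedged sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical.

        # Hedged sketch with the google-cloud-bigquery client library.
        # Project, dataset, and table names below are hypothetical.
        from google.cloud import bigquery

        client = bigquery.Client()  # uses application-default credentials

        sql = """
            SELECT order_date, SUM(amount) AS total_amount
            FROM `my_project.sales.orders`
            GROUP BY order_date
            ORDER BY order_date
        """

        for row in client.query(sql).result():  # runs the job and waits for rows
            print(row["order_date"], row["total_amount"])
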
  • Data Architect

    Agile Resources, Inc.

    Data engineer job in Atlanta, GA

    Note: 100% onsite required for the first six months. Employment Type: Permanent / Direct Hire / Full-time. Salary: up to $180,000 (depending on experience) + bonus.
    The Role: We're seeking a highly skilled and hands-on Data Architect to lead the design, implementation, and ongoing evolution of our enterprise-grade data systems. This role is crucial for building scalable, secure, and intelligent data infrastructure that supports core analytics, operational excellence, and future AI initiatives. Success requires a seasoned technologist who can seamlessly integrate cloud-native services with traditional data warehousing to create a modern, unified data platform.
    What You'll Do:
    • Architecture & Strategy: Lead the design and implementation of modern data platforms, including Data Lakes, Data Warehouses, and Lakehouse architectures, to enable a single source of truth for the enterprise.
    • Data Modeling & Integration: Architect unified data models that support both modular monoliths and microservices-based platforms. Design and optimize high-volume, low-latency streaming/batch ETL/ELT pipelines.
    • Technical Leadership: Drive the technical execution across the entire data lifecycle. Build and optimize core data processing scripts using Spark and Python.
    • Governance & Quality: Define and enforce standards for data governance, metadata management, and data observability across distributed systems. Implement automated data lineage tracking, schema evolution, and data quality monitoring.
    • Cloud Infrastructure: Configure and manage cloud-native data services, including core data storage and event ingestion infrastructure.
    Required Experience:
    • 10+ years of proven experience in enterprise data architecture and engineering.
    • Core Platform Expertise: Strong, hands-on experience with the Azure data ecosystem, including Azure Data Lake Storage (ADLS), Azure Synapse Analytics (or an equivalent cloud DW), and Azure Purview (or an equivalent data catalog).
    • Processing: Deep expertise in Databricks (or Apache Spark) for ETL/ELT pipeline implementation, using Delta Lake and SQL Server (or an equivalent RDBMS).
    • Coding & Scripting: Strong proficiency in Python, Spark, and advanced SQL.
    • Data Governance: Hands-on experience implementing data lineage tracking and data quality monitoring (e.g., using Great Expectations or dbt).
    Preferred Skills:
    • Semantic Technologies: Hands-on experience developing ontology frameworks using OWL, RDF, and SPARQL to enable semantic interoperability.
    • Advanced AI Data: Experience integrating structured/unstructured data into knowledge graphs and vector databases.
    • Streaming/Telemetry: Experience developing and maintaining semantic telemetry pipelines using services like Azure Event Hubs or Kafka.
    • Emerging Concepts: Exposure to linked data ecosystems, data mesh, or data fabric concepts.
    $180k yearly 5d ago
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data engineer job in Atlanta, GA

    Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
    • Address specific business challenges, integrate processes, and create great experiences
    • Connect our work to shared goals that propel WestRock forward in the Digital Age
    • Imagine how technology can advance the way we work by using disruptive technology
    We are looking for forward-thinking technologists who can accelerate our focus areas, such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology. As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.
    How you will impact WestRock:
    • Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
    • Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs (see the Airflow sketch after this listing).
    • Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
    • Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
    • Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
    • Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.
    What you need to succeed:
    • Bachelor's degree in computer science or a similar field
    • At least 6 years of strong data engineering experience
    • Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development
    • Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines
    • Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing
    • Proven experience deploying data solutions on AWS or Azure, with a strong understanding of cloud-native services
    • Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment
    • Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment
    • Good communication skills, with the ability to collaborate effectively across technical and non-technical teams
    • Good analytical and troubleshooting abilities
    What we offer:
    • Corporate culture based on integrity, respect, accountability, and excellence
    • Comprehensive training with numerous learning and development opportunities
    • An attractive salary reflecting skills, competencies, and potential
    • A career with a global packaging company where Sustainability, Safety, and Inclusion are business drivers and foundational elements of the daily work
    $75k-100k yearly est. 4d ago
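
    The Airflow orchestration mentioned above can be pictured as a minimal two-task DAG. This sketch assumes Airflow 2.x; the DAG name and task logic are illustrative placeholders, not details from the posting.

        # Hedged sketch of a two-task daily DAG (Airflow 2.x style).
        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def extract():
            print("pull operational data")   # placeholder for real extraction logic

        def transform():
            print("clean and aggregate")     # placeholder for real transform logic

        with DAG(
            dag_id="ops_data_pipeline",      # hypothetical DAG name
            start_date=datetime(2024, 1, 1),
            schedule="@daily",               # 'schedule' argument per Airflow 2.4+
            catchup=False,
        ) as dag:
            extract_task = PythonOperator(task_id="extract", python_callable=extract)
            transform_task = PythonOperator(task_id="transform", python_callable=transform)
            extract_task >> transform_task   # run transform after extract
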
  • Data Engineer - OrcaWorks AI

    Orcaworks.Ai

    Data engineer job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred)
    About OrcaWorks AI: At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.
    Key Responsibilities:
    • Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models.
    • Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa.
    • Implement data validation, transformation, and storage solutions using modern ETL frameworks.
    • Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
    • Support database management, SQLModel, and data governance practices across services.
    Required Skills & Qualifications:
    • Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
    • Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
    • Hands-on experience with Airbyte, Prefect, and dbt.
    • Familiarity with search and indexing systems like Vespa or ElasticSearch.
    • Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
    • Strong understanding of data security and applied AI workflows.
    $75k-100k yearly est. 3d ago
  • Lead Azure Databricks Engineer

    Syren

    Data engineer job in Atlanta, GA

    Individual contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time.
    An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.
    Must-Have Skills:
    • Experience migrating from other platforms to Databricks
    • Proficiency in Databricks and Azure Cloud, including Databricks Asset Bundles, with a holistic vision of the data strategy
    • Proficiency in data streaming and data modeling
    • Experience architecting at least two large-scale big data projects
    • Strong understanding of data scaling and its complexities
    • Data archiving and purging mechanisms
    Job Requirements:
    • Degree in computer science or equivalent preferred
    • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure Cloud and Databricks
    • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques
    • Working knowledge and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing
    • Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP
    • Experience with debugging and performance tuning in distributed environments
    • Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
    • Experience dealing with structured and unstructured data
    • Must have Python and PySpark experience
    • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 2d ago
  • Data Engineer w/ Python & SQL

    A2C (4.7 company rating)

    Data engineer job in Alpharetta, GA

    We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.
    Must-Haves:
    • A problem solver with the ability to analyze and research complex issues and propose actionable solutions and strategies.
    • Solid understanding of and hands-on experience with major cloud platforms.
    • Experience designing and implementing data pipelines.
    • Strong Python, SQL, and GCP skills.
    Responsibilities:
    • Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer.
    • Develop and tune BigQuery models, queries, and ingestion processes.
    • Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
    • Ensure data governance, security, and reliable pipeline operations.
    • Collaborate with data science teams and support Vertex AI-based ML workflows.
    Requirements:
    • 3-5+ years of data engineering experience.
    • Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
    • Solid ETL/ELT and data modeling experience.
    Nice-to-Have: GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
    $77k-106k yearly est. 2d ago
  • Data Engineer

    Synechron (4.4 company rating)

    Data engineer job in Alpharetta, GA

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.
    Our Challenge: Join our data-driven enterprise and lead the design of scalable and high-performance big data solutions. You will craft architectures that handle vast volumes of data, optimize pipeline performance, and incorporate advanced governance and AI-powered processing to unlock actionable insights.
    Additional Information: The base salary for this position varies based on geography and other factors. In accordance with law, the base salary for this role if filled within Alpharetta, GA is $120K-125K/year plus benefits (see below).
    The Role - Responsibilities:
    • Design, build, and maintain scalable big data architectures supporting enterprise analytics and operational needs.
    • Develop, implement, and optimize data pipelines using Apache Airflow, Databricks, and other relevant technologies to ensure reliable data flow and process automation.
    • Manage and enhance data workflows for batch and real-time processing, ensuring efficiency and scalability.
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into robust data solutions.
    • Implement data governance, security, and compliance best practices to protect sensitive information.
    • Explore integrating AI/ML techniques into data pipelines, leveraging Databricks and other AI tools for predictive analytics and automation.
    • Develop monitoring dashboards and alert systems to ensure pipeline health and performance.
    • Stay current with emerging big data and cloud technologies, recommending best practices to improve system performance and scalability.
    Requirements:
    • 5+ years of proven experience in big data architecture design, including distributed storage and processing frameworks such as Hadoop, Spark, and Databricks.
    • Strong expertise in performance tuning for large-scale data systems.
    • Hands-on experience with Apache Airflow for workflow orchestration.
    • Proficiency in SQL for managing and querying large databases.
    • Extensive experience with Python for scripting, automation, and data processing workflows.
    • Experience working with cloud platforms (Azure, AWS, or GCP) preferred.
    Preferred, but not required:
    • Deep understanding of data governance and security frameworks to safeguard sensitive data.
    • Experience integrating AI/ML models into data pipelines using Databricks MLflow or similar tools.
    • Knowledge of containerization (Docker, Kubernetes) is a plus.
    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity and paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (varies by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5,000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.
    $120k-125k yearly 2d ago
  • Technical Data Architect

    Oldcastle Infrastructure (4.3 company rating)

    Data engineer job in Atlanta, GA

    Exempt. Oldcastle Infrastructure™, a CRH company, is the leading provider of utility infrastructure solutions for the water, energy, and communications markets throughout North America. We're more than just a manufacturer of precast concrete, polymer concrete, or plastic products. We're a trusted and strategic partner to engineers, contractors, distributors, specifiers, and more. With our network of more than 80 manufacturing facilities and more than 4,000 employees, we're leading the industry with innovation and a safety-first mindset.
    Job Summary: Oldcastle Infrastructure (OI), as part of CRH's Infrastructure Products Group (IPG), is a global manufacturing leader of utility infrastructure products. Our goal is to be the most efficient producer of engineered systems and our customers' strategic partner of choice. A crucial part of OI's journey is the investment in new digital tools, including a new ERP. With a modern, common platform, OI will unlock the benefits of its scale, deliver a better customer experience, and build a foundation for continuous process improvement. The Technical Data Architect is a senior role accountable for defining, governing, and delivering the data architecture strategy required to migrate enterprise data from legacy systems into SAP S/4HANA and Salesforce CPQ. This role ensures that data models, migration approaches, and governance structures support end-to-end business processes and regulatory compliance, while delivering high-quality, reconciled, and auditable data into the template. The architect will partner with the business data management team, program management office, functional process owners, and system integrators to ensure a seamless transition with minimal disruption to operations.
    Job Location: This role will work hybrid out of our office in the Sandy Springs, GA area.
    Job Responsibilities:
    Data Architecture & Modeling:
    • Design target SAP S/4HANA data models and mapping rules from legacy systems.
    • Validate functional data alignment for Finance (FI/CO), Sales & Distribution (SD), Materials Management (MM), and Production Planning (PP).
    • Leverage the CRH IPG Data Dictionary, data management, and ETL migration tools to support the cleansing and data migration processes.
    • Provide technical capabilities to support data quality and data reconciliation for master data subjects.
    ERP Data Migration:
    • Collaborate with the business master data team on the legacy data migration by supporting the technical requirements for customers, vendors, BOMs, products, and other master data subjects.
    • Define extraction, transformation, load, and reconciliation processes, with automation where possible.
    Master Data Management:
    • Partner with the business master data team to align on the governance model, ownership, and ongoing stewardship processes for core data subjects.
    • Define and support the data migration testing strategy, including mock loads, trial conversions, and dress rehearsals.
    • Partner with the business master data team and users for validation and sign-off at each migration stage.
    • Design cutover sequencing for data loads, ensuring minimal downtime.
    • Coordinate with functional leads and the PMO on entry/exit criteria and contingency planning for go-live events related to data quality readiness.
    Job Requirements:
    • 5-8+ years of experience working in data architecture in the manufacturing industry
    • Proven track record in delivering large-scale data migrations (CPQ, OTC, Finance, Supply Chain, Manufacturing P2P)
    • Hands-on experience with ETL/migration tools (SAP Data Services, Informatica, etc.)
    • Strong knowledge of data governance, master data management, and audit/compliance processes
    • Process improvement knowledge gained while working in an organization undergoing a significant operational culture shift
    • Creation and improvement of processes that demonstrate ease of doing business internally and externally
    • Development and implementation of process adherence and data quality adoption metrics
    • Comfortable operating in an environment of ambiguity and fast change
    • Strong interpersonal and organizational influencing skills
    • Ability to communicate in a simple, articulate, thoughtful manner to varying audience levels
    • Innovative spirit to work cross-functionally in developing improvement ideas
    • A pleasant, likeable manner while accomplishing challenging results
    • Bachelor's degree in computer science or a technical related discipline
    • SAP technical certifications in Master Data/Data Services/MDG (preferred)
    • PMP certification (preferred)
    What CRH Offers You:
    • Highly competitive base pay
    • Comprehensive medical, dental, and disability benefits programs
    • Group retirement savings program
    • Health and wellness programs
    • An inclusive culture that values opportunity for growth, development, and internal promotion
    About CRH: CRH has a long and proud heritage. We are a collection of hundreds of family businesses, regional companies, and large enterprises that together form the CRH family. CRH operates in a decentralized, diversified structure that allows you to work in a small company environment while having the career opportunities of a large international organization. If you're up for a rewarding challenge, we invite you to take the first step and apply today! Once you click apply now, you will be brought to our official employment application. Please complete your online profile and it will be sent to the hiring manager. Our system allows you to view and track your status 24 hours a day. Thank you for your interest!
    Oldcastle Infrastructure, a CRH Company, is an Affirmative Action and Equal Opportunity Employer. EOE/Vet/Disability. CRH is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, status as a protected veteran, or any other characteristic protected under applicable federal, state, or local law.
    $84k-113k yearly est. 5d ago
  • Senior Data Architect

    Mtech Systems (3.5 company rating)

    Data engineer job in Dunwoody, GA

    At MTech Systems, our company mission is to increase yield in protein production to help feed the growing world population without compromising animal welfare or damaging the planet. We aim to create software that delivers real-time data to the entire supply chain, allowing producers to get better insight into what is happening on their farms and what they can do to responsibly improve production.
    MTech Systems is a prominent provider of tools for managing performance in live animal protein production. For over 30 years, MTech Systems has provided cutting-edge enterprise data solutions for all aspects of the live poultry operations cycle. We provide our customers with solutions in Business Intelligence, Live Production Accounting, Production Planning, and Remote Data Management, all through an integrated system. Our applications can currently be found running businesses on six continents in over 50 countries. MTech has built an international reputation for equipping our customers with the power to utilize comprehensive data to maximize profitability. With over 250 employees globally, MTech Systems currently has main offices in Mexico, the United States, and Brazil, with additional resources in key markets around the world. MTech Systems USA is headquartered in Atlanta, Georgia, with approximately 90 team members in a casual, collaborative environment. Our work culture is based on a commitment to helping our clients feed the world, resulting in a flexible and rewarding atmosphere. We are committed to maintaining a work culture that enhances collaboration, provides robust development tools, offers training programs, and allows for direct access to senior and executive management.
    Job Summary: MTech builds customer-facing SaaS and analytics products used by global enterprise customers. You will own the database/data platform architecture that powers these products, driving performance, reliability, auditability, and cost efficiency at multi-tenant, multi-terabyte scale. Success is measured in hard outcomes: fewer P1s/support tickets, faster queries, bullet-proof ERP/SAP integrations, SLO compliance tied to SLAs, and audit-ready evidence.
    Responsibilities and Duties
    Architecture & Design:
    • Own the end-to-end data architecture for enterprise SaaS (OLTP plus analytical serving), including Azure SQL/MI, Databricks/Delta Lake, ADLS, Synapse/Fabric, and collaboration on Power BI semantic models (RLS, performance).
    • Define and implement Information Lifecycle Management (ILM): hot/warm/cold tiers, 2-year OLTP retention, archive/nearline, and a BI mirror that enables rich analytics without impacting production workloads.
    • Engineer ERP/SAP financial interfaces for idempotency, reconciliation, and traceability; design rollback/de-dup strategies and financial journal integrity controls.
    • Govern schema evolution/DbVersions to prevent cross-customer regressions while achieving performance gains.
    • Establish data SLOs (freshness, latency, correctness) mapped to customer SLAs; instrument monitoring/alerting and drive continuous improvement.
    Operations & Observability:
    • Build observability for pipelines and interfaces (logs/metrics/traces, lineage, data quality gates) and correlate application telemetry (e.g., Stackify/Retrace) with DB performance for rapid root-cause analysis.
    • Create incident playbooks (reprocess, reconcile, rollback) and drive MTTR down across data incidents.
    Collaboration & Leadership:
    • Lead the DBA/DB engineering function (standards, reviews, capacity planning, HA/DR, on-call, performance/availability SLOs) and mentor data engineers.
    • Partner with Product/Projects/BI to shape domain models that meet demanding customer reporting (e.g., Tyson Matrix) and planning needs without compromising OLTP.
    Required Qualifications:
    • 15+ years in data/database engineering; 5-8+ years owning data/DB architecture for customer-facing SaaS/analytics at enterprise scale.
    • Proven results at multi-terabyte scale (≥5 TB) with measurable improvements (P1 reduction, MTTR, query latency, cost/performance).
    • Expertise in Azure SQL/MI, Databricks/Delta Lake, ADLS, Synapse/Fabric; deep SQL, partitioning/indexing, query plans, CDC, caching, schema versioning.
    • Audit & SLA readiness: implemented controls/evidence to satisfy SOC 1 Type 2 (or equivalent) and run environments to SLOs linked to SLAs.
    • ERP/SAP data interface craftsmanship: idempotent, reconciled, observable financial integrations.
    • ILM/archival and BI mirror design for queryable archives/analytics without OLTP impact.
    Preferred Skills:
    • Power BI performance modeling (RLS, composite models, incremental refresh, DAX optimization).
    • Modular monolith/microservices experience (a plus, not required).
    • Semantic tech (ontology/knowledge graphs), vector stores, and agentic AI orchestration experience (an advantage, not required).
    EEO Statement: Integrated into our shared values is MTech's commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. MTech aims to maintain a global inclusive workplace where every person is regarded fairly, appreciated for their uniqueness, advanced according to their accomplishments, and encouraged to fulfill their highest potential. We believe in understanding and respecting differences among all people. Every individual at MTech has an ongoing responsibility to respect and support a globally diverse environment.
    $92k-123k yearly est. 5d ago
  • Physical AI Engineering & Strategy, Senior Consultant

    TSR Consulting (4.9 company rating)

    Data engineer job in Alpharetta, GA

    Our client, a leading financial services company, is hiring a Physical AI Engineering & Strategy Senior Consultant on a long-term contract basis. Pay is $90-$100/hr on W2, DOE. Job ID: 83255.
    This is a critical requirement for a unique skill set that is not currently available internally. Specifically, we are seeking an AI specialist with expertise in programming robot dogs and humanoids from the Chinese manufacturer Unitree, and proficiency with NVIDIA Omniverse. Merely having experience with robots will not be sufficient for this role. The team is in the process of establishing various AI labs, primarily located in Alpharetta, though other locations may be considered as well. This position is intended to be a permanent role, and we are looking for a long-term commitment.
    The Opportunity: Our Artificial Intelligence and Data team is at the forefront of integrating cutting-edge technology with physical systems. As a Senior Consultant in Physical AI, you will collaborate closely with clients and interdisciplinary teams to develop innovative solutions that leverage the power of robotics, software, and open-source technologies. This role offers a unique opportunity to engage in continuous learning and research, driving advancements in how businesses utilize AI across various industries. You will be instrumental in shaping the future of Physical AI, with the chance to influence new methodologies and technologies that address complex challenges.
    Key Responsibilities: In this role, you will be responsible for delivering state-of-the-art Physical AI solutions. Your key responsibilities will include:
    • Developing and implementing scalable demos that highlight the integration of software and hardware in Physical AI applications.
    • Showcasing innovations and speaking at client-hosted events, including innovation summits and technology showcases.
    • Conducting analytics and monitoring market trends to inform solution development, identify emerging opportunities, and position the team's capabilities strategically within the evolving Physical AI landscape.
    • Utilizing strong knowledge of the NVIDIA ecosystem to design and execute solutions that meet stakeholder needs.
    • Leading development in robotics, focusing on kinetics, perception, and decision-making to enhance operational efficiency and safety.
    • Collaborating with a variety of stakeholders to understand requirements and provide technical leadership throughout the project lifecycle.
    Skills and Attributes for Success: To thrive in this role, you will need a combination of technical expertise and strong interpersonal skills. Key attributes include:
    • A passion for innovation and a proactive approach to problem-solving.
    • Excellent communication skills to effectively convey complex concepts to diverse audiences.
    • Strong analytical skills to guide project direction and decision-making.
    • Ability to build and maintain relationships with clients and team members, fostering a collaborative environment.
    • A commitment to delivering high-quality client services and continuous improvement.
    To Qualify for the Role, You Must Have:
    • A Bachelor's degree in a relevant field (e.g., Robotics, Computer Science, Mechanical Engineering, Electrical Engineering).
    • 3-6 years of full-time working experience in Robotics, Digital Twin, and Computer Vision/Deep Learning/Reinforcement Learning.
    • Proven experience in developing and implementing scalable AI systems that integrate software and hardware components effectively.
    • Strong proficiency in programming languages such as Python, C++, or Java, with hands-on experience in robotics frameworks (e.g., ROS) and simulation environments (e.g., Isaac Sim).
    • Extensive knowledge of the NVIDIA ecosystem, including experience with their offerings and implementation strategies for robotics and AI applications.
    • Solid understanding of robotic systems, including kinematics, dynamics, control algorithms, and sensor integration (e.g., LIDAR, cameras).
    • Familiarity with machine learning techniques and algorithms, particularly in the context of robotics, automation, and decision-making processes.
    • Experience designing, building, and maintaining robotics systems and digital twin models, with a focus on real-time data integration and simulation.
    • Proficiency in using data manipulation and analysis tools (e.g., Pandas, NumPy) to derive insights from sensor data and simulations, along with experience in popular ML frameworks such as TensorFlow or PyTorch.
    • Experience with DevOps tools (e.g., Git, Azure DevOps) and Agile methodologies (e.g., Jira) to develop and deploy analytical solutions with multiple features, pipelines, and releases.
    • Excellent communication skills to clearly convey findings, recommendations, and opportunities for improvement to clients and team members.
    • Willingness to travel as needed to engage with clients and stakeholders.
    Ideally, You'll Also Have:
    • An advanced degree (Master's or Ph.D.) in Robotics, Computer Science, Mechanical/Aerospace Engineering, Electrical Engineering, or a related quantitative field; graduate school years can substitute for some industry experience.
    • Previous experience in research projects related to robotics, automation, or digital twin technologies, including contributions to academic journals or conferences that showcase research findings and technical expertise.
    • Proficiency in computer vision techniques and libraries (e.g., OpenCV) for object detection, tracking, and recognition, with practical applications in robotics.
    • Experience in designing and implementing control systems for robotic applications, including PID controllers, state-space control, and adaptive control techniques.
    • Ability to develop and optimize algorithms for tasks such as path planning, motion planning, and decision-making in robotics, enhancing system performance and efficiency.
    • Familiarity with hardware-in-the-loop (HIL) testing techniques to validate the performance of robotic systems in real time by integrating hardware components with simulation models.
    • Strong understanding of and/or interest in Agentic AI and Generative AI, with the ability to explore and implement innovative applications in these areas.
    • Experience with Cosmos or similar data management platforms to facilitate the integration and analysis of large datasets in AI applications.
    $90-100 hourly 5d ago
  • Meraki Engineer

    Tekstream Solutions (4.5 company rating)

    Data engineer job in Atlanta, GA

    Contract: initial 1-year contract with likely extension to 1-5 years. No third-party recruiters, please.
    TekStream Solutions is hiring a network engineer with Meraki experience for a contract role in the Atlanta area at one of our top clients. This is an initial 1-year contract with a preference for coming into the office 2 days a week in the Atlanta area. TekStream Solutions' key partnerships with AWS, Splunk, and Oracle have led to the building of unique solutions for clients, as well as recognition the past 7 years on the Inc. 5000 list of fast-growing companies. Please review the skill set below and send us a note if you are interested in discussing. Although primarily hiring for full-time roles, we are considering consultant/contract relationships for certain roles.
    Meraki Engineer:
    • Strong experience with the Meraki Dashboard API: add new organizations, admins, networks, devices, VLANs, and Service Set Identifiers (SSIDs); a sketch of this API follows the listing.
    • Provision thousands of new sites in minutes with an automation script (Python experience required).
    • Automatically onboard and off-board new employees' teleworker devices.
    • Multi-year experience with Meraki configurations.
    • Hands-on experience configuring, managing, and monitoring Meraki Wi-Fi access points, switches, security appliances, and network-attached devices.
    You'll thrive in this role if you're proactive about troubleshooting, comfortable working across multiple Meraki organizations, and communicate clearly with both technical and non-technical teams.
    $64k-91k yearly est. 5d ago
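
    As a hedged illustration of the Meraki Dashboard API automation this listing describes, here is a minimal sketch using the official meraki Python SDK; the API key placeholder is an assumption, and error handling is omitted.

        # Hedged sketch using the 'meraki' Python SDK (pip install meraki).
        import meraki

        dashboard = meraki.DashboardAPI("YOUR_API_KEY")  # placeholder key, an assumption

        # List every organization the key can see, then count its networks.
        for org in dashboard.organizations.getOrganizations():
            networks = dashboard.organizations.getOrganizationNetworks(org["id"])
            print(org["name"], "-", len(networks), "networks")
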
  • SRE/DevOps Engineer

    Insight Global

    Data engineer job in Alpharetta, GA

    As a Cloud Infrastructure Site Reliability Engineer (SRE) with expertise in multiple public cloud service provider platforms, you will be responsible for operating infrastructure solutions, following the principles and practices pioneered by Google's SRE model. Your work will ensure our cloud services meet uptime, reliability, and performance targets, and you will drive automation and continuous improvement across our production environments. This role will involve collaborating with cross-functional teams to enhance our cloud reliability posture and streamline processes through automation.
    Key Responsibilities:
    • Design, build, and maintain highly available, scalable, and secure cloud infrastructure on platforms such as AWS, GCP, or Azure.
    • Develop and implement automation for provisioning, monitoring, scaling, and incident response using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation, Ansible).
    • Monitor system reliability, capacity, and performance; proactively detect and address issues before they impact users.
    • Respond to production incidents, participate in on-call rotations, and lead post-incident reviews to drive root cause analysis and reliability improvements.
    • Collaborate with software engineering and security teams to ensure new services and features are production-ready and meet reliability standards.
    • Build and maintain tools for deployment, monitoring, and operations; automate manual processes to reduce toil.
    • Document operational processes and system architectures to ensure knowledge sharing and repeatability.
    • Continuously evaluate and implement new technologies to improve system reliability, security, and efficiency.
    Qualifications:
    • Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
    • 3+ years of experience in software development with proficiency in at least one programming language (e.g., Python, Go, Java, C++).
    • Experience administering cloud platforms (AWS, GCP, Azure), including networking, security, containerization, storage, data management, and serverless technologies.
    • Solid understanding of Linux systems, networking fundamentals, virtualized and distributed systems, file systems, system processes, and configurations.
    • Deep understanding of observability (monitoring, alerting, and logging) tools in cloud environments; ability to set up and maintain monitoring dashboards, alerts, and logs.
    • Familiarity with Continuous Integration/Continuous Deployment (CI/CD) tools for automated testing, deployments, provisioning, and observability.
    • Ability to manage and respond to incidents, perform root cause analysis, and implement post-mortem reviews.
    • Understanding of setting, monitoring, and maintaining Service-Level Objectives (SLOs) and Service-Level Agreements (SLAs) for system reliability.
    Pluses:
    • Experience working with enterprise-scale financial services or other regulated industries.
    • 5+ years of experience in SRE, DevOps, infrastructure, or cloud engineering roles, preferably supporting large-scale, distributed systems.
    • Excellent problem-solving, troubleshooting, and communication skills.
    • Experience leading technical projects or mentoring junior engineers.
    • Certifications: Certified Engineer, DevOps, SRE, CSREF.
    Compensation for this role ranges from $65-75/hr on W2, based on years of experience.
    $76k-100k yearly est. 1d ago
  • UI/UX Engineer

    Charter Global (4.0 company rating)

    Data engineer job in Atlanta, GA

    📅 Contract: 6+ Months
    About the Role: We're seeking a UI/UX Engineer to design and develop intuitive, user-centric web interfaces that deliver exceptional digital experiences. This role blends creative design, front-end development, and data-driven insights to optimize usability and performance.
    Key Responsibilities:
    • Design and implement responsive, accessible web interfaces using modern frameworks (React, Angular, Vue).
    • Collaborate with business and technical teams to translate requirements into functional UI/UX solutions.
    • Utilize analytics tools (Google Analytics, Tag Manager, Hotjar) to track user behavior and improve engagement.
    • Conduct A/B testing and generate insights for UX enhancements and conversion optimization.
    • Ensure compliance with accessibility standards (WCAG) and SEO best practices.
    Requirements:
    • 5+ years in UI/UX design and front-end development.
    • Proficiency in HTML5, CSS3, JavaScript, and design tools (Figma, Adobe XD, Sketch).
    • Experience with CMS platforms, version control (Git), and Agile methodologies.
    • Strong understanding of analytics, accessibility, and user-centered design principles.
    • Bachelor's degree in Design, HCI, or a related field.
    Regards, Ashish Lal | Talent Acquisition Manager | Charter Global Inc | One Glenlake Parkway, Suite 525, Atlanta, GA 30328
    $67k-85k yearly est. 5d ago
  • Senior Java Software Engineer

    Vaiticka Solution

    Data engineer job in Alpharetta, GA

    Sr. Java Developer. Job Type: Contract. Visa: GC-EAD, H4-EAD, Green Card, L2-EAD, TN Visa, US Citizen. Interview Mode: local candidates required; in-person interview.
    Job Description: We are seeking a highly skilled Java Developer with strong expertise in Apache Kafka to join our development team in Alpharetta, GA. The ideal candidate will have extensive experience in building scalable, high-performance enterprise applications using Java, Spring Boot, and Kafka, with a strong understanding of distributed systems and real-time data streaming.
    Key Responsibilities:
    • Design, develop, and maintain Java-based applications with a focus on event-driven architecture using Kafka.
    • Build and optimize Kafka producers, consumers, topics, and partitions for high throughput and low latency (a brief sketch follows this listing).
    • Implement Spring Boot microservices integrating with Kafka for real-time data pipelines.
    • Ensure data reliability, fault tolerance, and scalability in distributed systems.
    • Collaborate with architects, DevOps, and QA teams to ensure seamless integration and deployment.
    • Troubleshoot and resolve issues related to Kafka message processing, data streaming, and performance bottlenecks.
    • Write clean, maintainable, and efficient code, following best practices and coding standards.
    • Mentor junior developers and contribute to code reviews.
    Required Skills & Experience:
    • 8+ years of hands-on software development experience with Java/J2EE.
    • Strong experience with Apache Kafka (producers, consumers, brokers, schema registry, streams, connectors).
    • Expertise in the Spring Framework, Spring Boot, RESTful APIs, and microservices.
    • Solid understanding of multithreading, concurrency, and distributed system design.
    • Experience with RDBMS (Oracle, MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra).
    • Strong background in performance tuning, debugging, and optimizing Java applications.
    • Knowledge of CI/CD pipelines, Docker, and Kubernetes is a plus.
    • Familiarity with cloud platforms (AWS/Azure/GCP) is desirable.
    Thanks & regards, Akram Khan
    $83k-109k yearly est. 3d ago
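
    The listing targets Java, but the producer/consumer concepts it describes can be sketched briefly in Python with the kafka-python package; the broker address and topic name below are hypothetical.

        # Hedged Kafka producer/consumer sketch (kafka-python; hypothetical broker/topic).
        from kafka import KafkaConsumer, KafkaProducer

        producer = KafkaProducer(bootstrap_servers="localhost:9092")
        producer.send("orders", key=b"order-1", value=b'{"amount": 42}')
        producer.flush()  # block until the message is actually delivered

        consumer = KafkaConsumer(
            "orders",
            bootstrap_servers="localhost:9092",
            auto_offset_reset="earliest",
            consumer_timeout_ms=5000,  # stop iterating when no new messages arrive
        )
        for message in consumer:
            print(message.key, message.value)
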
  • Senior Lead Software Engineer

    Apps Accelerator

    Data engineer job in Alpharetta, GA

    Senior Lead Software Engineer / Founding Software Engineer. Company: Apps Accelerator. Type: Full-time / Founding Role.
    About Us: At Apps Accelerator, we're building an AI-first venture studio: a place where great ideas, intelligent engineering, and entrepreneurial energy collide. Based in Alpharetta, GA, we're designing and launching the next generation of AI-powered web and mobile companies that move fast, scale smart, and redefine what's possible. We're not another dev shop. We're builders of intelligent ventures, taking ideas from whiteboard to revenue with speed, precision, and creativity.
    The Role: We're looking for a Founding Software Engineer: someone who doesn't just want to write code but wants to build companies. You'll work side-by-side with the founder to design, develop, and launch AI-first products from concept to MVP. You'll experiment, iterate, and scale, using the full Microsoft and open AI ecosystem, from React and React Native to LLM integrations, automation, and intelligent systems. This role is perfect for someone who's half engineer, half entrepreneur: someone who thrives on building 0 → 1, learning fast, and shipping faster.
    What You'll Do:
    • Lead the development of AI-first MVPs using React, React Native, Node.js, and modern backend frameworks.
    • Integrate AI-assisted tools (e.g., GitHub Copilot, OpenAI API, LangChain) to accelerate development.
    • Architect scalable, cloud-based infrastructure on Azure or AWS.
    • Partner with design and strategy to bring ideas to life, fast.
    • Collaborate on product vision, roadmap, and technical direction for multiple ventures.
    • Establish the foundation for engineering culture, standards, and best practices.
    • Grow with the studio, mentoring new engineers as the venture portfolio expands.
    What We're Looking For:
    • 5+ years of experience in full-stack development (React, React Native, Node.js, TypeScript).
    • Strong grasp of AI tools and frameworks: LLMs, APIs, and automation workflows.
    • Startup or entrepreneurial experience: you've shipped products or launched something from scratch.
    • A builder's mindset: you care about speed, quality, and iteration.
    • Curiosity and grit. You love solving problems no one's solved before.
    Bonus Skills:
    • Experience with LangChain, Semantic Kernel, or LlamaIndex.
    • Familiarity with MLOps or deploying AI models in production.
    • Passion for UX/UI design and creating intuitive interfaces.
    • Interest in becoming a co-founder or equity partner in future ventures.
    Why Join Us:
    • Ground-floor opportunity in an AI venture studio that's redefining how companies are built.
    • Hybrid work model with in-person collaboration in Alpharetta, GA.
    • Competitive base + equity participation in future ventures.
    • Freedom to experiment, innovate, and build products that matter.
    • Work directly with the founder; no red tape, just real impact.
    $83k-109k yearly est. 2d ago
  • Senior Software Engineer 4672

    Tier4 Group

    Data engineer job in Atlanta, GA

    Senior Software Engineer - Integrations & Platforms. We're looking for a Senior Software Engineer to lead integrations across multiple systems in a fast-paced, growing tech team. This role is hands-on and collaborative: you'll design, build, and maintain integrations that help the business run smoothly while driving innovation.
    What You'll Do:
    • Lead the design, development, and deployment of APIs, microservices, and system integrations.
    • Collaborate with U.S. and international teams to define requirements and build solutions.
    • Troubleshoot and resolve integration issues to keep systems running reliably.
    • Mentor junior engineers and contribute to best practices across the team.
    • Participate in projects using Agile and Waterfall approaches.
    What We're Looking For:
    • Strong programming experience in Go or Python.
    • Experience with APIs, microservices, and containerized environments (Kubernetes).
    • Familiarity with SOA, integration patterns, event-driven architecture, and API management platforms.
    • Experience with cloud-native integrations and serverless computing (AWS Lambda, Azure Functions).
    • Proven ability to take ownership of initiatives and drive technical solutions.
    Nice to Have:
    • Experience with streaming platforms (Kafka, Kinesis, NATS) or industrial protocols (Modbus, MQTT, OPC-UA).
    • Background with security standards (SAML, OAuth, OpenID Connect).
    • Familiarity with CI/CD and DevOps practices.
    Why You'll Love This Role: You'll work on meaningful projects that impact multiple business areas, collaborate with talented teams, and grow your skills while making a real difference. Hybrid flexibility, challenging work, and the chance to innovate make this a role where you can truly shine.
    $83k-108k yearly est. 5d ago
  • Data Engineer

    A2C (4.7 company rating)

    Data engineer job in Alpharetta, GA

    5 days onsite in Alpharetta, GA.
    Skills required: Python, data pipelines, data analysis, data modeling, solid cloud experience, AI/ML, strong problem-solving skills, and strong communication skills.
    A problem solver with the ability to analyze and research complex issues and propose actionable solutions and strategies. Solid understanding of and hands-on experience with major cloud platforms. Experience designing and implementing data pipelines. Must have experience with at least one of GCP, AWS, or Azure, and the drive to learn GCP.
    $77k-106k yearly est. 1d ago

Learn more about data engineer jobs

How much does a data engineer earn in Roswell, GA?

The average data engineer in Roswell, GA earns between $65,000 and $114,000 annually. This compares with a national range of $80,000 to $149,000 for data engineers.

Average data engineer salary in Roswell, GA

$87,000

What are the biggest employers of Data Engineers in Roswell, GA?

The biggest employers of Data Engineers in Roswell, GA are:
  1. Equifax
  2. Ernst & Young
  3. Intercontinental Exchange
  4. IFG Companies
  5. a2c
  6. Quantum World Technologies
  7. AT&T
  8. Synchrony Financial
  9. ADP
  10. Locum Tenens USA