Senior Data Scientist (Senior Consultant)
Data scientist job in Atlanta, GA
Job Family:
Data Science Consulting
Travel Required:
Up to 10%
Clearance Required:
Ability to Obtain Public Trust
About our AI and Data Capability Team
Our consultants on the AI and Data Analytics Capability team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science services: from data architecture and storage, to data engineering and querying, to data visualization and dashboarding, to predictive analytics, machine learning, artificial intelligence, and intelligent automation. Our services enable our clients to define their information strategy, enable mission-critical insights and data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.
What You Will Do:
Data Collection & Management: Identify, gather, and manage data from primary and secondary sources, ensuring its accuracy and integrity.
Data Cleaning & Preprocessing: Clean raw data by identifying and addressing inconsistencies, missing values, and errors to prepare it for analysis.
Data Analysis & Interpretation: Apply statistical techniques and analytical methods to explore datasets, discover trends, find patterns, and derive insights.
Data Visualization & Reporting: Develop reports, dashboards, and visualizations using tools like Tableau or Power BI to present complex findings clearly to stakeholders.
Collaboration & Communication: Work with cross-functional teams, understand business requirements, and effectively communicate insights to support data-driven decision-making.
Problem Solving: Address specific business challenges by using data to identify underperforming processes, pinpoint areas for growth, and determine optimal strategies.
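The cleaning and preprocessing work described above can be sketched with a minimal, dependency-free example; all field names and values here are hypothetical:

```python
from statistics import median

# Hypothetical raw records with the usual defects: inconsistent
# categorical labels, missing values, and unparsed numerics.
raw = [
    {"region": " East ", "revenue": "1200"},
    {"region": "east",   "revenue": ""},      # missing value
    {"region": "West",   "revenue": "950"},
    {"region": "WEST",   "revenue": "1,100"}, # inconsistent formatting
]

def to_number(value):
    """Parse a revenue string; return None when the value is missing."""
    value = value.replace(",", "").strip()
    return float(value) if value else None

# Standardize categorical labels and parse numerics.
for row in raw:
    row["region"] = row["region"].strip().title()
    row["revenue"] = to_number(row["revenue"])

# Impute missing revenue with the median of the observed values.
observed = [r["revenue"] for r in raw if r["revenue"] is not None]
fill = median(observed)
for row in raw:
    if row["revenue"] is None:
        row["revenue"] = fill

print(raw[1])  # {'region': 'East', 'revenue': 1100.0}
```

A production pipeline would apply the same steps with pandas or Spark, but the logic is the same: standardize, parse, and impute before analysis.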
What You Will Need:
US Citizenship is required
Bachelor's degree is required
Minimum three (3) years of experience using Power BI, Tableau, and other visualization tools to develop intuitive, user-friendly dashboards and visualizations.
Skilled in SQL, R, and other languages to assist in database querying and statistical programming.
Strong foundational knowledge and experience in statistics, probability, and experimental design.
Familiarity with cloud platforms (e.g., Amazon Web Services, Azure, or Google Cloud) and containerization (e.g., Docker).
Experience applying data governance concepts and techniques to assure greater data quality and reliability.
The curiosity and creativity to uncover hidden patterns and opportunities.
Strong communication skills to bridge technical and business worlds.
What Would Be Nice To Have:
Hands-on experience with Python, SQL, and modern ML frameworks.
Experience in data and AI system development, with a proven ability to design scalable architectures and implement reliable models.
Expertise in Python or Java for data processing.
Demonstrated work experience within the public sector.
Ability to support business development including RFP/RFQ/RFI responses involving data science / analytics.
The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Data Scientist
Data scientist job in Atlanta, GA
Role: Data Scientist
Mode Of Hire: Full Time
Key Responsibilities
• Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
• Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
• Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
• Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
• Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
• Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration and environment configuration.
• Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.
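The performance monitoring and retraining strategy mentioned above can be sketched as a simple threshold check; the baseline, tolerance, and data are hypothetical:

```python
# Hypothetical monitoring check: compare a model's recent accuracy in
# production against its validation baseline, and flag it for
# retraining when the drop exceeds a tolerance.

BASELINE_ACCURACY = 0.91   # accuracy measured at validation time
TOLERANCE = 0.05           # acceptable degradation before retraining

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def needs_retraining(predictions, labels):
    return accuracy(predictions, labels) < BASELINE_ACCURACY - TOLERANCE

# Recent production predictions vs. ground truth gathered later.
preds = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
truth = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

print(accuracy(preds, truth))          # 0.7
print(needs_retraining(preds, truth))  # True
```

In practice this check would run on a schedule, log its metric to a tracking system such as MLflow, and trigger a retraining pipeline rather than just return a boolean.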
Skills Required:
• Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
• 5+ years of experience in a data science, ML engineering, or analytics role.
• Strong programming skills in SQL and Python, plus solid grounding in ML techniques.
• Experience with Azure Cloud, Databricks, and/or Snowflake.
• Experience building and deploying machine learning models in production environments, including hands-on experience with Databricks, SparkML, and MLflow integration.
• Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
• Experience with tools such as Git, MLflow, Docker and workflow schedulers.
• Ability to communicate complex technical work to non-technical stakeholders.
• Experience with scalable model training environments and distributed computing.
Preferred Qualifications
• Master's degree in a quantitative or technical discipline.
• Experience in financial services, fintech, or enterprise B2B analytics.
• Knowledge of A/B testing, causal inference, and statistical experimentation.
• Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
Master Data Consultant
Data scientist job in Atlanta, GA
Client is currently undertaking several large-scale global transformation projects, each of which requires conversion and migration of data and ongoing master data management. This role will provide essential support to a team of Lead Global Master Data Consultants as they work to prepare data for conversion and migration, and as they design and test several master data governance systems for ongoing data management.
Business continuity and SOX compliance hinge on the accuracy, completeness and integrity of the data preparation work that this team will deliver.
The candidate must be a self-starter, able to work on own initiative within a framework.
The role reports to the Director Data & Analytics, Aurora, and will take work direction from across the team of Lead Global Master Data Consultants.
Scope:
Each transformation project extends over multiple global regions. This role is likely to be required to work on more than one global project at once.
The following master data domains are the primary focus for this role: Customer, Product, Direct Material, Indirect Material, Vendor, Customer Hierarchy, Product Hierarchy. The role is likely to be required to work on more than one of these domains at the same time.
Role Responsibilities & Accountabilities:
These duties are not meant to be all-inclusive and other duties may be assigned.
Data Preparation for Data Conversion / Migration - the process of moving or extending master data into different systems or organization structures:
Master Data Cleansing, including coordination of multiple functional teams and cross-workstream coordination of cleansing of other data
Data standardization, harmonization, and requirements localization
SAP S4 Master data value mapping, data construction
Coordination of cross-workstream data element requirements, design, and value mapping
Test data creation
Data validation
Bug investigation and solutioning
Change Management and training
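The SAP S/4 value mapping and data validation steps listed above can be illustrated with a toy example; the unit-of-measure codes and material IDs are hypothetical, not actual SAP values:

```python
# Hypothetical value-mapping step: translate legacy master data values
# to the target system's codes, flagging anything unmapped so it can be
# routed back for cleansing before conversion.

UOM_MAP = {"EACH": "EA", "KILOGRAM": "KG", "LITRE": "L"}

legacy_materials = [
    {"id": "M-001", "uom": "EACH"},
    {"id": "M-002", "uom": "KILOGRAM"},
    {"id": "M-003", "uom": "CASE"},   # no mapping defined yet
]

mapped, unmapped = [], []
for rec in legacy_materials:
    target = UOM_MAP.get(rec["uom"])
    if target is None:
        unmapped.append(rec["id"])    # send back for cleansing
    else:
        mapped.append({"id": rec["id"], "uom": target})

print(mapped)    # [{'id': 'M-001', 'uom': 'EA'}, {'id': 'M-002', 'uom': 'KG'}]
print(unmapped)  # ['M-003']
```

Real conversions maintain mapping tables per data element and domain, but the pattern of map, validate, and quarantine is the same.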
Master Data Governance (SAP MDG) and Profisee - the tools and processes used to create and maintain master data records on an ongoing basis. Also known as the CRUD process:
Local functional requirements gathering and documentation
Alignment of local requirements to global solution
Supervision of outsourced test script development and execution
Coordination of functional requirements alignment across upstream and downstream platforms
Test master data creation
Testing the MDG process and tool
Bug investigation and solutioning
Change Management and training
Cut-over
Hypercare
Project task administration - various tools are used to manage these projects, including ADO, TargetProcess, Clarity, Jira, and Excel
Update task status and provide commentary where required
Monitor for new tasks or dependencies and take appropriate action
Maintain documentation - primarily involving the following data documents: data definition (DD), field mapping design (FMD) and functional design (FD):
Assist lead consultants with maintaining this documentation, making sure it is up to date, accurate, and as comprehensive as possible
Qualifications / Education / Experience Required:
BA or BS in Finance, Business Administration or equivalent
5+ years' experience in master data management, ideally within a consumer products company
An understanding of master data and transactional data processes, architecture, hierarchies, relationships, and interdependencies
SAP MDG CRUD process/workflow functional design and testing experience
Data Quality functional rule design and testing experience
Fiori UI functional design experience
Technical experience with data governance tools would be a bonus, but technical experience alone is not sufficient
Operational experience in delivering master data standardization, cleansing and harmonization
A self-starter, able to work on own initiative
Strong analytical skills
Must be hands on and detail oriented, but also able to see the big picture
Strong communication and influencing skills
Ability to question and challenge assumptions; interrogating existing practices to drive improvement with strong abstract thinking skills
Demonstrated multi-cultural savvy, i.e. experience working and delivering results across businesses, functions, regions, countries, and cultures
Strong listening skills
Fluent in English - Required (Additional languages preferred)
Microsoft Office programs including Word, Excel, and PowerPoint. SAP experience preferred
Ability to travel by ground and air up to 25% (primarily international travel)
Data Architect
Data scientist job in Atlanta, GA
Note: 100% onsite required for the first six months.
Employment Type: Permanent / Direct Hire / Full-time
Salary Up to $160,000 (depending on experience) + bonus
The Role:
We're seeking a highly skilled and hands-on Data Architect to lead the design, implementation, and ongoing evolution of our enterprise-grade data systems. This role is crucial for building scalable, secure, and intelligent data infrastructure that supports core analytics, operational excellence, and future AI initiatives. Success requires a seasoned technologist who can seamlessly integrate cloud-native services with traditional data warehousing to create a modern, unified data platform.
What You'll Do:
Architecture & Strategy: Lead the design and implementation of modern data platforms, including Data Lakes, Data Warehouses, and Lakehouse architectures, to enable a single source of truth for the enterprise.
Data Modeling & Integration: Architect unified data models that support both modular monoliths and microservices-based platforms. Design and optimize high-volume, low-latency streaming/batch ETL/ELT pipelines.
Technical Leadership: Drive the technical execution across the entire data lifecycle. Build and optimize core data processing scripts using Spark and Python.
Governance & Quality: Define and enforce standards for data governance, metadata management, and data observability across distributed systems. Implement automated data lineage tracking, schema evolution, and data quality monitoring.
Cloud Infrastructure: Configure and manage cloud-native data services, including core data storage and event ingestion infrastructure.
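The automated data quality monitoring called for above can be sketched as a set of expectation-style checks; the column names and rules here are hypothetical, and production work would use a tool like Great Expectations or dbt tests:

```python
# Evaluate a batch of records against named data quality expectations
# and report how many rows violate each rule.

rows = [
    {"order_id": 1, "amount": 120.0, "country": "US"},
    {"order_id": 2, "amount": -5.0,  "country": "US"},   # violates rule
    {"order_id": 3, "amount": 80.0,  "country": "XX"},   # violates rule
]

expectations = {
    "amount_non_negative": lambda r: r["amount"] >= 0,
    "country_is_known":    lambda r: r["country"] in {"US", "CA", "MX"},
}

# Failure counts per expectation, suitable for feeding an alerting system.
report = {
    name: sum(1 for r in rows if not check(r))
    for name, check in expectations.items()
}

print(report)  # {'amount_non_negative': 1, 'country_is_known': 1}
```

Hooking such a report into pipeline orchestration (fail the run, quarantine bad rows, or just alert) is where the governance standards above get enforced.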
Required Experience:
Experience: 10+ years of proven experience in enterprise data architecture and engineering.
Core Platform Expertise: Strong, hands-on experience with the Azure Data Ecosystem including Azure Data Lake Storage (ADLS), Azure Synapse Analytics (or equivalent cloud DW), and Azure Purview (or equivalent data catalog).
Processing: Deep expertise in Databricks (or Apache Spark) for ETL/ELT pipeline implementation, using Delta Lake and SQL Server (or equivalent RDBMS).
Coding & Scripting: Strong proficiency in Python, Spark, and advanced SQL.
Data Governance: Hands-on experience implementing data lineage tracking and data quality monitoring (e.g., using Great Expectations or dbt).
Preferred Skills:
Semantic Technologies: Hands-on experience developing ontology frameworks using OWL, RDF, and SPARQL to enable semantic interoperability.
Advanced AI Data: Experience integrating structured/unstructured data into Knowledge Graphs and Vector Databases.
Streaming/Telemetry: Experience developing and maintaining semantic telemetry pipelines using services like Azure Event Hubs or Kafka.
Emerging Concepts: Exposure to linked data ecosystems, data mesh, or data fabric concepts.
Senior Data Engineer
Data scientist job in Miami, FL
Job Title: Sr Data Engineer
Type: Direct Hire
Unfortunately, we are not able to sponsor work visas or accept candidates on temporary work authorizations (e.g., H1B, CPT, OPT, etc.) for this role. Due to federal regulations and contractual requirements, we are only able to consider applications from U.S. Citizens and Green Card holders (permanent residents) and at this time we cannot sponsor or transfer a work visa. We are an equal opportunity employer and comply with all applicable laws regarding non-discrimination.
Company Overview:
The Senior Data Engineer will play a pivotal role in building and maintaining scalable, reliable, and high-performance data pipelines and enterprise analytics infrastructure. This role is ideal for someone who thrives in a collaborative environment, enjoys mentoring others, and wants to drive innovation in data engineering across the organization.
Your Responsibilities on the Team
Lead the design and implementation of scalable, high-performance data architectures that support diverse data sources, including structured, semi-structured, and unstructured data.
Architect and build data pipelines that can process and analyze large-scale datasets in real-time and batch modes.
Mentor and guide junior data engineers in best practices, code quality, and technical skills, fostering a culture of continuous learning.
Ensure the reliability, efficiency, and security of data pipelines by implementing monitoring, alerting, and automated recovery mechanisms.
Collaborate with cross-functional teams, including data scientists, analysts, and product managers, to align data solutions with business needs and goals.
Lead the adoption of new technologies and tools that enhance the data engineering capabilities of the team.
Oversee the development of data models, schemas, and data marts that enable efficient data analysis and reporting.
Implement data governance frameworks, including data lineage, metadata management, and data quality standards.
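The monitoring, alerting, and automated recovery mechanisms mentioned above often start with a retry-with-backoff wrapper around flaky pipeline steps; this sketch uses a hypothetical failing step:

```python
import time

# Retry a pipeline step with exponential backoff; if all attempts fail,
# re-raise so the failure surfaces to alerting.

def run_with_retries(step, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted retries; let alerting handle it
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}

def flaky_load():
    """Hypothetical step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "loaded"

result = run_with_retries(flaky_load)
print(result, calls["n"])  # loaded 3
```

Orchestrators like Airflow provide this behavior via task-level `retries` settings; the wrapper just makes the mechanism explicit.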
Your Toolbox
Bachelor's degree in Computer Science, Information Technology, Management Information Systems, or a related field.
6+ years of experience in data engineering or a related role, with demonstrated success in delivering enterprise-scale data solutions.
Proficient in SQL, with the ability to write complex queries, perform query optimization, and conduct performance tuning.
Experience with cloud data warehouses such as Snowflake and Databricks, and an understanding of their appropriate use cases.
Strong programming skills in Python or Java, with experience in data processing frameworks (e.g., Apache Spark, Hadoop).
Experience with cloud platforms (AWS, Azure, GCP) and their managed data services, such as AWS Redshift, Azure Synapse, or Google BigQuery.
Proficiency with big data technologies, including Hadoop, Spark, Kafka, and HBase, with experience in distributed data processing.
Expertise with data orchestration tools, such as MWAA/Airflow, for scheduling and managing data workflows.
Experience with reporting tools such as Power BI and Tableau is a plus.
Understanding of data security practices, including encryption, access controls, and data masking.
Familiarity with cloud administration tools and frameworks such as AWS, dbt, Qlik Replicate, Tableau, and Snowflake is preferred.
Lead Data Engineer
Data scientist job in Tampa, FL
A leading Investment Management Firm is looking to bring on a Lead Data Engineer to join its team in Tampa, Denver, Memphis, or Southfield. This is an excellent chance to work alongside industry leaders while being hands-on and helping lead the team.
Key Responsibilities
Project Oversight: Direct end-to-end software development activities, from initial requirements through deployment, ensuring projects meet deadlines and quality standards.
Database Engineering: Architect and refine SQL queries, stored procedures, and schema designs to maximize efficiency and scalability within Oracle environments.
Performance Tuning: Evaluate system performance and apply strategies to enhance data storage and retrieval processes.
Data Processing: Utilize tools like Pandas and Spark for data wrangling, transformation, and analysis.
Python Solutions: Develop and maintain Python-based applications and automation workflows.
Pipeline Automation: Implement and manage continuous integration and delivery pipelines using Jenkins and similar technologies to optimize build, test, and release cycles.
Team Development: Guide and support junior engineers, promoting collaboration and technical growth.
Technical Documentation: Create and maintain comprehensive documentation for all development initiatives.
Core Skills
Experience: Over a decade in software engineering, with deep expertise in Python and Oracle database systems.
Technical Knowledge: Strong command of SQL, Oracle, Python, Spark, Jenkins, Kubernetes, Pandas, and modern CI/CD practices.
Optimization Expertise: Skilled in database tuning and applying best practices for performance.
Leadership Ability: Proven track record in managing teams and delivering complex projects.
Analytical Strength: Exceptional problem-solving capabilities with a data-centric mindset.
Communication: Clear and effective written and verbal communication skills.
Education: Bachelor's degree in Computer Science, Engineering, or equivalent professional experience.
Preferred Qualifications
Certifications: Professional credentials in Oracle, Python, Kubernetes, or CI/CD technologies.
Agile Background: Hands-on experience with Agile or Scrum frameworks.
Cloud Platforms: Familiarity with AWS, Azure, or Google Cloud services.
Azure Data Engineer
Data scientist job in Tampa, FL
We are looking for a proficient Azure Data Engineer to design, build, and maintain scalable data pipelines and architectures on the Azure cloud platform. The ideal candidate will have hands-on experience with data engineering tools and technologies including Python, SQL, Postgres, MongoDB, PySpark, Databricks, and Snowflake.
You will collaborate with data scientists, analysts, and business stakeholders to deliver high-quality, performant data solutions that enable data-driven decision-making.
Key Responsibilities:
Design, develop, and optimize end-to-end data pipelines and ETL/ELT processes leveraging Azure Data services and frameworks.
Build scalable data solutions using Azure Databricks, PySpark, and Snowflake to process both batch and real-time workloads.
Develop and maintain data models and schemas in relational and NoSQL databases such as Postgres and MongoDB.
Write efficient, reusable, and maintainable code primarily in Python and SQL to transform and load data across various systems.
Collaborate with cross-functional teams including data scientists, analysts, and business users to gather requirements and deliver data solutions that meet business needs.
Monitor data pipeline performance and implement improvements for reliability, scalability, and optimization.
Ensure data quality, governance, and compliance across all data engineering efforts.
Troubleshoot and resolve data-related issues, working closely with cloud infrastructure and platform teams.
Document data architecture, workflows, and processes to support ongoing maintenance and knowledge sharing.
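The extract-transform-load flow described above can be sketched in Python and SQL; SQLite stands in for the warehouse, and all table and column names are hypothetical:

```python
import sqlite3

# Extract: land raw records in a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_sales VALUES (?, ?)",
    [("east", 100.0), ("EAST", 50.0), ("west", 70.0)],
)

# Transform + load: normalize the region key and aggregate into a
# reporting table that downstream dashboards would query.
conn.execute("""
    CREATE TABLE sales_by_region AS
    SELECT UPPER(region) AS region, SUM(amount) AS total
    FROM raw_sales
    GROUP BY UPPER(region)
""")

result = conn.execute(
    "SELECT region, total FROM sales_by_region ORDER BY region"
).fetchall()
print(result)  # [('EAST', 150.0), ('WEST', 70.0)]
```

On Azure the same shape appears as a Databricks/PySpark job writing to Snowflake or a lakehouse table, with the normalization step catching exactly this kind of inconsistent source data.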
Data Engineer (Need Local to St Pete, FL and ONLY on W2)
Data scientist job in Saint Petersburg, FL
This is a hybrid position in St. Petersburg, FL. Candidates must be local to the area; relocation is not an option.
Strong proficiency with SQL and hands-on experience working with Oracle databases.
Experience designing and implementing ETL/ELT pipelines and data workflows.
Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM.
Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.).
Solid understanding of data modeling, relational databases, and schema design.
Familiarity with version control, CI/CD, and automation practices.
Ability to collaborate with data scientists to align data structures with model and analytics requirements
Preferred
Experience integrating data for use in AWS SageMaker or other ML platforms.
Exposure to MLOps or ML pipeline orchestration.
Familiarity with data cataloging and governance tools (AWS Glue Catalog, Lake Formation).
Knowledge of data warehouse design patterns and best practices.
Experience with data orchestration tools (e.g., Apache Airflow, Step Functions).
Working knowledge of Java is a plus.
Data Engineer - OrcaWorks AI
Data scientist job in Atlanta, GA
Experience Level: Entry-level (Master's preferred)
About OrcaWorks AI
At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.
Key Responsibilities
Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models.
Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa.
Implement data validation, transformation, and storage solutions using modern ETL frameworks.
Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
Support database management, SQLModel, and data governance practices across services.
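The search and indexing layer mentioned above (ElasticSearch or Vespa) rests on the inverted-index idea, sketched here with made-up documents; real engines add text analysis, ranking, and distribution:

```python
from collections import defaultdict

# Toy document store: IDs and text are hypothetical.
docs = {
    1: "customer churn model training data",
    2: "invoice ingestion pipeline for customer records",
    3: "product catalog embeddings",
}

# Build an inverted index: token -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(term: str) -> set:
    """Return the IDs of documents containing the term."""
    return index.get(term.lower(), set())

print(sorted(search("customer")))  # [1, 2]
```

Unifying structured and unstructured sources into such a queryable layer is then a matter of flattening each source into indexable documents.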
Required Skills & Qualifications
Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
Hands-on experience with Airbyte, Prefect, and DBT.
Familiarity with search and indexing systems like Vespa or ElasticSearch.
Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
Strong understanding of data security and applied AI workflows.
Lead Data Engineer - Palantir Foundry
Data scientist job in Atlanta, GA
Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
Address specific business challenges, integrate processes, and create great experiences
Connect our work to shared goals that propel WestRock forward in the Digital Age
Imagine how technology can advance the way we work by using disruptive technology
We are looking for forward thinking technologists that can accelerate our focus areas such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology.
As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.
How you will impact WestRock:
Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs.
Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.
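At its core, the workflow orchestration described above (Airflow-style) runs tasks in dependency order; this toy scheduler topologically sorts a DAG of hypothetical pipeline tasks, and is an illustration rather than the Airflow API:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (its predecessors).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "notify": {"load"},
}

# Resolve a valid execution order for the pipeline.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'quality_check', 'load', 'notify']
```

Airflow adds scheduling, retries, and distributed execution on top, but a DAG definition there expresses exactly this dependency structure.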
What you need to succeed:
Education: Bachelor's degree in computer science or similar
At least 6 years of strong Data Engineering experience
Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development.
Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines.
Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing.
Proven experience deploying data solutions on AWS or Azure, with strong understanding of cloud-native services.
Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment.
Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment.
Good communication skills, with the ability to collaborate effectively across technical and non-technical teams.
Good analytical and troubleshooting abilities.
What we offer:
Corporate culture based on integrity, respect, accountability and excellence
Comprehensive training with numerous learning and development opportunities
An attractive salary reflecting skills, competencies and potential
A career with a global packaging company where Sustainability, Safety and Inclusion are business drivers and foundational elements of the daily work.
Data Engineer
Data scientist job in Atlanta, GA
Key Responsibilities:
Design, develop, and maintain batch and streaming ETL/ELT pipelines using AWS services (Glue, Lambda, Step Functions, etc.).
Implement data ingestion frameworks from diverse sources (APIs, databases, streaming platforms).
Ensure data quality, governance, and security across all pipelines.
Build and optimize data lakes and warehouses leveraging Amazon S3, Redshift, Athena, and Lake Formation.
Collaborate with data scientists, analysts, and business stakeholders to deliver reliable datasets.
Monitor and troubleshoot data workflows, ensuring high availability and performance.
Stay updated with emerging AWS technologies and recommend improvements.
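Data lakes built on Amazon S3, as described above, typically land data under date-partitioned key prefixes so Athena and Glue can prune scans; this sketch builds a Hive-style key, with bucket layout and names hypothetical:

```python
from datetime import datetime, timezone

def partition_key(dataset: str, event_time: datetime, filename: str) -> str:
    """Build a Hive-style partitioned object key:
    dataset/year=YYYY/month=MM/day=DD/filename."""
    return (
        f"{dataset}/year={event_time.year}"
        f"/month={event_time.month:02d}"
        f"/day={event_time.day:02d}/{filename}"
    )

ts = datetime(2024, 3, 7, tzinfo=timezone.utc)
key = partition_key("orders", ts, "part-0001.parquet")
print(key)  # orders/year=2024/month=03/day=07/part-0001.parquet
```

Query engines that understand this layout (Athena, Redshift Spectrum, Glue) read only the partitions a query's date filter touches.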
Required Skills & Qualifications
Strong experience with AWS cloud services: S3, Glue, Redshift, EMR, Kinesis, Lambda, DynamoDB, RDS, Athena, SNS, SQS.
Proficiency in SQL, Python, and PySpark.
Knowledge of data modeling, schema design, and performance tuning.
Familiarity with CI/CD pipelines and DevOps tools (Terraform, CloudFormation).
Experience with security best practices (IAM, encryption, compliance).
Lead Azure Databrick Engineer
Data scientist job in Atlanta, GA
Individual Contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time.
An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.
Must Have Skills:
Experience migrating from other platforms to Databricks
Proficiency in Databricks and Azure Cloud, including Databricks Asset Bundles, with a holistic vision of the data strategy
Proficiency in Data Streaming and Data Modeling
Experience in architecting at least two large-scale big data projects
Strong understanding of data scaling and its complexities
Experience with data archiving and purging mechanisms.
Job Requirements
• Degree in computer science or equivalent preferred
• Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks.
• 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques.
• Working knowledge and experience with "Cloud Architectures" (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure Cloud computing
• Should have architected solutions for Cloud environments such as Microsoft Azure and/or GCP
• Experience with debugging and performance tuning in distributed environments
• Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
• Experience dealing with structured and unstructured data.
• Must have Python, PySpark experience.
• Experience in ML and/or graph analysis is a plus
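The archiving and purging requirement above usually reduces to retention-window logic. A minimal, platform-agnostic sketch in plain Python (on Databricks this would typically be a DELETE plus VACUUM over Delta tables; the 90-day window here is an arbitrary example):

```python
from datetime import date, timedelta

def split_by_retention(rows, today, retention_days=90):
    """Partition rows into (keep, purge) by a retention window.
    Each row is a (row_id, event_date) tuple."""
    cutoff = today - timedelta(days=retention_days)
    keep = [r for r in rows if r[1] >= cutoff]
    purge = [r for r in rows if r[1] < cutoff]
    return keep, purge

rows = [(1, date(2024, 1, 1)), (2, date(2024, 5, 1))]
keep, purge = split_by_retention(rows, today=date(2024, 5, 15))
```

The purge list would feed a delete-and-archive job rather than being discarded outright, so the archive remains queryable.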
Data Engineer
Data scientist job in Palm Beach Gardens, FL
Flybridge Staffing is currently searching for a Data Engineer for a client located in the Palm Beach Gardens area. This is a direct-hire position that will work off a hybrid schedule of 2 days remote. This person will design systems that supply high-performance datasets for advanced analytics.
Experience:
BA degree and 5+ years of Data Engineering experience
Strong experience building ETL data pipelines for on-premises SQL Server 2017 or newer
Deep understanding of the development of data pipelines with either SSIS or Python
Broad experience with SQL Server, including Columnstore, etc.
Extensive experience using SSMS and T-SQL to create and maintain SQL Server tables, views, functions, stored procedures, and user-defined table types.
Experience with data modeling, indexes, temporal tables, CLR, and Service Broker.
Experience in partitioning tables and indexes, and performance improvement with Query Analyzer
Experience writing C#, PowerShell, and Python.
Experience with source control integration with GitHub, BitBucket, and Azure DevOps.
Experience working in an Agile and Kanban SDLC.
Experience with cloud-based data management solutions such as Snowflake, Redshift.
Experience with Python programming is a plus. Libraries such as pandas, NumPy, csv, traceback, json, pyodbc, and math are nice to have.
Experience writing design documentation such as ERDs, Data Flow Diagrams, and Process Flow Diagrams.
Experience with open-source database engines such as ClickHouse, ArcticDB, and PostgreSQL is a plus.
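Several of the Python libraries named above (csv, json) typically appear in staging glue code around such pipelines. A small stdlib-only sketch of that pattern (the column names and CSV-to-JSON-lines layout are hypothetical, chosen only for illustration):

```python
import csv
import io
import json

# Convert CSV rows into JSON-lines records: the kind of staging step
# that often sits between an extract and a SQL Server bulk load.
csv_text = "id,name\n1,Alice\n2,Bob\n"
reader = csv.DictReader(io.StringIO(csv_text))
jsonl = "\n".join(json.dumps(row) for row in reader)
print(jsonl)
```

In production the source would be a file or API response rather than an inline string, but the DictReader-to-json.dumps shape is the same.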
Responsibilities:
Collaborate effectively with Stakeholders, Project Managers, Software Engineers, Data Analysts, QA Analysts, DBAs, and Data Engineers.
Build and maintain data pipelines based on functional and non-functional requirements.
Proactively seek out information and overcome obstacles to deliver projects efficiently.
Ensure that data pipelines incorporate best practices related to performance, scaling, extensibility, fault tolerance, instrumentation, and maintainability.
Ensure that data pipelines are kept simple and not overly engineered.
Produce and maintain design and operational documentation.
Analyze complex data problems and engineer elegant solutions.
NO SPONSORSHIP AVAILABLE: US Citizen, GC, or EAD only, please. If your background aligns with the above details and you would like to learn more, please submit your resume to jobs@flybridgestaffing.com or on our website, www.flybridgestaffing.com, and one of our recruiters will be in touch with you ASAP.
Follow us on LinkedIn to keep up with all our latest job openings and referral program.
Big Data Architect
Data scientist job in Sunrise, FL
14+ years of overall IT experience with expertise in Data landscape - Data Warehouse, Data Lake etc.
Hands on experience in Big Data and Hadoop ecosystem; Strong skills in SQL, Python or Spark
Proficient in Data Warehousing concepts and Customer Data Management (Customer 360)
Experience in GCP platform - Dataflow, Dataproc, Kubernetes containers etc.
Expertise in deep Data exploration and Data analysis
Excellent communication and interpersonal skills
GCP Data Architect with 14+ years (Day 1 onsite)
Data scientist job in Sunrise, FL
12-14 years of overall IT experience with expertise in Data landscape - Data Warehouse, Data lake etc.
Hands on experience in Big Data and Hadoop ecosystem; Strong skills in SQL, Python or Spark
Proficient in Data Warehousing concepts and Customer Data Management (Customer 360)
Experience in GCP platform - Dataflow, Dataproc, Kubernetes containers etc.
Expertise in deep Data exploration and Data analysis
Excellent communication and interpersonal skills
Data Engineer w/ Python & SQL
Data scientist job in Alpharetta, GA
We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.
MUST HAVES
A problem solver with the ability to analyze and research complex issues and propose actionable solutions and strategies.
Solid understanding of and hands-on experience with major cloud platforms.
Experience in designing and implementing data pipelines.
Must have strong Python, SQL & GCP skills
Responsibilities
Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, Composer.
Develop and tune BigQuery models, queries, and ingestion processes.
Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
Ensure data governance, security, and reliable pipeline operations.
Collaborate with data science teams and support Vertex AI-based ML workflows.
Must-Have
Must have strong Python, SQL & GCP skills
3-5+ years of data engineering experience.
Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
Solid ETL/ELT and data modeling experience.
Nice-to-Have
GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
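The batch/streaming responsibilities above (Dataflow, Pub/Sub) center on windowed aggregation. Stripped of the actual Beam API, the core tumbling-window idea can be sketched in plain Python (the 60-second window and event shape are arbitrary choices for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Count events per (key, window) bucket.
    Each event is (key, epoch_seconds) -- roughly what a Dataflow
    pipeline receives from Pub/Sub, minus the real windowing,
    watermark, and triggering machinery."""
    counts = defaultdict(int)
    for key, ts in events:
        # Align each timestamp to the start of its tumbling window.
        window_start = (ts // window_secs) * window_secs
        counts[(key, window_start)] += 1
    return dict(counts)

events = [("clicks", 10), ("clicks", 59), ("clicks", 61), ("views", 5)]
print(tumbling_window_counts(events))
```

In Beam this is `beam.WindowInto(FixedWindows(60))` followed by a count; the sketch just makes the bucketing arithmetic explicit.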
Big Data Architect
Data scientist job in Sunrise, FL
Skills: Hadoop/Big Data (GCP or AWS) and Data Warehousing:
14+ years of overall IT experience with expertise in Data landscape - Data Warehouse, Data lake etc.
Hands on experience in Big Data and Hadoop ecosystem; Strong skills in SQL, Python or Spark
Proficient in Data Warehousing concepts and Customer Data Management (Customer 360)
Experience in GCP platform - Dataflow, Dataproc, Kubernetes containers etc.
Expertise in deep Data exploration and Data analysis
Excellent communication and interpersonal skills
Data Engineer
Data scientist job in Alpharetta, GA
Data Engineer, Wissen Technology, Alpharetta, GA
The Wissen team continues to expand its footprint in Canada & the USA. More openings to come as we continue to grow the team!
Please read below for a brilliant career opportunity.
Role: Data Engineer
Title: Associate
Location: Alpharetta, GA (Day 1 Onsite) - Full-time
Mode of Work: Onsite
Required experience: 1+ years
Position Overview:
We are seeking a detail-oriented and motivated Data Engineer with 1+ years of experience to support the design, development, and maintenance of data pipelines and data workflows. The ideal candidate should have strong SQL and Python skills, with a solid understanding of ETL concepts.
Key Responsibilities:
Assist in building, maintaining, and optimizing ETL pipelines.
Work with structured and unstructured data to support analytics and reporting needs.
Collaborate with senior engineers and cross-functional teams to ensure data availability and quality.
Perform data validation, cleaning, and transformation tasks.
Monitor data pipeline performance and troubleshoot issues.
Support implementation of best practices in data engineering and data management.
Mandatory Skills:
Strong proficiency in SQL.
Good understanding of Python for data manipulation and scripting.
Knowledge of basic ETL concepts and data workflow fundamentals.
Preferred Skills:
Understanding of Hadoop ecosystem basics.
Familiarity with Kafka for real-time data streaming.
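For a role like this, the SQL-plus-Python combination above usually means validating rows in Python, loading them into a table, and querying them back. A self-contained sketch using Python's built-in sqlite3 (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Transform/validate, then load: skip rows with non-positive amounts.
raw_rows = [(1, 10.5), (2, -3.0), (3, 7.25)]
clean = [(i, amt) for i, amt in raw_rows if amt > 0]
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 17.75
```

The same validate-then-executemany shape carries over to production databases; only the driver and connection string change.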
Benefits:
401K
Healthcare insurance for you and your family (medical, dental, vision).
Short / Long term disability insurance.
Life Insurance.
Accidental death & disability Insurance.
3 weeks of Paid Time Off.
Support and fee coverage for immigration needs.
Remote office set up stipend.
Support for industry certifications.
Additional cash incentives.
We are: A high end technical consulting firm built and run by highly qualified technologists. Our workforce consists of 5000+ highly skilled professionals, with leadership from Wharton, MIT, IITs, IIMs, and NITs and decades of experience at Goldman Sachs, Morgan Stanley, MSCI, Deutsche Bank, Credit Suisse, Verizon, British Telecom, ISRO etc. Without any external funding or investments, Wissen Technology has grown its revenues by 100% every other year since it started as a subsidiary of Wissen Group in 2015. We have a global presence with offices in the US, India, UK, Australia, Mexico, and Canada.
You are: A true tech or domain ninja. Or both. Comfortable working in a quickly growing profitable startup, have a “can do” attitude and are willing to take on any task thrown your way.
You will:
· Develop and promote the company's culture of engineering excellence.
· Define, develop and deliver solutions at a top tier investment bank or another esteemed client.
· Perform other duties as needed
Your Education and Experience:
· We value candidates who can execute on our vision and help us build an industry-leading organization. Rather than requiring a specific certification or number of years of experience, we ask that you be able to demonstrate your skills in the areas outlined above.
· Graduate-level degree in computer science, engineering, or related technical field
Wissen embraces diversity and is an equal opportunity employer. We are committed to building a team that represents a variety of backgrounds, skills, and abilities. We believe that the more inclusive our team is, the better our work will be. All qualified applicants, including but not limited to LGBTQ+, Minorities, Females, the Disabled, and Veterans, are encouraged to apply.
About Wissen Technology:
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products.
We empower businesses with a dynamic portfolio of services and accelerators tailored to today's digital demands and based on future ready technology stack. Our services include Industry Leading Custom Software Development, AI-Driven Software Engineering, Generative AI & Machine Learning, Real-Time Data Analytics & Insights, Interactive Data Visualization & Decision Intelligence, Intelligent Process Automation, Multi-Cloud & Hybrid Cloud Strategies, Cross-Platform Mobile Experiences, CI/CD-Powered Agile DevOps, Automated Quality Engineering, and cutting-edge integrations.
Certified as a Great Place to Work for five consecutive years (2020-2025) and recognized as a Top 20 AI/ML vendor by CIO Insider, Wissen Group has delivered multimillion-dollar projects for over 20 Fortune 500 companies. Wissen Technology delivers exceptional value on mission-critical projects through thought leadership, ownership, and reliable, high-quality, on-time delivery.
Our industry-leading technical expertise stems from the talented professionals we attract. Committed to fostering their growth and providing top-tier career opportunities, Wissen ensures an outstanding experience and value for our clients and employees. We serve diverse industries, including Banking, E-commerce, Telecom, Healthcare, Manufacturing, and Energy.
Job Type: Full-time
Schedule:
Monday to Friday
Data Architect
Data scientist job in Sunrise, FL
JD:
14+ years of overall IT experience with expertise in Data landscape - Data Warehouse, Data lake etc.
Hands on experience in Big Data and Hadoop ecosystem; Strong skills in SQL, Python or Spark
Proficient in Data Warehousing concepts and Customer Data Management (Customer 360)
Experience in GCP platform - Dataflow, Dataproc, Kubernetes containers etc.
Expertise in deep Data exploration and Data analysis
Excellent communication and interpersonal skills
Claims Data Engineer
Data scientist job in Plantation, FL
NationsBenefits is recognized as one of the fastest growing companies in America and a Healthcare Fintech provider of supplemental benefits, flex cards, and member engagement solutions. We partner with managed care organizations to provide innovative healthcare solutions that drive growth, improve outcomes, reduce costs, and bring value to their members.
Through our comprehensive suite of innovative supplemental benefits, fintech payment platforms, and member engagement solutions, we help health plans deliver high-quality benefits to their members that address the social determinants of health and improve member health outcomes and satisfaction.
Our compliance-focused infrastructure, proprietary technology systems, and premier service delivery model allow our health plan partners to deliver high-quality, value-based care to millions of members.
We offer a fulfilling work environment that attracts top talent and encourages all associates to contribute to delivering premier service to internal and external customers alike. Our goal is to transform the healthcare industry for the better! We provide career advancement opportunities from within the organization across multiple locations in the US, South America, and India.
Position Summary:
We are seeking a seasoned EDI 837 Claims Data Engineer to design, develop, and maintain data pipelines that process healthcare claims in compliance with HIPAA and ANSI X12 standards. This role requires deep expertise in Electronic Data Interchange (EDI), particularly the 837 transaction set, and will be pivotal in ensuring accurate, timely, and secure claims data exchange across payers, providers, clearinghouses, state agencies, and CMS.
Key Responsibilities
EDI Development & Integration
Design, build, and maintain pipelines for processing 837 healthcare claim transactions.
Implement and support EDI workflows across multiple trading partners.
Ensure compliance with HIPAA regulations and ANSI X12 standards.
Data Engineering
Develop ETL processes to transform, validate, and load claims data into enterprise data warehouses.
Optimize data flows for scalability, reliability, and performance.
Collaborate with analysts and stakeholders to ensure claims data accuracy.
Write and optimize SQL queries, stored procedures, and scripts for validation and reporting.
Monitoring & Troubleshooting
Monitor EDI transactions for errors, rejections, and compliance issues.
Troubleshoot and resolve data mapping, translation, and connectivity problems.
Perform root cause analysis and implement corrective actions.
Collaboration
Work closely with business analysts, QA teams, and IT operations to support claims processing.
Partner with healthcare domain experts to align technical solutions with business needs.
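X12 837 files are segment-oriented text: segments terminated by `~` and elements separated by `*`. A minimal parsing sketch of that structure (the sample claim fragment below is fabricated and heavily simplified; real 837 files carry full ISA/GS envelopes, many loops, and configurable delimiters):

```python
def parse_x12_segments(edi_text, seg_term="~", elem_sep="*"):
    """Split raw X12 text into segments, each a list of elements."""
    return [seg.split(elem_sep)
            for seg in edi_text.strip().split(seg_term) if seg]

# Fabricated, simplified fragment: the CLM (claim) segment carries the
# patient account number and the total claim charge amount.
sample = "ST*837*0001~CLM*PATACCT123*450.00~SE*3*0001~"
segments = parse_x12_segments(sample)
claims = [s for s in segments if s[0] == "CLM"]
print(claims)  # [['CLM', 'PATACCT123', '450.00']]
```

Production pipelines would use a proper X12 translator and read the delimiters from the ISA envelope rather than hard-coding them, but the segment/element split shown here is the foundation of that mapping work.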
Required Skills & Qualifications
5+ years of experience in healthcare data engineering or claims integration.
Strong expertise with EDI 837 transactions and healthcare claims processing.
Proven experience with Medicaid and Medicare data exchanges between state agencies and CMS.
Hands-on experience with Databricks, SSIS, and SQL Server.
Knowledge of HIPAA compliance, CMS reporting requirements, and interoperability standards.
Strong problem-solving skills and ability to work in cross-functional teams.
Excellent communication and documentation skills.
Preferred Skills
Experience with Azure cloud platforms
Familiarity with other EDI transactions (835, 270/271, 276/277).
Exposure to data governance frameworks and security best practices.
Background in data warehousing and healthcare analytics.