Epidemiology Data Scientist
Data scientist job in Atlanta, GA
Job Family:
Data Science Consulting
Travel Required:
Up to 10%
Clearance Required:
Ability to Obtain Public Trust
What You Will Do:
Lead quantitative analysis and modeling of real-world data, including CMS, commercial claims, and clinical health data, to generate actionable insights for federal public health.
Lead analyses to estimate population-level prevalence and incidence of chronic disease leveraging real-world data (see the illustrative sketch after this list).
Design, implement, and refine machine learning and statistical models (e.g., regression, clustering, causal inference) for surveillance and research purposes.
Develop and maintain data pipelines and dashboards for large-scale health datasets using R, Python, and SQL.
Utilize Databricks and Snowflake for scalable data processing and analytics.
Collaborate with epidemiologists, clinicians, and public health experts to interpret findings and inform CDC policy and strategy.
Prepare scientific reports, presentations, and publications for both technical and non-technical audiences.
Ensure data quality, documentation, and reproducibility across all analytic workflows.
Support onboarding and training of new team members as needed.
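By way of illustration, a minimal pandas sketch of the kind of crude prevalence estimate described above; the file paths, column names, and diagnosis code are hypothetical assumptions, not details from the posting:

```python
# Illustrative only: crude prevalence from a claims extract.
# File paths, column names, and the diagnosis code are hypothetical.
import pandas as pd

claims = pd.read_parquet("claims_extract.parquet")        # one row per claim line
enrollment = pd.read_parquet("enrollment.parquet")        # one row per covered person

# Numerator: people with at least one qualifying diagnosis code in the period
cases = claims.loc[claims["dx_code"].str.startswith("I63"), "person_id"].nunique()

# Denominator: everyone enrolled for the full measurement period
population = enrollment.loc[enrollment["enrolled_full_year"], "person_id"].nunique()

print(f"Crude prevalence: {1000 * cases / population:.2f} per 1,000 enrollees")
```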
What You Will Need:
Advanced degree (MS or PhD) in Data Science, Epidemiology, Public Health, Biostatistics, or related field.
Minimum FIVE (5) years of experience in health data science and epidemiology, including deep experience analyzing CMS or commercial claims data, clinical data, or other large health datasets to generate epidemiological population-level estimates.
Strong proficiency in R, Python (including libraries such as Pandas, NumPy, Scikit-learn), and SQL, including large data set manipulation.
Demonstrated experience with statistical modeling, machine learning, and data visualization.
Experience with Databricks and/or Snowflake.
Strong background in population-level public health, with experience in scientific writing and presentation.
Excellent communication and collaboration skills.
Ability to work independently in a fast-paced environment to produce high-quality deliverables on time.
What Would Be Nice To Have:
Experience with generative AI, NLP, or advanced analytics in healthcare settings.
Deep experience in biostatistics and statistical platforms, including SAS and R.
Familiarity with public health surveillance systems, federal health programs, or similar public sector environments.
Experience in chronic disease surveillance, preferably involving neurological conditions.
Prior work in consulting and working with multi-disciplinary teams and stakeholder engagement.
The annual salary range for this position is $102,000.00-$170,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer and does not discriminate against protected veterans, individuals with disabilities, or applicants on any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Data Scientist (LLM & Financial data)
Data scientist job in Charlotte, NC
Job Responsibilities:
Analyze large financial datasets to extract insights and support business decisions.
Develop, implement, and evaluate machine learning models and algorithms tailored to banking and finance use cases (e.g., risk modeling, fraud detection, customer segmentation).
Apply and fine-tune large language models (LLMs) for tasks such as document analysis, customer communication, and regulatory compliance (see the sketch after this list).
Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
Communicate findings and recommendations through reports, dashboards, and presentations.
Work with data engineers to ensure data quality and pipeline reliability.
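One hedged illustration of the document-analysis piece above, using a generic zero-shot classifier as a stand-in for a fine-tuned LLM; the model choice, sample text, and labels are assumptions for the example:

```python
# Minimal sketch: triaging a financial document with a pretrained model.
# Model, document text, and candidate labels are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

doc = "The borrower failed to remit the scheduled payment by the due date."
labels = ["credit risk", "fraud", "customer complaint", "regulatory notice"]

result = classifier(doc, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top label and score
```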
Data Scientist
Data scientist job in Chattanooga, TN
BUILT TO CONNECT
At Astec, we believe in the power of connection and the importance of building long-lasting relationships with our employees, customers and the communities we call home. With a team more than 4,000 strong, our employees are our #1 advantage. We invest in skills training and provide opportunities for career development to help you grow along with the business. We offer programs that support physical safety, as well as benefits and resources to enhance total health and wellbeing, so you can be your best at work and at home.
Our equipment is used to build the roads and infrastructure that connects us to each other and to the goods and services we use. We are an industry leader known for delivering innovative solutions that create value for our customers. As our industry evolves, we are using new technology and data like never before.
We're looking for creative problem solvers to build the future with us. Connect with us today and build your career at Astec.
LOCATION: Chattanooga, TN On-site / Hybrid (Role must report on-site regularly)
ABOUT THE POSITION
The Data Scientist will play a key role in establishing the analytical foundation of Astec Smart Services. This individual will lead efforts to build pipelines from source to cloud, define data workflows, build predictive models, and help guide the team's approach to turning data into customer value. They will work closely within Smart Services and cross-functionally to ensure insights are actionable and impactful. The role blends data architecture, data engineering, and data science to build the Smart Services analytical foundation, and will be instrumental in advancing Astec's digital transformation and aftermarket strategy.
Deliverables & Responsibilities
Data Engineering:
Build and maintain robust data pipelines for ingestion, transformation, and storage.
Optimize ETL processes for scalability and performance.
Data Architecture:
Design and implement data models that support analytics and operational needs.
Define standards for data governance, security, and integration.
Data Science:
Develop predictive models and advanced analytics to support business decisions (an illustrative sketch follows this list).
Apply statistical and machine learning techniques to large datasets.
Apply strong business acumen to understand decision drivers for internal and external customers.
Collaborate with individuals and departments across the company to ensure insights are aligned with customer needs and drive value.
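As a minimal sketch of the predictive-modeling work above, assuming hypothetical equipment-telemetry features and a failure label:

```python
# Illustrative sketch: a simple predictive model on operational sensor data.
# The CSV, feature names, and failure label are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("machine_telemetry.csv")
X = df[["vibration_rms", "motor_temp_c", "runtime_hours"]]
y = df["failed_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```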
To be successful in this role, your experience and competencies are:
Bachelor's degree in data science, engineering, or a related field (advanced degrees a plus).
5+ years of experience in data science, including at least 3 years in industrial or operational environments.
Strong communication and project management skills are critical.
Proficiency in data pipeline tools (e.g., Spark, Airflow) and cloud platforms (Azure, AWS, GCP).
Strong understanding of data modeling principles and database technologies (SQL/NoSQL).
Hands-on experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and statistical analysis.
Ability to work across data architecture design and data science experimentation.
Programming: Python, SQL, and optionally Scala or Java.
Familiarity with distributed systems and big data technologies.
Strong communication skills for translating technical insights into business value.
Ability to work across technical, commercial, and customer-facing teams.
Supervisor and Leadership Expectations
This role will not have supervisory or managerial responsibilities.
This role will have program management responsibilities.
Our Culture and Values
Employees that become part of Astec embody the values below throughout their work.
Continuous devotion to meeting the needs of our customers
Honesty and integrity in all aspects of business
Respect for all individuals
Preserving entrepreneurial spirit and innovation
Safety, quality and productivity as means to ensure success
EQUAL OPPORTUNITY EMPLOYER
As an Equal Opportunity Employer, Astec does not discriminate on the basis of race, creed, color, religion, gender (sex), sexual orientation, gender identity, marital status, national origin, ancestry, age, disability, citizenship status, a person's veteran status or any other characteristic protected by law or executive order.
Data Scientist
Data scientist job in Atlanta, GA
Role: Data Scientist
Mode Of Hire: Full Time
Key Responsibilities
• Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
• Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
• Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
• Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
• Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice (see the sketch after this list).
• Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration and environment configuration.
• Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.
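A minimal sketch of the MLflow tracking-and-registration pattern referenced above; the experiment name, toy data, and logged metric are illustrative assumptions:

```python
# Hedged sketch: track a training run and log the model with MLflow.
# Experiment name, synthetic data, and metric are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, random_state=0)
mlflow.set_experiment("churn-model-demo")

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for deployment
```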
Skills Required:
• Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
• 5+ years of experience in a data science, ML engineering, or analytics role.
• Strong programming skills in SQL and Python, plus solid grounding in ML techniques.
• Experience with Azure Cloud, Databricks, and/or Snowflake.
• Experience building and deploying machine learning models in production environments. Hands-on experience with Databricks, including SparkML and MLflow integration.
• Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
• Experience with tools such as Git, MLflow, Docker and workflow schedulers.
• Ability to communicate complex technical work to non-technical stakeholders.
• Experience with scalable model training environments and distributed computing.
Preferred Qualifications
• Master's degree in a quantitative or technical discipline.
• Experience in financial services, fintech, or enterprise B2B analytics.
• Knowledge of A/B testing, causal inference, and statistical experimentation.
• Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
Data Scientist
Data scientist job in Savannah, GA
We are looking for a Data Scientist with expertise in optimization and forecasting to help improve how we manage labor, staffing, and operational resources across our retail locations. This role is critical in building models and decision-support tools that ensure the right people, in the right place, at the right time - balancing customer service, efficiency, and cost. You will work closely with Operations, Finance, and Store Leadership teams to deliver practical solutions that improve labor planning, scheduling, and demand forecasting. The right candidate will be confident, resourceful, and excited to own both the technical and business-facing aspects of applying data science in a fast-paced retail environment.
Responsibilities
Build and maintain forecasting models (time-series, machine learning, and statistical) for sales and transactions.
Develop and deploy optimization models (linear/mixed-integer programming, heuristics, simulation) to improve workforce scheduling and labor allocation (a toy sketch follows this list).
Partner with operations and finance to translate forecasts into actionable staffing and labor plans that reduce costs while maintaining service levels.
Build dashboards and automated tools to track forecast accuracy, labor KPIs, and staffing effectiveness.
Provide insights and “what-if” scenario modeling to support strategic workforce and budget planning.
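To make the optimization idea concrete, a toy mixed-integer staffing sketch with SciPy; the demand profile, two-block shift structure, and uniform shift cost are invented for the example:

```python
# Toy staffing optimization: minimize headcount while covering demand.
# Demand numbers and shift structure are invented assumptions.
import numpy as np
from scipy.optimize import linprog

demand = np.array([3, 6, 8, 8, 5, 3])          # staff needed per 4-hour block
n = len(demand)

# x[i] = employees starting an 8-hour shift (two consecutive blocks) at block i
cover = np.zeros((n, n))
for start in range(n):
    cover[start, start] = 1
    cover[(start + 1) % n, start] = 1          # shift wraps into the next block

# Minimize total staff subject to covering demand in every block (integer x)
res = linprog(np.ones(n), A_ub=-cover, b_ub=-demand,
              bounds=[(0, None)] * n, method="highs", integrality=1)
print("Shift starts per block:", res.x.round().astype(int))
print("Total staff needed:", int(res.x.sum()))
```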
Knowledge, Skills, And Abilities
Strong foundation in forecasting techniques (time-series models, regression, machine learning) and optimization methods (linear/mixed-integer programming, heuristics, simulation).
Proficiency in Python or R for modeling and analysis, along with strong SQL skills for working with large-scale datasets.
Knowledge of statistics, probability, and applied mathematics to support predictive and prescriptive modeling.
Experience building and deploying predictive models, optimization tools, and decision-support solutions that drive measurable business outcomes.
Strong data storytelling and visualization skills using tools such as Power BI, Tableau, or Looker.
Ability to translate analytical outputs into clear, actionable recommendations for non-technical stakeholders.
Strong collaboration skills with the ability to partner cross-functionally with Operations, Finance, and Store Leadership to drive adoption of data-driven approaches.
Ability to work independently and resourcefully, combining technical depth with practical problem-solving to deliver results in a fast-paced environment.
Education And Requirements
Required:
Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Industrial Engineering, Operations Research, or related field.
Minimum 2-3 years of professional experience in Data Science or a related area.
Strong skills in time-series forecasting (e.g., ARIMA, Prophet, ML-based approaches).
Proficiency in optimization techniques (linear programming, integer programming).
Strong Python or R programming skills.
SQL expertise for large, complex datasets.
Strong communication skills with the ability to partner with business stakeholders.
Preferred
Experience in Retail, Restaurant, and/or Convenience Stores a plus.
Experience with cloud platforms (Snowflake, AWS, GCP, Azure).
Knowledge of BI tools (Tableau, Power BI, Looker).
Physical Requirements
Prolonged periods sitting/standing at a desk and working on a computer
Must be able to lift up to 50 pounds
Parker's is an equal opportunity employer committed to hiring a diverse workforce and sustaining an inclusive culture. Parker's does not discriminate on the basis of disability, veteran status or any other basis protected under federal, state, or local laws.
Senior Data Scientist
Data scientist job in Charlotte, NC
About Us:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************
Role: Senior Data Scientist - Generative AI & Solution Architecture
Location: Charlotte, NC
Work Mode - Hybrid (2-3 days from client office)
Experience: 10+ Years
Job Description
We are looking for a Senior Data Scientist to lead the design and implementation of a Generative AI-driven Contract Risk Reporting System. This role involves building advanced RAG pipelines, a reporting engine, and an interactive chatbot. The ideal candidate combines deep technical expertise, AI/ML fundamentals, and the ability to interpret legal contract language into actionable solution requirements, while engaging directly with clients.
Key Responsibilities:
Architect and deliver AI solutions for contract risk analysis and reporting.
Design and implement LLM-based RAG systems, reporting dashboards, and conversational interfaces (see the retrieval sketch after this list).
Translate legal domain requirements into technical specifications.
Collaborate with clients for requirement gathering, solution validation, and presentations.
Ensure MLOps best practices, model monitoring, and Model Risk Management (MRM) compliance.
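A stripped-down sketch of the retrieval step in such a RAG pipeline, assuming sentence-transformers for embeddings; the clause texts, query, and model name are illustrative:

```python
# Minimal RAG retrieval step: embed contract clauses, rank by similarity.
# Clause texts, query, and model name are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

clauses = [
    "Either party may terminate this agreement with 30 days written notice.",
    "The supplier shall indemnify the buyer against third-party claims.",
    "All disputes shall be resolved by binding arbitration in New York.",
]
clause_vecs = model.encode(clauses, normalize_embeddings=True)

query = "What is our exposure to indemnification obligations?"
query_vec = model.encode([query], normalize_embeddings=True)[0]

scores = clause_vecs @ query_vec                 # cosine similarity (unit vectors)
best = int(np.argmax(scores))
print(f"Top clause ({scores[best]:.2f}): {clauses[best]}")
# The retrieved clause would then be passed to the LLM as grounding context.
```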
Required Skills:
Expertise in Generative AI, LLMs, RAG architectures, and NLP techniques.
Strong foundation in machine learning algorithms, model evaluation, and feature engineering.
Hands-on experience with MLOps pipelines, model governance, and risk controls.
Proficiency in Python, LangChain, Vector Databases, and Cloud platforms (AWS/Azure/GCP).
Ability to interpret legal contract language and map it to technical solutions.
Excellent communication, client engagement, and solution design skills.
Preferred:
Experience in legal tech, contract analytics, or risk management solutions.
Familiarity with prompt engineering, domain-specific fine-tuning, and LLM optimization.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Senior Data Scientist
Data scientist job in Knoxville, TN
This person is responsible for extracting insights and building predictive models from data: analyzing curated data, building ML models, and validating them with experiments. The data scientist researches and defines business problems, translates them into analytical or ML tasks, and conducts exploratory data analysis to identify patterns, correlations, and anomalies.
They will be in charge of designing, training, and evaluating statistical, ML, or AI models; working with SMEs to determine use cases, model requirements, and alternative solutions; defining requirements and specifications from which Data Specialists build data structures; and working closely with Data Engineers and customers to ensure correct implementation.
Required Experience:
Bachelor's degree in Mathematics, Operations Research, Business, Economics, or a related field preferred.
7+ years of experience designing and implementing mathematical and statistical models.
Strong experience with Python/R, PyTorch, scikit-learn, Jupyter, visualization, and A/B testing (see the sketch after this list).
SQL (Oracle, MySQL, or Hive).
Oracle DB and Hadoop experience preferred.
Ability to work independently and under deadlines.
Strong communication and problem-solving skills.
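For the A/B-testing point above, a tiny two-proportion z-test sketch; the conversion counts are invented for the example:

```python
# Illustrative A/B test: compare conversion rates between two variants.
# Counts and exposures are made-up numbers.
from statsmodels.stats.proportion import proportions_ztest

conversions = [430, 502]     # variant A, variant B
exposures = [8000, 8100]

stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# Reject the null of equal conversion rates at alpha = 0.05 if p < 0.05.
```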
Clearance Requirement:
Must be a US Citizen with an active DoD Secret security clearance.
Marathon TS is committed to the development of a creative, diverse and inclusive work environment. In order to provide equal employment and advancement opportunities to all individuals, employment decisions at Marathon TS will be based on merit, qualifications, and abilities. Marathon TS does not discriminate against any person because of race, color, creed, religion, sex, national origin, disability, age or any other characteristic protected by law (referred to as "protected status").
Senior Data Scientist
Data scientist job in Birmingham, AL
We're seeking a Contract-to-Hire Senior Data Scientist to lead and collaborate with a multidisciplinary team in designing and developing innovative analytical products and solutions using Machine Learning, NLP, and Deep Learning. This role is ideal for someone who thrives in ambiguity, enjoys solving complex problems, and can translate business needs into measurable outcomes.
What You'll Do
• Partner with business leaders to understand needs and define measurable goals
• Gather requirements, build project plans, manage deadlines, and communicate updates
• Analyze large structured and unstructured datasets
• Build, evaluate, implement, and maintain predictive models
• Present results to both technical and non-technical stakeholders
• Deploy models and monitor ongoing performance and data accuracy
• Contribute ideas, stay current with industry trends, and support team development
Lead-Level Opportunities Include:
• Driving data science strategy and overseeing project delivery
• Providing technical mentorship and leadership to the team
• Promoting innovation and exploring emerging tech, tools, and methodologies
What We're Looking For
• Bachelor's degree in Applied Mathematics, Statistics, Computer Science, Data Science, or related field
• 3-6 years of relevant experience (advanced degrees may reduce required experience)
• Strong skills in machine learning, statistical modeling, and data analysis
• Proficiency in Python or R
• Experience with large datasets, preprocessing, and feature engineering
• Prior management experience
• Experience with transfer learning (see the sketch after this list)
• Experience building and deploying deep learning solutions
• Strong communication skills and ability to present complex concepts clearly
• Experience in life insurance or a related domain is a plus
• Ability to independently manage projects end-to-end
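A minimal PyTorch transfer-learning sketch matching the bullet above, assuming an ImageNet-pretrained backbone and a hypothetical two-class task:

```python
# Transfer-learning sketch: freeze a pretrained backbone, train a new head.
# The two-class task and random batch are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                        # freeze pretrained features

backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                       # stand-in batch
loss = criterion(backbone(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```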
Qualifications
• Master's or PhD
• Industry experience in similar roles
• Publications or patents in data science or ML
• Experience collaborating across technical and business teams
• Familiarity with software engineering best practices and version control
• Relevant certifications (AWS ML Specialty, Google Data Engineer, etc.)
Rooted in Birmingham. Focused on You.
We're a local recruiting firm based right here in Birmingham. We partner with top companies across the city, from large corporations to fast-growing startups, and we'd love to meet you for coffee to talk about your career goals. Whether you're actively searching or just exploring, we're here to guide you through the entire process, from resume tips to interview coaching.
At our clients' request, only individuals with required experience will be considered.
Please note - if you have recently submitted your resume to a PangeaTwo posting, your qualifications will be considered for other open opportunities.
Your resume will never be submitted to a client without your prior knowledge and consent to do so.
Data Governance Consultant
Data scientist job in Charlotte, NC
Job Title: Data Governance Consultant 4
Pay Rate: $61-$63/hr
Type/Duration of Assignment: 6 month contract
Hours/Schedule/Training (as needed): Standard M-F
Must Haves:
Hands-on financial governance experience required.
5+ years of experience within Data Governance; Governance, Controls, Risk and Compliance; or Third Party Risk Management
Job Description:
Consult on complex initiatives with broad impact and large-scale planning for Business Execution.
Review and analyze complex multi-faceted, larger scale, or longer-term Business Execution challenges that require in-depth evaluation of multiple factors including intangibles or unprecedented factors.
Contribute to the resolution of complex and multi-faceted situations requiring solid understanding of the function, policies, procedures, and compliance requirements that meet deliverables.
Strategically collaborate and consult with client personnel.
Electronic Data Interchange Consultant
Data scientist job in Birmingham, AL
DETAILS: EDI CONSULTANT /TRAINER
Title: EDI Consultant
Length: 3-6 months for first project and extensions from there, possible for multiple projects
Compensation: Hourly DOE
Location: Meadowbrook, AL (Birmingham); can be remote, but must visit onsite for a couple of weeks at the start and occasionally as needed thereafter.
OVERVIEW: EDI CONSULTANT /TRAINER
This individual will plan, develop, and implement the EDI operations and strategy roadmap for the organization, and will train and mentor a small team.
RESPONSIBILITIES: EDI CONSULTANT /TRAINER
Manage Mapping and Administration for TrustedLink/OpenText/BizManager for iSeries/AS400
Mentor a small team of resources to assist in EDI operations.
Oversees the design, development, testing, deployment, and maintenance of the EDI systems, applications, and integrations; must be strong with TrustedLink and BizManager for iSeries.
Develops and documents specifications.
Monitors and evaluates the EDI system's performance, availability, security, and compliance, and initiates corrective actions as needed.
Ensures that the EDI systems adhere to the industry standards, best practices, and regulatory requirements.
Resolves complex EDI issues and provides technical support and guidance to the users.
Establishes and maintains effective relationships with the internal and external stakeholders, such as business units, IT departments, vendors, and trading partners.
MINIMUM REQUIREMENTS: EDI CONSULTANT /TRAINER
Experience with AS400 / iSeries and RPG development and data files.
Strong experience with OpenText, TrustedLink, and BizManager for iSeries
2+ years leadership experience training and leading a small team
10+ years of experience in EDI systems development, implementation, and management.
Extensive knowledge and expertise in EDI standards, formats, protocols, and technologies, such as ANSI X12, EDIFACT, XML, AS2, FTP, and VAN communication protocols.
Data Architect
Data scientist job in Atlanta, GA
Note: Initial 100% onsite required for the first six months.
Employment Type: Permanent / Direct Hire / Full-time
Salary Up to $180,000 (depending on experience) + bonus
The Role:
We're seeking a highly skilled and hands-on Data Architect to lead the design, implementation, and ongoing evolution of our enterprise-grade data systems. This role is crucial for building scalable, secure, and intelligent data infrastructure that supports core analytics, operational excellence, and future AI initiatives. Success requires a seasoned technologist who can seamlessly integrate cloud-native services with traditional data warehousing to create a modern, unified data platform.
What You'll Do:
Architecture & Strategy: Lead the design and implementation of modern data platforms, including Data Lakes, Data Warehouses, and Lakehouse architectures, to enable a single source of truth for the enterprise.
Data Modeling & Integration: Architect unified data models that support both modular monoliths and microservices-based platforms. Design and optimize high-volume, low-latency streaming/batch ETL/ELT pipelines.
Technical Leadership: Drive the technical execution across the entire data lifecycle. Build and optimize core data processing scripts using Spark and Python.
Governance & Quality: Define and enforce standards for data governance, metadata management, and data observability across distributed systems. Implement automated data lineage tracking, schema evolution, and data quality monitoring.
Cloud Infrastructure: Configure and manage cloud-native data services, including core data storage and event ingestion infrastructure.
Required Experience:
Experience: 10+ years of proven experience in enterprise data architecture and engineering.
Core Platform Expertise: Strong, hands-on experience with the Azure Data Ecosystem including Azure Data Lake Storage (ADLS), Azure Synapse Analytics (or equivalent cloud DW), and Azure Purview (or equivalent data catalog).
Processing: Deep expertise in Databricks (or Apache Spark) for ETL/ELT pipeline implementation, using Delta Lake and SQL Server (or an equivalent RDBMS); see the sketch after this list.
Coding & Scripting: Strong proficiency in Python, Spark, and advanced SQL.
Data Governance: Hands-on experience implementing data lineage tracking and data quality monitoring (e.g., using Great Expectations or dbt).
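As a sketch of the Spark-based pipeline work above, with hypothetical ADLS paths and column names (Delta-specific features omitted for brevity):

```python
# Compact PySpark ETL sketch; storage paths and column names are hypothetical,
# and the session is assumed to run on Databricks or a similar platform.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/orders/")

clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .filter(F.col("amount") > 0))

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("abfss://curated@account.dfs.core.windows.net/orders/"))
```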
Preferred Skills:
Semantic Technologies: Hands-on experience developing ontology frameworks using OWL, RDF, and SPARQL to enable semantic interoperability.
Advanced AI Data: Experience integrating structured/unstructured data into Knowledge Graphs and Vector Databases.
Streaming/Telemetry: Experience developing and maintaining semantic telemetry pipelines using services like Azure Event Hubs or Kafka.
Emerging Concepts: Exposure to linked data ecosystems, data mesh, or data fabric concepts.
ML Engineer with Timeseries data experience
Data scientist job in Atlanta, GA
Role: ML Engineer with Timeseries data experience
Hybrid in Atlanta, GA (locals preferred)
$58/hr on C2C, Any Visa
Model Development: Design, build, train, and optimize ML/DL models for time-series forecasting, prediction, anomaly detection, and causal inference.
Data Pipelines: Create robust data pipelines for collection, preprocessing, feature engineering, and labeling of large-scale time-series data.
Scalable Systems: Architect and implement scalable AI/ML infrastructure and MLOps pipelines (CI/CD, monitoring) for production deployment.
Collaboration: Work with data engineers, software developers, and domain experts to integrate AI solutions.
Performance: Monitor, troubleshoot, and optimize model performance, ensuring robustness and real-world applicability.
Languages & Frameworks: Good understanding of the AWS ecosystem, Python (Pandas, NumPy), PyTorch, TensorFlow, Scikit-learn, and PySpark.
ML/DL Expertise: Strong grasp of time-series models (ARIMA, Prophet, deep learning), anomaly detection, and predictive analytics (see the sketch at the end of this listing).
Data Handling: Experience with large datasets, feature engineering, and scalable data processing.
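A small rolling z-score sketch of the anomaly-detection theme above, on a synthetic series invented for the example:

```python
# Rolling z-score anomaly detection on a synthetic time series.
# The series, window size, and threshold are illustrative choices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
ts = pd.Series(rng.normal(100, 2, 500))
ts.iloc[250] += 15                         # inject an anomaly

rolling_mean = ts.rolling(window=48).mean()
rolling_std = ts.rolling(window=48).std()
z = (ts - rolling_mean) / rolling_std

anomalies = ts[z.abs() > 4]
print(anomalies)                           # flags the injected spike
```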
Data Engineer
Data scientist job in Charlotte, NC
Experience Level: Mid (5-7 Years)
W2 ONLY - NO 3RD PARTIES PLEASE
CONTRACT / C2H
Role Objectives
• These roles will be part of the Data Strategy team spanning the Client Capital Markets teams.
• These roles will be involved in the active development of the data platform in close coordination with the Client team, beginning with the establishment of a reference data system for securities and pricing data and later moving to other data domains.
• The consulting team will need to follow internal development standards to contribute to the overall agenda of the Data Strategy team.
Qualifications and Skills
• Proven experience as a Data Engineer with experience in Azure cloud.
• Experience implementing solutions using:
• Azure cloud services
• Azure Data Factory
• Azure Data Lake Storage Gen2
• Azure Databases
• Azure Data Fabric
• API Gateway management
• Azure Functions
• Well versed with Azure Databricks
• Strong SQL skills with RDBMS or NoSQL databases
• Experience with developing APIs using FastAPI or similar frameworks in Python
• Familiarity with the DevOps lifecycle (git, Jenkins, etc.), CI/CD processes
• Good understanding of ETL/ELT processes
• Experience in financial services industry, financial instruments, asset classes and market data are a plus.
AWS Data Engineer (Only W2)
Data scientist job in Charlotte, NC
Title: AWS Data Engineer
Experience: 10 years
Must Have Skills:
• Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services.
• Experience in Snowflake and Data Build Tool.
• Expertise in DBT, NodeJS, and Python.
• Expertise in Informatica, Power BI, databases, and Cognos.
Nice to Have Skills:
Detailed Job Description:
• Proven experience in leading teams across locations.
• Knowledge of DevOps processes, Infrastructure as Code and their purpose.
• Good understanding of data warehouses, their purpose, and implementation
• Good communication skills.
Kindly share your resume at ******************
Palantir Data Engineer
Data scientist job in Charlotte, NC
Build and maintain data pipelines and workflows in Palantir Foundry.
Design, train, and deploy ML models for classification, optimization, and forecasting use cases.
Apply feature engineering, data cleaning, and modeling techniques using Python, Spark, and ML libraries.
Create dashboards and data applications using Slate or Streamlit to enable operational decision-making.
Implement generative AI use cases using large language models (GPT-4, Claude, etc.).
W2 Opportunity // GCP Data Engineer // Atlanta, GA
Data scientist job in Atlanta, GA
Job Description: GCP Data Engineer
Rate: $50/hr. on W2 (No C2C)
We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices.
Responsibilities
Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform.
Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer.
Write efficient and maintainable Python code to support data ingestion, transformation, and automation.
Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling (see the sketch after this list).
Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions.
Ensure data quality, governance, and reliability across all pipelines.
Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets.
Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.
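A hedged sketch of a BigQuery transformation step like those above; the project, dataset, and table names are hypothetical:

```python
# Illustrative BigQuery transformation: aggregate raw orders into a
# curated daily-sales table. All table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT store_id,
           DATE(order_ts) AS order_date,
           SUM(amount)    AS daily_sales
    FROM `my-project.raw.orders`
    GROUP BY store_id, order_date
"""

job_config = bigquery.QueryJobConfig(
    destination="my-project.curated.daily_sales",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.query(sql, job_config=job_config).result()   # blocks until the job finishes
```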
Must-Have Skills
Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning).
Proficiency in Python for data engineering and pipeline automation.
Hands-on experience with Cloud Data Fusion for ETL/ELT development.
Working experience with key GCP services:
Cloud Composer
Cloud Storage
Cloud Functions
Pub/Sub
Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices.
Solid understanding of cloud data architecture and distributed processing.
Good-to-Have Skills
Experience with Vertex AI for ML pipeline integration or model deployment.
Familiarity with Dataproc (Spark/Hadoop) for large-scale processing.
Knowledge of CI/CD workflows, Git, and DevOps best practices.
Experience with Cloud Logging/Monitoring tools.
Lead Data Engineer - Palantir Foundry
Data scientist job in Atlanta, GA
Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that:
Address specific business challenges, integrate processes, and create great experiences
Connect our work to shared goals that propel WestRock forward in the Digital Age
Imagine how technology can advance the way we work by using disruptive technology
We are looking for forward-thinking technologists who can accelerate our focus areas such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology.
As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.
How you will impact WestRock:
Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data.
Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs (a sketch follows this list).
Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance.
Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling.
Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies.
Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.
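A minimal Airflow DAG sketch of the orchestration pattern above (using the Airflow 2.4+ `schedule` argument); the task logic and schedule are illustrative assumptions:

```python
# Minimal Airflow DAG: a daily extract -> transform chain.
# Task bodies and schedule are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull operational data from source systems")

def transform():
    print("clean and conform records for the curated layer")

with DAG(
    dag_id="operations_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) \
        >> PythonOperator(task_id="transform", python_callable=transform)
```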
What you need to succeed:
Education: Bachelor's degree in computer science or similar
At least 6 years of strong Data Engineering experience
Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development.
Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines.
Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing.
Proven experience deploying data solutions on AWS or Azure, with strong understanding of cloud-native services.
Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment.
Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment.
Good communication skills, with the ability to collaborate effectively across technical and non-technical teams.
Good analytical and troubleshooting abilities.
What we offer:
Corporate culture based on integrity, respect, accountability and excellence
Comprehensive training with numerous learning and development opportunities
An attractive salary reflecting skills, competencies and potential
A career with a global packaging company where Sustainability, Safety and Inclusion are business drivers and foundational elements of the daily work.
Lead Azure Databrick Engineer
Data scientist job in Atlanta, GA
Individual Contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time.
An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.
Must Have Skills:
Experience migrating from other platforms to Databricks
Proficiency in Databricks and Azure Cloud, Databricks Asset Bundles, and a holistic vision of the data strategy.
Proficiency in Data Streaming and Data Modeling
Experience in architecting at least two large-scale big data projects
Strong understanding of data scaling and its complexities
Experience with data archiving and purging mechanisms.
Job Requirements
• Degree in computer science or equivalent preferred
• Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks.
• 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques.
• Working knowledge and experience with "Cloud Architectures" (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure Cloud computing
• Should have architected solutions for Cloud environments such as Microsoft Azure and/or GCP
• Experience with debugging and performance tuning in distributed environments
• Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
• Experience dealing with structured and unstructured data.
• Must have Python, PySpark experience.
• Experience in ML and/or graph analysis is a plus
Data Engineer - OrcaWorks AI
Data scientist job in Atlanta, GA
Experience Level: Entry-level (Master's preferred)
About OrcaWorks AI
At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.
Key Responsibilities
Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models.
Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa.
Implement data validation, transformation, and storage solutions using modern ETL frameworks.
Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training.
Support database management, SQLModel, and data governance practices across services.
Required Skills & Qualifications
Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering.
Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks.
Hands-on experience with Airbyte, Prefect, and DBT.
Familiarity with search and indexing systems like Vespa or ElasticSearch.
Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration.
Strong understanding of data security and applied AI workflows.
Data Engineer w/ Python & SQL
Data scientist job in Alpharetta, GA
We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.
MUST HAVES
A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies.
Solid understanding of and hands-on experience with major cloud platforms.
Experience in designing and implementing data pipelines.
Must have strong Python, SQL, and GCP skills.
Responsibilities
Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer (see the sketch after this list).
Develop and tune BigQuery models, queries, and ingestion processes.
Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
Ensure data governance, security, and reliable pipeline operations.
Collaborate with data science teams and support Vertex AI-based ML workflows.
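A tiny Apache Beam sketch (the SDK behind Dataflow) of the batch pipeline shape above; the records and logic are invented for the example:

```python
# Minimal Apache Beam pipeline: parse, filter, and aggregate toy records.
# Input data and transform logic are illustrative assumptions.
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Create" >> beam.Create(["12,ok", "7,fail", "19,ok"])
     | "Parse" >> beam.Map(lambda line: line.split(","))
     | "KeepOk" >> beam.Filter(lambda rec: rec[1] == "ok")
     | "ToInt" >> beam.Map(lambda rec: int(rec[0]))
     | "Sum" >> beam.CombineGlobally(sum)
     | "Print" >> beam.Map(print))
```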
Must-Have
3-5+ years of data engineering experience.
Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
Solid ETL/ELT and data modeling experience.
Nice-to-Have
GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.