Data Scientist
Data Scientist job at Guidehouse
Job Family:
Data Science Consulting
Travel Required:
Up to 75%
Clearance Required:
Active Secret
What You Will Do:
Our consultants on the Defense & Security AI & Data team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science: data querying and wrangling; data engineering; data visualization, dashboarding, and business intelligence (BI); predictive analytics, machine learning (ML), and artificial intelligence (AI); and robotic process automation (RPA). Our services enable our clients to define their information strategy, enable mission-critical insights and data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.
Belong to a collaborative team dedicated to AI, Advanced Analytics and Intelligent Automation.
Focus on client services, while also supporting business development, internal firm initiatives, and ongoing professional development in data science, data analytics, and/or data engineering.
Work as a data science/data analytics/business intelligence professional in collaboration with cross-functional teams.
Continue career development in data science and data analytics software, methods, and consulting.
What You Will Need:
An ACTIVE and MAINTAINED "SECRET" Federal or DoD security clearance
Bachelor's degree is required
Minimum THREE (3) years of relevant professional experience OR 2 years of relevant professional experience with a Master's degree
High level of proficiency in Python
Expertise in Tableau dashboard development
Ability to design data modeling processes, create algorithms and predictive models, as well as perform custom analysis
Experience leading data-driven analytic activities, including identifying key indicators of compliance and effectiveness critical to mission/business success
Experience manipulating large data sets and using them to identify trends and reach meaningful conclusions to inform strategic business decisions
Ability to clean, aggregate, and organize data from multiple sources and transfer it to data warehouses
Ability to define data requirements and ensure storage, availability, and coherence of electronically stored information to guarantee it meets the organization's needs
Competency in data mining, including structuring raw data to formulate or recognize patterns, generating new information and unlocking insights that help the organization better manage risks
Proficiency in PowerBI
Ability to support on-site and hybrid projects in the DC metro area
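The clean-aggregate-organize work described above can be sketched in a few lines of Python. This is a toy illustration, not Guidehouse's tooling; all field names and values are invented. It merges records from two sources, drops incomplete rows, and sums values per category, the kind of step that precedes a warehouse load:

```python
from collections import defaultdict

def clean_and_aggregate(sources):
    """Merge raw records from multiple sources, drop incomplete rows,
    and sum values per category."""
    totals = defaultdict(float)
    for records in sources:
        for rec in records:
            # Basic cleaning: normalize the key, skip rows missing a
            # category or carrying a non-numeric value
            category = (rec.get("category") or "").strip().lower()
            value = rec.get("value")
            if not category or not isinstance(value, (int, float)):
                continue
            totals[category] += float(value)
    return dict(totals)

# Invented sample data from two "sources"
source_a = [{"category": "Travel", "value": 120.0}, {"category": "", "value": 5}]
source_b = [{"category": "travel", "value": 30}, {"category": "Lodging", "value": "n/a"}]
result = clean_and_aggregate([source_a, source_b])  # {'travel': 150.0}
```

In practice this shape of work would usually be done in pandas or SQL against the warehouse itself; the sketch only shows the task.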
What Would Be Nice To Have:
Knowledge of additional coding languages such as R and SQL
Competencies in IT Risk and/or Cyber domains
Quality control experience including debugging and conducting Quality Assurance of new features and enhancements
Ability to facilitate meetings and present topics to diverse groups of functional and technical staff
Experience developing new and re-designing existing processes to meet new and/or evolving business requirements
Excellent verbal and written communication skills
The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Agentic AI & Data Scientist
Agentic AI & Data Scientist job at Guidehouse
Job Family:
Data Science Consulting
Travel Required:
Up to 10%
Clearance Required:
Ability to Obtain Public Trust
What You Will Do:
We are seeking a forward-thinking Senior Consultant to lead initiatives in Agentic AI, Generative AI, and advanced data science. This role will drive innovation in healthcare analytics and enterprise AI solutions, leveraging cutting-edge technologies to transform data into actionable intelligence. The ideal candidate will succeed a technical leader who pioneered AI-driven workflows and generative modeling within our organization.
Design and implement Agentic AI systems for autonomous decision-making and workflow optimization.
Develop Generative AI applications for summarization, predictive modeling, and conversational interfaces.
Build and maintain scalable data pipelines integrating structured and unstructured data.
Apply advanced statistical and machine learning techniques to policy evaluation.
Lead AI-driven projects in retrieval-augmented generation (RAG), prompt engineering, and knowledge graph integration.
Collaborate with cross-functional teams to deliver AI-powered dashboards and insights.
Mentor junior staff in AI methodologies and data science best practices.
What You Will Need:
US Citizenship is required
Master's degree is required
Minimum FIVE (5) years of experience in AI and data analytics, including healthcare or enterprise data.
Expertise in Python, R, and frameworks for AI/ML (e.g., TensorFlow, PyTorch).
Hands-on experience with Generative AI models (LLMs, diffusion models) and Agentic AI architectures.
Strong understanding of SQL and cloud-based data platforms.
Ability to translate complex AI concepts into business solutions.
Experience with RAG pipelines, vector databases, and knowledge graphs.
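To make the RAG requirement concrete, here is a minimal sketch of the retrieval step only, using invented toy 3-dimensional vectors in place of real embeddings from a model or vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the top-k document texts by cosine similarity -- the
    retrieval step that precedes prompt construction in a RAG pipeline."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy corpus; a real pipeline would embed documents with a model
store = [
    {"text": "claims policy", "embedding": [1.0, 0.0, 0.0]},
    {"text": "eligibility rules", "embedding": [0.9, 0.1, 0.0]},
    {"text": "office locations", "embedding": [0.0, 0.0, 1.0]},
]
context = retrieve([1.0, 0.05, 0.0], store, k=2)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

In a real pipeline the retrieved passages are injected into an LLM prompt; vector databases simply make this similarity search scale.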
What Would Be Nice To Have:
Familiarity with AI governance, ethical AI practices, and compliance.
Proficiency in visualization tools and interactive dashboards.
Agile project management and sprint-based delivery.
Prior experience in technology consulting
The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
Senior Data Engineer
Charlotte, NC jobs
**NO 3rd Party vendor candidates or sponsorship**
Role Title: Senior Data Engineer
Client: Global construction and development company
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote based in ET or CT time zones
Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.
Key Responsibilities
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
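The API-heavy ingestion pattern above (pagination, retries, backoff) can be sketched independently of any particular service. This is an illustrative stand-in, not the client's framework: `fetch_page` substitutes for a real REST call, and the stub API and its data are invented:

```python
import time

def fetch_all_pages(fetch_page, max_retries=3, backoff=0.01):
    """Drain a paginated API, retrying transient failures with backoff.
    fetch_page(cursor) returns (records, next_cursor); a next_cursor of
    None marks the last page."""
    records, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise
                time.sleep(backoff * 2 ** attempt)  # exponential backoff
        records.extend(page)
        if cursor is None:
            return records

# Stub API: two pages, with one transient failure on the second request
calls = {"n": 0}
def flaky_api(cursor):
    calls["n"] += 1
    if calls["n"] == 2:
        raise ConnectionError("transient")
    if cursor is None:
        return [1, 2], "page2"
    return [3], None

data = fetch_all_pages(flaky_api)  # [1, 2, 3]
```

A production framework would add auth token refresh, rate-limit awareness, and dead-letter handling on top of this core loop.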
Requirements:
Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with strong focus on Azure Cloud.
Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
Advanced PySpark/Spark experience for complex transformations and performance optimization.
Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency).
Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
Preferred Skills
Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
Senior Data Engineer
Nashville, TN jobs
Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at ***************
YOUR ROLE
Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights.
You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology and evidence-based clinical care.
As a Senior Data Engineer you will:
Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams.
Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
Utilize DBT (data build tool) for data transformation and modeling, ensuring data quality and consistency.
Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
Work with SageMaker for machine learning data preparation and integration.
Ensure data security, privacy, and compliance with industry regulations.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
Implement best practices for data engineering, including code reviews, testing, and automation.
Mentor junior data engineers and share knowledge on data engineering best practices.
Stay up-to-date with the latest advancements in data engineering, AWS services, and related technologies.
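A core piece of the ELT work described above is the incremental load: applying a batch of changed rows to a target table so new keys are inserted and existing ones updated. In Redshift this would typically be a staging-table MERGE or a dbt incremental model; this in-memory sketch with invented rows only shows the semantics:

```python
def merge_incremental(target, updates, key="id"):
    """Upsert a batch of change records into a target table (a dict keyed
    by id here), keeping the latest version of each row."""
    for row in updates:
        target[row[key]] = row  # insert new keys, overwrite changed ones
    return target

# Invented rows standing in for a warehouse table and a change batch
warehouse = {1: {"id": 1, "status": "pending"}, 2: {"id": 2, "status": "paid"}}
batch = [{"id": 1, "status": "paid"}, {"id": 3, "status": "pending"}]
merge_incremental(warehouse, batch)
# warehouse now holds three rows, with id 1 updated to "paid"
```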
After 3 months on the job you will have:
Developed a strong understanding of Concert's data engineering infrastructure
Learned the business domain and how it maps to the information architecture
Made material contributions towards existing key results
After 6 months you will have:
Led a major initiative
Become the first point of contact when issues related to the data warehouse are identified
After 12 months you will have:
Taken responsibility for the long term direction of the data engineering infrastructure
Proposed and executed key results with an understanding of the business strategy
Communicated the business value of major technical initiatives to key non-technical business stakeholders
WHAT LEADS TO SUCCESS
Self-Motivated: A team player with a positive attitude and a proactive approach to problem-solving.
Executes Well: You are biased to action and get things done. You acknowledge unknowns and recover from setbacks well.
Comfort with Ambiguity: You aren't afraid of uncertainty and blazing new trails; you care about building towards a future that is different from today.
Technical Bravery: You are comfortable with new technologies and eager to dive in to understand data in the raw and in its processed states.
Mission-Focused: You are personally motivated to drive more affordable, equitable, and effective integration of genomic technologies into clinical care.
Effective Communication: You build rapport and great working relationships with senior leaders and peers, and use the relationships you've built to drive the company forward.
RELEVANT SKILLS & EXPERIENCE
Minimum of 4 years' experience working as a data engineer
Bachelor's degree in software or data engineering, or comparable technical certification/experience
Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
Strong understanding of data warehousing principles and best practices
Experience with DBT for data transformation and modeling.
Proficiency in SQL and at least one programming language (e.g., Python, Scala)
Familiarity or experience with the following tools / concepts are a plus: BI tools such as Metabase; Healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; ZeroETL and Kinesis data streams
COMPENSATION
Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.
LOCATION
Concert is based in Nashville, Tennessee and supports a remote work environment.
For further questions, please contact: ******************.
Data Engineer
Denver, CO jobs
Data Engineer
Compensation: $80 - $90/hour, depending on experience
Inceed has partnered with a great company to help find a skilled Data Engineer to join their team!
Join a dynamic team as a contract Data Engineer, where you'll be the backbone of data-driven operations. This role offers the opportunity to work with a modern tech stack in a hybrid on-prem and cloud environment. You'll design and implement innovative solutions to complex challenges, collaborating with data scientists, location intelligence experts, and ML engineers. This exciting opportunity has opened due to a new project initiative and you'll be making a tangible impact.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from multiple sources ensuring quality and reliability
Develop automation workflows and BI solutions
Mentor others and contribute to the knowledge base
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of experience in data engineering
Experience with large oil and gas datasets
Proficiency in SQL and Python
Hands-on experience in cloud environments (Azure, AWS, or GCP)
Familiarity with Apache Kafka, Apache Flink, or Azure Event Hubs
Nice to Have Skills & Experience:
Experience with Palantir Foundry
Knowledge of query federation platforms
Experience with modern data stack tools like dbt or Airflow
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. Our mission is simple: we're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
Denver, CO jobs
Data Engineer
Compensation: $80-$90/hour, depending on experience
Inceed has partnered with a great energy company to help find a skilled Data Engineer to join their team!
Join a dynamic team where you'll be at the forefront of data-driven operations. This role offers the autonomy to design and implement groundbreaking data architectures, working primarily remotely. This position is open due to exciting new projects. You'll be collaborating with data scientists and engineers, making impactful contributions to the company's success.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from various sources ensuring consistency and reliability
Develop automation workflows and BI solutions
Mentor others and advise on data process best practices
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of data engineering experience
Experience with PI
Experience with SCADA
Experience with Palantir
Experience with large oil and gas datasets
Proficiency in Python and SQL
Hands-on experience in cloud environments (Azure, AWS, GCP)
Nice to Have Skills & Experience:
Familiarity with Apache Kafka or Flink
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. Our mission is simple: we're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
Charlotte, NC jobs
Job Title: Azure Databricks Engineer (Onsite)
Years of Experience: 7-12 years
Full Time
We are seeking a highly skilled and motivated Technical Team Lead with extensive experience in Azure Databricks to join our dynamic team. The ideal candidate will possess a strong technical background, exceptional leadership abilities, and a passion for driving innovative solutions. As a Technical Team Lead, you will be responsible for guiding a team of developers and engineers in the design, development, and implementation of data-driven solutions that leverage Azure Databricks.
Responsibilities:
Lead and mentor a team of technical professionals, fostering a collaborative and high-performance culture.
Design and implement data processing solutions using Azure Databricks, ensuring scalability and efficiency.
Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
Oversee the development lifecycle, from planning and design to deployment and maintenance.
Conduct code reviews and provide constructive feedback to team members to ensure code quality and adherence to best practices.
Stay up to date with industry trends and emerging technologies related to Azure Databricks and data engineering.
Facilitate communication between technical and non-technical stakeholders to ensure alignment on project goals.
Identify and mitigate risks associated with project delivery and team performance.
Mandatory Skills:
Proven expertise in Azure Databricks, including experience with Spark, data pipelines, and data lakes.
Strong programming skills in languages such as Python, Scala, or SQL.
Experience with cloud-based data storage solutions, particularly Azure Data Lake Storage and Azure SQL Database.
Solid understanding of data modeling, ETL processes, and data warehousing concepts.
Demonstrated ability to lead technical teams and manage multiple projects simultaneously.
Preferred Skills:
Familiarity with Azure DevOps for CI/CD processes.
Experience with machine learning frameworks and libraries.
Knowledge of data governance and compliance standards.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
7-10 years of experience in data engineering, software development, or a related technical role.
Proven track record of leading technical teams and delivering successful projects.
Relevant certifications in Azure or data engineering are a plus.
If you are a passionate Technical Team Lead with a strong background in Azure Databricks and a desire to drive innovation, we encourage you to apply and join our team.
Data Engineer
Bloomington, MN jobs
Key Responsibilities
Design, build, and maintain scalable data pipelines for ingesting, cleaning, and transforming provider data.
Develop and optimize workflows in Databricks for large-scale data processing.
Implement and manage data storage solutions using the Microsoft Azure suite, including Azure Data Lake, Blob Storage, and Azure SQL.
Collaborate with API developers and data consumers to ensure seamless API data consumption.
Work closely with data scientists, analysts, and product owners to ensure data quality, consistency, and availability.
Contribute to the evolution of our data lake and warehouse architecture to support current and future analytics needs.
Required Qualifications
Hands-on experience with Databricks and Apache Spark.
Proficient in SQL, Python, PySpark, and Git
Strong proficiency with Microsoft Azure cloud services, especially in data storage and compute.
Proven experience with data lakes and/or data warehouses.
Solid understanding of REST APIs and experience consuming them in data workflows.
Experience with data ingestion, ETL/ELT pipelines, and data cleaning techniques.
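Consuming a REST API in a data workflow, as required above, usually amounts to flattening nested JSON into rows for a landing table. A minimal sketch follows; the provider/location fields are invented for illustration, not the actual schema:

```python
import json

def flatten_providers(payload):
    """Flatten a nested provider API response into flat rows suitable
    for a lake or warehouse landing table."""
    rows = []
    for provider in payload.get("providers", []):
        # One output row per (provider, location) pair
        for loc in provider.get("locations", [{}]):
            rows.append({
                "provider_id": provider.get("id"),
                "name": provider.get("name"),
                "city": loc.get("city"),
            })
    return rows

# Invented sample response, as a real API would return it over HTTP
raw = json.loads(
    '{"providers": [{"id": 7, "name": "Dr. Lee", '
    '"locations": [{"city": "Bloomington"}, {"city": "Edina"}]}]}'
)
rows = flatten_providers(raw)  # two rows, one per location
```

From here, the rows would be written to the Bronze layer and cleaned/conformed downstream.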
Preferred Qualifications
Hands-on experience with Power BI
Experience building or maintaining GraphQL APIs.
Experience designing and developing REST APIs.
Familiarity with AI/ML integration in data pipelines or analytics workflows.
Knowledge of healthcare data standards and provider data models is a plus.
The projected hourly range for this position is $65.00 to $85.00.
On-Demand Group (ODG) provides employee benefits which includes healthcare, dental, and vision insurance. ODG is an equal opportunity employer that does not discriminate on the basis of race, color, religion, gender, sexual orientation, age, national origin, disability, or any other characteristic protected by law.
Senior Data Platform Engineer (28702)
Minnetonka, MN jobs
Title: Senior Data Platform Engineer - Oracle/Snowflake/Azure
Job Type: Contract-to-Hire (6 months). *All candidates must be interested in and eligible for conversion without sponsorship.
Industry: Health Insurance
Pay range: $65 to $78/hour
Key Technologies: Oracle, Snowflake, Azure Cloud, MS SQL
---
About the Role
We are seeking a highly skilled Senior Data Platform Engineer to join a leading healthcare organization headquartered in Minnetonka, MN. This role focuses on designing, implementing, and maintaining both legacy and modern data platforms that support enterprise operations. You will collaborate with experienced engineers and architects to optimize databases, develop data pipelines, and drive cloud integration initiatives.
This position is ideal for a seasoned professional who thrives on solving complex data challenges, contributing to modernization efforts, and working in a fast-paced Agile environment.
Responsibilities
Design, build, and maintain robust data pipelines across cloud and on-premises environments.
Administer, monitor, and optimize databases including Oracle, Snowflake, Azure SQL, and MS SQL.
Manage database provisioning, configuration, patching, and backup/recovery processes.
Collaborate with developers, analysts, and DBAs to troubleshoot issues and optimize queries.
Support data migration and integration efforts as part of cloud transformation initiatives.
Ensure database security, access controls, and compliance with internal standards.
Contribute to documentation, runbooks, and knowledge sharing within the team.
Participate in Agile ceremonies and planning activities, fostering a culture of shared ownership and continuous improvement.
Join an on-call rotation to support 24/7 database operations and incident response.
Required Qualifications
7+ years of experience in database engineering or a related technical role.
Hands-on experience with at least one of the following: Oracle, Snowflake, or Azure SQL Database.
Solid knowledge of cloud platforms (Azure preferred) and cloud-native data services.
Strong understanding of system performance tuning and query optimization.
Ability to work collaboratively and communicate effectively with technical peers.
Preferred Qualifications
Experience building and maintaining data pipelines in cloud or hybrid environments.
Familiarity with Liquibase or other database change management tools.
Proficiency in scripting or automation (e.g., Ansible, Python, Terraform).
Experience with CI/CD pipelines or DevOps practices.
Knowledge of monitoring tools and observability platforms.
Background in Agile or SAFe environments.
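As a small illustration of the scripting/automation item in the preferred qualifications, here is a hedged, stdlib-only sketch of one common DBA automation task: flagging databases whose last successful backup is older than an allowed window. The catalog dict, database names, and 24-hour window are all invented for illustration; in practice this data would come from RMAN, Snowflake metadata, or an Azure SQL backup API.

```python
from datetime import datetime, timedelta

# Hypothetical backup catalog: database name -> timestamp of last good backup.
# A real script would query the backup tooling instead of hard-coding this.
BACKUP_CATALOG = {
    "claims_db": datetime(2024, 1, 10, 2, 0),
    "members_db": datetime(2024, 1, 8, 2, 0),
}

def stale_backups(catalog, now, max_age_hours=24):
    """Return database names whose last backup is older than the allowed window."""
    cutoff = now - timedelta(hours=max_age_hours)
    return sorted(name for name, ts in catalog.items() if ts < cutoff)

if __name__ == "__main__":
    now = datetime(2024, 1, 10, 12, 0)
    print(stale_backups(BACKUP_CATALOG, now))  # members_db is more than 24h old
```

A check like this would typically be wired into the monitoring/on-call tooling mentioned in the responsibilities rather than run by hand.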
Salary range for this position is $110,400-$154,600.
Annual salary range placement will depend on a variety of factors including, but not limited to, education, work experience, applicable certifications and/or licensure, the position's scope and responsibility, internal pay equity and external market salary data.
Benefits
Dahl Consulting is proud to offer a comprehensive benefits package to eligible employees that will allow you to choose the best coverage to meet your family's needs. For details, please review the DAHL Benefits Summary: ***********************************************
IT Data Engineer
Lakewood, CO
IT Data Engineer
Compensation: $125k-$155k (DOE)
Inceed has partnered with a great company to help find a skilled IT Data Engineer to join their team!
Join a dynamic team where innovation meets opportunity. This role is pivotal in advancing AI and data modernization initiatives, bridging traditional database administration with cutting-edge AI data infrastructure. The team thrives on collaboration and offers a hybrid work schedule.
Key Responsibilities & Duties:
Design and maintain scalable data pipelines.
Develop RAG workflows for AI information access.
Build secure connectors and APIs for data retrieval.
Monitor and optimize data flows for consistency.
Lead database administration and performance tuning.
Manage database upgrades and storage optimization.
Implement database security controls and standards.
Support application integrations and data migrations.
Define and maintain data models and metadata.
Collaborate with teams to ensure compliance requirements.
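The "RAG workflows for AI information access" responsibility above can be sketched in miniature: retrieve the most relevant documents for a query, then augment the prompt with that context before it reaches a model. This toy version uses bag-of-words cosine similarity over an invented two-document corpus purely to show the retrieve-then-augment shape; a production workflow would use a real embedding model and a vector store.

```python
import math
from collections import Counter

# Toy corpus standing in for enterprise documents (invented for illustration).
DOCS = {
    "vacation": "Employees accrue vacation days each pay period.",
    "expenses": "Submit expense reports within thirty days of travel.",
}

def embed(text):
    """Bag-of-words 'embedding', a stand-in for a real vector model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top-k names."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(docs[d])), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context before sending it to an LLM."""
    context = "\n".join(docs[d] for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The secure connectors and APIs in the next bullet would sit in front of `DOCS` here, controlling which documents retrieval is allowed to see.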
Required Qualifications & Experience:
Bachelor's degree in Computer Science or related field.
7+ years in database administration or data engineering.
Advanced SQL and data modeling skills.
Experience with AI and analytics data pipelines.
Familiarity with cloud-based data ecosystems.
Hands-on experience with RAG and vectorization.
Proficiency in scripting languages like Python.
Experience leading vendor-to-internal transitions.
Nice to Have Skills & Experience:
Experience integrating enterprise systems into data platforms.
Knowledge of data governance frameworks.
Understanding of semantic data modeling.
Experience with cloud migration of database workloads.
Perks & Benefits:
This opportunity includes a comprehensive and competitive benefits package-details will be shared during later stages of the hiring process.
Other Information:
Hybrid work schedule
This position requires a background check and drug test
If you are interested in learning more about the IT Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
New York, NY
Hey all, we are looking for a mid-level Data Engineer (no third parties, please). As a result of this expansion, we are seeking experienced data engineers with 5+ years of relevant experience to support the design and development of a strategic data platform for SMBC Capital Markets and Nikko Securities Group.
Qualifications and Skills
• Proven experience as a Data Engineer with experience in Azure cloud.
• Experience implementing solutions using:
• Azure cloud services
• Azure Data Factory
• Azure Data Lake Storage Gen2
• Azure Databases
• Azure Data Fabric
• API Gateway management
• Azure Functions
• Well versed with Azure Databricks
• Strong SQL skills with RDBMS or NoSQL databases
• Experience with developing APIs using FastAPI or similar frameworks in Python
• Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes
• Good understanding of ETL/ELT processes
• Experience in financial services industry, financial instruments, asset classes and market data are a plus.
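The ETL/ELT understanding asked for above can be shown end to end with nothing but the standard library: extract from a raw source, transform (cast and normalize), and load into a relational store. The trade fields and table name are invented for illustration and are not from SMBC's actual platform; a real pipeline would use Azure Data Factory or Databricks rather than SQLite.

```python
import csv
import io
import sqlite3

# Illustrative raw extract (invented trade records).
RAW = "trade_id,notional,currency\n1,1000000,usd\n2,250000,jpy\n"

def extract(text):
    """Extract: parse the raw CSV feed into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast notionals to numbers and normalize currency codes."""
    return [(int(r["trade_id"]), float(r["notional"]), r["currency"].upper())
            for r in rows]

def load(rows, conn):
    """Load: write the conformed rows to a relational table."""
    conn.execute("CREATE TABLE trades (trade_id INT, notional REAL, currency TEXT)")
    conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM trades").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
```

In an ELT variant, the `transform` step would instead run as SQL inside the warehouse after loading the raw rows.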
Data Engineer - AI & Data Modernization
Data engineer job at Guidehouse
Job Family:
Data Science Consulting
Travel Required:
Up to 25%
Clearance Required:
Ability to Obtain Public Trust
What You Will Do:
We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation.
This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics.
Client Leadership & Delivery
Collaborate with FCA clients to understand data architecture and reporting needs.
Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau.
Ensure delivery excellence and measurable outcomes across data migration and visualization efforts.
Solution Development & Innovation
Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python.
Develop and optimize Tableau dashboards aligned with federal reporting standards.
Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting.
Practice & Team Leadership
Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions.
Support documentation, testing, and deployment of data products.
Mentor junior developers and contribute to reusable frameworks and accelerators.
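The automated metadata extraction mentioned under Solution Development can be illustrated with a small, assumption-laden sketch: profile a tabular extract and emit column-level metadata (type guess, null count, distinct count). The sample rows are invented; in a real engagement this would run over Databricks tables and feed a governance catalog.

```python
def _is_number(v):
    """Best-effort numeric check used by the profiler."""
    try:
        float(v)
        return True
    except (TypeError, ValueError):
        return False

def profile(rows):
    """Emit simple column-level metadata for a list of dict rows."""
    columns = rows[0].keys() if rows else []
    meta = {}
    for col in columns:
        values = [r[col] for r in rows]
        non_null = [v for v in values if v not in (None, "")]
        numeric = non_null and all(_is_number(v) for v in non_null)
        meta[col] = {
            "type": "numeric" if numeric else "text",
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return meta
```

Output like this is the kind of machine-generated metadata that can then drive the dashboard scripting and clustering steps listed above.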
What You Will Need:
US Citizenship is required
Bachelor's degree is required
Minimum TWO (2) years of experience in data engineering and dashboard development
Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure)
Strong proficiency in SQL, Python, and Spark
Experience building ETL pipelines and integrating data sources into reporting platforms
Familiarity with data governance, metadata, and compliance frameworks
Excellent communication, facilitation, and stakeholder engagement skills
What Would Be Nice To Have:
AI/LLM Certifications
Experience working with FCA clients such as DOT, GSA, USDA, or similar
Familiarity with federal contracting and procurement processes
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
AWS Data Engineer
McLean, VA
Responsibilities:
Design, build, and maintain scalable data pipelines using AWS Glue and Databricks.
Develop and optimize ETL/ELT processes using PySpark and Python.
Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
Ensure data quality, consistency, and reliability across systems.
Optimize performance of large-scale distributed data processing workflows.
Develop automation scripts and frameworks for data ingestion, transformation, and validation.
Follow best practices for data governance, security, and compliance.
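The data quality and validation responsibilities above can be sketched as a rule-driven record check that runs before loading. The field names and rules here are illustrative, not from any particular Glue job; on AWS this logic would typically live in a Glue or Databricks job (or use a library such as Great Expectations) rather than plain Python.

```python
# Illustrative validation rules: field name -> predicate the value must satisfy.
RULES = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "region": lambda v: v in {"us-east-1", "us-west-2"},
}

def validate(records, rules):
    """Split records into (valid, failures); each failure carries its bad fields."""
    valid, failures = [], []
    for rec in records:
        bad = [f for f, ok in rules.items() if not ok(rec.get(f))]
        if bad:
            failures.append((rec, bad))
        else:
            valid.append(rec)
    return valid, failures
```

Routing `failures` to a quarantine location instead of the warehouse is one common way to keep the main tables consistent while preserving the bad rows for investigation.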
Required Skills & Experience:
5-8 years of hands-on experience in Data Engineering.
Strong proficiency in Python and PySpark for data processing and transformation.
Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena.
Hands-on experience with Databricks for building and managing data pipelines.
Experience working with large-scale data systems and optimizing performance.
Solid understanding of data modeling, data lake architecture, and ETL design principles.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Data Engineer
Tempe, AZ
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
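The Lakehouse skills above revolve around the medallion pattern (raw Bronze, cleaned Silver, aggregated Gold). Here is a pure-Python illustration of that layering with invented retail records; on Databricks each layer would be a Delta table written with PySpark, not an in-memory list.

```python
# Bronze: raw ingested events, kept as-is (including a malformed record).
bronze = [
    {"store": "AZ-01", "sku": "A", "qty": "2"},
    {"store": "AZ-01", "sku": "A", "qty": "3"},
    {"store": "AZ-01", "sku": None, "qty": "1"},  # malformed: missing sku
]

def to_silver(rows):
    """Silver: clean and conform - drop malformed rows, cast types."""
    return [{"store": r["store"], "sku": r["sku"], "qty": int(r["qty"])}
            for r in rows if r["sku"] is not None]

def to_gold(rows):
    """Gold: aggregate for consumption - units sold per (store, sku)."""
    totals = {}
    for r in rows:
        key = (r["store"], r["sku"])
        totals[key] = totals.get(key, 0) + r["qty"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

Unity Catalog, lineage, and RBAC from the same list would then govern who can read each layer, with BI tools typically pointed only at Gold.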
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Scientist - ML, Python
McLean, VA
10+ years of experience required in Information Technology.
• Python Programming: At least 5 years of hands-on experience with Python, particularly in frameworks like FastAPI, Django, and Flask, and experience using AI frameworks.
• Access Control Expertise: Strong understanding of access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC).
• API and Connector Development: Experience developing API connectors in Python for extracting and managing access control data from platforms such as Azure, SharePoint, Java, .NET, and WordPress.
• AI and Machine Learning: Hands-on experience integrating AI into applications to automate tasks such as access control reviews and to identify anomalies.
• Cloud and Microsoft Technologies: Proficiency with Azure services and the Microsoft Graph API, and experience integrating Python applications with Azure for access control reviews and reporting.
• Reporting and Visualization: Experience using Python reporting libraries (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports on security and access control metrics.
• Communication Skills: Ability to collaborate with stakeholders, explain complex technical solutions, and deliver high-quality solutions on time.
• PlainID: Experience or familiarity with the PlainID platform for identity and access management.
• Azure OpenAI: Familiarity with Azure OpenAI technologies and their application in access control and security workflows.
• Power BI: Experience with Microsoft Power BI for data visualization and reporting.
• Agile Methodologies: Experience working in Agile environments and familiarity with Scrum methodologies for delivering security solutions.
Senior Data Engineer
Austin, TX
We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX.
The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.
Key Responsibilities
1. Data Architecture & Strategy
Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing.
Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.
2. Data Integration & Pipeline Development
Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
Optimize pipelines for cost-effectiveness, performance, and scalability.
3. Master Data Management (MDM) & Data Governance
Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
Define and manage data governance, metadata, and data quality frameworks.
Partner with business teams to align data standards and maintain data integrity across domains.
4. API Management & Integration
Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
Design secure, reliable data services for internal and external consumers.
Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.
5. Database & Platform Administration
Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
Monitor and optimize cost, performance, and scalability across Azure data services.
Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.
6. Collaboration & Leadership
Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
Mentor junior engineers and define best practices for coding, data modeling, and solution design.
Contribute to enterprise-wide data strategy and roadmap development.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
5+ years of hands-on experience in Azure-based data engineering and architecture.
Strong proficiency with the following:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2
SQL, Python, PySpark, PowerShell
Azure API Management and Logic Apps
Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
Familiarity with MDM concepts, data governance frameworks, and metadata management.
Experience with automation, data-focused CI/CD, and IaC.
Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.
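Among the modeling approaches listed above, the star schema is the easiest to show concretely: one fact table keyed to dimension tables, queried by joining and aggregating. The sketch below uses SQLite with invented table and column names purely to illustrate the shape; in this role the equivalent objects would live in Azure Synapse or Databricks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimensions: descriptive attributes keyed by surrogate keys.
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT);
-- Fact: measures plus foreign keys into each dimension.
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    amount REAL
);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO dim_date VALUES (20240101, '2024-01-01');
INSERT INTO fact_sales VALUES (1, 20240101, 10.0), (1, 20240101, 5.0), (2, 20240101, 7.5);
""")

# Typical star-schema query: join the fact to a dimension and aggregate.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
```

A snowflake schema would further normalize the dimensions, and Data Vault would split them into hubs, links, and satellites; the fact/dimension join pattern above is the common baseline.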
What We Offer
Competitive compensation and benefits package
Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
Data Engineer
Austin, TX
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Biostatistician/AI Data Scientist I
Data engineer job at Guidehouse
Job Family:
Data Science Consulting
Travel Required:
Up to 10%
Clearance Required:
Ability to Obtain Public Trust
What You Will Do:
The Biostatistician/AI data scientist I will assist senior team members in performing statistical analyses and developing automated workflows for federal public health and life sciences projects. This role emphasizes foundational biostatistical skills combined with emerging technologies such as GenAI and automation.
Key Responsibilities
Support development and execution of statistical analyses under supervision.
Assist in creating automated workflows and reproducible reports using R Markdown and Python.
Contribute to data cleaning, preparation, and visualization tasks.
Collaborate with senior biostatisticians and epidemiologists on project deliverables.
Learn and apply GenAI tools for enhancing analytic efficiency.
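As a flavor of the statistical analyses described above, here is a stdlib-only sketch of descriptive statistics plus a simple ordinary-least-squares fit on an invented dose-response toy dataset. A production workflow on this team would use R (with R Markdown for reproducible reports) or Python's statsmodels rather than a hand-rolled OLS.

```python
from statistics import mean

# Invented toy data: dose levels and observed responses.
doses = [1.0, 2.0, 3.0, 4.0]
responses = [2.1, 3.9, 6.1, 7.9]

def ols(x, y):
    """Slope and intercept of y = slope*x + intercept by ordinary least squares."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

slope, intercept = ols(doses, responses)
```

Wrapping a fit like this in a parameterized report is what turns a one-off analysis into the automated, reproducible workflow the responsibilities describe.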
What You Will Need:
Bachelor's degree is required
Basic proficiency in R, SQL, and Python for data analysis.
Understanding of regression modeling and descriptive statistics.
Understanding of GenAI and other emerging technologies
Ability to work collaboratively in a team environment
What Would Be Nice To Have:
Master's degree
Experience with version control tools (e.g., GitHub).
Experience with AI and machine learning for data analytics
Exposure to GenAI tools and automation frameworks.
Familiarity with public health datasets and analytic standards.
The annual salary range for this position is $68,000.00-$113,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Data Scientist
Phoenix, AZ
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments