
Data Engineer jobs at Guidehouse

- 5942 jobs
  • Data Engineer, Life Sciences Technology Solutions

    Guidehouse 3.7 company rating

    Data engineer job at Guidehouse

    Job Family: Software Development & Support
    Travel Required: Up to 10%
    Clearance Required: None

    What You Will Do:
    The Data Engineer (Life Sciences Technology Solutions) is responsible for designing, building, and maintaining robust data pipelines and backend systems that support scalable software solutions for biopharma clients. Working within the data science and technology domain, this role collaborates with solution architects and full stack developers under the direction of the Life Sciences AI & Data Lead. Success in this position is measured by the ability to deliver reliable, high-performance data infrastructure that enables advanced analytics and digital transformation in life sciences.

    Responsibilities and Duties:
    - Design, develop, and maintain ETL processes and data pipelines for large-scale data integration.
    - Implement and optimize data storage solutions using SQL and NoSQL databases.
    - Build and manage big data frameworks such as Hadoop and Spark to support advanced analytics.
    - Integrate cloud data services, including AWS Glue and Azure Data Factory, into enterprise data workflows.
    - Develop backend solutions using Python and Java for data processing and transformation.
    - Collaborate with solution architects and other team members to ensure seamless API integration.
    - Orchestrate workflows using tools like Airflow and Luigi to automate data movement and processing (a minimal Airflow sketch follows this listing).
    - Ensure data quality, governance, and compliance with industry standards.
    - Implement streaming technologies (Kafka) for real-time data ingestion and processing.
    - Monitor and tune system performance to maintain reliability and scalability.
    - Document data engineering processes and provide technical support to project teams.

    What You Will Need:
    - Bachelor's degree in Computer Science, Information Systems, Engineering, or a related STEM field.
    - Minimum 6 years of experience in data engineering, backend development, or related roles.
    - Experience interconnecting multiple databases to better understand patient care and population health.
    - Proficiency in ETL development, data pipeline design, and workflow orchestration.
    - Competence in machine learning model development and applications.
    - Advanced programming skills in Python and JavaScript.
    - Knowledge of data warehousing, constructing and integrating API calls, and documenting dataflows.
    - Demonstrated ability to ensure data quality and governance.
    - Excellent analytical, problem-solving, and communication skills.
    - Ability to work collaboratively in a fast-paced, team-oriented environment.

    What Would Be Nice To Have:
    - Master's degree.
    - Experience building application stacks in the biopharma industry or a consulting environment.
    - Demonstrated proficiency in building Databricks or Dataiku data pipelines to manage automation and CI/CD activities.
    - Direct prior responsibility for data management in a biopharma or other life sciences context.

    The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer:
    Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
    Benefits include: Medical, Rx, Dental & Vision Insurance; Personal and Family Sick Time & Company Paid Holidays; Parental Leave; 401(k) Retirement Plan; Group Term Life and Travel Assistance; Voluntary Life and AD&D Insurance; Health Savings Account, Health Care & Dependent Care Flexible Spending Accounts; Transit and Parking Commuter Benefits; Short-Term & Long-Term Disability; Tuition Reimbursement, Personal Development, Certifications & Learning Opportunities; Employee Referral Program; Corporate Sponsored Events & Community Outreach; Care.com annual membership; Employee Assistance Program; Supplemental Benefits via Corestream (Critical Care, Hospital Indemnity, Accident Insurance, Legal Assistance and ID theft protection, etc.). The position may be eligible for a discretionary variable incentive bonus.

    About Guidehouse
    Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.

    All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $113k-188k yearly 13d ago
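    To make the orchestration duties above concrete, here is a minimal sketch of the kind of Airflow DAG that posting describes: a daily extract, transform, and load chain. The DAG id, task names, and callables are hypothetical placeholders, not part of the posting, and the `schedule` argument assumes Airflow 2.4+.

    ```python
    # Minimal Airflow DAG sketch: a daily extract -> transform -> load chain.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        print("pull source data")        # placeholder for a real extraction step


    def transform():
        print("clean and reshape data")  # placeholder for a real transform step


    def load():
        print("write to the warehouse")  # placeholder for a real load step


    with DAG(
        dag_id="life_sciences_etl",      # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task  # linear dependency chain
    ```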
  • Senior Data Analytics Engineer

    Revel It 4.3 company rating

    Columbus, OH jobs

    We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products, spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization. You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.

    Key Responsibilities

    Data Engineering & Pipeline Development
    - Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services (see the PySpark sketch after this listing).
    - Build end-to-end solutions to move large-scale big data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).
    - Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
    - Implement best practices for data quality, validation, auditing, and error handling within pipelines.

    Analytics Solution Design
    - Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
    - Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
    - Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.

    Cross-Functional Collaboration
    - Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
    - Participate in daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
    - Provide mentorship and guidance to junior engineers and analysts as needed.

    Engineering (Supporting Skills)
    - Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application-layer needs.
    - Implement scripts, utilities, and microservices as needed to support analytics workloads.

    Required Qualifications
    - 5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
    - Expert-level proficiency (10/10) in Python and PySpark.
    - Strong working knowledge of SQL and other programming languages.
    - Demonstrated experience designing and delivering big-data ingestion and transformation solutions on AWS.
    - Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
    - Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
    - Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
    - Strong problem-solving skills and the ability to work independently in a fast-paced environment.

    Preferred Qualifications
    - Experience with BI/visualization tools such as Tableau.
    - Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
    - Familiarity with machine learning workflows or MLOps frameworks.
    - Knowledge of metadata management, data governance, and data lineage tools.
    $88k-120k yearly est. 4d ago
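    As a rough illustration of the ingestion-to-curation flow this posting describes, the sketch below reads raw CSV files from S3 with PySpark, applies a simple transformation, and writes curated Parquet back to S3. The bucket names and columns are hypothetical, and running it requires a Spark environment (e.g., EMR or Glue) with S3 access configured.

    ```python
    # PySpark batch sketch: raw CSV in S3 -> curated Parquet in S3.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("ingest_and_curate").getOrCreate()

    # Hypothetical raw zone: CSV order records with a header row.
    raw = spark.read.option("header", True).csv("s3://example-raw/orders/")

    curated = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)                  # basic validation
           .groupBy("customer_id")
           .agg(F.sum("amount").alias("total_spend"),
                F.count("*").alias("order_count"))
    )

    # Hypothetical curated zone, queryable from Athena or Redshift Spectrum.
    curated.write.mode("overwrite").parquet("s3://example-curated/customer_spend/")
    ```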
  • Senior Data Engineer

    Revel It 4.3 company rating

    Columbus, OH jobs

    Our direct client has a long-term contract need for a Sr. Data Engineer.

    Candidate Requirements:
    - Candidates must be local to Columbus, Ohio.
    - Candidates must be willing and able to work a hybrid schedule (3 days in office & 2 days WFH).

    The team is responsible for the implementation of the new Contract Management System (FIS Asset Finance) as well as the integration into the overall environment and the migration of data from the legacy contract management system to the new system. The candidate will be focused on the delivery of data migration topics to ensure that high-quality data is migrated from the legacy systems to the new systems. This may involve data mapping, SQL development, and other technical activities to support data migration objectives (a reconciliation sketch follows this listing).

    Must-Have Experience:
    - Strong C# and SQL Server design and development skills.
    - Analysis and design. IMPORTANT MUST HAVE!
    - Strong technical analysis skills.
    - Strong collaboration skills to work effectively with cross-functional teams.
    - Exceptional ability to structure, illustrate, and communicate complex concepts clearly and effectively to diverse audiences, ensuring understanding and actionable insights.
    - Demonstrated adaptability and problem-solving skills to navigate challenges and uncertainties in a fast-paced environment.
    - Strong prioritization and time management skills to balance multiple projects and deadlines in a dynamic environment.
    - In-depth knowledge of Agile methodologies and practices, with the ability to adapt and implement Agile principles in testing and delivery processes.

    Nice to Have:
    - ETL design and development.
    - Data mapping skills and experience.
    - Experience executing/driving technical design and implementation topics.
    $88k-120k yearly est. 4d ago
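    Data-migration work of the kind described here usually includes reconciliation checks between the legacy and target systems. Below is a small Python sketch of that idea; it uses SQLite as a self-contained stand-in for SQL Server, and the table and key names are hypothetical.

    ```python
    # Reconciliation sketch: compare a legacy table against its migrated copy.
    import sqlite3


    def reconcile(conn, legacy_table: str, target_table: str, key: str) -> dict:
        """Return row counts plus the number of legacy keys missing from the target."""
        cur = conn.cursor()
        legacy_n = cur.execute(f"SELECT COUNT(*) FROM {legacy_table}").fetchone()[0]
        target_n = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
        missing = cur.execute(
            f"SELECT COUNT(*) FROM {legacy_table} l "
            f"LEFT JOIN {target_table} t ON l.{key} = t.{key} "
            f"WHERE t.{key} IS NULL"
        ).fetchone()[0]
        return {"legacy": legacy_n, "target": target_n, "missing_in_target": missing}


    # Tiny demo with in-memory tables standing in for the real systems.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        "CREATE TABLE legacy_contracts (contract_id INTEGER PRIMARY KEY);"
        "CREATE TABLE new_contracts (contract_id INTEGER PRIMARY KEY);"
        "INSERT INTO legacy_contracts VALUES (1), (2), (3);"
        "INSERT INTO new_contracts VALUES (1), (2);"
    )
    print(reconcile(conn, "legacy_contracts", "new_contracts", "contract_id"))
    # -> {'legacy': 3, 'target': 2, 'missing_in_target': 1}
    ```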
  • Big Data Engineer

    Kellymitchell Group 4.5 company rating

    Santa Monica, CA jobs

    Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.

    - Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs (see the streaming sketch after this listing)
    - Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
    - Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
    - Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
    - Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
    - Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment

    Desired Skills/Experience:
    - Bachelor's degree in a STEM field such as Science, Technology, Engineering, or Mathematics
    - 5+ years of relevant professional experience
    - 4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
    - 3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
    - Strong understanding of system and application design, architecture principles, and distributed system fundamentals
    - Proven experience building highly available, scalable, and production-grade services
    - Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
    - Experience processing massive datasets at the petabyte scale
    - Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
    - Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
    - Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position is between $52.00 and $75.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $52-75 hourly 2d ago
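    For the real-time side of the platform this listing describes, a common pattern is Spark Structured Streaming reading from Kafka and landing data for batch consumers. The broker address, topic, and paths below are hypothetical, and the job needs the spark-sql-kafka connector package on the classpath.

    ```python
    # Structured Streaming sketch: Kafka topic -> Parquet files, micro-batched.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .option("startingOffsets", "latest")
        .load()
    )

    # Kafka delivers key/value as bytes; cast to strings before writing.
    parsed = events.select(
        F.col("key").cast("string"),
        F.col("value").cast("string"),
        "timestamp",
    )

    query = (
        parsed.writeStream.format("parquet")
        .option("path", "/data/events")                    # hypothetical sink path
        .option("checkpointLocation", "/data/checkpoints/events")
        .trigger(processingTime="1 minute")
        .start()
    )
    query.awaitTermination()
    ```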
  • Senior Data Engineer

    Kellymitchell Group 4.5 company rating

    Glendale, CA jobs

    Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.

    - Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
    - Build tools and services to support data discovery, lineage, governance, and privacy
    - Collaborate with other software and data engineers and cross-functional teams
    - Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
    - Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
    - Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
    - Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
    - Participate in agile and scrum ceremonies to collaborate and refine team processes
    - Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
    - Maintain detailed documentation of work and changes to support data quality and data governance requirements

    Desired Skills/Experience:
    - 5+ years of data engineering experience developing large data pipelines
    - Proficiency in at least one major programming language such as Python, Java, or Scala
    - Strong SQL skills and the ability to create queries to analyze complex datasets
    - Hands-on production experience with distributed processing systems such as Spark
    - Experience interacting with and ingesting data efficiently from API data sources
    - Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks (a Databricks Delta sketch follows this listing)
    - Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
    - Experience developing APIs with GraphQL
    - Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
    - Familiarity with data modeling techniques and data warehousing best practices
    - Strong algorithmic problem-solving skills
    - Excellent written and verbal communication skills
    - Advanced understanding of OLTP versus OLAP environments

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position is between $51.00 and $73.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $51-73 hourly 1d ago
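    One line item above calls out the Spark DataFrame API in Databricks; the sketch below shows that style of workflow, aggregating a source table into a Delta table. The table and column names are hypothetical, and it assumes a Databricks runtime where `spark` is predefined.

    ```python
    # Databricks sketch: DataFrame API aggregation written as a Delta table.
    from pyspark.sql import functions as F

    # Hypothetical bronze-layer table registered in the metastore.
    orders = spark.read.table("bronze.orders")

    daily = (
        orders.withColumn("order_date", F.to_date("order_ts"))
              .groupBy("order_date")
              .agg(
                  F.countDistinct("order_id").alias("orders"),
                  F.sum("amount").alias("revenue"),
              )
    )

    # Delta gives ACID writes and time travel for downstream consumers.
    daily.write.format("delta").mode("overwrite").saveAsTable("silver.daily_orders")
    ```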
  • Azure Data Engineer

    Kellymitchell Group 4.5 company rating

    Irving, TX jobs

    Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.

    Duties:
    - Lead the design, architecture, and implementation of key data initiatives and platform capabilities
    - Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying solutions and guiding teams to implement them
    - Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
    - Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders

    Desired Skills/Experience:
    - Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
    - 5+ years of relevant work experience in data engineering
    - Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
    - Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $140k-145k yearly 4d ago
  • Lead Data Engineer

    Kellymitchell Group 4.5 company rating

    Denver, CO jobs

    Our client is seeking a Lead Data Engineer to join their team! This position is located in Denver, Colorado.

    - Perform hands-on engineering and provide lead-level ownership to data engineering teams
    - Collaborate cross-functionally to solve complex business and technical challenges
    - Translate analytical requirements into actionable engineering solutions and conduct independent research and analysis to inform strategic decisions
    - Perform data pipeline development/maintenance and build ETL processes from various sources into Snowflake, Azure, and Fabric (a Snowflake load sketch follows this listing)
    - Design new data structures and explore how to leverage new tools or platforms

    Desired Skills/Experience:
    - 7+ years of professional experience, 2+ of those in a dedicated lead/management role
    - Comfortable bridging the gap between engineering teams and executives
    - Hands-on technical work will be required; seeking experience in Python, SQL, and ETL
    - Experience working with cloud tools such as: AWS Fabric, Azure Databricks, Snowflake, Redshift

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position starts at $145,000-$160,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $145k-160k yearly 3d ago
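    A minimal sketch of the "ETL into Snowflake" piece of this role, using the snowflake-connector-python package's `write_pandas` helper. All connection parameters, the source file, and the table name are hypothetical placeholders, not details from the posting.

    ```python
    # Load sketch: pandas DataFrame -> Snowflake staging table.
    import pandas as pd
    import snowflake.connector
    from snowflake.connector.pandas_tools import write_pandas

    df = pd.read_csv("daily_extract.csv")           # hypothetical source extract
    df["LOADED_AT"] = pd.Timestamp.now(tz="UTC")    # simple load-audit column

    conn = snowflake.connector.connect(             # hypothetical credentials
        account="example_account",
        user="etl_user",
        password="...",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        write_pandas(conn, df, table_name="DAILY_EXTRACT", auto_create_table=True)
    finally:
        conn.close()
    ```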
  • Lead Data Engineer

    Mindlance 4.6 company rating

    Boston, MA jobs

    - Data Pipeline Development: Design, implement, and maintain robust, scalable, and efficient data pipelines using Jenkins for automated data extraction, transformation, and loading (ETL) processes.
    - Relational Database Management: Manage and optimize relational databases (e.g., MySQL, PostgreSQL) to ensure data integrity, availability, and performance for diverse applications and analytical purposes.
    - API Integration: Collaborate with software developers to integrate data from various sources through APIs, ensuring seamless data flow between systems and applications.
    - Data Modeling and Architecture: Create and maintain data models and data architecture to support analytics and reporting needs, ensuring data consistency and proper documentation.
    - Data Analysis and Insights: Utilize Python and RDS to perform advanced data analysis, interpret complex data sets, and deliver actionable insights to key stakeholders and decision-makers.
    - Performance Optimization: Identify and resolve performance bottlenecks within data pipelines, databases, and queries to improve system efficiency and response times.
    - Data Quality Assurance: Implement data quality checks and validation processes to ensure the accuracy and reliability of data throughout the data ecosystem (a validation sketch follows this listing).
    - Data Security and Compliance: Maintain data security standards and compliance with data protection regulations, implementing necessary measures to safeguard sensitive information.
    - Collaboration and Communication: Work closely with cross-functional teams, including data scientists, business analysts, and IT, to understand data requirements, develop solutions, and present findings effectively.

    Skills:
    - Bachelor's degree or relevant experience in Computer Science, Data Science, Information Technology, or a related field. An advanced degree/experience is preferred.
    - Proven experience as a Data Engineer, Data Analyst, or a related role with a strong technical background in Jenkins, relational databases, APIs, Python, and RDS.
    - Solid understanding of data modeling, database design principles, and data warehousing concepts.
    - Proficiency in Python and SQL for data manipulation, analysis, and automation tasks, along with RDS.
    - Familiarity with cloud-based technologies, such as AWS, GCP, or Azure, and the ability to leverage cloud services for data management.
    - Experience with Jenkins for automated build, test, and deployment processes.
    - Strong problem-solving skills and the ability to troubleshoot data-related issues efficiently.
    - Excellent communication and collaboration skills to work effectively in a team-oriented environment.
    - A passion for data-driven decision-making and a keen eye for detail.
    - Knowledge of data visualization tools (e.g., Tableau, Power BI) is a plus.

    Cyber Security:
    - The client is looking for data-security experience within data engineering, not a traditional cybersecurity role.
    - They want someone who has secured cloud-based ETL pipelines, handled IAM/RBAC, protected data in S3 and RDS, secured Jenkins pipelines, and implemented encryption, access controls, and basic compliance practices as part of their data engineering work.

    EEO: “Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
    $95k-127k yearly est. 2d ago
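    Since this role pairs Jenkins-driven pipelines with data quality assurance, here is one plausible shape for a validation step a Jenkins stage could invoke: a Python script that fails the build when a batch violates basic checks. The column names and thresholds are hypothetical.

    ```python
    # Data-quality gate sketch: exits non-zero so a Jenkins stage fails the build.
    import sys

    import pandas as pd


    def run_quality_checks(df: pd.DataFrame) -> list:
        """Return a list of failed checks; an empty list means the batch passes."""
        failures = []
        if df.empty:
            failures.append("batch is empty")
        if df["order_id"].duplicated().any():            # hypothetical key column
            failures.append("duplicate order_id values")
        if (df["amount"] < 0).any():                     # hypothetical measure
            failures.append("negative amounts")
        null_rate = df["customer_id"].isna().mean()
        if null_rate > 0.01:
            failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
        return failures


    if __name__ == "__main__":
        batch = pd.read_parquet(sys.argv[1])             # path passed by the pipeline
        problems = run_quality_checks(batch)
        if problems:
            sys.exit("Data quality failed: " + "; ".join(problems))
    ```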
  • Data Engineer (Mid-Level)

    KPG99 Inc. 4.0 company rating

    Orlando, FL jobs

    Job Title: Data Engineer (Mid-Level)
    Employment Type: 6-month contract-to-hire

    About the Role
    We are seeking a highly skilled professional who can bridge the gap between data engineering and data analysis. This role is ideal for someone who thrives on building robust data models and optimizing existing data infrastructure to drive actionable insights. The split is roughly 70% engineering / 30% analysis.

    Key Responsibilities
    - Design and implement data models for service lines that currently lack structured models.
    - Build and maintain scalable ETL pipelines to ensure efficient data flow and transformation.
    - Optimize and enhance existing data models and processes for improved performance and reliability.
    - Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
    - Must have excellent communication skills.

    Required Skills
    - SQL (must-have): Advanced proficiency in writing complex queries and optimizing performance.
    - Data Modeling: Strong experience in designing and implementing relational and dimensional models.
    - ETL Development: Hands-on experience with data extraction, transformation, and loading processes.
    - Alteryx or Azure Data Factory (ADF) for pipeline development.
    - Excel proficiency.
    - Experience with BI tools and data visualization (Power BI, Tableau).
    - Bachelor's degree required.

    Thanks and Regards,
    Ashish Tripathi || US IT Recruiter
    KPG99, Inc.
    ****************** | ***************
    3240 E State St Ext | Hamilton, NJ 08
    $78k-112k yearly est. 3d ago
  • ML Engineer with Timeseries data experience

    Techstar Group 3.7 company rating

    Atlanta, GA jobs

    Role: ML Engineer with Timeseries data experience
    Hybrid in Atlanta, GA (locals preferred)
    $58/hr on C2C, Any Visa

    - Model Development: Design, build, train, and optimize ML/DL models for time-series forecasting, prediction, anomaly detection, and causal inference (an ARIMA forecasting sketch follows this listing).
    - Data Pipelines: Create robust data pipelines for collection, preprocessing, feature engineering, and labeling of large-scale time-series data.
    - Scalable Systems: Architect and implement scalable AI/ML infrastructure and MLOps pipelines (CI/CD, monitoring) for production deployment.
    - Collaboration: Work with data engineers, software developers, and domain experts to integrate AI solutions.
    - Performance: Monitor, troubleshoot, and optimize model performance, ensuring robustness and real-world applicability.

    - Languages & Frameworks: Good understanding of the AWS framework, Python (Pandas, NumPy), PyTorch, TensorFlow, Scikit-learn, PySpark.
    - ML/DL Expertise: Strong grasp of time-series models (ARIMA, Prophet, deep learning), anomaly detection, and predictive analytics.
    - Data Handling: Experience with large datasets, feature engineering, and scalable data processing.
    $58 hourly 3d ago
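    To ground the time-series modeling this role centers on, below is a small forecasting sketch with statsmodels' ARIMA on a synthetic daily series. The series, the (p, d, q) order, and the horizon are illustrative choices, not anything specified by the posting.

    ```python
    # Time-series sketch: fit ARIMA on a synthetic trend + noise series, forecast 14 days.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    idx = pd.date_range("2024-01-01", periods=200, freq="D")
    y = pd.Series(10 + 0.05 * np.arange(200) + rng.normal(0, 1, 200), index=idx)

    model = ARIMA(y, order=(1, 1, 1)).fit()   # illustrative (p, d, q) choice
    forecast = model.forecast(steps=14)       # two-week horizon
    print(forecast.head())
    ```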
  • Senior Data Engineer

    Pinnacle Partners, Inc. 4.4 company rating

    Indianapolis, IN jobs

    Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. The successful candidate will be responsible for supporting a large-scale data modernization initiative and operationalizing the platform moving forward.

    RESPONSIBILITIES:
    - Design, develop, and refine BI-focused data architecture and data platforms
    - Work with internal teams to gather requirements and translate business needs into technical solutions
    - Build and maintain data pipelines supporting transformation
    - Develop technical designs, data models, and roadmaps
    - Troubleshoot and resolve data quality and processing issues
    - Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
    - Mentor and support junior team members

    REQUIREMENTS:
    - 5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
    - 5+ years of experience across end-to-end data analysis and development
    - Experience using Git version control
    - Advanced SQL skills
    - Strong experience with AWS cloud

    PREFERRED SKILLS:
    - Experience with Snowflake
    - Experience with Python or R
    - Bachelor's degree in an IT-related field

    TERMS:
    This is a direct hire opportunity with a salary up to $130K based on experience. They offer benefits including medical, dental, and vision, along with generous PTO, 401K matching, wellness programs, and other benefits.
    $130k yearly 21h ago
  • Data Engineer

    Kellymitchell Group 4.5 company rating

    Denver, CO jobs

    Our client is seeking a Data Engineer to join their team! This position is located in Denver, Colorado.

    - Perform data pipeline development and maintenance
    - Build ETL processes from various sources into Snowflake, Azure, and Fabric
    - Build views and monitor pipelines to ensure they run smoothly
    - Design new data structures and explore how to leverage new tools or platforms
    - Create proof-of-concept models and ensure data is organized for easy access and analysis
    - Ensure proper data governance, security, and access control to create a comprehensive dataset

    Desired Skills/Experience:
    - 3+ years of hands-on experience in data engineering, including designing, building, and maintaining data pipelines and architectures
    - Hands-on experience in both Python and SQL
    - Proficiency in ETL processes and a cloud technology such as Databricks, Redshift, Snowflake, or Fabric

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position starts at $110,000-$113,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $110k-113k yearly 3d ago
  • Senior Data Engineer / Data Modeler (Spark, Cloud, Databricks & Modeling)

    Employvision Inc. 3.7 company rating

    Hollywood, FL jobs

    No VISA candidates, please. MUST BE ONSITE 3-4 DAYS IN MIAMI, FL.

    We are seeking a skilled Data Engineer to design and build analytics solutions that drive meaningful business value. In this role, you will collaborate closely with data teams, stakeholders, and leadership to ensure technical solutions align with business goals. You will create scalable data architectures, robust data models, and modern engineering pipelines that support the full data lifecycle.

    The ideal candidate is a strong database designer with hands-on experience in SQL, data modeling, and Databricks. You should be comfortable gathering requirements, managing timelines, collaborating with offshore teams, and interacting directly with clients. This role requires 4 days per week onsite in Miami; no travel required.

    Key Responsibilities
    - Design, build, and maintain scalable on-prem and cloud-based data architectures.
    - Translate business requirements into clear, actionable technical specifications.
    - Optimize data flows, pipelines, and integrations for performance and scalability.
    - Develop and implement data engineering solutions using Databricks (AWS, Azure, or Google Cloud Platform).
    - Lead end-to-end pipeline development, including ingestion, transformation, and storage.
    - Ensure data quality, governance, security, and integrity across the lifecycle.
    - Work directly with clients to understand needs and deliver tailored solutions.
    - Provide guidance and mentorship to junior engineers and team members.
    - Clearly communicate complex data concepts to technical and non-technical audiences.
    - Manage stakeholder expectations and maintain strong client relationships.
    - Stay current with modern data engineering, cloud, and analytics technologies.

    Skills & Attributes for Success
    - Strong analytical, decision-making, and problem-solving abilities.
    - Expertise in cloud architecture and modern data engineering concepts.
    - Experience with data integration, modeling, and security best practices.
    - Ability to handle complex business requirements and legacy system landscapes.
    - Excellent communication and relationship-building skills.

    Required Qualifications
    - Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
    - 5+ years of data engineering experience focused on cloud-based solutions.
    - Hands-on expertise with Databricks and Spark for large-scale data processing.
    - Strong programming skills in Python, Scala, and/or SQL.
    - Deep experience in data modeling, ETL development, and data warehousing.
    - Proven success delivering at least two end-to-end data engineering projects, such as building a data lake on Databricks integrating multiple sources for BI analytics, or developing real-time pipelines using Databricks and cloud-native services.
    - Ability to work independently, lead tasks, meet deadlines, and manage client communication.

    Senior Project Expertise (Preferred for Consulting-Level Roles)
    - Ability to connect technical solutions to broader business strategies.
    - Experience managing multiple concurrent projects and deliverables.
    - Skilled in stakeholder engagement at all levels, including executives.
    - Change management exposure in data transformation initiatives.
    - Ability to identify risks and define mitigation strategies.
    - Experience contributing to architectural decision-making and technical leadership.
    - Strong documentation, reporting, and client-facing communication.

    Nice to Have
    - Experience with data quality frameworks and semantic layers.
    - Familiarity with AWS, Azure, or Google Cloud Platform data services.
    - Understanding of data governance, privacy, and compliance standards.
    - Exposure to machine learning tools and frameworks.
    $79k-113k yearly est. 3d ago
  • Associate Data Scientist

    Kellymitchell Group 4.5 company rating

    Minneapolis, MN jobs

    This position is remote.

    - Develop service-specific knowledge through greater exposure to peers, internal experts, clients, regular self-study, and formal training opportunities
    - Gain exposure to a variety of program/project situations to develop business and organizational/planning skills
    - Retain knowledge gained and performance feedback provided to transfer into future work
    - Approach all problems and projects with a high level of professionalism, objectivity, and an open mind to new ideas and solutions
    - Collaborate with internal teams to collect, analyze, and automate data processing
    - Leverage AI models, including LLMs, to develop intelligent solutions that enhance data-driven decision-making processes for both internal projects and external clients
    - Leverage machine learning methodologies, including non-linear, linear, and forecasting methods, to help build solutions aimed at better understanding the business, making the business more efficient, and planning our future (a scikit-learn sketch follows this listing)
    - Work under the guidance of a variety of Data Science team members and gain exposure to developing custom data models and algorithms to apply to data sets
    - Gain experience with predictive and inferential analytics, machine learning, and artificial intelligence techniques
    - Use existing processes and tools to monitor and analyze solution performance and accuracy, and communicate findings to team members and end users
    - Contribute to automating business workflows by incorporating LLMs and other AI models to streamline processes and improve efficiency
    - Integrate AI-driven solutions within existing systems to provide advanced predictive capabilities and actionable insights
    - Learn to work individually as well as in collaboration with others

    Desired Skills/Experience:
    - Bachelor's degree is required; a degree in Statistics, Computer Science, Economics, Analytics, or Data Science is preferred
    - 1+ year of experience preferred
    - Experience with APIs, web scraping, SQL/NoSQL databases, and cloud-based data solutions preferred
    - A combination of relevant experience, education, and training may be accepted in lieu of a degree

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position starts at $90,000-$125,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $90k-125k yearly 1d ago
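    The listing's mention of linear and non-linear machine learning methods can be made concrete with a short scikit-learn comparison on synthetic data; the models and data here are purely illustrative, not part of the posting.

    ```python
    # Sketch: compare a linear model against a non-linear ensemble on synthetic data.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for name, model in [
        ("linear", LinearRegression()),
        ("non-linear", GradientBoostingRegressor(random_state=0)),
    ]:
        model.fit(X_train, y_train)
        print(name, round(model.score(X_test, y_test), 3))  # R^2 on held-out data
    ```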
  • Senior Data Engineer

    Firstpro, Inc. 4.5 company rating

    Boston, MA jobs

    first PRO is now accepting resumes for a Senior Data Engineer role in Boston, MA. This is a direct hire role, onsite 2-3 days per week.

    RESPONSIBILITIES INCLUDE
    - Support and enhance the firm's Data Governance, BI platforms, and data stores.
    - Administer and extend data governance tools including Atlan, Monte Carlo, Snowflake, and Power BI.
    - Develop production-quality code and data solutions supporting key business initiatives.
    - Conduct architecture and code reviews to ensure security, scalability, and quality across deliverables.
    - Collaborate with the cloud migration, information security, and business analysis teams to design and deploy new applications and migrate existing systems to the cloud.

    TECHNOLOGY EXPERIENCE
    - Hands-on experience supporting SaaS, business-facing applications.
    - Expertise in Python for data processing, automation, and production-grade development.
    - Strong knowledge of SQL, data modeling, and data warehouse design (Kimball/star schema preferred).
    - Experience with Power BI or similar BI/reporting tools.
    - Familiarity with data pipeline technologies and orchestration tools (e.g., Airflow, dbt).
    - Experience with Snowflake, Redshift, BigQuery, or Athena.
    - Understanding of data governance, data quality, and metadata management frameworks.

    QUALIFICATIONS
    - BS or MS in Computer Science, Engineering, or a related technical field.
    - 7+ years of professional software or data engineering experience.
    - Strong foundation in software design and architectural patterns.
    $103k-143k yearly est. 4d ago
  • Data Engineer

    Interactive Resources-IR 4.2 company rating

    Tempe, AZ jobs

    About the Role
    We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables (a DLT sketch follows this listing)
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $92k-122k yearly est. 2d ago
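    As a sketch of the Delta Live Tables work named above, here is a two-table bronze/silver pipeline with a data-quality expectation. It only runs inside a Databricks DLT pipeline (which provides the `dlt` module and `spark`), and the storage path, columns, and table names are hypothetical.

    ```python
    # Delta Live Tables sketch: streaming bronze ingest -> validated silver table.
    import dlt
    from pyspark.sql import functions as F


    @dlt.table(comment="Raw orders landed from cloud storage via Auto Loader.")
    def bronze_orders():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/orders")          # hypothetical landing path
        )


    @dlt.table(comment="Cleaned orders with a derived order_date column.")
    @dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows failing the check
    def silver_orders():
        return dlt.read_stream("bronze_orders").withColumn(
            "order_date", F.to_date("order_ts")
        )
    ```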
  • Data Scientist

    US Tech Solutions 4.4 company rating

    Alhambra, CA jobs

    Title: Principal Data Scientist
    Duration: 12-Month Contract

    Additional Information:
    California resident candidates only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.

    Job Description:
    The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.

    The Principal Data Scientist will possess:
    - Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights
    - Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles
    - Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams
    - Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel
    - A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight
    - A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables
    - Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts

    Experience Required:
    - Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
    - Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
    - Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
    - Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring (an MLflow tracking sketch follows this listing).
    - Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
    - Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
    - Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
    - One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.

    Education Required & Certifications:
    This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics is also required and may not be substituted with additional experience:
    - Microsoft Azure Data Scientist Associate (DP-100)
    - Databricks Certified Data Scientist or Machine Learning Professional
    - AWS Machine Learning Specialty
    - Google Professional Data Engineer
    - or equivalent advanced analytics certifications

    About US Tech Solutions:
    US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************

    US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

    Recruiter Details:
    Name: T Saketh Ram Sharma
    Email: *****************************
    Internal Id: 25-54101
    $92k-133k yearly est. 3d ago
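    The MLOps requirements here (model versioning, MLflow, performance monitoring) come down to patterns like the tracking sketch below, which logs a parameter, a metric, and the fitted model for one training run. The dataset, model, and run name are illustrative, not from the posting.

    ```python
    # MLflow tracking sketch: log params, metrics, and the fitted model for a run.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    with mlflow.start_run(run_name="rf_baseline"):     # illustrative run name
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        mlflow.log_param("n_estimators", 200)
        mlflow.log_metric("r2", r2_score(y_test, model.predict(X_test)))
        mlflow.sklearn.log_model(model, "model")       # versioned model artifact
    ```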
  • Data Scientist

    Kellymitchell Group 4.5 company rating

    Irving, TX jobs

    Our client is seeking a Data Scientist to join their team! This position is located in Irving, Texas.

    - Build, evaluate, and deploy models to identify and target customer segments for personalized experiences and marketing
    - Design, run, and analyze A/B tests and other online experiments to measure the impact of new features, campaigns, and product changes (a z-test sketch follows this listing)
    - Research, prototype, and develop AI solutions for personalized systems and customer segmentation
    - Design and analyze online controlled experiments (A/B tests) to validate hypotheses and measure business impact
    - Build, deploy, and analyze AI solutions; perform statistical experiments when deploying new AI products

    Desired Skills/Experience:
    - Bachelor's Degree in Computer Science/Engineering/Math, or relevant experience
    - 2+ years of experience with statistical data science techniques, feature engineering, and customer segmentation
    - 2+ years of experience with SQL, PySpark, and Python
    - 2+ years of experience training, evaluating, and deploying machine learning models
    - 2+ years of experience productionizing and deploying ML workloads in AWS/Azure
    - Experience working with MarTech platforms such as CDPs, DMPs, and ESPs, and integrating data science into marketing workflows

    Benefits:
    - Medical, Dental, & Vision Insurance Plans
    - Employee-Owned Profit Sharing (ESOP)
    - 401K offered

    The approximate pay range for this position starts at $115,000-$128,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $115k-128k yearly 4d ago
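    For the A/B testing responsibilities above, the analysis step often reduces to a two-proportion z-test on conversion counts, as in this sketch; the counts are made up for illustration.

    ```python
    # A/B test sketch: two-proportion z-test on control vs. treatment conversions.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [420, 480]     # hypothetical: control, treatment
    visitors = [10000, 10000]

    stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
    lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]

    print(f"z = {stat:.2f}, p = {p_value:.4f}, absolute lift = {lift:.2%}")
    ```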
  • Data Engineer

    Interactive Resources-IR 4.2 company rating

    Austin, TX jobs

    About the Role
    We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $84k-111k yearly est. 2d ago
  • DevOps Engineer

    Guidehouse 3.7 company rating

    Data engineer job at Guidehouse

    Job Family: Systems Engineering (Digital)
    Travel Required: Up to 10%
    Clearance Required: Ability to Obtain Public Trust

    What You Will Do:
    We are seeking a skilled and adaptable DevOps Engineer to join our team in supporting development, QA, and operations across multiple applications. This role is central to enabling fast, secure, and reliable software delivery through automation, cloud-native infrastructure, and collaborative practices.

    Key Responsibilities:
    - Support build, test, deployment, and monitoring tools and environments in a cloud-hosted infrastructure.
    - Assist with software releases, infrastructure management, inventory, installations, configurations, and connectivity.
    - Monitor and optimize system performance using tools like Prometheus, Grafana, Datadog, Splunk, and the ELK Stack.
    - Collaborate with development, QA, and operations teams to improve deployment workflows and system reliability.
    - Ensure security best practices are integrated into pipelines and infrastructure (DevSecOps).
    - Troubleshoot and resolve issues across development and production environments.
    - Maintain and modernize legacy systems using cloud-native services and automation.
    - Design, implement, and maintain CI/CD pipelines using tools like GitLab CI, Cloud Build, and Harness.
    - Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform, Ansible, and Helm.
    - Deploy and manage containerized applications using Kubernetes, OpenShift, or GKE, and Cloud Run (Knative).

    What You Will Need:
    - US Citizenship required; must be able to obtain a Public Trust government clearance.
    - Bachelor's degree, or four (4) additional years of experience in lieu of a degree.
    - Four (4) to ten (10)+ years of industry experience.
    - Two (2) to eight (8)+ years of experience as a DevOps Engineer.
    - Experience working in a DevOps environment supporting multiple teams/applications.
    - Proficiency in deploying and troubleshooting Java-based application stacks (NGINX, Apache HTTPD, Java JDK, Apache Tomcat, PostgreSQL).
    - Strong understanding of application network configurations (reverse proxy, TLS certificates, DNS, load balancers, routers, firewalls).
    - Experience with CI/CD pipelines (Jenkins, Tekton, GitLab, Harness preferred).
    - Hands-on experience deploying applications to Google Cloud and/or Kubernetes (OpenShift, GKE, or EKS).
    - Proficient with infrastructure and deployment automation tools (Terraform, Ansible, YAML, ArgoCD, Helm).
    - Skilled in Git source control (Git, GitLab, Bitbucket), including branching and merge requests.
    - Experience with Linux (RHEL 8+).
    - Familiarity with Windows Server (2016+).
    - Experience with Datadog, Splunk, Google Cloud Logging and Monitoring, or similar tools.

    What Would Be Nice To Have:
    - Proficiency in scripting or programming languages (Python, Bash, Java) and API usage.
    - Federal government experience.
    - Experience with deploying AI agentic applications.
    - Experience implementing AIOps automation.
    - Experience using AI tools (Gemini) for development and QA (code assist).
    - Strong verbal and written communication skills.
    - Familiarity with DevSecOps and GitOps best practices.
    - Enthusiasm for learning and implementing new technologies.
    - Team-oriented mindset aligned with DevOps culture.

    The annual salary range for this position is $98,000.00-$163,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
    What We Offer: Guidehouse offers the same comprehensive, total rewards package, benefits, and About Guidehouse recruiting notices listed under the Data Engineer, Life Sciences Technology Solutions posting above.
    $98k-163k yearly 5d ago
