Data Engineer jobs at Lee Hecht Harrison

- 5965 jobs
  • Data Engineer - AI & Data Modernization

    Guidehouse (3.7 company rating)

    Arlington, VA

    Job Family: Data Science Consulting
    Travel Required: Up to 25%
    Clearance Required: Ability to Obtain Public Trust

    What You Will Do:
    We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics.

    Client Leadership & Delivery
    - Collaborate with FCA clients to understand data architecture and reporting needs.
    - Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau.
    - Ensure delivery excellence and measurable outcomes across data migration and visualization efforts.

    Solution Development & Innovation
    - Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python.
    - Develop and optimize Tableau dashboards aligned with federal reporting standards.
    - Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting.

    Practice & Team Leadership
    - Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions.
    - Support documentation, testing, and deployment of data products.
    - Mentor junior developers and contribute to reusable frameworks and accelerators.

    What You Will Need:
    - US Citizenship is required
    - Bachelor's degree is required
    - Minimum two (2) years of experience in data engineering and dashboard development
    - Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure)
    - Strong proficiency in SQL, Python, and Spark
    - Experience building ETL pipelines and integrating data sources into reporting platforms
    - Familiarity with data governance, metadata, and compliance frameworks
    - Excellent communication, facilitation, and stakeholder engagement skills

    What Would Be Nice To Have:
    - AI/LLM certifications
    - Experience working with FCA clients such as DOT, GSA, USDA, or similar
    - Familiarity with federal contracting and procurement processes

    What We Offer:
    Guidehouse offers a comprehensive total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include:
    - Medical, Rx, Dental & Vision Insurance
    - Personal and Family Sick Time & Company Paid Holidays
    - Position may be eligible for a discretionary variable incentive bonus
    - Parental Leave and Adoption Assistance
    - 401(k) Retirement Plan
    - Basic Life & Supplemental Life
    - Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
    - Short-Term & Long-Term Disability
    - Student Loan PayDown
    - Tuition Reimbursement, Personal Development & Learning Opportunities
    - Skills Development & Certifications
    - Employee Referral Program
    - Corporate Sponsored Events & Community Outreach
    - Emergency Back-Up Childcare Program
    - Mobility Stipend

    About Guidehouse
    Guidehouse is an Equal Opportunity Employer: protected veterans, individuals with disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinances of Los Angeles and San Francisco.

    If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.

    All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.

    Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $73k-97k yearly est. 1d ago
  • Agentic AI & Data Scientist

    Guidehouse (3.7 company rating)

    Arlington, VA

    Job Family: Data Science Consulting
    Travel Required: Up to 10%
    Clearance Required: Ability to Obtain Public Trust

    What You Will Do:
    We are seeking a forward-thinking Senior Consultant to lead initiatives in Agentic AI, Generative AI, and advanced Data Science. This role will drive innovation in healthcare analytics and enterprise AI solutions, leveraging cutting-edge technologies to transform data into actionable intelligence. The ideal candidate will replace a technical leader who pioneered AI-driven workflows and generative modeling within our organization.
    - Design and implement Agentic AI systems for autonomous decision-making and workflow optimization.
    - Develop Generative AI applications for summarization, predictive modeling, and conversational interfaces.
    - Build and maintain scalable data pipelines integrating structured and unstructured data.
    - Apply advanced statistical and machine learning techniques to policy evaluation.
    - Lead AI-driven projects in retrieval-augmented generation (RAG), prompt engineering, and knowledge graph integration.
    - Collaborate with cross-functional teams to deliver AI-powered dashboards and insights.
    - Mentor junior staff in AI methodologies and data science best practices.

    What You Will Need:
    - US Citizenship is required
    - Master's degree is required
    - Minimum five (5) years of experience in AI and data analytics, including healthcare or enterprise data
    - Expertise in Python, R, and AI/ML frameworks (e.g., TensorFlow, PyTorch)
    - Hands-on experience with Generative AI models (LLMs, diffusion models) and Agentic AI architectures
    - Strong understanding of SQL and cloud-based data platforms
    - Ability to translate complex AI concepts into business solutions
    - Experience with RAG pipelines, vector databases, and knowledge graphs

    What Would Be Nice To Have:
    - Familiarity with AI governance, ethical AI practices, and compliance
    - Proficiency in visualization tools and interactive dashboards
    - Agile project management and sprint-based delivery
    - Prior experience in technology consulting

    The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer: the benefits package, equal-opportunity statement, accommodation contact, and recruiting-fraud notices for this posting are identical to those in the first Guidehouse listing above.
    $113k-188k yearly 1d ago
  • Biostatistician/AI Data Scientist I

    Guidehouse (3.7 company rating)

    Atlanta, GA

    Job Family: Data Science Consulting
    Travel Required: Up to 10%
    Clearance Required: Ability to Obtain Public Trust

    What You Will Do:
    The Biostatistician/AI Data Scientist I will assist senior team members in performing statistical analyses and developing automated workflows for federal public health and life sciences projects. This role emphasizes foundational biostatistical skills combined with emerging technologies such as GenAI and automation.

    Key Responsibilities:
    - Support development and execution of statistical analyses under supervision.
    - Assist in creating automated workflows and reproducible reports using R Markdown and Python.
    - Contribute to data cleaning, preparation, and visualization tasks.
    - Collaborate with senior biostatisticians and epidemiologists on project deliverables.
    - Learn and apply GenAI tools for enhancing analytic efficiency.

    What You Will Need:
    - Bachelor's degree is required
    - Basic proficiency in R, SQL, and Python for data analysis
    - Understanding of regression modeling and descriptive statistics
    - Understanding of GenAI and other emerging technologies
    - Ability to work collaboratively in a team environment

    What Would Be Nice To Have:
    - Master's degree
    - Experience with version control tools (e.g., GitHub)
    - Experience with AI and machine learning for data analytics
    - Exposure to GenAI tools and automation frameworks
    - Familiarity with public health datasets and analytic standards

    The annual salary range for this position is $68,000.00-$113,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer:
    Guidehouse offers a comprehensive total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
    The benefits list, equal-opportunity statement, accommodation contact, and recruiting-fraud notices for this posting are identical to those in the first Guidehouse listing above.
    $68k-113k yearly 1d ago
  • Data Engineer - AI & Data Modernization

    Guidehouse (3.7 company rating)

    San Antonio, TX

    Job Family: Data Science Consulting
    Travel Required: Up to 25%
    Clearance Required: Ability to Obtain Public Trust

    This San Antonio listing carries the same description, requirements, and benefits, word for word, as the Arlington, VA "Data Engineer - AI & Data Modernization" posting above; see that posting for the full text.
    $68k-94k yearly est. 1d ago
  • Senior Data Engineer

    Bayforce (4.4 company rating)

    Charlotte, NC

    **NO 3rd party vendor candidates or sponsorship**

    Role Title: Senior Data Engineer
    Client: Global construction and development company
    Employment Type: Contract
    Duration: 1 year
    Preferred Location: Remote, based in ET or CT time zones

    Role Description:
    The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.

    Key Responsibilities:
    - Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
    - Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
    - Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
    - Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
    - Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
    - Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
    - Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
    - Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.

    Requirements:
    - Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
    - 5+ years of experience in data engineering with a strong focus on Azure Cloud.
    - Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
    - Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark) and modern cloud data platform patterns.
    - Advanced PySpark/Spark experience for complex transformations and performance optimization.
    - Heavy experience with API-based integrations (building ingestion frameworks; handling auth, pagination, retries, rate limits, and resiliency).
    - Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
    - Strong understanding of cloud data architectures, including Data Lake, Lakehouse, and Data Warehouse patterns.

    Preferred Skills:
    - Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
    - Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
    - Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
    - Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
    - Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
    $70k-93k yearly est. 5d ago
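The "API-heavy ingestion patterns" the Bayforce role calls for (pagination, retries, rate limits, resiliency) can be sketched in a vendor-neutral way. This is a minimal illustration only; `fetch_page` and its `{"records", "next"}` payload shape are hypothetical stand-ins, not any specific client API:

```python
import time
from typing import Callable, Iterator


def paged_ingest(
    fetch_page: Callable[[int], dict],
    max_retries: int = 3,
    backoff_s: float = 0.01,
) -> Iterator[object]:
    """Yield records from a paginated source, retrying transient failures.

    Assumes fetch_page(page) returns {"records": [...], "next": bool}.
    A real integration would also attach auth headers and honor
    rate-limit responses before retrying.
    """
    page = 0
    while True:
        for attempt in range(max_retries):
            try:
                payload = fetch_page(page)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure
                time.sleep(backoff_s * 2 ** attempt)  # exponential backoff
        yield from payload["records"]
        if not payload["next"]:
            return
        page += 1
```

A production version would additionally honor `Retry-After` on HTTP 429 responses and log each attempt, feeding the monitoring and alerting duties the posting describes.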
  • Senior Data Engineer

    Concert (4.0 company rating)

    Nashville, TN

    Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people who help advance precision medicine and health care. Learn more about us at ***************

    YOUR ROLE
    Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights. You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology, and evidence-based clinical care.

    As a Senior Data Engineer you will:
    - Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams.
    - Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
    - Utilize dbt (data build tool) for data transformation and modeling, ensuring data quality and consistency.
    - Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
    - Work with SageMaker for machine learning data preparation and integration.
    - Ensure data security, privacy, and compliance with industry regulations.
    - Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
    - Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
    - Implement best practices for data engineering, including code reviews, testing, and automation.
    - Mentor junior data engineers and share knowledge on data engineering best practices.
    - Stay up to date with the latest advancements in data engineering, AWS services, and related technologies.

    After 3 months on the job you will have:
    - Developed a strong understanding of Concert's data engineering infrastructure
    - Learned the business domain and how it maps to the information architecture
    - Made material contributions toward existing key results

    After 6 months you will have:
    - Led a major initiative
    - Become the first point of contact when issues related to the data warehouse are identified

    After 12 months you will have:
    - Taken responsibility for the long-term direction of the data engineering infrastructure
    - Proposed and executed key results with an understanding of the business strategy
    - Communicated the business value of major technical initiatives to key non-technical business stakeholders

    WHAT LEADS TO SUCCESS
    - Self-Motivated: a team player with a positive attitude and a proactive approach to problem-solving.
    - Executes Well: you are biased to action and get things done. You acknowledge unknowns and recover from setbacks well.
    - Comfort with Ambiguity: you aren't afraid of uncertainty and blazing new trails; you care about building toward a future that is different from today.
    - Technical Bravery: you are comfortable with new technologies and eager to dive in to understand data in the raw and in its processed states.
    - Mission-Focused: you are personally motivated to drive more affordable, equitable, and effective integration of genomic technologies into clinical care.
    - Effective Communication: you build rapport and great working relationships with senior leaders and peers, and use the relationships you've built to drive the company forward.

    RELEVANT SKILLS & EXPERIENCE
    - Minimum of 4 years' experience working as a data engineer
    - Bachelor's degree in software or data engineering, or comparable technical certification/experience
    - Ability to effectively communicate complex technical concepts to both technical and non-technical audiences
    - Proven experience designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
    - Strong understanding of data warehousing principles and best practices
    - Experience with dbt for data transformation and modeling
    - Proficiency in SQL and at least one programming language (e.g., Python, Scala)
    - Familiarity or experience with the following tools and concepts is a plus: BI tools such as Metabase; healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; Zero-ETL and Kinesis data streams

    COMPENSATION
    Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.

    LOCATION
    Concert is based in Nashville, Tennessee and supports a remote work environment.

    For further questions, please contact: ******************.
    $75k-102k yearly est. 3d ago
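Several of the Concert responsibilities above (ETL/ELT ingestion, incremental processing, data quality) commonly hinge on high-water-mark extraction, i.e. loading only rows changed since the last successful run. A minimal, vendor-neutral sketch, assuming each source row carries an `updated_at` timestamp (an illustrative assumption, not Concert's actual schema):

```python
from datetime import datetime
from typing import Iterable


def incremental_load(
    source_rows: Iterable[dict],
    watermark: datetime,
) -> tuple[list[dict], datetime]:
    """Select only rows updated since the last run's watermark.

    Returns the new rows plus the advanced watermark. A real pipeline
    would persist the watermark between runs (e.g. in a control table)
    and write the selected rows to the warehouse.
    """
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    # If nothing changed, keep the old watermark rather than regressing it.
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark
```

The same idea underlies dbt incremental models and Redshift upsert patterns: the watermark bounds each extract so reruns stay cheap and idempotent.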
  • Lead Data Engineer

    Themesoft Inc. (3.7 company rating)

    Roseland, NJ

    Job Title: Lead Data Engineer. Hybrid Role: 3 Times / Week. Type: 12 Months Contract - Rolling / Extendable Contract. Work Authorization: Candidates must be authorized to work in the U.S. without current or future sponsorship requirements. Must haves: AWS. Databricks. Lead experience- this can be supplemented for staff as well. Python. Pyspark. Contact Center Experience is a nice to have. Job Description: As a Lead Data Engineer, you will spearhead the design and delivery of a data hub/marketplace aimed at providing curated client service data to internal data consumers, including analysts, data scientists, analytic content authors, downstream applications, and data warehouses. You will develop a service data hub solution that enables internal data consumers to create and maintain data integration workflows, manage subscriptions, and access content to understand data meaning and lineage. You will design and maintain enterprise data models for contact center-oriented data lakes, warehouses, and analytic models (relational, OLAP/dimensional, columnar, etc.). You will collaborate with source system owners to define integration rules and data acquisition options (streaming, replication, batch, etc.). You will work with data engineers to define workflows and data quality monitors. You will perform detailed data analysis to understand the content and viability of data sources to meet desired use cases and help define and maintain enterprise data taxonomy and data catalog. This role requires clear, compelling, and influential communication skills. You will mentor developers and collaborate with peer architects and developers on other teams. TO SUCCEED IN THIS ROLE: Ability to define and design complex data integration solutions with general direction and stakeholder access. Capability to work independently and as part of a global, multi-faceted data warehousing and analytics team. 
Advanced knowledge of cloud-based data engineering and data warehousing solutions, especially AWS, Databricks, and/or Snowflake. Highly skilled in RDBMS platforms such as Oracle, SQLServer. Familiarity with NoSQL DB platforms like MongoDB. Understanding of data modeling and data engineering, including SQL and Python. Strong understanding of data quality, compliance, governance and security. Proficiency in languages such as Python, SQL, and PySpark. Experience in building data ingestion pipelines for structured and unstructured data for storage and optimal retrieval. Ability to design and develop scalable data pipelines. Knowledge of cloud-based and on-prem contact center technologies such as Salesforce.com, ServiceNow, Oracle CRM, Genesys Cloud, Genesys InfoMart, Calabrio Voice Recording, Nuance Voice Biometrics, IBM Chatbot, etc., is highly desirable. Experience with code repository and project tools such as GitHub, JIRA, and Confluence. Working experience with CI/CD (Continuous Integration & Continuous Deployment) process, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace. Highly innovative with an aptitude for foresight, systems thinking, and design thinking, with a bias towards simplifying processes. Detail-oriented with strong analytical, problem-solving, and organizational skills. Ability to clearly communicate with both technical and business teams. Knowledge of Informatica PowerCenter, Data Quality, and Data Catalog is a plus. Knowledge of Agile development methodologies is a plus. Having a Databricks data engineer associate certification is a plus but not mandatory. Data Engineer Requirements: Bachelor's degree in computer science, information technology, or a similar field. 8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies. 
Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features. Knowledge of mathematical foundations and statistical analysis. Strong interpersonal skills. Excellent communication and presentation skills. Advanced troubleshooting skills. Regards, Purnima Pobbathy, Senior Technical Recruiter, ************ | ********************* | Themesoft Inc
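The subscription workflow this listing describes (internal consumers subscribing to curated datasets and receiving published content) can be sketched in plain Python. This is an illustrative toy, not any real data-hub product API; all class and dataset names here are made up.

```python
from dataclasses import dataclass, field

@dataclass
class DataHub:
    """Toy data hub: datasets with subscribers that receive published payloads."""
    subscriptions: dict = field(default_factory=dict)  # dataset -> set of consumers
    delivered: list = field(default_factory=list)      # (consumer, dataset, payload)

    def subscribe(self, consumer, dataset):
        self.subscriptions.setdefault(dataset, set()).add(consumer)

    def publish(self, dataset, payload):
        """Deliver a new payload to every subscriber; return the delivery count."""
        targets = self.subscriptions.get(dataset, set())
        for consumer in sorted(targets):
            self.delivered.append((consumer, dataset, payload))
        return len(targets)

hub = DataHub()
hub.subscribe("analyst_team", "call_volumes")
hub.subscribe("ml_team", "call_volumes")
print(hub.publish("call_volumes", {"date": "2024-01-01", "calls": 1200}))  # 2
```

A production hub would add lineage metadata and access control on top of this subscribe/publish core.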
    $78k-106k yearly est. 5d ago
  • Data Engineer

    Inceed 4.1 company rating

    Denver, CO jobs

    Data Engineer Compensation: $80-$90/hour, depending on experience. Inceed has partnered with a great company to help find a skilled Data Engineer to join their team! Join a dynamic team as a contract Data Engineer, where you'll be the backbone of data-driven operations. This role offers the opportunity to work with a modern tech stack in a hybrid on-prem and cloud environment. You'll design and implement innovative solutions to complex challenges, collaborating with data scientists, location intelligence experts, and ML engineers. This exciting opportunity has opened due to a new project initiative, and you'll be making a tangible impact. Key Responsibilities & Duties: Design and deploy scalable data pipelines and architectures Collaborate with stakeholders to deliver high-impact data solutions Integrate data from multiple sources, ensuring quality and reliability Develop automation workflows and BI solutions Mentor others and contribute to the knowledge base Explore and implement emerging technologies Required Qualifications & Experience: 8+ years of experience in data engineering Experience with large oil and gas datasets Proficiency in SQL and Python Hands-on experience in cloud environments (Azure, AWS, or GCP) Familiarity with Apache Kafka, Apache Flink, or Azure Event Hubs Nice to Have Skills & Experience: Experience with Palantir Foundry Knowledge of query federation platforms Experience with modern data stack tools like dbt or Airflow Perks & Benefits: 3 different medical health insurance plans, dental, and vision insurance Voluntary and long-term disability insurance Paid time off, 401k, and holiday pay Weekly direct deposit or pay card deposit If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time. We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. 
Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them. Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
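The streaming tools named in this listing (Apache Kafka, Apache Flink, Azure Event Hubs) are most often used for windowed aggregations over event streams. A pure-Python tumbling-window toy illustrates the core idea; no real Kafka or Flink API is involved, and the event keys are invented.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec=60):
    """Group (timestamp, key) events into fixed-size windows and count per key.

    A stand-in for the windowed aggregations Kafka Streams or Flink perform
    continuously; here the whole stream is just a list.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_sec) * window_sec  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "well_A"), (30, "well_A"), (70, "well_B"), (65, "well_A")]
print(tumbling_window_counts(events))
# {(0, 'well_A'): 2, (60, 'well_B'): 1, (60, 'well_A'): 1}
```

Real stream processors add the hard parts this toy skips: out-of-order events, watermarks, and incremental state.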
    $80-90 hourly 1d ago
  • Data Engineer

    Inceed 4.1 company rating

    Denver, CO jobs

    Data Engineer Compensation: $80-$90/hour, depending on experience. Inceed has partnered with a great energy company to help find a skilled Data Engineer to join their team! Join a dynamic team where you'll be at the forefront of data-driven operations. This role offers the autonomy to design and implement groundbreaking data architectures, working primarily remotely. This position is open due to exciting new projects. You'll be collaborating with data scientists and engineers, making impactful contributions to the company's success. Key Responsibilities & Duties: Design and deploy scalable data pipelines and architectures Collaborate with stakeholders to deliver high-impact data solutions Integrate data from various sources, ensuring consistency and reliability Develop automation workflows and BI solutions Mentor others and advise on data process best practices Explore and implement emerging technologies Required Qualifications & Experience: 8+ years of data engineering experience Experience with PI Experience with SCADA Experience with Palantir Experience with large oil and gas datasets Proficiency in Python and SQL Hands-on experience in cloud environments (Azure, AWS, GCP) Nice to Have Skills & Experience: Familiarity with Apache Kafka or Flink Perks & Benefits: 3 different medical health insurance plans, dental, and vision insurance Voluntary and long-term disability insurance Paid time off, 401k, and holiday pay Weekly direct deposit or pay card deposit If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time. We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them. Inceed is an equal opportunity employer. 
Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
    $80-90 hourly 1d ago
  • Data Engineer

    Yochana 4.2 company rating

    Charlotte, NC jobs

    Job Title: Azure Databricks Engineer (Onsite). Years of Experience: 7-12 years. Full Time. We are seeking a highly skilled and motivated Technical Team Lead with extensive experience in Azure Databricks to join our dynamic team. The ideal candidate will possess a strong technical background, exceptional leadership abilities, and a passion for driving innovative solutions. As a Technical Team Lead, you will be responsible for guiding a team of developers and engineers in the design, development, and implementation of data-driven solutions that leverage Azure Databricks. Responsibilities: Lead and mentor a team of technical professionals, fostering a collaborative and high-performance culture. Design and implement data processing solutions using Azure Databricks, ensuring scalability and efficiency. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Oversee the development lifecycle, from planning and design to deployment and maintenance. Conduct code reviews and provide constructive feedback to team members to ensure code quality and adherence to best practices. Stay up to date with industry trends and emerging technologies related to Azure Databricks and data engineering. Facilitate communication between technical and non-technical stakeholders to ensure alignment on project goals. Identify and mitigate risks associated with project delivery and team performance. Mandatory Skills: Proven expertise in Azure Databricks, including experience with Spark, data pipelines, and data lakes. Strong programming skills in languages such as Python, Scala, or SQL. Experience with cloud-based data storage solutions, particularly Azure Data Lake Storage and Azure SQL Database. Solid understanding of data modeling, ETL processes, and data warehousing concepts. Demonstrated ability to lead technical teams and manage multiple projects simultaneously. Preferred Skills: Familiarity with Azure DevOps for CI/CD processes. 
Experience with machine learning frameworks and libraries. Knowledge of data governance and compliance standards. Strong analytical and problem-solving skills. Excellent communication and interpersonal skills. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 7-10 years of experience in data engineering, software development, or a related technical role. Proven track record of leading technical teams and delivering successful projects. Relevant certifications in Azure or data engineering are a plus. If you are a passionate Technical Team Lead with a strong background in Azure Databricks and a desire to drive innovation, we encourage you to apply and join our team.
    $80k-111k yearly est. 3d ago
  • Data Engineer

    Ztek Consulting 4.3 company rating

    Hamilton, NJ jobs

    Key Responsibilities: Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory. Integrate and process Bloomberg market data feeds and files into trading or analytics platforms. Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion. Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL. Manage FTP/SFTP file transfers between internal systems and external vendors. Ensure data quality, completeness, and timeliness for downstream trading and reporting systems. Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows. Required Skills & Experience: 10+ years of experience in data engineering or production support within financial services or trading environments. Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric. Strong Python and SQL programming skills. Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP). Experience with Git, CI/CD pipelines, and Azure DevOps. Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling. Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools). Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments. Excellent communication, problem-solving, and stakeholder management skills.
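Batch support for vendor file drops like the Bloomberg SFTP feeds above usually starts with a completeness check: did every expected file land today? A minimal stdlib sketch follows; the `<feed>_<YYYYMMDD>.csv` naming convention and the feed names are hypothetical, not a real Bloomberg convention.

```python
from datetime import date

def missing_feed_files(received, feeds, day):
    """Return expected vendor files absent from today's landing directory.

    `received` is the set of filenames actually present; the naming
    convention '<feed>_<YYYYMMDD>.csv' is an illustrative assumption.
    """
    stamp = day.strftime("%Y%m%d")
    expected = [f"{feed}_{stamp}.csv" for feed in feeds]
    return [name for name in expected if name not in received]

received = {"equity_px_20240105.csv"}
print(missing_feed_files(received, ["equity_px", "fi_px"], date(2024, 1, 5)))
# ['fi_px_20240105.csv']
```

In a real support rotation this check would run on a schedule and page the on-call when the returned list is non-empty.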
    $89k-125k yearly est. 1d ago
  • IT Data Engineer

    Inceed 4.1 company rating

    Lakewood, CO jobs

    IT Data Engineer Compensation: $125k-$155k (DOE). Inceed has partnered with a great company to help find a skilled IT Data Engineer to join their team! Join a dynamic team where innovation meets opportunity. This role is pivotal in advancing AI and data modernization initiatives, bridging traditional database administration with cutting-edge AI data infrastructure. The team thrives on collaboration and offers a hybrid work schedule. Key Responsibilities & Duties: Design and maintain scalable data pipelines. Develop RAG workflows for AI information access. Build secure connectors and APIs for data retrieval. Monitor and optimize data flows for consistency. Lead database administration and performance tuning. Manage database upgrades and storage optimization. Implement database security controls and standards. Support application integrations and data migrations. Define and maintain data models and metadata. Collaborate with teams to meet compliance requirements. Required Qualifications & Experience: Bachelor's degree in Computer Science or related field. 7+ years in database administration or data engineering. Advanced SQL and data modeling skills. Experience with AI and analytics data pipelines. Familiarity with cloud-based data ecosystems. Hands-on experience with RAG and vectorization. Proficiency in scripting languages like Python. Experience leading vendor-to-internal transitions. Nice to Have Skills & Experience: Experience integrating enterprise systems into data platforms. Knowledge of data governance frameworks. Understanding of semantic data modeling. Experience with cloud migration of database workloads. Perks & Benefits: This opportunity includes a comprehensive and competitive benefits package; details will be shared during later stages of the hiring process. 
Other Information: Hybrid work schedule. This position requires a background check and drug test. If you are interested in learning more about the IT Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time. We are Inceed, a staffing and direct placement firm that believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them. Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
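The RAG workflows this listing mentions boil down to ranking document chunks by embedding similarity and feeding the top hits into a model's prompt. A toy retrieval step with hand-made two-dimensional "embeddings" sketches the idea; real systems use learned embeddings and a vector store, and the document texts here are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=1):
    """Rank chunks by similarity to the query; the winners would be
    inserted into the LLM prompt as grounding context."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:top_k]]

corpus = [
    {"text": "backup policy", "vec": [0.9, 0.1]},
    {"text": "index tuning guide", "vec": [0.1, 0.95]},
]
print(retrieve([0.0, 1.0], corpus))  # ['index tuning guide']
```

Vectorization (chunking source documents and embedding each chunk) is the offline half of the same pipeline; retrieval like the above is the online half.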
    $125k-155k yearly 1d ago
  • Data Engineer

    Sharp Decisions 4.6 company rating

    New York, NY jobs

    We are looking for a mid-level Data Engineer; no third parties, please. As a result of this expansion, we are seeking experienced data engineers with 5+ years of relevant experience to support the design and development of a strategic data platform for SMBC Capital Markets and Nikko Securities Group. Qualifications and Skills • Proven experience as a Data Engineer with experience in the Azure cloud. • Experience implementing solutions using: • Azure cloud services • Azure Data Factory • Azure Data Lake Storage Gen2 • Azure Databases • Azure Data Fabric • API Gateway management • Azure Functions • Well-versed in Azure Databricks • Strong SQL skills with RDBMS or NoSQL databases • Experience developing APIs using FastAPI or similar frameworks in Python • Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes • Good understanding of ETL/ELT processes • Experience in the financial services industry, financial instruments, asset classes, and market data is a plus.
    $85k-111k yearly est. 3d ago
  • Azure Data Engineer

    Sharp Decisions 4.6 company rating

    Jersey City, NJ jobs

    Title: Senior Azure Data Engineer Client: Major Japanese Bank Experience Level: Senior (10+ Years) The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices. Key Responsibilities: Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows. Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions. Ensure data security, compliance, lineage, and governance controls. Partner with architecture, data governance, and business teams to deliver high-quality data solutions. Troubleshoot performance issues and improve system efficiency. Required Skills: 10+ years of data engineering experience. Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL. Azure certifications strongly preferred. Strong SQL, Python, and cloud data architecture skills. Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 3d ago
  • AWS Data Engineer

    Mindlance 4.6 company rating

    McLean, VA jobs

    Responsibilities: Design, build, and maintain scalable data pipelines using AWS Glue and Databricks. Develop and optimize ETL/ELT processes using PySpark and Python. Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation. Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.). Ensure data quality, consistency, and reliability across systems. Optimize performance of large-scale distributed data processing workflows. Develop automation scripts and frameworks for data ingestion, transformation, and validation. Follow best practices for data governance, security, and compliance. Required Skills & Experience: 5-8 years of hands-on experience in Data Engineering. Strong proficiency in Python and PySpark for data processing and transformation. Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena. Hands-on experience with Databricks for building and managing data pipelines. Experience working with large-scale data systems and optimizing performance. Solid understanding of data modeling, data lake architecture, and ETL design principles. Strong problem-solving skills and ability to work independently in a fast-paced environment. “Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
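The Glue/PySpark ETL work described above follows the same extract-transform-load pattern at any scale. A stdlib-only toy shows typical transformation rules (typing, trimming, dropping incomplete rows); this is not actual Glue or PySpark API code, and the column names are invented.

```python
import csv
import io

RAW = "id,amount,currency\n1, 100 ,usd\n2,,usd\n3,250,USD\n"

def transform(raw_csv):
    """Extract rows from CSV text, drop records missing an amount,
    and normalize types and casing.

    A miniature stand-in for the cleanup a Glue/PySpark job applies
    across a distributed dataset.
    """
    out = []
    for r in csv.DictReader(io.StringIO(raw_csv)):
        if not r["amount"].strip():
            continue  # data-quality rule: amount is required
        out.append({"id": int(r["id"]),
                    "amount": float(r["amount"]),
                    "currency": r["currency"].strip().upper()})
    return out

print(transform(RAW))
# [{'id': 1, 'amount': 100.0, 'currency': 'USD'}, {'id': 3, 'amount': 250.0, 'currency': 'USD'}]
```

In Glue the same logic would read from the Glue Catalog and write to S3 or Redshift; the validation rules are what carry over.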
    $85k-113k yearly est. 5d ago
  • Data Engineer

    Interactive Resources-IR 4.2 company rating

    Tempe, AZ jobs

    About the Role We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions. What We're Looking For 8+ years designing and delivering scalable data pipelines in modern data platforms Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery Ability to lead cross-functional initiatives in matrixed teams Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning Hands-on experience with Azure, Snowflake, and Databricks, including system integrations Key Responsibilities Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD Use Apache Airflow and similar tools for workflow automation and orchestration Work with financial or regulated datasets while ensuring strong compliance and governance Drive best practices in data quality, lineage, cataloging, and metadata management Primary Technical Skills Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks Design efficient Delta Lake models for reliability and performance Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems Automate ingestion and workflows using Python and REST APIs Support downstream analytics for BI, data science, and application workloads Write optimized SQL/T-SQL queries, stored procedures, and curated datasets Automate DevOps workflows, testing pipelines, and 
workspace configurations Additional Skills Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions CI/CD: Azure DevOps Orchestration: Apache Airflow (plus) Streaming: Delta Live Tables MDM: Profisee (nice-to-have) Databases: SQL Server, Cosmos DB Soft Skills Strong analytical and problem-solving mindset Excellent communication and cross-team collaboration Detail-oriented with a high sense of ownership and accountability
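Delta Lake ingestion pipelines like those described above lean heavily on MERGE (upsert) semantics: update rows that match on a key, insert the rest. A plain-dict toy illustrates that behavior; in Databricks this is `MERGE INTO target USING updates ...` against Delta tables, not Python dicts.

```python
def merge_upsert(target, updates, key="id"):
    """Mimic Delta Lake MERGE: for each update row, overwrite the target
    row with the same key, or insert it if no match exists.

    `target` maps key -> row; returns a new merged mapping.
    """
    merged = dict(target)
    for row in updates:
        merged[row[key]] = row
    return merged

target = {1: {"id": 1, "status": "open"}, 2: {"id": 2, "status": "open"}}
updates = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
result = merge_upsert(target, updates)
print(sorted(result))           # [1, 2, 3]
print(result[2]["status"])      # closed
```

The real MERGE adds conditional clauses (WHEN MATCHED AND ...) and transactional guarantees, but the matched-update/unmatched-insert core is the same.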
    $92k-122k yearly est. 5d ago
  • Data Scientist - ML, Python

    Avance Consulting 4.4 company rating

    McLean, VA jobs

    10+ years of experience in Information Technology required. Python Programming: At least 5 years of hands-on experience with Python, particularly in frameworks like FastAPI, Django, and Flask, and experience using AI frameworks. • Access Control Expertise: Strong understanding of access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC). • API and Connector Development: Experience in developing API connectors using Python for extracting and managing access control data from platforms like Azure, SharePoint, Java, .NET, WordPress, etc. • AI and Machine Learning: Hands-on experience integrating AI into applications for automating tasks such as access control reviews and identifying anomalies. • Cloud and Microsoft Technologies: Proficiency with Azure services, Microsoft Graph API, and experience integrating Python applications with Azure for access control reviews and reporting. • Reporting and Visualization: Experience using reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports related to security and access control metrics. • Communication Skills: Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time. • PlainID: Experience or familiarity with PlainID platforms for identity and access management. • Azure OpenAI: Familiarity with Azure OpenAI technologies and their application in access control and security workflows. • Power BI: Experience with Microsoft Power BI for data visualization and reporting. • Agile Methodologies: Experience working in Agile environments and familiarity with Scrum methodologies for delivering security solutions.
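RBAC, named in the requirements above, reduces to mapping roles to permission sets and checking membership at request time. A minimal sketch with made-up role and permission names (ABAC would additionally evaluate attributes of the user, resource, and context):

```python
# Hypothetical role -> permission mapping; real systems load this from a policy store.
ROLE_PERMISSIONS = {
    "viewer":  {"report:read"},
    "analyst": {"report:read", "dashboard:read"},
    "admin":   {"report:read", "dashboard:read", "access:review"},
}

def is_allowed(user_roles, permission):
    """RBAC check: allow if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_allowed(["analyst"], "dashboard:read"))  # True
print(is_allowed(["viewer"], "access:review"))    # False
```

Access-review automation, as the listing describes, essentially iterates this check over every (user, permission) pair and flags grants that no current role justifies.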
    $76k-111k yearly est. 5d ago
  • Senior Data Engineer

    Luna Data Solutions, Inc. 4.4 company rating

    Austin, TX jobs

    We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX. The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset. Key Responsibilities 1. Data Architecture & Strategy Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database. Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing. Define and support data mesh and data hub patterns to promote domain-driven design and federated governance. Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments. 2. Data Integration & Pipeline Development Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads. Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment. Optimize pipelines for cost-effectiveness, performance, and scalability. 3. Master Data Management (MDM) & Data Governance Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy). Define and manage data governance, metadata, and data quality frameworks. Partner with business teams to align data standards and maintain data integrity across domains. 4. 
API Management & Integration Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps. Design secure, reliable data services for internal and external consumers. Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate. 5. Database & Platform Administration Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse. Monitor and optimize cost, performance, and scalability across Azure data services. Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep. 6. Collaboration & Leadership Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions. Mentor junior engineers and define best practices for coding, data modeling, and solution design. Contribute to enterprise-wide data strategy and roadmap development. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields. 5+ years of hands-on experience in Azure-based data engineering and architecture. Strong proficiency with the following: Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2 SQL, Python, PySpark, PowerShell Azure API Management and Logic Apps Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas). Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs. Familiarity with MDM concepts, data governance frameworks, and metadata management. Experience with automation, data-focused CI/CD, and IaC. Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles. What We Offer Competitive compensation and benefits package Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. 
All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
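The medallion (Bronze/Silver/Gold) layering this listing describes can be shown end to end with plain Python records: Bronze keeps raw data as landed, Silver cleans and deduplicates it, Gold aggregates it for consumption. The records and cleanup rules here are illustrative, not from any real pipeline.

```python
# Bronze: raw records exactly as landed (duplicates and messy formats preserved).
bronze = [
    {"order_id": "A1", "amount": "100.0", "region": "tx "},
    {"order_id": "A1", "amount": "100.0", "region": "tx "},  # duplicate load
    {"order_id": "B7", "amount": "40.5", "region": "NY"},
]

# Silver: typed, trimmed, deduplicated on the business key.
silver = {}
for rec in bronze:
    silver[rec["order_id"]] = {"order_id": rec["order_id"],
                               "amount": float(rec["amount"]),
                               "region": rec["region"].strip().upper()}

# Gold: consumption-ready aggregate (revenue per region).
gold = {}
for rec in silver.values():
    gold[rec["region"]] = gold.get(rec["region"], 0.0) + rec["amount"]

print(gold)  # {'TX': 100.0, 'NY': 40.5}
```

In Azure these layers would typically be Delta tables in ADLS Gen2, with Data Factory or Databricks jobs promoting data from one layer to the next.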
    $74k-95k yearly est. 5d ago
  • Data Engineer

    Interactive Resources-IR 4.2 company rating

    Austin, TX jobs

    About the Role We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions. What We're Looking For 8+ years designing and delivering scalable data pipelines in modern data platforms Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery Ability to lead cross-functional initiatives in matrixed teams Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning Hands-on experience with Azure, Snowflake, and Databricks, including system integrations Key Responsibilities Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD Use Apache Airflow and similar tools for workflow automation and orchestration Work with financial or regulated datasets while ensuring strong compliance and governance Drive best practices in data quality, lineage, cataloging, and metadata management Primary Technical Skills Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks Design efficient Delta Lake models for reliability and performance Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems Automate ingestion and workflows using Python and REST APIs Support downstream analytics for BI, data science, and application workloads Write optimized SQL/T-SQL queries, stored procedures, and curated datasets Automate DevOps workflows, testing pipelines, and 
workspace configurations Additional Skills Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions CI/CD: Azure DevOps Orchestration: Apache Airflow (plus) Streaming: Delta Live Tables MDM: Profisee (nice-to-have) Databases: SQL Server, Cosmos DB Soft Skills Strong analytical and problem-solving mindset Excellent communication and cross-team collaboration Detail-oriented with a high sense of ownership and accountability
    $84k-111k yearly est. 5d ago
  • Data Scientist

    The Intersect Group 4.2 company rating

    Phoenix, AZ jobs

    We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level. Day-to-Day Responsibilities Compare and validate numbers across multiple data systems Investigate discrepancies and understand how metrics are derived Perform data science and data analysis tasks Build and maintain AI/ML models using Python Interpret model results, fine-tune algorithms, and iterate based on findings Validate and reconcile data from different sources to ensure accuracy Work with sales and production data to produce item-level pricing recommendations Support ongoing development of a new data warehouse and create queries as needed Review Power BI dashboards (Power BI expertise not required) Contribute to both ML-focused work and general data science responsibilities Improve and refine an existing ML pricing model already in production Qualifications Strong proficiency with MS SQL Server Experience creating and deploying machine learning models in Python Ability to interpret, evaluate, and fine-tune model outputs Experience validating and reconciling data across systems Strong foundation in machine learning, data modeling, and backend data operations Familiarity with querying and working with evolving data environments
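The cross-system validation in the day-to-day responsibilities above ("compare and validate numbers across multiple data systems") is essentially tolerance-based reconciliation. A minimal sketch with hypothetical metric names; the 1% relative tolerance is an arbitrary illustrative threshold.

```python
def reconcile(source_a, source_b, rel_tol=0.01):
    """Flag metrics that are missing from one system or whose values
    disagree beyond a relative tolerance."""
    discrepancies = []
    for metric in sorted(set(source_a) | set(source_b)):
        a, b = source_a.get(metric), source_b.get(metric)
        if a is None or b is None:
            discrepancies.append(metric)  # present in only one system
        elif abs(a - b) > rel_tol * max(abs(a), abs(b), 1e-9):
            discrepancies.append(metric)  # values disagree materially
    return discrepancies

warehouse = {"units_sold": 1050, "revenue": 52000.0}
pos_system = {"units_sold": 1050, "revenue": 49500.0}
print(reconcile(warehouse, pos_system))  # ['revenue']
```

Flagged metrics then get the "investigate how each system derives the number" treatment the listing describes, before any model is trained on the data.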
    $76k-109k yearly est. 5d ago
