Data Engineer jobs at Envision Healthcare - 684 jobs
Data Architect
Novocure Inc. 4.6
Rochester, NY jobs
We are seeking an experienced and innovative Data Architect to lead the design, development, and optimization of our enterprise data architecture. This individual will play a critical role in aligning data strategy with business objectives, ensuring data integrity, and driving value from data across multiple platforms. The ideal candidate will have deep expertise in data architecture best practices and technologies, particularly across SAP S/4 HANA, Veeva CRM, Veeva Vault, SaaS platforms, Operational Data Stores (ODS), and Master Data Management (MDM) platforms.
This is a full-time position reporting to the Director, Enterprise Architecture.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
Design, develop, and maintain scalable and secure enterprise data architecture solutions across SAP S/4 HANA, Veeva CRM, and Veeva Vault environments.
Serve as a subject matter expert for Operational Data Stores and Master Data Management architecture, ensuring clean, consistent, and governed data across the enterprise (see the sketch after this list).
Collaborate with cross-functional teams to identify data needs, establish data governance frameworks, and define data integration strategies.
Develop data models, data flows, and system integration patterns that support enterprise analytics and reporting needs.
Evaluate and recommend new tools, platforms, and methodologies for improving data management capabilities.
Ensure architectural alignment with data privacy, regulatory, and security standards.
Provide leadership and mentoring to data engineering and analytics teams on best practices in data modeling, metadata management, and data lifecycle management.
Contribute to data governance initiatives by enforcing standards, policies, and procedures for enterprise data.
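To make the MDM duties above concrete: consolidating duplicate records into a single governed "golden record" is the heart of Master Data Management. Below is a minimal Python sketch of one common survivorship rule (keep the most recently updated non-null value per attribute); the field names and rule are invented for illustration and do not reflect Novocure's actual tooling.

```python
# Minimal master-data "survivorship" sketch: merge duplicate customer
# records into one golden record. Field names and the merge rule are
# invented for illustration; real MDM platforms drive this from
# governed match-and-merge rules.
from datetime import date

records = [
    {"name": "ACME Corp",  "email": None,          "updated": date(2023, 1, 5)},
    {"name": "Acme Corp.", "email": "ap@acme.com", "updated": date(2024, 6, 1)},
]

def golden_record(dupes):
    # Survivorship rule: for each attribute, keep the most recently
    # updated non-null value among the duplicates.
    merged = {}
    for field in ("name", "email"):
        candidates = [r for r in dupes if r[field] is not None]
        winner = max(candidates, key=lambda r: r["updated"])
        merged[field] = winner[field]
    return merged

print(golden_record(records))  # {'name': 'Acme Corp.', 'email': 'ap@acme.com'}
```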
QUALIFICATIONS/KNOWLEDGE:
Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
Minimum of 8 years of experience in data architecture, data integration, or enterprise data management roles.
Proven experience in designing and implementing data solutions on SAP S/4 HANA, including integration with other enterprise systems.
Strong hands-on experience with SaaS platforms, including data extraction, modeling, and harmonization.
Deep understanding of Operational Data Stores and MDM design patterns, implementation, and governance practices.
Proficiency in data modeling tools (e.g., Erwin, SAP PowerDesigner), ETL tools (e.g., Business Objects Data Services, SAP Data Services), and integration platforms (e.g., MuleSoft).
Familiarity with cloud data architecture (e.g., AWS, Azure, GCP) and hybrid data environments.
Excellent communication and stakeholder management skills.
OTHER:
Experience with pharmaceutical, life sciences, or regulated industry environments.
Knowledge of data privacy regulations such as GDPR and HIPAA, and of data compliance frameworks.
Ability to travel domestically and internationally as needed for high-priority projects.
Novocure is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state, or local law. We actively seek qualified candidates who are protected veterans or individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Novocure is committed to providing an interview process that is inclusive of our applicants' needs. If you are an individual with a disability and would like to request an accommodation, please email
ABOUT NOVOCURE:
Our vision
Patient-forward: aspiring to make a difference in cancer.
Our patient-forward mission
Together with our patients, we strive to extend survival in some of the most aggressive forms of cancer by developing and commercializing our innovative therapy.
Our patient-forward values
innovation
focus
drive
courage
trust
empathy
#LI-RJ1
$97k-130k yearly est. 4d ago
Data Architect
Novocure Inc. 4.6
Orlando, FL jobs
$86k-118k yearly est. 4d ago
Data Architect
Novocure Inc. 4.6
Miami, FL jobs
$81k-114k yearly est. 4d ago
Data Architect
Novocure Inc. 4.6
Buffalo, NY jobs
$97k-130k yearly est. 4d ago
Data Architect
Novocure Inc. 4.6
Houston, TX jobs
$88k-120k yearly est. 4d ago
Data Architect
Novocure Inc. 4.6
Dallas, TX jobs
$89k-121k yearly est. 4d ago
Senior Full Stack Software Developer
Advanced Recovery Systems 4.4
Maitland, FL jobs
Seeking a Senior Software Developer - Salesforce and/or WordPress & AI Integrations
We are actively seeking a talented and experienced Senior Software Developer to become a vital member of our dynamic team. This is an exceptional chance to contribute your expertise and creativity in a collaborative environment, where innovation and excellence are at the forefront of our operations.
Advanced Recovery Systems (ARS), an integrated behavioral healthcare management company, is dedicated to providing exceptional care for addiction, substance abuse, and mental health issues. With our facilities located across various regions of the U.S., we have been committed to furthering our mission and applying advanced approaches to patient care. Learn more about ARS and our transformative work at ***********************************
We value our employees and recognize the critical role they play in saving lives.
The Senior Full Stack Developer - Salesforce, WordPress & AI Integrations will play a pivotal role in the development and integration of AI products, focusing on both front-end and back-end development. This position involves working on system integrations, API development, assistant system prompt engineering, multimodal LLM integration, UI/UX design, and creating robust back-end services.
This highly technical role bridges Salesforce, WordPress, and enterprise integration layers to deliver intelligent, scalable, and secure solutions. The ideal candidate is a hands-on senior engineer with deep experience in software engineering, cloud architecture, and AI-assisted development, leveraging modern tools like Cursor and OpenAI-based frameworks to create next-generation, AI-enabled digital systems.
CORE JOB DUTIES:
Salesforce Development & Integration
Serve as a senior technical resource for Salesforce development, integration, and architecture.
Design, develop, and maintain Apex classes, Lightning Web Components (LWC), and custom APIs.
Integrate Salesforce with EHR, HR, billing, and marketing systems using secure REST and GraphQL APIs.
Ensure compliance with HIPAA, organizational security standards, and best development practices.
Contribute to architectural and data design decisions for scalability and maintainability.
WordPress Development & Web Systems
Develop and maintain custom WordPress solutions, including plugins, themes, and API integrations with Salesforce and backend systems.
Write clean, efficient code using PHP, JavaScript (React/Vue), and REST APIs.
Collaborate with design teams to implement responsive, accessible, and user-friendly interfaces.
Manage deployment pipelines, staging environments, and perform code reviews to ensure reliability and maintainability.
Backend Engineering & System Architecture
Architect and implement API-driven services using Node.js, Django, or Flask (a minimal sketch follows this section).
Design and maintain secure, scalable backend systems that unify data across Salesforce, WordPress, and internal applications.
Implement data synchronization, authentication, and caching strategies for performance optimization.
Maintain automated testing frameworks and CI/CD pipelines to ensure system quality and uptime.
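As a rough illustration of the API-driven service pattern named above, here is a minimal Flask sketch with a naive time-based cache; the route, payload, and TTL are invented for illustration and are not ARS's actual API.

```python
# Minimal Flask sketch of an API-driven backend service with a naive
# in-memory, time-based cache. Route, payload, and TTL are illustrative.
import time
from flask import Flask, jsonify

app = Flask(__name__)
_cache = {}  # key -> (expiry_timestamp, value)

def cached(key, ttl_seconds, compute):
    # Return the cached value if it has not expired; otherwise recompute.
    entry = _cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]
    value = compute()
    _cache[key] = (time.time() + ttl_seconds, value)
    return value

@app.route("/api/patients/<int:patient_id>/summary")
def patient_summary(patient_id):
    # In a real system this would query Salesforce/EHR via secure APIs.
    summary = cached(
        f"summary:{patient_id}", 60,
        lambda: {"patient_id": patient_id, "status": "active"},
    )
    return jsonify(summary)

if __name__ == "__main__":
    app.run(debug=True)
```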
AI-Enhanced Development & Automation
Use AI-assisted development tools such as Cursor or GitHub Copilot to enhance efficiency and maintain coding standards.
Contribute to the development and integration of AI products, including Heather AI and other intelligent systems.
Design and refine prompt engineering frameworks and multimodal LLM integrations (text, voice, visual) to enhance application capabilities.
Research and evaluate emerging AI technologies and APIs to identify opportunities for automation and operational improvement.
Technical Collaboration & Documentation
Partner with cross-functional teams across IT, Operations, Marketing, and Clinical divisions to translate business needs into scalable technical solutions.
Maintain detailed documentation of architectures, data flows, APIs, and integration points.
Provide technical mentorship through code reviews and best practice sharing.
Participate in architecture discussions, sprint planning, and system improvement initiatives.
Knowledge/Skills
Expert in Salesforce (Apex, LWC, SOQL) and WordPress (PHP, React, REST APIs).
Proficient in backend frameworks (Node.js, Django, Flask) and API-first architectures.
Skilled in AI and automation technologies, including Cursor, OpenAI, and LangChain.
Strong understanding of data modeling, integration security, and DevOps workflows.
Exceptional analytical, problem-solving, and cross-functional collaboration skills.
Core Competencies
Full-Stack Development & Architecture
AI-Integrated Software Engineering
Salesforce & WordPress Expertise
Secure API & Data Integration
System Scalability & Performance Optimization
Continuous Improvement & Innovation
Requirements
Education:
Bachelor's or Master's Degree in Computer Science, Data Science, Engineering, or a related field required. Advanced degrees are a plus.
Experience:
Minimum of 5 years in Salesforce software development, with a focus on AI product development, system integrations, API development, and front-end/back-end technologies.
Experience in healthcare or a related industry is highly desirable.
Benefits
We offer great benefits, including a 401(k), a generous paid time off plan, medical, dental, and vision coverage, and many more.
BENEFITS: Benefits begin on the 1st day of the month following date of hire.
Competitive Starting Pay
Paid Time Off: Up to 2 weeks/year | Accrual begins on Day 1 of employment
Retirement: 401K + match
Insurance: Health, Vision, Dental, Life
Matching HSA: Up to $1500/year company contribution
Travel Concierge, LifeMart Employee Discounts, Health Advocate, EAP Program
And much more!
Advanced Recovery Systems complies with state and federal nondiscrimination laws and policies that prohibit discrimination based on age, color, disability, national origin, race, religion, or sex. It is unlawful to retaliate against individuals or groups on the basis of their participation in a complaint of discrimination or their opposition to discriminatory practices/EEO. We are proud to be a drug-free workplace.
$90k-115k yearly est. 1d ago
Interoperability Engineer (Workday)
Moffitt Cancer Center 4.9
Tampa, FL jobs
Highlights
The Workday Interoperability Engineer serves as a senior technical expert responsible for architecting, deploying, and maintaining Workday integrations and interoperability frameworks that support secure, scalable data exchange across HR, finance, clinical, research, and enterprise systems.
Acts as a subject matter expert in Workday integration patterns including Workday Studio, EIBs, RaaS, APIs, event-driven integrations, and streaming/data pipelines.
Owns the design and operational delivery of Workday-centric interoperability initiatives, ensuring reliability and alignment with business outcomes.
Provides mentorship and technical leadership to engineers and analysts, guiding them in best practices for Workday and enterprise integration.
Combines deep Workday integration expertise with an understanding of cross-functional business processes and downstream system dependencies.
The role will also be responsible for developing and maintaining frameworks that support information exchange needs across clinical systems.
Responsibilities
Hands-on experience building integrations with Workday HCM, Finance, Payroll, Recruiting, or other Workday modules.
Strong understanding of Workday data structures, security groups, calculated fields, and Workday report development (including RaaS).
Proficiency in developing integrations using Workday Studio, EIB, Core Connectors, and PECI (Payroll Effective Change Interface).
Translate Workday integration requirements into technical specifications, integration contracts, and design standards.
Ability to gather API requirements, translate them into technical specifications, and produce comprehensive API design documentation (standards, contracts, and specifications).
Hands-on experience implementing application security frameworks, including OAuth2, SAML, OpenID Connect, and JWT (a token-request sketch follows this list).
Experience in API testing strategies (functional, regression, performance, and security testing) using tools such as Postman, SoapUI, JMeter, or equivalents.
Good understanding of firewall and advanced networking concepts to support secure system integrations.
Provide on-call support and keep integration documentation and records up to date.
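For the OAuth2 item above, here is a minimal Python sketch of the client-credentials token request that this style of integration typically begins with; the endpoint URL and credentials are placeholders, not Workday- or Moffitt-specific values.

```python
# Hedged sketch: obtaining an OAuth2 access token with the
# client-credentials grant, the pattern named in the posting.
# token_url, client_id, and client_secret are placeholders.
import requests

def get_access_token(token_url, client_id, client_secret):
    resp = requests.post(
        token_url,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The token is then sent on each integration/API call, e.g.:
# requests.get(api_url, headers={"Authorization": f"Bearer {token}"})
```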
Credentials and Experience:
Bachelor's degree in Computer Science, systems analysis, or a related field of study.
Minimum 7 years of experience leading end-to-end integration implementations, with a strong emphasis on Workday and supporting middleware technologies like Cloverleaf and Boomi.
Minimum of 3 years' experience working with cross-functional teams providing expert knowledge for ERP data analysis to design, build, and deploy integrations.
The Ideal Candidate will have the following experience:
Strong hands-on experience developing Workday integrations using Workday Studio, EIBs, Core Connectors, RaaS, and Workday Web Services.
Experience designing and supporting interoperability between Workday and downstream systems.
Familiarity with healthcare interoperability concepts and standards such as HL7 or FHIR, especially where Workday interacts with clinical or research environments.
Proficiency with integration platforms such as Boomi and/or Cloverleaf for orchestrating Workday-related data flows.
Experience with EMR systems such as Epic is a plus, particularly when supporting Workday-to-clinical data exchange.
$67k-87k yearly est. 2d ago
Cybersecurity Engineer
Independent Living Systems, LLC 4.4
Miami, FL jobs
We are seeking a Cybersecurity Engineer to join our team at Independent Living Systems (ILS). ILS, along with its affiliated health plans known as Florida Community Care and Florida Complete Care, is committed to promoting a higher quality of life and maximizing independence for all vulnerable populations.
About the Role:
As a Cybersecurity Engineer, you will play a critical role in protecting our organization's information systems and data from cyber threats. You will design, implement, and maintain security measures that safeguard our networks, applications, and infrastructure. Your work will involve continuous monitoring and analysis to detect vulnerabilities and respond to security incidents promptly. Collaborating with cross-functional teams, you will ensure compliance with industry standards and regulatory requirements. Ultimately, your efforts will help maintain the confidentiality, integrity, and availability of our digital assets, supporting the organization's overall risk management strategy.
Minimum Qualifications:
Bachelor's degree in Computer Science, Information Technology, Cybersecurity, or a related field.
5+ years of work experience in cybersecurity-related duties such as incident detection, response, and forensics.
Strong understanding of network protocols, firewalls, VPNs, and security technologies.
Experience with security monitoring tools and incident response processes.
Familiarity with regulatory frameworks such as NIST, ISO 27001, or GDPR.
Must have at least one professional cybersecurity certification (CISSP, CISM, CASP, CEH).
Relevant experience may substitute for the educational requirement on a year-for-year basis.
Preferred Qualifications:
Master's degree in Computer Science, Information Technology, Cybersecurity, or a related field.
Knowledge of scripting or programming languages such as Python, PowerShell, or Bash.
Hands-on experience with SIEM tools and automation of security workflows.
Background in threat intelligence and advanced persistent threat (APT) mitigation.
Experience designing, implementing, and managing systems for user authentication, authorization, and access control, with skills in platforms and standards such as Okta, Azure AD, SAML, and OIDC.
Responsibilities:
Develop, configure, and maintain security tools such as firewalls, intrusion detection/prevention systems, and endpoint protection solutions.
Conduct regular security assessments, vulnerability scans, and penetration tests to identify and remediate potential risks.
Monitor network traffic and security alerts to detect and respond to suspicious activities or breaches.
Collaborate with IT and development teams to integrate security best practices into system architecture and software development lifecycle.
Create and maintain documentation related to security policies, procedures, and incident response plans.
Stay current with emerging cybersecurity threats, technologies, and regulatory changes to proactively enhance security posture.
Provide training and guidance to staff on security awareness and best practices.
$65k-93k yearly est. 3d ago
Principal Data Platform Engineer
Blink Health Administration LLC 3.4
New York, NY jobs
Blink Health is the fastest-growing healthcare technology company that builds products to make prescriptions accessible and affordable to everybody. Our two primary products, BlinkRx and Quick Save, remove traditional roadblocks within the current prescription supply chain, resulting in better access to critical medications and improved health outcomes for patients.
BlinkRx is the world's first pharma-to-patient cloud that offers a digital concierge service for patients who are prescribed branded medications. Patients benefit from transparent low prices, free home delivery, and world-class support on this first-of-its-kind centralized platform. With BlinkRx, never again will a patient show up at the pharmacy only to discover that they can't afford their medication, their doctor needs to fill out a form for them, or the pharmacy doesn't have the medication in stock.
We are a highly collaborative team of builders and operators who invent new ways of working in an industry that historically has resisted innovation. Join us!
About the Role
We are seeking a Principal Data Platform Engineer to define and evolve our real-time and batch data platform built on AWS and Databricks. This role owns the technical vision for how data is ingested, processed, stored, and served as trusted datasets, metrics, and APIs that power products, decisioning systems, and operational workflows.
As a Principal, you are a technical authority and force multiplier: deeply hands-on while setting architectural direction across streaming systems, lakehouse design, and data-serving layers. You will partner closely with engineering, analytics, and product teams to simplify the platform, eliminate legacy patterns, and establish scalable, reliable foundations for real-time analytics.
What You Will Do
* Own the end-to-end data platform architecture, spanning streaming ingestion, lakehouse storage, and data/insight serving layers
* Architect real-time streaming systems using AWS Kinesis and Spark Structured Streaming to support low-latency use cases
* Design stream-to-lakehouse convergence patterns that unify real-time and historical data with strong correctness guarantees
* Build and evolve data, metrics, and feature APIs that expose curated datasets for downstream applications and analytics
* Establish canonical event schemas and data contracts to support event-driven and API-based consumption
* Make deep technical decisions across AWS infrastructure (Kinesis, S3, IAM, networking) and Databricks internals (clusters, jobs, Delta Lake, performance tuning)
* Drive platform modernization, retiring legacy tools and patterns in favor of simpler, lakehouse-first designs
* Set standards for high-performance SQL and Spark workloads, optimizing for cost, latency, and scale
* Lead complex platform initiatives from architecture through production delivery and ongoing reliability
* Provide technical leadership and mentorship, shaping best practices for platform design, data quality, and operability
Technical Skills
* Deep expertise in real-time and distributed data systems, including AWS Kinesis and Spark Structured Streaming (a minimal sketch follows this list)
* Strong command of Databricks on AWS (Delta Lake, clusters, jobs) and core AWS services (S3, IAM, networking)
* Proven experience designing data-serving architectures and APIs for analytics, metrics, and feature consumption
* Advanced Python and SQL skills for building scalable, high-performance data pipelines
* Demonstrated ability to design idempotent, replayable, and observable data platforms at scale
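As a hedged sketch of the Kinesis-to-Delta stream-to-lakehouse pattern these bullets describe, assuming the Databricks runtime's Kinesis source; the stream, schema, checkpoint path, and table names are invented:

```python
# Hedged sketch: Kinesis -> Spark Structured Streaming -> Delta Lake.
# Assumes the Databricks Kinesis source; names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-to-delta").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

raw = (spark.readStream.format("kinesis")        # Databricks Kinesis source
       .option("streamName", "patient-events")   # placeholder stream name
       .option("region", "us-east-1")
       .option("initialPosition", "latest")
       .load())

# The Kinesis source exposes the record payload as a binary `data` column.
events = (raw.select(from_json(col("data").cast("string"), event_schema).alias("e"))
          .select("e.*"))

(events.writeStream.format("delta")
 .option("checkpointLocation", "/chk/patient_events")  # placeholder path
 .toTable("lakehouse.raw_patient_events"))             # placeholder table
```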
Why Join Us:
It is rare to have a company that both deeply impacts its customers and is able to provide its services across a massive population. At Blink, we have a huge impact on people when they are most vulnerable: at the intersection of their healthcare and finances. We are also the fastest-growing healthcare company in the country and are driving that impact across millions of new patients every year. Our business model not only helps people, but drives economics that allow us to build a generational company. We are a relentlessly learning, constantly curious, and aggressively collaborative cross-functional team dedicated to inventing new ways to improve the lives of our customers.
We are an equal opportunity employer and value diversity of all kinds. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
$102k-146k yearly est. 16d ago
Business Intelligence & Data Engineering - Phoenix
Meadows of Wickenburg 4.0
Phoenix, AZ jobs
This is where you change your story…
At Meadows we understand that new directions for career advancement and improvement can be scary, but we are excited to offer you the possibility of a rewarding new chapter with us! Come join us in transforming lives!
Who are we?
Meadows Behavioral Health is a leader in the behavioral health industry. We offer a range of specialized programs including residential, outpatient and virtual treatment. We provide care for drug and alcohol addiction, trauma, sexual addiction, behavioral health conditions, and co-occurring disorders. We offer state-of-the-art care including neurofeedback and other services. Our evidence-based approach is rooted in decades of clinical experience, with more than 45 years in the field. Our approach is different and success stories from our patients are the proof.
Who are you?
Are you compassionate and innovative, with a passion for making an impact? Are you looking to get your foot in the door with a company that will believe in your abilities and train you to advance? 75% of our current top-level executives were promoted from within.
We might be a perfect fit for you!
*Please note - this is a hybrid position based out of our North Phoenix corporate offices. Please only apply if you live within a commutable distance to our location*
Position Summary:
The Data Engineer will play a crucial role in building, maintaining, and enhancing business intelligence solutions, focusing on data modeling, report building, and data analysis within cloud-based platforms, mainly Microsoft Azure. This role involves designing and optimizing data pipelines using Azure Data Factory, managing data lakes and data warehouses using Microsoft Azure Synapse, and creating meaningful reports and dashboards within Tableau. The Data Engineer will be responsible for designing, coding, and supporting functional intra-departmental data procedures and process flows, with additional focus on cross-departmental data integration procedures through ETL process builds. The Data Engineer will also be responsible for performing ad hoc SQL queries to extract data for analysis within the requesting department, ensuring the integrity and accessibility of critical business data to support decision-making. This is not an analyst role: it focuses primarily on the technical work of making data available to the business in a form that can be analyzed. The Data Engineer needs to be comfortable building all aspects of the data pipeline, from source through storage to presentation, rather than analyzing what the data means to the business.
Essential Job Functions:
Design, develop, and maintain ETL/ELT pipelines using Microsoft Azure Data Factory to move, transform, and load data from various sources into Azure Data Lakes and Azure Synapse. Primary focus on enhancement of data integration, normalization, and standardization for business intelligence and reporting.
Build, manage, and monitor data pipelines and ETL processes along with documentation and usage statistics/KPIs to improve processing performance, database capacity issues, data replication, and other distributed data issues.
Responsible for designing and building interactive dashboards, alerts, and reports on both a recurring and ad-hoc basis with our BI tools and platforms, primarily Tableau. Develops reports as needed against multiple types of data sources.
Design and implement data models for use in Tableau for reporting and dashboarding. Ensure data integrity, quality, and consistency in Tableau reports and dashboards.
Work closely with business stakeholders to understand their data needs and meet them via ad hoc data extraction (a minimal sketch follows this list). Write and execute queries against the data warehouse or other sources in support of these ad hoc data requests and quick reporting needs. The sources include both SQL and NoSQL databases.
Implement security and access controls for databases and data pipelines to ensure compliance with security standards and protect sensitive data. Maintain proper documentation of data access, database schemas, and transformation logic for transparency and governance.
Collaborate with the Director of Infrastructure on cloud data architecture and integration strategies, ensuring that Azure Data Lakes and Azure Synapse are optimized for performance and scalability.
Create and foster a collaborative, friendly, and supportive environment in which information technology users can raise their requirements and projects and troubleshoot their application and data-related problems.
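A minimal sketch of the ad hoc extraction duty referenced above, assuming a Synapse SQL endpoint reachable over ODBC; the server, credentials, and table names are placeholders:

```python
# Minimal sketch of an ad hoc extraction: run a SQL query against a
# Synapse (SQL) endpoint and hand the result over as CSV.
# Server, database, and table names are placeholders.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"  # placeholder endpoint
    "DATABASE=analytics;UID=svc_bi;PWD=...;"    # use a secret store in practice
)

query = """
    SELECT facility, admit_date, COUNT(*) AS admissions
    FROM dbo.admissions                          -- placeholder table
    WHERE admit_date >= DATEADD(day, -30, GETDATE())
    GROUP BY facility, admit_date
"""

df = pd.read_sql(query, conn)
df.to_csv("admissions_last_30_days.csv", index=False)
```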
Qualifications
Education, Skills, and Experience Requirements:
Bachelor's degree in computer science, data science, information technology, or related field, or equivalent education and experience.
Relevant certification in cloud platforms, data analytics, or business intelligence is a plus (e.g., Microsoft Certified: Azure Data Engineer Associate, Tableau Desktop Specialist).
At least 2 years of experience in data modeling and report building, specifically using Tableau for dashboard and report creation.
Direct experience with Microsoft Azure Data Factory, Azure Data Lakes, and Azure Synapse for building, maintaining, and optimizing data pipelines and data integration solutions.
Proficiency in SQL for querying relational and non-relational data sources, including experience with ad hoc queries for quick data extraction and reporting.
Experience in developing, maintaining, and enhancing ETL/ELT processes for data transformation and loading in cloud-based environments.
Strong understanding of data warehousing concepts, data modeling techniques, and best practices for cloud data architecture.
Proficiency in scripting languages like Python or R is preferred, particularly for data manipulation and analysis.
Experience with AI/ML-based analytics tools or other reporting platforms is a plus, but not required. Examples include Crystal Reports, Power BI, Salesforce CRM Reporting, IBM Cognos, etc.
We are a Drug Free Company. All positions are designated as “Safety Sensitive” positions and in light of our company mission, the Company does not employ medical marijuana cardholders. Following an offer of employment, and prior to reporting to work, all applicants will be required to submit to and pass a substance abuse screen.
The Meadows is an equal opportunity employer committed to diversity and inclusion in the workplace. Qualified candidates will receive consideration without regard to race, color, religion, sex, sexual orientation, age, national origin, disability, protected veteran status or any other factor protected by applicable federal, state or local laws. The Meadows provides reasonable accommodations to individuals with disabilities and if you need reasonable accommodation during any time of the employment process please reach out
$98k-124k yearly est. 6d ago
Senior Data Engineer (TS/SCI)
Kentro 3.9
Tampa, FL jobs
Thank you for considering IT Concepts dba Kentro, where innovation drives opportunity and collaboration leads to success. Our dynamic community of experts is fully committed to advancing our customers' missions, fostering professional growth, and making a positive impact on our communities.
By joining our supportive community, you will find that Kentro is dedicated to your personal and professional development. Together, we can drive meaningful change, spark innovation, and achieve extraordinary milestones.
Kentro is hiring a Senior Data Engineer to drive the massive data discovery and classification effort for the Zero Trust initiative at U.S. Special Operations Command (USSOCOM). Before data can be protected, it must be found and understood. You will be responsible for illuminating "dark data" across the Command's complex information environment, ranging from hyperscale cloud data lakes on NIPR to legacy file shares and isolated storage arrays on the SIPR and Top-Secret networks.
As a Data Engineer, you will architect and manage the deployment of advanced discovery platforms, specifically BigID and NetApp BlueXP. You will configure these tools to crawl petabytes of structured (SQL/Oracle), semi-structured (logs/NoSQL), and unstructured (SharePoint/file shares) data. Your primary mission is to build the "Global Data Inventory," a dynamic, real-time map of where sensitive CUI and classified intelligence resides, enabling the security teams to apply precision protection. You will use your knowledge of data pipelines and storage infrastructure to ensure that scanning operations provide 100% visibility without degrading network performance.
Responsibilities:
Data Discovery Architecture: Deploy and manage BigID and NetApp BlueXP scanners across hybrid environments, including configuring dockerized collectors for air-gapped discovery on the Top-Secret network.
Structured Data Mapping: Connect discovery tools to enterprise databases (SQL Server, Oracle, PostgreSQL) to scan schemas and columns for PII, DoD ID numbers, and other sensitive indicators without impacting database performance.
Unstructured Data Crawling: Configure scans for massive file repositories (NetApp NAS, QNAP, SharePoint On-Premises), optimizing scan windows and throttling to prevent latency for mission users (a toy pattern-scan sketch follows this list).
Cloud Data Integration: Utilize Microsoft Purview Data Map and custom connectors to inventory data residing in AWS S3 buckets, Azure Blobs, and Data Lakes.
Classification Tuning: Collaborate with mission owners to train Machine Learning (ML) classifiers to recognize unique USSOCOM data types (e.g., mission names, operational codes) and reduce false positive rates in the data inventory.
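To illustrate the custom pattern definition this posting calls for (see the Regex item under preferred skills), here is a toy Python scanner that walks a file share looking for SSN-shaped strings; it is illustrative only and is not BigID's or BlueXP's API.

```python
# Toy regex-based pattern scan of the kind used to define custom data
# patterns: walk a file share and flag SSN-shaped strings.
# Paths and the pattern are placeholders; enterprise scanners add
# validation, context scoring, and ML classification on top of this.
import os
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_tree(root):
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, start=1):
                        if SSN_PATTERN.search(line):
                            findings.append((path, lineno))
            except OSError:
                continue  # unreadable file; real scanners log and move on
    return findings

for path, lineno in scan_tree("/mnt/fileshare"):  # placeholder mount
    print(f"possible SSN at {path}:{lineno}")
```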
Location: Onsite in Tampa, FL
Position Duration: The duration of this role is 18 months from project commencement, with the possibility of extension (Project expected to commence in January 2026).
Requirements
Master's degree (MA/MS)
10+ years of relevant experience
Data Discovery Expertise: Proven experience deploying and managing enterprise data discovery and governance platforms such as BigID, Varonis, NetApp BlueXP (Data Sense), or Informatica.
Storage & Database Knowledge: Strong understanding of storage protocols (NFS, SMB/CIFS, S3) and database structures (SQL, NoSQL) to troubleshoot connectivity and scanning access.
Containerization: Proficiency with Kubernetes and Docker, as many modern discovery collectors are deployed as containerized microservices.
Data Handling: Experience dealing with large-scale data inventories (Petabyte scale) and understanding of data lineage and provenance concepts.
Preferred Experience & Skills ("Nice-to-Haves"):
Experience with Microsoft Purview Data Map (formerly Azure Purview).
Knowledge of Regex for custom data pattern definition.
Background in data engineering or ETL pipeline development.
Familiarity with USSOCOM network architecture and storage standards.
Certifications:
Required: CompTIA Security+ CE (or higher) to meet DoD 8570 IAT Level II requirements.
Preferred: Certified Kubernetes Administrator (CKA).
Preferred: Azure Data Engineer Associate or BigID Certified Professional.
Clearance:
Active Top-Secret clearance with SCI eligibility.
Benefits
The Company
We believe in generating success collaboratively, enabling long-term mission success, and building trust for the next challenge. With you as our partner, let's solve challenges, think innovatively, and maximize impact. As a valued member of our team, you have the unique opportunity to work in a diverse range of technology and business career paths, all while supporting our nation and delivering innovative technology solutions. We are a close community of experts that pride ourselves on creating an environment defined by teamwork, dedication, and excellence.
We hold three ISO certifications (27001:2013, 20000-1:2011, 9001:2015) and two CMMI ML 3 ratings (DEV and SVC).
Industry Recognition
Growth | Inc 5000's Fastest Growing Private Companies, DC Metro List Fastest Growing; Washington Business Journal: Fastest Growing Companies, Top Performing Small Technology Companies in Greater D.C.
Culture | Northern Virginia Technology Council Tech 100 Honoree; Virginia Best Place to Work; Washington Business Journal: Best Places to Work, Corporate Diversity Index Winner - Mid-Size Companies, Companies Owned by People of Color; Department of Labor's HireVets for our work helping veterans transition; SECAF Award of Excellence finalist; Victory Military Friendly Brand; Virginia Values Veterans (V3); Cystic Fibrosis Foundation Corporate Breath Award
Benefits
We offer a competitive benefits package including paid time off, healthcare benefits, supplemental benefits, a 401k with an employer match, discount perks, rewards, and more. We invest in our employees: every employee is eligible for education reimbursement for certifications, degrees, or professional development. Reimbursement amounts may fluctuate due to IRS limitations. We want you to grow as an expert and a leader, and we offer flexibility for you to take a course, complete a certification, or pursue other professional growth and networking. We are committed to supporting your curiosity and sustaining a culture that prioritizes continuous professional development.
We work hard; we play hard. Kentro is committed to incorporating fun into every day. We dedicate funds for activities, virtual and in-person: we host happy hours, holiday events, fitness & wellness events, and annual celebrations. In alignment with our commitment to our communities, we also host and attend charity galas and events. We believe in appreciating your commitment and building a positive workspace for you to be creative, innovative, and happy.
Commitment to Equal Opportunity Employment & VEVRAA
Kentro is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to disability, status as a protected veteran or any other status protected by applicable federal, state or local law.
Kentro is strongly committed to compliance with VEVRAA and other applicable federal, state, and local laws governing equal employment opportunity. We have developed comprehensive policies and procedures to ensure our hiring practices align with these requirements.
As part of our VEVRAA compliance efforts, Kentro has established an equal opportunity plan outlining our commitment to recruiting, hiring, and advancing protected veterans. This plan is regularly reviewed and updated to ensure its effectiveness.
We encourage protected veterans to self-identify during the application process. This information is strictly confidential and will only be used for reporting and compliance purposes as required by law. Providing this information is voluntary and will not impact your employment eligibility.
Our commitment to equal employment opportunity extends beyond legal compliance. We are dedicated to fostering an inclusive workplace where all employees, including protected veterans, are treated with dignity, respect, and fairness.
How to Apply
To apply to Kentro positions, please click the “Apply for this Job” button at the bottom of this job description or the “Application” button at the top. Please upload your resume and complete all the application steps. You must submit the application for Kentro to consider you for a position. If you need alternative application methods, please email ***************** and request assistance.
Accommodations
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. Reasonable Accommodations may be made to enable qualified individuals with disabilities to perform the essential functions. If you need to discuss reasonable accommodations, please email *****************.
#LI-SB2
$79k-112k yearly est. Auto-Apply 29d ago
SPARK Data Onboarding Engineer - Jersey City
Photon Group 4.3
Dallas, TX jobs
Job Title: PySpark Data Engineer
We are seeking a skilled PySpark Data Engineer to join our team and drive the development of robust data processing and transformation solutions within our data platform. You will be responsible for designing, implementing, and maintaining PySpark-based applications to handle complex data processing tasks, ensure data quality, and integrate with diverse data sources. The ideal candidate possesses strong PySpark development skills, experience with big data technologies, and the ability to work in a fast-paced, data-driven environment.
Key Responsibilities:
Data Engineering Development:
Design, develop, and test PySpark-based applications to process, transform, and analyze large-scale datasets from various sources, including relational databases, NoSQL databases, batch files, and real-time data streams.
Implement efficient data transformation and aggregation using PySpark and relevant big data frameworks.
Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs.
Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations (see the sketch after this list).
Data Analysis and Transformation:
Collaborate with data analysts, data scientists, and data architects to understand data processing requirements and deliver high-quality data solutions.
Analyze and interpret data structures, formats, and relationships to implement effective data transformations using PySpark.
Work with distributed datasets in Spark, ensuring optimal performance for large-scale data processing and analytics.
Data Integration and ETL:
Design and implement ETL (Extract, Transform, Load) processes to ingest and integrate data from various sources, ensuring consistency, accuracy, and performance.
Integrate PySpark applications with data sources such as SQL databases, NoSQL databases, data lakes, and streaming platforms
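To make the pipeline responsibilities above concrete, here is a minimal PySpark sketch of a batch extract-transform-load job with the partitioning, caching, and configuration tuning the role calls for. The paths, column names, and tuning values are hypothetical.
```python
# Minimal ETL sketch; all paths, columns, and settings are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("onboarding-etl")
    # Tuning value is a placeholder; real settings depend on cluster size.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Extract: read a hypothetical batch-file source.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: cleanse rows, then derive a date column.
cleaned = (
    orders
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

# Cache because the cleansed set feeds two downstream aggregates.
cleaned.cache()

daily = cleaned.groupBy("order_date").agg(F.sum("amount").alias("daily_total"))
by_customer = cleaned.groupBy("customer_id").agg(F.count("*").alias("order_count"))

# Load: repartition on the common query column before writing.
daily.repartition("order_date").write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_totals/"
)
by_customer.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/customer_counts/"
)
```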
Qualifications and Skills:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in big data development, preferably with exposure to data-intensive applications.
Strong understanding of data processing principles, techniques, and best practices in a big data environment.
Proficiency in PySpark, Apache Spark, and related big data technologies for data processing, analysis, and integration.
Experience with ETL development and data pipeline orchestration tools (e.g., Apache Airflow, Luigi).
Strong analytical and problem-solving skills, with the ability to translate business requirements into technical solutions.
Excellent communication and collaboration skills to work effectively with data analysts, data architects, and other team members.
Compensation, Benefits and Duration
Minimum Compensation: USD 43,000
Maximum Compensation: USD 151,000
Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable, good-faith estimate for the role.
Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees.
This position is not available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
$83k-116k yearly est. Auto-Apply 60d+ ago
Data Engineer - Dallas, TX
Photon Group 4.3
Dallas, TX jobs
We are seeking a Data Engineer to build and scale the data infrastructure powering our Agentic AI products. You will be responsible for the "Ingestion-to-Insight" pipeline that allows autonomous agents to access, search, and reason over vast amounts of proprietary and public data.
Your role is critical: you will design the RAG (Retrieval-Augmented Generation) architectures and data pipelines that ensure our agents have the right context at the right time to make accurate decisions.
Key Responsibilities
AI-Ready Data Pipelines: Design and implement scalable ETL/ELT pipelines that process both structured (SQL, logs) and unstructured (PDFs, emails, docs) data specifically for LLM consumption.
Vector Database Management: Architect and optimize Vector Databases (e.g., Pinecone, Weaviate, Milvus, or Qdrant) to ensure high-speed, relevant similarity searches for agentic retrieval.
Chunking & Embedding Strategies: Collaborate with AI Engineers to optimize data chunking strategies and embedding models to improve the "recall" and "precision" of the agent's knowledge retrieval (see the sketch after this list).
Data Quality for AI: Develop automated "Data Cleaning" workflows to remove noise, PII (Personally Identifiable Information), and toxicity from training/context datasets.
Metadata Engineering: Enrich raw data with advanced metadata tagging to help agents filter and prioritize information during multi-step reasoning tasks.
Real-time Data Streaming: Build low-latency data streams (using Kafka or Flink) to provide agents with "fresh" data, enabling them to act on real-time market or operational changes.
Evaluation Frameworks: Construct "Gold Datasets" and versioned data snapshots to help the team benchmark agent performance over time.
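As a rough illustration of the chunking and retrieval work described above, here is a toy sketch of the chunk-embed-retrieve loop. The embedding function is a random stand-in for a real embedding model, and a production system would store vectors in a vector database (e.g., Pinecone or Qdrant) rather than an in-memory array.
```python
# Toy chunk -> embed -> retrieve loop; embeddings are random stand-ins.
import numpy as np

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size character chunking with overlap; sizes are illustrative
    and usually tuned per corpus."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(texts: list[str]) -> np.ndarray:
    # Stand-in: pseudo-embeddings seeded on each text's hash
    # (stable within a process); unit-normalized for cosine similarity.
    rngs = [np.random.default_rng(abs(hash(t)) % (2**32)) for t in texts]
    vecs = np.stack([r.normal(size=384) for r in rngs])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

corpus = chunk("...long proprietary document text...")
index = embed(corpus)

def retrieve(query: str, k: int = 3) -> list[str]:
    q = embed([query])[0]
    scores = index @ q            # cosine similarity on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]
```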
Required Skills & Qualifications
Experience: 4+ years in Data Engineering, with at least 1 year focusing on data for LLMs or AI/ML applications.
Python Mastery: Deep expertise in Python (Pandas, Pydantic, FastAPI) for data manipulation and API integration.
Data Tooling: Strong experience with modern data stack tools (e.g., dbt, Airflow, Dagster, Snowflake, or Databricks).
Vector Expertise: Hands-on experience with at least one major Vector Database and knowledge of similarity search algorithms (HNSW, Cosine Similarity).
Search Knowledge: Familiarity with hybrid search techniques combining semantic search with traditional keyword search like Elasticsearch/BM25 (see the sketch after this list).
Cloud Infrastructure: Proficiency in managing data workloads on AWS, Azure, or GCP.
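One common way to combine keyword and semantic results, as the hybrid search requirement above mentions, is Reciprocal Rank Fusion. The sketch below merges two hypothetical ranked result lists by summed reciprocal rank.
```python
# Reciprocal Rank Fusion over hypothetical keyword and semantic rankings.
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Score each doc id by the sum of 1/(k + rank) across rankings."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc7", "doc2", "doc9"]     # e.g., from BM25/Elasticsearch
semantic_hits = ["doc2", "doc5", "doc7"]    # e.g., from a vector search
print(reciprocal_rank_fusion([keyword_hits, semantic_hits]))
# ['doc2', 'doc7', 'doc5', 'doc9'] -- docs in both lists rise to the top
```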
Preferred Qualifications
Experience with LlamaIndex or LangChain for data ingestion.
Knowledge of Graph Databases (e.g., Neo4j) to help agents understand complex relationships between data points.
Familiarity with "Data-Centric AI" principles-prioritizing data quality over model size.
Compensation, Benefits and Duration
Minimum Compensation: USD 38,000
Maximum Compensation: USD 133,000
Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable, good-faith estimate for the role.
Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees.
This position is not available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
$83k-116k yearly est. Auto-Apply 13d ago
SPARK Data Reconciliation Engineer - Jersey City
Photon Group 4.3
Dallas, TX jobs
Job Title: PySpark Data Reconciliation Engineer
We're seeking a skilled PySpark Data Reconciliation Engineer to join our team and drive the development of robust data reconciliation solutions within our financial systems. You will be responsible for designing, implementing, and maintaining PySpark-based applications to perform complex data reconciliations, identify and resolve discrepancies, and automate data matching processes. The ideal candidate possesses strong PySpark development skills, experience with data reconciliation techniques, and the ability to integrate with diverse data sources and rules engines.
Key Responsibilities:
Data Reconciliation Development:
Design, develop, and test PySpark-based applications to automate data reconciliation processes across various financial data sources, including relational databases, NoSQL databases, batch files, and real-time data streams.
Implement efficient data transformation and matching algorithms (deterministic and heuristic) using PySpark and relevant big data frameworks (see the sketch after this list).
Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs.
Data Analysis and Matching:
Collaborate with business analysts and data architects to understand data requirements and matching criteria.
Analyze and interpret data structures, formats, and relationships to implement effective data matching algorithms using PySpark.
Work with distributed datasets in Spark, ensuring optimal performance for large-scale data reconciliation.
Rules Engine Integration:
Integrate PySpark applications with rules engines (e.g., Drools) or equivalent to implement and execute complex data matching rules.
Develop PySpark code to interact with the rules engine, manage rule execution, and handle rule-based decision-making.
Problem Solving and Gap Analysis:
Collaborate with cross-functional teams to identify and analyze data gaps and inconsistencies between systems.
Design and develop PySpark-based solutions to address data integration challenges and ensure data quality.
Contribute to the development of data governance and quality frameworks within the organization.
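For a sense of the deterministic matching described above, here is a minimal PySpark reconciliation sketch that full-outer-joins two hypothetical financial feeds on a trade key and flags missing records and amount breaks; the paths, key, and tolerance are illustrative only.
```python
# Minimal deterministic reconciliation sketch; all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("recon").getOrCreate()

ledger = spark.read.parquet("s3://example-bucket/ledger/")
custodian = spark.read.parquet("s3://example-bucket/custodian/")

# Full outer join so unmatched rows from either feed survive.
matched = ledger.alias("l").join(
    custodian.alias("c"), on="trade_id", how="full_outer"
)

result = matched.withColumn(
    "status",
    F.when(F.col("l.amount").isNull(), "missing_in_ledger")
     .when(F.col("c.amount").isNull(), "missing_in_custodian")
     .when(F.abs(F.col("l.amount") - F.col("c.amount")) <= 0.01, "matched")
     .otherwise("amount_break"),
)

# Persist only the breaks for downstream investigation.
result.filter(F.col("status") != "matched").write.mode("overwrite").parquet(
    "s3://example-bucket/recon/breaks/"
)
```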
Qualifications and Skills:
Bachelor's degree in Computer Science or a related field.
5+ years of hands-on experience in big data development, preferably with exposure to data-intensive applications.
Strong understanding of data reconciliation principles, techniques, and best practices.
Proficiency in PySpark, Apache Spark, and related big data technologies for data processing and integration.
Experience with rules engine integration and development
Strong analytical and problem-solving skills, with the ability to translate business requirements into technical solutions.
Excellent communication and collaboration skills to work effectively with business analysts, data architects, and other team members.
Familiarity with data streaming platforms (e.g., Kafka, Kinesis) and big data technologies (e.g., Hadoop, Hive, HBase) is a plus.
Compensation, Benefits and Duration
Minimum Compensation: USD 43,000
Maximum Compensation: USD 151,000
Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable, good-faith estimate for the role.
Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees.
This position is not available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
$83k-116k yearly est. Auto-Apply 60d+ ago
SPARK Data Onboarding Engineer - NJ
Photon Group 4.3
Dallas, TX jobs
Job Title: PySpark Data Engineer
We are seeking a skilled PySpark Data Engineer to join our team and drive the development of robust data processing and transformation solutions within our data platform. You will be responsible for designing, implementing, and maintaining PySpark-based applications to handle complex data processing tasks, ensure data quality, and integrate with diverse data sources. The ideal candidate possesses strong PySpark development skills, experience with big data technologies, and the ability to work in a fast-paced, data-driven environment.
Key Responsibilities:
Data Engineering Development:
Design, develop, and test PySpark-based applications to process, transform, and analyze large-scale datasets from various sources, including relational databases, NoSQL databases, batch files, and real-time data streams.
Implement efficient data transformation and aggregation using PySpark and relevant big data frameworks.
Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs.
Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations.
Data Analysis and Transformation:
Collaborate with data analysts, data scientists, and data architects to understand data processing requirements and deliver high-quality data solutions.
Analyze and interpret data structures, formats, and relationships to implement effective data transformations using PySpark.
Work with distributed datasets in Spark, ensuring optimal performance for large-scale data processing and analytics.
Data Integration and ETL:
Design and implement ETL (Extract, Transform, Load) processes to ingest and integrate data from various sources, ensuring consistency, accuracy, and performance.
Integrate PySpark applications with data sources such as SQL databases, NoSQL databases, data lakes, and streaming platforms
Qualifications and Skills:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in big data development, preferably with exposure to data-intensive applications.
Strong understanding of data processing principles, techniques, and best practices in a big data environment.
Proficiency in PySpark, Apache Spark, and related big data technologies for data processing, analysis, and integration.
Experience with ETL development and data pipeline orchestration tools (e.g., Apache Airflow, Luigi); see the sketch after this list.
Strong analytical and problem-solving skills, with the ability to translate business requirements into technical solutions.
Excellent communication and collaboration skills to work effectively with data analysts, data architects, and other team members.
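As a small illustration of the orchestration experience listed above, here is a minimal Apache Airflow DAG (Airflow 2.x syntax) that schedules a daily Spark submission; the DAG id, schedule, and submit function are hypothetical placeholders.
```python
# Minimal Airflow 2.x DAG; many teams would use a SparkSubmitOperator
# instead of PythonOperator in a real deployment.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def submit_spark_job():
    # Placeholder: e.g., shell out to spark-submit or call a cluster API.
    print("submitting onboarding ETL job")

with DAG(
    dag_id="onboarding_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    run_etl = PythonOperator(
        task_id="run_etl",
        python_callable=submit_spark_job,
    )
```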
$83k-116k yearly est. Auto-Apply 60d+ ago
Senior Data & Analytics Engineer
Blink Health Administration LLC 3.4
Tennessee jobs
Blink Health is the fastest growing healthcare technology company that builds products to make prescriptions accessible and affordable to everybody. Our two primary products - BlinkRx and Quick Save - remove traditional roadblocks within the current prescription supply chain, resulting in better access to critical medications and improved health outcomes for patients.
BlinkRx is the world's first pharma-to-patient cloud that offers a digital concierge service for patients who are prescribed branded medications. Patients benefit from transparent low prices, free home delivery, and world-class support on this first-of-its-kind centralized platform. With BlinkRx, never again will a patient show up at the pharmacy only to discover that they can't afford their medication, their doctor needs to fill out a form for them, or the pharmacy doesn't have the medication in stock.
We are a highly collaborative team of builders and operators who invent new ways of working in an industry that historically has resisted innovation. Join us!
As a Senior Data & Analytics Engineer, you will be a key technical contributor who designs and builds both data infrastructure and analytics-ready data models. You will develop and maintain data pipelines and transformation layers that enable self-service analytics across the organization. You will partner closely with business stakeholders, analysts, and data scientists to understand requirements and translate them into scalable, well-modeled data products. You will design for data integrity, reliability, and performance while ensuring data is accessible, documented, and trustworthy for downstream consumers. You will contribute to code quality through adherence to standards and participation in peer code and architecture reviews.
What You'll Need:
* 5+ years of hands-on experience with a combination of:
* SQL mastery including writing complex, highly-optimized queries and designing dimensional data models
* Expert-level analytical SQL (window functions, CTEs, pivots, advanced joins); see the sketch after this list.
* Building reusable, parameterized logic for transformations.
* Writing testable, modular SQL models (dbt or equivalent).
* Translating ambiguous business questions into structured data models and metrics.
* Designing semantic layers, metrics, and business logic for analytics consumption
* Python software development for data pipelines and automation
* Building and maintaining data transformation pipelines using tools like dbt, SQL, and Python
* Integration with Business Intelligence tools (e.g., Looker, QuickSight, Tableau)
* Building and maintaining data integration (ETL/ELT) pipelines using SQL, EMR, Python and Spark
* Good-to-have: knowledge of ETL/ELT with batch (AWS Data Pipeline, Airflow) and streaming (Kinesis, Kafka) tools
* Creating scalable, well-documented data models following dimensional modeling best practices
* Experience with modern analytics engineering tools and practices (e.g., dbt, data modeling, testing frameworks, version control for analytics code)
* Experience using tools like Cursor/ChatGPT to speed up coding and testing, improve documentation, and raise code quality (maintainability, readability, security, performance) is strongly preferred
* Strong understanding of data modeling concepts (star schema, dimensional modeling, slowly changing dimensions)
* Experience working with data lakes and contributing to automation and simplification improvements across the data ecosystem
* Experience owning features from design through delivery along with ongoing support
* Strong ability to communicate effectively with both technical and non-technical stakeholders, translating business requirements into technical solutions
* Experience partnering with analysts, data scientists, and business teams to define metrics, KPIs, and reporting requirements
* Experience mentoring junior engineers and analysts, and promoting best practices for data management, quality, and governance
* Undergraduate or graduate degree in Computer Science, Information Systems, or related field
* Experience with the following will be an added advantage:
* Working with columnar databases (e.g., Redshift, Snowflake) and optimizing for query performance
* Distributed data processing (Hadoop, Spark, Hive)
* Extra consideration will be given to those with healthcare-relevant company experience, with demonstrated industry knowledge of handling sensitive information
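To ground the analytical-SQL expectations above, here is a self-contained sketch that runs a CTE with a window function against an in-memory SQLite table of toy prescription fills; in production the same query shape would live in a dbt model on a warehouse such as Redshift or Snowflake.
```python
# CTE + window function demo on toy data (SQLite >= 3.25 for windows).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fills (patient_id INT, fill_date TEXT, amount REAL);
INSERT INTO fills VALUES
  (1, '2024-01-05', 30.0), (1, '2024-02-04', 30.0),
  (2, '2024-01-10', 55.0), (2, '2024-03-01', 55.0);
""")

query = """
WITH ordered AS (                         -- CTE
  SELECT patient_id,
         fill_date,
         amount,
         ROW_NUMBER() OVER (              -- window function
           PARTITION BY patient_id ORDER BY fill_date
         ) AS fill_seq
  FROM fills
)
SELECT patient_id, fill_date, amount
FROM ordered
WHERE fill_seq = 1                        -- first fill per patient
"""
for row in conn.execute(query):
    print(row)
# (1, '2024-01-05', 30.0)
# (2, '2024-01-10', 55.0)
```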
Why Join Us:
It is rare to have a company that both deeply impacts its customers and is able to provide its services across a massive population. At Blink, we have a huge impact on people when they are most vulnerable: at the intersection of their healthcare and finances. We are also the fastest growing healthcare company in the country and are driving that impact across millions of new patients every year. Our business model not only helps people, but drives economics that allow us to build a generational company. We are a relentlessly learning, constantly curious, and aggressively collaborative cross-functional team dedicated to inventing new ways to improve the lives of our customers.
We are an equal opportunity employer and value diversity of all kinds. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
$75k-103k yearly est. Auto-Apply 12d ago
Data Platform Engineer
Monogram Health Inc. 3.7
Brentwood, TN jobs
Job Description
Position:
Data Platform Engineer
The Data Engineering team is seeking a highly skilled and experienced Data Platform Engineer with expertise in Data Engineering, Database Modeling, and modern Cloud Data Platforms. The Data Platform Engineer designs, builds, and maintains scalable and secure data infrastructure, tools, and pipelines to support data analytics, machine learning, and business intelligence initiatives. They will also work closely with other teams, such as data analysts, data scientists, and software developers, to provide them with the data they need to perform their job functions.
Responsibilities
Design and implement robust, scalable, and efficient data models and pipelines across cloud-based platforms.
Develop, optimize, and maintain ETL/ELT pipelines using Azure Data Factory and Databricks.
Build and orchestrate Databricks Notebooks and Jobs using PySpark, Spark SQL, or Scala Spark (see the sketch after this list).
Develop and manage data models, data warehousing solutions, and data integration architectures in Azure.
Implement Azure Functions, Azure WebApps, and Application Insights to support microservices and monitor distributed systems.
Configure and manage Databricks clusters, including autoscaling, Photon acceleration, and job orchestration.
Collaborate with cross-functional teams to support data-driven decision-making and analytics use cases.
Ensure data quality, governance, and security across the data lifecycle.
Collaborate with product managers by estimating technical tasks and deliverables.
Uphold the mission and values of Monogram Health in all aspects of your role and activities.
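As a minimal illustration of the Databricks work described above, the sketch below shows a notebook-style ELT cell that reads raw JSON from a hypothetical Azure storage path, adds an audit column, and writes a Delta table; on Databricks the `spark` session is provided by the runtime.
```python
# Notebook-style ELT cell; paths and table names are hypothetical, and
# `spark` is supplied by the Databricks runtime.
from pyspark.sql import functions as F

raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/claims/")

curated = (
    raw
    .filter(F.col("member_id").isNotNull())
    .withColumn("ingested_at", F.current_timestamp())  # audit column
)

# Delta is the default table format on Databricks; overwrite keeps this
# illustrative job idempotent across reruns.
(curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.claims"))
```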
Position Requirements
A bachelor's degree in computer science, data science, software engineering or related field.
Minimum of five (5) years in designing and hands-on development of cloud-based analytics solutions, including a minimum of three (3) years' hands-on work with big data frameworks and tools, such as Apache Kafka and Spark.
Expert level knowledge of Python or other scripting languages required.
Proficiency in SQL and other data query languages.
Understanding of data modeling and schema design principles
Ability to work with large datasets and perform data analysis
Designing and building data integration pipelines using APIs and streaming ingestion methods is desirable.
Familiarity with DevOps practices, including automation, CI/CD, and infrastructure as code (IaC).
Thorough understanding of Azure Cloud Infrastructure offerings.
Demonstrated problem-solving and troubleshooting skills.
Team player with demonstrated written and verbal communication skills.
Benefits
Comprehensive Benefits - Medical, dental, and vision insurance, employee assistance program, employer-paid and voluntary life insurance, disability insurance, plus health and flexible spending accounts
Financial & Retirement Support - Competitive compensation, 401k with employer match, and financial wellness resources
Time Off & Leave - Paid holidays, flexible vacation time/PSSL, and paid parental leave
Wellness & Growth - Work life assistance resources, physical wellness perks, mental health support, employee referral program, and BenefitHub for employee discounts
Monogram Health is a leading multispecialty provider of in-home, evidence-based care for the most complex of patients who have multiple chronic conditions. Monogram Health takes a comprehensive and personalized approach to a person's health, treating not only a disease, but all of the chronic conditions that are present - such as diabetes, hypertension, chronic kidney disease, heart failure, depression, COPD, and other metabolic disorders.
Monogram Health employs a robust clinical team, leveraging specialists across multiple disciplines including nephrology, cardiology, endocrinology, pulmonology, behavioral health, and palliative care to diagnose and treat health issues; review and prescribe medication; provide guidance, education, and counseling on a patient's healthcare options; as well as assist with daily needs such as access to food, eating healthy, transportation, financial assistance, and more. Monogram Health is available 24 hours a day, 7 days a week, and on holidays, to support and treat patients in their home.
Monogram Health's personalized and innovative treatment model is proven to dramatically improve patient outcomes and quality of life while reducing medical costs across the health care continuum.
$75k-103k yearly est. 2d ago
Data Platform Engineer
Monogram Health 3.7
Brentwood, TN jobs
Data Platform Engineer
The Data Engineering team is seeking a highly skilled and experienced Data Platform Engineer with expertise in Data Engineering, Database Modeling, and modern Cloud Data Platforms. The Data Platform Engineer designs, builds, and maintains scalable and secure data infrastructure, tools, and pipelines to support data analytics, machine learning, and business intelligence initiatives. They will also work closely with other teams, such as data analysts, data scientists, and software developers, to provide them with the data they need to perform their job functions.
Responsibilities
Design and implement robust, scalable, and efficient data models and pipelines across cloud-based platforms.
Develop, optimize, and maintain ETL/ELT pipelines using Azure Data Factory and Databricks.
Build and orchestrate Databricks Notebooks and Jobs using PySpark, Spark SQL, or Scala Spark.
Develop and manage data models, data warehousing solutions, and data integration architectures in Azure.
Implement Azure Functions, Azure WebApps, and Application Insights to support microservices and monitor distributed systems.
Configure and manage Databricks clusters, including autoscaling, Photon acceleration, and job orchestration.
Collaborate with cross-functional teams to support data-driven decision-making and analytics use cases.
Ensure data quality, governance, and security across the data lifecycle.
Collaborate with product managers by estimating technical tasks and deliverables.
Uphold the mission and values of Monogram Health in all aspects of your role and activities.
Position Requirements
A bachelor's degree in computer science, data science, software engineering or related field.
Minimum of five (5) years in designing and hands-on development of cloud-based analytics solutions, including a minimum of three (3) years' hands-on work with big data frameworks and tools, such as Apache Kafka and Spark.
Expert level knowledge of Python or other scripting languages required.
Proficiency in SQL and other data query languages.
Understanding of data modeling and schema design principles
Ability to work with large datasets and perform data analysis
Designing and building data integration pipelines using APIs and streaming ingestion methods is desirable.
Familiarity with DevOps practices, including automation, CI/CD, and infrastructure as code (IaC).
Thorough understanding of Azure Cloud Infrastructure offerings.
Demonstrated problem-solving and troubleshooting skills.
Team player with demonstrated written and verbal communication skills.
Benefits
Comprehensive Benefits - Medical, dental, and vision insurance, employee assistance program, employer-paid and voluntary life insurance, disability insurance, plus health and flexible spending accounts
Financial & Retirement Support - Competitive compensation, 401k with employer match, and financial wellness resources
Time Off & Leave - Paid holidays, flexible vacation time/PSSL, and paid parental leave
Wellness & Growth - Work life assistance resources, physical wellness perks, mental health support, employee referral program, and BenefitHub for employee discounts
Monogram Health is a leading multispecialty provider of in-home, evidence-based care for the most complex of patients who have multiple chronic conditions. Monogram Health takes a comprehensive and personalized approach to a person's health, treating not only a disease, but all of the chronic conditions that are present - such as diabetes, hypertension, chronic kidney disease, heart failure, depression, COPD, and other metabolic disorders.
Monogram Health employs a robust clinical team, leveraging specialists across multiple disciplines including nephrology, cardiology, endocrinology, pulmonology, behavioral health, and palliative care to diagnose and treat health issues; review and prescribe medication; provide guidance, education, and counseling on a patient's healthcare options; as well as assist with daily needs such as access to food, eating healthy, transportation, financial assistance, and more. Monogram Health is available 24 hours a day, 7 days a week, and on holidays, to support and treat patients in their home.
Monogram Health's personalized and innovative treatment model is proven to dramatically improve patient outcomes and quality of life while reducing medical costs across the health care continuum.
$75k-103k yearly est. 60d+ ago
Principal Data Platform Engineer
Blink Health 3.4
Day, NY jobs
Blink Health is the fastest growing healthcare technology company that builds products to make prescriptions accessible and affordable to everybody. Our two primary products - BlinkRx and Quick Save - remove traditional roadblocks within the current prescription supply chain, resulting in better access to critical medications and improved health outcomes for patients.
BlinkRx is the world's first pharma-to-patient cloud that offers a digital concierge service for patients who are prescribed branded medications. Patients benefit from transparent low prices, free home delivery, and world-class support on this first-of-its-kind centralized platform. With BlinkRx, never again will a patient show up at the pharmacy only to discover that they can't afford their medication, their doctor needs to fill out a form for them, or the pharmacy doesn't have the medication in stock.
We are a highly collaborative team of builders and operators who invent new ways of working in an industry that historically has resisted innovation. Join us!
About the Role
We are seeking a Principal Data Platform Engineer to define and evolve our real-time and batch data platform built on AWS and Databricks. This role owns the technical vision for how data is ingested, processed, stored, and served as trusted datasets, metrics, and APIs that power products, decisioning systems, and operational workflows.
As a Principal, you are a technical authority and force multiplier: deeply hands-on while setting architectural direction across streaming systems, lakehouse design, and data-serving layers. You will partner closely with engineering, analytics, and product teams to simplify the platform, eliminate legacy patterns, and establish scalable, reliable foundations for real-time analytics.
What You Will Do
Own the end-to-end data platform architecture, spanning streaming ingestion, lakehouse storage, and data/insight serving layers
Architect real-time streaming systems using AWS Kinesis and Spark Structured Streaming to support low-latency use cases
Design stream-to-lakehouse convergence patterns that unify real-time and historical data with strong correctness guarantees
Build and evolve data, metrics, and feature APIs that expose curated datasets for downstream applications and analytics
Establish canonical event schemas and data contracts to support event-driven and API-based consumption (see the sketch after this list)
Make deep technical decisions across AWS infrastructure (Kinesis, S3, IAM, networking) and Databricks internals (clusters, jobs, Delta Lake, performance tuning)
Drive platform modernization, retiring legacy tools and patterns in favor of simpler, lakehouse-first designs
Set standards for high-performance SQL and Spark workloads, optimizing for cost, latency, and scale
Lead complex platform initiatives from architecture through production delivery and ongoing reliability
Provide technical leadership and mentorship, shaping best practices for platform design, data quality, and operability
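To illustrate the data-contract idea above, here is a minimal sketch of a canonical event schema expressed with Pydantic v2; the event name and fields are hypothetical. Producers validate before publishing, and consumers can rely on the declared types.
```python
# Hypothetical canonical event schema as a Pydantic v2 data contract.
from datetime import datetime
from pydantic import BaseModel, Field

class PrescriptionShipped(BaseModel):
    event_id: str
    event_version: int = Field(default=1, ge=1)  # contract versioning
    occurred_at: datetime
    order_id: str
    carrier: str

event = PrescriptionShipped(
    event_id="evt-123",
    occurred_at=datetime(2024, 6, 1, 12, 0),
    order_id="ord-456",
    carrier="UPS",
)
print(event.model_dump_json())  # serialized payload for the stream
```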
Technical Skills
Deep expertise in real-time and distributed data systems, including AWS Kinesis and Spark Structured Streaming (see the sketch after this list)
Strong command of Databricks on AWS (Delta Lake, clusters, jobs) and core AWS services (S3, IAM, networking)
Proven experience designing data-serving architectures and APIs for analytics, metrics, and feature consumption
Advanced Python and SQL skills for building scalable, high-performance data pipelines
Demonstrated ability to design idempotent, replayable, and observable data platforms at scale
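As a rough sketch of the replayable streaming pattern these skills describe, the example below uses Spark Structured Streaming with a checkpointed Delta sink; the built-in `rate` source stands in for Kinesis (whose connector is environment-specific), all paths are hypothetical, and the Delta sink assumes the delta-spark package is installed.
```python
# Checkpointed stream-to-lakehouse sketch; paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-to-lakehouse").getOrCreate()

events = (
    spark.readStream.format("rate")      # emits (timestamp, value) rows
    .option("rowsPerSecond", 10)
    .load()
    .withColumn("minute", F.date_trunc("minute", F.col("timestamp")))
)

query = (
    events.writeStream
    .format("delta")                     # Delta sink for the lakehouse
    .outputMode("append")
    # The checkpoint makes the stream restartable with exactly-once
    # delivery into Delta, i.e., safe to replay after failure.
    .option("checkpointLocation", "s3://example-bucket/chk/events/")
    .start("s3://example-bucket/lakehouse/events/")
)
query.awaitTermination(timeout=30)       # run briefly for the sketch
```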
Why Join Us:
It is rare to have a company that both deeply impacts its customers and is able to provide its services across a massive population. At Blink, we have a huge impact on people when they are most vulnerable: at the intersection of their healthcare and finances. We are also the fastest growing healthcare company in the country and are driving that impact across millions of new patients every year. Our business model not only helps people, but drives economics that allow us to build a generational company. We are a relentlessly learning, constantly curious, and aggressively collaborative cross-functional team dedicated to inventing new ways to improve the lives of our customers.
We are an equal opportunity employer and value diversity of all kinds. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.