Staff Software Engineer
Santa Clara, CA jobs
Staff Software Engineer - SaaS Healthcare Solutions Location: Santa Clara, US
Exo is seeking an experienced Staff Software Engineer to lead the design and development of scalable, cloud-based SaaS healthcare applications. This hands-on role combines deep technical involvement (80% coding and code review) with strategic architecture responsibilities (20% requirements analysis, system design, production support, and team mentorship). The ideal candidate will have extensive experience building distributed systems and SaaS platforms, with a strong background in healthcare technology.
Key Responsibilities
Write high-quality, maintainable code and conduct thorough code reviews, providing technical guidance to engineering teams
Design scalable, fault-tolerant distributed systems for SaaS healthcare applications and lead development of critical system components
Collaborate with Product Managers and stakeholders to translate business requirements into technical specifications and architectural designs
Lead incident response, troubleshoot complex production issues, and optimize system performance across distributed environments
Ensure solutions meet HIPAA, cybersecurity, and medical device regulations while supporting QMS requirements
Design and implement integrations with EMR systems, medical imaging platforms, and healthcare protocols (DICOM, HL7, FHIR)
Evaluate and recommend technologies, frameworks, and architectural patterns to support business objectives
Work closely with cross-functional global teams including Product Managers, Project Managers, and Support Engineering teams
Required Qualifications and Skills
Bachelor's degree in Computer Science, Software Engineering, or related field
10+ years of software development experience with a proven track record in senior technical roles
5+ years of experience architecting and building SaaS applications at scale
3+ years of hands-on experience with distributed systems design and implementation
Expert-level proficiency in Python and JavaScript/TypeScript
Extensive experience with AWS services (EC2, S3, RDS, Lambda, EKS, etc.)
Proficiency with Terraform and Ansible for infrastructure and deployment automation
Deep understanding of microservices, event-driven architectures, and distributed data management
Experience with both relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases
Experience with message brokers (Apache Kafka, RabbitMQ, AWS SQS/SNS) and containerization (Docker, Kubernetes)
Knowledge of RESTful APIs, GraphQL, and API gateway patterns
Experience with DICOM standards, HL7, FHIR, and EMR system integrations (preferred)
Experience with Active Directory, SSO, LDAP, OAuth, and OpenID Connect (preferred)
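As a hedged illustration of the FHIR/EMR integration work named above, the sketch below builds a FHIR R4 Patient resource as a plain dict and sanity-checks it before it would be sent to a server. The helper names are hypothetical (not Exo's code or any library API), and a real integration would use a full FHIR validator.

```python
import json

# Hypothetical helpers for a FHIR R4 Patient payload; a production
# integration would validate against the full FHIR specification.
def build_patient(family: str, given: str, birth_date: str) -> dict:
    return {
        "resourceType": "Patient",
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,  # FHIR dates use YYYY-MM-DD
    }

def looks_like_patient(resource: dict) -> bool:
    # Only checks resourceType and that at least one name is present.
    return resource.get("resourceType") == "Patient" and bool(resource.get("name"))

patient = build_patient("Doe", "Jane", "1980-01-15")
print(looks_like_patient(patient))  # True
print(json.dumps(patient, indent=2))
```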
Salary: $160,000 - $200,000
Sr. Data Engineer
Beaverton, OR jobs
FlexIT client has an immediate need for a Sr. Data Engineer - Spark, Snowflake, AWS for a 12-month contract in Beaverton, Oregon (remote for now, on-site later).
Looking for local candidates or candidates open to relocation to Oregon.
Email resumes to *********************
Responsibilities:
The Data Engineer will collaborate with product owners, developers, database architects, data analysts, visual developers and data scientists on data initiatives and will ensure optimal data delivery and architecture is consistent throughout ongoing projects.
Must be self-directed and comfortable supporting the data needs of the product roadmap. The right candidate will be excited by the prospect of optimizing and building integrated and aggregated data objects to architect and support our next generation of products and data initiatives.
Create and maintain optimal data pipeline architecture,
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing for greater scalability
Comprehensive documentation and knowledge transfer to Production Support
Work with Production Support to analyze and fix Production issues
Participate in an Agile/Scrum methodology to deliver high-quality software releases every two weeks through sprints
Refine and plan stories, and deliver on time
Analyze requirement documents and source-to-target mappings
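The source-to-target mapping work mentioned in the responsibilities above can be sketched in plain Python (column names are illustrative only, not the client's actual schema):

```python
# Illustrative source-to-target mapping: rename source columns to their
# target names and drop anything unmapped (made-up column names).
SOURCE_TO_TARGET = {
    "cust_id": "customer_id",
    "ord_dt": "order_date",
    "amt": "order_amount_usd",
}

def apply_mapping(row: dict, mapping: dict) -> dict:
    # Keep only mapped source fields, emitted under their target names.
    return {target: row[source] for source, target in mapping.items() if source in row}

row = {"cust_id": 42, "ord_dt": "2024-05-01", "amt": 19.99, "junk": None}
print(apply_mapping(row, SOURCE_TO_TARGET))
# {'customer_id': 42, 'order_date': '2024-05-01', 'order_amount_usd': 19.99}
```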
Clinical Data Insight Consultant
Franklin Lakes, NJ jobs
Summary
We are the makers of possible.
BD is one of the largest global medical technology companies in the world. Advancing the world of health™ is our Purpose, and it's no small feat. It takes the imagination and passion of all of us, from design and engineering to the manufacturing and marketing of our billions of MedTech products per year, to look at the impossible and find transformative solutions that turn dreams into possibilities.
We believe that the human element, across our global teams, is what allows us to continually evolve. Join us and discover an environment in which you'll be supported to learn, grow and become your best self. Become a maker of possible with us.
Job Description
We are seeking a full-time, experienced, and dynamic Clinical Data Insight Consultant to join our team. This Insight Consultant will conduct data analysis to drive the adoption and optimization of Medication Management Solutions (MMS) products. The successful candidate will be responsible for data-driven insights across the MMS product portfolio, specifically aligned with the top IDNs, and must have experience with the Pyxis dispensing platform. The National Insight Consultant will be part of the Clinical Customer Success (CCSO) organization.
Responsibilities
Serves as a consultant to our most strategic customers, providing proactive insights to achieve clinical, financial, and operational outcomes with our Pyxis dispensing products.
Maintains deep domain and product expertise in the Pyxis Dispensing suite of products and understands how to use the data to drive operational improvements in hospital pharmacy.
Works with the Account Team and Clinical Solutions Consultants in the National Strategic Accounts to improve workflow and optimize the current suite of products.
Occasionally travels to customer locations for in-person engagements.
Works with internal and external teams to establish support for education and related data analytics activities.
Supports the Account Team and Clinical Solutions Consultant to manage, understand, and use data from dispensing products to drive process improvement with the customers and end users in the National Accounts.
Provides proactive consulting and insights to internal teams and our customers to improve medication safety and inventory management.
Translates success in key performance indicators and other outcomes to value realization with the customer either clinical, operational or financial.
Maintains a close working relationship with cross functional account team and service teams.
Is consulted by marketing and product managers on marketplace surveillance, customer needs, and future product development.
Serves as a subject matter expert to drive internal tool development to support our data analytics and efforts to measure outcomes.
Minimum Requirements:
Professional License/Certification: Pharmacist preferred. Pharmacy Technician (RPhT and/or CPhT) may also be considered.
Education: Bachelor's Degree
Product Knowledge: Must have experience with the Pyxis dispensing platform and software technology.
Experience: Three or more years in a healthcare environment with technical systems involvement.
Knowledge: Expertise in pharmacy operational workflow.
Travel: Ability to travel 25-50% of the time.
Preferred Qualifications:
Recent, relevant consulting experience
Process improvement methodology knowledge
Self-starter with passion for data analysis
Excellent communication skills
Self-directed with independent initiative
Ability to influence without authority
Excellent presentation skills
Strong relationship building and interpersonal skills
Ability to interface at all levels of organization (internal and external)
Ability to uncover customer needs and translate them into solutions
At BD, we prioritize on-site collaboration because we believe it fosters creativity, innovation, and effective problem-solving, which are essential in the fast-paced healthcare industry. For most roles, we require a minimum of 4 days of in-office presence per week to maintain our culture of excellence and ensure smooth operations, while also recognizing the importance of flexibility and work-life balance. Remote or field-based positions will have different workplace arrangements which will be indicated in the job posting.
For certain roles at BD, employment is contingent upon the Company's receipt of sufficient proof that you are fully vaccinated against COVID-19. In some locations, testing for COVID-19 may be available and/or required. Consistent with BD's Workplace Accommodations Policy, requests for accommodation will be considered pursuant to applicable law.
Why Join Us?
A career at BD means being part of a team that values your opinions and contributions and that encourages you to bring your authentic self to work. It's also a place where we help each other be great, we do what's right, we hold each other accountable, and learn and improve every day.
To find purpose in the possibilities, we need people who can see the bigger picture, who understand the human story that underpins everything we do. We welcome people with the imagination and drive to help us reinvent the future of health. At BD, you'll discover a culture in which you can learn, grow, and thrive. And find satisfaction in doing your part to make the world a better place.
To learn more about BD visit **********************
Becton, Dickinson, and Company is an Equal Opportunity Employer. We evaluate applicants without regard to race, color, religion, age, sex, creed, national origin, ancestry, citizenship status, marital or domestic or civil union status, familial status, affectional or sexual orientation, gender identity or expression, genetics, disability, military eligibility or veteran status, and other legally-protected characteristics.
Primary Work Location: USA NJ - Franklin Lakes
At BD, we are strongly committed to investing in our associates-their well-being and development, and in providing rewards and recognition opportunities that promote a performance-based culture. We demonstrate this commitment by offering a valuable, competitive package of compensation and benefits programs which you can learn more about on our Careers Site under Our Commitment to You.
Salary or hourly rate ranges have been implemented to reward associates fairly and competitively, as well as to support recognition of associates' progress, ranging from entry level to experts in their field, and talent mobility. There are many factors, such as location, that contribute to the range displayed. The salary or hourly rate offered to a successful candidate is based on experience, education, skills, and any step rate pay system of the actual work location, as applicable to the role or position. Salary or hourly pay ranges may vary for Field-based and Remote roles.
Salary Range Information
$114,500.00 - $189,100.00 USD Annual
Data Engineer
Portland, OR jobs
FlexIT client has an immediate need for a Data Engineer for a 6-month remote contract in Portland, Oregon.
Email resumes to *********************
Data Engineer - Python, Spark
Beaverton, OR jobs
FlexIT client is looking for an immediate Data Engineer - Python, Spark for a 12-month remote contract.
The client is looking for great engineers with talent and persistence who can leverage their existing skills and learn new ones. You should have some of the specific technical skills we're looking for and be expert enough in one or two to help ramp others quickly.
Job Duties:
We are building petabyte-class solutions that consume fast-moving streams from eCommerce, retail, and partner channels and power the critical decisions that drive our business. We are building the Cloud Platform for Data and Analytics on AWS that fuels digital transformation.
Focus areas include:
Data Streaming / Enrichment / Business Rules / MDM
Data Lake / Warehousing
Data Governance / GDPR / SOX
Data Strategy / Unified Access / IAM / RBAC
Be a great teammate on an agile/SCRUM team that sets and meets aggressive goals.
Mentor new and less experienced developers to advance their proficiency.
Leverage expert development skills and solid design skills to deliver reliable, scalable, performant solutions with modern tooling, data structures and algorithms.
Work with Product Owners, Engineering Managers and Principal Engineers to deliver solutions that enable digital transformation
Data Engineer
Beaverton, OR jobs
FlexIT client is looking for a Data Engineer for a 12-month contract in Beaverton, Oregon.
Looking for local candidates to work on site.
Top skills: Python, SQL , AWS, Spark
Full Stack Data Engineer
Beaverton, OR jobs
APLA is building capabilities around the company's data foundation to create the data sources needed for reporting and analytics.
The type of engineer we're looking for is a Full Stack Data Engineer.
Knowledge of data visualization engineering as well as consumption and view build engineering
Data Science Engineer
Beaverton, OR jobs
We are looking for strong experience in Python, AWS, Machine Learning/Data Science, and CI/CD integration, and the ability to work with cross-functional teams. The work will also involve building and incorporating automated unit and integration tests into the data science platform.
Sr. Data Engineer
Beaverton, OR jobs
FlexIT client has an immediate need for a Sr. Data Engineer for a 12-month contract in Hillsboro, Oregon.
Top Skills: Python, Spark, Spark Streaming, AWS, SQL
Role responsibilities:
Design and implement data products and features in collaboration with product owners, data analysts, and business partners using Agile / Scrum methodology
Contribute to overall architecture, frameworks and patterns for processing and storing large data volumes
Translate product backlog items into engineering designs and logical units of work
Profile and analyze data for the purpose of designing scalable solutions
Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
Design and implement distributed data processing pipelines using tools and languages prevalent in the big data ecosystem
Build utilities, user defined functions, libraries, and frameworks to better enable data flow patterns
Implement complex automated routines using workflow orchestration tools
Work with architecture, engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to
Anticipate, identify and solve issues concerning data management to improve data quality
Build and incorporate automated unit tests and participate in integration testing efforts
Utilize and advance continuous integration and deployment frameworks
Troubleshoot data issues and perform root cause analysis
Work across teams to resolve operational & performance issues
Sr. Data Engineer
Beaverton, OR jobs
We have embraced big data technologies to enable data-driven decisions. We're looking to expand our Data Engineering team to keep pace. As a Senior Data Engineer, you will work with a variety of talented teammates and be a driving force for building first-class solutions for our organization and its business partners, working on development projects related to supply chain, commerce, consumer behavior, and web analytics, among others.
Role responsibilities:
Design and implement data products and features in collaboration with product owners, data analysts, and business partners using Agile / Scrum methodology
Contribute to overall architecture, frameworks and patterns for processing and storing large data volumes
Translate product backlog items into engineering designs and logical units of work
Profile and analyze data for the purpose of designing scalable solutions
Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
Design and implement distributed data processing pipelines using tools and languages prevalent in the big data ecosystem
Build utilities, user defined functions, libraries, and frameworks to better enable data flow patterns
Implement complex automated routines using workflow orchestration tools
Work with architecture, engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to
Anticipate, identify and solve issues concerning data management to improve data quality
Build and incorporate automated unit tests and participate in integration testing efforts
Utilize and advance continuous integration and deployment frameworks
Troubleshoot data issues and perform root cause analysis
Work across teams to resolve operational & performance issues
The following qualifications and technical skills will position you well for this role:
MS/BS in Computer Science, or related technical discipline
5+ years of experience in large-scale software development, 3+ years of big data experience
Strong programming experience, Python preferred
Extensive experience working with Hadoop and related processing frameworks such as Spark, Hive, etc.
Experience with messaging/streaming/complex event processing tooling and frameworks, with an emphasis on Spark Streaming or Structured Streaming and Apache NiFi
Good understanding of file formats including JSON, Parquet, Avro, and others
Familiarity with data warehousing, dimensional modeling, and ETL development
Experience with RDBMS systems, SQL and SQL Analytical functions
Experience with workflow orchestration tools like Apache Airflow
Experience with performance and scalability tuning
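The Spark Streaming / Structured Streaming emphasis above centers on windowed aggregation; a plain-Python sketch of a tumbling-window count (event times and window size are made up for the example, and this mimics `groupBy(window(...), key)` rather than using Spark itself):

```python
from collections import defaultdict

# Illustrative tumbling-window aggregation in plain Python, mimicking the
# shape of a Structured Streaming groupBy(window(...), key).count().
WINDOW_SECONDS = 60

def tumbling_window_counts(events):
    # events: iterable of (epoch_seconds, key) pairs.
    # Returns counts keyed by (window_start, key).
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % WINDOW_SECONDS)
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (30, "click"), (65, "view"), (70, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```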
The following skills and experience are also relevant to our overall environment, and nice to have:
Experience with Scala or Java
Experience working in a public cloud environment, particularly AWS, and with services like EMR, S3, Lambda, ElastiCache, DynamoDB, SNS, SQS, etc
Familiarity with cloud warehouse tools like Snowflake
Experience building RESTful APIs to enable data consumption
Familiarity with build tools such as Terraform or CloudFormation and automation tools such as Jenkins or Circle CI
Familiarity with practices like Continuous Development, Continuous Integration and Automated Testing
Experience in Agile/Scrum application development
These are the characteristics that we strive for in our own work. We would love to hear from candidates who embody the same:
Desire to work collaboratively with your teammates to come up with the best solution to a problem
Demonstrated experience and ability to deliver results on multiple projects in a fast-paced, agile environment
Excellent problem-solving and interpersonal communication skills
Strong desire to learn and share knowledge with others
Data Engineer - Spark, Python, AWS
Beaverton, OR jobs
We are looking for a motivated and experienced Software Data Engineer to join our client's Advanced Analytics & Machine Learning team.
This is a unique opportunity to enhance & upgrade our next generation Data Science Platform that supports a dynamic team of data scientists, data engineers, and data analysts, all working on Demand Sensing related projects.
Job Duties:
We are looking for strong experience in Python, AWS, Machine Learning/Data Science, and CI/CD integration, and the ability to work with cross-functional teams.
The work will also involve building and incorporating automated unit and integration tests into the data science platform.
Big Data Engineer
Beaverton, OR jobs
FlexIT client is looking for a Big Data Engineer for a 12-month contract in Beaverton, Oregon.
Looking for local candidates; will consider candidates who can relocate to Oregon.
Top 3 Skills: Hadoop, Python, Spark
Email resumes to *********************
Job Description:
You will be working on projects that build data artifacts to answer questions about consumer behavior, commerce trends, consumer touchpoint preferences, and more.
Job Duties:
Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem.
You will be given the opportunity to own the design and implementation. You will collaborate with product managers, data scientists, and engineers to accomplish your tasks.
Publish RESTful APIs to enable real-time data consumption using OpenAPI specifications. This will enable many teams to consume the data that is being produced.
Explore and build proofs of concept using NoSQL technologies such as HBase, DynamoDB, and Cassandra, and distributed stream processing frameworks like Apache Spark, Flink, and Kafka Streams.
Take part in DevOps by building utilities, user defined functions and frameworks to better enable data flow patterns.
Work with architecture/engineering leads and other teammates to ensure high-quality solutions through code reviews and engineering best-practices documentation.
Experience with business rule management systems like Drools will also come in handy.
Some combination of these qualifications and technical skills will position you well for this role:
Cloud Data Engineer - AWS, Hive, Airflow
Beaverton, OR jobs
FlexIT client has an immediate need for a Cloud Data Engineer - AWS, Hive, Airflow for a 12-month remote contract in Beaverton, Oregon.
Email Resumes to *********************
Senior Big Data Engineer
Beaverton, OR jobs
We're looking to expand our Big Data Engineering team to keep pace. As a Sr. Big Data Engineer, you will work with a variety of talented teammates and be a driving force in technical initiatives that will accelerate analytics. You will be working on projects that build data artifacts to answer questions about consumer behavior, commerce trends, consumer touchpoint preferences, and more!
Data Engineer - Commercial Analytics
Beaverton, OR jobs
If you're ready to innovate and become part of our Enterprise Data organization, come join us now! You will be part of an organization that is revolutionizing technology platforms and architecting a data and analytics landscape that is simplified, modern, and flexible.
As a Data Engineer within the North America Commercial Analytics team, you will be a key member of a growing and passionate group focused on collaborating across business and technology resources to drive forward key products enabling enterprise data & analytics capabilities.
Senior Data Engineer
Beaverton, OR jobs
The Data Engineer will collaborate with product owners, developers, database architects, data analysts, visual developers and data scientists on data initiatives and will ensure optimal data delivery and architecture is consistent throughout ongoing projects.
Must be self-directed and comfortable supporting the data needs of the product roadmap. The right candidate will be excited by the prospect of optimizing and building integrated and aggregated data objects to architect and support our next generation of products and data initiatives.
Create and maintain optimal data pipeline architecture,
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing for greater scalability
Comprehensive documentation and knowledge transfer to Production Support
Work with Production Support to analyze and fix Production issues
Participate in an Agile/Scrum methodology to deliver high-quality software releases every two weeks through sprints
Refine and plan stories, and deliver on time
Analyze requirement documents and source-to-target mappings
Data Scientist (FSP)
Herndon, VA jobs
Job Description
Red Rock Government Services is a leading software engineering company recognized for its exceptional support to the intelligence community. With a proven track record of delivering innovative and mission-critical solutions, Red Rock specializes in developing secure, scalable, and cutting-edge technologies tailored to meet the complex needs of intelligence operations. The company's expertise in advanced analytics, cloud computing, and artificial intelligence enables it to empower agencies with enhanced decision-making capabilities and operational efficiency. Red Rock's commitment to excellence, agility, and collaboration solidifies its reputation as a trusted partner in safeguarding national security and advancing intelligence objectives.
This position requires a current and active TS/SCI with Full Scope Polygraph security clearance. This position does not have the ability to sponsor candidates for clearance processing.
RedRock is seeking a
Data Scientist
to join our team of diverse and qualified professionals. The role focuses on supporting the Sponsor's mission by delivering advanced language training to ensure proficiency across a broad spectrum of languages for the Sponsor's employees.
Responsibilities:
Works closely with the Sponsor to gather requirements and advise on AWS infrastructure design, development, and deployment of Cloud resources.
Designs, tests, and implements log aggregation in support of Cloud and AWS systems.
Designs, tests, and implements search and visualization infrastructure in support of AWS systems.
Works with vendors to develop and deploy Cloud based solutions to the learning environment in AWS.
Acts as a liaison between the Customer and Vendor Contacts to troubleshoot AWS when deploying new resources.
Recommends new technologies for use in the cloud environment (AWS).
Communicates IT requirements between management and technical entities.
Qualifications:
Experience in risk management and ability to identify project risks and facilitate the development and implementation of mitigation strategies
Experience implementing and integrating AWS solutions.
Experience in gathering requirements from vendor contacts and customers.
Experience using Linux in the AWS environment.
Experience with scripting and web programming technologies required to support web-based learning systems, such as PHP, Perl, Java, JScript, or PowerShell.
Experience deploying third party software products.
Experience with software configuration management tools such as SCCM.
Experience working with desktop and network hardware.
Location: Herndon, VA
Pay and Benefits:
Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Paid Leave and Retirement.
Commitment to Diversity:
All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law.
Data Engineer - Archimedes
Bridgeton, MO jobs
Company: Archimedes
About Us: Archimedes - Transforming the Specialty Drug Benefit. Archimedes is the industry leader in specialty drug management solutions. Founded with the goal of transforming the PBM industry to provide the necessary ingredients for the sustainability of the prescription drug benefit - alignment, value, and transparency - Archimedes achieves superior results for clients by eliminating tightly held PBM conflicts of interest, including drug spread, rebate retention, and pharmacy ownership, and by delivering the most rigorous clinical management at the lowest net cost. Current associates must use the SSO login option at ************************************ to be considered for internal opportunities.
Work Schedule: Core Business Hours
Overview
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and infrastructure that support enterprise analytics, reporting, and operational data flows. This role plays a critical part in enabling data-driven decision-making across Centene and SPM by ensuring the availability, integrity, and performance of data systems. The Data Engineer collaborates with data scientists, analysts, software developers, and business stakeholders to deliver robust ETL solutions, optimize data storage and retrieval, and implement secure, compliant data architectures in cloud and hybrid environments.
Operating within a healthcare-focused, compliance-heavy landscape, the Data Engineer ensures that data platforms align with regulatory standards such as HIPAA and SOC 2, while embedding automation and CI/CD practices into daily workflows. The role supports both AWS and Azure environments, leveraging cloud-native services and modern tooling to streamline data ingestion, transformation, and delivery.
Responsibilities
Job Responsibilities:
Design, develop, and maintain ETL pipelines for structured and unstructured data across cloud and on-prem environments.
Build and optimize data models, schemas, and storage solutions in SQL Server, PostgreSQL, and cloud-native databases.
Implement CI/CD workflows for data pipeline deployment and monitoring using tools such as GitHub Actions, Azure DevOps, or Jenkins.
Develop and maintain data integrations using AWS Glue, Azure Data Factory, Lambda, EventBridge, and other cloud-native services.
Ensure data quality, lineage, and governance through automated validation, logging, and monitoring frameworks.
Collaborate with cross-functional teams to gather requirements, design scalable solutions, and support analytics and reporting needs.
Monitor and troubleshoot data pipeline performance, latency, and failures; implement proactive alerting and remediation strategies.
Support data security and compliance by enforcing access controls, encryption standards, and audit logging aligned with HIPAA and SOC 2.
Maintain documentation for data flows, architecture diagrams, and operational procedures.
Participate in sprint planning, code reviews, and agile ceremonies to support iterative development and continuous improvement.
Evaluate and integrate new data tools, frameworks, and cloud services to enhance platform capabilities.
Partner with DevOps and Security teams to ensure infrastructure-as-code and secure deployment practices are followed.
Participate in, adhere to, and support compliance, people and culture, and learning programs.
Perform other duties as assigned.
Qualifications
Essential Background Requirements:
Education: Bachelor's degree in Computer Science, Information Systems, Data Engineering, or related field required. Master's degree preferred.
Certification/Licenses: AWS Certified Data Analytics or Solutions Architect required. Microsoft Certified: Azure Data Engineer Associate required. Certified Data Management Professional (CDMP) required.
Experience:
5+ years of experience in data engineering, ETL development, or cloud data architecture required.
Proven experience with SQL, ETL tools, and CI/CD pipelines required.
Hands-on experience with AWS and Azure data services and infrastructure required.
Familiarity with data governance, compliance frameworks (HIPAA, SOC 2), and secure data handling practices required.
Familiarity with CI/CD pipelines, automated testing, and version control systems required.
Skills & Technologies:
Languages & Tools: SQL, Python, Bash, Git, Terraform, PowerShell
ETL & Orchestration: AWS Glue, Azure Data Factory, Apache Airflow
CI/CD: GitHub Actions, Azure DevOps, Jenkins
Cloud Platforms: AWS (S3, Lambda, RDS, Redshift), Azure (Blob Storage, Synapse, Functions)
Monitoring & Logging: CloudWatch, Azure Monitor, ELK Stack
Data Governance: Data cataloging, lineage tracking, encryption, and access control.
Location: 502 Earth City Expy, STE 300, Earth City, MO 63045, US
Lead Data Engineer
Irvine, CA jobs
At Allergan Aesthetics, an AbbVie company, we develop, manufacture, and market a portfolio of leading aesthetics brands and products. Our aesthetics portfolio includes facial injectables, body contouring, plastics, skin care, and more. Our goal is to consistently provide our customers with innovation, education, exceptional service, and a commitment to excellence, all with a personal touch. For more information, visit *************************************** Follow Allergan Aesthetics on LinkedIn.
Allergan Aesthetics | An AbbVie Company
Job Description
As the Lead Data Engineer, you will report to the Engineering Manager (Data Services) and collaborate closely with key stakeholders across the business to solve critical technical challenges.
Key Responsibilities:
Take ownership of achieving objectives and key results for your team; oversee and own technical solutions; communicate schedule, status, and milestones
Collaborate with cross-functional partners (Product Managers, Data Scientists, Machine Learning Engineers, Software Engineers, and Business teams) to build data products
Communicate effectively with both technical and non-technical stakeholders. Translate technical concepts into clear, accessible terms.
Develop, optimize, and maintain complex ETL processes for data movement and transformation
Review code and provide technical guidance to ensure adherence to high-quality standards and best practices in data engineering
Develop APIs and microservices to expose and integrate data products with software systems
Implement monitoring, logging, and alerting systems to proactively identify and resolve issues
Ensure data quality, security, and compliance with relevant regulations and standards
Stay current with industry trends, emerging technologies, and best practices in data engineering. Foster a culture of continuous learning and skill development within the team
Qualifications
Required Experience & Skills
BS, MS, or PhD in Computer Science, Mathematics, Statistics, Engineering, Operations Research, or a related quantitative field
7+ years of experience as a Data Engineer or Software Engineer developing and maintaining data pipelines, infrastructure and architecture
Strong programming skills in Python with a solid understanding of core computer science principles
Knowledge of relational and dimensional data modeling for building data products
Experience with data quality checks and data monitoring solutions
Experience orchestrating complex workflows and data pipelines using Airflow or similar tools
Proficiency with Git, CI/CD pipelines, Docker, and Kubernetes
Experience architecting solutions on AWS or equivalent public cloud platforms
Experience developing data APIs, microservices, and event-driven systems to integrate data products
Strong interpersonal and verbal communication skills
Proven leadership experience with the ability to mentor and guide a team
Preferred Experience & Skills:
Familiarity with data mesh concepts.
Domain knowledge in recommender systems, fraud detection, personalization, and marketing science.
Understanding of vector databases, knowledge graphs, and other advanced data organization techniques.
Hands-on experience with tools such as Snowflake, Postgres, DynamoDB, Kafka, Fivetran, dbt, Airflow, Docker, Kubernetes, SageMaker, Datadog, PagerDuty, data observability tools, and data governance tools.
Additional Information
Applicable only to applicants applying to a position in any location with pay disclosure requirements under state or local law:
The compensation range described below is the range of possible base pay compensation that the Company believes in good faith it will pay for this role at the time of this posting based on the job grade for this position. Individual compensation paid within this range will depend on many factors including geographic location, and we may ultimately pay more or less than the posted range. This range may be modified in the future.
We offer a comprehensive package of benefits including paid time off (vacation, holidays, sick), medical/dental/vision insurance and 401(k) to eligible employees.
This job is eligible to participate in our short-term incentive programs.
This job is eligible to participate in our long-term incentive programs.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, incentive, benefits, or any other form of compensation and benefits that are allocable to a particular employee remains in the Company's sole and absolute discretion unless and until paid and may be modified at the Company's sole and absolute discretion, consistent with applicable law.
AbbVie is an equal opportunity employer and is committed to operating with integrity, driving innovation, transforming lives and serving our community. Equal Opportunity Employer/Veterans/Disabled.
US & Puerto Rico only - to learn more, visit *************************************************************************
US & Puerto Rico applicants seeking a reasonable accommodation, click here to learn more:
*************************************************************