Data Scientist
Data Scientist job at Blue Cross Blue Shield of Massachusetts
Ready to help us transform healthcare? Bring your true colors to blue.
About the Role
The Financial and Evaluation Measurement team is seeking a Data Scientist with strong technical expertise and a background in a quantitative discipline. In this role, you'll work with large, complex healthcare datasets to project and evaluate the financial impact of policy changes, clinical programs, and third-party solutions. You'll design analytical models and compelling visualizations that inform key business decisions, helping shape strategies to lower costs and improve the quality of care for our members.
This role is eligible for our Flex Persona for candidates local to our Boston, MA and Hingham, MA offices.
Your Day to Day
Analyze large, complex datasets from a robust Enterprise Data Warehouse to identify drivers of healthcare cost trends, market pressures, and profitability.
Design and execute advanced statistical models and predictive analytics to inform financial projections and evaluate cost/benefit of business initiatives.
Create clear, compelling data visualizations and communicate actionable recommendations that reduce healthcare costs and improve member outcomes.
Lead cross-functional analytic projects from design through implementation, ensuring consistent representation of financial performance across the organization.
Partner closely with teams including Payment Integrity, Health and Medical Management, Provider Contracting, and others, to ensure alignment between analytics and business strategy.
We're Looking For
Bachelor's degree in Mathematics, Statistics, Data Science, Economics, Healthcare Analytics, or related field required. Master's degree preferred.
5+ years of professional experience in data science, healthcare analytics, or a related field.
Proficiency in SQL, Python, and/or R for data extraction, manipulation, and modeling (a brief illustrative sketch follows this list).
Experience with data visualization tools (e.g., Tableau, Power BI).
Familiarity with SAS is a plus but not required.
Strong analytical and problem-solving skills with attention to detail.
Excellent communication skills, with the ability to distill complex analyses into clear, actionable insights for non-technical audiences.
Proven ability to manage multiple projects and meet deadlines in a fast-paced environment.
Collaborative mindset and adaptability to shifting business priorities.
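To make the SQL/Python expectation above concrete, here is a minimal, self-contained sketch of a cost-trend analysis in Python. It runs on synthetic claims data; the column names and the PMPM (per-member-per-month) framing are illustrative assumptions, not BCBSMA's actual data model or methodology.

```python
# Illustrative only: a toy healthcare cost-trend analysis on synthetic claims.
# Column names (member_id, service_month, allowed_amount) and the PMPM framing
# are assumptions for this sketch, not BCBSMA's data model or methodology.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000
claims = pd.DataFrame({
    "member_id": rng.integers(1, 500, size=n),
    "service_month": rng.integers(0, 24, size=n),     # months since baseline
    "allowed_amount": rng.gamma(shape=2.0, scale=150.0, size=n),
})

# Per-member-per-month (PMPM) cost by month, a common trend metric.
pmpm = (
    claims.groupby("service_month")
    .agg(total_cost=("allowed_amount", "sum"),
         members=("member_id", "nunique"))
    .assign(pmpm=lambda d: d["total_cost"] / d["members"])
)

# Simple linear trend: estimated monthly change in PMPM cost.
slope, intercept = np.polyfit(pmpm.index, pmpm["pmpm"], deg=1)
print(f"Estimated PMPM trend: {slope:+.2f} per month (baseline {intercept:.2f})")
```

In practice, the synthetic frame would be replaced by a SQL extract from the Enterprise Data Warehouse and the naive linear fit by a properly specified statistical model.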
What You Bring
Passion for using data to improve healthcare quality and affordability.
A creative, solution-oriented approach to complex business problems.
Ability to influence decision-making through clear data-driven insights and strong stakeholder partnerships.
A team-first perspective with a proven record of driving projects to completion and measurable impact.
What You'll Gain
You will design and validate analytic frameworks, uncover emerging drivers of healthcare cost and utilization, and deliver insights that shape strategy in pricing, product development, financial planning, and forecasting.
The Business & Account Analytics team executes innovative analytics to inform corporate strategy in critical areas including competitiveness, affordability, profitability, provider payment reform, and health care policy. As an Analyst on the Business & Account Analytics team, you will work with a rich dataset and a robust, well-established Enterprise Data Warehouse to identify emerging drivers of health care trend and cost, market pressures, and profitability.
#LI-Hybrid
Minimum Education Requirements:
High school degree or equivalent required unless otherwise noted above
Location: Boston
Time Type: Full time
Salary Range: $114,414.00 - $142,120.00
The job posting range is the lowest to highest salary we in good faith believe we would pay for this role at the time of this posting. We may ultimately pay more or less than the posted range, and the range may be modified in the future. An employee's pay position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, performance, shift, travel requirements, sales or revenue-based metrics, and business or organizational needs and affordability.
This job is also eligible for variable pay.
We offer a comprehensive package of benefits including paid time off, medical/dental/vision insurance, 401(k), and a suite of well-being benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
WHY Blue Cross Blue Shield of MA?
We understand that the confidence gap and imposter syndrome can prevent amazing candidates from coming our way, so please don't hesitate to apply. We'd love to hear from you. You might be just what we need for this role or possibly another one at Blue Cross Blue Shield of MA. The more voices we have represented and amplified in our business, the more we will all thrive, contribute, and be brilliant. We encourage you to bring us your true colors, your perspectives, and your experiences. It's in our differences that we will remain relentless in our pursuit to transform healthcare for ALL.
As an employer, we are committed to investing in your development and providing the necessary resources to enable your success. Learn how we are dedicated to creating an inclusive and rewarding workplace that promotes excellence and provides opportunities for employees to forge their unique career path by visiting our Company Culture page. If this sounds like something you'd like to be a part of, we'd love to hear from you. You can also join our Talent Community to stay “in the know” on all things Blue.
At Blue Cross Blue Shield of Massachusetts, we believe in wellness and that work/life balance is a key part of associate wellbeing. For more information on how we work and support that work/life balance visit our "How We Work" Page.
Principal Data Scientist
Sacramento, CA jobs
Lumen connects the world. We are igniting business growth by connecting people, data and applications - quickly, securely, and effortlessly. Together, we are building a culture and company from the people up - committed to teamwork, trust and transparency. People power progress.
We're looking for top-tier talent and offer the flexibility you need to thrive and deliver lasting impact. Join us as we digitally connect the world and shape the future.
**The Role**
Provide technical direction in the creation, delivery, and integration of multiple and moderately complex software solutions. May translate business requirements into specific designs and/or participate in the design, evaluation, and selection of IT solutions for a specific business process. Establish current and future use of practices, metrics, and methodologies to determine current and future solutions. Explore and evaluate new and approved technologies. Consult on the application of existing and newly approved technologies to develop solutions. Ensure that designs and solutions are created and delivered in accordance with the architectural direction.
**Location**
This is a work from home position within the US.
**The Main Responsibilities**
+ Lead development and deployment of Enterprise AI applications leveraging both supervised and unsupervised learning techniques
+ Design, implement, and optimize Retrieval-Augmented Generation (RAG) pipelines for AI-driven applications (a minimal retrieval sketch follows this list)
+ Utilize Vector Databases and Knowledge Graphs to enhance AI applications in underwriting, claims processing, and customer engagement
+ Develop data pipelines for ingestion, transformation, and storage to support AI workloads
+ Design and implement scalable solutions using cloud-based AI platforms such as Azure AI Foundry or AWS Bedrock
+ Implement AIOps best practices, including CI/CD for model training, validation, deployment, and monitoring
+ Develop generative AI models for personalized customer experiences and automation of complex decision-making processes
+ Apply natural language processing (NLP) techniques to analyze and extract insights from unstructured data sources
+ Optimize AI models for performance, scalability, and reliability in enterprise environments
+ Conduct architecture design reviews and performance tuning for AI/ML applications
+ Work cross-functionally with business and technology teams to identify AI-driven opportunities and define strategies
+ Ensure compliance with ethical AI principles, model governance, and data privacy regulations
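As a concrete illustration of the RAG responsibilities above, the following sketch shows just the retrieval step, using a toy hashing "embedding" and cosine similarity. It is framework-free and purely illustrative: the documents, the embed function, and the prompt format are assumptions for this sketch, not Lumen's pipeline or any specific vendor API.

```python
# A minimal, framework-free sketch of the retrieval step in a RAG pipeline.
# The hash-based embedding is a deliberate stand-in for a real embedding model;
# everything here is illustrative only.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy bag-of-words hashing embedding; replace with a real embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Order fulfillment SLAs are defined in the enterprise service catalog.",
    "Network orchestration jobs emit telemetry to the observability platform.",
    "Claims are triaged by severity before routing to an adjuster queue.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How is network orchestration monitored?"
context = "\n".join(retrieve(query))
# The retrieved context would then be packed into the LLM prompt:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

In a production pipeline, the toy embedding would be replaced by a managed embedding model and the in-memory array by a vector database or knowledge graph lookup.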
**What We Look For in a Candidate**
+ Bachelor's or master's degree in computer science, Software Engineering, Artificial Intelligence, Machine Learning or Data Science
+ 7+ years of enterprise-scale experience in designing, implementing, and deploying AI/ML models
+ 7+ years of experience working with cloud-based AI platforms, including Azure AI Foundry and AWS Bedrock
+ 7+ years of experience in implementing both supervised and unsupervised learning techniques in real-world applications
+ Strong problem-solving skills and a deep understanding of statistical and mathematical principles
+ Strong experience in natural language processing (NLP) and generative AI applications
+ Expertise in AIOps, model lifecycle management, and AI model deployment at scale
+ Proficient in Python and related libraries, and in SQL
+ Fluent in one or more object-oriented languages like C#, C++, Scala, or Java, and scripting languages like Python or Ruby
+ Experience working with advanced AI frameworks such as LangChain, LlamaIndex, and Hugging Face transformers is preferred
+ Hands-on experience with Gen AI, RAG pipelines, Vector Databases, and Knowledge Graphs
+ Experience in the Telecom industry, particularly in Network or Orchestration
+ Familiarity with Azure OpenAI and LLM fine-tuning is preferred
+ Familiarity with agile software delivery methodologies such as Scaled Agile
**Compensation**
This information reflects the anticipated base salary range for this position based on current national data. Minimums and maximums may vary based on location. Individual pay is based on skills, experience and other relevant factors.
Location Based Pay Ranges
$149,084 - $198,779 in these states: AL AR AZ FL GA IA ID IN KS KY LA ME MO MS MT ND NE NM OH OK PA SC SD TN UT VT WI WV WY
$156,539 - $208,718 in these states: CO HI MI MN NC NH NV OR RI
$163,993 - $218,657 in these states: AK CA CT DC DE IL MA MD NJ NY TX VA WA
Lumen offers a comprehensive package featuring a broad range of Health, Life, Voluntary Lifestyle benefits and other perks that enhance your physical, mental, emotional and financial wellbeing. We're able to answer any additional questions you may have about our bonus structure (short-term incentives, long-term incentives and/or sales compensation) as you move through the selection process.
Learn more about Lumen's:
Benefits (****************************************************
Bonus Structure
#LI-Remote
#LP1
Requisition #: 339933
**Background Screening**
If you are selected for a position, there will be a background screen, which may include checks for criminal records and/or motor vehicle reports and/or drug screening, depending on the position requirements. For more information on these checks, please refer to the Post Offer section of our FAQ page (************************************* . Job-related concerns identified during the background screening may disqualify you from the new position or your current role. Background results will be evaluated on a case-by-case basis.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
**Equal Employment Opportunities**
We are committed to providing equal employment opportunities to all persons regardless of race, color, ancestry, citizenship, national origin, religion, veteran status, disability, genetic characteristic or information, age, gender, sexual orientation, gender identity, gender expression, marital status, family status, pregnancy, or other legally protected status (collectively, "protected statuses"). We do not tolerate unlawful discrimination in any employment decisions, including recruiting, hiring, compensation, promotion, benefits, discipline, termination, job assignments or training.
**Disclaimer**
The job responsibilities described above indicate the general nature and level of work performed by employees within this classification. It is not intended to include a comprehensive inventory of all duties and responsibilities for this job. Job duties and responsibilities are subject to change based on evolving business needs and conditions.
In any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.
Please be advised that Lumen does not require any form of payment from job applicants during the recruitment process. All legitimate job openings will be posted on our official website or communicated through official company email addresses. If you encounter any job offers that request payment in exchange for employment at Lumen, they are not for employment with us, but may relate to another company with a similar name.
Oracle Orbit Analytics and Data Modelling Consultant
Cleveland, OH jobs
Candidates should meet the top 3 must-have skill sets.
Title - Oracle Orbit Analytics and Data Modelling Consultant
Client - Fujitsu
Job - Remote, but 10% travel is required to the work location - Cleveland, OH (hybrid)
Top 3 skills required:
1. Expertise in building complex physical and logical data models, including good knowledge of data modeling concepts; must have a very good understanding of normal, complex, and circular joins.
2. Very good understanding of object- and data-level security, and of roles and responsibilities.
3. A strong hold on building complex Orbit reports using calculation measures and report insights.
Responsibilities:
Develop a comprehensive migration plan from Oracle Discoverer to Orbit Analytics.
Conduct a detailed analysis of current Oracle Discoverer reports and dashboards.
Design and implement solutions to migrate reports, dashboards, and data models to Orbit Analytics.
Development includes understanding of requirements, use of existing data models or working with a data modeler, build of reports, technical unit testing, and quality assurance testing before being moved to the UAT environment for CRP and UAT testing.
Provide technical leadership and guidance throughout the migration process.
Ensure data integrity, accuracy, and consistency during the migration.
Optimize the performance of the new Orbit Analytics environment.
Utilize in-depth knowledge of Oracle R12 ERP and Oracle Fusion Cloud table structures in the migration process.
Ensure seamless integration of data from Oracle R12 ERP and Oracle Fusion Cloud into Orbit Analytics.
Collaborate with ERP and cloud teams to understand and address data requirements and challenges.
Collaborate with business users to understand their reporting needs and ensure they are met post-migration.
Conduct training sessions and create documentation to support end-users in the transition to Orbit Analytics.
Communicate progress, challenges, and solutions to stakeholders.
Develop and execute test plans to ensure all migrated reports and dashboards function as expected.
Identify and resolve any issues that arise during the migration process.
Conduct post-migration reviews to ensure all objectives are met.
Stay up-to-date with the latest features and best practices in Orbit Analytics.
Identify opportunities for further optimization and enhancement of the Orbit Analytics environment.
Provide ongoing support and maintenance post-migration.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field. Master's degree preferred.
Proven experience in Oracle Discoverer and Orbit Analytics.
Extensive experience with Oracle R12 ERP and Oracle Fusion Cloud.
At least 5 years of experience in business intelligence and data analytics.
Demonstrated experience in leading migration projects.
Technical Skills:
Strong knowledge of SQL, PL/SQL, and database management.
Proficiency in Orbit Analytics, Oracle Discoverer, Oracle R12 ERP, and Oracle Fusion Cloud.
Familiarity with ETL processes and data warehousing concepts.
Experience with data visualization tools and techniques.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and interpersonal skills.
Ability to work independently and as part of a team.
Project management skills with the ability to manage multiple priorities.
Who We Are
CARE ITS is a certified woman-owned and operated minority company (certified as WMBE). At CARE ITS, we are world-class IT professionals helping clients achieve their goals. CARE ITS was established in 2010. Since then, we have successfully executed several projects with our expert team of professionals, each with more than 20 years of experience. We operate globally with our headquarters in Plainsboro, NJ, with focused specialization in Salesforce, Guidewire, and AWS. We provide expert solutions to our customers in various business domains.
Auto-ApplyPrincipal Data Scientist
Lincoln, NE jobs
Lumen connects the world. We are igniting business growth by connecting people, data and applications - quickly, securely, and effortlessly. Together, we are building a culture and company from the people up - committed to teamwork, trust and transparency. People power progress.
We're looking for top-tier talent and offer the flexibility you need to thrive and deliver lasting impact. Join us as we digitally connect the world and shape the future.
**The Role**
Provide technical direction in the creation, delivery, and integration of multiple and moderately complex software solutions. May translate business requirements into specific designs and/or participate in the design, evaluation, and selection of IT solutions for a specific business process. Establish current and future use of practices, metrics, and methodologies to determine current and future solutions. Explore and evaluate new and approved technologies. Consult on the application of existing and newly approved technologies to develop solutions. Ensure that designs and solutions are created and delivered in accordance with the architectural direction.
**Location**
This is a work from home position within the US.
**The Main Responsibilities**
+ Lead development and deployment of Enterprise AI applications leveraging both supervised and unsupervised learning techniques
+ Design, implement, and optimize Retrieval-Augmented Generation (RAG) pipelines for AI-driven apps
+ Utilize Vector Databases and Knowledge Graphs to enhance AI applications in underwriting, claims processing, and customer engagement
+ Develop data pipelines for ingestion, transformation, and storage to support AI workloads
+ Design and implement scalable solutions using cloud-based AI platforms such as Azure AI Foundry or AWS Bedrock
+ Implement AIOps best practices, including CI/CD for model training, validation, deployment, and monitoring
+ Develop generative AI models for personalized customer experiences and automation of complex decision-making processes
+ Apply natural language processing (NLP) techniques to analyze and extract insights from unstructured data sources
+ Optimize AI models for performance, scalability, and reliability in enterprise environments
+ Conduct architecture design reviews and performance tuning for AI/ML applications
+ Work cross-functionally with business and technology teams to identify AI-driven opportunities and define strategies
+ Ensure compliance with ethical AI principles, model governance, and data privacy regulations
**What We Look For in a Candidate**
+ Bachelor's or master's degree in computer science, Software Engineering, Artificial Intelligence, Machine Learning or Data Science
+ 7+ years of enterprise-scale experience in designing, implementing, and deploying AI/ML models
+ 7+ years of experience working with cloud-based AI platforms, including Azure AI Foundry and AWS Bedrock
+ 7+ years of experience in implementing both supervised and unsupervised learning techniques in real-world applications
+ Strong problem-solving skills and a deep understanding of statistical and mathematical principles
+ Strong experience in natural language processing (NLP) and generative AI applications
+ Expertise in AIOps, model lifecycle management, and AI model deployment at scale
+ Proficient in Python and related libraries, and in SQL
+ Fluent in one or more object-oriented languages like C#, C++, Scala, or Java, and scripting languages like Python or Ruby
+ Experience working with advanced AI frameworks such as LangChain, LlamaIndex, and Hugging Face transformers is preferred
+ Hands-on experience with Gen AI, RAG pipelines, Vector Databases, and Knowledge Graphs
+ Experience in the Telecom industry, particularly in Network or Orchestration
+ Familiarity with Azure OpenAI and LLM fine-tuning is preferred
+ Familiarity with agile software delivery methodologies such as Scaled Agile
**Compensation**
This information reflects the anticipated base salary range for this position based on current national data. Minimums and maximums may vary based on location. Individual pay is based on skills, experience and other relevant factors.
Location Based Pay Ranges
$149,084 - $198,779 in these states: AL AR AZ FL GA IA ID IN KS KY LA ME MO MS MT ND NE NM OH OK PA SC SD TN UT VT WI WV WY
$156,539 - $208,718 in these states: CO HI MI MN NC NH NV OR RI
$163,993 - $218,657 in these states: AK CA CT DC DE IL MA MD NJ NY TX VA WA
Lumen offers a comprehensive package featuring a broad range of Health, Life, Voluntary Lifestyle benefits and other perks that enhance your physical, mental, emotional and financial wellbeing. We're able to answer any additional questions you may have about our bonus structure (short-term incentives, long-term incentives and/or sales compensation) as you move through the selection process.
Learn more about Lumen's:
Benefits (****************************************************
Bonus Structure
#LI-Remote
#LP1
Requisition #: 339933
**Background Screening**
If you are selected for a position, there will be a background screen, which may include checks for criminal records and/or motor vehicle reports and/or drug screening, depending on the position requirements. For more information on these checks, please refer to the Post Offer section of our FAQ page (************************************* . Job-related concerns identified during the background screening may disqualify you from the new position or your current role. Background results will be evaluated on a case-by-case basis.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
**Equal Employment Opportunities**
We are committed to providing equal employment opportunities to all persons regardless of race, color, ancestry, citizenship, national origin, religion, veteran status, disability, genetic characteristic or information, age, gender, sexual orientation, gender identity, gender expression, marital status, family status, pregnancy, or other legally protected status (collectively, "protected statuses"). We do not tolerate unlawful discrimination in any employment decisions, including recruiting, hiring, compensation, promotion, benefits, discipline, termination, job assignments or training.
**Disclaimer**
The job responsibilities described above indicate the general nature and level of work performed by employees within this classification. It is not intended to include a comprehensive inventory of all duties and responsibilities for this job. Job duties and responsibilities are subject to change based on evolving business needs and conditions.
In any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.
Please be advised that Lumen does not require any form of payment from job applicants during the recruitment process. All legitimate job openings will be posted on our official website or communicated through official company email addresses. If you encounter any job offers that request payment in exchange for employment at Lumen, they are not for employment with us, but may relate to another company with a similar name.
Data Scientist
Hawaii jobs
The Data Scientist will lead the development of datasets for data-driven visualizations and performance metrics to support public health decision-making within the Hawaii Department of Health (HDOH) as part of the Hawaii Data & Analytics Modernization Project. This role is embedded in the Workforce Acceleration Initiative (WAI), a federally funded CDC Foundation program aimed at enhancing public health agencies' technology and data systems. The Data Scientist will focus on making datasets accessible and designing and implementing PowerBI dashboards for over 40 HDOH programs. This comprises importing datasets to the DOH Enterprise Data Warehouse (EDW) in Azure, developing robust Extract, Transform, Load (ETL) processes, and delivering a Train-the-Trainer program to build staff capacity for data pipeline and dashboard maintenance.
The role requires close collaboration with HDOH program staff, Subject Matter Experts (SMEs), and Health Data & Informatics Office (HDIO) staff, including HDIO's PowerBI and IT teams. The role ensures data is made accessible and dashboards are developed to align with Public Health Accreditation Board (PHAB) standards, Continuous Quality Improvement (CQI) initiatives, and HDOH program goals. The Data Scientist will integrate stakeholder engagement to address diverse programmatic needs and support the synthesis of a core data model within the data warehouse across HDOH programs. This position is eligible for a fully remote work arrangement for U.S.-based candidates, hired through the CDC Foundation, and assigned to HDOH's Health Data & Informatics Office.
Responsibilities
· Data Management and ETL Process Development:
o Assess, plan, and develop data pipelines from multiple operational systems for ingest into a common HDOH data warehouse, with appropriate governance controls, for use in analysis and reporting projects.
o Assess and manage ETL processes to support data visualization for over 40 HDOH programs, ensuring reliable data integration and quality from source systems to the HDOH data warehouse for PowerBI dashboards (a minimal ETL sketch follows this responsibilities list).
o Collaborate with HDIO's PowerBI and IT teams to design and implement ETL workflows, integrating disparate data sources into a unified core data model.
o Develop and maintain standardized data collection, quality, and processing protocols to ensure data accuracy, consistency, and timeliness.
o Produce updated weekly progress reports on ETL and data management activities, accessible to all relevant stakeholders, to ensure transparency and alignment.
· PowerBI Dashboard Development:
o Lead the design, development, and deployment of interactive PowerBI dashboards for over 40 HDOH programs, visualizing previously identified KPIs and performance metrics, alongside HDOH technical staff.
o Ensure dashboards are user-friendly, with drill-down capabilities, and aligned with PHAB accreditation readiness, CQI initiatives, and HDOH program goals.
o Conduct iterative design reviews and usability testing with HDOH staff to refine dashboard functionality and address program-specific needs.
o Produce weekly progress reports on dashboard development, accessible to all relevant stakeholders, to document milestones and incorporate feedback.
· Train-the-Trainer Program Development:
o Develop and deliver a Train-the-Trainer program to equip HDOH staff with skills to create, maintain, and update data pipelines and PowerBI dashboards, including ETL details for troubleshooting.
o Design modular, flexible training sessions with recorded materials and user-friendly guides to accommodate varying schedules and skill levels.
o Provide ongoing support through a helpdesk or peer mentoring system to reinforce learning and ensure long-term sustainability.
o Produce updated weekly progress reports on training efforts, accessible to all relevant stakeholders, to monitor adoption and impact.
· Project Coordination and Stakeholder Collaboration:
o Oversee coordination and execution of dashboard development, metrics refinement, and training, ensuring integration with interrelated projects (data governance, EDSS modernization, core data model synthesis).
o Lead regular project meetings with HDOH staff, SMEs, and HDIO's PowerBI and IT teams to review progress, address issues, and ensure PHAB alignment.
o Use project management tools to track ETL, dashboard, and training milestones, allocating resources to meet timelines and stakeholder expectations.
o Produce updated weekly progress reports on project coordination, accessible to all relevant stakeholders, to maintain clear communication and accountability.
· Risk Management and Communication:
o Identify and mitigate risks in ETL processes, dashboard usability, and training adoption, collaborating with HDIO teams to ensure data accuracy and stakeholder satisfaction.
o Develop stakeholder communication materials (reports, presentations) using data visualization tools to share progress, ETL performance, and dashboard insights with clarity.
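A minimal sketch of the kind of ETL step described in the responsibilities above is shown below. SQLite stands in for the Azure Enterprise Data Warehouse, and the file name and column names (cases.csv, report_date, county, cases) are hypothetical, not actual HDOH datasets or schemas.

```python
# Illustrative ETL sketch only: extract a flat file, apply basic quality checks,
# and load a cleaned staging table. SQLite stands in for the Azure EDW, and all
# file/column names are hypothetical.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path, parse_dates=["report_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["report_date", "county"])          # basic quality gate
    df["county"] = df["county"].str.strip().str.title()       # standardize keys
    df["cases"] = pd.to_numeric(df["cases"], errors="coerce").fillna(0).astype(int)
    return df

def load(df: pd.DataFrame, con: sqlite3.Connection) -> None:
    # Land into a staging table that a PowerBI dataset could read from.
    df.to_sql("stg_case_counts", con, if_exists="replace", index=False)

if __name__ == "__main__":
    con = sqlite3.connect("edw_standin.db")
    load(transform(extract("cases.csv")), con)
```

In the real workflow, the extract would come from source program systems and the load target would be the governed EDW schema, with the same transform-and-validate pattern in between.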
Qualifications
· Education:
o Bachelor's degree in Information Systems, Data Science, Public Health, Computer Science, or a related field. Master's degree in a similar field preferred but not required.
· Experience:
o 7-10 years of experience in data management, ETL development or maintenance, business intelligence, or public health informatics, ideally with a focus on healthcare or public health IT systems.
o Demonstrated expertise in designing and implementing ETL processes and data management frameworks for data integration and quality assurance.
o Significant experience developing and deploying PowerBI dashboards for performance tracking and visualization in complex organizational settings.
o Experience working with public health agencies or healthcare systems, particularly in data systems, performance metrics, and stakeholder engagement, is highly desirable.
· Technical Skills:
o Demonstrated proficiency in developing and maintaining ETL processes, database design, and data quality assurance, with experience integrating disparate data sources into a common data warehouse and associated common data model.
o Proficiency with data quality tools and data catalogs, for developing and maintaining consistent data resources or data products for an organization.
o Proficiency in PowerBI for dashboard design, data modeling, and DAX (Data Analysis Expressions) for advanced analytics.
o Familiarity with SQL for querying and managing relational databases.
o Experience with project management tools for tracking milestones and resource allocation.
· Public Health and Evaluation Knowledge:
o Understanding of public health workflows, data collection methods, data quality methods, and evaluation methodologies for performance metrics.
o Familiarity with PHAB standards, CQI initiatives, and national/international public health metrics frameworks is desired.
o Experience implementing training programs for technical tools (e.g., PowerBI, ETL processes) in public health settings.
· Communication and Collaboration:
o Exceptional interpersonal and communication skills to facilitate collaboration with diverse stakeholders, including public health professionals, technical teams, and leadership, while demonstrating cultural sensitivity and respect for Hawaii's unique cultural context.
o Ability to adapt communication styles and approaches to align with HDOH values, fostering trust and effective partnerships with HDOH staff and community stakeholders.
o Proven ability to bridge technical and business requirements, ensuring alignment between data solutions and organizational goals, while being mindful of cultural nuances and organizational priorities.
o Experience managing stakeholders and leading cross-functional teams in fast-paced environments, with a focus on building inclusive and culturally responsive collaborations.
· Project Management:
o Strong project management skills, including planning, creating work breakdown structures, and tracking milestones.
o Ability to manage multiple priorities, meet deadlines, and solve complex problems with minimal supervision.
o Experience with organizational change management, preferably using models like ADKAR.
Job Highlights
· Location: Remote, must be based in the United States. The individual must align their work hours with Hawaii Standard Time (HST) to ensure effective collaboration and communication with HDOH teams and stakeholders. Candidates based in Western US time zones are preferred.
· Salary Range: $92,700-$134,275 per year, plus benefits. Individual salary offers will be based on experience and qualifications.
· Position Type: Grant-funded, limited-term opportunity.
· Position End Date: June 30, 2026.
Special Notes
This role is involved in a dynamic public health program. As such, roles and responsibilities are subject to change as situations evolve. Roles and responsibilities listed above may be expanded upon or updated to match priorities and needs, once written approval is received by the CDC Foundation in order to best support the public health programming.
All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, sex, national origin, age, mental or physical disabilities, veteran status, and all other characteristics protected by law.
We comply with all applicable laws including E.O. 11246 and the Vietnam Era Readjustment Assistance Act of 1974 governing employment practices and do not discriminate on the basis of any unlawful criteria in accordance with 41 C.F.R. §§ 60-300.5(a)(12) and 60-741.5(a)(7). As a federal government contractor, we take affirmative action on behalf of protected veterans.
The CDC Foundation is a smoke-free environment.
Relocation expenses are not included.
About the CDC Foundation
The CDC Foundation helps CDC save and improve lives by unleashing the power of collaboration between CDC, philanthropies, corporations, organizations and individuals to protect the health, safety and security of America and the world. The CDC Foundation is the go-to nonprofit authorized by Congress to mobilize philanthropic partners and private-sector resources to support CDC's critical health protection mission. The CDC Foundation manages hundreds of programs each year impacting a variety of health threats from chronic disease conditions including cardiovascular disease and cancer, to infectious diseases like rotavirus and HIV, to emergency responses, including COVID-19 and Ebola. Visit ********************* for more information.
Senior Data Engineer IS Mod - Remote
Rochester, MN jobs
**Why Mayo Clinic**
Mayo Clinic is top-ranked in more specialties than any other care provider according to U.S. News & World Report. As we work together to put the needs of the patient first, we are also dedicated to our employees, investing in competitive compensation and comprehensive benefit plans (************************************** - to take care of you and your family, now and in the future. And with continuing education and advancement opportunities at every turn, you can build a long, successful career with Mayo Clinic.
**Benefits Highlights**
+ Medical: Multiple plan options.
+ Dental: Delta Dental or reimbursement account for flexible coverage.
+ Vision: Affordable plan with national network.
+ Pre-Tax Savings: HSA and FSAs for eligible expenses.
+ Retirement: Competitive retirement package to secure your future.
**Responsibilities**
Mayo Clinic is looking for a Senior Data Engineer to help secure its API and microservice environment. This position will be responsible for integrating Mayo's Cloud API management platform (i.e., Apigee) with industry-leading threat protection tools like Akamai, Google Cloud Armor, and other advanced security platforms.
Develops and deploys data pipelines, integrations and transformations to support analytics and machine learning applications and solutions as part of an assigned product team using various open-source programming languages and vended software to meet the desired design functionality for products and programs. The position requires maintaining an understanding of the organization's current solutions, coding languages, tools, and regularly requires the application of independent judgment. May provide consultative services to departments/divisions and leadership committees. Demonstrated experience in designing, building, and installing data systems and how they are applied to the Department of Data & Analytics technology framework is required. Candidate will partner with product owners and Analytics and Machine Learning delivery teams to identify and retrieve data, conduct exploratory analysis, pipeline and transform data to help identify and visualize trends, build and validate analytical models, and translate qualitative and quantitative assessments into actionable insights.
This vacancy is not eligible for sponsorship; we will not sponsor or transfer visas for this position. Also, Mayo Clinic DOES NOT participate in the F-1 STEM OPT extension program.
**Qualifications**
A Bachelor's degree in a relevant field such as engineering, mathematics, computer science, information technology, health science, or other analytical/quantitative field and a minimum of five years of professional or research experience in data visualization, data engineering, analytical modeling techniques; OR an Associate's degree in a relevant field such as engineering, mathematics, computer science, information technology, health science, or other analytical/quantitative field and a minimum of seven years of professional or research experience in data visualization, data engineering, analytical modeling techniques. In-depth business or practice knowledge will also be considered.
Incumbent must have the ability to manage a varied workload of projects with multiple priorities and stay current on healthcare trends and enterprise changes. Interpersonal skills, time management skills, and demonstrated experience working on cross functional teams are required. Requires strong analytical skills and the ability to identify and recommend solutions and a commitment to customer service. The position requires excellent verbal and written communication skills, attention to detail, and a high capacity for learning and problem resolution.
Advanced experience in SQL is required. Strong experience in scripting languages such as Python, JavaScript, PHP, C++, or Java and in API integration is required. Experience in hybrid data processing methods (batch and streaming) such as Apache Spark, Hive, Pig, and Kafka is required. Experience with big data, statistics, and machine learning is required. The ability to navigate Linux and Windows operating systems is required. Knowledge of workflow scheduling (Apache Airflow, Google Composer), infrastructure as code (Kubernetes, Docker), and CI/CD (Jenkins, GitHub Actions) is preferred. Experience in DataOps/DevOps and agile methodologies is preferred. Experience with hybrid data virtualization tools such as Denodo is preferred. Working knowledge of Tableau, Power BI, SAS, ThoughtSpot, DASH, d3, React, Snowflake, SSIS, and Google BigQuery is preferred.
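As a hedged illustration of the batch-processing experience described above, the sketch below aggregates a hypothetical encounter extract with PySpark. The paths, column names, and table layout are assumptions for illustration only and are not Mayo Clinic code.

```python
# Sketch of a batch aggregation step (Spark + SQL-style grouping).
# Paths and column names are hypothetical; requires pyspark to be installed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("encounter_batch_example").getOrCreate()

# Hypothetical encounter-level extract landed by an upstream pipeline.
encounters = spark.read.parquet("/data/landing/encounters/")

daily_counts = (
    encounters
    .withColumn("visit_date", F.to_date("admit_ts"))
    .groupBy("visit_date", "department")
    .agg(
        F.count("*").alias("encounter_count"),
        F.avg("length_of_stay_hours").alias("avg_los_hours"),
    )
)

# Write a curated table for downstream analytics/BI tools.
daily_counts.write.mode("overwrite").parquet("/data/curated/daily_encounters/")
spark.stop()
```

The same pattern extends to streaming (Structured Streaming sources in place of the batch read) and to orchestration via a scheduler such as Airflow.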
**Preferred Qualifications**
Experience with Google Cloud Security and API Security.
Experience with Akamai Security platform.
**Exemption Status**
Exempt
**Compensation Detail**
$138,257 - $200,512 / year
**Benefits Eligible**
Yes
**Schedule**
Full Time
**Hours/Pay Period**
80
**Schedule Details**
Monday - Friday; Normal business hours
This position is 100% remote, but the incumbent may be asked to come onsite once a month for IT Culture and Connect activities.
**Weekend Schedule**
As needed
**International Assignment**
No
**Site Description**
Just as our reputation has spread beyond our Minnesota roots, so have our locations. Today, our employees are located at our three major campuses in Phoenix/Scottsdale, Arizona, Jacksonville, Florida, Rochester, Minnesota, and at Mayo Clinic Health System campuses throughout Midwestern communities, and at our international locations. Each Mayo Clinic location is a special place where our employees thrive in both their work and personal lives. Learn more about what each unique Mayo Clinic campus has to offer, and where your best fit is. (*****************************************
**Equal Opportunity**
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, protected veteran status or disability status. Learn more about the "EOE is the Law" (**************************** . Mayo Clinic participates in E-Verify (******************************************************************************************** and may provide the Social Security Administration and, if necessary, the Department of Homeland Security with information from each new employee's Form I-9 to confirm work authorization.
**Recruiter**
Shelly Weir
**Equal opportunity**
As an Affirmative Action and Equal Opportunity Employer Mayo Clinic is committed to creating an inclusive environment that values the diversity of its employees and does not discriminate against any employee or candidate. Women, minorities, veterans, people from the LGBTQ communities and people with disabilities are strongly encouraged to apply to join our teams. Reasonable accommodations to access job openings or to apply for a job are available.
Principal Data Scientist
Boston, MA jobs
Lumen connects the world. We are igniting business growth by connecting people, data and applications - quickly, securely, and effortlessly. Together, we are building a culture and company from the people up - committed to teamwork, trust and transparency. People power progress.
We're looking for top-tier talent and offer the flexibility you need to thrive and deliver lasting impact. Join us as we digitally connect the world and shape the future.
**The Role**
Provide technical direction in the creation, delivery, and integration of multiple and moderately complex software solutions. May translate business requirements into specific designs and/or participate in the design, evaluation, and selection of IT solutions for a specific business process. Establish current and future use of practices, metrics, and methodologies to determine current and future solutions. Explore and evaluate new and approved technologies. Consult on the application of existing and newly approved technologies to develop solutions. Ensure that designs and solutions are created and delivered in accordance with the architectural direction.
**Location**
This is a work from home position within the US.
**The Main Responsibilities**
+ Lead development and deployment of Enterprise AI applications leveraging both supervised and unsupervised learning techniques
+ Design, implement, and optimize Retrieval-Augmented Generation (RAG) pipelines for AI-driven apps
+ Utilize Vector Databases and Knowledge Graphs to enhance AI applications in underwriting, claims processing, and customer engagement
+ Develop data pipelines for ingestion, transformation, and storage to support AI workloads
+ Design and implement scalable solutions using cloud-based AI platforms such as Azure AI Foundry or AWS Bedrock
+ Implement AIOps best practices, including CI/CD for model training, validation, deployment, and monitoring
+ Develop generative AI models for personalized customer experiences and automation of complex decision-making processes
+ Apply natural language processing (NLP) techniques to analyze and extract insights from unstructured data sources
+ Optimize AI models for performance, scalability, and reliability in enterprise environments
+ Conduct architecture design reviews and performance tuning for AI/ML applications
+ Work cross-functionally with business and technology teams to identify AI-driven opportunities and define strategies
+ Ensure compliance with ethical AI principles, model governance, and data privacy regulations
**What We Look For in a Candidate**
+ Bachelor's or master's degree in computer science, Software Engineering, Artificial Intelligence, Machine Learning or Data Science
+ 7+ years of enterprise-scale experience in designing, implementing, and deploying AI/ML models
+ 7+ years of experience working with cloud-based AI platforms, including Azure AI Foundry and AWS Bedrock
+ 7+ years of experience in implementing both supervised and unsupervised learning techniques in real-world applications
+ Strong problem-solving skills and a deep understanding of statistical and mathematical principles
+ Strong experience in natural language processing (NLP) and generative AI applications
+ Expertise in AIOps, model lifecycle management, and AI model deployment at scale
+ Proficient in Python and related libraries, and in SQL
+ Fluent in one or more object-oriented languages like C#, C++, Scala, or Java, and scripting languages like Python or Ruby
+ Experience working with advanced AI frameworks such as LangChain, LlamaIndex, and Hugging Face transformers is preferred
+ Hands-on experience with Gen AI, RAG pipelines, Vector Databases, and Knowledge Graphs
+ Experience in the Telecom industry, particularly in Network or Orchestration
+ Familiarity with Azure OpenAI and LLM fine-tuning is preferred
+ Familiarity with agile software delivery methodologies such as Scaled Agile
**Compensation**
This information reflects the anticipated base salary range for this position based on current national data. Minimums and maximums may vary based on location. Individual pay is based on skills, experience and other relevant factors.
Location Based Pay Ranges
$149,084 - $198,779 in these states: AL AR AZ FL GA IA ID IN KS KY LA ME MO MS MT ND NE NM OH OK PA SC SD TN UT VT WI WV WY
$156,539 - $208,718 in these states: CO HI MI MN NC NH NV OR RI
$163,993 - $218,657 in these states: AK CA CT DC DE IL MA MD NJ NY TX VA WA
Lumen offers a comprehensive package featuring a broad range of Health, Life, Voluntary Lifestyle benefits and other perks that enhance your physical, mental, emotional and financial wellbeing. We're able to answer any additional questions you may have about our bonus structure (short-term incentives, long-term incentives and/or sales compensation) as you move through the selection process.
Learn more about Lumen's:
Benefits (****************************************************
Bonus Structure
#LI-Remote
#LP1
Requisition #: 339933
**Background Screening**
If you are selected for a position, there will be a background screen, which may include checks for criminal records and/or motor vehicle reports and/or drug screening, depending on the position requirements. For more information on these checks, please refer to the Post Offer section of our FAQ page (************************************* . Job-related concerns identified during the background screening may disqualify you from the new position or your current role. Background results will be evaluated on a case-by-case basis.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
**Equal Employment Opportunities**
We are committed to providing equal employment opportunities to all persons regardless of race, color, ancestry, citizenship, national origin, religion, veteran status, disability, genetic characteristic or information, age, gender, sexual orientation, gender identity, gender expression, marital status, family status, pregnancy, or other legally protected status (collectively, "protected statuses"). We do not tolerate unlawful discrimination in any employment decisions, including recruiting, hiring, compensation, promotion, benefits, discipline, termination, job assignments or training.
**Disclaimer**
The job responsibilities described above indicate the general nature and level of work performed by employees within this classification. It is not intended to include a comprehensive inventory of all duties and responsibilities for this job. Job duties and responsibilities are subject to change based on evolving business needs and conditions.
In any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.
Please be advised that Lumen does not require any form of payment from job applicants during the recruitment process. All legitimate job openings will be posted on our official website or communicated through official company email addresses. If you encounter any job offers that request payment in exchange for employment at Lumen, they are not for employment with us, but may relate to another company with a similar name.
Senior Data Engineer
Boston, MA jobs
Job Description
Care Access is working to make the future of health better for all. With hundreds of research locations, mobile clinics, and clinicians across the globe, we bring world-class research and health services directly into communities that often face barriers to care. We are dedicated to ensuring that every person has the opportunity to understand their health, access the care they need, and contribute to the medical breakthroughs of tomorrow.
With programs like Future of Medicine, which makes advanced health screenings and research opportunities accessible to communities worldwide, and Difference Makers, which supports local leaders to expand their community health and wellbeing efforts, we put people at the heart of medical progress. Through partnerships, technology, and perseverance, we are reimagining how clinical research and health services reach the world. Together, we are building a future of health that is better and more accessible for all.
To learn more about Care Access, visit *******************
How This Role Makes a Difference
We are seeking an experienced and detail-oriented professional to join our team as a Sr. Data Engineer. In this pivotal role, you will be responsible for designing, developing, and maintaining robust data pipelines that ensure the reliable ingestion, transformation, and delivery of complex data (demographics, medical, financial, marketing, etc.) across systems. The ideal candidate will bring deep expertise in Databricks, SQL, and modern data engineering practices, along with strong collaboration skills to help drive excellence across our data infrastructure.
How You'll Make An Impact
Data Engineering Strategy and Architecture:
Design and implement scalable, reliable, and efficient data pipelines to support clinical, operational, and business needs.
Develop and maintain architecture standards, reusable frameworks, and best practices across data engineering workflows.
Build automated systems for data ingestion, transformation, and orchestration leveraging cloud-native and open-source tools.
Data Infrastructure and Performance Optimization:
Optimize data storage and processing in data lakes and cloud data warehouses (Azure, Databricks).
Develop and monitor batch and streaming data processes to ensure data accuracy, consistency, and timeliness.
Maintain documentation and lineage tracking across datasets and pipelines to support transparency and governance.
Collaboration and Stakeholder Engagement:
Work cross-functionally with analysts, data scientists, software engineers, and business stakeholders to understand data requirements and deliver fit-for-purpose data solutions.
Review and refine work completed by other team members, ensuring quality and performance standards are met.
Provide technical mentorship to junior team members and collaborate with contractors and third-party vendors to extend engineering capacity.
Technology and Tools:
Use Databricks, dbt, Azure Data Factory, and SQL to architect and deploy robust data engineering solutions (a minimal local sketch follows this section).
Integrate APIs, structured/unstructured data sources, and third-party systems into centralized data platforms.
Evaluate and implement new technologies to enhance the scalability, observability, and automation of data operations.
Other Responsibilities:
Continuous Improvement: Proactively suggest improvements to infrastructure, processes, and automation to improve system efficiency, reduce costs, and enhance performance.
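As a minimal, local illustration of the pipeline work described above (Databricks/dbt-style ingestion and cleaning), here is a pandas sketch of a bronze-to-silver step. The medallion layout, file names, columns, and deduplication rule are assumptions for this sketch, not Care Access's actual implementation.

```python
# A minimal, local stand-in for a bronze-to-silver transformation step.
# File names, columns, and the medallion layout are assumptions for illustration.
from pathlib import Path
import pandas as pd

BRONZE = Path("bronze/participants_raw.csv")       # raw ingested extract
SILVER = Path("silver/participants_clean.parquet")

def bronze_to_silver(src: Path, dest: Path) -> pd.DataFrame:
    df = pd.read_csv(src)
    # Standardize identifiers and keep the latest record per participant.
    df["participant_id"] = df["participant_id"].astype(str).str.strip()
    df["updated_at"] = pd.to_datetime(df["updated_at"], errors="coerce")
    df = (
        df.sort_values("updated_at")
          .drop_duplicates(subset="participant_id", keep="last")
    )
    dest.parent.mkdir(parents=True, exist_ok=True)
    df.to_parquet(dest, index=False)               # requires pyarrow or fastparquet
    return df

if __name__ == "__main__":
    cleaned = bronze_to_silver(BRONZE, SILVER)
    print(f"Wrote {len(cleaned)} cleaned rows to {SILVER}")
```

On Databricks, the same shape of transformation would typically run on Spark DataFrames or as a dbt model, with orchestration handled by Azure Data Factory or a workflow scheduler.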
The Expertise Required
Strong expertise in Databricks, SQL, dbt, Python, and cloud data ecosystems such as Azure.
Experience working with structured and semi-structured data from diverse domains.
Familiarity with CI/CD pipelines, orchestration tools (e.g., Airflow, Azure Data Factory), and modern software engineering practices.
Strong analytical and problem-solving skills, with the ability to address complex data challenges and drive toward scalable solutions.
Certifications/Licenses, Education, and Experience:
Bachelor's or master's degree in computer science, Information Systems, Engineering, or a related field.
5+ years of experience in data engineering with a proven track record of building cloud-based, production-grade data pipelines.
How We Work Together
This role requires 100% of work to be performed in a remote office environment and requires the ability to use keyboards and other computer equipment.
This is a remote position with less than 10% travel requirements. Occasional planned travel may be required as part of the role.
The expected salary range for this role is $120,000-$160,000 USD per year. In addition to base pay, employees may be eligible for 401k, stock options, health and wellness benefits and paid time off.
Diversity & Inclusion
We work with and serve people from diverse cultures and communities around the world. We are stronger and better when we build a team representing the communities we support. We maintain an inclusive culture where people from a broad range of backgrounds feel valued and respected as they contribute to our mission.
We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to, and will not be discriminated against on the basis of, race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Care Access is unable to sponsor work visas at this time.
If you need an accommodation to apply for a role with Care Access, please reach out to: ********************************
The Data Engineer will play a crucial role in advancing the CDC Foundation's mission by designing, building, and maintaining modern data infrastructure for the Northwest Portland Area Indian Health Board (NPAIHB) Data Hub project. Working closely with the Data Hub Team, the Data Engineer will support the architecture needed for data storage, processing, analysis, and secure transfer to Tribal Leaders and public health professionals. The Data Engineer will collaborate with epidemiologists, data content experts, IT staff, the Data Hub Project Director, and others to develop and implement scalable solutions that align with the objectives of the NPAIHB's Data Hub project.
NPAIHB's Data Hub Team is currently developing a system, “The NW Tribal Data Hub,” to provide comprehensive, user-friendly public health data dashboards for its 43 member Tribes. The Data Engineer will ensure the successful design and implementation of a newly created public health database, the ingestion of additional data into the system, and create tables, views, and other database structures to support epidemiological analysis, visualization, and reporting to Tribes. The data, sourced primarily from state and federal agencies, include vital statistics (births, deaths), cancer registries, emergency department, clinical service data, and others. The Data Engineer's work will be pivotal in enhancing the capacity of Tribal public health departments to conduct data-driven activities, advancing Tribal data sovereignty, and empowering Tribes to improve health outcomes within their communities.
The Data Engineer will be hired by the CDC Foundation and assigned to the Data Hub Team at NPAIHB. This position is eligible for a fully remote work arrangement for U.S. based candidates.
NPAIHB is a tribally owned and operated non-profit organization serving the 43 federally recognized Tribes in the states of Idaho, Oregon, and Washington. Led by a Board of Directors, NPAIHB's mission is to “eliminate health disparities and improve the quality of life of American Indians and Alaska Natives by supporting Northwest Tribes in their delivery of culturally appropriate, high-quality health programs and services.” NPAIHB is a mission-driven organization with a staff of over 120 professionals dedicated to advancing Tribal health for the 7th generation in the Pacific Northwest.
Responsibilities
Create new and enhance existing systems and pipelines that enable efficient, reliable, and secure flow of data, including ingestion, processing, and storage.
Load data into storage systems or data warehouses, transforming, cleaning, and organizing it with dimensional modeling techniques to ensure accuracy, consistency, and efficient querying (a minimal loading sketch follows this list).
Transform and structure data to ensure it is optimized for use in data visualization software, enabling accurate and effective visual representations of epidemiological data.
Ensure thorough and clear documentation of database architecture and workflows to promote sustainability, consistency, and ease of maintenance.
Apply rigorous data quality checks and validation processes to guarantee the accuracy and reliability of the data released, emphasizing the importance of delivering correct and trustworthy data to support public health initiatives.
Optimize data pipelines, infrastructure, and workflows for performance and scalability.
Monitor data pipelines and systems for performance issues, errors, and anomalies, and implement solutions to address them.
Implement security measures to protect sensitive information.
Collaborate with epidemiologists, analysts, and other partners to understand current and future data needs and requirements, and to ensure that the data infrastructure supports the organization's goals and objectives.
Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs.
Implement and maintain ETL processes to ensure the accuracy, completeness, and consistency of data.
Design and manage data storage systems, including a PostgreSQL relational database.
Apply knowledge of industry trends, best practices, and emerging technologies in data engineering, and incorporate them into the organization's data infrastructure.
Provide technical guidance to other staff on preparing and structuring data for visualization, leveraging knowledge of visualization tools to support the creation of meaningful and insightful visual outputs.
Communicate effectively with partners at all levels of the organization to gather requirements, provide updates, and present findings.
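To make the loading and dimensional-modeling responsibilities above more concrete, here is a minimal sketch of one way such a step could look in Python with pandas and SQLAlchemy. The connection string, table names, and columns are hypothetical illustrations, not the actual Data Hub schema.

```python
"""Minimal sketch: load cleaned rows into a small star schema (hypothetical names)."""
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string -- replace with the real database details.
engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/datahub")

def quality_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast on records that would corrupt downstream analysis."""
    assert df["visit_id"].notna().all(), "visit_id must never be null"
    assert not df.duplicated(subset=["visit_id"]).any(), "duplicate visit_id found"
    return df

def build_dimension(df: pd.DataFrame) -> pd.DataFrame:
    """Derive a simple county dimension from the raw extract."""
    dim = (df[["county_fips", "county_name"]]
           .drop_duplicates()
           .reset_index(drop=True))
    dim["county_key"] = dim.index + 1  # surrogate key
    return dim

def load(raw: pd.DataFrame) -> None:
    raw = quality_checks(raw)
    dim_county = build_dimension(raw)
    fact = raw.merge(dim_county, on=["county_fips", "county_name"])[
        ["visit_id", "county_key", "visit_date", "diagnosis_code"]
    ]
    # The replace/append strategy here is only illustrative.
    dim_county.to_sql("dim_county", engine, if_exists="replace", index=False)
    fact.to_sql("fact_ed_visits", engine, if_exists="append", index=False)
```

In practice, the real pipeline would handle incremental loads, surrogate-key reuse, and schema migrations rather than full-table replacement.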
Qualifications
Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
Minimum of five (5) years of related informatics experience, preferably with three (3) years of experience in a lead data engineer position.
Demonstrated expertise in building SQL relational databases and transitioning non-relational data into a structured relational format, ensuring seamless integration and optimized performance.
Proficiency in SQL programming and other languages commonly used in data engineering, such as Python, PySpark, Java, or Scala. Candidates should be able to implement data automation within existing frameworks rather than writing one-off scripts.
Experience transforming and preparing data into formats suitable for data visualization software, ensuring it is structured for optimal use in dashboards and other visual outputs.
Strong understanding of database systems, including relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra), with PostgreSQL preferred.
Experience with engineering best practices such as source control, automated testing, continuous integration and deployment, and peer review, including serving as a subject matter expert on these topics.
Knowledge of data warehousing concepts and tools.
Experience with cloud computing platforms, with preference for experience in AWS environment.
Expertise in data modeling, ETL (Extract, Transform, Load) processes, and data integration techniques.
Familiarity with agile development methodologies, software design patterns, and best practices.
Strong analytical thinking and problem-solving abilities.
Excellent verbal and written communication skills, including the ability to convey technical concepts to non-technical partners effectively.
Flexibility to adapt to evolving project requirements and priorities.
Outstanding interpersonal and teamwork skills, and the ability to develop productive working relationships with colleagues and partners.
Experience working in a virtual environment with remote partners and teams.
Proficiency in Microsoft Office.
Ability to travel occasionally for in-person meetings (travel costs will be covered by NPAIHB).
Preferred Qualifications
Experience gathering requirements and designing and planning data models based on those requirements.
Experience creating complex fields and visuals in Amazon QuickSight or similar data visualization tools (Tableau, Microsoft Power BI, etc.).
Experience building data pipelines within Amazon Web Services (AWS) using services such as Amazon Relational Database Service (RDS), Amazon Aurora Serverless, AWS Glue, and AWS Lambda.
Experience working with complex public health, health care, or other non-business data requiring advanced processing and analysis techniques.
Experience transitioning SAS datasets and analyses into relational database structures.
Experience with dimensional modeling in scenarios where dimensions and fields change over time.
Experience with implementing data suppression techniques and familiarity with HIPAA, PHI, and other data confidentiality regulations.
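As an illustration of the data suppression item above, the following is a minimal sketch of small-cell suppression before releasing aggregate counts. The threshold of 11 and the column names are assumptions made for the example; actual suppression rules are set by data-sharing agreements, Tribal data governance, and applicable regulations such as HIPAA.

```python
"""Minimal sketch of small-cell suppression before releasing aggregate counts."""
import pandas as pd

def suppress_small_cells(counts: pd.DataFrame, value_col: str = "count",
                         threshold: int = 11) -> pd.DataFrame:
    out = counts.copy()
    # Primary suppression: cells below the reporting threshold become NaN.
    out[value_col] = out[value_col].mask(out[value_col] < threshold)
    return out

# Example: counts of a condition by county and year (illustrative values).
df = pd.DataFrame({
    "county": ["A", "A", "B", "B"],
    "year": [2023, 2024, 2023, 2024],
    "count": [42, 7, 130, 3],
})
print(suppress_small_cells(df))  # rows with count < 11 are suppressed
```

A production workflow would typically also apply complementary suppression so that suppressed cells cannot be back-calculated from row or column totals.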
Job Highlights
Location: Remote, must be based in the United States
Salary Range: $103,500-$143,500, plus benefits. Individual salary offers will be based on experience and qualifications unique to each candidate.
Position Type: Grant funded, limited-term opportunity
Position End Date: June 30, 2026
Special Notes
This role is involved in a dynamic public health program. As such, roles and responsibilities are subject to change as situations evolve. Roles and responsibilities listed above may be expanded upon or updated to match priorities and needs, once written approval is received by the CDC Foundation in order to best support the public health programming.
The CDC Foundation is a smoke-free environment.
Relocation expenses are not included.
All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, sex, national origin, age, mental or physical disabilities, veteran status, and all other characteristics protected by law.
We comply with all applicable laws including E.O. 11246 and the Vietnam Era Readjustment Assistance Act of 1974 governing employment practices and do not discriminate on the basis of any unlawful criteria in accordance with 41 C.F.R. §§ 60-300.5(a)(12) and 60-741.5(a)(7).
As a federal government contractor, we take affirmative action on behalf of protected veterans.
About the CDC Foundation
The CDC Foundation helps CDC save and improve lives by unleashing the power of collaboration between CDC, philanthropies, corporations, organizations and individuals to protect the health, safety and security of America and the world. The CDC Foundation is the go-to nonprofit authorized by Congress to mobilize philanthropic partners and private-sector resources to support CDC's critical health protection mission. The CDC Foundation manages hundreds of programs each year impacting a variety of health threats from chronic disease conditions including cardiovascular disease and cancer, to infectious diseases like rotavirus and HIV, to emergency responses, including COVID-19 and Ebola. Visit ********************* for more information.
John Snow Labs US-Based Healthcare Data Scientist
Delaware City, DE
John Snow Labs is an award-winning AI and NLP company, accelerating progress in data science by providing state-of-the-art software, data, and models. Founded in 2015, it helps healthcare and life science companies build, deploy, and operate AI products and services. John Snow Labs is the winner of the 2018 AI Solution Provider of the Year Award, the 2019 AI Platform of the Year Award, the 2019 International Data Science Foundation Technology award, and the 2020 AI Excellence Award.
John Snow Labs is the developer of Spark NLP - the world's most widely used NLP library in the enterprise - and is the world's leading provider of state-of-the-art clinical NLP software, powering some of the world's largest healthcare & pharma companies. John Snow Labs is a global team of specialists, of which 33% hold a Ph.D. or M.D. and 75% hold at least a Master's degree in disciplines covering data science, medicine, software engineering, pharmacy, DevOps and SecOps.
Job Description
John Snow Labs is seeking a highly skilled and motivated Data Scientist to contribute to transformative initiatives within the healthcare industry. The ideal candidate will possess a strong background in developing and optimizing machine learning models, specifically within healthcare contexts. We are looking for a results-oriented individual proficient in training and fine-tuning models, building robust, production-ready model inference pipelines, and conducting comprehensive exploratory data analysis and data enrichment.
Qualifications
Key Responsibilities:
Train, fine-tune, and enhance LLM & NLP models using the open-source Python library ecosystem. Experience with LLMs, Generative AI, and deep learning is a significant advantage.
Build data science and data engineering pipelines specific to analyzing clinical data, such as extracting information from medical text or images, or integrating uncertain information from multiple medical data sources (a minimal sketch follows this list).
Collaborate with our team on customer-facing projects, utilizing your expertise to create advanced machine learning, deep learning, large language models, and time series forecasting pipelines tailored to address specific business needs.
Ensure models are validated for issues like bias, overfitting, and concept drift to ensure reliability and effectiveness.
Engage directly with customers, requiring strong oral and written communication skills to convey complex technical concepts clearly.
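As a rough illustration of the kind of clinical information extraction described above, here is a minimal sketch using the open-source transformers library. The model identifier is a placeholder; in practice a domain-specific clinical NER model (for example, from Spark NLP for Healthcare or a fine-tuned transformer) would be used, and outputs would feed a larger pipeline rather than being printed.

```python
"""Minimal sketch of extracting entities from clinical text with the open-source
Python ecosystem. The model id below is a placeholder, not a real model."""
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="placeholder/clinical-ner-model",  # hypothetical model id
    aggregation_strategy="simple",
)

note = "Patient reports chest pain for 3 days; started on aspirin 81 mg daily."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```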
Mandatory Skills:
Proven experience in consistently delivering real-world projects covering the key responsibilities. Knowledge that is limited to an academic setting, or to using existing APIs to build applications, is not sufficient for this role.
Hands-on experience with OMOP, FHIR, clinical terminologies, and understanding of the patient journey.
Strong background in healthcare-related fields such as medicine, pharma, bioinformatics, or biostatistics is highly beneficial.
A PhD in a relevant field is preferred but not required if exceptional experience is demonstrated.
Experience with John Snow Labs' technology stack, such as Spark NLP or the medical language models, is a plus.
What We Offer:
A chance to work on cutting-edge problems in healthcare and life sciences, contributing to meaningful projects that impact patient outcomes.
Long-term freelancing contracts with a commitment of at least 30 hours per week. We are seeking individuals, not agencies or teams.
The opportunity to grow your skills and knowledge, working with a team of big data and data science experts in a supportive, collaborative environment.
To apply, please include the words 'John Snow Labs' in your cover letter and detail why you believe you are the best fit for this role. This is more than just a contract - it's a chance to make a real difference.
Additional Information
Our Commitment to You
At John Snow Labs, we believe that diversity is the catalyst of innovation. We're committed to empowering talented people from every background and perspective to thrive.
We are an award-winning global collaborative team focused on helping our customers put artificial intelligence to good use faster. Our website includes The Story of John Snow, and our Social Impact page details how purpose and giving back is part of our DNA. More at JohnSnowLabs.com
We are a fully virtual company, collaborating across 28 countries.
This is a contract opportunity, not a full-time employment role.
This role requires the availability of at least 30-40 hours per week.
Senior Data Engineer
Remote
At MCG, we lead the healthcare community to deliver patient-focused care. We have a mission-driven team of talented physicians and technical experts developing our evidence-based content and innovating our products to accelerate improvements in healthcare. If you are driven to enhance the US healthcare system, MCG is eager to have you join our team. We cultivate a work environment that nurtures personal and professional growth, and this is a thrilling time to become a part of our organization. With dynamic roles that offer meaningful impact, you'll be able to fully realize your potential. Plus, you'll enjoy world-class benefits and the security, stability, and resources of our parent company, Hearst, with over 100 years of experience.
As a Senior Data Engineer you will be responsible for enabling efficient and effective data ingestion & delivery systems. Our team collaborates with data producers (application teams) and data consumers/stakeholders (Data Science, Product, Analytics & Reporting teams) to ensure the availability, quality, and accessibility of data through robust pipelines and storage platforms.
You will:
Explore, analyze, and onboard data sets from data producers to ensure they are ready for processing and consumption.
Develop and maintain scalable and efficient data pipelines for data collection, processing (quality checks, de-duplication, etc.), and integration into data lake and data warehouse systems (a de-duplication sketch follows this list).
Optimize and monitor data pipeline performance to ensure minimal downtime.
Implement data quality control mechanisms to maintain data set integrity.
Collaborate with stakeholders for seamless data flow and address issues or needs for improvement.
Manage the deployment and automation of pipelines and infrastructure using Terraform, Flyte, and Kubernetes.
Support strategic data analysis and operational tasks as needed.
Lead end-to-end data pipeline development - from initial data discovery and ingestion to transformation, modeling, and delivery into production-grade data platforms.
Integrate and manage data from 3+ distinct sources, designing efficient, reusable frameworks for multi-source data processing and harmonization.
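To illustrate the quality-check and de-duplication work referenced above, here is a minimal PySpark sketch under the assumption of a claims-style dataset; the paths, column names, and "latest record wins" rule are hypothetical.

```python
"""Minimal sketch: de-duplicate an ingested dataset before landing it curated."""
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedup-example").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/claims/")  # hypothetical path

# Keep only the most recent version of each record, keyed on claim_id.
w = Window.partitionBy("claim_id").orderBy(F.col("updated_at").desc())
deduped = (raw
           .withColumn("rn", F.row_number().over(w))
           .filter(F.col("rn") == 1)
           .drop("rn"))

# Simple quality gate: fail the job if required fields are missing.
bad = deduped.filter(F.col("member_id").isNull()).count()
if bad > 0:
    raise ValueError(f"{bad} records are missing member_id")

deduped.write.mode("overwrite").parquet("s3://example-bucket/curated/claims/")
```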
What We're Looking For:
Demonstrated ability to navigate ambiguous data challenges, ask the right questions, and design effective, scalable solutions.
Proficient in designing, building, and maintaining large-scale, reliable data pipeline systems.
Competence in designing and handling large-scale data pipeline systems.
Advanced SQL skills for querying and processing data.
Proficiency in Python, with experience in Spark for data processing.
3+ years of experience in data engineering, including data modeling and ETL pipelines.
Familiarity with cloud-based tools and infrastructure management using Terraform and Kubernetes is a plus.
Bonus:
Experience working with healthcare and clinical data sets
Experience with orchestration tools like Flyte
Pay Range: $136,000 - $190,400
Other compensation: Bonus Eligible
Perks & Benefits:
💻 Hybrid work
✈️ Travel expected 2-3 times per year for company-sponsored events
🩺 Medical, dental, vision, life, and disability insurance
📈 401K retirement plan; flexible spending and health savings account
🏝️ 15 days of paid time off + additional front-loaded personal days
🏖️ 14 company-recognized holidays + paid volunteer days
👶 up to 8 weeks of paid parental leave + 10 weeks of paid bonding leave
🌈 LGBTQ+ Health Services
🐶 Pet insurance
📣 Check out more of our benefits here: *******************************************
MCG Health is a Seattle, Washington-based company and is considering remote/hybrid candidates with some travel for company-sponsored events.
The ideal candidate should be comfortable balancing the independence of remote/hybrid work with the collaborative opportunities offered by periodic in-person engagements.
We embrace diversity and equal opportunity and are committed to building a team that represents a variety of backgrounds, perspectives, and skills. Only with diverse thoughts and ideas will we be able to create the change we want in healthcare. The more inclusive we are, the better our work will be for it.
All roles at MCG are expected to engage in occasional travel to participate in team or company-sponsored events for the purposes of connection and collaboration.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
MCG is a leading healthcare organization dedicated to patient-focused care. We value our employees' unique differences and are an Equal Employment Opportunity (EEO) employer. Our diverse workforce helps us achieve our goal of providing the right care to everyone. We welcome all qualified applicants without regard to race, religion, nationality, gender, sexual orientation, gender identity, age, marital status, veteran status, disability, pregnancy, parental status, genetic information, or political affiliation. We are committed to improving equity in healthcare and believe that a diverse workplace fosters curiosity, innovation, and business success. We are happy to provide accommodations for individuals. Please let us know if you require any support.
Senior Big Data Engineer (Python/PySpark)
Strongsville, OH
Locations (flexible work, by preference): 1. Pittsburgh, PA - Two PNC Plaza, or Cleveland, OH - Strongsville Technology Center. Please note: the hiring team is not looking to source out of any other tech hubs at this time.
Ability to work remote: Hybrid - 3 days in office, 2 Remote
Acceptable time zone(s): EST
Days of the week: Mon-Fri, 40 hours
Working Hours (Flexible): 8am-5pm EST
Roles and Responsibilities:
+ Interact with the business daily to understand evolving requirements and adapt to those needs.
+ Able to understand the business requirements and the tech stack involved, and guide the team toward an integrated solution approach.
+ Dedicated and committed approach with good communication skills and participation in agile ceremonies.
+ Hands-on experience with object-oriented programming, including its limitations.
+ Familiarity with cross-platform development
+ Able to handle all the database storage and retrieval mechanisms on data lake platforms, with extensive hands-on experience in handling them.
+ Extensive Unix/FTP/file handling experience
+ Strong Hands-on experience with SQL databases
Must Have Technical Skills:
+ A deep understanding of multi-process architecture and the threading limitations of Python.
+ PySpark data engineering skills
+ Experience using libraries like Pandas, NumPy, and Matplotlib, and experience working with file-based systems (supporting CSV/Parquet)
+ Extensive knowledge of data and experience creating data pipelines
+ Hands-on experience designing and implementing APIs
+ Hands-on experience building microservices using FastAPI or related technologies (a minimal sketch follows this skills list)
+ Experience writing unit tests and ensuring code coverage
+ Experience with Agile Methodology/JIRA/Confluence
+ Experience in version control using Git
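As a sketch of the FastAPI and unit-testing expectations above, the following shows a tiny microservice endpoint with a pytest-style test. The endpoint path, model fields, and in-memory store are illustrative assumptions only.

```python
"""Minimal sketch of a FastAPI microservice exposing a pipeline-status endpoint,
with a unit test to illustrate the code-coverage expectation."""
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class PipelineStatus(BaseModel):
    name: str
    state: str

# Stand-in for a real metadata store.
_STATUS = {"daily_load": PipelineStatus(name="daily_load", state="succeeded")}

@app.get("/pipelines/{name}", response_model=PipelineStatus)
def get_pipeline(name: str) -> PipelineStatus:
    if name not in _STATUS:
        raise HTTPException(status_code=404, detail="unknown pipeline")
    return _STATUS[name]

# Pytest-style unit test: exercises the endpoint without a running server.
def test_get_pipeline():
    client = TestClient(app)
    resp = client.get("/pipelines/daily_load")
    assert resp.status_code == 200
    assert resp.json()["state"] == "succeeded"
```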
Flex Skills/Nice to Have: Prior work with Big Data and PySpark libraries is an added advantage.
Education/Certifications:
Bachelor's degree preferred
Python or Spark certifications are a plus
Role Differentiator: Possibility of conversion to full-time. Growth opportunities are immense: there is always work, always ways to grow, and new work and opportunities to be energized by. This department supports the bank in a huge way and is critical to the bank's success.
Share your resume with ***********************. You can also connect on LinkedIn: Ariz J. Khan | LinkedIn (**************************************************
Ref: #404-IT Pittsburgh
System One, and its subsidiaries including Joulé, ALTA IT Services, CM Access, TPGS, and MOUNTAIN, LTD., are leaders in delivering workforce solutions and integrated services across North America. We help clients get work done more efficiently and economically, without compromising quality. System One not only serves as a valued partner for our clients, but we offer eligible full-time employees health and welfare benefits coverage options including medical, dental, vision, spending accounts, life insurance, voluntary plans, as well as participation in a 401(k) plan.
System One is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, age, national origin, disability, family care or medical leave status, genetic information, veteran status, marital status, or any other characteristic protected by applicable federal, state, or local law.
Data Engineer
South Dakota
The Data Engineer will play a crucial role in designing, building, and maintaining data infrastructure. This role is aligned to the Workforce Acceleration Initiative (WAI). WAI is a federally funded CDC Foundation program with the goal of helping the nation's public health agencies by providing them with the technology and data experts they need to accelerate their information system improvements.
Working within South Dakota Department of Health's (SD-DOH) Epidemiology, Surveillance and Informatics Center (ESIC), the Data Engineer will play a pivotal role in documenting and maintaining the case-based disease surveillance system architecture required for data generation, storage, processing, and analysis. This position is eligible for a fully remote work arrangement for U.S. based candidates.
Responsibilities
· Document the existing architecture of the case-based disease surveillance system, including system interactions and data generation, storage, processing, and analysis.
· Maintain current-state and develop future-state case-based disease surveillance system architectural diagrams to ensure the system continues to be robust, scalable, and aligned with the SD-DOH's public health goals.
· Facilitate and collaborate with end-users and technical staff to understand requirements and deliver effective surveillance system solutions.
· Evaluate, design, and implement both the existing and proposed future-state disease surveillance system solutions. Ensure that the solutions meet the needs of all stakeholders within SD-DOH, including being adaptable to evolving challenges. This includes conducting an environmental scan of system requirements completed by other jurisdictions.
· Document all aspects of the system architecture, including design decisions, data flow processes, and system requirements. Maintain clear and comprehensive records to support ongoing system maintenance and future upgrades.
· Work products for this position include a final requirements document that clearly and concisely lists all necessary technical specifications for a future-state case-based surveillance system.
· Create and manage the systems and pipelines that enable efficient and reliable flow of data, including ingestion, processing, and storage.
· Collect data from various sources, transforming and cleaning it to ensure accuracy and consistency. Load data into storage systems or data warehouses.
· Optimize data pipelines, infrastructure, and workflows for performance and scalability.
· Monitor data pipelines and systems for performance issues, errors, and anomalies, and implement solutions to address them.
· Implement security measures to protect sensitive information.
· Collaborate with data scientists, analysts, and other partners to understand their data needs and requirements, and to ensure that the data infrastructure supports the organization's goals and objectives.
· Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs.
· Implement and maintain ETL processes to ensure the accuracy, completeness, and consistency of data.
· Design and manage data storage systems, including relational databases, NoSQL databases, and data warehouses.
· Stay knowledgeable about industry trends, best practices, and emerging technologies in data engineering, and incorporate them into the organization's data infrastructure.
· Provide technical guidance to other staff.
· Work closely with a multidisciplinary team, including data content experts, analysts, data scientists, epidemiologists, IT staff, and other organizational personnel.
· Communicate effectively with partners at all levels of the organization to gather requirements, provide updates, and present findings.
Qualifications
· Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
· 5 years of experience with project oversight, including communication with end-users and technical staff.
· Proficiency in programming languages commonly used in data engineering, such as Python, Java, Scala, or SQL. Candidates should be able to implement data automation within existing frameworks rather than writing one-off scripts.
· Experience with Microsoft Azure cloud technologies and frameworks or similar (AWS).
· Strong understanding of database systems, including relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
· Experience regarding engineering best practices such as source control, automated testing, continuous integration and deployment, and peer review.
· Knowledge of data warehousing concepts and tools.
· Experience with cloud computing platforms.
· Expertise in data modeling, ETL (Extract, Transform, Load) processes, and data integration techniques.
· Solid understanding of FHIR and API-based architectures (a minimal sketch follows this list).
· Familiarity with agile development methodologies, software design patterns, and best practices.
· Strong analytical thinking and problem-solving abilities.
· Excellent verbal and written communication skills, including the ability to convey technical concepts to non-technical partners effectively.
· Experience working with project management and tracking software (e.g., JIRA and DevOps)
· Flexibility to adapt to evolving project requirements and priorities.
· Outstanding interpersonal and teamwork skills, and the ability to develop productive working relationships with colleagues and partners.
· Experience working in a virtual environment with remote partners and teams.
· Proficiency in Microsoft Office.
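To ground the FHIR and API-based architecture item above, here is a minimal sketch of pulling Patient resources from a FHIR R4 server over REST and flattening a few fields. The base URL is a placeholder, and a real integration would also handle authentication, paging, retries, and PHI safeguards.

```python
"""Minimal sketch of reading a FHIR search Bundle and flattening it for storage."""
import requests

BASE = "https://fhir.example.org/baseR4"  # hypothetical FHIR endpoint

def fetch_patients(count: int = 50) -> list[dict]:
    resp = requests.get(
        f"{BASE}/Patient",
        params={"_count": count},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    rows = []
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        rows.append({
            "id": patient.get("id"),
            "gender": patient.get("gender"),
            "birth_date": patient.get("birthDate"),
        })
    return rows

if __name__ == "__main__":
    for row in fetch_patients(5):
        print(row)
```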
Job Highlights
· Location: Remote, must be based in the United States
· Work Schedule: 8 am - 5 pm Central Time, with a flexible start of up to one hour earlier or later.
· Salary Range: $103,500-$143,500 per year, plus benefits. Individual salary offers will be based on experience and qualifications unique to each candidate.
· Position Type: Grant funded, limited-term opportunity
· Position End Date: June 30, 2026
Special Notes
This role is involved in a dynamic public health program. As such, roles and responsibilities are subject to change as situations evolve. Roles and responsibilities listed above may be expanded upon or updated to match priorities and needs, once written approval is received by the CDC Foundation in order to best support the public health programming.
The CDC Foundation is a smoke-free environment.
Relocation expenses are not included.
All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, sex, national origin, age, mental or physical disabilities, veteran status, and all other characteristics protected by law.
We comply with all applicable laws including E.O. 11246 and the Vietnam Era Readjustment Assistance Act of 1974 governing employment practices and do not discriminate on the basis of any unlawful criteria in accordance with 41 C.F.R. §§ 60-300.5(a)(12) and 60-741.5(a)(7). As a federal government contractor, we take affirmative action on behalf of protected veterans.
About the CDC Foundation
The CDC Foundation helps CDC save and improve lives by unleashing the power of collaboration between CDC, philanthropies, corporations, organizations and individuals to protect the health, safety and security of America and the world. The CDC Foundation is the go-to nonprofit authorized by Congress to mobilize philanthropic partners and private-sector resources to support CDC's critical health protection mission. The CDC Foundation manages hundreds of programs each year impacting a variety of health threats from chronic disease conditions including cardiovascular disease and cancer, to infectious diseases like rotavirus and HIV, to emergency responses, including COVID-19 and Ebola. Visit ********************* for more information.
Senior Data Engineer II
Boston, MA
WHO WE ARE
ActBlue is a nonprofit organization dedicated to creating cutting-edge technology that fuels Democratic victories and enables progressive causes to thrive.
Our vision is simple: building change through the power of people. Since our founding, we've been building innovative solutions to revolutionize grassroots fundraising - if you've donated to a Democratic campaign or a progressive organization online, you've probably used our platform! We believe in putting power in the hands of small-dollar donors by helping thousands of groups - from local candidates to national movements - mobilize their communities and create a lasting impact. Every member of our team is deeply committed to advancing our shared mission and core values. Together, we are shaping the future of democracy.
THE OPPORTUNITY
We're looking for a Senior Data Engineer II to join ActBlue's Data Department and contribute to building and evolving the data products that power our platform.
As part of the Data Product Team, you'll help integrate our internal data platform with user-facing applications and infrastructure. You'll work across the data lifecycle-from ingestion and modeling to deploying machine learning pipelines-focused on unlocking insights, personalization, and automation that support political campaigns, organizations, and donors.
This is a hands-on role for someone who thrives in a collaborative environment and brings deep Python engineering expertise along with a pragmatic, product-minded approach to data systems.
WHAT YOU WILL DO
Design, build, and maintain scalable, reliable, and secure data pipelines using Python, with a focus on enabling data access and insight across product teams, engineering, and entities.
Develop reusable data services and frameworks that support high-quality data ingestion, transformation, and ML model deployment - accelerating analytics and experimentation across the organization.
Collaborate with data scientists and ML engineers to productionize machine learning workflows using SageMaker, Vertex AI, or other MLOps tools.
Implement monitoring, testing, and CI/CD automation for data pipelines and ML services.
Own and evolve real-time and batch data integrations between ActBlue's core systems and user-facing applications - influencing design decisions and architecture across multiple teams including Product, Engineering, and Analytics.
Develop, optimize, and support reverse ETL workflows using tools like Hightouch.
Participate in code reviews, mentor junior engineers, and help foster a high-trust engineering culture.
Demonstrate technical leadership through writing documentation, establishing effective monitoring, and fostering clear and audience-oriented communication.
WHAT YOU BRING
5+ years of relevant professional experience in data engineering or backend development with a strong focus on Python.
Expertise in writing clean, modular, tested, and production-ready Python code.
Strong understanding of data architecture, distributed systems, and security best practices.
Experience deploying and supporting production ML workflows (e.g., SageMaker, Vertex AI, or equivalent).
Familiarity with ETL tools such as Fivetran and data modeling frameworks like DBT.
Solid command of SQL and experience working with large analytical databases (e.g., Redshift, PostgreSQL).
Experience with monitoring and observability using Datadog or similar tools.
A team player mentality. You keep the end user in mind and enjoy hearing feedback from your teammates, yet know when and how to defend your own ideas in a respectful manner.
Commitment to ActBlue's mission and values, including equity, accessibility, and civic engagement
BONUS POINTS
Experience with ML platforms like SageMaker, Vertex AI, TensorFlow, or Modelbi.
Experience with real-time data systems or streaming platforms.
Experience contributing to internal platforms or tooling used across engineering teams.
Experience implementing robust testing frameworks for data workflows (e.g., Pytest, dbt tests)
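As a small illustration of the testing-framework bonus item above, here is a minimal Pytest-style sketch for a data transformation; the transformation and column names are invented for the example.

```python
"""Minimal sketch of Pytest-style tests for a data transformation step."""
import pandas as pd
import pytest

def normalize_donations(df: pd.DataFrame) -> pd.DataFrame:
    """Example transform: lowercase emails and drop non-positive amounts."""
    out = df.copy()
    out["email"] = out["email"].str.lower()
    return out[out["amount_cents"] > 0].reset_index(drop=True)

@pytest.fixture
def raw():
    return pd.DataFrame({
        "email": ["Donor@Example.com", "x@y.org"],
        "amount_cents": [2500, 0],
    })

def test_emails_are_lowercased(raw):
    assert normalize_donations(raw)["email"].str.islower().all()

def test_non_positive_amounts_dropped(raw):
    assert (normalize_donations(raw)["amount_cents"] > 0).all()
```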
WORK & BENEFITS SNAPSHOT
This posting is for a full-time, remote, salaried position. Travel may be required on a limited basis to attend all-staff and departmental retreats (1-2 times per year). Additional travel may be required for select positions.
Registered States*: Arizona, California, Colorado, Connecticut, Florida, Georgia, Hawaii, Illinois, Maryland, Massachusetts, Michigan, Minnesota, Missouri, New Hampshire, New Jersey, New York, North Carolina, North Dakota, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, Texas, Utah, Vermont, Virginia, Washington, Wisconsin, and Washington D.C.
*While ActBlue is currently registered to support remote work in the states listed above, we possess the ability to register in additional states as needed. If you are located in a state not listed, we may still be able to proceed with your application, but please note that the offer process may take longer to accommodate registration requirements.
Work Schedule:
This role requires availability during established, regular business hours (Mon-Fri) and is expected to be a part of an on-call rotation which will result in working nontraditional hours as needed.
Work Environment:
Employees can expect to work with distributed teams across all U.S. time zones. Our roles require extended technology usage, and proficiency with virtual communication tools such as Zoom and Slack. Regular attendance in virtual meetings is inherent to every position.
Salary Range Details:
Salary Range: $173,676 - $192,209 - $210,741
ActBlue is committed to consistent compensation practices across our organization. Final salary offers will take into account factors such as candidate experience, interview performance and current team salary parity.
Benefits:
Flexible work schedules and an unlimited time-off policy
Fully paid and trans-inclusive health, dental, and vision insurance for employees and their families; plus fully-paid health reimbursement arrangement to use for out of pocket expenses and fully-paid short- and long-term disability
Fully paid basic and AD&D life insurance and a voluntary supplemental life insurance option
Dependent and health care flexible spending account options
Employee Assistance Program (EAP) benefits for employees
Automatic 2% Employer-paid 401K contribution, plus up to an additional 6% match on employee contributions
A minimum of three months paid medical, family and parental leave (for all new parents, adoptions included)
Commuter or home-office benefits, including a $1,000 home-office setup allowance for all new full-time remote employees
Additional perks including quarterly snack deliveries and digital subscriptions to the Boston Globe & New York Times
ActBlue is unable to sponsor work visas at this time.
UNION INFORMATION
The terms and conditions of this position are subject to a collective bargaining agreement with the Communications Workers of America, the exclusive bargaining agent of covered ActBlue Technical Services employees.
BACKGROUND CHECKS
As part of our hiring process, ActBlue will conduct a background check at the time of offer. This will be completed in compliance with applicable laws and will not be initiated without your consent.
INCLUSION STATEMENT FROM ACTBLUE
ActBlue is committed to equal employment opportunities and fostering a diverse, inclusive workplace. We celebrate unique perspectives, honor the dignity of all individuals, and recognize that diverse backgrounds and identities strengthen our mission.
If you're passionate about our work and see yourself in this role, we encourage you to apply-even if you don't meet every requirement.
We also provide reasonable accommodations for individuals with disabilities throughout the hiring process and employment. To request an accommodation, email ***********************.
*ActBlue will never ask candidates to buy equipment, nor will we email from anything other than an actblue.com or actbluetech.com email address.
Senior Data Engineer
Remote
Application Instructions
Click Apply to submit your online application. Please attach a resume and thoughtful cover letter on the "My Experience" page in the "Resume/CV" field.
Active City Year Staff members must login to Workday to apply internally.
Number of Positions: 1
Work Location: 100% Remote
Position Overview
The Senior Data Engineer works closely with local City Year data experts, school district IT professionals, and a multitude of partners to manage ingestion of data from many sources. Our ideal candidate is professionally experienced in all things Azure and familiar with DevOps. They will use their Azure experience, especially with Azure Data Factory and Databricks, to lead the end-to-end development and implementation of modern ETL/ELT data pipelines for our team. This candidate will be excited to promote the effective use of timely and accurate K-12 education data and empower front-line practitioners with the information needed to have greater impact on the students we serve. The Senior Data Engineer reports to the Director of Data Management and Reporting.
Job Description
As a Senior Data Engineer at City Year, you will:
Be our resident Azure expert and trusted advisor
Own the design, development, and implementation of modern data pipelines, data factories and data streams. This is a hands-on role.
Lead the planning and then implement the data platform services including sizing, configuration, and needs assessment
Own the management and development of various third-party data integrations
Lead the development of frameworks and data architectures for large-scale data processing that ensure timely and accurate movement of data into and among City Year's systems, and help implement them (a minimal sketch follows this list)
Influence and make recommendations on the technical direction for the team by leveraging your prior experiences and your knowledge of emerging technologies and approaches
Lead our team in identifying and promoting data management best practices
Conduct full technical discovery, identify pain points, gather business and technical requirements, and explain the “as is” and “to be” scenarios to fully understand user stories from stakeholders and users
Lead and participate in deep architectural discussions to build confidence and ensure customer success when building new solutions and migrating existing data applications on the Azure platform
Serve as a leader in bridging the gap between technical and non-technical staff in understanding our data and its processing at City Year.
Own the implementation and use of ETL and data management best practices, DevOps, CI/CD, and deployment strategies to ensure product quality, agility, robustness, and recoverability
Partner with districts and internal customers to understand their requirements and implement data solutions.
Create integrations between internal systems and support them operationally.
Teach others on the DMAR team about Azure, Databricks, and best practices to ensure all services and technology implemented can be supported by others on the team.
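To make the pipeline work above more tangible, here is a minimal Databricks-style PySpark sketch that ingests a vendor CSV extract and writes it to a curated Delta table. The storage paths, columns, and table name are assumptions; in practice the step would be parameterized and orchestrated from Azure Data Factory with CI/CD through Azure DevOps.

```python
"""Minimal sketch of a Databricks-style ingestion step (placeholder names)."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read a raw attendance extract landed by an upstream copy activity.
raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/attendance/"))

clean = (raw
         .withColumn("attendance_date", F.to_date("attendance_date", "MM/dd/yyyy"))
         .dropDuplicates(["student_id", "attendance_date"]))

# Write to a curated Delta table that reporting tools read from.
(clean.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("curated.attendance_daily"))
```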
You must have:
3+ years of professional experience (not simply coursework or capstone projects) in the following:
Working in an Azure environment on ETL/ELT projects
Azure DevOps
Azure Databricks
Azure Data Factory
SQL
Python
Relational databases
Working with heterogeneous datasets/formats from many vendors, providers, and platforms
Data Architecture experience
Have experience and demonstrated ability to successfully integrate and manage data across disparate information systems
Excellent written and verbal communication skills, especially the ability to express complex technical information in terms that are rigorously accurate yet understandable to non-technical users
Attention to detail while working on multiple projects at once
Demonstrated success and effectiveness working in and promoting a rapidly changing, collaborative, and time-critical work environment
Nice to have:
Some experience
with Databricks Lakehouse
with Microsoft Fabric
with archiving and backup solutions in Microsoft Azure
in a role at a school district, Charter Management Organization (CMO), or Ed-tech company with an emphasis on supporting end-user data needs
in an Agile environment
Commitment to continuous improvement and City Year's mission: our desire to focus talents on helping improve outcomes for kids in school and our AmeriCorps members who support them
You love learning new things. You're curious and ask good questions. You solicit feedback from others, accept it with grace, and act on it
What we offer:
Your technical skills will be used to have a significant, positive impact on children's education outcomes and future opportunities
A role on a small, high-powered team that is integral to delivering on the mission of City Year
Opportunity for control over the design and implementation of solutions
Space and opportunity to develop new technical skills
Focus on creating a work environment that is diverse, inclusive, equitable, and encourages belonging
Opportunity to work with some amazing people!
Benefits
Full-time employees will be eligible for all benefits including vacation, sick days and organization holidays. You may participate in all benefit programs that City Year establishes and makes available to eligible employees, under (and subject to all provisions of) the plan documents that govern those programs. Currently, City Year offers medical, dental, vision, life, accidental death and dismemberment and disability coverage, Flexible Spending Accounts (FSA), and other benefits including 401(k) plan(s) pursuant to the terms and conditions of company policy and the 401(k) plan document. For more information, click here.
Employment at City Year is at-will.
City Year does not sponsor work authorization visas.
Data Scientist Supervisor
Los Angeles, CA
Salary Range: $9,852.82 - $13,278.10 Monthly
Data Scientist Supervisor (“The Director”) provides administrative and technical supervision to a unit responsible for development and maintenance of Information Technology (IT) systems, and data analytics and business intelligence initiatives for Community Programs (CP) within the Department of Health Services (DHS). Community Programs includes the Office of Diversion and Reentry, the Harm Reduction Division, CalAIM Housing and Reentry Initiatives, and other programs that serve people who are justice-involved and/or experiencing homelessness. This position serves as the primary technical resource for data analytics and business intelligence; delivers reporting and evaluation that supports data-driven decision-making and improves operational efficiency; works with senior IT and program leadership to design, implement, and maintain data systems and analytics infrastructure; ensures the integration of CP data with data from multiple health, housing, and justice platforms; and defines the overall vision, governance, and strategy for data analytics, business intelligence, and IT integration within Community Programs.
Position requires expert knowledge of and experience in advanced data and statistical modeling and computational methods, statistical programming languages and packages, and big-data engineering solutions; and the ability to identify and resolve business needs and issues of strategic importance within the assigned department. The position will be a hybrid role, and the office is based in downtown Los Angeles.
ESSENTIAL FUNCTIONS of the Data Scientist Supervisor include, but are not limited to:
Plans, organizes, supervises, and evaluates complex data analytics, business intelligence and IT integration operations across Community Programs.
Develops, implements, and monitors large-scale data science and reporting projects from inception to deployment, ensuring objectives and outcomes are met in a timely manner
Supervises and mentors a diverse team of analysts and data scientists; supports training and development of technical skills; promotes continuous professional development; fosters a collaborative, high-performing work environment; and manages cross-department partnerships on data analytics.
Advises program leadership on analytics opportunities, emerging technologies, and strategic initiatives to improve operational efficiency and data-driven decision-making.
Directs the design, development, and maintenance of complex data systems that hold case management, healthcare and justice data, including the Comprehensive Health Accompaniment and Management Platform (CHAMP), the Diversion Database, and the LA County InfoHub. Oversees enhancements to ensure interoperability, usability, and performance. Leads the integration of the data systems with Community Programs' operations across health, housing and justice programs.
Works closely with CP leadership across programs to identify business needs and to define the overall strategy and vision for data analytics and IT integration, aligning with DHS and Countywide priorities in health, housing, and justice.
Collaborates with DHS' Chief Information Office and DHS Population Health Analytics, and other County departments to improve IT systems, large databases, data pipelines, and analytic tools that improve service delivery, operational efficiency, and program effectiveness.
Oversees the design and production of dashboards, performance reports, and analytics tools for internal and external stakeholders, including CP teams and contracted providers (e.g. community-based organizations), the Board of Supervisors, managed care plans, and the broader community.
Leads efforts to standardize data collection, analysis, and reporting practices across programs and providers.
Directs data governance, privacy, and security initiatives across Community Programs, ensuring compliance with DHS, County, and State requirements.
Works closely with the CP CalAIM team to support Medicaid claiming; oversees data management, reporting, and data quality assurance for Medicaid billing operations, including CalAIM Community Supports and Enhanced Care Management, and Specialty Mental Health Services and Drug Medi-Cal billing.
Implements and maintains data infrastructure projects, including creation of a CP data repository in InfoHub and integration of data from non-DHS sources
Serves as the technical liaison for system integrations with external data exchanges with other County Departments through the County's Infohub and with non-County partners through LANES (a Countywide Health Information Exchange) and other health information exchange platforms.
Oversees creation and implementation of data-sharing agreements and MOUs to expand data access and improve interdepartmental collaboration.
Oversees risk management, incident investigations, and IT security reviews related to CP data systems and reporting platforms.
Represents CP in Countywide data governance, oversight, and strategic planning committees.
Promotes innovation in data science and analytics through the exploration and implementation of machine learning, predictive modeling, and other advanced analytic methodologies.
JOB QUALIFICATIONS
Minimum Education/Experience
A Bachelor's degree from an accredited college or university in Data Science, Computer Science, Information Systems, Mathematics, Machine Learning, Statistics, Business Analytics, Data Analytics, Public Health, Epidemiology or a related field.
AND five (5) years of experience doing large-scale data integration, data reporting and analytics or IT business analysis; coordination or oversight of complex data science projects to support program, policy and operational decision-making and management; and data-driven program design, implementation, evaluation and quality improvement. That experience must include two (2) years supervising a team of data science professionals.
A Master's or Doctoral degree in Data Science, Computer Science, Information Systems, Mathematics, Machine Learning, Statistics, Business Analytics, Data Analytics, Public Health, Epidemiology or a related field may substitute for up to two (2) years of experience.
Certificates/Licenses/Clearances
A valid California Class C Driver License or the ability to utilize an alternative method of transportation when needed to carry out job-related essential functions.
Successful clearing through the Live Scan process with the County of Los Angeles.
Other Skills, Knowledge, and Abilities
Strong leadership and management skills, with the ability to motivate and inspire a team.
Proven experience in data analysis and visualization tools, such as SQL, Excel, Tableau, Power BI, or similar tools.
Excellent analytical and problem-solving skills, with a strong attention to detail.
In-depth knowledge of statistical analysis techniques and methodologies
Proficiency in data modeling and data manipulation.
Strong business acumen and the ability to connect data insights to business objectives.
Excellent communication and presentation skills, with the ability to translate complex data into clear and actionable insights.
Ability to work effectively in a fast paced, dynamic environment, managing multiple priorities and meeting deadlines.
Experience with cloud data platforms such as Microsoft Azure, AWS, Databricks, Snowflake, or Google BigQuery.
Proven track record of delivering impactful insights and recommendations based on data analysis.
Demonstrated experience in developing and implementing analytics strategies in a corporate environment.
Strong understanding of data governance principles and practices.
Familiarity with data visualization best practices and tools.
Experience working with large datasets and using statistical analysis techniques.
Knowledge of programming languages such as Python or R is preferred.
Certification in relevant analytics tools or methodologies is a plus.
ONLINE APPLICATION REQUIREMENTS
At a minimum, candidates need to submit/upload electronic copies of a resume describing their education (including training certifications) and relevant paid and volunteer experience related to the essential job functions. Applications need to include legible copies of education diplomas/transcripts as applicable.
A cover letter may be submitted but is not required.
Legible copies of certificates to substantiate proficiency in skills, knowledge and abilities may be submitted.
Applications without supporting documentation at the time of application, or within 5 business days after the initial application, will not be included in the candidate pool.
Review of job description at *************************************** is suggested, especially if applying to the position from a third-party online application.
PHYSICAL DEMANDS
Stand: Frequently
Walk: Frequently
Sit: Frequently
Handling: Occasionally
Reach Outward: Occasionally
Reach Above Shoulder: Occasionally
Climb, Crawl, Kneel, Bend: Occasionally
Lift/Carry: Occasionally (up to 35 lbs)
Push/Pull: Occasionally (up to 35 lbs)
See: Constantly
Taste/Smell: Not Applicable
Frequency key: Not Applicable = not required for essential functions; Occasionally = 0-2 hrs/day; Frequently = 2-5 hrs/day; Constantly = 5+ hrs/day
WORK ENVIRONMENT
General Office Setting, Indoors Temperature Controlled
EEOC STATEMENT
It is the policy of Heluna Health to provide equal employment opportunities to all employees and applicants, without regard to age (40 and over), national origin or ancestry, race, color, religion, sex, gender, sexual orientation, pregnancy or perceived pregnancy, reproductive health decision making, physical or mental disability, medical condition (including cancer or a record or history of cancer), AIDS or HIV, genetic information or characteristics, veteran status or military service.
Marketing Data and Analytics
California
Doctors are overworked, burnt out, and quitting en masse.
At Freed, we combine clinician love with the latest AI tech and intense execution to create products that make clinicians happier.
Our first product is an AI scribe that automates medical documentation.
Since May of 2023, we have:
Acquired 26,000 paying and loving clinicians
Generated 100,000 patient notes daily and over 3 million monthly
Made thousands of clinicians happier
With the backing of Sequoia Capital and other world-class VC's, we are rapidly expanding our product offering. Patient-facing assistants, patient insights, EHR integrations, and other products are being built and used by thousands of clinicians every day.
We are looking for entrepreneurs. Fast, ambitious, and smart individuals who want to take care of the people who care for our health. Expect intense, clinician-focused, and interesting co-workers who want to win.
With an office in San Francisco, we embrace a hybrid schedule that brings out the best in teamwork and innovation. Our teams come together in person three days a week to collaborate, connect, and have a little fun along the way.
ABOUT THE ROLE
As the first Marketing Data and Analytics Marketer, you will play a key role in shaping our marketing efforts by structuring, gathering, analyzing, and interpreting data to optimize our strategies. You will be responsible for establishing data-driven decision-making processes, driving marketing team performance analysis, and continuously improving ROI. You will collaborate closely with cross-functional teams to deliver actionable insights and provide strategic recommendations that drive growth.
HOW YOU'LL HAVE IMPACT
Lead the development of the marketing analytics strategy and execution across all digital channels.
Oversee the integration of marketing data from various sources (e.g., CRM, web analytics, paid media, email campaigns, social media, etc).
Ensure data quality, accuracy, and integrity across all marketing systems.
Establish KPIs and develop dashboards to measure and track the success of marketing campaigns and initiatives.
Perform deep-dive analyses into campaign performance, identifying trends, insights, and areas for optimization.
Use data-driven insights to continuously enhance marketing strategies, including customer acquisition, retention, and overall engagement.
Build complex models and conduct multivariate testing to optimize marketing efforts (e.g., A/B testing, predictive modeling); a minimal sketch follows this list.
Provide regular reporting to the CMO and executive team on the health and performance of marketing efforts.
Conduct cohort analysis, customer segmentation, and lifetime value (LTV) analysis to guide decision-making.
Partner with marketing, product, and sales teams to align on business objectives, understand data needs, and deliver impactful insights.
Serve as the go-to expert on marketing analytics for senior leadership, translating data insights into actionable business recommendations.
Constantly assess the effectiveness of marketing strategies, implementing iterative improvements based on real-time data and results.
Recommend innovative solutions for marketing automation and efficiency improvements.
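For illustration only, and referenced from the A/B-testing item above: the sketch below shows a minimal two-proportion z-test in Python using only the standard library. The variant names and conversion counts are hypothetical, and in practice this logic would sit behind the team's experimentation tooling or a statistics library rather than be hand-rolled.

# Minimal A/B-test sketch with hypothetical data; assumes Python 3.8+ for statistics.NormalDist.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) comparing conversion rates of variants A and B."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the rate difference
    z = (conv_b / n_b - conv_a / n_a) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value

if __name__ == "__main__":
    # Hypothetical campaign results: variant B converts 540/9,800 visitors vs. A's 480/10,000.
    z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=9_800)
    print(f"z = {z:.2f}, p = {p:.4f}")  # treat B as the winner only if p clears the pre-chosen alpha

The same pattern extends to the cohort, segmentation, and LTV analyses listed above; only the metric definitions change.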
WHAT YOU'LL BRING
Bachelor's degree in Marketing, Data Science, Statistics, Business Analytics, or a related field (Master's degree is a plus).
7-10 years of experience in marketing data and analytics, preferably in a fast-paced startup or technology environment.
Strong background in measuring a product-led growth (PLG) motion as well as digital marketing channels.
Proven track record of using data to drive marketing strategy and decision-making.
Experience with advanced data analysis tools (e.g., Google Analytics, Looker, SQL, R, Python).
Advanced proficiency in data visualization and reporting tools (e.g., Looker, Google Data Studio, Power BI).
Strong analytical and problem-solving skills with the ability to turn complex data into clear, actionable insights.
Deep understanding of customer segmentation, behavior tracking, and predictive analytics.
Excellent communication and presentation skills, with the ability to convey complex data insights to non-technical stakeholders.
WHAT WE'LL BRING
Competitive salary and equity in a high-growth company
Opportunity to make an immediate impact
Medical, dental, and vision coverage
Unlimited paid time off
Company-sponsored annual retreats
Commuter stipend for our San Francisco-based employees
401(k) plan to support your long-term financial goals
Data Scientist, Ambulatory Transformation & Performance
Los Angeles, CA jobs
General Information
Onsite or Remote: Flexible Hybrid
Work Schedule: Monday-Friday, 8am-5pm
Posted Date: 10/18/2024
Salary Range: $105,700 - $234,500 Annually
Employment Type: 2 - Staff: Career
Duration: Indefinite
Job #: 19946
Primary Duties and Responsibilities
The Data Scientist plays a pivotal role in transforming raw data into actionable insights that drive the efficient operation of ambulatory clinics. This position involves advanced data modeling, statistical analysis, and the application of machine learning techniques to identify trends, optimize performance, and support data-driven decision-making. The Data Scientist will design and maintain robust data models, develop predictive models, and collaborate with stakeholders to interpret complex data sets. The role requires expertise in data analysis, a strong understanding of healthcare data, and a commitment to enhancing the financial viability and operational efficiency of ambulatory clinics.
* Develop, refine, and maintain complex data models that support ambulatory operations, ensuring data accuracy and consistency.
* Apply statistical analysis and machine learning techniques to analyze large datasets, identify trends, and generate predictive insights.
* Design and implement predictive models to forecast key performance indicators, patient outcomes, and ambulatory operational efficiencies.
* Create and validate algorithms for data mining, cleansing, and transformation to enhance data usability.
* Lead or participate in projects focused on enhancing data infrastructure, analytical capabilities, and reporting frameworks.
* Explore and implement innovative data science techniques and tools to address complex challenges in ambulatory operations.
* Stay current with advancements in data science and healthcare analytics, applying new methods and technologies to improve performance and outcomes.
* Work closely with cross-functional teams, including IT, clinical, and business stakeholders, to understand data needs and deliver actionable insights.
* Present findings and recommendations to leadership and other stakeholders in a clear, concise, and actionable manner.
* Provide guidance and mentorship to analysts within the team.
* Lead efforts in data integration, ensuring seamless interoperability between multiple data sources and systems.
* Enforce data governance standards, including data quality, metadata management, and data security protocols.
* Develop and manage ETL (Extract, Transform, Load) processes to curate data from various sources into structured formats suitable for analysis; a minimal ETL sketch appears after this list.
* Document and maintain data lineage, ownership, and access requirements, ensuring compliance with healthcare regulations.
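As referenced in the ETL bullet above, here is a rough pandas sketch of the extract-transform-load pattern: read a raw visit-level export, apply basic cleansing, and load a monthly clinic-level summary. The file paths and column names (visit_id, clinic_id, visit_date, no_show_flag) are invented placeholders, not UCLA Health's actual data model.

# Hypothetical ETL sketch; assumes pandas is installed and the named columns exist in the raw file.
import pandas as pd

def extract(path):
    # Extract: read a raw visit-level export.
    return pd.read_csv(path, parse_dates=["visit_date"])

def transform(raw):
    # Transform: basic cleansing plus a monthly clinic-level rollup.
    clean = (
        raw.dropna(subset=["clinic_id", "visit_date"])
           .drop_duplicates(subset=["visit_id"])
           .assign(month=lambda d: d["visit_date"].dt.to_period("M").astype(str))
    )
    return (
        clean.groupby(["clinic_id", "month"], as_index=False)
             .agg(visits=("visit_id", "count"), no_shows=("no_show_flag", "sum"))
    )

def load(curated, path):
    # Load: write the curated table for downstream modeling and reporting.
    curated.to_csv(path, index=False)

if __name__ == "__main__":
    load(transform(extract("raw_visits.csv")), "curated_clinic_monthly.csv")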
Salary range: $102,500 - $227,700
Job Qualifications
Required Skills and Experience:
* Bachelor's degree in a related field or equivalent experience/training.
* Minimum of 3 years of experience in a healthcare-related organization, with a strong understanding of healthcare data and operations.
* Minimum of 5 years of experience with Python or R for data analysis, modeling, and machine learning applications.
* Proven expertise in data modeling, information design, and data integration.
* Advanced knowledge of data management systems, practices, and standards.
* Experience with complex data quality, governance issues, and data conversion.
* Strong analytical and problem-solving skills with attention to detail.
* Ability to abstract and represent information flows in systems through effective modeling.
* Excellent communication and interpersonal skills, with a demonstrated ability to work collaboratively across diverse teams.
Preferred Skills:
* Experience with Databricks, including managing and processing large datasets in a distributed environment.
* Experience with Azure DevOps, including managing workflows, version control, and collaborative project management.
UCLA Health welcomes all individuals, without regard to race, sex, sexual orientation, gender identity, religion, national origin or disabilities, and we proudly look to each person's unique achievements and experiences to further set us apart.
Senior Data Architect
Data engineer job at Blue Cross Blue Shield of Michigan
About Blue Cross and Blue Shield of Minnesota
At Blue Cross and Blue Shield of Minnesota, we are committed to paving the way for everyone to achieve their healthiest life. We are looking for dedicated and motivated individuals who share our vision of transforming healthcare. As a Blue Cross associate, you are joining a culture that is built on values of succeeding together, finding a better way, and doing the right thing. If you are ready to make a difference, join us.
The Impact You Will Have
Assist and support the design and implementation of Enterprise Data Architecture for IT, which will be used in the development and deployment of technology-driven business solutions for BCBSM and its business partners. Work with IT management, technology vendors, and customers to establish a strategic data technology direction with emphasis on improving the efficiency, effectiveness, integration, and quality of business solutions provided to our clients.
Your Responsibilities
Develop and maintain an Enterprise Data Architecture that articulates the principles, blueprints, and standards across the data domains of Transaction, Interaction, and Analytic, which are used in the deployment of technology solutions for BCBSM customers.
Develop and maintain an Enterprise Data Model (EDM) to serve as both the strategic and tactical planning vehicle for managing enterprise data. This effort involves working closely with the business data stewards across business divisions.
Work with business-driven project teams to ensure quality and compliance with the Enterprise Data Architecture by participating in data analysis/design activities and conducting appropriate technical data design reviews at various stages of the development life cycle. This includes providing data modeling expertise with both relational (i.e., entity relationship diagrams) and dimensional (i.e., star join schema) modeling techniques; a minimal star-schema sketch appears after this list.
Work with application project teams to adopt a strong data reconciliation process for replicated data, including defining the reconciliation architecture and conducting technical design reviews.
Identify and recommend specific infrastructure initiatives to further enhance the Enterprise Data Architecture Plan, aligning with the overall BCBSM business direction and IS strategies and coordinating with the annual IS planning/budget cycles.
Guide and mentor IS staff in the analysis and selection of technology acquisitions, ensuring that new products conform to and support the overall Enterprise Data Architecture strategy.
Guide, educate, and mentor individuals who play data-related roles (e.g., data analysts, data modelers, and business analysts) on the Data Architecture Strategy directives, principles, and standards.
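As referenced in the data modeling item above, the sketch below illustrates the dimensional (star join schema) pattern with a toy fact table and two dimensions in an in-memory SQLite database. The table and column names are invented for illustration and are not part of BCBSM's Enterprise Data Model.

# Toy star schema and star-join query; uses only the Python standard library (sqlite3).
import sqlite3

DDL = """
CREATE TABLE dim_member (member_key INTEGER PRIMARY KEY, member_id TEXT, region TEXT);
CREATE TABLE dim_date   (date_key   INTEGER PRIMARY KEY, calendar_date TEXT, month TEXT);
CREATE TABLE fact_claims (
    claim_key   INTEGER PRIMARY KEY,
    member_key  INTEGER REFERENCES dim_member(member_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    paid_amount REAL
);
"""

STAR_JOIN = """
SELECT d.month, m.region, SUM(f.paid_amount) AS total_paid
FROM fact_claims f
JOIN dim_member m ON m.member_key = f.member_key
JOIN dim_date   d ON d.date_key   = f.date_key
GROUP BY d.month, m.region;
"""

with sqlite3.connect(":memory:") as conn:
    conn.executescript(DDL)  # build the schema: one fact table surrounded by dimension tables
    conn.executemany("INSERT INTO dim_member VALUES (?, ?, ?)",
                     [(1, "M001", "Southeast"), (2, "M002", "West")])
    conn.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                     [(20240101, "2024-01-01", "2024-01"), (20240201, "2024-02-01", "2024-02")])
    conn.executemany("INSERT INTO fact_claims VALUES (?, ?, ?, ?)",
                     [(1, 1, 20240101, 125.0), (2, 2, 20240101, 300.0), (3, 1, 20240201, 80.0)])
    for row in conn.execute(STAR_JOIN):
        print(row)  # e.g. ('2024-01', 'Southeast', 125.0)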
Required Skills and Experience
5+ years of related professional experience. All relevant experience including work, education, transferable skills, and military experience will be considered.
Demonstrated strong skills in applying the data modeling techniques of relational (i.e., entity relationship diagrams) modeling and dimensional (i.e., star join schema) modeling.
Demonstrated human relations skills to effectively interact with peers, subordinates, internal and external customers, and vendors.
Demonstrated ability to influence and motivate individuals and teams.
Advanced presentation skills and oral and written communication skills.
Advanced technical knowledge of mainframe and client/server environments.
Advanced analytical skills related to cost/benefit analysis of large-dollar hardware and software implementations.
High school diploma (or equivalency) and legal authorization to work in the U.S.
Preferred Skills and Experience
Bachelor's degree
Experience with Power Designer and AWS Redshift is a plus
Role Designation: Hybrid
Anchored in Connection
Our hybrid approach is designed to balance flexibility with meaningful in-person connection and collaboration. We come together in the office two days each week - most teams designate at least one anchor day to ensure team interaction. These in-person moments foster relationships, creativity, and alignment. The rest of the week you are empowered to work remote.
Compensation and Benefits
$100,000.00 - $135,000.00 - $170,000.00 Annual
Pay is based on several factors that vary by position, including the skills, ability, and knowledge the selected individual brings to the specific job.
We offer a comprehensive benefits package which may include:
Medical, dental, and vision insurance
Life insurance
401k
Paid Time Off (PTO)
Volunteer Paid Time Off (VPTO)
And more
To discover more about what we have to offer, please review our benefits page.
Equal Employment Opportunity Statement
At Blue Cross and Blue Shield of Minnesota, we are committed to paving the way for everyone to achieve their healthiest life. Blue Cross of Minnesota is an Equal Opportunity Employer and maintains an Affirmative Action plan, as required by Minnesota law applicable to state contractors. All qualified applications will receive consideration for employment without regard to, and will not be discriminated against based on any legally protected characteristic.
Individuals with a disability who need a reasonable accommodation in order to apply, please contact us at: **********************************.
Blue Cross and Blue Shield of Minnesota and Blue Plus are nonprofit independent licensees of the Blue Cross and Blue Shield Association.