Data Scientist - Research Administration
Data scientist job in Kansas City, KS
Department: RI Administration
-----
Research Informatics
Position Title: Data Scientist - Research Administration
Job Family Group: Professional Staff
Summary: The Data Scientist - Research Administration provides dedicated data science and engineering support to the Department of Surgery at the University of Kansas Medical Center. The role will focus on developing data pipelines, performing statistical and machine learning analyses, and generating high-quality research outputs using large-scale clinical datasets. This position will work closely with surgeons and researchers to translate clinical questions into data-driven insights. It is a strategic role designed to strengthen the department's research infrastructure and competitiveness for external funding.
Key Roles and Responsibilities:
Collaborate with Department of Surgery researchers to define project requirements and analytic goals
Develop and maintain scalable data pipelines and perform ETL processes for clinical data (a minimal illustrative sketch follows this list)
Conduct statistical and machine learning analyses on large, complex healthcare datasets
Clean, transform, and prepare high-quality analytic datasets for research
Build and validate predictive models to support research questions and clinical insights
Develop and maintain reusable data marts for commonly used research variables
Document data workflows, coding processes, and analytic decisions to ensure reproducibility
Prepare visualizations, summary reports, and presentations of research findings
Contribute to manuscript and grant writing by providing data-related content and results
Ensure compliance with data governance, privacy regulations, and institutional policies
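To make the pipeline and data-preparation duties above concrete, here is a minimal illustrative sketch of an EHR-extract cleaning step in Python; the file, column names, and validity rules are hypothetical, not taken from the posting:

```python
# Minimal sketch: standardize a raw EHR extract into an analytic dataset.
# File and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("ehr_extract.csv", dtype={"mrn": str})  # hypothetical extract

analytic = (
    raw.rename(columns=str.lower)                      # normalize column names
       .assign(
           admit_date=lambda d: pd.to_datetime(d["admit_date"], errors="coerce"),
           age=lambda d: pd.to_numeric(d["age"], errors="coerce"),
       )
       .query("0 <= age <= 120")                       # drop implausible ages
       .drop_duplicates(subset=["mrn", "admit_date"])  # one row per admission
)
analytic.to_parquet("surgery_analytic_dataset.parquet", index=False)
```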
This job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. It is only a summary of the typical functions of the job, not an exhaustive list of all possible job responsibilities, tasks, duties, and assignments. Furthermore, job duties, responsibilities and activities may change at any time with or without notice.
Required Qualifications
Education:
Master's degree in data science, computer science, biostatistics, informatics, or a related quantitative field. Education may be substituted for experience on a year-for-year basis.
Work Experience:
2 years of experience applying statistical methods (e.g., linear/logistic regression, survival analysis) in research or healthcare settings.
2 years of experience with data science tools and programming languages such as Python or R.
1 year of experience in developing and maintaining data pipelines and performing data wrangling/cleaning tasks.
1 year of experience working with large healthcare datasets, including electronic health records (EHR).
Preferred Qualifications
Education:
Ph.D. in data science, biomedical informatics, computer science, biostatistics, or a related quantitative discipline. Education may be substituted for experience on a year-for-year basis.
Certifications/Licenses:
Certified Health Data Analyst (CHDA)
Certified Specialist in Predictive Analytics
AMIA credentials
Work Experience:
2 years of experience working with electronic health record (EHR) data from systems such as Epic or eClinicalWorks.
2 years of experience developing machine learning models (e.g., random forests, gradient boosting, neural networks) for healthcare or clinical research applications.
1 year of experience with high-performance computing or cloud platforms (e.g., AWS, Azure, Google Cloud).
1 year of experience contributing to peer-reviewed research publications or grant applications involving data analysis.
1 year of experience building and maintaining data marts or reusable data products for research.
Skills
Statistical analysis using R or Python
Data pipeline development and ETL processes
SQL and relational database querying
Machine learning model development and validation
Data cleaning and wrangling
Understanding of HIPAA and data privacy in research
Experience with EHR systems (e.g., Epic, eClinicalWorks)
Familiarity with clinical data models (e.g., OMOP, PCORnet)
Development of advanced machine learning models (e.g., deep learning, ensemble methods)
Advanced use of cloud-based data platforms or high-performance computing environments
Proficiency with data visualization tools (e.g., Tableau, Power BI, Plotly)
Knowledge of version control systems (e.g., Git)
Familiarity with research workflows and publication processes
Required Documents
Resume
Cover Letter
Comprehensive Benefits Package:
Coverage begins on day one for health, dental, and vision insurance and includes health expense accounts with generous employer contributions if the employee participates in a qualifying health plan. Employer-paid life insurance, long-term disability insurance, and various additional voluntary insurance plans are available. Paid time off, including vacation and sick leave, begins accruing upon hire, plus ten paid holidays. One paid discretionary day is available after six months of employment, and paid time off for bereavement, jury duty, military service, and parental leave is available after 12 months of employment. A retirement program with a generous employer contribution and additional voluntary retirement programs (457 or 403b) are available.
Employee Type: Regular
Time Type: Full time
Rate Type: Salary
Compensation Statement:
The pay range listed for this position is determined by our compensation program using market data and salary benchmarking. A combination of factors is considered in making compensation decisions including, but not limited to, education, experience and training, qualifications relative to the requirements of the position, and funding. At the University of Kansas Medical Center, a reasonable estimate for the starting pay range will be the minimum to midpoint of the posted range, taking into account the combination of factors listed above.
Pay Range: $75,000.00 - $115,000.00
Minimum: $75,000.00
Midpoint: $95,000.00
Maximum: $115,000.00
Principal Data Scientist
Data scientist job in Kansas City, KS
Description & Requirements
We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator, supporting both the enterprise and our clients. We are looking for an accomplished, hands-on individual contributor and team player to be part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work).
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute to and lead the creation, curation, and promotion of playbooks, best practices, lessons learned, and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge of the AI technology landscape and emerging developments, and evaluate their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional programming experience (e.g., Python, R).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or bachelor's degree in a quantitative discipline (e.g., Math, Physics, Engineering, Economics, Computer Science).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter.
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (standalone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
Background Investigations:
- IRS MBI - Eligibility
#techjobs #VeteransPage
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary: $156,740.00
Maximum Salary: $234,960.00
Data Scientist / Data Architect / Data Governance Lead
Data scientist job in Kansas City, MO
KEY RESPONSIBILITIES: Data Governance, Strategy and Architecture
* Define and drive the organization's overall data strategy, roadmap, and architecture vision, including the data and AI architecture vision and the design of scalable data lakes, data warehouses, and data fabric architectures.
* Establish and enforce data governance policies and standards to ensure data quality, consistency, and compliance with all relevant regulations (e.g., GDPR, CCPA). Lead the implementation of a comprehensive data governance framework, including data quality management, data lineage tracking, and master data management (MDM). Collaborate with data owners and stewards across business units to establish clear roles, responsibilities, and accountability for data assets.
* Establish clear rules and policies governing the responsible usage of data within AI and ML models, including documentation of data lineage for model training. Design data infrastructure specifically optimized for AI workloads, including data pipelines for machine learning models, and architect solutions for large language models (LLMs). Develop bias mitigation strategies to ensure diverse and representative datasets to prevent AI biases, and architect monitoring systems for model drift.
* Evaluate, recommend, and select appropriate data management technologies, including cloud platforms (e.g., AWS, Azure, GCP), storage solutions, and governance tools.
* Architect complex data integration patterns to connect disparate data sources across the organization, ensuring seamless data flow and a unified data view.
Data Security and Privacy
* Design and implement a robust data security architecture to protect sensitive data from unauthorized access, breaches, and corruption.
* Develop security protocols, such as encryption, access controls (IAM), and masking techniques to safeguard data in transit and at rest.
* Conduct regular security audits and vulnerability testing to identify gaps in security architecture and develop remediation plans.
* Ensure the data architecture and its supporting systems are compliant with internal policies and external data protection regulations.
Data Modeling and Management
* Design and maintain conceptual, logical, and physical data models for transactional and analytical systems.
* Oversee the development of database schemas, metadata management, and data cataloging efforts to improve data discoverability and understanding.
* Define and standardize data architecture components, including storage solutions (data lakes, warehouses, etc.), data pipelines, and integration patterns.
* Evaluate and recommend new data technologies, tools, and platforms that align with the organization's strategic needs.
Data Classification
* Design and implement a robust data security architecture, including controls for access management, encryption, and data masking to protect sensitive information.
* Create and manage an organization-wide data classification scheme based on data sensitivity and importance (e.g., public, internal, confidential, restricted).
* Implement technical controls and processes to automatically classify and tag data assets, ensuring proper handling and security.
* Collaborate with business and legal teams to define and apply data classification rules consistently.
Team Collaboration and Leadership
* Provide technical guidance and mentorship to data engineers, analysts, developers, and other IT teams on best practices for data management and security.
* Work closely with business stakeholders to understand their data requirements and translate them into effective architectural solutions.
* Foster a data-centric culture across the organization, promoting awareness and understanding of data governance principles.
ABOUT THE COMPANY:
Bluebird Fiber is a premier fiber telecommunications provider of internet, data transport, and other services to carriers, businesses, schools, hospitals, and other enterprises in the Midwest. To learn more, please visit bluebirdfiber.com.
Join an amazing team of telecommunication professionals! Bluebird is a dynamic, growing company in need of a Data Architect to be part of a collaborative team. This is a full-time, benefit-eligible position in our Kansas City office. All of us at Bluebird work hard to meet objectives for the organization and live the mission and values of this growing company to meet a common goal.
JOB SUMMARY:
We are seeking a highly skilled and strategic Data Architect to lead our data governance, security, and management initiatives. This senior role will be responsible for designing and implementing the organization's enterprise data architecture, ensuring that our data is secure, reliable, and accessible for business-critical functions. The ideal candidate is a proactive leader who can define data strategy, enforce best practices, and collaborate with cross-functional teams to align our data ecosystem with business goals.
REQUIRED QUALIFICATIONS:
* Bachelor's or master's degree in Computer Science, Information Technology, or a related technical field.
* 10+ years of hands-on experience in data architecture, data modeling, and data governance, with a proven track record of designing and implementing complex data ecosystems. Experience working in regulated industries is a plus.
* Proven experience (8+ years) designing and implementing enterprise-level data architectures.
* Extensive experience with data modeling, data warehousing, database technologies (SQL, NoSQL), big data technologies (e.g., Spark), and modern data platforms in cloud environments (e.g., AWS, Azure, GCP).
* Deep expertise in data governance and security principles, including regulatory compliance frameworks.
* Strong knowledge of how to structure data for machine learning and AI workloads, including experience with MLOps platforms.
* Hands-on experience with data classification and data cataloging tools (e.g., Collibra, Alation).
* Excellent communication, interpersonal, and leadership skills, with the ability to influence and build consensus across the organization.
PREFERRED QUALIFICATIONS:
* Professional certifications in data architecture, data governance, or cloud platforms.
* Experience with big data technologies (e.g., Hadoop, Spark).
* Familiarity with data integration and ETL/ELT frameworks.
Data Scientist
Data scientist job in Leawood, KS
Job Description
**Please Note: This position is open only to candidates authorized to work in the U.S. without the need for current or future visa sponsorship. Additionally, this position is based in the Kansas City area, and we are only considering candidates who reside locally.**
At Sunlighten, we're not just about infrared saunas; we're on a mission to improve lives through innovative health and wellness solutions. As a global leader in infrared sauna therapy with a 25-year legacy of innovation, we've grown from our Kansas City roots to establish a global footprint. With the wellness market projected to reach $7 trillion by 2026, we're proud to lead the way in light science and longevity. We're rapidly expanding our BI, AI, and Automation team and are on the hunt for a curious, proactive, and methodical Data Scientist - Forecasting, Experimentation & Scoring to take us to the next level. Based in the Kansas City metro area, we offer the best of both worlds: the growth of a global company with the close-knit culture of a local business.
Sunlighten's Data Scientist will drive measurable impact across Sales, Marketing, CX, and Operations through analytics and ML. You'll lead forecasting, lead/opportunity scoring, what‑if analysis, and rigorous experimentation so leaders can make confident decisions.
Duties/Responsibilities:
Partner with stakeholders (e.g., CSO, Marketing) to shape analytical questions into testable plans; frame success metrics and hypotheses; deliver what‑if analysis, statistical testing, and clear recommendations.
Design & run experiments (e.g., changing how we serve chat leads): define treatment/control, power, guardrails; implement tracking (Salesforce/Service Cloud/Marketing Cloud/GA4), and build Power BI readouts.
Own ML for BI priorities: improve Lead Score & Opp Score; demand planning; end‑to‑end forecasting; support website/product‑specific models.
Define business and model metrics; build golden labels/holdouts; quantify ROI.
Feature engineering from Salesforce/NetSuite/Five9/Marketing Cloud/Shopify/telemetry; partner with DE to productionize in Fabric.
Implement offline/online evaluation; drift monitoring; experiment tracking; reproducible notebooks; monthly goal snapshotting.
Wear multiple hats: when needed, take an analysis/ML project end‑to‑end and collaborate with devs to surface results in products or lightweight UIs.
Other duties as discussed and assigned.
Requirements
This position is open only to candidates authorized to work in the U.S. without the need for current or future visa sponsorship. Additionally, this position is based in the Kansas City area, and we are only considering candidates who reside locally.
2-5 years of enterprise experience in applied data science/analytics
2-5 years of experience developing and shipping predictive models, and delivering stakeholder-facing analyses.
Expertise in Python (pandas, scikit‑learn/lightgbm); SQL; forecasting (Prophet/ARIMA/XGB‑based); causal/A-B testing (t‑tests, proportion tests, chi‑square) and power analysis (a minimal illustrative sketch follows this list).
Expertise with Power BI for experiment/forecast dashboards; Microsoft Fabric for data/feature pipelines; CI/CD with Git.
Excellent communication skills: concise narratives, assumptions, and trade‑offs; clear documentation.
Bachelor's or Master's degree in Data Science/Computer Science/Stats/OR, or equivalent experience; portfolio/GitHub preferred.
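As a minimal illustration of the proportion testing and power analysis named in the requirements above, a sketch using statsmodels; the baseline rate, lift, and counts are hypothetical:

```python
# Minimal sketch: size an A/B test on conversion rates, then evaluate it.
# All rates and counts are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

# Power analysis: leads needed per arm to detect an 8% -> 10% conversion
# lift at alpha = 0.05 with 80% power.
effect = proportion_effectsize(0.10, 0.08)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0
)
print(f"~{n_per_arm:.0f} leads needed per arm")

# After the test: two-proportion z-test on observed conversions.
successes = [312, 258]  # conversions in treatment, control
trials = [3000, 3000]   # leads served in each arm
stat, p_value = proportions_ztest(successes, trials)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```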
Benefits
Opportunity to work in a collaborative and innovative environment.
Career growth opportunities in a market leading and rapidly growing wellness technology company.
Competitive Paid Time Off Policy + Paid Holidays + Floating Holidays.
Fully Equipped Fitness Center On-Site.
Lunch Program featuring a James Beard Award-winning chef.
Health (HSA & FSA Options), Dental, and Vision Insurance.
401(k) with company contributions.
Profit Sharing.
Life and Short-Term Disability Insurance.
Professional Development and Tuition Reimbursement.
Associate Discounts on Saunas, Spa Products and Day Spa Services.
Sunlighten provides equal employment opportunity. Discrimination of any type will not be tolerated. Sunlighten is an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status or any other characteristic protected by state, federal, or local law.
Actuary (Kansas City Metro)
Data scientist job in Overland Park, KS
Employment Type: Full-time, In-Office
Salary: $120,000 - $160,000/year + equity
Steadily is hiring an Actuary in the Kansas City Metro area who is the best at what they do. You'll be surrounded by team members who are also the best at what they do, which will make you even better! This is a full-time, in-office position based in Overland Park, KS.
As an Actuary, you will:
* Be responsible for the growth and profitability of our DP products nationwide.
* Develop industry-leading approaches to rating, exposure management, and capital modeling.
* Develop pricing of new products and enhance pricing of existing products
* Expand our product into new geographies.
* Analyze the drivers of profitability including loss ratios, actuarial indications, frequency/severity trends, retention and other data for all products, books, and channels.
* Manage rate filings across multiple states to ensure we are priced to achieve the required return across all products and segments.
* Develop and implement robust data-driven action plans to continually improve performance.
* Drive innovation by finding new and different ways to price better than the traditional solutions.
Your Background
* Experienced: You've been a high achiever in insurance for four years or more. You have experience managing property products (preferably personal lines - HO or DP). You understand the current US property market.
* Education: You are nearing actuarial accreditation with the Casualty Actuarial Society (CAS), having passed a minimum of 5 actuarial exams.
* Builder: You have a builder's mindset and can take projects and products from inception to launch and beyond. You have a bias towards action.
* Digital: Your tech savvy is exceptional. You have strong product sense and are a master of learning and implementing new software and processes. Your technical and analytical skills are top notch.
* Hungry: You want to make the leap into an earlier-stage tech company to rapidly accelerate your growth. You want to roll up your sleeves and hustle - you are not looking for a traditional 9-5 job.
Compensation and Benefits
* Compensation: $120k-$160k salary + equity in the company
* 3 weeks PTO plus six federal holidays
* Health Insurance including Medical, Dental, Vision, Life, Disability, HSA, FSA
* 401K
* Free snacks & regular team lunches
Locations
* Overland Park, KS
* Relocation assistance available for out-of-state candidates
* Steadily is building a workplace environment of team members who are passionate and excited to be together in person. Our office is located in Overland Park, KS, and is key to our fast-paced growth trajectory.
Why Join Steadily
* Good company. Our founders have three successful startups under their belt and have recruited a stellar team to match.
* Top compensation. We pay at the top of the Kansas City market (see comp).
* Growth opportunity: We're an early-stage, fast-growing company where you'll wear a lot of hats and shape product decisions.
* Strong backing. We're growing fast, we manage over $20 billion in risk, and we're exceptionally well-funded.
* Culture: Steadily boasts a unique culture that our teammates love. We call it like we see it and we're nothing if not candid. Plus, we love to have a good time.
* Awards: We've been recognized both locally and nationally as a top place to work. We were named a Top 2025 Startup in Newsweek, winner of Austin Business Journal's Best Places to Work in 2025, recognized in Investopedia's Best Landlord Insurance Companies, ranked No. 6 on Inc's list of Fastest Growing Regional Companies, 44th on Forbes' 2025 Best Startup Employers list, and 63rd on the prestigious Inc 5000 Fastest Growing Companies list.
P&C Actuary
Data scientist job in Kansas City, MO
This role helps support Lockton's goal of providing clear, actionable insights for our clients. These insights are used to facilitate informed decision-making and deliver improved outcomes. The role is responsible for creating tailored solutions for individual clients as well as enhancing the Analytics group's brand, workflows, and capabilities.
Job Responsibilities:
* Collaborate with internal and external stakeholders to understand needs, identify opportunities, and develop strategic solutions to clients' Property and Casualty challenges
* Work with various tools (Excel, VBA, Power BI, R, Python, SQL, etc.) to create sophisticated and scalable analytic tools and processes
* Leverage Lockton's data to answer ad hoc questions for internal and external audiences
* Create materials and trainings to enhance internal understanding of Analytics' capabilities and contribute to white papers / research articles to advance Lockton Analytics' brand in the market
* Remain informed on industry developments and effectively communicate the implications both internally and externally
* Work with analysts and other stakeholders to identify new or improved analytic solutions that solve business needs
* Communicate and explain complex analytic concepts to clients and internal associates
* Work with developers to embed actuarial methodologies in new proprietary Lockton applications
* Compile, analyze, interpret, and present data trends to guide decision-making
* Prepare a variety of analytic outputs based on client or prospect data, including loss projection and stratification, collateral analysis, simulation models, predictive models, claim dashboards, and other ad hoc reports (a minimal illustrative sketch of a frequency/severity simulation follows)
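As an illustration of the simulation models listed above, a minimal frequency/severity Monte Carlo sketch in Python; the distributions and parameters are hypothetical, not Lockton methodology:

```python
# Minimal sketch: simulate annual losses as Poisson claim frequency with
# lognormal claim severity. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_years = 10_000

claim_counts = rng.poisson(lam=12, size=n_years)      # claims per simulated year
annual_losses = np.array([
    rng.lognormal(mean=9.0, sigma=1.2, size=n).sum()  # sum of claim sizes
    for n in claim_counts
])

print(f"Expected annual loss: {annual_losses.mean():,.0f}")
print(f"95th percentile:      {np.percentile(annual_losses, 95):,.0f}")
```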
Data Scientist - Retail Pricing
Data scientist job in Overland Park, KS
We are looking for a Data Scientist! This position will play a key role in shaping data-driven strategies that directly influence the bank's profitability, customer value, and market competitiveness. This role sits at the intersection of analytics, finance, and product strategy - transforming data into pricing intelligence that supports smarter, faster business decisions.
Will design and implement advanced pricing and profitability models for retail banking products, leveraging internal performance metrics, market benchmarks, and third-party data sources. Through predictive modeling, elasticity analysis, and scenario testing, will help the organization optimize deposit and loan pricing, forecast financial outcomes, and identify growth opportunities.
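For illustration, one common form of elasticity analysis is a log-log regression, in which the coefficient on the logged rate approximates elasticity; a minimal sketch in Python with hypothetical file and column names:

```python
# Minimal sketch: estimate rate elasticity of deposit volume via log-log OLS.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("deposit_product_history.csv")   # hypothetical extract
X = sm.add_constant(np.log(df["rate_offered"]))   # log(rate) plus intercept
y = np.log(df["new_account_volume"])              # log(volume)

model = sm.OLS(y, X).fit()
elasticity = model.params["rate_offered"]         # slope approximates elasticity
print(f"Estimated elasticity: {elasticity:.2f}")
```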
Collaborating across product, finance, and executive teams, will translate complex analytical findings into clear business recommendations that drive strategic action. Will also contribute to enhancing our analytics infrastructure - improving data pipelines, model governance, and reporting capabilities to strengthen enterprise-wide decision-making.
Core Expertise: Pricing strategy · Profitability modeling · Financial forecasting · Machine learning · SQL · Python · R · Data visualization · Strategic analytics · Cross-functional collaboration
CapFed is an equal opportunity employer.
Data Center Quality Inspector - Intern
Data scientist job in Kansas City, MO
As a Data Center Quality Inspector I, you provide superb quality control services for our clients. The work includes performing essential inspections of electrical equipment and systems, such as switchgear and substations. Detailed knowledge of mechanical systems and instrumentation is critical to conduct inspections and interpret electrical drawings and specifications. Attentively, you ensure the equipment and technicians around you comply with company and site safety requirements. You prepare detailed daily reports about your findings, including time tracking and special inspection reports. Diligently, you approve and submit important documents such as Lockout/Tagout (LOTO) requests, team reports, and time tracking reports. You take pride in assuring that our clients receive safely installed and functional equipment every time.
If you are an analytical and detail-oriented individual who communicates effectively and prioritizes safety, this could be the position for you! Travel may be required.
Responsibilities
Read and interpret electrical drawings and specifications. Prepare written documentation such as daily narratives (work reports), special inspection reports, and daily time tracking.
Perform inspections of switchgear, standby generators, protective relays, and other electrical distribution components.
Submit LOTO requests; review and approve team reports and time tracking.
Ensure compliance with all company and site safety requirements.
Benefits
Competitive pay, depending on experience.
Medical, dental, vision, 401(k) with company match, among other benefits.
Holidays and paid vacation time.
Extensive learning and development opportunities.
Requirements
High school diploma or equivalent. Higher education degree preferred.
Minimum of two years of experience inspecting, testing, commissioning, or operating electrical distribution systems. Commercial or naval nuclear experience is strongly desired.
OSHA 10-hour Construction Safety training.
Commitment to excellence and high standards.
Data Science Intern
Data scientist job in Kansas City, MO
Position Profile: Data Science Intern
Department: Finance, Strategy & Analytics
Reports to: Senior Data Scientist
Status: Intern, Hourly
JOB SUMMARY: The Data Science Intern will play a key role in supporting the Kansas City Chiefs' Finance, Strategy & Analytics team. You will help enhance our data infrastructure by assisting with potential migration efforts to a modern data platform, supporting updates to connected Plotly Dash applications, and validating data pipelines as needed. You will also contribute to real-time insights, data visualization, and advanced analytics that help transform data into meaningful, actionable decisions across the organization.
JOB RESPONSIBILITIES:
Support the team in evaluating and potentially transitioning data workflows from PostgreSQL to a modern data platform.
Assist in restructuring, optimizing, and validating datasets and queries across evolving data environments.
Help update and maintain Plotly Dash applications, ensuring stable data connections during platform transitions (a minimal illustrative sketch follows this list).
Maintain and enhance reports, dashboards, and analytical tools used by executives and department leaders.
Develop or improve internal applications that support real-time, data-driven decision making.
Assist with analytical projects such as retention modeling, customer segmentation, dynamic pricing, and financial forecasting.
Document data processes, sources, and architectural changes as infrastructure evolves.
Work with internal teams and vendors to ensure data accuracy, timeliness, and reliability.
Support ad-hoc analysis requests and identify opportunities for new projects or process improvements.
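As a minimal illustration of a Plotly Dash application like those the responsibilities describe, a self-contained sketch; the sample dataset and figure are placeholders, not the Chiefs' applications:

```python
# Minimal sketch of a Dash app: one heading and one interactive figure.
# Uses the gapminder sample data bundled with plotly as a placeholder.
import plotly.express as px
from dash import Dash, dcc, html

df = px.data.gapminder().query("year == 2007")

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Demo dashboard"),
    dcc.Graph(figure=px.scatter(
        df, x="gdpPercap", y="lifeExp", size="pop",
        hover_name="country", log_x=True,
    )),
])

if __name__ == "__main__":
    app.run(debug=True)
```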
JOB REQUIREMENTS:
Required Qualifications
Proficiency in Python and familiarity with commonly used analytics and data engineering libraries.
Strong SQL skills and understanding of relational database concepts.
Familiarity with PySpark or an interest in learning distributed computing frameworks.
Interest in modern data platforms, cloud technologies, or large-scale data processing environments.
Ability to troubleshoot API integrations and application data connections.
Ability to combine data from multiple sources and validate data accuracy.
Strong analytical and problem-solving skills.
Ability to collaborate effectively with teammates and stakeholders across the organization.
Ability to work flexible hours, including evenings, weekends, and holidays as needed.
Familiarity with Microsoft Office tools (Excel, PowerPoint, Dynamics CRM).
Preferred Qualifications
0-1 years of experience in a related analytics, engineering, or technical role.
Experience with Plotly Dash (enterprise or open source) or other application frameworks.
Experience with PySpark, Spark SQL, or cloud-based data processing tools such as Databricks, Snowflake, or similar platforms.
Experience with HTML/CSS and JavaScript.
Pursuing or holding a degree in Mathematics, Statistics, Computer Science, Engineering, or a related field.
Experience with Tableau or Power BI.
Experience with Git or other version control systems.
Data Engineer
Data scientist job in Overland Park, KS
The Data Engineer is a key contributor in advancing the firm's data strategy and analytics ecosystem, transforming raw data into actionable insights that drive business decisions. This role requires a technically strong, curious professional committed to continuous learning and innovation. The ideal candidate combines analytical acumen with data engineering skills to ensure reliable, efficient, and scalable data pipelines and reporting solutions.
ESSENTIAL DUTIES AND RESPONSIBILITIES
Data Engineering & Integration
Design, build, and maintain data pipelines and integrations using Azure Data Factory, SSIS, or equivalent ETL/ELT tools.
Automate data imports, transformations, and loads from multiple sources (on-premise, SaaS, APIs, and cloud).
Optimize and monitor data workflows for reliability, performance, and cost efficiency.
Implement and maintain data quality, validation, and error-handling frameworks.
Data Analysis & Reporting
Develop and maintain reporting databases, views, and semantic models for business intelligence solutions.
Design and publish dashboards and visualizations in Power BI and SSRS, ensuring alignment with business KPIs.
Perform ad-hoc data exploration and statistical analysis to support business initiatives.
Collaboration & Governance
Partner with stakeholders across marketing, underwriting, operations, and IT to define analytical and data integration requirements.
Maintain data integrity, enforce governance standards, and promote best practices in data stewardship.
Support data security and compliance initiatives in coordination with IT and business teams.
Continuous Improvement
Stay current with emerging data technologies and analytics practices.
Recommend tools, processes, or automation improvements to enhance data accessibility and insight delivery.
QUALIFICATIONS
Required:
Strong SQL development skills and experience with Microsoft SQL Server and Azure SQL Database.
Hands-on experience with data import, transformation, and integration using Azure Data Factory, SSIS, or similar tools.
Proficiency in building BI solutions using Power BI and/or SSRS.
Strong data modeling and relational database design skills.
Proficiency in Microsoft Excel (advanced formulas, pivot tables, external data connections).
Ability to translate business goals into data requirements and technical solutions.
Excellent communication and collaboration skills.
Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience).
Preferred:
Experience with cloud-based data platforms (Azure Data Lake, Synapse Analytics, Databricks).
Familiarity with version control tools (Git, Azure DevOps) and Agile development practices.
Exposure to Python or PowerShell for data transformation or automation.
Experience integrating data from insurance or financial systems.
Compensation: $120-129K
This hybrid position is located in Overland Park, KS, with 3 days onsite per week.
We look forward to reviewing your application. We encourage everyone to apply, even if every box isn't checked for what you are looking for or what is required.
PDSINC, LLC is an Equal Opportunity Employer.
Data Engineer III
Data scientist job in Kansas City, MO
Who We Are: Spring Venture Group is a leading digital direct-to-consumer sales and marketing company with product offerings focused on the senior market. We specialize in distributing Medicare Supplement, Medicare Advantage, and related products via our family of brands and dedicated team of licensed insurance agents. Powered by our unique technologies that combine sophisticated marketing, comparison shopping, sales execution, and customer engagement - we help thousands of seniors across the country navigate the complex world of Medicare every day.
Job Description
This person has the opportunity to work primarily remotely in the Kansas City or surrounding areas, making occasional visits to the office, but must CURRENTLY be in the Kansas City area.
We are unable to sponsor for this role; this includes international students.
OVERVIEW
The Data Management team is responsible for all things data at Spring Venture Group. Most importantly, our team is responsible for constructing high-quality datasets that enable our business stakeholders and world-class Analytics department to make data-informed decisions. Data engineers, combining Software Engineering and Database Engineering, serve as a primary resource for expertise with writing scripts and SQL queries, monitoring our database stability, and assisting with data governance, ensuring availability for business-critical systems. The DE III works with a team of engineers of varying levels to design, develop, test, and maintain software applications and programs. The DE III will be expected to work independently when needed to solve the most complex problems encountered. They will be expected to be a leader and a mentor.
ESSENTIAL DUTIES
The essential duties for this role include, but are not limited to:
* Serve as a primary advisor to Data Engineering Manager to identify and bring attention to opportunities for technical improvements, reduction of technical debt, or automation of repeated tasks.
* Build advanced data pipelines utilizing the medallion architecture to create high-quality, single-source-of-truth data sources in Snowflake (a minimal illustrative sketch follows this list)
* Architect replacements of current Data Management systems with respect to all aspects of data governance
* Design advanced services with multiple data pipelines to securely and appropriately store company assets in our enterprise data stores.
* Technically advise any member of the data engineering department, providing direction when multiple paths forward present themselves.
* Actively participate as a leader in regular team meetings, listening and ensuring that one is assisting others at every chance for growth and development.
* Write advanced ETL/ELT scripts where appropriate to integrate data of various formats into enterprise data stores.
* Take ownership (both individually and as part of a team) of services and applications
* Write complex SQL queries, scripts, and stored procedures to reliably and consistently modify data throughout our organization according to business requirements
* Collaborate directly and independently with stakeholders to build familiarity, fully understand their needs, and create custom, modular, and reliable solutions to resolve their requests
* Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code
* Work with Project Managers, Solution Architects, and Software Development teams to build solutions for Company Initiatives on time, on budget, and on value.
* Independently architect solutions to problems of high complexity, and advise junior and mid-level engineers on problems of medium complexity.
* Create data pipelines using appropriate and applicable technologies from Amazon Web Services (AWS) to serve the specific needs of the business.
* Ensure 99.95% uptime of our company's services by monitoring data anomalies, batch failures, and our support chat for one week per team cycle from 8am-9pm.
* Follow and embrace procedures of both the Data Management team and SVG Software Development Life Cycle (SDLC), including obtaining and retaining IT Security Admin III clearance.
* Support after hours and weekend releases from our internal Software Development teams.
* Actively participate in code review and weekly technicals with another more senior engineer or manager.
* Assist departments with time-critical SQL execution and debug database performance problems.
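As a minimal illustration of the medallion (bronze/silver/gold) pattern named in the duties above, a sketch using pandas rather than Snowflake; the source, columns, and rules are hypothetical:

```python
# Minimal sketch of a medallion-style flow: raw (bronze) -> cleaned,
# deduplicated (silver) -> business aggregate (gold). Names are hypothetical.
import pandas as pd

# Bronze: land raw data exactly as received.
bronze = pd.read_json("raw_events.json", lines=True)

# Silver: typed, deduplicated single source of truth.
silver = (
    bronze.drop_duplicates(subset="event_id")
          .assign(event_ts=lambda d: pd.to_datetime(d["event_ts"], errors="coerce"))
          .dropna(subset=["event_ts", "customer_id"])
)

# Gold: daily per-customer event counts for analytics consumers.
gold = (
    silver.groupby([silver["event_ts"].dt.date, "customer_id"])
          .size()
          .rename("event_count")
          .reset_index()
)
gold.to_parquet("gold_daily_customer_events.parquet", index=False)
```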
ROLE COMPETENCIES
The competencies for this role include, but are not limited to:
* Emotional Intelligence
* Drive for Results
* Continuous Improvement
* Communication
* Strategic Thinking
* Teamwork and Collaboration
Qualifications
POSITION REQUIREMENTS
The requirements to fulfill this position are as follows:
* Bachelor's degree in Computer Science, or a related technical field.
* 4-7 years of practical production work in Data Engineering.
* Expertise in the Python programming language.
* Expertise in Snowflake.
* Expertise in SQL, databases, and query optimization.
* Must have experience in a large cloud provider such as AWS, Azure, GCP.
* Advanced at reading code independently and understanding its intent.
* Advanced at writing readable, modifiable code that solves business problems.
* Ability to construct reliable and robust data pipelines to support both scheduled and event based workflows.
* Working directly with stakeholders to create solutions.
* Mentoring junior and mid-level engineers on best practices in programming, query optimization, and business tact.
Additional Information
Benefits:
The Company offers the following benefits for this position, subject to applicable eligibility requirements:
* Competitive Compensation
* Medical, Dental and vision benefits after a short waiting period
* 401(k) matching program
* Life Insurance, and Short-term and Long-term Disability Insurance
* Optional enrollment includes HSA/FSA, AD&D, Spousal/Dependent Life Insurance, Travel Assist and Legal Plan
* Generous paid time off (PTO) program starting off at 15 days your first year
* 15 paid holidays (includes holiday break between Christmas and New Year's)
* 10 days of Paid Parental Leave and 5 days of Paid Birth Recovery Leave
* Annual Volunteer Time Off (VTO) and a donation matching program
* Employee Assistance Program (EAP) - health and well-being on and off the job
* Rewards and Recognition
* Diverse, inclusive and welcoming culture
* Training program and ongoing support throughout your Spring Venture Group career
Security Responsibilities:
* Operating in alignment with policies and standards
* Reporting security incidents
* Completing assigned training
* Protecting assigned organizational assets
Spring Venture Group is an Equal Opportunity Employer
Senior Data Engineer
Data scientist job in Overland Park, KS
The Senior Data Engineer will be responsible for building and maintaining the data infrastructure that powers the organization's data-driven decision-making. The role designs, develops, and maintains data pipelines, data warehouses, and other data-related infrastructure, and works closely with data scientists, analysts, and other stakeholders to understand their data needs and translate them into robust and scalable solutions.
Key Responsibilities:
Build, maintain, and optimize data pipelines, including ELT processes, data models, reports, and dashboards to drive business insights.
Develop and implement data solutions for enterprise data warehouses and business intelligence (BI) initiatives.
Continuously monitor and optimize data pipelines for performance, reliability, and cost-effectiveness. This includes identifying bottlenecks, tuning queries, and scaling infrastructure as needed.
Automate data ingestion, processing, and validation tasks to ensure data quality and consistency.
Implement data governance policies and procedures to ensure data quality, consistency, and compliance with relevant regulations.
Contribute to the development of the organization's overall data strategy.
Conduct code reviews and contribute to the establishment of coding standards and best practices.
Required Qualifications:
Bachelor's degree in a relevant field or equivalent professional experience.
4-6 years of hands-on experience in data engineering.
Strong expertise in SQL and NoSQL databases, including PostgreSQL, DynamoDB, and MongoDB.
Experience working with cloud platforms such as GCP, Azure, or AWS and their associated data services.
Practical knowledge of data warehouses like BigQuery, Snowflake, and Redshift.
Programming skills in Python or JavaScript.
Proficiency with BI tools such as Sisense, Power BI, or Tableau.
Preferred Qualifications:
Direct experience with Google Cloud Platform (GCP).
Knowledge of CI/CD pipelines, including tools like Docker and Terraform.
Background in the healthcare industry.
Familiarity with modern data integration tools such as DBT, Matillion, and Airbyte.
Compensation: $125,000.00 per year
Who We Are
CARE ITS is a certified woman-owned and operated minority company (certified as a WMBE). At CARE ITS, we are World Class IT Professionals helping clients achieve their goals. CARE ITS was established in 2010. Since then, we have successfully executed several projects with our expert team of professionals, each with more than 20 years of experience. We operate globally, with our headquarters in Plainsboro, NJ, and focused specialization in Salesforce, Guidewire, and AWS. We provide expert solutions to our customers in various business domains.
Google Cloud Data & AI Engineer
Data scientist job in Kansas City, MO
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, and more (a minimal illustrative sketch follows this list).
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
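As a minimal illustration of the BigQuery piece of such a solution, a sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical, and credentials are assumed to be configured:

```python
# Minimal sketch: run a transformation query and write the result to a
# reporting table. Project/dataset/table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT customer_id, DATE(event_ts) AS event_date, COUNT(*) AS events
    FROM `example-project.raw.events`
    GROUP BY customer_id, event_date
"""
job_config = bigquery.QueryJobConfig(
    destination="example-project.reporting.daily_events",
    write_disposition="WRITE_TRUNCATE",
)
client.query(sql, job_config=job_config).result()  # waits for completion
```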
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
#LI-FB1
Data Engineer II
Data scientist job in Leawood, KS
Full-time Description
27Global is a rapidly growing company in the dynamic industry of software, cloud, and data engineering. We pride ourselves on the quality of the services we deliver, the clients we serve, and the strength of our culture. Our commitment to our employees is evidenced by our five Best Places to Work awards.
We're looking for a Data Engineer to join our team! You'll be responsible for contributing to the design and development of enterprise data solutions that support analytics, business intelligence, and scalable applications. You'll work closely with data and software architects, consultants, and other engineers to deliver data models, integration strategies, and governance practices that empower clients' data-driven decisions.
Joining 27Global as a Data Engineer is an exciting high-growth opportunity offering a competitive base salary, performance bonuses, and variable compensation.
Your Role:
Participate in the design and implementation of scalable, secure, and high-performance data architectures.
Develop and maintain conceptual, logical, and physical data models.
Work closely with architects to define standards for data integration, quality, and governance.
Collaborate with engineers, analysts, and business stakeholders to align data solutions with organizational needs.
Support cloud-based data strategies including data warehousing, pipelines, and real-time processing.
Design and optimize data pipelines that support AI, machine learning, and advanced analytics workloads.
Implement data preprocessing, feature engineering, and real-time inference capabilities for predictive modeling (a minimal illustrative sketch follows this list).
Integrate AI/ML models into production environments using tools such as AWS SageMaker, Azure Machine Learning, or Databricks.
Assess, learn, and apply emerging data technologies and frameworks to enhance solutions and stay current with industry trends.
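For illustration of the data preprocessing and feature engineering work named in the list above, here is a minimal PySpark sketch. This is not 27Global's actual pipeline; the table and column names (lake.raw_events, customer_id, event_ts) are hypothetical placeholders, and only a Spark environment such as Databricks is assumed.

```python
# Illustrative sketch only: a small PySpark preprocessing and
# feature-engineering step. All table/column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()

# Read raw events from a hypothetical lake table.
events = spark.read.table("lake.raw_events")

# Preprocessing: drop malformed rows, derive a date column.
clean = (
    events
    .dropna(subset=["customer_id", "event_ts"])
    .withColumn("event_date", F.to_date("event_ts"))
)

# Feature engineering: per-customer activity aggregates for modeling.
features = (
    clean.groupBy("customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("event_date").alias("active_days"),
    )
)

# Persist for downstream training or real-time inference lookups.
features.write.mode("overwrite").saveAsTable("lake.customer_features")
```

A production version would add schema enforcement, incremental loads, and tests; the sketch only shows the shape of the work.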
Requirements
What You Bring:
BA/BS/Master's degree in Computer Science, Information Systems, Data Science, or related field.
2-4 years of experience in data architecture, data engineering, or related roles delivering scalable architecture solutions from design to production.
2-4 years of experience writing .NET code or code in other OOP languages in an Agile environment.
Demonstrated leadership skills with the ability to collaborate with and lead on-shore and off-shore team members.
Proficient technical skills in: Spark, Scala, C#, PySpark, Data Lake, Delta Lake, relational and NoSQL databases, AWS Glue, and Azure Synapse
Experience with SQL, ETL/ELT, and data modeling.
Experience with cloud platforms (AWS, Azure, GCP) and implementing modern data platforms with data lakes.
Knowledge of data governance, security, and compliance frameworks.
Ability to context-switch and work on a variety of projects over specified periods of time.
Ability to work at the 27Global office in Leawood, KS with hybrid work flexibility after 90 days, and occasionally onsite at client offices.
Flexibility to occasionally travel to client sites may be required, typically 1 week per quarter or less.
Legal authorization to work in the United States and the ability to prove eligibility at the time of hire.
Ways to Stand Out:
Certifications: AWS Solution Architect, Azure Data Engineer, Databricks Data Engineer
Hands-on experience with Databricks for building and optimizing scalable data pipelines, Delta Lake, and Spark-based analytics.
Hands-on experience with big data tools (Spark, Kafka).
Modern data warehouses (Snowflake, Redshift, BigQuery).
Familiarity with machine learning pipelines and real-time analytics.
Strong communication skills and ability to influence stakeholders.
Prior experience implementing enterprise data governance frameworks.
Experience in a client-facing role, working directly with clients from multiple levels of the organization, often presenting and documenting client environment suggestions and improvements.
Why 27G?:
Four-time award winner of Best Place to Work by the Kansas City Business Journal.
A casual and fun small business work environment.
Competitive compensation, benefits, time off, profit sharing, and quarterly bonus potential.
Dedicated time for learning, development, research, and certifications.
Data Engineer
Data scientist job in Overland Park, KS
At Quest Analytics, our mission is to make healthcare more accessible for all Americans. As part of our team, you'll work in an innovative, collaborative, challenging, and flexible environment that supports your personal growth every day. We are looking for a talented and motivated Data Engineer with experience in building scalable infrastructure, implementing automation, and enabling cross-functional teams with reliable and accessible data.
The Data Engineer will run daily operations of the data infrastructure, automate and optimize our data operations and data pipeline architecture while ensuring active monitoring and troubleshooting. This hire will also support other engineers and analysts on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. APPLY TODAY!
What you'll do:
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Review project objectives and determine the best technology for implementation. Implement best practice standards for development, build and deployment automation.
Run daily operations of the data infrastructure and support other engineers and analysts on data investigations and operations.
Monitor and report on all data pipeline tasks while working with appropriate teams to take corrective action quickly, in case of any issues.
Work with internal teams to understand current processes and identify areas for efficiency gains.
Write well-abstracted, reusable, and efficient code.
Participate in the training and/or mentoring programs as assigned or required.
Adhere to the Quest Analytics values and support a positive company culture.
Respond to the needs and requests of clients and Quest Analytics management and staff in a professional and expedient manner.
What it requires:
Bachelor's degree in computer science or related field.
3 years of work experience with ETL, data operations and troubleshooting, preferably in healthcare data.
Proficiency with Azure ecosystems, specifically in Azure Data Factory and ADLS.
Strong proficiency in Python for scripting, automation, and data processing.
Advanced SQL skills for query optimization and data manipulation.
Experience with distributed data pipeline tools like Apache Spark, Databricks, etc.
Working knowledge of database modeling, schema design, and data governance best practices.
Working knowledge of libraries like pandas, NumPy, etc. (a minimal illustrative sketch follows this list).
Self-motivated and able to work in a fast-paced, deadline-oriented environment.
Excellent troubleshooting, listening, and problem-solving skills.
Proven ability to solve complex issues.
Customer focused.
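As a rough illustration of the Python scripting and pandas work listed above, here is a minimal validation-and-cleaning sketch. The feed layout and column names (provider_id, npi, updated_at) are hypothetical placeholders, not details of Quest Analytics' data.

```python
# Illustrative sketch only: load a provider feed, enforce basic quality
# rules, and keep the latest record per provider. Names are hypothetical.
import pandas as pd

def load_and_validate(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["updated_at"])

    # Reject rows missing required identifiers.
    df = df.dropna(subset=["provider_id", "npi"])

    # De-duplicate, keeping the most recent record per provider.
    df = (
        df.sort_values("updated_at")
          .drop_duplicates(subset="provider_id", keep="last")
    )

    # Fail fast so a bad feed is surfaced instead of silently loaded.
    if df.empty:
        raise ValueError(f"No valid rows remained after cleaning {path}")
    return df
```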
What you'll appreciate:
• Workplace flexibility - you choose between remote, hybrid or in-office
• Company paid employee medical, dental and vision
• Competitive salary and success sharing bonus
• Flexible vacation with no cap, plus sick time and holidays
• An entrepreneurial culture that won't limit you to a job description
• Being listened to, valued, appreciated -- and having your contributions rewarded
• Enjoying your work each day with a great group of people
Apply TODAY! careers.questanalytics.com
About Quest Analytics
For more than 20 years, we've been improving provider network management one groundbreaking innovation at a time. 90% of America's health plans use our tools, including the eight largest in the nation. Achieve your personal quest to build a great career here. Visa sponsorship is not available at this time.
Preferred work locations are within one of the following states: Alabama, Arizona, Arkansas, Colorado, Connecticut, Delaware, Florida, Georgia, Idaho, Illinois (outside of Chicago proper), Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Mexico, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, West Virginia, Wisconsin, or Wyoming.
Quest Analytics provides equal employment opportunities to all people without regard to race, color, religion, sex, national origin, ancestry, marital status, veteran status, age, disability, sexual orientation or gender identity or expression or any other legally protected category. We are committed to creating and maintaining a workforce environment that is free from any form of discrimination or harassment.
Applicants must be legally authorized to work in the United States. Verification of employment eligibility will be required at the time of hire.
Persons with disabilities who anticipate needing accommodations for any part of the application process may contact, in confidence, *********************
NOTE: Staffing agencies, headhunters, recruiters, and/or placement agencies, please do not contact our hiring managers directly. We are not currently working with additional outside agencies at this time. Any job posting displayed on websites other than questanalytics.com or jobs.lever.co/questanalytics/ may be out of date, inaccurate, and unavailable.
Senior Data Engineer
Data scientist job in Overland Park, KS
Velocity Staff, Inc. is working with our client located in the Overland Park, KS area to identify a Senior Level Data Engineer to join their Data Services Team. The right candidate will apply their expertise in data warehousing, data pipeline creation/support, and analytical reporting. They will be responsible for gathering and analyzing data from several internal and external sources and designing a cloud-focused data platform for analytics and business intelligence that reliably provides data to our analysts. This role requires a significant understanding of data mining and analytical techniques. An ideal candidate will have strong technical capabilities, business acumen, and the ability to work effectively with cross-functional teams.
Responsibilities
Work with data architects to understand current data models and build pipelines for data ingestion and transformation.
Design, build, and maintain a framework for pipeline observation and monitoring, focusing on reliability and performance of jobs (a minimal illustrative sketch follows this list).
Surface data integration errors to the proper teams, ensuring timely processing of new data.
Provide technical consultation for other team members on best practices for automation, monitoring, and deployments.
Provide technical consultation for the team with “infrastructure as code” best practices: building deployment processes utilizing technologies such as Terraform or AWS CloudFormation.
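As a rough illustration of the pipeline observation and monitoring responsibilities above, here is a minimal sketch of an AWS Lambda handler that records a job outcome as a CloudWatch metric with Boto3 (both technologies are named in the qualifications below). The event shape, namespace, and metric names are hypothetical placeholders.

```python
# Illustrative sketch only: record a pipeline job's outcome as a
# CloudWatch metric so failures can be alarmed on. Names are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # Expect the triggering event to carry a job name and final status.
    job_name = event.get("job_name", "unknown")
    failed = event.get("status") != "SUCCEEDED"

    cloudwatch.put_metric_data(
        Namespace="DataPipelines",
        MetricData=[{
            "MetricName": "JobFailure",
            "Dimensions": [{"Name": "JobName", "Value": job_name}],
            "Value": 1.0 if failed else 0.0,
            "Unit": "Count",
        }],
    )
    return {"job_name": job_name, "failed": failed}
```

A CloudWatch alarm on the JobFailure metric can then notify the proper team, which is one simple way to surface data integration errors as described above.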
Qualifications
Bachelor's degree in computer science, data science or related technical field, or equivalent practical experience
Proven experience with relational and NoSQL databases (e.g. Postgres, Redshift, MongoDB, Elasticsearch)
Experience building and maintaining AWS-based data pipelines; technologies currently utilized include AWS Lambda, Docker/ECS, and MSK
Mid/senior-level development utilizing Python (Pandas/NumPy, Boto3, SimpleSalesforce)
Experience with version control (git) and peer code reviews
Enthusiasm for working directly with customer teams (Business units and internal IT)
Preferred but not required qualifications include:
Experience with data processing and analytics using AWS Glue or Apache Spark
Hands-on experience building data-lake style infrastructures using streaming data set technologies (particularly with Apache Kafka)
Experience with data processing using Parquet and Avro
Experience developing, maintaining, and deploying Python packages
Experience with Kafka and the Kafka Connect ecosystem.
Familiarity with data visualization techniques using tools such as Grafana, Power BI, Amazon QuickSight, and Excel.
Not ready to apply? Connect with us to learn about future opportunities.
Senior Data Engineer
Data scientist job in Overland Park, KS
Company Details
Intrepid Direct Insurance (IDI) is a rapidly growing direct-to-consumer property and casualty insurance company. A member of W. R. Berkley Corporation, a Fortune 500 company rated A+ (Superior) by A.M. Best, Intrepid Direct's vision is to make life better for business. The insurance industry has not evolved with innovation like other major industries. We're here to change that. We are making life better for our customers, shareholders, and our team members by leveraging data and technology as insurance experts for our targeted customers. You will be part of a highly collaborative team of talented and focused professionals. Join a group that enjoys working together, trusts each other, and takes pride in our hard-earned success.
***************************
The Company is an equal employment opportunity employer.
Responsibilities
Intrepid Direct Insurance is looking for an experienced Senior Data Engineer to mentor, orchestrate, implement, and monitor the data flowing through our organization. This opportunity will have a direct influence on how data is made available to our business units, as well as our customers. You'll primarily be working with our operations and engineering teams to create and enhance data pipelines, conform and enrich data, and deliver information to business users. Learn the ins and outs of what we do so that you can focus on improving the availability and quality of the data we use to service our customers.
Key functions include but are not limited to:
Assist with long-term strategic planning for modern data warehousing needs.
Contribute to data modeling exercises and the buildout of our data warehouse.
Monitor, support, and analyze existing pipelines and recommend performance and process improvements to address gaps in existing processes.
Automate manual processes owned by the data team.
Troubleshoot and remediate ingestion and reporting related issues.
Design and build new pipelines to ingest data from additional disparate sources.
Responsible for the accuracy and availability of data in our data warehouse.
Collaborate with a multi-disciplinary team to develop data-driven solutions that align with our business and technical needs.
Create and deploy reports as needed.
Assist with cataloging and classifying existing data sets.
Participate in peer reviews with emphasis on continuous improvement.
Respond to regulatory requests for information.
Assume other tasks and duties as assigned by management.
Mentor team members and advise on best practices.
Qualifications
Bachelor's degree in Mathematics, Statistics, Computer Science, or equivalent experience.
6+ years of relevant data engineering experience.
Analytical thinker with experience working in a fast-paced, startup environment.
Technical expertise with Microsoft SQL Server.
Familiarity with ETL tools and concepts.
Hands-on experience with database design and data modeling, preferably with Data Vault methodology.
Experience supporting and troubleshooting SSIS packages.
Experience consuming event-based data through APIs or queues.
Experience in Agile software development.
Experience with insurance data highly desired.
Detail-oriented, with solid organizational and problem-solving skills.
Strong written, visual, and verbal communication skills.
Team oriented with a strong willingness to serve others in an agile startup environment.
Flexible in assuming new responsibilities as they arise.
Experience with Power BI desired.
Additional Company Details
We do not accept unsolicited resumes from third party recruiting agencies or firms.
The actual salary for this position will be determined by a number of factors, including the scope, complexity and location of the role; the skills, education, training, credentials and experience of the candidate; and other conditions of employment.
Sponsorship Details
Sponsorship is not offered for this role.
Data Analytics Engineer
Data scientist job in Kansas City, MO
At Emprise Bank, everything we do is focused on empowering people to thrive. We proudly work to provide an extraordinary customer experience to help our customers achieve their goals. We are currently seeking a Data Analytics Engineer to join our team in Wichita, KS or Kansas City, MO. As a Data Analytics Engineer, you'll be responsible for administrative, technical, and professional work within the technology department. For Wichita candidates, the role will work on-site with hybrid scheduling; for candidates in the Kansas City metro area, the role will be remote.
A successful candidate will have:
* Confident and articulate communications skills
* Initiative and a strong work ethic
* A strategic mindset
* A demonstrated ability to make sense of complex and sometimes contradictory information to effectively solve problems
* Strong attention to detail
* The ability to work both independently and collaboratively
* An understanding of and commitment to our values
Essential functions of the role:
* Demonstrate a strong understanding of privacy and security principles, particularly as they pertain to data pipelines and datasets
* Develop, test, and maintain data pipelines supporting business intelligence functions
* Collaborate with data analysts to create and optimize datasets for data analysis and reporting
* Maintain documentation of data pipelines, workflows, and data dictionaries for internal reference and ensure it is aligned with data governance protocols
* Develop code for the business logic of operational pipelines, ensuring that data processing aligns with business requirements
* Serve as a liaison for data analysts and data engineers, facilitating communication and collaboration between the two positions to ensure that data needs are met efficiently
* Implement robust data quality checks and transformation processes to ensure data accuracy and consistency (a minimal illustrative sketch follows this list)
Other duties as assigned within the scope of the role
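As a rough illustration of the data quality checks described in the list above, here is a minimal PySpark sketch (the requirements below list PySpark as preferred). The table and rule definitions (silver.accounts, account_id, balance) are hypothetical placeholders.

```python
# Illustrative sketch only: a data-quality gate that halts a pipeline
# when basic rules are violated. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
accounts = spark.read.table("silver.accounts")

# Rule 1: the primary key must be unique.
dupes = accounts.groupBy("account_id").count().filter(F.col("count") > 1)

# Rule 2: balances must be non-negative.
bad_balances = accounts.filter(F.col("balance") < 0)

failures = dupes.count() + bad_balances.count()
if failures:
    # Stopping here keeps bad records out of downstream reports.
    raise RuntimeError(f"{failures} data-quality violations in silver.accounts")
```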
Requirements
* Bachelor's degree in a quantitative field
* Experience with data transformation tooling
* Experience with BI tools (Tableau, Power BI, QlikView, etc.)
* Proficiency with Python and SQL is required
* Strong communication skills and the ability to work with business teams to define metrics, elicit requirements, and explain technical issues to non-technical associates
* Proficiency in PySpark is preferred
* Experience in Azure Cloud is preferred
* Experience with SQL server database is preferred
* Familiarity with medallion data architecture is preferred
* Experience with Data Factory and Databricks is preferred
Benefits
In addition to a competitive salary and benefits, Emprise offers professional growth, a rewarding and challenging environment, opportunities to be involved in our communities, and a culture of integrity, passion, and success. We also offer shift differential pay for bilingual candidates!
At Emprise Bank, empowering people to thrive means having an all-inclusive culture that honors our commitment to all dimensions of diversity in our workforce and embraces inclusion of all people. People of color, women, LGBTQIA+, veterans, and persons with disabilities are encouraged to apply.
To learn more, please visit our website at ********************
Emprise Bank is an EEO/AA/ADA/Veteran Employer/Member FDIC/Drug Free Workplace.
Emprise Bank participates in E-Verify and will provide your Form I-9 to the federal government to confirm authorization to work in the United States.
Principal Data Scientist
Data scientist job in Kansas City, MO
Description & Requirements
We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator, supporting both the enterprise and our clients. We are looking for an accomplished, hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work).
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries.
- Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs, and classical machine learning (a minimal illustrative sketch follows this list).
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments, and their applicability for use in production/operational environments.
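As a rough illustration of the classical machine learning work referenced in the list above, here is a minimal baseline sketch. The posting names no specific library; scikit-learn and the synthetic data below are assumptions made for the example.

```python
# Illustrative sketch only: train and evaluate a simple classical-ML
# baseline. The data here is synthetic, standing in for operational data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data as a placeholder.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A well-understood baseline to beat before reaching for deep learning.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```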
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional programming experience (e.g., Python, R).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or BS in a quantitative discipline (e.g., Math, Physics, Engineering, Economics, Computer Science).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter.
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (standalone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off-the-shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
Background Investigations:
- IRS MBI - Eligibility
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary: $156,740.00
Maximum Salary: $234,960.00
Data Engineer III
Data scientist job in Kansas City, MO
Who We Are: Spring Venture Group is a leading digital direct-to-consumer sales and marketing company with product offerings focused on the senior market. We specialize in distributing Medicare Supplement, Medicare Advantage, and related products via our family of brands and dedicated team of licensed insurance agents. Powered by our unique technologies that combine sophisticated marketing, comparison shopping, sales execution, and customer engagement - we help thousands of seniors across the country navigate the complex world of Medicare every day.
Job Description
This person has the opportunity to work primarily remotely in the Kansas City or surrounding areas, making occasional visits to the office, but must CURRENTLY be in the Kansas City area.
We are unable to sponsor for this role; this includes international students.
OVERVIEW
The Data Management team is responsible for all things data at Spring Venture Group. Most importantly, our team is responsible for constructing high-quality datasets that enable our business stakeholders and world-class Analytics department to make data-informed decisions. Data engineers, combining software engineering and database engineering, serve as a primary resource for expertise in writing scripts and SQL queries, monitoring our database stability, and assisting with data governance to ensure availability for business-critical systems. The DE III works with a team of engineers of varying levels to design, develop, test, and maintain software applications and programs. The DE III will be expected to work independently when needed to solve the most complex problems encountered. They will be expected to be a leader and a mentor.
ESSENTIAL DUTIES
The essential duties for this role include, but are not limited to:
Serve as a primary advisor to the Data Engineering Manager to identify and bring attention to opportunities for technical improvements, reduction of technical debt, or automation of repeated tasks.
Build advanced data pipelines utilizing the medallion architecture to create high-quality, single-source-of-truth data sources in Snowflake (a minimal illustrative sketch follows this list).
Architect replacements of current Data Management systems with respect to all aspects of data governance.
Design advanced services with multiple data pipelines to securely and appropriately store company assets in our enterprise data stores.
Technically advise any member of the data engineering department, providing direction when multiple paths forward present themselves.
Actively participate as a leader in regular team meetings, listening and ensuring that one is assisting others at every chance for growth and development.
Write advanced ETL/ELT scripts where appropriate to integrate data of various formats into enterprise data stores.
Take ownership (both individually and as part of a team) of services and applications.
Write complex SQL queries, scripts, and stored procedures to reliably and consistently modify data throughout our organization according to business requirements.
Collaborate directly and independently with stakeholders to build familiarity, fully understand their needs, and create custom, modular, and reliable solutions to resolve their requests.
Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
Work with Project Managers, Solution Architects, and Software Development teams to build solutions for company initiatives on time, on budget, and on value.
Independently architect solutions to problems of high complexity, and advise junior and mid-level engineers on problems of medium complexity.
Create data pipelines using appropriate and applicable technologies from Amazon Web Services (AWS) to serve the specific needs of the business.
Ensure 99.95% uptime of our company's services, monitoring data anomalies, batch failures, and our support chat for one week per team cycle from 8am-9pm.
Follow and embrace procedures of both the Data Management team and the SVG Software Development Life Cycle (SDLC), including obtaining and retaining IT Security Admin III clearance.
Support after-hours and weekend releases from our internal Software Development teams.
Actively participate in code review and weekly technicals with another more senior engineer or manager.
Assist departments with time-critical SQL execution and debug database performance problems.
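As a rough illustration of the medallion-architecture work named in the duties above, here is a minimal sketch that refines a raw (bronze) landing table into silver and gold tables in Snowflake using the snowflake-connector-python package. The connection parameters, schemas, tables, and columns are hypothetical placeholders, not Spring Venture Group's actual design.

```python
# Illustrative sketch only: bronze -> silver -> gold refinement in
# Snowflake. All connection details and object names are hypothetical.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
)

steps = [
    # Silver: de-duplicate and standardize the raw landing table.
    """
    CREATE OR REPLACE TABLE SILVER.LEADS AS
    SELECT DISTINCT lead_id, LOWER(email) AS email, created_at
    FROM BRONZE.LEADS_RAW
    WHERE lead_id IS NOT NULL
    """,
    # Gold: a business-ready aggregate serving as the single source
    # of truth for daily lead counts.
    """
    CREATE OR REPLACE TABLE GOLD.LEADS_DAILY AS
    SELECT created_at::DATE AS lead_date, COUNT(*) AS lead_count
    FROM SILVER.LEADS
    GROUP BY 1
    """,
]

with conn.cursor() as cur:
    for sql in steps:
        cur.execute(sql)
conn.close()
```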
ROLE COMPETENCIES
The competencies for this role include, but are not limited to:
Emotional Intelligence
Drive for Results
Continuous Improvement
Communication
Strategic Thinking
Teamwork and Collaboration
Qualifications
POSITION REQUIREMENTS
The requirements to fulfill this position are as follows:
Bachelor's degree in Computer Science, or a related technical field.
4-7 years of practical production work in Data Engineering.
Expertise in the Python programming language.
Expertise in Snowflake.
Expertise in SQL, databases, and query optimization.
Must have experience in a large cloud provider such as AWS, Azure, GCP.
Advanced at reading code independently and understanding its intent.
Advanced at writing readable, modifiable code that solves business problems.
Ability to construct reliable and robust data pipelines to support both scheduled and event-based workflows.
Working directly with stakeholders to create solutions.
Mentoring junior and mid-level engineers on best practices in programming, query optimization, and business tact.
Additional Information
Benefits:
The Company offers the following benefits for this position, subject to applicable eligibility requirements:
Competitive Compensation
Medical, Dental and vision benefits after a short waiting period
401(k) matching program
Life Insurance, and Short-term and Long-term Disability Insurance
Optional enrollment includes HSA/FSA, AD&D, Spousal/Dependent Life Insurance, Travel Assist and Legal Plan
Generous paid time off (PTO) program starting off at 15 days your first year
15 paid holidays (includes holiday break between Christmas and New Year's)
10 days of Paid Parental Leave and 5 days of Paid Birth Recovery Leave
Annual Volunteer Time Off (VTO) and a donation matching program
Employee Assistance Program (EAP) - health and well-being on and off the job
Rewards and Recognition
Diverse, inclusive and welcoming culture
Training program and ongoing support throughout your Spring Venture Group career
Security Responsibilities:
Operating in alignment with policies and standards
Reporting Security Incidents
Completing assigned training
Protecting assigned organizational assets
Spring Venture Group is an Equal Opportunity Employer