Palantir Engineer
Data engineer job in Kansas City, MO
Compensation: $60 - $80/hour, depending on experience
Inceed has partnered with a great company to help find a skilled Palantir Engineer to join their team!
Step into a dynamic role as a Senior Data Engineer, where you'll design and build reliable data pipelines, transforming raw data into clean, consumable datasets. Collaborate with product and analytics teams, ensuring data quality and lineage for downstream users. This hands-on position offers the opportunity to implement best practices for data governance and reliability.
Key Responsibilities & Duties:
Design and maintain scalable data pipelines
Collaborate to translate data requirements into solutions
Build data ingestion frameworks using Confluent Kafka
Implement ELT and ETL pipelines using PySpark and SQL
Manage data models and warehousing structures
Ensure data quality through validation mechanisms
Implement data governance practices
Develop automated monitoring and alerting mechanisms
Optimize data processing in GCP environments
Required Qualifications & Experience:
Experience with Palantir Foundry applications
Strong programming skills in Python and PySpark
Proficiency in SQL for data manipulation
Experience with Dataform, Dataproc, and BigQuery
Hands-on experience with Kafka and Confluent
Knowledge of Cloud Scheduler and Dataflow
Understanding of Data Governance principles
Experience using Git for version control
Nice to Have Skills & Experience:
Familiarity with DBT, Machine Learning, and AI concepts
Working knowledge of Infrastructure as Code (IaC)
Perks & Benefits:
Three medical insurance plan options, plus dental and vision insurance
Voluntary and long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Palantir Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Endpoint Engineer (Intune)
Data engineer job in Kansas City, MO
Job Title: Endpoint Engineer (Intune)
Duration: Contract
Must Have Skills:
Endpoint
Intune
Security
Job Description
Administer and support endpoint management platforms, including configuration, deployment, and migration for Windows, macOS, and mobile devices.
Implement and manage policies, configurations, and applications through Microsoft Intune and Autopilot.
Develop scripts (e.g., PowerShell) to automate deployment, patching, and other routine tasks.
Ensure endpoint security through threat detection, remediation, and by adhering to compliance standards.
Provide Level 2/3 support for escalated endpoint-related issues and perform root cause analysis.
Manage and automate the deployment of security patches and software updates.
Handle hardware and software asset management and support identity and access management tools like Azure AD.
PLC/SCADA Engineer
Data engineer job in Saint Joseph, MO
Role: PLC/SCADA Engineer
Description:
Key Skills:
• Allen Bradley & Siemens PLCs (RSLogix, Studio 5000, TIA Portal)
• HMI Development (FactoryTalk View, WinCC)
• Ignition SCADA Platform
• SQL & relational databases
• Kepware / OPC communication
• Strong troubleshooting & client communication skills
• Willingness to travel across the U.S.
Good to Have:
• Manufacturing domain experience
• Scripting (Python, JavaScript in Ignition)
• Strong analytical & documentation skills
Responsibilities:
• Design & develop PLC programs (Allen Bradley / Siemens)
• Configure Kepware & manage OPC communication
• Develop SQL queries for reporting/integration
• Perform on-site commissioning & troubleshooting
• Maintain detailed system and code documentation
Senior Software Engineer Technical Lead
Data engineer job in Overland Park, KS
ABOUT THE OPPORTUNITY
Our client is a profitable, long-standing SaaS platform serving thousands of business users across the U.S. They are entering a major modernization and growth phase and are seeking a hands-on VP of Engineering / Lead Developer to take full ownership of engineering, elevate delivery, and guide the next generation of the product.
This is a rare opportunity to step into a stable SaaS business with real revenue, real customers, and a clear vision - while also being empowered to rebuild, modernize, and scale the platform with autonomy.
WHAT YOU'LL LEAD
The incoming leader will own both day-to-day development and long-term technology direction, including:
Writing high-quality, efficient production code and setting the technical standard
Leading and mentoring a small distributed engineering team
Bringing predictability, clarity, and accuracy to estimates, project planning, and delivery
Steering architecture decisions and modernizing the existing codebase
Owning application reliability, security, and performance as user growth continues
Partnering directly with executive leadership to prioritize initiatives and roadmap investments
Guiding a large-scale platform modernization currently underway
You will be inheriting a strong foundation with significant upside - including a maturing V4 rebuild, a committed customer base, and major opportunities to drive quality, velocity, and engineering discipline.
KEY TECHNICAL ENVIRONMENT
We're looking for someone comfortable leading and building in an environment that includes:
.NET Core (v8) / MVC
SQL & PostgreSQL
Azure (cloud + security)
React
REST APIs / Integrations
Experience navigating both legacy code and modern frameworks is essential.
WHAT WE'RE LOOKING FOR
The ideal candidate brings:
10+ years of professional software development experience
Strong experience shipping SaaS products at scale
Proven leadership capabilities - mentoring, coaching, establishing processes
Ability to translate business priorities into clear technical plans
A reputation for dependable, predictable delivery
Strength in architecture, systems thinking, and modernization
An ownership mindset - someone energized by taking something good and making it great
Bonus points for experience with: AI-driven features, POS integrations, SSO, workflow systems, or high-volume transactional applications.
WHY THIS ROLE STANDS OUT
This opportunity offers the best of both worlds:
Stability: profitable, established, long-tenured customer base
Impact: full ownership of engineering and technical direction
Challenge: modernization, technical debt reduction, and delivery transformation
Growth: platform expansion, new features, and a multi-year product roadmap
Upside: potential equity pathway for the right long-term leader
You'll step into a role where your leadership, organization, and technical strength immediately matter.
HOW TO APPLY
If you're a SENIOR engineering leader who still loves to code - and you're excited by the idea of modernizing and scaling a real SaaS platform - we'd love to connect.
Please apply directly or reach out for a confidential conversation. The President would LOVE to be able to start the new year with a new partner on their team!
NO C2C / SPONSORSHIP OPPORTUNITIES
Senior Dotnet Developer
Data engineer job in Kansas City, MO
Our client is looking for an innovative, modernization-minded engineer to re-envision their entire client experience. This person will join a collaborative, design-driven team with the talent and tenure to thrive. They will help lead the development, implementation, and management of technology-based business solutions, and will design software applications that meet both the functional and technical requirements of the client experience team.
The ideal engineer prioritizes well, communicates clearly, and has a consistent track record of delivery and excellent software engineering skills. They will be able to adapt to new technologies and methodologies as needed.
Duties:
Participate in all phases of the SDLC, including requirements analysis, design, development, testing, deployment, and maintenance.
Develop dynamic and responsive user interfaces using Angular, TypeScript, HTML, CSS, and related front-end frameworks and libraries.
Design and develop robust and scalable back-end services and APIs using C#, ASP.NET MVC, .NET Core, and Web API.
Integrate front-end applications with back-end APIs.
Work with SQL Server databases to design schemas, write queries, and manage data.
Write clean, well-documented, and testable code.
Perform code reviews, refactor code, and ensure adherence to coding standards and best practices.
Implement unit and integration tests to ensure software quality.
Provide technical guidance and mentorship to junior developers, sharing knowledge and promoting best practices.
Percentage of time spent on duties will be as follows:
Software development including database design, solution architecture, and project planning - 80%
Production and incident support - 20%
Required Skills:
9+ years of extensive experience in .NET development, particularly with MVC, Angular, and C#.
Proficiency in Angular and related front-end technologies (TypeScript, HTML, CSS, JavaScript).
Strong understanding of object-oriented programming (OOP) principles and design patterns.
Extensive experience with database systems and SQL.
Familiarity with Git version control system.
Knowledge of Azure cloud platforms.
Data Engineer
Data engineer job in Overland Park, KS
Description
The Data Engineer role within the Cloud Payments Reporting team is responsible for developing, deploying, and maintaining data solutions. Primary responsibilities include querying databases, maintaining data transformation pipelines through ETL tools and systems into data warehouses, and providing and supporting data visualizations, dashboards, and ad hoc and customer-deliverable reports.
Responsibilities
Provide primary support for existing databases, automated jobs, and reports, including monitoring notifications, performing root cause analysis, communicating findings, and resolving issues.
Subject Matter Expert for payments reports, databases, and processes.
Ensure data and report integrity and accuracy through thorough testing and validation.
Build analytical tools to utilize the data, providing actionable insight into key business performance metrics including operational efficiency.
Implement, support, and optimize ETL pipelines, data aggregation processes, and reports using various tools and technologies.
Collaborate with operational leaders and teams to understand reporting and data usage across the business and provide efficient solutions.
Participate in recurring meetings with working groups and management teams to discuss operational improvements.
Work with stakeholders including data, design, product, and executive teams to support their data infrastructure needs while assisting with data-related technical issues.
Handle tasks on your own, adjust to new deadlines, and adapt to changing priorities.
Design, develop and implement special projects, based on business needs.
Perform other job-related duties and responsibilities as assigned.
Qualifications
Five or more years of experience with Oracle, MySQL, Power BI / QuickSight, and Excel.
Thorough knowledge of SQL, relational databases and data modeling principles.
Proficiency in programming languages such as PL/SQL, Bash, PowerShell and Python.
Exceptional problem-solving, analytical, and critical thinking skills.
Excellent interpersonal and communication skills, with the ability to consult with stakeholders to facilitate requirements gathering, troubleshooting, and solution validation.
Detail-oriented with the ability to understand the bigger picture.
Ability to communicate complex quantitative analysis clearly.
Strong organizational skills, including multi-tasking and teamwork.
Self-motivated and task-oriented, with an aptitude for complex problem solving.
Experience with AWS, Jenkins and SnapLogic is a plus.
Experience with data streaming, API calls (SOAP and REST), database replication, and real-time processing is a plus.
Experience with Atlassian JIRA and Confluence is a plus.
Data Scientist / Data Architect / Data Governance Lead
Data engineer job in Kansas City, MO
KEY RESPONSIBILITIES:
Data Governance, Strategy and Architecture
* Define and drive the organization's overall data strategy, roadmap, and architecture vision, including the data and AI architecture vision, strategy, and roadmap, as well as the design of scalable data lakes, data warehouses, and data fabric architectures.
* Establish and enforce data governance policies and standards to ensure data quality, consistency, and compliance with all relevant regulations (e.g., GDPR, CCPA). Lead the implementation of a comprehensive data governance framework, including data quality management, data lineage tracking, and master data management (MDM). Collaborate with data owners and stewards across business units to establish clear roles, responsibilities, and accountability for data assets.
* Establish clear rules and policies governing the responsible usage of data within AI and ML models, including documentation of data lineage for model training. Design data infrastructure specifically optimized for AI workloads, including data pipelines for machine learning models, and architect solutions for large language models (LLMs). Develop bias mitigation strategies to ensure diverse and representative datasets that prevent AI bias, and architect monitoring systems for model drift.
* Evaluate, recommend, and select appropriate data management technologies, including cloud platforms (e.g., AWS, Azure, GCP), storage solutions, and governance tools.
* Architect complex data integration patterns to connect disparate data sources across the organization, ensuring seamless data flow and a unified data view.
Data Security and Privacy
* Design and implement a robust data security architecture to protect sensitive data from unauthorized access, breaches, and corruption.
* Develop security protocols, such as encryption, access controls (IAM), and masking techniques to safeguard data in transit and at rest.
* Conduct regular security audits and vulnerability testing to identify gaps in security architecture and develop remediation plans.
* Ensure the data architecture and its supporting systems are compliant with internal policies and external data protection regulations.
Data Modeling and Management
* Design and maintain conceptual, logical, and physical data models for transactional and analytical systems.
* Oversee the development of database schemas, metadata management, and data cataloging efforts to improve data discoverability and understanding.
* Define and standardize data architecture components, including storage solutions (data lakes, warehouses, etc.), data pipelines, and integration patterns.
* Evaluate and recommend new data technologies, tools, and platforms that align with the organization's strategic needs.
Data Classification
* Design and implement a robust data security architecture, including controls for access management, encryption, and data masking to protect sensitive information.
* Create and manage an organization-wide data classification scheme based on data sensitivity and importance (e.g., public, internal, confidential, restricted).
* Implement technical controls and processes to automatically classify and tag data assets, ensuring proper handling and security.
* Collaborate with business and legal teams to define and apply data classification rules consistently.
Team Collaboration and Leadership
* Provide technical guidance and mentorship to data engineers, analysts, developers, and other IT teams on best practices for data management and security.
* Work closely with business stakeholders to understand their data requirements and translate them into effective architectural solutions.
* Foster a data-centric culture across the organization, promoting awareness and understanding of data governance principles.
ABOUT THE COMPANY:
Bluebird Fiber is a premier fiber telecommunications provider of internet, data transport, and other services to carriers, businesses, schools, hospitals, and other enterprises in the Midwest. To learn more, please visit bluebirdfiber.com.
Join an amazing team of telecommunication professionals! Bluebird is a dynamic growing company in need of a Data Architect to be a part of a collaborative team. This is a full-time, benefit eligible position in our Kansas City Office. All of us at Bluebird work hard to meet objectives for the organization and live the mission and values of this growing company to meet a common goal. Check out this video that highlights our amazing company culture.
JOB SUMMARY:
We are seeking a highly skilled and strategic Data Architect to lead our data governance, security, and management initiatives. This senior role will be responsible for designing and implementing the organization's enterprise data architecture, ensuring that our data is secure, reliable, and accessible for business-critical functions. The ideal candidate is a proactive leader who can define data strategy, enforce best practices, and collaborate with cross-functional teams to align our data ecosystem with business goals.
REQUIRED QUALIFICATIONS:
* Bachelor's or master's degree in Computer Science, Information Technology, or a related technical field.
* 10+ years of hands-on experience in data architecture, data modeling, and data governance, with a proven track record of designing and implementing complex data ecosystems. Experience working in regulated industries is a plus.
* Proven experience (8+ years) designing and implementing enterprise-level data architectures.
* Extensive experience with data modeling, data warehousing, and modern data platforms (e.g., cloud environments like AWS, Azure, or GCP).
* Deep expertise in data modeling, data warehousing, database technologies (SQL, NoSQL), big data technologies (e.g., Spark), and modern cloud platforms (e.g., AWS, Azure, GCP).
* Deep expertise in data governance and security principles, including regulatory compliance frameworks.
* Strong knowledge of how to structure data for machine learning and AI workloads, including experience with MLOps platforms.
* Hands-on experience with data classification and data cataloging tools (e.g., Collibra, Alation).
* Excellent communication, interpersonal, and leadership skills, with the ability to influence and build consensus across the organization.
PREFERRED QUALIFICATIONS:
* Professional certifications in data architecture, data governance, or cloud platforms.
* Experience with big data technologies (e.g., Hadoop, Spark).
* Familiarity with data integration and ETL/ELT frameworks.
Principal Data Scientist
Data engineer job in Kansas City, KS
Description & Requirements
We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work).
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments, and their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional Programming experience (e.g. Python, R, etc.).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or BS in a quantitative discipline (e.g., Math, Physics, Engineering, Economics, Computer Science).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter.
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
Background Investigations:
- IRS MBI - Eligibility
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process - including accessing job postings, completing assessments, or participating in interviews - please contact People Operations at **************************.
Minimum Salary: $156,740.00
Maximum Salary: $234,960.00
Data Engineer
Data engineer job in Overland Park, KS
The Data Engineer is a key contributor in advancing the firm's data strategy and analytics ecosystem, transforming raw data into actionable insights that drive business decisions. This role requires a technically strong, curious professional committed to continuous learning and innovation. The ideal candidate combines analytical acumen with data engineering skills to ensure reliable, efficient, and scalable data pipelines and reporting solutions.
ESSENTIAL DUTIES AND RESPONSIBILITIES
Data Engineering & Integration
Design, build, and maintain data pipelines and integrations using Azure Data Factory, SSIS, or equivalent ETL/ELT tools.
Automate data imports, transformations, and loads from multiple sources (on-premise, SaaS, APIs, and cloud).
Optimize and monitor data workflows for reliability, performance, and cost efficiency.
Implement and maintain data quality, validation, and error-handling frameworks.
Data Analysis & Reporting
Develop and maintain reporting databases, views, and semantic models for business intelligence solutions.
Design and publish dashboards and visualizations in Power BI and SSRS, ensuring alignment with business KPIs.
Perform ad-hoc data exploration and statistical analysis to support business initiatives.
Collaboration & Governance
Partner with stakeholders across marketing, underwriting, operations, and IT to define analytical and data integration requirements.
Maintain data integrity, enforce governance standards, and promote best practices in data stewardship.
Support data security and compliance initiatives in coordination with IT and business teams.
Continuous Improvement
Stay current with emerging data technologies and analytics practices.
Recommend tools, processes, or automation improvements to enhance data accessibility and insight delivery.
QUALIFICATIONS
Required:
Strong SQL development skills and experience with Microsoft SQL Server and Azure SQL Database.
Hands-on experience with data import, transformation, and integration using Azure Data Factory, SSIS, or similar tools.
Proficiency in building BI solutions using Power BI and/or SSRS.
Strong data modeling and relational database design skills.
Proficiency in Microsoft Excel (advanced formulas, pivot tables, external data connections).
Ability to translate business goals into data requirements and technical solutions.
Excellent communication and collaboration skills.
Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience).
Preferred:
Experience with cloud-based data platforms (Azure Data Lake, Synapse Analytics, Databricks).
Familiarity with version control tools (Git, Azure DevOps) and Agile development practices.
Exposure to Python or PowerShell for data transformation or automation.
Experience integrating data from insurance or financial systems.
Compensation: $120-129K
This position is 3 days onsite/hybrid located in Overland Park, KS
We look forward to reviewing your application. We encourage everyone to apply - even if you don't check every box for what is required or what you're looking for.
PDSINC, LLC is an Equal Opportunity Employer.
Senior Data Engineer
Data engineer job in Overland Park, KS
The Senior Data Engineer will be responsible for building and maintaining the data infrastructure that powers the organization's data-driven decision-making. This role designs, develops, and maintains data pipelines, data warehouses, and other data-related infrastructure, and works closely with data scientists, analysts, and other stakeholders to understand their data needs and translate them into robust and scalable solutions.
Key Responsibilities:
Build, maintain, and optimize data pipelines, including ELT processes, data models, reports, and dashboards to drive business insights.
Develop and implement data solutions for enterprise data warehouses and business intelligence (BI) initiatives.
Continuously monitor and optimize data pipelines for performance, reliability, and cost-effectiveness. This includes identifying bottlenecks, tuning queries, and scaling infrastructure as needed.
Automate data ingestion, processing, and validation tasks to ensure data quality and consistency.
Implement data governance policies and procedures to ensure data quality, consistency, and compliance with relevant regulations.
Contribute to the development of the organization's overall data strategy.
Conduct code reviews and contribute to the establishment of coding standards and best practices.
Required Qualifications:
Bachelor's degree in a relevant field or equivalent professional experience.
4-6 years of hands-on experience in data engineering.
Strong expertise in SQL and NoSQL databases, including PostgreSQL, DynamoDB, and MongoDB.
Experience working with cloud platforms such as GCP, Azure, or AWS and their associated data services.
Practical knowledge of data warehouses like BigQuery, Snowflake, and Redshift.
Programming skills in Python or JavaScript.
Proficiency with BI tools such as Sisense, Power BI, or Tableau.
Preferred Qualifications:
Direct experience with Google Cloud Platform (GCP).
Knowledge of CI/CD pipelines, including tools like Docker and Terraform.
Background in the healthcare industry.
Familiarity with modern data integration tools such as DBT, Matillion, and Airbyte.
Compensation: $125,000.00 per year
Who We Are
CARE ITS is a certified woman-owned and operated minority company (certified as WMBE). At CARE ITS, we are the World Class IT Professionals, helping clients achieve their goals. CARE ITS was established in 2010. Since then we have successfully executed several projects with our expert team of professionals, each with more than 20 years of experience. We operate globally with our headquarters in Plainsboro, NJ, with focused specialization in Salesforce, Guidewire, and AWS. We provide expert solutions to our customers in various business domains.
Sr. Data Engineer
Data engineer job in Overland Park, KS
At Quest Analytics, our mission is to make healthcare more accessible for all Americans. As part of our team, you'll work in an innovative, collaborative, challenging, and flexible environment that supports your personal growth every day. We are looking for a talented and motivated Senior Data Engineer with experience in building scalable infrastructure, implementing automation, and enabling cross-functional teams with reliable and accessible data.
The Senior Data Engineer will help modernize and scale our data environment. This person will play a key role in transforming these workflows into automated, cloud-based pipelines using Azure Data Factory, Databricks, and modern data platforms. If you are looking for a high-impact opportunity to shape how data flows across the business, APPLY TODAY!
What you'll do:
Identify, design, and implement internal process improvements (e.g., automating manual processes, optimizing data delivery, and re-designing infrastructure for scalability).
Transform manual SQL/SSMS/stored procedure workflows into automated pipelines using Azure Data Factory.
Write clean, reusable, and efficient code in Python (and optionally C# or Scala).
Leverage distributed data tools such as Spark and Databricks for large-scale processing.
Review project objectives to determine and implement the most suitable technologies.
Apply best practice standards for development, build, and deployment automation.
Manage day-to-day operations of the data infrastructure and support engineers and analysts with data investigations.
Monitor and report on data pipeline tasks, collaborating with teams to resolve issues quickly.
Partner with internal teams to analyze current processes and identify efficiency opportunities.
Participate in training and mentoring programs as assigned or required.
Uphold Quest Analytics values and contribute to a positive company culture.
Respond professionally and promptly to client and internal requests.
Perform other duties as assigned.
What it requires:
Bachelor's Degree in Computer Science or equivalent education/experience.
3-5 years of experience with ETL, data operations, and troubleshooting, preferably in Healthcare data.
Strong SQL development skills (SSMS, stored procedures, and optimization).
Proficiency in Python, C#, or Scala (experience with pandas and NumPy is a plus).
Solid understanding of the Azure ecosystem, especially Azure Data Factory and Azure Data Lake Storage (ADLS).
Hands-on experience with Azure Data Factory and ADLS.
Familiarity with Spark, Databricks, and data modeling techniques.
Experience working with both relational databases (e.g., SQL Server) and NoSQL databases (e.g., MongoDB).
Self-motivated, strong problem-solver, and thrives in fast-paced environments.
Excellent troubleshooting, listening, and analytical skills.
Customer-focused mindset with a collaborative, team-oriented approach.
We are not currently engaging with outside agencies on this role. Visa sponsorship is not available at this time.
What you'll appreciate:
• Workplace flexibility - you choose between remote, hybrid, or in-office
• Company-paid employee medical, dental, and vision
• Competitive salary and success sharing bonus
• Flexible vacation with no cap, plus sick time and holidays
• An entrepreneurial culture that won't limit you to a job description
• Being listened to, valued, appreciated - and having your contributions rewarded
• Enjoying your work each day with a great group of people
Apply TODAY! careers.questanalytics.com
About Quest Analytics
For more than 20 years, we've been improving provider network management one groundbreaking innovation at a time. 90% of America's health plans use our tools, including the eight largest in the nation. Achieve your personal quest to build a great career here. Visa sponsorship is not available at this time.
Preferred work locations are within one of the following states: Alabama, Arizona, Arkansas, Colorado, Connecticut, Delaware, Florida, Georgia, Idaho, Illinois (outside of Chicago proper), Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Mexico, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, West Virginia, Wisconsin, or Wyoming.
Quest Analytics provides equal employment opportunities to all people without regard to race, color, religion, sex, national origin, ancestry, marital status, veteran status, age, disability, sexual orientation or gender identity or expression or any other legally protected category. We are committed to creating and maintaining a workforce environment that is free from any form of discriminations or harassment.
Applicants must be legally authorized to work in the United States. Verification of employment eligibility will be required at the time of hire.
Persons with disabilities who anticipate needing accommodations for any part of the application process may contact, in confidence *********************
NOTE: Staffing agencies, headhunters, recruiters, and/or placement agencies, please do not contact our hiring managers directly. We are not currently working with additional outside agencies at this time. Any job posting displayed on websites other than questanalytics.com or jobs.lever.co/questanalytics/ may be out of date, inaccurate, or unavailable.
Data Engineer II
Data engineer job in Leawood, KS
Full-time
Description
27Global is a rapidly growing company in the dynamic industry of software, cloud, and data engineering. We pride ourselves on the quality of the services we deliver, the clients we serve, and the strength of our culture. Our commitment to our employees is evidenced by our five Best Places to Work awards.
We're looking for a Data Engineer to join our team! You'll be responsible for contributing to the design and development of enterprise data solutions that support analytics, business intelligence, and scalable applications. You'll work closely with data and software architects, consultants, and other engineers to deliver data models, integration strategies, and governance practices that empower clients' data-driven decisions.
Joining 27Global as a Data Engineer is an exciting high-growth opportunity offering a competitive base salary, performance bonuses, and variable compensation.
Your Role:
Participate in the design and implementation of scalable, secure, and high-performance data architectures.
Develop and maintain conceptual, logical, and physical data models.
Work closely with architects to define standards for data integration, quality, and governance.
Collaborate with engineers, analysts, and business stakeholders to align data solutions with organizational needs.
Support cloud-based data strategies including data warehousing, pipelines, and real-time processing.
Design and optimize data pipelines that support AI, machine learning, and advanced analytics workloads.
Implement data preprocessing, feature engineering, and real-time inference capabilities for predictive modeling.
Integrate AI/ML models into production environments using tools such as AWS SageMaker, Azure Machine Learning, or Databricks.
Assess, learn, and apply emerging data technologies and frameworks to enhance solutions and stay current with industry trends.
Requirements
What You Bring:
BA/BS/Master's degree in Computer Science, Information Systems, Data Science, or related field.
2 - 4 years of experience in data architecture, data engineering, or related roles delivering scalable architecture solutions from design to production.
2 - 4 years of experience writing .NET or other OOP-language code in an Agile environment.
Demonstrated leadership skills with the ability to collaborate with and lead on-shore and off-shore team members.
Proficient technical skills in: Spark, Scala, C#, PySpark, Data Lake, Delta Lake, Relational and NoSQL Databases, AWS Glue and Azure Synapse
Experience with SQL, ETL/ELT, and data modeling.
Experience with cloud platforms (AWS, Azure, GCP) and implementing modern data platforms with data lake.
Knowledge of data governance, security, and compliance frameworks.
Ability to context switch and work on a variety of projects over specified periods of time.
Ability to work at the 27Global office in Leawood, KS with hybrid work flexibility after 90 days, and occasionally onsite at client offices.
Flexibility to occasionally travel to client sites may be required, typically 1 week per quarter or less.
Legal authorization to work in the United States and the ability to prove eligibility at the time of hire.
Ways to Stand Out:
Certifications: AWS Solution Architect, Azure Data Engineer, Databricks Data Engineer
Hands-on experience with Databricks for building and optimizing scalable data pipelines, Delta Lake, and Spark-based analytics.
Hands-on experience with big data tools (Spark, Kafka).
Modern data warehouses (Snowflake, Redshift, BigQuery).
Familiarity with machine learning pipelines and real-time analytics.
Strong communication skills and ability to influence stakeholders.
Prior experience implementing enterprise data governance frameworks.
Experience in a client-facing role, working directly with clients from multiple levels of the organization; often presenting and documenting client environment suggestions and improvements.
Why 27G?:
Four-time award winner of Best Place to Work by the Kansas City Business Journal.
A casual and fun small business work environment.
Competitive compensation, benefits, time off, profit sharing, and quarterly bonus potential.
Dedicated time for learning, development, research, and certifications.
Principal Data Engineer
Data engineer job in Lenexa, KS
About the Role: weavix is seeking a hands-on, business-minded Senior or Principal Data Engineer to architect and own our data infrastructure from the ground up. This is a unique opportunity to shape the future of data at a high-growth startup where IoT, scale, and performance are core to our mission.
You'll be the technical lead for everything data - building pipelines, architecting systems, and working cross-functionally to extract insights that power customer growth, analyze user behavior, and improve system reliability and performance. This is a highly autonomous role, perfect for someone with startup experience who enjoys solving complex problems independently.
What You'll Do:
Architect, build, and maintain scalable data systems and pipelines to ingest and process large-scale data from IoT devices and user activity
Own the design and implementation of our cloud-based data platform (Microsoft Azure strongly preferred; GCP or AWS also acceptable)
Enable data-driven decision-making across product, engineering, and business teams
Create a data architecture that supports both operational and analytical use cases (growth analytics, performance monitoring, system scaling)
Ensure data quality, observability, governance, and security across all systems
Serve as the subject matter expert on data systems, operating as a senior IC without a team initially
What You Bring:
6+ years of experience in data engineering, ideally within a startup or high-growth environment
Proven ability to independently design, implement, and manage scalable data architectures
Deep experience working with large datasets, ideally from IoT sources or other high-volume systems
Proficiency with modern data tools and languages (e.g., TypeScript, Node.js, SQL, etc.)
Strong cloud experience, ideally with Microsoft Azure (but AWS or GCP also acceptable)
A business-focused mindset with the ability to connect technical work to strategic outcomes
Experience with New Relic, Metabase, Postgres, Grafana, Azure Storage, MongoDB, or other storage, database, graphing, or alerting platforms.
Excellent communication and collaboration skills across technical and non-technical teams
Bonus Points For:
Experience with event-driven or real-time data systems (Kafka, Kinesis, etc.)
Familiarity with BI tools and self-service analytics platforms
Background in system performance monitoring and observability tools
Why weavix
Being a part of the weavix team is being a part of something bigger. We value the innovators and the risk-takers - the ones who love a challenge. Through our shared values and dedication to our mission to Connect every Disconnected Worker, we're reshaping the future of work to focus on this world's greatest asset: people.
It's truly amazing what happy, engaged team members can achieve. Our ever-evolving list of benefits means you'll be able to achieve work/life balance, perform impactful work, grow in your role, look after yourself/your family, and invest in your future.
Perks and Benefits
Competitive Compensation
Employee Equity Stock Program
Competitive Benefits Package including: Medical, Dental, Vision, Life, and Disability Insurance
401(k) Retirement Plan + Company Match
Flexible Spending & Health Savings Accounts
Paid Holidays
Flexible Time Off
Employee Assistance Program (EAP)
Other exciting company benefits
About Us
weavix, the Internet of Workers platform, revolutionizes frontline communication and productivity at scale. Since its founding, weavix has shaped the future of work by introducing innovative methods to better connect and enable the frontline workforce. weavix transforms the enterprise by providing data-driven insights into facilities and teams to maximize productivity and achieve breakthrough results. weavix is the single source of truth for both workers and executives.
Our mission is simple: to connect every disconnected worker through disruptive technology.
How do you want to make your impact?
For more information about us, visit weavix.com.
Equal Employment Opportunity (EEO) Statement
weavix is an Equal Opportunity Employer. At weavix, diversity fuels innovation. We are dedicated to fostering an inclusive environment where every team member is empowered to contribute to our mission of connecting the disconnected workforce.
We do not discriminate on the basis of race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), national origin, age, disability, veteran status, genetic information, or any other legally protected characteristic. All qualified applicants will receive consideration for employment.
Americans with Disabilities Act (ADA) Statement
weavix is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need assistance or an accommodation during the application process due to a disability, you may contact us at *************.
E-Verify Notice
Notice: weavix participates in the E-Verify program to confirm employment eligibility as required by law.
Slalom Flex (Project Based) - Java Data Engineer
Data engineer job in Kansas City, MO
About the Role: We are seeking a highly skilled and motivated Data Engineer to join our team as an individual contributor. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that support our data-driven initiatives. You will work closely with cross-functional teams to ensure data availability, quality, and performance across the organization.
About Us
Slalom is a purpose-led, global business and technology consulting company. From strategy to implementation, our approach is fiercely human. In six countries and 43 markets, we deeply understand our customers - and their customers - to deliver practical, end-to-end solutions that drive meaningful impact. Backed by close partnerships with over 400 leading technology providers, our 10,000+ strong team helps people and organizations dream bigger, move faster, and build better tomorrows for all. We're honored to be consistently recognized as a great place to work, including being one of Fortune's 100 Best Companies to Work For seven years running. Learn more at slalom.com.
Key Responsibilities:
* Design, develop, and maintain robust data pipelines using Java and Python.
* Build and optimize data workflows on AWS using services such as EMR, Glue, Lambda, and NoSQL databases.
* Leverage open-source frameworks to enhance data processing capabilities and performance.
* Collaborate with data scientists, analysts, and other engineers to deliver high-quality data solutions.
* Participate in Agile development practices, including sprint planning, stand-ups, and retrospectives.
* Ensure data integrity, security, and compliance with internal and external standards.
Required Qualifications:
* 5+ years of hands-on experience in software development using Java and Python (Spring Boot).
* 1+ years of experience working with AWS services including EMR, Glue, Lambda, and NoSQL databases.
* 3+ years of experience working with open-source data processing frameworks (e.g., Apache Spark, Kafka, Airflow).
* 2+ years of experience in Agile software development environments.
* Strong problem-solving skills and the ability to work independently in a fast-paced environment.
* Excellent communication and collaboration skills.
Preferred Qualifications:
* Experience with CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
* Familiarity with data governance and data quality best practices.
* Exposure to data lake and data warehouse architectures.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements.
Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
Senior Data Engineer
Data engineer job in Overland Park, KS
Company Details
Intrepid Direct Insurance (IDI) is a rapidly growing direct-to-consumer property and casualty insurance company. A member of the W. R. Berkley Corporation, a Fortune 500 company rated A+ (Superior) by A.M. Best, Intrepid Direct's vision is to make life better for business. The insurance industry has not evolved with innovation like other major industries. We're here to change that. We are making life better for our customers, shareholders, and our team members by leveraging data and technology as insurance experts for our targeted customers. You will be part of a highly collaborative team of talented and focused professionals. Join a group that enjoys working together, trusts each other, and takes pride in our hard-earned success.
***************************
The Company is an equal employment opportunity employer.
Responsibilities
Intrepid Direct Insurance is looking for an experienced Senior Data Engineer to mentor, orchestrate, implement, and monitor the data flowing through our organization. This opportunity will have a direct influence on how data is made available to our business units, as well as our customers. You'll primarily be working with our operations and engineering teams to create and enhance data pipelines, conform and enrich data, and deliver information to business users. Learn the ins and outs of what we do so that you can focus on improving availability and quality of the data we use to service our customers.
Key functions include but are not limited to:
Assist with long-term strategic planning for modern data warehousing needs.
Contribute to data modeling exercises and the buildout of our data warehouse.
Monitor, support, and analyze existing pipelines and recommend performance and process improvements to address gaps in existing process.
Automate manual processes owned by data team.
Troubleshoot and remediate ingestion and reporting related issues.
Design and build new pipelines to ingest data from additional disparate sources.
Responsible for the accuracy and availability of data in our data warehouse.
Collaborate with a multi-disciplinary team to develop data-driven solutions that align with our business and technical needs.
Create and deploy reports as needed.
Assist with cataloging and classifying existing data sets.
Participate in peer reviews with emphasis on continuous improvement.
Respond to regulatory requests for information.
Assume other tasks and duties as assigned by management.
Mentor team members and advise on best practices.
Qualifications
Bachelor's degree in Mathematics, Statistics, Computer Science, or equivalent experience.
6+ years of relevant data engineering experience.
Analytical thinker with experience working in a fast-paced, startup environment.
Technical expertise with Microsoft SQL Server.
Familiarity with ETL tools and concepts.
Hands-on experience with database design and data modeling, preferably with Data Vault methodology.
Experience supporting and troubleshooting SSIS packages.
Experience consuming event-based data through APIs or queues.
Experience in Agile software development.
Experience with insurance data highly desired.
Detail-oriented, with solid organizational and problem-solving skills.
Strong written, visual, and verbal communication skills.
Team oriented with a strong willingness to serve others in an agile startup environment.
Flexible in assuming new responsibilities as they arise.
Experience with Power BI desired.
Additional Company Details
We do not accept unsolicited resumes from third party recruiting agencies or firms.
The actual salary for this position will be determined by a number of factors, including the scope, complexity and location of the role; the skills, education, training, credentials and experience of the candidate; and other conditions of employment.
Sponsorship Details
Sponsorship is not offered for this role.
Corporate Treasury Data & Risk Analytics
Data engineer job in Overland Park, KS
We are seeking a driven and analytically minded professional to join our Corporate Treasury team. This individual will play a key role supporting asset/liability management, liquidity management, budgeting & forecasting, data analytics, and performance analysis/reporting.
In this role, you will work closely with senior and executive leadership to deliver strategic financial insights, optimize business performance, support and influence decision-making, uncover data-driven stories, and challenge existing processes with fresh, innovative thinking.
Essential Duties & Responsibilities
Responsibilities will be tailored to the experience and skillset of the selected candidate and may include:
* Developing and enhancing financial models and simulations
* Supporting forecasting, liquidity, and ALM analytics
* Conducting "what-if" scenario analysis and presenting actionable insights
* Building dashboards, reporting tools, and performance summaries
* Driving or contributing to process improvement initiatives
* Collaborating cross-functionally with senior leaders across the organization
Experience & Knowledge
* Financial modeling and earnings simulation experience using risk/performance management tools
* Designing and developing mathematical or statistical models to support strategic decision-making and risk management
* Experience running scenario analysis and synthesizing insights for executive audiences
* Familiarity with financial asset/liability instruments, market instruments, and their interactions
* Experience with Funds Transfer Pricing (FTP) and capital allocation is a plus
* Demonstrated success driving effective process improvements
Education
* Bachelor's degree in Accounting, Finance, or a related field required
CapFed is an equal opportunity employer.
Data Engineer, Mid-Level (Leawood, KS & Arlington, VA)
Data engineer job in Leawood, KS
Become Part of a Meaningful Mission
Torch.AI is a defense-focused AI-software company on a mission to become the leading provider of critical data infrastructure for U.S. Defense and National Security. We deliver advanced AI and data software capabilities directly to customer mission owners to meet flexible, user-defined specifications and enable a decision advantage for the warfighter. We're passionate about solving complex problems that improve national security, support our warfighters, and protect our nation. Join us in our mission to help organizations Unlock Human Potential.
The U.S. defense and national security industry offers an unparalleled opportunity to contribute to the safety and well-being of the nation while engaging with cutting-edge technologies. As a vital sector that shapes global stability, it offers a dynamic environment to tackle complex challenges across multidisciplinary domains. With substantial investment in innovation, the industry is at the forefront of developing AI, autonomous systems, and advanced national security solutions, each founded on the premise that information is the new battlefield. If this type of work is of interest, we'd love to hear from you.
The Environment: Unlock Your Potential
As a Data Engineer at Torch.AI, you will be at the forefront of building software that scales across Torch.AI's platform capabilities. Your software will be deployed across an array of operational and research & development efforts for mission-critical customer programs and projects.
Each of our customers requires unique technical solutions to enable an asymmetric advantage on the battlefield. Torch.AI's patented software helps remove common obstacles such as manual-intensive data processing, parsing, and analysis, thereby reducing cognitive burden for the warfighter. Our end-to-end data processing, orchestration, and fusion platform supports a wide variety of military use cases, domains, operations, and echelons. Customers enjoy enterprise-grade capabilities that meet specialized needs.
Torch.AI encourages company-wide collaboration to share context, skills, and expertise across a variety of tools, technologies, and development practices. You'll work autonomously while driving coordinated, collaborative decisions across cross-functional teams composed of defense and national security experts, veterans, business leaders, and experienced software engineers. Your code will advance back-end data orchestration and graph-compute capabilities to deliver elegant data and intelligence products. You will have the opportunity to harden and scale existing platform capabilities, tools, and technologies, while also working to innovate and introduce new iterative capabilities and features that benefit our company and customers.
Successful candidates thrive in a fast-paced, entrepreneurial, and mission-driven environment. We hire brilliant patriots. You'll be encouraged to think creatively, challenge conventional thinking, and identify alternative approaches for delivering value to customers across complex problem sets. Your day-to-day will vary, adapting to the requirements of our customers and the technical needs of respective use cases. One day, you may be supporting the development of a new proof of capability concept for a new customer program; another you may be focused on optimizing system performance to help scale a production deployment; the next you may be working directly with customers to understand their requirements with deep intellectual curiosity.
Our flat operating model puts every employee at the forefront of our customers' missions.
We value customer intimacy, unique perspectives, and dedication to delivering lasting impact and results. You'll have the opportunity to work on the frontlines of major customer programs and influence lasting success for Torch.AI and your teammates.
You'll have the opportunity to gain experience across a wide range of projects and tasks, from designing and demonstrating early capabilities and prototypes to deploying large-scale mission systems.
You'll contribute directly to Torch.AI's continued position as a market leader in data infrastructure AI as we compete against multi-billion-dollar incumbents and high-tech AI companies.
Responsibilities
Design, build, and maintain scalable data pipelines using tools like Apache NiFi, Airflow, or equivalent orchestration systems.
Work with structured and semi-structured data using SQL and NoSQL systems (e.g., PostgreSQL, MongoDB, Elasticsearch, Neo4j).
Develop services and integrations using Java (primary) and optionally Python for ETL workflows and data transformation.
Integrate data from internal and external REST APIs; handle data format translation (e.g., JSON, Parquet, Avro), as sketched after this list.
Optimize data flows for reliability and performance, and support large-scale batch and streaming data jobs.
Implement and document ETL mappings, schemas, and transformation logic aligned with mission use cases.
Collaborate with software, DevOps, and AI teams to support downstream data science and ML workflows.
Use Git-based workflows and participate in CI/CD processes for data infrastructure deployments.
Contribute to application specifications, data quality checks, and internal documentation.
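As a rough, hypothetical illustration of the data-format translation responsibility above: the posting names Java as the primary language, but this sketch uses Python only to keep the examples in this document consistent, and it assumes pandas with pyarrow is available. The file names and pipeline context are invented, not Torch.AI code.

# Hypothetical sketch: translate newline-delimited JSON into Parquet.
import json
import pandas as pd  # assumes pandas and pyarrow are installed

def jsonl_to_parquet(src_path: str, dest_path: str) -> int:
    """Read NDJSON records, flatten nested fields, and write Parquet."""
    with open(src_path) as fh:
        records = [json.loads(line) for line in fh if line.strip()]
    frame = pd.json_normalize(records)      # flatten nested JSON objects
    frame.to_parquet(dest_path, engine="pyarrow", index=False)
    return len(frame)

if __name__ == "__main__":
    written = jsonl_to_parquet("events.jsonl", "events.parquet")
    print(f"wrote {written} records")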
What We Value
B.S. degree in Computer Science, Technology, Engineering, or a relevant field.
4-6 years of experience in data engineering, backend software engineering, or data integration roles.
Strong experience with Java development in data pipeline or ETL contexts; Python is a plus.
Proficiency with SQL and NoSQL databases, including query optimization and large dataset processing.
Familiarity with data integration tools such as Apache NiFi, Airflow, or comparable platforms.
Knowledge of RESTful API interactions, JSON parsing, and schema transformations.
Exposure to cloud environments (especially AWS: S3, EC2, Lambda) and distributed systems.
Comfortable with Git-based version control and Agile team practices.
Industry experience, preferably within the defense industry, intelligence community, or related sectors, is a plus.
Capability to work collaboratively in interdisciplinary teams.
Awareness of ethical considerations and responsible AI practices.
Excellent problem-solving skills, attention to detail, and ability to thrive in a fast-paced, collaborative environment.
Experience with data messaging and streaming technologies (e.g., Kafka) (nice to have, not required).
Understanding of IAM/security concepts in data environments (e.g., role-based access, encryption) (nice to have, not required).
Exposure to data modeling, time-series data analysis, or graph databases (e.g., Neo4j) (nice to have, not required).
Familiarity with Spark or other distributed processing frameworks (nice to have, not required).
Security Clearance
We are hiring for multiple positions for each role. Some roles require a Secret, Top Secret, or Top Secret/SCI security clearance on Day 1. If you do not currently hold a clearance but are interested in this role and believe you are eligible for a clearance, we still encourage you to submit an application.
Work Location
We are hiring for roles at our headquarters in Leawood, KS and remotely in the Arlington, VA region. Candidates in the Arlington, VA region may work remotely while not on customer site. Roles based in the Leawood, KS region may require some limited travel to customer sites.
Incentives
Equity: All employees are eligible to participate in the company equity incentive program within their first 12 months of employment. We are proud that 100% of our employees are equity-owning partners at Torch.AI.
Competitive salary and annual performance bonus opportunities.
Unlimited PTO.
11 paid holidays each year.
Incredible professional development and learning opportunities in a fast-paced high-tech environment and exciting industry.
Weekly in-office catering in our Leawood HQ.
Benefits
Torch.AI values employee well-being and, in turn, offers exceptional benefit options that greatly exceed regional and national averages for similar companies.
401k Plan
Torch.AI offers a 401k plan through John Hancock. While the company does not offer employee matching, we offer 3% profit sharing for all employees who elect to participate in the 401k plan. Profit sharing is calculated based on company performance at the end of each calendar year and distributed to 401k accounts at the start of each calendar year.
Medical
Three medical options: PPO, HSA, and TRICARE.
Torch.AI's HSA contribution is 250%-350% higher than average employer contributions in the Kansas City and Arlington regions.
Only ~18% of employers offer TRICARE Supplement plans.
Spending Accounts
Above-market employer funding and flexibility.
HSA: Triple-tax advantage
FSA: $50-$3,300 annual contribution, $660 rollover
Dependent Care FSA: $100-$5,000, pre-tax savings on child/dependent care.
Dental
High Plan annual maximum is ~2.6x higher than the national average.
High Renaissance Plan: $5,000 annual max, 50% ortho up to $1,000.
Low Renaissance Plan: $1,000 annual max, no ortho.
Vision
Frame allowance is 25-35% higher than typical employer-sponsored plans.
Vision through Renaissance with VSP Choice network: $0 exams, lenses covered in full, and $180 frame allowance
Life Insurance
Employer-paid coverage at 1x base salary, plus additional voluntary options for employees and spouses; most employers cover only $50k of basic life on average.
Disability & Illness
Torch.AI ranks in the top 10% of regional employers for disability benefits.
Short-Term Disability (employer paid): 60% income, up to $2,000/week
Long-Term Disability (employer paid): 60% income, up to $5,000/month
Voluntary Benefits
Robust voluntary plans offer direct cash payout flexibility and wellness incentives.
Accident Insurance
Critical Illness
Hospital Indemnity
Commuter Benefits: up to $300/month tax-free for transit/parking
Torch.AI is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, protected veteran status or status as an individual with a disability.
These positions are being reviewed and filled on a rolling basis, and multiple openings may be available for each role.
JOB CODE: 1000108
Software Engineer
Data engineer job in Kansas City, MO
Software Engineer
Compensation: $90,000 - $100,000 annually, depending on experience
Inceed has partnered with a great company to help find a skilled Software Engineer to join their team!
Join a dynamic team in a collaborative environment, where innovation meets creativity. This opportunity arises as the team expands following the retirement of a valued developer. You'll contribute to exciting new projects as well as updates to legacy products.
Key Responsibilities & Duties:
Collaborate with QA and UX teams to meet business needs
Provide technical support and troubleshoot current systems
Develop new product features using coding standards
Participate in code reviews and knowledge sharing
Create and maintain documentation on system architecture
Required Qualifications & Experience:
3+ years of experience in back-end services and API development
Proficiency in C#, .NET Core, and Node.js
Experience with relational databases like SQL Server or MySQL
Nice to Have Skills & Experience:
Experience with cloud development (Azure, AWS)
Containerization experience with Docker and Kubernetes
Other Information:
Work in a hybrid environment, with Mondays and Fridays from home
Collaborative team with a flexible and innovative work culture
Potential to work on diverse development projects
If you are interested in learning more about the Software Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
#IND
Principal Data Scientist
Data engineer job in Kansas City, MO
Description & Requirements
We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work).
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects (a minimal sketch follows this list).
- Utilize various languages for scripting and write SQL queries.
- Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
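As a minimal, hypothetical sketch of the classical-ML work these duties describe: the data below is synthetic, scikit-learn is assumed to be installed, and nothing here reflects Maximus systems or methods.

# Hypothetical classical-ML sketch on synthetic data (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for operational data; 12 invented features.
X, y = make_classification(n_samples=2_000, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                 # train on 75% of the data
print(classification_report(y_test, model.predict(X_test)))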
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments and their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional Programming experience (e.g. Python, R, etc.).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or B.S. in a quantitative discipline (e.g., Math, Physics, Engineering, Economics, Computer Science).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter.
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand-alone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
Background Investigations:
- IRS MBI - Eligibility
#techjobs #VeteransPage
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary: $156,740.00
Maximum Salary: $234,960.00
Data Scientist - Retail Pricing
Data engineer job in Overland Park, KS
We are looking for a Data Scientist! This position will play a key role in shaping data-driven strategies that directly influence the bank's profitability, customer value, and market competitiveness. This role sits at the intersection of analytics, finance, and product strategy - transforming data into pricing intelligence that supports smarter, faster business decisions.
You will design and implement advanced pricing and profitability models for retail banking products, leveraging internal performance metrics, market benchmarks, and third-party data sources. Through predictive modeling, elasticity analysis, and scenario testing, you will help the organization optimize deposit and loan pricing, forecast financial outcomes, and identify growth opportunities.
Collaborating across product, finance, and executive teams, you will translate complex analytical findings into clear business recommendations that drive strategic action. You will also contribute to enhancing our analytics infrastructure, improving data pipelines, model governance, and reporting capabilities to strengthen enterprise-wide decision-making.
Core Expertise: Pricing strategy · Profitability modeling · Financial forecasting · Machine learning · SQL · Python · R · Data visualization · Strategic analytics · Cross-functional collaboration
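As a loose, hypothetical sketch of the elasticity analysis described above: the rates and balances are invented, and a log-log fit is just one common way to read a regression slope directly as an elasticity; production pricing models would control for many more factors.

# Hypothetical elasticity sketch: in a log-log fit, the slope on
# log(rate) reads directly as an elasticity. All data is invented.
import numpy as np

rates = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75])  # offered rate, %
balances = np.array([180, 210, 245, 270, 300, 320])      # deposits, $MM

slope, intercept = np.polyfit(np.log(rates), np.log(balances), 1)
print(f"estimated rate elasticity of balances: {slope:.2f}")
# Here the slope (about 0.47) would read as: a 1% relative increase in
# the offered rate is associated with a ~0.47% relative rise in balances.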
CapFed is an equal opportunity employer.