Sr. Dell Boomi Developer
Data engineer job in Kenosha, WI
Client is looking for a Sr. Dell Boomi Developer with experience in ERP.
RESPONSIBILITIES
Design and Architect Solutions: Bringing deep knowledge to design stable, reliable, and scalable integration solutions using the Dell Boomi AtomSphere platform and its components (Integration, API Management, MDM, etc.)
Hands-on Development: Designing, developing, and implementing complex integration processes, workflows, and APIs (REST/SOAP) to connect various applications (on-premises and cloud-based), ERP systems (like Microsoft Dynamics, Oracle EBS, SAP), and other data sources.
Data Transformation: Proficiently handling various data formats such as XML, JSON, CSV, and database formats, and using Boomi's capabilities and scripting languages (like Groovy or JavaScript) for complex data mapping and transformations.
Dell Boomi Platform Knowledge: Proficiency in Dell Boomi is crucial. Familiarize yourself with Boomi components such as connectors, processes, maps, and APIs. Understand how to design, build, and deploy integrations using Boomi.
API Development: Strong knowledge of RESTful and SOAP APIs. You'll create, consume, and manage APIs within Boomi.
Working with team members and business users to understand project requirements and deliver successful design, implementation, and post-implementation support.
Working closely with team members to translate business requirements into feasible and efficient technical solutions.
Develop and maintain documentation for integration and testing processes
Be highly accurate in activity assessment, effort estimation, and delivery commitments to ensure all project activities are delivered on time without compromising quality.
Diagnose complex technical issues and provide recommendations on solutions with consideration of best practices and the longer-term impact of decisions.
Lead/Perform third party testing, performance testing and UAT coordination.
Selecting the appropriate development platform(s) to execute business requirements and ensure post-implementation success.
Serve as technical lead on projects to design, develop, test, document and deploy robust integration solutions.
Working both independently and as part of a team; collaborating closely with other IT and non-IT team members.
Assessing and troubleshooting production issues with a varying degree of priority and complexity.
Optimizing existing and developing new integration solutions to support business requirements.
Providing continuous support and management of the integration layer, ensuring the integrity of our data and integrations and removing single points of failure.
Good knowledge of best practices in error handling, logging, and monitoring.
Documenting and cross-training team members for support continuity.
Compliance with all of Jockey's policies and procedures.
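Boomi maps are typically scripted in Groovy or JavaScript, but the flatten-and-map step described in the data transformation responsibility above can be illustrated language-neutrally. The sketch below (plain Python, with a hypothetical order payload) turns a nested JSON document into CSV rows, the kind of mapping a Boomi process performs:

```python
import csv
import io
import json

def flatten(record, parent_key="", sep="."):
    """Recursively flatten a nested dict into dotted keys."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

def json_to_csv(json_text):
    """Map a JSON array of records to a CSV string."""
    records = [flatten(r) for r in json.loads(json_text)]
    fieldnames = sorted({k for r in records for k in r})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

In a real Boomi process the equivalent logic lives in a map shape or a scripting step; this is only a minimal standalone illustration.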
QUALIFICATIONS
10-15 years of experience with enterprise integration platforms
Bachelor's degree in Computer Science
Troubleshooting Skills: Be adept at diagnosing and resolving integration issues. Familiarity with Boomi's debugging tools is valuable.
Security Awareness: Knowledge of authentication methods, encryption, and secure data transmission.
Experience and proven track record of implementing integration projects.
Extensible Stylesheet Language Transformations (XSLT) experience is a plus.
Project Management experience is a plus
Experience with ERP systems within a fast-moving wholesale, retail, and e-commerce environment is highly desirable.
Experience with Boomi implementations alongside the Microsoft Dynamics ERP system is a plus.
Healthcare Data Analyst Lead (CMH Health)
Data engineer job in Brookfield, WI
Individual(s) must be legally authorized to work in the United States without the need for immigration support or sponsorship from Milliman now or in the future. Milliman is seeking a technically savvy, analytically strong Healthcare Data Analyst Lead to manage analytics and technology projects as well as coach and mentor junior analytical staff. The ideal candidate is someone seeking to join a challenging yet rewarding environment focused on delivering world-class analytics across a variety of healthcare-related domains to a range of healthcare entities.
Who We Are
Independent for over 75 years, Milliman delivers market-leading services and solutions to clients worldwide. Today, we are helping companies take on some of the world's most critical and complex issues, including retirement funding and healthcare financing, risk management and regulatory compliance, data analytics and business transformation.
Job Responsibilities
* Lead and manage analytics and technology projects from data ingestion through delivery.
* Design, develop, and optimize data processes and workflows supporting large healthcare datasets.
* Perform and oversee ETL, validation, and transformation tasks for claims, eligibility, and pharmacy data.
* Guide project teams through the full "data-to-deliverable" lifecycle, ensuring accuracy and efficiency.
* Build analytical models, dashboards, and data pipelines to support consulting engagements.
* Collaborate with consultants, actuaries, and project managers to interpret results and deliver client insights.
* Review and approve technical work from peers and junior analysts to ensure quality standards are met.
* Mentor, coach, and delegate to analytical staff to strengthen technical and professional development.
* Contribute to innovation and process improvement initiatives, including automation, cloud enablement, and AI integration.
* Participate in client meetings and presentations, occasionally requiring travel.
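The ETL and validation work on claims data described above can be sketched minimally. The example below is an illustration only, using hypothetical field names (real claim layouts vary by payer and data vendor): it splits a batch of claim records into clean rows and rejects with reasons.

```python
from datetime import date

# Hypothetical claim-record layout; field names are illustrative only.
REQUIRED_FIELDS = {"claim_id", "member_id", "service_date", "paid_amount"}

def validate_claim(claim):
    """Return a list of validation failures for one claim record."""
    errors = []
    missing = REQUIRED_FIELDS - claim.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    if claim["paid_amount"] < 0:
        errors.append("negative paid_amount")
    if claim["service_date"] > date.today():
        errors.append("service_date in the future")
    return errors

def validate_batch(claims):
    """Split a batch into clean rows and (row, errors) rejects."""
    clean, rejects = [], []
    for claim in claims:
        errs = validate_claim(claim)
        if errs:
            rejects.append((claim, errs))
        else:
            clean.append(claim)
    return clean, rejects
```

In practice these checks run inside an ETL pipeline and the reject reasons feed a data-quality report reviewed before any analysis is delivered.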
Minimum requirements
* Bachelor's degree required (Computer Science, Management Information Systems, Computer Engineering, Math, Actuarial Science, Data Analytics, or related degree is preferred)
* 6+ years of experience in healthcare data analytics or a related technical analytics role.
* Advanced proficiency with SQL and Microsoft Excel for data analysis, validation, and automation.
* Strong programming skills in Python, R, or other analytical languages.
* Experience with data visualization and reporting tools (e.g., Power BI, Tableau, or R Shiny).
* Solid understanding of healthcare data structures, including claims, eligibility, and provider data.
* Proven ability to lead multiple projects simultaneously while mentoring and developing junior team members.
* Experience with cloud data technologies (e.g., Azure Data Factory, Databricks, Snowflake, AWS Redshift, or similar).
* Exposure to AI or Generative AI tools for data analysis, automation, or insight generation is preferred, but not required.
Competencies and Behaviors that Support Success in this Role
* Deep understanding of database architecture and large-scale healthcare data environments.
* Strong analytical thinking and the ability to translate complex data into actionable insights.
* Excellent communication skills, including the ability to explain technical concepts to non-technical audiences.
* Highly organized, detail-oriented, and able to manage competing priorities.
* Collaborative and proactive leadership style with a focus on mentorship and knowledge-sharing.
* Passion for applying analytics to improve healthcare performance, quality, and cost outcomes.
* Demonstrated accountability for quality, timelines, and client satisfaction.
* Fast learner who thrives in a dynamic, innovation-driven environment.
The Team
The Healthcare Data Analyst Lead will join a team that thrives on leveraging data, analytics, and technology to deliver meaningful business value. This is a team with technical aptitude and analytical prowess that enjoys building efficient and scalable products and processes. Ultimately, we are passionate about effecting change in healthcare. We also believe that collaboration and communication are cornerstones of success.
The Healthcare Data Analyst Lead will also join a mix of Healthcare Analysts, Leads, Consultants, and Principals. In addition, as part of the broader Milliman landscape, they will work alongside Healthcare Actuaries, Pharmacists, Clinicians, and Physicians. We aim to provide everyone a supportive environment, where we foster learning and growth through rewarding challenges.
Salary:
The overall salary range for this role is $104,900 - $199,065.
For candidates residing in:
* Alaska, California, Connecticut, Illinois, Maryland, Massachusetts, New Jersey, New York City, Newark, San Jose, San Francisco, Pennsylvania, Virginia, Washington, or the District of Columbia the salary range is $120,635 - $199,065.
* All other locations the salary range is $104,900 - $173,100.
A combination of factors will be considered, including, but not limited to, education, relevant work experience, qualifications, skills, certifications, etc.
Location: It is preferred that candidates work on-site at our Brookfield, Wisconsin office; however, remote candidates will be considered.
The expected application deadline for this job is May 25, 2026.
Benefits
We offer a comprehensive benefits package designed to support employees' health, financial security, and well-being. Benefits include:
* Medical, Dental and Vision - Coverage for employees, dependents, and domestic partners.
* Employee Assistance Program (EAP) - Confidential support for personal and work-related challenges.
* 401(k) Plan - Includes a company matching program and profit-sharing contributions.
* Discretionary Bonus Program - Recognizing employee contributions.
* Flexible Spending Accounts (FSA) - Pre-tax savings for dependent care, transportation, and eligible medical expenses.
* Paid Time Off (PTO) - Begins accruing on the first day of work. Full-time employees accrue 15 days per year, and employees working less than full-time accrue PTO on a prorated basis.
* Holidays - A minimum of 10 observed holidays per year.
* Family Building Benefits - Includes adoption and fertility assistance.
* Paid Parental Leave - Up to 12 weeks of paid leave for employees who meet eligibility criteria.
* Life Insurance & AD&D - 100% of premiums covered by Milliman.
* Short-Term and Long-Term Disability - Fully paid by Milliman.
Equal Opportunity:
All qualified applicants will receive consideration for employment, without regard to race, color, religion, sex, sexual orientation, national origin, disability, or status as a protected veteran.
Senior Data Engineer
Data engineer job in Park City, IL
Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio of life-changing technologies spans the spectrum of healthcare, with leading businesses and products in diagnostics, medical devices, nutritionals and branded generic medicines. Our 114,000 colleagues serve people in more than 160 countries.
JOB DESCRIPTION:
We're focused on helping people with diabetes manage their health with life-changing products that provide accurate data to drive better-informed decisions. We're revolutionizing the way people monitor their glucose levels with our new sensing technology.
Working at Abbott
At Abbott, you can do work that matters, grow, and learn, care for yourself and family, be your true self and live a full life. You'll also have access to:
Career development with an international company where you can grow the career you dream of.
Employees can qualify for free medical coverage in our Health Investment Plan (HIP) PPO medical plan in the next calendar year
An excellent retirement savings plan with high employer contribution
Tuition reimbursement, the Freedom 2 Save student debt program and FreeU education benefit - an affordable and convenient path to getting a bachelor's degree.
A company recognized as a great place to work in dozens of countries around the world and named one of the most admired companies in the world by Fortune.
A company that is recognized as one of the best big companies to work for as well as a best place to work for diversity, working mothers, female executives, and scientists.
THE OPPORTUNITY
This Senior Data Engineer position can work out remotely within the U.S.
Are you ready to apply your technical expertise to make a real impact in the medical field and help improve the lives of people with diabetes? This role offers the opportunity to lead cloud-based big data engineering efforts, including data wrangling, analysis, and pipeline development. You'll help define and implement the organization's Big Data strategy, working closely with data engineers, analysts, and scientists to solve complex business problems using data science and machine learning.
As a senior member of the Data Engineering & Analytics team, you'll build scalable data solutions that uncover insights across customer behavior, product performance, and operations. You'll work in a distributed team environment using modern technologies like Databricks, Redshift, S3, Lambda, DynamoDB, Spark, and Python. The ideal candidate is passionate about software engineering, thrives in fast-paced environments, and brings versatility, curiosity, and a collaborative spirit to the team.
What You'll Work On
Design and implement data pipelines whose output can be processed and visualized across a variety of projects and initiatives
Develop and maintain optimal data pipeline architecture by designing and implementing data ingestion solutions on AWS using AWS native services.
Design and optimize data models on AWS Cloud using Databricks and AWS data stores such as Redshift, RDS, S3
Integrate and assemble large, complex data sets that meet a broad range of business requirements
Read, extract, transform, stage and load data to selected tools and frameworks as required and requested
Customizing and managing integration tools, databases, warehouses, and analytical systems
Process unstructured data into a form suitable for analysis and assist in analysis of the processed data
Working directly with the technology and engineering teams to integrate data processing and business objectives
Monitoring and optimizing data performance, uptime, and scale; Maintaining high standards of code quality and thoughtful design
Create software architecture and design documentation for the supported solutions and overall best practices and patterns
Support team with technical planning, design, and code reviews including peer code reviews
Provide Architecture and Technical Knowledge training and support for the solution groups
Develop good working relations with the other solution teams and groups, such as Engineering, Marketing, Product, Test, QA.
Stay current with emerging trends, making recommendations as needed to help the organization innovate
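One responsibility above is processing unstructured data into a form suitable for analysis. As a minimal, hypothetical sketch (the log format and field names are invented for illustration, not Abbott's actual data), the snippet below parses raw device-log lines into structured records and silently skips lines that do not match:

```python
import re

# Hypothetical device-log line format; the pattern is illustrative only.
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
    r"device=(?P<device>\S+)\s+glucose=(?P<glucose>\d+)"
)

def parse_log(lines):
    """Extract structured records from raw log lines, skipping junk."""
    records = []
    for line in lines:
        match = LOG_PATTERN.search(line)
        if match:
            row = match.groupdict()
            row["glucose"] = int(row["glucose"])
            records.append(row)
    return records
```

At production scale this step would typically run in Spark or a Databricks job rather than a single-process loop, but the parse-then-type-cast pattern is the same.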
Qualifications
Bachelor's degree in Computer Science, Information Technology, or another relevant field
2 to 6 years of recent experience in Software Engineering, Data Engineering, or Big Data
Ability to work effectively within a team in a fast-paced changing environment
Knowledge of or direct experience with Databricks and/or Spark.
Software development experience, ideally in Python, PySpark, Kafka or Go, and a willingness to learn new software development languages to meet goals and objectives.
Knowledge of strategies for processing large amounts of structured and unstructured data, including integrating data from multiple sources
Knowledge of data cleaning, wrangling, visualization and reporting
Ability to explore new alternatives or options to solve data mining issues, and utilize a combination of industry best practices, data innovations and experience
Familiarity with databases, BI applications, data quality, and performance tuning
Excellent written, verbal and listening communication skills
Comfortable working asynchronously with a distributed team
Preferred
Knowledge of or direct experience with the following AWS services: S3, RDS, Redshift, DynamoDB, EMR, Glue, and Lambda.
Experience working in an agile environment
Practical Knowledge of Linux
#software
Apply Now
Learn more about our health and wellness benefits, which provide the security to help you and your family live full lives: **********************
Follow your career aspirations to Abbott for diverse opportunities with a company that can help you build your future and live your best life. Abbott is an Equal Opportunity Employer, committed to employee diversity.
Connect with us at *************** on Facebook at *********************** and on Twitter @AbbottNews and @AbbottGlobal
The base pay for this position is
$75,300.00 - $150,700.00
In specific locations, the pay range may vary from the range posted.
JOB FAMILY: Product Development
DIVISION: ADC Diabetes Care
LOCATION: United States of America : Remote
ADDITIONAL LOCATIONS:
WORK SHIFT: Standard
TRAVEL: Yes, 10% of the Time
MEDICAL SURVEILLANCE: Not Applicable
SIGNIFICANT WORK ACTIVITIES: Continuous sitting for prolonged periods (more than 2 consecutive hours in an 8 hour day), Keyboard use (greater or equal to 50% of the workday)
Abbott is an Equal Opportunity Employer of Minorities/Women/Individuals with Disabilities/Protected Veterans.
EEO is the Law link - English: ************************************************************
EEO is the Law link - Espanol: ************************************************************
Principal Data Scientist
Data engineer job in Milwaukee, WI
Description & Requirements We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work.)
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments and their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional Programming experience (e.g. Python, R, etc.).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or BS in a quantitative discipline (e.g. Math, Physics, Engineering, Economics, Computer Science, etc.).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter.
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
Background Investigations:
- IRS MBI - Eligibility
#techjobs #VeteransPage
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary: $156,740.00
Maximum Salary: $234,960.00
Data Engineer - Platform & Product
Data engineer job in Milwaukee, WI
We are seeking a skilled and solution-oriented Data Engineer to contribute to the development of our growing Data Engineering function. This role will be instrumental in designing and optimizing data workflows, building domain-specific pipelines, and enhancing platform services. The ideal candidate will help evolve our Snowflake-based data platform into a scalable, domain-oriented architecture that supports business-critical analytics and machine learning initiatives.
Responsibilities
The candidate is expected to:
Design and build reusable platform services, including pipeline frameworks, CI/CD workflows, data validation utilities, data contracts, lineage integrations
Develop and maintain data pipelines for sourcing, transforming, and delivering trusted datasets into Snowflake
Partner with Data Domain Owners to onboard new sources, implement data quality checks, and model data for analytics and machine learning use cases
Collaborate with the Lead Data Platform Engineer and Delivery Manager to deliver monthly feature releases and support bug remediation
Document and promote best practices for data pipeline development, testing, and deployment
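The data contracts and validation utilities mentioned in the responsibilities above can be sketched in a few lines. This is an assumption-laden illustration, not the team's actual implementation: the schema and column names (trade data) are hypothetical, chosen to match the financial-data background noted below.

```python
# A minimal data-contract check: verify that each row in a batch matches
# an agreed column/type schema before it is loaded downstream.
# Column names and types are illustrative only.
CONTRACT = {"trade_id": str, "price": float, "quantity": int}

def check_contract(rows, contract=CONTRACT):
    """Return a list of (row_index, message) violations; empty means conformant."""
    violations = []
    for i, row in enumerate(rows):
        missing = contract.keys() - row.keys()
        if missing:
            violations.append((i, f"missing columns: {sorted(missing)}"))
            continue
        for col, expected in contract.items():
            if not isinstance(row[col], expected):
                violations.append((i, f"{col}: expected {expected.__name__}"))
    return violations
```

In a Snowflake/dbt setting the same idea is usually expressed as schema tests that gate a pipeline run, with violations surfaced in CI/CD rather than returned in-process.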
Qualifications
The successful candidate will possess strong analytical skills and attention to detail. Additionally, the ideal candidate will possess:
3-6 years of experience in data engineering or analytics engineering
Strong SQL and Python skills; experience with dbt or similar transformation frameworks
Demonstrated experience building pipelines and services on Snowflake or other modern cloud data platforms
Understanding of data quality, validation, lineage, and schema evolution
Background in financial or market data (trading, pricing, benchmarks, ESG) is a plus
Strong collaboration and communication skills, with a passion for enabling domain teams
Privacy Notice for California Applicants
Artisan Partners Limited Partnership is an equal opportunity employer. Artisan Partners does not discriminate on the basis of race, religion, color, national origin, gender, age, disability, marital status, sexual orientation or any other characteristic protected under applicable law. All employment decisions are made on the basis of qualifications, merit and business need.
#LI-Hybrid
Data Scientist, US Supply Chain
Data engineer job in Milwaukee, WI
What you will do
In this exciting role you will lead the effort to build and deploy predictive and prescriptive analytics in our next-generation decision intelligence platform. The work will involve helping to build and maintain a digital twin of our production supply chain, performing optimization and forecasting, and connecting our analytics and ML solutions to enable our people to make the best data-driven decisions possible!
This will require working with predictive and prescriptive analytics and decision intelligence across the US/Canada region at Clarios. You'll apply modern statistics, machine learning, and AI to real manufacturing and supply chain problems, working side-by-side with our business stakeholders and our global analytics team to deploy transformative solutions, not just models.
How you will do it
Build production-ready ML/statistical models (regression/classification, clustering, time series, linear / non-linear optimizations) to detect patterns, perform scenario analytics and generate actionable insights / outcomes.
Wrangle and analyze data with Python and SQL; perform feature engineering, data quality checks, and exploratory analysis to validate hypotheses and model readiness.
Develop digital solutions /visuals in Power BI and our decision intelligence platform to communicate results and monitor performance with business users.
Partner with stakeholders to clarify use cases, translate needs into technical tasks/user stories, and iterate solutions in sprints.
Manage model deployment (e.g., packaging models, basic MLOps) with guidance from Global Analytics
Document and communicate model methodology, assumptions, and results to non-technical audiences; support troubleshooting and continuous improvement of delivered analytics.
Deliver value realization as part of our business analytics team to drive positive business outcomes for our metals team.
Deliver incremental value quickly (first dashboards, baseline models) and iterate with stakeholder feedback.
Balance rigor with practicality: choose the simplest model that solves the problem and can be supported in production.
Keep data quality front-and-center; instrument checks to protect decisions from drift and bad inputs.
Travel: Up to ~10% for plant or stakeholder visits
What we look for
Bachelor's degree in Statistics, Mathematics, Computer Science, Engineering, or related field-or equivalent practical experience.
1-3 years (or strong internship/co-op) applying ML/statistics on business data.
Python proficiency (pandas, scikit-learn, SciPy/statsmodels) and SQL across common platforms (e.g., SQL Server, Snowflake).
Core math/stats fundamentals: probability, hypothesis testing/DoE basics, linear algebra, and the principles behind common ML methods.
Data visualization experience with Power BI / Decision Intelligence Platforms for analysis and stakeholder storytelling.
Ability to work in cross-functional teams and explain technical work clearly to non-technical partners. Candidates must be self-driven, curious, and creative
Preferred
Cloud & big data exposure: Azure (or AWS), Databricks/Spark; Snowpark is a plus.
Understanding of ETL/ELT tools such as ADF, SSIS, Talend, Informatica, or Matillion.
Experience building and deploying models in a Decision Intelligence platform such as Palantir or Aera.
MLOps concepts (model validation, monitoring, packaging with Docker/Kubernetes).
Deep learning basics (PyTorch/Keras) for the right use cases.
Experience contributing to agile backlogs, user stories, and sprint delivery.
3+ years of experience in data analytics
Master's Degree in Statistics, Economics, Data Science or computer science.
What you get:
Medical, dental and vision care coverage and a 401(k) savings plan with company matching - all starting on date of hire
Tuition reimbursement, perks, and discounts
Parental and caregiver leave programs
All the usual benefits such as paid time off, flexible spending, short-and long-term disability, basic life insurance, business travel insurance, Employee Assistance Program, and domestic partner benefits
Global market strength and worldwide market share leadership
Our HQ location earns LEED certification for sustainability and offers a full-service cafeteria and workout facility
Clarios has been recognized as one of 2025's Most Ethical Companies by Ethisphere. This prestigious recognition marks the third consecutive year Clarios has received this distinction.
Who we are:
Clarios is the force behind the world's most recognizable car battery brands, powering vehicles from leading automakers like Ford, General Motors, Toyota, Honda, and Nissan. With 18,000 employees worldwide, we develop, manufacture, and distribute energy storage solutions while recovering, recycling, and reusing up to 99% of battery materials, setting the standard for sustainability in our industry. At Clarios, we're not just making batteries; we're shaping the future of sustainable transportation. Join our mission to innovate, push boundaries, and make a real impact. Discover your potential at Clarios, where your power meets endless possibilities.
Veterans/Military Spouses:
We value the leadership, adaptability, and technical expertise developed through military service. At Clarios, those capabilities thrive in an environment built on grit, ingenuity, and passion, where you can grow your career while helping to power progress worldwide. All qualified applicants will be considered without regard to protected characteristics.
We recognize that people come with a wealth of experience and talent beyond just the technical requirements of a job. If your experience is close to what you see listed here, please apply. Diversity of experience and skills combined with passion is key to challenging the status quo. Therefore, we encourage people from all backgrounds to apply to our positions. Please let us know if you require accommodations during the interview process by emailing Special.Accommodations@Clarios.com. We are an Equal Opportunity Employer and value diversity in our teams in terms of work experience, area of expertise, gender, ethnicity, and all other characteristics protected by laws in the countries where we operate. For more information on our commitment to sustainability, diversity, and equal opportunity, please read our latest report. We want you to know your rights because EEO is the law.
A Note to Job Applicants: please be aware of scams being perpetrated through the Internet and social media platforms. Clarios will never require a job applicant to pay money as part of the application or hiring process.
To all recruitment agencies: Clarios does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Clarios employees or any other company location. Clarios is not responsible for any fees related to unsolicited resumes/CVs.
Data Scientist II - Clinical - Looking for only W2
Data engineer job in North Chicago, IL
Job Description
Data Scientist II - Clinical - Looking for only W2
Duration: 12 Months
Contract Type: W2
Primary Skills: AWS Cloud Formation, R, Data Analysis, Python, SQL
Position Title: Computational Data Scientist
Seeking a highly motivated and driven data scientist to join our Quantitative, Translational & ADME Sciences (QTAS) team in North Chicago, IL. The QTAS organization supports the discovery and early clinical pipeline by mechanistically investigating how drug molecules are absorbed, distributed, metabolized, excreted, and transported across the body to predict the duration and intensity of exposure and the pharmacological action of drug candidates in humans. Digital workflows, systems, IT infrastructure, and computational sciences are critical and growing components within the organization that help deliver vital results in the early pipeline. The person in this role will act as an SME (subject matter expert) for data science within the technical organization of QTAS.
For this role, the successful candidate will have a substantial background in data and computer science, with an emphasis on supporting, developing, and implementing IT solutions for lab-based systems as well as utilizing computational methods. The candidate should possess deep knowledge of AI/ML, with a focus on both supervised (e.g., neural networks, decision trees) and unsupervised (e.g., clustering, PCA) learning techniques. They must be adept at applying these methods to large datasets for predictive modeling; in this context, predicting drug properties and discovering patterns in ADME datasets. Proficiency in model validation, optimization, and feature engineering is essential to ensure accuracy and robustness in predictions. The role requires effective collaboration with interdisciplinary teams to integrate AI insights into drug development processes. Strong communication skills are necessary to convey complex AI/ML concepts to a diverse audience.
Key Responsibilities:
Provide business-centric support of IT systems and platforms for our scientific operations and processes.
Develop, implement, troubleshoot and support solutions independently for the digital infrastructure and workflows within QTAS including custom platform/coding solutions, visualization tools, integration of new software/hardware, and analysis and troubleshooting support.
Lead the analysis of large ADME-related datasets, contributing to the understanding and optimization of drug absorption, distribution, metabolism, and excretion properties.
Apply computational tools and machine learning/deep learning techniques to analyze and interpret complex biological data relevant to drug discovery.
Develop predictive models and algorithms for identifying potential drug candidates with desirable ADME properties.
Collaborate with teams across biological sciences and drug discovery to integrate computational insights into practical drug development strategies.
Communicate findings and strategic input to cross-functional teams, including Translational Science, Medicine, and Late Development groups.
Qualifications:
Bachelor's or Master's Degree in Data Science, Computer Science, Computational Chemistry, or a related discipline, typically with 5 to 10 (BS) or 2 to 5 (MS) years of related industry experience.
Passion for data analysis, solving technical problems and applying new technologies to further scientific goals.
Strong proficiency in programming (e.g., SQL, Python, R, MATLAB), database technologies (Oracle, MySQL, relational databases; graph databases are a plus), machine learning/deep learning (network architectures are a plus), dimensionality reduction techniques (e.g., PCA), and, ideally, cheminformatics software suites
Demonstrated experience in the analysis and visualization of large datasets. Proficiency in any of the following technologies is valued: Python (including libraries such as Matplotlib, Seaborn, Plotly, Bokeh), JavaScript, Julia, Java/Scala, or R (including Shiny).
Comfortable working in cloud and high-performance computational environments (e.g., AWS and Oracle Cloud)
Excellent communication skills and ability to work effectively in interdisciplinary teams.
Understanding of pharma R&D process and challenges in drug discovery is preferred.
Proven ability to work well in a collaborative, fast-paced team environment.
Excellent oral and written communication skills and the ability to convey IT related notions to cross-disciplinary scientists.
Thorough theoretical and practical understanding of own scientific discipline
Background and/or experience in the biotechnology, pharmaceutical, biology, or chemistry fields is preferred.
Key Leadership Competencies:
Builds strong relationships with peers and cross-functionally with partners outside of team to enable higher performance.
Learns fast, grasps the "essence" and can change course quickly where indicated.
Raises the bar and is never satisfied with the status quo.
Creates a learning environment, open to suggestions and experimentation for improvement.
Embraces the ideas of others, nurtures innovation and manages innovation to reality.
Please share your resume at **********************
Business Intelligence Data Modeler
Data engineer job in Milwaukee, WI
CapB is a global leader in IT Solutions and Managed Services. Our R&D is focused on providing cutting-edge products and solutions across Digital Transformations, from Cloud, AI/ML, IoT, and Blockchain to MDM/PIM, Supply Chain, ERP, CRM, HRMS, and Integration solutions. For our growing needs we are looking for consultants who can work with us on a salaried or contract basis. We provide industry-standard benefits and an environment for learning and growth.
For one of our ongoing projects we are looking for a Business Intelligence Data Modeler. The position is based out of Milwaukee.
Responsibilities:
The Business Intelligence Analyst performs a variety of project-oriented tasks to support the information needs of the organization.
This position is responsible for all phases of reporting, decision support, and data analysis activities including report design, measure development, data collection, summarization, and validation.
The BI Analyst exercises independent judgment and discretionary decision-making related to managing multiple, more complex reporting projects.
The BI Analyst is proficient with analytical and reporting tools, database development, ETL processes, query languages, and database and spreadsheet tools.
The BI Analyst will participate in reporting and presentations to various levels of management and staff and may also be included in action plan development.
The BI Analyst will have in-depth experience with Power BI, creating dashboards and performing data exploration, visualization, and data storytelling from concept to final deliverables.
Advanced experience with Power BI, Power BI dataflows, and dashboards. Technical expertise in data modeling and design to interact with multiple sources of data.
Ability to write complex DAX code and SQL queries for data manipulation.
Skills:
10 years of experience required in data analysis.
10 years of experience required in dashboarding/Business Objects Xcelsius.
Data Warehouse Developer
Data engineer job in Milwaukee, WI
Description We are looking for a skilled Data Warehouse Analyst to join our team on a contract basis in Milwaukee, Wisconsin. In this role, you will play a pivotal part in developing and maintaining data solutions that support organizational analytics and decision-making. You will work closely with cross-functional teams to ensure data integration, accuracy, and accessibility using modern tools and methodologies.
Responsibilities:
- Design and implement data warehouse solutions to support business intelligence and reporting needs.
- Develop and maintain ETL processes to extract, transform, and load data from Oracle into Azure SQL Server.
- Collaborate with stakeholders and business analysts to gather requirements and translate them into actionable technical solutions.
- Optimize workflows and ensure efficient performance of the data warehouse environment.
- Validate and monitor data quality to ensure integrity and reliability.
- Create and maintain documentation for processes, architecture, and data models.
- Troubleshoot and resolve issues related to data integration and system performance.
- Utilize Azure Data Factory for data processing and workflow management.
- Apply Kimball methodology to design and maintain efficient data models.
- Support the ongoing improvement of data systems and analytics processes.
Requirements
- Proven experience in data warehousing, including design and development.
- Expertise in ETL processes and tools, with a focus on data integration.
- Proficiency in Azure Data Factory for creating workflows and managing data pipelines.
- Strong knowledge of Microsoft SQL Server and Azure SQL Database.
- Familiarity with Oracle Autonomous Data Warehouse systems.
- Experience with business intelligence and data warehousing methodologies.
- Ability to apply Kimball methodology in data model design.
- Strong problem-solving skills and attention to detail to ensure data accuracy and quality.
Technology Doesn't Change the World, People Do.
Robert Half is the world's first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great companies. We offer contract, temporary and permanent placement solutions for finance and accounting, technology, marketing and creative, legal, and administrative and customer support roles.
Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity - whenever you choose - even on the go. Download the Robert Half app (https://www.roberthalf.com/us/en/mobile-app) and get 1-tap apply, notifications of AI-matched jobs, and much more.
All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.
© 2025 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans. By clicking "Apply Now," you're agreeing to Robert Half's Terms of Use (https://www.roberthalf.com/us/en/terms) .
Data Engineer
Data engineer job in Mequon, WI
Charter Manufacturing is a fourth-generation family-owned business where our will to grow drives us to do it better. Join the team and become part of our family!
Applicants must be authorized to work for ANY employer in the U.S. Charter Manufacturing is unable to sponsor for employment visas at this time.
This position is hybrid, 3 days a week in office in Mequon, WI.
BI&A- Lead Data Engineer
Charter Manufacturing continues to invest in Data & Analytics. Come join a great team and great culture leveraging your expertise to drive analytics transformation across Charter's companies. This is a key role in the organization that will provide thought leadership, as well as add substantial value by delivering trusted data pipelines that will be used to develop models and visualizations that tell a story and solve real business needs/problems. This role will collaborate with team members and business stakeholders to leverage data as an asset driving business outcomes aligned to business strategies.
Seven or more years of prior experience developing data pipelines and partnering with team members and business stakeholders to drive adoption will be critical to success in this role.
MINIMUM QUALIFICATIONS:
Bachelor's degree in computer science, data science, software engineering, information systems, or related quantitative field; master's degree preferred
At least seven years of work experience in data management disciplines, including data integration, modeling, optimization and data quality, or other areas directly relevant to data engineering responsibilities and tasks
Proven project experience designing, developing, deploying, and maintaining data pipelines used to support AI, ML, and BI using big data solutions (Azure, Snowflake)
Strong knowledge in Azure technologies such as Azure Web Application, Azure Data Explorer, Azure DevOps, and Azure Blob Storage to build scalable and efficient data pipelines
Strong knowledge using programming languages such as R, Python, C#, and Azure Machine Learning Workspace development
Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS, Azure, GCP) and modern data warehouse tools (Snowflake, Databricks)
Experience with database technologies such as SQL, Oracle, and Snowflake
Prior experience with ETL/ELT data ingestion into data lakes/data warehouses for analytics consumption
Strong SQL skills
Ability to collaborate within and across teams of different technical knowledge to support delivery and educate end users on data products
Passionate about teaching, coaching, and mentoring others
Strong problem-solving skills, including debugging skills, allowing the determination of sources of issues in unfamiliar code or systems, and the ability to recognize and solve problems
Excellent business acumen and interpersonal skills; able to work across business lines at a senior level to influence and effect change to achieve common goals
Ability to describe business use cases/outcomes, data sources and management concepts, and analytical approaches/options
Demonstrated experience delivering business value by structuring and analyzing large, complex data sets
Demonstrated initiative, strong sense of accountability, collaboration, and known as a trusted business partner
PREFERRED QUALIFICATIONS INCLUDES EXPERIENCE WITH:
Manufacturing industry experience specifically heavy industry, supply chain and operations
Designing and supporting data integrations with ERP systems such as Oracle or SAP
MAJOR ACCOUNTABILITIES:
Designs, develops, and supports data pipelines for batch and streaming data, extracting data from various sources (databases, APIs, external systems), transforming it into the desired format, and loading it into the appropriate data storage systems
Collaborates with data scientists and analysts to optimize models and algorithms in accordance with data quality, security, and governance policies
Ensures data quality, consistency, and integrity during the integration process, performing data cleansing, aggregation, filtering, and validation as needed to ensure accuracy, consistency, and completeness of data
Optimizes data pipelines and data processing workflows for performance, scalability, and efficiency.
Monitors and tunes data pipelines for performance, scalability, and efficiency resolving performance bottlenecks
Establish architecture patterns, design standards, and best practices to accelerate delivery and adoption of solutions
Assist, educate, train users to drive self-service enablement leveraging best practices
Collaborate with business subject matter experts, analysts, and offshore team members to develop and deliver solutions in a timely manner
Embraces and establishes governance of data and algorithms, quality, standards, and best practices, ensuring data accuracy
We offer comprehensive health, dental, and vision benefits, along with a 401(k) plan that includes employer matching and profit sharing. Additionally, we offer company-paid life insurance, disability coverage, and paid time off (PTO).
Auto-ApplyAzure Data Engineer (Python/SQL) - 6013914
Data engineer job in Milwaukee, WI
Accenture Flex offers you the flexibility of local fixed-duration project-based work powered by Accenture, a leading global professional services company. Accenture is consistently recognized on FORTUNE's 100 Best Companies to Work For and Diversity Inc's Top 50 Companies For Diversity lists.
As an Accenture Flex employee, you will apply your skills and experience to help drive business transformation for leading organizations and communities. In addition to delivering innovative solutions for Accenture's clients, you will work with a highly skilled, diverse network of people across Accenture businesses who are using the latest emerging technologies to address today's biggest business challenges.
You will receive competitive rewards and access to benefits programs and world-class learning resources. Accenture Flex employees work in their local metro area onsite at the project, significantly reducing and/or eliminating the demands to travel.
Job Description:
Join our dynamic team and embark on a journey where you will be empowered to perform independently and become an SME. Active participation and contribution in team discussions will be key as you provide solutions to work-related problems. Let's work together to achieve greatness!
Responsibilities:
+ Create new data pipelines leveraging existing data ingestion frameworks, tools
+ Orchestrate data pipelines using the Azure Data Factory service.
+ Develop/Enhance data transformations based on the requirements to parse, transform and load data into Enterprise Data Lake, Delta Lake, Enterprise DWH (Synapse Analytics)
+ Perform unit testing, coordinate integration testing and UAT
+ Create HLD/DD/runbooks for the data pipelines
+ Configure compute and DQ rules; perform maintenance and performance tuning/optimization
Basic Qualifications:
+ Minimum of 3 years of work experience with one or more of the following: Databricks Data Engineering, DLT, Azure Data Factory, SQL, PySpark, Synapse Dedicated SQL Pool, Azure DevOps, Python
Preferred Qualifications:
+ Azure Function Apps
+ Azure Logic Apps
+ Precisely and Cosmos DB
+ Advanced proficiency in PySpark.
+ Advanced proficiency in Microsoft Azure Databricks, Azure DevOps, Databricks Delta Live Tables and Azure Data Factory.
+ Bachelor's or Associate's degree
Compensation at Accenture varies depending on a wide array of factors, which may include but are not limited to the specific office location, role, skill set, and level of experience. As required by local law, Accenture provides a reasonable range of compensation for roles that may be hired as set forth below. We accept applications on an on-going basis and there is no fixed deadline to apply.
Information on benefits is here. (************************************************************
Role Location:
California - $47.69 - $57.69
Cleveland - $47.69 - $57.69
Colorado - $47.69 - $57.69
District of Columbia - $47.69 - $57.69
Illinois - $47.69 - $57.69
Minnesota - $47.69 - $57.69
Maryland - $47.69 - $57.69
Massachusetts - $47.69 - $57.69
New York/New Jersey - $47.69 - $57.69
Washington - $47.69 - $57.69
Requesting an Accommodation
Accenture is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. If you are hired by Accenture and require accommodation to perform the essential functions of your role, you will be asked to participate in our reasonable accommodation process. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired.
If you would like to be considered for employment opportunities with Accenture and have accommodation needs such as for a disability or religious observance, please call us toll free at **************** or send us an email or speak with your recruiter.
Equal Employment Opportunity Statement
We believe that no one should be discriminated against because of their differences. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Our rich diversity makes us more innovative, more competitive, and more creative, which helps us better serve our clients and our communities.
For details, view a copy of the Accenture Equal Opportunity Statement (********************************************************************************************************************************************
Accenture is an EEO and Affirmative Action Employer of Veterans/Individuals with Disabilities.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Other Employment Statements
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process. Further, at Accenture a criminal conviction history is not an absolute bar to employment.
The Company will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. Additionally, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the Company's legal duty to furnish information.
California requires additional notifications for applicants and employees. If you are a California resident, live in or plan to work from Los Angeles County upon being hired for this position, please click here for additional important information.
Please read Accenture's Recruiting and Hiring Statement for more information on how we process your data during the Recruiting and Hiring process.
AWS Data Engineer
Data engineer job in North Chicago, IL
Must Have Technical/Functional Skills
* Strong experience in SDLC delivery, including waterfall, hybrid, and Agile methodologies; experience delivering in an agile environment
* Experience implementing and delivering data solutions and pipelines on the AWS Cloud Platform using Redshift
* Strong hands-on Python experience. Not expected to code in depth, but to lead a team of developers and provide thought leadership, guidance, and recommendations.
* Apache Airflow experience is a plus
* A strong understanding of data modelling, data structures, databases, ETL processes, and data warehousing layers
* An in-depth understanding of large-scale data sets, including both structured and unstructured data
* Ability to analyze and troubleshoot complex data / SQL issues
* Knowledge of ETL (Extract, Transform, Load) processes
* Understanding of big data concepts
* Knowledge and experience of delivering CI/CD and DevOps capabilities in a data environment
Roles & Responsibilities
Lead a team of developers, perform code reviews, suggest improvements in code, and provide thought leadership
Work on proposals and POCs, developing prototypes and reference models to help in tailoring offerings and value propositions.
Lead, mentor, and develop other team members to work on AWS technologies
Generic Managerial Skills, If Any
Experience leading a technical team is required.
Salary Range: $100,000 - $130,000 a Year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
#LI-SP1
Data Architect
Data engineer job in West Allis, WI
Invest In You! Tri City National Bank is your hometown bank. We believe in putting customers first, building relationships, and fostering a sense of community. We work in a team environment with opportunities for hard workers to grow personally and professionally. We enjoy celebrating success and great benefits along the way. Most importantly, we believe superior customer service paired with the right banking solutions helps our customers and businesses fulfill their financial dreams and helps our communities grow. Our ideal candidate believes in our mission, values continuous learning, and is comfortable adapting to change. If this resonates with you, apply today and come join our team. #investinyou
The Data Architect is a strategic support role responsible for designing and implementing the organization's enterprise data management strategy. Reporting to the Director of IT, this role will define and drive standards, policies, and architectural practices for data access, modeling, stewardship, categorization, quality, archival, and destruction.
The Data Architect will serve as the organizational authority on data architecture, owning database platform strategy and participating in enterprise governance through the Architectural Review Board. This role will collaborate across technical and business domains to ensure data is managed as a strategic asset, with a strong emphasis on governance across SaaS platforms and third-party data processors.
Compensation: $150,000+ annually depending on experience.
This position is in-office at our Operations Center in West Allis, WI
Responsibilities
Strategic Planning & Execution
Conduct a comprehensive assessment of the current data environment and maturity.
Deliver a multi-year data strategy including prioritized recommendations, implementation timeline, and required resources.
Provide data governance thought leadership.
Build and/or coordinate the future data management organization. This may include shaping potential staffing needs.
Strategic Architecture & Governance
Develop and maintain the enterprise data architecture strategy, including standards for data access, modeling, categorization, stewardship, and quality.
Own the architectural design of database platforms, including redundancy, backup, and recovery strategies.
Participate in the Architectural Review Board as the data management representative.
Recommend and guide the adoption of supporting tools for data inventory, modeling, categorization, and quality assurance.
Establish architectural standards that apply across both on-premises and SaaS-hosted data environments.
SaaS & Third-Party Data Governance
Lead efforts to inventory and categorize data stored in SaaS applications and cloud platforms.
Track and document data processors and sub-processors handling corporate data to meet regulatory obligations.
Define governance policies for data shared with third-party applications, ensuring visibility and control over sensitive information.
Collaborate with Information Security and Compliance teams to align SaaS data governance with enterprise risk and regulatory frameworks.
Policy & Process Development
Define and implement policies and procedures for data archival and destruction, including unstructured data across the corporate network.
Co-lead efforts to track and manage sensitive data shared with third-party and cloud-based applications.
Establish standards and constraints to guide the Business Intelligence team, while maintaining separation from BI execution.
Cross-Functional Collaboration
Facilitate the development of a data stewardship program by partnering with business departments and subject matter experts.
Serve as a key contributor to corporate projects involving the creation, modification, or destruction of sensitive data.
Collaborate with Information Security, Infrastructure, and Application teams to ensure alignment with enterprise architecture and compliance requirements.
Operational
Review database design decisions to ensure that solutions exhibit data storage and access that is secure, recoverable, performant, scalable, and maintainable.
Provide recommendations or, where applicable, implement improvements that enhance data security, access performance, and platform reliability.
Support regulatory data security and compliance through routine review and monitoring of relevant controls. Participate in design and development of data-oriented control processes.
Maintain a current and working knowledge of Tri City's products, processes, and data exchanges.
Perform other duties as requested.
Qualifications
Minimum 7 years of experience in data architecture or related disciplines.
Strong understanding of enterprise data management principles and practices.
Experience with third-party and cloud data integrations.
Background in highly regulated industries with demonstrated compliance awareness.
Familiarity with database administration and platform ownership (SQL Server preferred).
Prior career demonstration of technical, analytical, and administrative skills.
Excellent written and verbal communication skills and the ability to work well independently and with others.
Preferred Qualifications
Prior hands-on experience as a database administrator.
Certifications such as CDMP, CIMP, DAMA, or relevant cloud/data platform credentials.
Experience facilitating data governance programs or stewardship initiatives.
Prior experience with typical data architecture technologies (e.g., ETL, data modeling, data visualization, database performance monitoring, BI, MDM)
Why Join Us:
Community Impact: Be part of a local bank deeply rooted in community values, contributing to the growth and prosperity of our neighborhoods.
Innovation: Embrace a dynamic and evolving work environment that encourages fresh perspectives and continuous learning.
Career Growth: Unlock future opportunities for personal and professional development as you navigate through our Pathways for Success.
Celebration of Success: Join a team that values and celebrates individual and collective achievements.
Work Life Balance: No early mornings or late nights, enjoy a predictable schedule with major holidays off.
Great Employee Benefits that start on the 1st of the month after your hire date!
Part-Time:
401(k) with company match
Up to 20 hours of paid vacation after 3 months (must work an average of 20+ hours per week to be eligible for paid vacation)
Full-Time:
401(k) with company match
Tuition reimbursement
Medical, dental, and vision coverage
Paid vacation and more!
Equal Opportunity Employer/Veterans/Disabled
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
Reasonable Accommodation
If you are an individual with a disability and would like to request a reasonable accommodation as part of the employment selection process, please contact Human Resources at ************ or ************
Security Lead - Data Protection
Data engineer job in Milwaukee, WI
blue Stone Recruiting is a national search firm focused on placing top Cyber Security talent, from the Analyst level to CISO, with prestigious organizations nationwide.
Job Description
Currently working with a Fortune 500 global manufacturing client that is looking for a Data Protection-focused Security Lead. This client is looking to add a permanent member to their security team who can grow within the company.
Responsibilities:
• Develop and lead the Global Data Protection program across all of the client's business units
• Build and manage an effective data classification program
• Develop and drive data protection policies, standards, and processes
• Develop and drive a data loss prevention (DLP) tooling strategy, partnering with the company's Information Technology groups and the client's business units
• Align data protection programs with other information security and cross-functional programs
• Direct and improve the data protection and DLP programs and associated governance activities, including metrics, issue tracking and remediation, and programs supporting client policies.
• Establish and maintain cross-business relationships with key leads and stakeholders across the client's business, information security, and IT functions
• Coordinate with the Global IT Council to ensure compliance with the client's standards and share best practices
• Conduct research and make recommendations on products, services, and standards in support of all global infrastructure security efforts.
• Develop and maintain appropriate response playbooks, facilitate routine exercises, and ensure a sound communication process for all cyber events
Job Requirements:
Formal Education & Certification
• Bachelor's degree in Information Security, IT Engineering, or Computer Science with 5+ years of IT experience
• Industry-recognized security certifications such as GIAC, CISSP, CISM, or CISA are preferred.
Qualifications
• Minimum 5 years' information security experience including experience mapping and securing business processes / data flows
• Demonstrated experience building and leading a cyber-security program
• Advanced knowledge of data protection/DLP and Windows client/server security concepts, best practices, and procedures
• Solid "hands-on" technical experience is essential
• Experience in Information Security Incident Response
• Experience in IaaS/SaaS environments
• Broad understanding of all aspects of IT and enterprise systems interoperability
• Ability to communicate technical topics (verbal and written) to multiple organizational levels
• Global enterprise experience is preferred
Personal Attributes:
• Demonstrated leadership managing direct and matrix-reporting global teams
• Demonstrated experience leading global programs across technology and business functions
• Strong interpersonal, written, and oral communication skills
• Able to conduct research into issues and products as required
• Ability to prioritize and execute tasks in a fast-paced environment and make sound decisions in emergency situations
• Highly self-motivated and directed
• Keen attention to detail
• Proven analytical and problem-solving abilities
Additional Information
Work with blue Stone Recruiting to find your next Cyber Security role. You can find us at *******************************. We look forward to speaking with you.
All your information will be kept confidential according to EEO guidelines.
Slalom Flex (Project Based)- Java Data Engineer
Data engineer job in Milwaukee, WI
About the Role: We are seeking a highly skilled and motivated Data Engineer to join our team as an individual contributor. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that support our data-driven initiatives. You will work closely with cross-functional teams to ensure data availability, quality, and performance across the organization.
About Us
Slalom is a purpose-led, global business and technology consulting company. From strategy to implementation, our approach is fiercely human. In six countries and 43 markets, we deeply understand our customers, and their customers, to deliver practical, end-to-end solutions that drive meaningful impact. Backed by close partnerships with over 400 leading technology providers, our 10,000+ strong team helps people and organizations dream bigger, move faster, and build better tomorrows for all. We're honored to be consistently recognized as a great place to work, including being one of Fortune's 100 Best Companies to Work For seven years running. Learn more at slalom.com.
Key Responsibilities:
* Design, develop, and maintain robust data pipelines using Java and Python.
* Build and optimize data workflows on AWS using services such as EMR, Glue, Lambda, and NoSQL databases.
* Leverage open-source frameworks to enhance data processing capabilities and performance.
* Collaborate with data scientists, analysts, and other engineers to deliver high-quality data solutions.
* Participate in Agile development practices, including sprint planning, stand-ups, and retrospectives.
* Ensure data integrity, security, and compliance with internal and external standards.
Required Qualifications:
* 5+ years of hands-on experience in software development using Java and Python (Spring Boot).
* 1+ years of experience working with AWS services including EMR, Glue, Lambda, and NoSQL databases.
* 3+ years of experience working with open-source data processing frameworks (e.g., Apache Spark, Kafka, Airflow).
* 2+ years of experience in Agile software development environments.
* Strong problem-solving skills and the ability to work independently in a fast-paced environment.
* Excellent communication and collaboration skills.
Preferred Qualifications:
* Experience with CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
* Familiarity with data governance and data quality best practices.
* Exposure to data lake and data warehouse architectures.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applicants with criminal histories, consistent with legal requirements.
Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
Associate Data Engineer
Data engineer job in Milwaukee, WI
Baker Tilly is a leading advisory, tax and assurance firm, providing clients with a genuine coast-to-coast and global advantage in major regions of the U.S. and in many of the world's leading financial centers - New York, London, San Francisco, Los Angeles, Chicago and Boston. Baker Tilly Advisory Group, LP and Baker Tilly US, LLP (Baker Tilly) provide professional services through an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable laws, regulations and professional standards. Baker Tilly US, LLP is a licensed independent CPA firm that provides attest services to its clients. Baker Tilly Advisory Group, LP and its subsidiary entities provide tax and business advisory services to their clients. Baker Tilly Advisory Group, LP and its subsidiary entities are not licensed CPA firms.
Baker Tilly Advisory Group, LP and Baker Tilly US, LLP, trading as Baker Tilly, are independent members of Baker Tilly International, a worldwide network of independent accounting and business advisory firms in 141 territories, with 43,000 professionals and a combined worldwide revenue of $5.2 billion. Visit bakertilly.com or join the conversation on LinkedIn, Facebook and Instagram.
Please discuss the work location status with your Baker Tilly talent acquisition professional to understand the requirements for an opportunity you are exploring.
Baker Tilly is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, gender identity, sexual orientation, or any other legally protected basis, in accordance with applicable federal, state or local law.
Any unsolicited resumes submitted through our website or to Baker Tilly Advisory Group, LP, employee e-mail accounts are considered property of Baker Tilly Advisory Group, LP, and are not subject to payment of agency fees. In order to be an authorized recruitment agency ("search firm") for Baker Tilly Advisory Group, LP, there must be a formal written agreement in place and the agency must be invited, by Baker Tilly's Talent Attraction team, to submit candidates for review via our applicant tracking system.
Job Description:
Associate Data Engineer
As a Senior Consultant - Associate Data Engineer, you will design, build, and optimize modern data solutions for our mid‑market and enterprise clients. Working primarily inside the Microsoft stack (Azure, Synapse, and Microsoft Fabric), you will transform raw data into trusted, analytics‑ready assets that power dashboards, advanced analytics, and AI use cases. You'll collaborate with solution architects, analysts, and client stakeholders while sharpening both your technical depth and consulting skills.
Key Responsibilities:
* Data Engineering: Develop scalable, well‑documented ETL/ELT pipelines using T‑SQL, Python, Azure Data Factory/Fabric Data Pipelines, and Databricks; implement best‑practice patterns for performance, security, and cost control.
* Modeling & Storage: Design relational and lakehouse models; create Fabric OneLake shortcuts, medallion‑style layers, and dimensional/semantic models for Power BI.
* Quality & Governance: Build automated data‑quality checks, lineage, and observability metrics; contribute to CI/CD workflows in Azure DevOps or GitHub.
* Client Delivery: Gather requirements, demo iterative deliverables, document technical designs, and translate complex concepts to non‑technical audiences.
* Continuous Improvement: Research new capabilities, share findings in internal communities of practice, and contribute to reusable accelerators. Collaborate with clients and internal stakeholders to design and implement scalable data engineering solutions.
Qualifications:
* Education - Bachelor's in Computer Science, Information Systems, Engineering, or related field (or equivalent experience)
* Experience - 2-3 years delivering production data solutions, preferably in a consulting or client‑facing role.
* Technical Skills:
Strong T‑SQL for data transformation and performance tuning.
Python for data wrangling, orchestration, or notebook‑based development.
Hands‑on ETL/ELT with at least one Microsoft service (ADF, Synapse Pipelines, Fabric Data Pipelines).
* Project experience with Microsoft Fabric (OneLake, Lakehouses, Data Pipelines, Notebooks, Warehouse, Power BI DirectLake) preferred
* Familiarity with Databricks, Delta Lake, or comparable lakehouse technologies preferred
* Exposure to DevOps (YAML pipelines, Terraform/Bicep) and test automation frameworks preferred
* Experience integrating SaaS/ERP sources (e.g., Dynamics 365, Workday, Costpoint) preferred
SAP Data Migration Lead/Architect
Data engineer job in Milwaukee, WI
Deltacubes is currently looking for an SAP Data Migration Lead/Architect for one of our clients located in Milwaukee, WI.
This role is mostly remote, with some travel to the client on an as-needed basis.
If you are interested, kindly apply here
Acceptable Visa Type: H1B; US Citizen/Green Card holder preferred.
Data Architect
Data engineer job in Waukegan, IL
Skill: Data Architect
Total Experience: 10 yrs.
Max Salary: $ DOE per hour
Employment Type: Contract Jobs (Temp/Consulting)
Job Duration: 10+ months
Domain: Any
Description
• 10+ years as a Data Architect
• Hands-on experience with various data architecture and modeling tools (Erwin and ER Studio)
• Some experience with ALL of the Informatica Suite (ETL):
o Metadata manager
o PowerCenter
o Master Data Management (MDM)
o Analyst
• Excellent written and verbal communication skills
PLUSES:
• Experience working in the biopharmaceutical industry (regulatory affairs)
• Experience with Data Warehouse and Data Mart Design
Additional Information
Multiple openings for OPT/CPT/H4/L2/EAD/Citizens.
Healthcare Data Analyst Lead (CMH Health)
Data engineer job in Brookfield, WI
Individual(s) must be legally authorized to work in the United States without the need for immigration support or sponsorship from Milliman now or in the future.
Milliman is seeking a technically savvy, analytically strong Healthcare Data Analyst Lead to manage analytics and technology projects as well as coach and mentor junior analytical staff. The ideal candidate is someone seeking to join a challenging, yet rewarding environment focused on delivering world-class analytics across a variety of healthcare-related domains to a variety of healthcare entities.
Who We Are
Independent for over 75 years, Milliman delivers market-leading services and solutions to clients worldwide. Today, we are helping companies take on some of the world's most critical and complex issues, including retirement funding and healthcare financing, risk management and regulatory compliance, data analytics and business transformation.
Job Responsibilities
Lead and manage analytics and technology projects from data ingestion through delivery.
Design, develop, and optimize data processes and workflows supporting large healthcare datasets.
Perform and oversee ETL, validation, and transformation tasks for claims, eligibility, and pharmacy data.
Guide project teams through the full “data-to-deliverable” lifecycle, ensuring accuracy and efficiency.
Build analytical models, dashboards, and data pipelines to support consulting engagements.
Collaborate with consultants, actuaries, and project managers to interpret results and deliver client insights.
Review and approve technical work from peers and junior analysts to ensure quality standards are met.
Mentor, coach, and delegate to analytical staff to strengthen technical and professional development.
Contribute to innovation and process improvement initiatives, including automation, cloud enablement, and AI integration.
Participate in client meetings and presentations, occasionally requiring travel.
Minimum requirements
Bachelor's degree required (Computer Science, Management Information Systems, Computer Engineering, Math, Actuarial Science, Data Analytics, or related degree is preferred)
6+ years of experience in healthcare data analytics or a related technical analytics role.
Advanced proficiency with SQL and Microsoft Excel for data analysis, validation, and automation.
Strong programming skills in Python, R, or other analytical languages.
Experience with data visualization and reporting tools (e.g., Power BI, Tableau, or R Shiny).
Solid understanding of healthcare data structures, including claims, eligibility, and provider data.
Proven ability to lead multiple projects simultaneously while mentoring and developing junior team members.
Experience with cloud data technologies (e.g., Azure Data Factory, Databricks, Snowflake, AWS Redshift, or similar).
Exposure to AI or Generative AI tools for data analysis, automation, or insight generation is preferred, but not required.
Competencies and Behaviors that Support Success in this Role
Deep understanding of database architecture and large-scale healthcare data environments.
Strong analytical thinking and the ability to translate complex data into actionable insights.
Excellent communication skills, including the ability to explain technical concepts to non-technical audiences.
Highly organized, detail-oriented, and able to manage competing priorities.
Collaborative and proactive leadership style with a focus on mentorship and knowledge-sharing.
Passion for applying analytics to improve healthcare performance, quality, and cost outcomes.
Demonstrated accountability for quality, timelines, and client satisfaction.
Fast learner who thrives in a dynamic, innovation-driven environment.
The Team
The Healthcare Data Analyst Lead will join a team that thrives on leveraging data, analytics, and technology to deliver meaningful business value. This is a team with technical aptitude and analytical prowess that enjoys building efficient and scalable products and processes. Ultimately, we are passionate about effecting change in healthcare. We also believe that collaboration and communication are cornerstones of success.
The Healthcare Data Analyst Lead will also join a mix of Healthcare Analysts, Leads, Consultants, and Principals. In addition, as part of the broader Milliman landscape, they will work alongside Healthcare Actuaries, Pharmacists, Clinicians, and Physicians. We aim to provide everyone a supportive environment, where we foster learning and growth through rewarding challenges.
Salary:
The overall salary range for this role is $104,900 - $199,065.
For candidates residing in:
Alaska, California, Connecticut, Illinois, Maryland, Massachusetts, New Jersey, New York City, Newark, San Jose, San Francisco, Pennsylvania, Virginia, Washington, or the District of Columbia, the salary range is $120,635 - $199,065.
All other locations the salary range is $104,900 - $173,100.
A combination of factors will be considered, including, but not limited to, education, relevant work experience, qualifications, skills, certifications, etc.
Location:
It is preferred that candidates work on-site at our Brookfield, Wisconsin office; however, remote candidates will be considered.
The expected application deadline for this job is May 25, 2026.
Benefits
We offer a comprehensive benefits package designed to support employees' health, financial security, and well-being. Benefits include:
Medical, Dental and Vision - Coverage for employees, dependents, and domestic partners.
Employee Assistance Program (EAP) - Confidential support for personal and work-related challenges.
401(k) Plan - Includes a company matching program and profit-sharing contributions.
Discretionary Bonus Program - Recognizing employee contributions.
Flexible Spending Accounts (FSA) - Pre-tax savings for dependent care, transportation, and eligible medical expenses.
Paid Time Off (PTO) - Begins accruing on the first day of work. Full-time employees accrue 15 days per year, and employees working less than full-time accrue PTO on a prorated basis.
Holidays - A minimum of 10 observed holidays per year.
Family Building Benefits - Includes adoption and fertility assistance.
Paid Parental Leave - Up to 12 weeks of paid leave for employees who meet eligibility criteria.
Life Insurance & AD&D - 100% of premiums covered by Milliman.
Short-Term and Long-Term Disability - Fully paid by Milliman.
Equal Opportunity:
All qualified applicants will receive consideration for employment, without regard to race, color, religion, sex, sexual orientation, national origin, disability, or status as a protected veteran.
Data Scientist II - Clinical - W2 Only
Data engineer job in North Chicago, IL
Duration: 12 Months
Contract Type: W2
Primary Skills: AWS Cloud Formation, R, Data Analysis, Python, SQL
Computational Data Scientist
Seeking a highly motivated and driven data scientist to join our Quantitative, Translational & ADME Sciences (QTAS) team in North Chicago, IL. The QTAS organization supports the discovery and early clinical pipeline through mechanistically investigating how drug molecules are absorbed, distributed, excreted, metabolized, and transported across the body to predict duration and intensity of exposure and pharmacological action of drug candidates in humans. Digital workflows, systems, IT infrastructure, and computational sciences are critical and growing components within the organization to help deliver vital results in the early pipeline. This specific job role is designed to act as an SME (subject matter expert) for data science within the technical organization of QTAS.
For this role, the successful candidate will have a substantial background in data and computer science with an emphasis on supporting, developing and implementing IT solutions for lab-based systems as well as utilizing computational methods. The candidate should possess a deep knowledge in AI/ML, with a focus on both supervised (like neural networks, decision trees) and unsupervised learning techniques (such as clustering, PCA). They must be adept at applying these methods to large datasets for predictive modeling; in this context- drug properties and discovery patterns in ADME datasets. Proficiency in model validation, optimization, and feature engineering is essential to ensure accuracy and robustness in predictions. The role requires effective collaboration with interdisciplinary teams to integrate AI insights into drug development processes. Strong communication skills are necessary to convey complex AI/ML concepts to a diverse audience.
Key Responsibilities:
Provide business-centric support of IT systems and platforms in support of our scientific operations and processes.
Develop, implement, troubleshoot and support solutions independently for the digital infrastructure and workflows within QTAS including custom platform/coding solutions, visualization tools, integration of new software/hardware, and analysis and troubleshooting support.
Lead the analysis of large ADME-related datasets, contributing to the understanding and optimization of drug absorption, distribution, metabolism, and excretion properties.
Apply computational tools and machine learning/deep learning techniques to analyze and interpret complex biological data relevant to drug discovery.
Develop predictive models and algorithms for identifying potential drug candidates with desirable ADME properties.
Collaborate with teams across biological sciences and drug discovery to integrate computational insights into practical drug development strategies.
Communicate findings and strategic input to cross-functional teams, including Translational Science, Medicine, and Late Development groups.
Qualifications:
Bachelor's or Master's degree in Data Science, Computer Science, Computational Chemistry, or a related relevant discipline, typically with 5 to 10 (BS) or 2 to 5 (MS) years of related industry experience.
Passion for data analysis, solving technical problems and applying new technologies to further scientific goals.
Strong proficiency in programming (e.g., SQL, Python, R, MATLAB), database technologies (Oracle, MySQL, relational databases; graph databases are a plus), machine learning/deep learning (network architectures are a plus), dimensionality reduction techniques (e.g., PCA), and, ideally, cheminformatics software suites
Demonstrated experience in the analysis and visualization of large datasets. Proficiency in any of the following technologies is valued: Python (including libraries such as Matplotlib, Seaborn, Plotly, Bokeh), JavaScript, Julia, Java/Scala, or R (including Shiny).
Comfortable working in cloud and high-performance computational environments (e.g., AWS and Oracle Cloud)
Excellent communication skills and ability to work effectively in interdisciplinary teams.
Understanding of pharma R&D process and challenges in drug discovery is preferred.
Proven ability to work in a team environment; ability to work well in a collaborative fast-paced team environment.
Excellent oral and written communication skills and the ability to convey IT related notions to cross-disciplinary scientists.
Thorough theoretical and practical understanding of own scientific discipline
Background and/or experience in the biotechnology, pharmaceutical, biology, or chemistry fields is preferred.
Key Leadership Competencies:
Builds strong relationships with peers and cross-functionally with partners outside of team to enable higher performance.
Learns fast, grasps the "essence" and can change course quickly where indicated.
Raises the bar and is never satisfied with the status quo.
Creates a learning environment, open to suggestions and experimentation for improvement.
Embraces the ideas of others, nurtures innovation and manages innovation to reality.
Kindly share your resume at **********************