Data Scientist, NLP
Data scientist job in Madison, WI
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
We are looking for a motivated Data Scientist to help Datavant revolutionize the healthcare industry with AI. In this critical role, you will work on a wide range of healthcare problems with an unparalleled amount of data.
You'll join a team focused on deep medical document understanding, extracting meaning, intent, and structure from unstructured medical and administrative records. Our mission is to build intelligent systems that can reliably interpret complex, messy, and high-stakes healthcare documentation at scale.
This role is a unique blend of applied machine learning, NLP, and product thinking. You'll collaborate closely with cross-functional teams to:
+ Design and develop models to extract entities, detect intents, and understand document structure
+ Tackle challenges like long-context reasoning, layout-aware NLP, and ambiguous inputs
+ Evaluate model performance where ground truth is partial, uncertain, or evolving
+ Shape the roadmap and success metrics for replacing legacy document processing systems with smarter, scalable solutions
We operate in a high-trust, high-ownership environment where experimentation and shipping value quickly are key. If you're excited by building systems that make healthcare data more usable, accurate, and safe, please reach out.
**Qualifications**
+ 3+ years of experience with data science and machine learning in an industry setting, particularly in designing and building NLP models.
+ Proficiency with Python
+ Experience with the latest in language models (transformers, LLMs, etc.)
+ Proficiency with standard data analysis toolkits such as SQL, NumPy, pandas, etc.
+ Proficiency with deep learning frameworks like PyTorch (preferred) or TensorFlow
+ Industry experience shepherding ML/AI projects from ideation to delivery
+ Demonstrated ability to influence company KPIs with AI
+ Demonstrated ability to navigate ambiguity
**Bonus Experience**
+ Experience with document layout analysis (using vision, NLP, or both).
+ Experience with Spark/PySpark
+ Experience with Databricks
+ Experience in the healthcare industry
**Responsibilities**
+ Play a key role in the success of our products by developing models for document understanding tasks.
+ Perform error analysis, data cleaning, and other related tasks to improve models.
+ Collaborate with your team by making recommendations for the development roadmap of a capability.
+ Work with other data scientists and engineers to optimize machine learning models and insert them into end-to-end pipelines.
+ Understand product use-cases and define key performance metrics for models according to business requirements.
+ Set up systems for long-term improvement of models and data quality (e.g. active learning, continuous learning systems, etc.).
**After 3 Months, You Will...**
+ Have a strong grasp of technologies upon which our platform is built.
+ Be fully integrated into ongoing model development efforts with your team.
**After 1 Year, You Will...**
+ Be independent in reading literature and doing research to develop models for new and existing products.
+ Have ownership over models internally, communicating with product managers, customer success managers, and engineers to make the model and the encompassing product succeed.
+ Be a subject matter expert on Datavant's models and a source from which other teams can seek information and recommendations.
\#LI-BC1
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$136,000-$170,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement and Know Your Rights, and explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please submit a request by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request. Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy.
Data Engineer
Data engineer job in Madison, WI
**This role is based remotely, but if you live within a 50-mile radius of Austin, Detroit, Milford, or Mountain View, you are expected to report to that location three times a week, at minimum.**
**What You'll Do**
+ Communicate and maintain Master Data, Metadata, Data Management Repositories, Logical Data Models, and Data Standards
+ Create and maintain optimal data pipeline architecture
+ Assemble large, complex data sets that meet functional and non-functional business requirements
+ Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
+ Build industrialized analytic datasets and delivery mechanisms that utilize the data pipeline to deliver actionable insights into customer acquisition, operational efficiency and other key business performance metrics
+ Work with business partners on data-related technical issues and develop requirements to support their data infrastructure needs
+ Create highly consistent and accurate analytic datasets suitable for business intelligence and data scientist team members
**Your Skills & Abilities (Required Qualifications)**
+ Bachelor's degree in business administration, marketing, communications, information systems or related field
+ 7 or more years of experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
+ At least 3 years of hands-on experience with big data tools: Hadoop, Spark, Kafka, etc.
+ Mastery of databases, including advanced SQL and NoSQL databases such as Postgres and Cassandra
+ Experience with data wrangling and preparation tools: Alteryx, Trifacta, SAS, Datameer
+ Experience with stream-processing systems: Storm, Spark Streaming, etc.
+ Ability to tackle problems quickly and completely
+ Ability to identify tasks which require automation and automate them
+ A demonstrable understanding of networking/distributed computing environment concepts
+ Ability to multi-task and stay organized in a dynamic work environment
**What Can Give You a Competitive Advantage (Preferred Qualifications)**
+ Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
+ AWS cloud services: EC2, EMR, RDS, Redshift
This job is not eligible for relocation benefits. Any relocation costs would be the responsibility of the selected candidate.
**Compensation:**
+ The expected base compensation for this role is: $125,000 - $205,100. Actual base compensation within the identified range will vary based on factors relevant to the position.
+ Bonus Potential: An incentive pay program offers payouts based on company performance, job level, and individual performance.
+ Benefits: GM offers a variety of health and wellbeing benefit programs. Benefit options include medical, dental, vision, Health Savings Account, Flexible Spending Accounts, retirement savings plan, sickness and accident benefits, life insurance, paid vacation & holidays, tuition assistance programs, employee assistance program, GM vehicle discounts and more.
GM DOES NOT PROVIDE IMMIGRATION-RELATED SPONSORSHIP FOR THIS ROLE. DO NOT APPLY FOR THIS ROLE IF YOU WILL NEED GM IMMIGRATION SPONSORSHIP NOW OR IN THE FUTURE. THIS INCLUDES DIRECT COMPANY SPONSORSHIP, ENTRY OF GM AS THE IMMIGRATION EMPLOYER OF RECORD ON A GOVERNMENT FORM, AND ANY WORK AUTHORIZATION REQUIRING A WRITTEN SUBMISSION OR OTHER IMMIGRATION SUPPORT FROM THE COMPANY (e.g., H-1B, OPT, STEM OPT, CPT, TN, J-1, etc.)
\#LI-CC1
**About GM**
Our vision is a world with Zero Crashes, Zero Emissions and Zero Congestion and we embrace the responsibility to lead the change that will make our world better, safer and more equitable for all.
**Why Join Us**
We believe we all must make a choice every day - individually and collectively - to drive meaningful change through our words, our deeds and our culture. Every day, we want every employee to feel they belong to one General Motors team.
**Benefits Overview**
From day one, we're looking out for your well-being, at work and at home, so you can focus on realizing your ambitions. Learn how GM supports a rewarding career that rewards you personally by visiting our Total Rewards resources.
**Non-Discrimination and Equal Employment Opportunities (U.S.)**
General Motors is committed to being a workplace that is not only free of unlawful discrimination, but one that genuinely fosters inclusion and belonging. We strongly believe that providing an inclusive workplace creates an environment in which our employees can thrive and develop better products for our customers.
All employment decisions are made on a non-discriminatory basis without regard to sex, race, color, national origin, citizenship status, religion, age, disability, pregnancy or maternity status, sexual orientation, gender identity, status as a veteran or protected veteran, or any other similarly protected status in accordance with federal, state and local laws.
We encourage interested candidates to review the key responsibilities and qualifications for each role and apply for any positions that match their skills and capabilities. Applicants in the recruitment process may be required, where applicable, to successfully complete a role-related assessment(s) and/or a pre-employment screening prior to beginning employment. To learn more, visit How We Hire.
**Accommodations**
General Motors offers opportunities to all job seekers including individuals with disabilities. If you need a reasonable accommodation to assist with your job search or application for employment, email us at Careers.Accommodations@GM.com or call us at ************. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying.
We are leading the change to make our world better, safer and more equitable for all through our actions and how we behave. Learn more about:
**Our Company**
**Our Culture**
**How We Hire**
Our diverse team of employees bring their collective passion for engineering, technology and design to deliver on our vision of a world with Zero Crashes, Zero Emissions and Zero Congestion. We are looking for adventure-seekers and imaginative thought leaders to help us transform mobility.
Explore our global locations
We are determined to lead change for the world through technology, ingenuity and harnessing the creativity of our diverse team. Join us to help lead the change that will make our world better, safer and more equitable for all by becoming a member of GM's Talent Community (beamery.com). As a part of our Talent Community, you will receive updates about GM, open roles, career insights and more.
Please note that filling out the form below will not add you to our Talent Community automatically; you will need to use the link above. If you are seeking to apply to a specific role, we encourage you to click "Apply Now" on the job posting of interest.
The policy of General Motors is to extend opportunities to qualified applicants and employees on an equal basis regardless of an individual's age, race, color, sex, religion, national origin, disability, sexual orientation, gender identity/expression or veteran status. Additionally, General Motors is committed to being an Equal Employment Opportunity Employer and offers opportunities to all job seekers including individuals with disabilities. If you need a reasonable accommodation to assist with your job search or application for employment, email us at Careers.Accommodations@GM.com. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying.
Data Engineer - Platform & Product
Data engineer job in Milwaukee, WI
We are seeking a skilled and solution-oriented Data Engineer to contribute to the development of our growing Data Engineering function. This role will be instrumental in designing and optimizing data workflows, building domain-specific pipelines, and enhancing platform services. The ideal candidate will help evolve our Snowflake-based data platform into a scalable, domain-oriented architecture that supports business-critical analytics and machine learning initiatives.
Responsibilities
The candidate is expected to:
* Design and build reusable platform services, including pipeline frameworks, CI/CD workflows, data validation utilities, data contracts, and lineage integrations
* Develop and maintain data pipelines for sourcing, transforming, and delivering trusted datasets into Snowflake
* Partner with Data Domain Owners to onboard new sources, implement data quality checks, and model data for analytics and machine learning use cases
* Collaborate with the Lead Data Platform Engineer and Delivery Manager to deliver monthly feature releases and support bug remediation
* Document and promote best practices for data pipeline development, testing, and deployment
Qualifications
The successful candidate will possess strong analytical skills and attention to detail. Additionally, the ideal candidate will possess:
* 3-6 years of experience in data engineering or analytics engineering
* Strong SQL and Python skills; experience with dbt or similar transformation frameworks
* Demonstrated experience building pipelines and services on Snowflake or other modern cloud data platforms
* Understanding of data quality, validation, lineage, and schema evolution
* Background in financial or market data (trading, pricing, benchmarks, ESG) is a plus
* Strong collaboration and communication skills, with a passion for enabling domain teams
Privacy Notice for California Applicants
Artisan Partners Limited Partnership is an equal opportunity employer. Artisan Partners does not discriminate on the basis of race, religion, color, national origin, gender, age, disability, marital status, sexual orientation or any other characteristic protected under applicable law. All employment decisions are made on the basis of qualifications, merit and business need.
\#LI-Hybrid
Big Data/Hadoop with AWS Cloud and Spark Experience
Data engineer job in Madison, WI
We work directly with Infosys. Established in 1981, Infosys is a NYSE-listed global consulting and IT services company with more than 198,000 employees. From a capital of US$250, we have grown to become a US$10.4 billion (LTM Q1 FY18 revenues) company with a market capitalization of approximately US$34.50 billion.
In our journey of over 35 years, we have catalyzed some of the major changes that have led to India's emergence as the global destination for software services talent. We pioneered the Global Delivery Model and became the first IT Company from India to be listed on NASDAQ. Our employee stock options program created some of India's first salaried millionaires.
Read more about the defining moments in the history of Infosys.
Hi,
Hope you are doing great. This is Siva Maddineni. We are currently looking for a Big Data consultant in Madison, WI (Green Card holders or US Citizens only).
At the time of submission, I need visa and ID proof. Please check with your consultants before submitting; forward candidates only if they are comfortable with this requirement.
Client: Infosys
Title: Big Data
Location: Madison, WI (Green Card holders or US Citizens only)
Duration: 6 Months
Experience Needed: Minimum 9+ years
Rate: $55-60/hr on W2
Must Have Skills:
1. AWS cloud
2. Big Data Hadoop
3. Spark
4. HBase
5. Azure
Detailed Job Description:
Responsible for architecting, designing, and developing cloud infrastructure and applications. Provide operational and technical expertise to initiatives for the Big Data analytics platform, data, and applications, addressing a broad range of technologies including Hadoop, Spark, Kafka, HBase, Cassandra, and Elasticsearch on Linux hosted on Azure or AWS cloud.
Top 3 responsibilities the subcontractor will shoulder and execute:
1. Architect, design, and develop cloud infrastructure and applications
2. Provide operational and technical expertise to initiatives for the Big Data analytics platform, data, and applications, addressing a broad range of technologies including Hadoop, Spark, Kafka, HBase, Cassandra, and Elasticsearch on Linux hosted on Azure or AWS cloud
3. Research, architect, design, and deploy new tools, frameworks, and patterns to build a sustainable big data platform
Additional Information
All your information will be kept confidential according to EEO guidelines.
RISE Data Scientist
Data scientist job in Madison, WI
Current Employees: If you are currently employed at any of the Universities of Wisconsin, log in to Workday to apply through the internal application process.
Are you passionate about improving patient health outcomes, optimizing healthcare processes, and advancing science? Join our dynamic Informatics and Information Technology team at the University of Wisconsin School of Medicine and Public Health in Madison, Wisconsin. We are committed to revolutionizing healthcare through implementation of advanced data science approaches, conducting cutting-edge data-centric research, and generating real-world evidence to improve patient health outcomes at UW Health and beyond. We are seeking an experienced informatician/Data Scientist to be our Patient Reported Outcomes Data Scientist. On a day-to-day basis, the incumbent could expect to engage in the following activities:
Work with stakeholders to develop and modernize patient reported outcomes collection, storage, and utilization
Integrate research workflows into the collection of patient reported outcomes for clinical purposes.
When appropriate, coordinate with the technical teams to ensure data feeds from Epic to REDCap and vice-versa.
Assist users in designing, developing, and managing REDCap projects related to patient reported outcomes, including data collection instruments, surveys, and databases.
Coordinate with technical teams to troubleshoot and resolve technical issues related to patient reported outcomes data usage and functionality.
Develop and maintain comprehensive documentation, user guides, and training materials for the use of patient reported outcomes data.
Conduct workshops, webinars, and one-on-one training sessions to enhance user proficiency with patient reported outcomes data.
Collaborate with infrastructure and data teams to ensure the patient reported outcomes data storage is up-to-date and operating efficiently.
Monitor and evaluate user feedback to identify areas for improvement and implement enhancements to the patient reported outcomes support services.
Stay current with the latest developments and best practices in data management.
It is anticipated that this position will be remote and requires work be performed at an offsite, non-campus work location.
This position is part of the Wisconsin Research, Innovation and Scholarly Excellence (RISE) Initiative. Through accelerated and strategic faculty hiring, research infrastructure enhancement, interdisciplinary collaboration, and increased student and educational opportunities, RISE addresses complex societal challenges of importance to the state, nation and world. Building on UW-Madison's strengths, RISE expands the University's successful track record of connecting with communities and industry on collaborative solutions.
Over the next three academic years, UW-Madison will substantially increase current hiring levels, bringing 150 new RISE faculty to campus. Candidates hired through RISE will join a community of scholars working across disciplines, schools and colleges on research, teaching and outreach endeavors. The community will engage regularly in venues such as seminar series and colloquia to share ongoing projects and identify opportunities to work together. The University will support the community, facilitating access to research infrastructure, and funding to support broad and rich collaboration.
Further information regarding RISE can be found at: ***********************
Key Job Responsibilities:
Composes and assembles reproducible workflows and reports to clearly articulate patterns to researchers and/or administrators
Contributes to the informatics and IT education and training initiatives
Identifies and implements, or guides others in implementing, appropriate data science techniques to find data patterns and answer research questions chosen by the lead researcher, including data visualization, statistical analysis, machine learning, and data mining
Serves as an institutional subject matter expert and liaison to key internal and external stakeholders regarding data science best practices and methodologies and represents the interests of data science
Prepares data sets for analysis including cleaning/quality assurance, transformations, restructuring, and integration of multiple data sources
Organizes and automates project steps for data preparation and analysis
Documents approaches to address research questions and contributes to the establishment of reproducible research methodologies and analysis workflows
Participates in interdisciplinary and collaborative efforts with other departments, schools and colleges
Department:
School of Medicine and Public Health, Office of Informatics and Information Technology, RISE.
This position is within the School of Medicine and Public Health's Office of Informatics and Information Technology (IIT). IIT is a multidisciplinary team of data scientists, engineers, developers, and IT support staff. We offer a variety of Informatics and IT services to departments and research staff within the School of Medicine and Public Health and beyond to support the conduct of high-quality clinical and translational research.
Informatics: We provide innovative solutions and training for a broad spectrum of clinical and translational research utilizing real-world data to facilitate rapid translation of research findings into clinical practice, with an emphasis on precision medicine, healthcare delivery, and population health.
Technology Solutions: We provide technology solutions to the School of Medicine and Public Health including cybersecurity, educational technology, and IT support.
Compensation:
The starting salary for the position is $95,000 but is negotiable based on experience and qualifications.
Employees in this position can expect to receive benefits such as generous vacation, holidays, and sick leave; competitive insurances and savings accounts; and retirement benefits. For more information, refer to the campus benefits webpage and the SMPH Faculty/Academic Staff Benefits Flyer 2026.
Required Qualifications:
Proven experience with REDCap, including project design, data management, and user support.
Strong technical skills and familiarity with database management and data collection tools.
Excellent communication and interpersonal skills, with the ability to explain complex technical concepts to non-technical users.
Strong problem-solving and analytical abilities.
Ability to work independently and as part of a collaborative team.
Preferred Qualifications:
Experience in a research or academic environment.
Knowledge of regulatory requirements and standards related to research data (e.g., HIPAA, GDPR).
Familiarity with other data management and analysis tools (e.g., SQL, R, Python).
Education:
PhD preferred, ideally with a focus in Biomedical Sciences, Data Science, Epidemiology, Informatics, Information Technology, or a related field
How to Apply:
For the best experience completing your application, we recommend using Chrome or Firefox as your web browser.
To apply for this position, select either “I am a current employee” or “I am not a current employee” under Apply Now. You will then be prompted to upload your application materials.
Important: The application has only one attachment field. Upload all required documents in that field, either as a single combined file or as multiple files in the same upload area.
Upload required documents:
• Cover letter
• Resume
Your cover letter should address how your training and experience align with the required and preferred qualifications listed above. Application reviewers will rely on these written materials to determine which applicants move forward in the process. References will be requested from final candidates. All applicants will be notified once the search concludes and a candidate is selected.
University sponsorship is not available for this position, including transfers of sponsorship and TN visas. The selected applicant will be responsible for ensuring their continuous eligibility to work in the United States (i.e. a citizen or national of the United States, a lawful permanent resident, a foreign national authorized to work in the United States without the need of an employer sponsorship) on or before the effective date of appointment. This position is an ongoing position that will require continuous work eligibility. If you are selected for this position you must provide proof of work authorization and eligibility to work.
Contact Information:
Cody Roekle, ****************, ************
Relay Access (WTRS): 7-1-1. See RELAY_SERVICE for further information.
Institutional Statement on Diversity:
Diversity is a source of strength, creativity, and innovation for UW-Madison. We value the contributions of each person and respect the profound ways their identity, culture, background, experience, status, abilities, and opinion enrich the university community. We commit ourselves to the pursuit of excellence in teaching, research, outreach, and diversity as inextricably linked goals.
The University of Wisconsin-Madison fulfills its public mission by creating a welcoming and inclusive community for people from every background - people who as students, faculty, and staff serve Wisconsin and the world.
The University of Wisconsin-Madison is an Equal Opportunity Employer.
Qualified applicants will receive consideration for employment without regard to, including but not limited to, race, color, religion, sex, sexual orientation, national origin, age, pregnancy, disability, or status as a protected veteran and other bases as defined by federal regulations and UW System policies. We promote excellence by acknowledging skills and expertise from all backgrounds and encourage all qualified individuals to apply. For more information regarding applicant and employee rights and to view federal and state required postings, visit the Human Resources Workplace Poster website.
To request a disability or pregnancy-related accommodation for any step in the hiring process (e.g., application, interview, pre-employment testing, etc.), please contact the Divisional Disability Representative (DDR) in the division you are applying to. Please make your request as soon as possible to help the university respond most effectively to you.
Employment may require a criminal background check. It may also require your references to answer questions regarding misconduct, including sexual violence and sexual harassment.
The University of Wisconsin System will not reveal the identities of applicants who request confidentiality in writing, except that the identity of the successful candidate will be released. See Wis. Stat. sec. 19.36(7).
The Annual Security and Fire Safety Report contains current campus safety and disciplinary policies, crime statistics for the previous 3 calendar years, and on-campus student housing fire safety policies and fire statistics for the previous 3 calendar years. UW-Madison will provide a paper copy upon request; please contact the University of Wisconsin Police Department.
AI Data Scientist
Data engineer job in Milwaukee, WI
What you will do
Clarios is seeking a skilled AI Data Scientist to design, develop, and deploy machine learning and AI solutions that unlock insights, optimize processes, and drive innovation across operations, offices, and products. This role focuses on transforming complex, high-volume data into actionable intelligence and enabling predictive and prescriptive capabilities that deliver measurable business impact. The AI Data Scientist will collaborate closely with AI Product Owners and business SMEs to ensure solutions are robust, scalable, and aligned with enterprise objectives.
This role requires an analytical, innovative, and detail-oriented team member with a strong foundation in AI/ML and a passion for solving complex problems. The individual must be highly collaborative, an effective communicator, and committed to continuous learning and improvement. The role is onsite three days a week in Glendale.
How you will do it
Hypothesis Framing & Metric Measurement: Translate business objectives into well-defined AI problem statements with clear success metrics and decision criteria. Prioritize opportunities by ROI, feasibility, risk, and data readiness; define experimental plans and acceptance thresholds to progress solutions from concept to scaled adoption.
Data Analysis & Feature Engineering: Conduct rigorous exploratory data analysis to uncover patterns, anomalies, and relationships across heterogeneous datasets. Apply advanced statistical methods and visualization to generate actionable insights; engineer high-value features (transformations, aggregations, embeddings) and perform preprocessing (normalization, encoding, outlier handling, dimensionality reduction). Establish data quality checks, schemas, and data contracts to ensure trustworthy inputs.
Model Development & Iteration: Design and build models across classical ML and advanced techniques, including deep learning, NLP, computer vision, time-series forecasting, anomaly detection, and optimization. Run statistically sound experiments (cross-validation, holdouts, A/B testing), perform hyperparameter tuning and model selection, and balance accuracy, latency, stability, and cost. Extend beyond prediction to prescriptive decision-making (policy, scheduling, setpoint optimization, reinforcement learning), with domain applications such as OEE improvement, predictive maintenance, production process optimization, and digital twin integration in manufacturing contexts.
MLOps & Performance: Develop end-to-end pipelines for ingestion, training, validation, packaging, and deployment using CI/CD, reproducibility, and observability best practices. Implement performance and drift monitoring, automated retraining triggers, rollback strategies, and robust versioning to ensure reliability in dynamic environments. Optimize for scale, latency, and cost; support real-time inference and edge/plant-floor constraints under defined SLAs/SLOs.
Collaboration & Vendor Leadership: Partner with AI Product Owners, business SMEs, IT, and operations teams to translate requirements into pragmatic, integrated solutions aligned with enterprise standards. Engage process owners to validate data sources, constraints, and hypotheses; design human-in-the-loop workflows that drive adoption and continuous feedback. Provide technical oversight of external vendors: evaluating capabilities, directing data scientists, engineers, and solution architects, validating architectures and algorithms, and ensuring seamless integration, timely delivery, and measurable value. Mentor peers, set coding/modeling standards, and foster a culture of excellence.
Responsible AI & Knowledge Management: Ensure data integrity, model explainability, fairness, privacy, and regulatory compliance throughout the lifecycle. Establish model risk controls; maintain documentation (model cards, data lineage, decision logs), audit trails, and objective acceptance criteria for production release. Curate reusable assets (feature catalogs, templates, code libraries) and best-practice playbooks to accelerate delivery while enforcing Responsible AI principles and rigorous quality assurance.
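As a rough illustration of the experimentation workflow described in the responsibilities above (cross-validation, holdouts, hyperparameter tuning), here is a minimal scikit-learn sketch; the synthetic dataset, the gradient-boosting model, and the parameter grid are all illustrative assumptions, not part of the role:

```python
# Hypothetical sketch: cross-validated hyperparameter tuning with a holdout set.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for real business data (assumption for illustration)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Grid search with 5-fold cross-validation selects hyperparameters soundly
param_grid = {"n_estimators": [50, 100], "max_depth": [2, 3]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)

# Final acceptance check on data the search never saw
holdout_score = search.score(X_test, y_test)
```

The holdout score, not the cross-validation score, is what would be compared against an acceptance threshold before promoting the model.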
What we look for
5+ years of experience in data science and machine learning, delivering production-grade solutions in corporate or manufacturing environments.
Strong proficiency in Python and common data science libraries (e.g., Pandas, NumPy, scikit-learn); experience with deep learning frameworks (TensorFlow, PyTorch) and advanced techniques (NLP, computer vision, time-series forecasting).
Hands-on experience with data preprocessing, feature engineering, and EDA for large, complex datasets.
Expertise in model development, validation, and deployment, including hyperparameter tuning, optimization, and performance monitoring.
Experience interacting with databases and writing SQL queries.
Experience using data visualization techniques for analysis and model explanation.
Familiarity with MLOps best practices: CI/CD pipelines, containerization (Docker), orchestration, model versioning, and drift monitoring.
Knowledge of cloud platforms (e.g., Microsoft Azure, Snowflake) and distributed computing frameworks (e.g., Spark) for scalable AI solutions.
Experience with agile methodologies and collaboration tools (e.g., JIRA, Azure DevOps), working in matrixed environments across IT, analytics, and business teams.
Strong analytical and business acumen, with the ability to quantify ROI and build business cases for AI initiatives.
Excellent communication and stakeholder engagement skills; able to present insights and recommendations to technical and non-technical audiences.
Knowledge of LLMs and VLMs is a strong plus.
Understanding of manufacturing systems (SCADA, PLCs, MES) and the ability to integrate AI models into operational workflows is a strong plus.
Willingness to travel up to 10% as needed.
#LI-AL1
#LI-HYBRID
What you get:
Medical, dental and vision care coverage and a 401(k) savings plan with company matching - all starting on date of hire
Tuition reimbursement, perks, and discounts
Parental and caregiver leave programs
All the usual benefits such as paid time off, flexible spending, short-and long-term disability, basic life insurance, business travel insurance, Employee Assistance Program, and domestic partner benefits
Global market strength and worldwide market share leadership
HQ location has earned LEED certification for sustainability and offers a full-service cafeteria and workout facility
Clarios has been recognized as one of 2025's Most Ethical Companies by Ethisphere. This prestigious recognition marks the third consecutive year Clarios has received this distinction.
Who we are:
Clarios is the force behind the world's most recognizable car battery brands, powering vehicles from leading automakers like Ford, General Motors, Toyota, Honda, and Nissan. With 18,000 employees worldwide, we develop, manufacture, and distribute energy storage solutions while recovering, recycling, and reusing up to 99% of battery materials, setting the standard for sustainability in our industry. At Clarios, we're not just making batteries; we're shaping the future of sustainable transportation. Join our mission to innovate, push boundaries, and make a real impact. Discover your potential at Clarios, where your power meets endless possibilities.
Veterans/Military Spouses:
We value the leadership, adaptability, and technical expertise developed through military service. At Clarios, those capabilities thrive in an environment built on grit, ingenuity, and passion, where you can grow your career while helping to power progress worldwide. All qualified applicants will be considered without regard to protected characteristics.
We recognize that people come with a wealth of experience and talent beyond just the technical requirements of a job. If your experience is close to what you see listed here, please apply. Diversity of experience and skills combined with passion is key to challenging the status quo. Therefore, we encourage people from all backgrounds to apply to our positions. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, status as a protected veteran, or other characteristics protected by law. As a federal contractor, we are committed to not discriminating against any applicant or employee based on these protected statuses. We will also take affirmative action to ensure equal employment opportunities. Please let us know if you require accommodations during the interview process by emailing Special.Accommodations@Clarios.com. We are an Equal Opportunity Employer and value diversity in our teams in terms of work experience, area of expertise, and all characteristics protected by laws in the countries where we operate. For more information on our commitment to sustainability, diversity, and equal opportunity, please read our latest report. We want you to know your rights because EEO is the law.
A Note to Job Applicants: please be aware of scams being perpetrated through the Internet and social media platforms. Clarios will never require a job applicant to pay money as part of the application or hiring process.
To all recruitment agencies: Clarios does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Clarios employees or any other company location. Clarios is not responsible for any fees related to unsolicited resumes/CVs.
Lead Data & BI Scientist
Data engineer job in Milwaukee, WI
The Company Zurn Elkay Water Solutions Corporation is a thriving, values-driven company focused on doing the right things. We're a fast growing, publicly traded company (NYSE: ZWS), with an enduring reputation for integrity, giving back, and providing an engaging, inclusive environment where careers flourish and grow.
Named by Newsweek as One of America's Most Responsible Companies and an Energage USA Top Workplace, at Zurn Elkay Water Solutions Corporation, we never forget that our people are at the center of what makes us successful. They are the driving force behind our superior quality, product ingenuity, and exceptional customer experience. Our commitment to our people and their professional development is a recipe for success that has fueled our growth for over 100 years, as one of today's leading international suppliers of plumbing and water delivery solutions.
Headquartered in Milwaukee, WI, Zurn Elkay Water Solutions Corporation employs over 2800 employees worldwide, working from 24 locations across the U.S., China, Canada, Dubai, and Mexico, with sales offices available around the globe. We hope you'll visit our website and learn more about Zurn Elkay at zurnelkay.com.
If you're ready to join a company where what you do makes a difference and you have pride in the work you are doing, talk to us about joining the Zurn Elkay Water Solutions Corporation family!
If you are a current employee, please navigate here to apply internally.
Job Description
The Lead Data & BI Scientist is a senior-level role that blends advanced data science capabilities with business intelligence leadership. This position is responsible for driving strategic insight generation, building predictive models, and leading analytics initiatives across departments such as sales, marketing, pricing, manufacturing, logistics, supply chain, and finance.
The role requires both technical depth and business acumen to ensure that data-driven solutions are aligned with organizational goals and deliver measurable value.
Key Accountabilities
Strategic Insight & Business Partnership
* Partner with business leaders to identify high-impact opportunities and form hypotheses.
* Present findings and recommendations to leadership in a clear, impactful manner.
* Demonstrate ROI and business value from analytics initiatives.
Data Science Leadership
* Define and implement data science processes, tools, and governance frameworks.
* Mentor junior team members and foster a culture of continuous learning.
Advanced Analytics & Modeling
* Design, build and validate predictive models, machine learning algorithms and statistical analyses.
* Translate complex data into actionable insights for strategic decision-making.
Technology & Tools
* Utilize tools such as Tableau, Power BI, OBIEE/OAC, Snowflake, SQL, R, Python, and data catalogs.
* Stay current with emerging technologies like agentic analytics and AI-driven insights.
* Evaluate and recommend BI platforms and data science tools.
Project & Change Management
* Manage analytics projects from inception to delivery.
* Provide training and change management support for new tools and processes.
* Lead the establishment of a data science center of excellence.
Qualifications/Requirements
* Bachelor's degree required in a quantitative field such as engineering, mathematics, science, or MIS; master's degree preferred
* 10+ years of overall work experience
* 7+ years of experience in data science and statistical analysis
* Strong understanding of and experience with analytics tools such as Tableau and OBIEE/OAC (or similar) for reporting and visualization; Snowflake for data storage; data modeling, data prep, or ETL tools; R or Python; SQL; and data catalogs.
* Strong understanding of and experience with multiple statistical and quantitative models and techniques, such as (but not limited to) those used for predictive analytics, machine learning, AI, linear models and optimization, clustering, and decision trees.
* Deep experience applying data science to solve problems in at least one of the following areas is required. Experience in multiple areas is preferred: marketing, manufacturing, pricing, logistics, sourcing, and sales.
* Strong communication (verbal, written) skills, and ability to work with all levels of the organization effectively
* Working knowledge of and proven experience applying project management tools
* Strong analytical skills
* Ability to lead and mentor the work of others
* High degree of creativity and latitude is expected
Capabilities and Success Factors
* Decision Quality - Making good and timely decisions that keep the organization moving forward.
* Manages Complexity - Making sense of complex, high quantity and sometimes contradictory information to effectively solve problems.
* Plans & Aligns - Planning and prioritizing work to meet commitments aligned with organizational goals.
* Drives Results - Consistently achieving results, even under tough circumstances.
* Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Total Rewards and Benefits
* Competitive Salary
* Medical, Dental, Vision, STD, LTD, AD&D, and Life Insurance
* Matching 401(k) Contribution
* Health Savings Account
* Up to 3 weeks starting Vacation (may increase with tenure)
* 12 Paid Holidays
* Annual Bonus Eligibility
* Educational Reimbursement
* Matching Gift Program
* Employee Stock Purchase Plan - purchase company stock at a discount!
THIRD PARTY AGENCY: Any unsolicited submissions received from recruitment agencies will be considered property of Zurn Elkay, and we will not be liable for any fees or obligations related to those submissions.
Equal Opportunity Employer - Minority/Female/Disability/Veteran
Data Scientist
Data engineer job in Madison, WI
Innovizant LLC (headquartered in Chicago, USA) is a leading-edge, global IT services organization with sales and delivery offices in Asia. Innovizant is a full-service IT provider focused on delivering innovative, value-driven business analytics solutions, leveraging data science, data engineering, and decision science to provide actionable insights that help our financial services clients in banking, insurance, and credit unions achieve their business goals.
Innovizant is made up of exceptional data scientists and domain experts with deep experience in financial services. Our industry solutions include credit risk insights, customer churn analysis, customer segmentation, fraud detection, asset and liability analysis, channel optimization analytics, financial advisor network analytics, and product bundling analytics.
Our accelerators and solution frameworks include FIN-CDO (pre-delivered data strategies for the office of the Chief Data Officer), BASEL-PRO (for achieving compliance with the industry requirements of Basel BCBS 239), and SmartCECL (risk mitigation strategies that predict default and loss given default).
With data becoming the new 'oxygen' of business, many data science consulting firms have emerged in recent years, each contributing solutions for modern clients. Today it is easy to find a data science solution; the real challenge is value realization. The true measure of success is the ability to put data science insights into actionable events.
Many organizations have ended up spending a significant chunk of their analytics budget implementing data science solutions with minimal to no returns.
Role:
Data Scientist
Location:
Madison, WI
Full Time (Direct Hire)
Job description
· 5+ years of total experience, including 2-4 years of in-depth data science experience.
· Experience in statistical modeling.
· The candidate will help the client lead this process or improve upon the program.
· Highly proficient in data analysis, data wrangling, model development, software development, A/B testing, and back testing.
· Highly proficient in Python and R.
· Highly proficient in identifying the right analytics methods and applying them; gradient boosting, decision trees, and regression are a must.
· Exposure to insurance (especially consumer/retail insurance) is a plus.
· An MS in AI/Data Science is also a plus.
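As a rough illustration of comparing the model families the posting names (gradient boosting, decision trees, regression), here is a minimal scikit-learn sketch; the synthetic dataset and hyperparameters are illustrative assumptions only:

```python
# Hypothetical sketch: like-for-like comparison of three required model families.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in data (assumption for illustration)
X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=1)

models = {
    "regression": LinearRegression(),
    "decision_tree": DecisionTreeRegressor(max_depth=4, random_state=1),
    "gradient_boosting": GradientBoostingRegressor(random_state=1),
}
# Mean cross-validated R^2 gives each family the same evaluation protocol
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

In practice the choice among these would also weigh interpretability and latency, not the score alone.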
Qualifications
Data Science, Analysis, Python, R, Statistical Modeling, Machine Learning
Additional Information
Thanks & Regards,
Aditya Prakash | Resource Manager | Innovizant LLC
Phone : ************
************************************
AZCO Data Scientist - IT (Appleton, WI)
Data engineer job in Appleton, WI
The Staff Data Scientist plays a critical role in leveraging data to drive business insights, optimize operations, and support strategic decision-making. You will be responsible for designing and implementing advanced analytical models, developing data pipelines, and applying statistical and machine learning techniques to solve complex business challenges. This position requires a balance of technical expertise, business acumen, and communication skills to translate data findings into actionable recommendations.
+ Develop and apply data solutions for cleansing data to remove errors and review consistency.
+ Perform analysis of data to discover information, business value, patterns, and trends that guide development of asset business solutions.
+ Gather data, find patterns and relationships and create prediction models to evaluate client assets.
+ Conduct research and apply existing data science methods to business line problems.
+ Monitor client assets and perform predictive and root cause analysis to identify adverse trends; choose best fit methods, define algorithms, and validate and deploy models to achieve desired results.
+ Produce reports and visualizations to communicate technical results and interpretation of trends; effectively communicate findings and recommendations to all areas of the business.
+ Collaborate with cross-functional stakeholders to assess needs, provide assistance and resolve problems.
+ Translate business problems into data science solutions.
+ Perform other duties as assigned
+ Comply with all policies and standards
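The data-cleansing duty listed above might look roughly like the following pandas sketch; the asset table, its column names, and the cleaning rules are all illustrative assumptions:

```python
# Hypothetical sketch: cleansing raw asset readings (dedupe, coerce, drop errors).
import pandas as pd

# Illustrative raw data; in practice this would come from an asset data source
raw = pd.DataFrame({
    "asset_id": [1, 1, 2, 3, 3],
    "reading": ["10.5", "10.5", "bad", "7.2", None],
})

clean = (
    raw.drop_duplicates()  # remove exact repeated rows
       # coerce readings to numbers; unparseable values become NaN
       .assign(reading=lambda d: pd.to_numeric(d["reading"], errors="coerce"))
       .dropna(subset=["reading"])  # drop rows whose reading could not be parsed
       .reset_index(drop=True)
)
```

Consistency review would then continue with range checks and cross-source reconciliation on the cleansed frame.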
**Qualifications**
+ Bachelor's degree in Analytics, Computer Science, Information Systems, Statistics, Math, or a related field from an accredited program, plus 4 years of related experience, required; experience may be substituted for the degree requirement
+ Experience in data mining and predictive analytics.
+ Strong problem-solving skills, analytical thinking, attention to detail and hypothesis-driven approach.
+ Excellent verbal/written communication, and the ability to present and explain technical concepts to business audiences.
+ Proficiency with data visualization tools (Power BI, Tableau, or Python libraries).
+ Experience with Azure Machine Learning, Databricks, or similar ML platforms.
+ Expert proficiency in Python with pandas, scikit-learn, and statistical libraries.
+ Advanced SQL skills and experience with large datasets.
+ Experience with predictive modeling, time series analysis, and statistical inference.
+ Knowledge of A/B testing, experimental design, and causal inference.
+ Familiarity with computer vision for image/video analysis.
+ Understanding of NLP techniques for document processing.
+ Experience with optimization algorithms and operations research techniques preferred.
+ Knowledge of machine learning algorithms, feature engineering, and model evaluation.
This job posting will remain open a minimum of 72 hours and on an ongoing basis until filled.
EEO/Disabled/Veterans
**Job** Information Technology
**Primary Location** US-WI-Appleton
**Schedule:** Full-time
**Travel:** Yes, 5 % of the Time
**Req ID:** 253790
#LI-MF #ACO
Data Engineer
Data engineer job in Appleton, WI
Amplifi is the go-to data consultancy for enterprise organizations that want their success to be driven by data. We empower our clients to innovate, grow and succeed by establishing and delivering strategies across all elements of the data value chain. From the governance and management of data through to analytics and automation, our integrated approach to modern data ecosystems delivers measurable results through a combination of expert consultancy and best-in-breed technology. Our company and team members are proud to empower our clients' businesses by providing exceptional solutions and value, as we truly believe their success is our success. We thrive on delivering excellent solutions and overcoming technical and business challenges. As such, we're looking for like-minded individuals to learn, grow, and mentor others as a part of the Amplifi family.
Position Summary
The Data Engineer will be responsible for designing, building, and maintaining scalable, secure data pipelines that drive analytics and support operational data products. The ideal candidate brings a strong foundation in SQL, Python, and modern data warehousing with a deep understanding of Snowflake, Databricks, or Microsoft Fabric, and a solid understanding of cloud-based architectures.
What You Will Get To Do
Design, develop, and optimize robust ETL/ELT pipelines to ingest, transform, and expose data across multiple systems.
Build and maintain data models and warehouse layers, enabling high-performance analytics and reporting.
Collaborate with analytics, product, and engineering teams to understand data needs and deliver well-structured data solutions.
Write clean, efficient, and testable code in SQL and Python to support automation, data quality, and transformation logic.
Support deployment and orchestration workflows, using Azure Data Factory, dbt, or similar tools.
Work across multi-cloud environments (Azure preferred; AWS and GCP optional) to integrate data sources and manage cloud-native components.
Contribute to CI/CD practices and data pipeline observability (monitoring, logging, alerting).
Ensure data governance, security, and compliance in all engineering activities.
Support ad hoc data science and machine learning workflows within Dataiku.
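A minimal sketch of the kind of ETL/ELT pipeline the responsibilities above describe, using SQLite purely as a stand-in for a warehouse such as Snowflake; the orders table and its columns are illustrative assumptions:

```python
# Hypothetical sketch: extract, then transform inside the warehouse (ELT style).
import sqlite3

# SQLite stands in for the warehouse connection (assumption for illustration)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [(1, 1250), (2, 3499), (2, 3499)],  # duplicate row simulates a re-ingest
)

# Transform in SQL: deduplicate and convert cents to dollars in a curated layer
conn.execute(
    """CREATE TABLE orders AS
       SELECT DISTINCT id, amount_cents / 100.0 AS amount_usd FROM raw_orders"""
)
rows = conn.execute("SELECT id, amount_usd FROM orders ORDER BY id").fetchall()
```

The same dedupe-and-cast transformation would typically live in a dbt model or Azure Data Factory activity so it is versioned and observable.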
What You Bring to the Team
4+ years of experience in a data engineering or related software engineering role.
Proficiency in SQL and Python for data manipulation, transformation, and scripting.
Strong experience working with Snowflake and MSSQL Server.
Practical knowledge of working with cloud data platforms, especially Microsoft Azure.
Experience with modern data modeling and warehouse optimization techniques.
Experience with Databricks, Azure Data Factory, or DBT preferred.
Exposure to Microsoft Fabric components like OneLake, Pipelines, or Direct Lake.
Familiarity with cloud services across AWS, GCP, or hybrid cloud environments.
Understanding of or curiosity about Dataiku for data science and advanced analytics collaboration.
Ability to work independently and with a team in a hybrid/remote environment.
Location
Wisconsin is preferred.
Travel
Ability to travel up to 10% of the time.
Benefits & Compensation
Amplifi offers excellent compensation and benefits including, but not limited to, health, dental, 401(k) program, employee assistance program, short and long-term disability, life insurance, accidental death and dismemberment (AD&D), PTO program, flex work schedules and paid holidays.
Equal Opportunity Employer
Amplifi is proud to be an equal opportunity employer. We do not discriminate against applicants based on race, religion, disability, medical condition, national origin, gender, sexual orientation, marital status, gender identity, pregnancy, childbirth, age, veteran status or other legally protected characteristics.
Sr Data Engineer, Palantir
Data engineer job in Madison, WI
**A Day in the Life:** We are seeking a talented **Sr Data Engineer, Palantir (experience required)** to join our Strategic Data & Analytics team working on Hertz's strategic applications and initiatives. This role will work in multi-disciplinary teams rapidly building high-value products that directly impact our financial performance and customer experience. You'll build cloud-native, large-scale, employee facing software using modern technologies including React, Python, Java, AWS, and Palantir Foundry.
The ideal candidate will have strong development skills across the full stack, a growth mindset, and a passion for building software at a sustainable pace in a highly productive engineering culture. Experience with Palantir Foundry is required, and we're looking for engineers who are eager to learn and committed to engineering excellence.
We expect the starting salary to be around $135k, but it will be commensurate with experience.
**What You'll Do:**
Day-to-Day Responsibilities
+ Work in balanced teams consisting of Product Managers, Product Designers, and engineers
+ Test first - We strive for Test-Driven Development (TDD) for all production code
+ CI (Continuous Integration) everything - Automation is core to our development process
+ Architect user-facing interfaces and design functions that help users visualize and interact with their data
+ Contribute to both frontend and backend codebases to enhance and develop projects
+ Build software at a sustainable pace to ensure longevity, reliability, and higher quality output
Frontend Development
+ Design and develop responsive, intuitive user interfaces using React and modern JavaScript/TypeScript
+ Build reusable component libraries and implement best practices for frontend architecture
+ Generate UX/UI designs (no dedicated UX/UI designers on team) with considerations for usability and efficiency
+ Optimize applications for maximum speed, scalability, and accessibility
+ Develop large-scale, web and mobile software utilizing appropriate technologies for use by our employees
Backend Development
+ Develop and maintain RESTful APIs and backend services using Python or Java
+ Design and implement data models and database schemas
+ Deploy to cloud environments (primarily AWS)
+ Integrate with third-party services and APIs
+ Write clean, maintainable, and well-documented code
Palantir Foundry Development (Highly Preferred)
+ Build custom applications and integrations within the Palantir Foundry platform
+ Develop Ontology-based applications leveraging object types, link types, and actions
+ Create data pipelines and transformations using Python transforms
+ Implement custom widgets and user experiences using the Foundry SDK
+ Design and build functions that assist users to visualize and interact with their data
Product Development & Delivery
+ Research problems and break them into deliverable parts
+ Work with a Lean mindset and deliver value quickly
+ Participate in all stages of the product development and deployment lifecycle
+ Conduct code reviews and provide constructive feedback to team members
+ Work with product managers and stakeholders to define requirements and deliverables
+ Contribute to architectural decisions and technical documentation
**What We're Looking For:**
+ Experience with Palantir Foundry platform, required
+ 5+ years in web front-end or mobile development
+ Bachelor's or Master's degree in Computer Science or other related field, preferred
+ Strong proficiency in React, JavaScript/TypeScript, HTML, and CSS for web front-end development
+ Strong knowledge of one or more Object Oriented Programming or Functional Programming languages such as JavaScript, Typescript, Java, Python, or Kotlin
+ Experience with RESTful API design and development
+ Experience deploying to cloud environments (AWS preferred)
+ Understanding of version control systems, particularly GitHub
+ Experience with relational and/or NoSQL databases
+ Familiarity with modern frontend build tools and package managers (e.g., Webpack, npm, yarn)
+ Experience with React, including React Native for mobile app development, preferred
+ Experience in Android or iOS development, preferred
+ Experience with data visualization libraries (e.g., D3.js, Plotly, Chart.js), preferred
+ Familiarity with CI/CD pipelines and DevOps practices, preferred
+ Experience with Spring framework, preferred
+ Working knowledge of Lean, User Centered Design, and Agile methodologies
+ Strong communication skills and ability to collaborate effectively across teams
+ Growth mindset - Aptitude and willingness to learn new technologies
+ Empathy - Kindness and empathy when building software for end users
+ Pride - Takes pride in engineering excellence and quality craftsmanship
+ Customer obsession - Obsessed with the end user experience of products
+ Strong problem-solving skills and attention to detail
+ Ability to work independently and as part of a balanced, multi-disciplinary team
+ Self-motivated with a passion for continuous learning and improvement
**What You'll Get:**
+ Up to 40% off the base rate of any standard Hertz Rental
+ Paid Time Off
+ Medical, Dental & Vision plan options
+ Retirement programs, including 401(k) employer matching
+ Paid Parental Leave & Adoption Assistance
+ Employee Assistance Program for employees & family
+ Educational Reimbursement & Discounts
+ Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
+ Perks & Discounts -Theme Park Tickets, Gym Discounts & more
The Hertz Corporation operates the Hertz, Dollar Car Rental, and Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, the Caribbean, Latin America, Africa, the Middle East, Asia, Australia, and New Zealand. The Hertz Corporation is one of the largest worldwide airport general-use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.
**US EEO STATEMENT**
At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company.
Individuals are encouraged to apply for positions because of the characteristics that make them unique.
EOE, including disability/veteran
Data Engineer
Data engineer job in Mequon, WI
Charter Manufacturing is a fourth-generation family-owned business where our will to grow drives us to do it better. Join the team and become part of our family!
Applicants must be authorized to work for ANY employer in the U.S. Charter Manufacturing is unable to sponsor for employment visas at this time.
This position is hybrid, 3 days a week in office in Mequon, WI.
BI&A- Lead Data Engineer
Charter Manufacturing continues to invest in Data & Analytics. Come join a great team and great culture leveraging your expertise to drive analytics transformation across Charter's companies. This is a key role in the organization that will provide thought leadership, as well as add substantial value by delivering trusted data pipelines that will be used to develop models and visualizations that tell a story and solve real business needs/problems. This role will collaborate with team members and business stakeholders to leverage data as an asset driving business outcomes aligned to business strategies.
Having 7+ years prior experience in developing data pipelines and partnering with team members and business stakeholders to drive adoption will be critical to the success of this role.
MINIMUM QUALIFICATIONS:
Bachelor's degree in computer science, data science, software engineering, information systems, or related quantitative field; master's degree preferred
At least seven years of work experience in data management disciplines, including data integration, modeling, optimization and data quality, or other areas directly relevant to data engineering responsibilities and tasks
Proven project experience designing, developing, deploying, and maintaining data pipelines used to support AI, ML, and BI using big data solutions (Azure, Snowflake)
Strong knowledge in Azure technologies such as Azure Web Application, Azure Data Explorer, Azure DevOps, and Azure Blob Storage to build scalable and efficient data pipelines
Strong knowledge using programming languages such as R, Python, C#, and Azure Machine Learning Workspace development
Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS, Azure, GCP) and modern data warehouse tools (Snowflake, Databricks)
Experience with database technologies such as SQL, Oracle, and Snowflake
Prior experience with ETL/ELT data ingestion into data lakes/data warehouses for analytics consumption
Strong SQL skills
Ability to collaborate within and across teams of different technical knowledge to support delivery and educate end users on data products
Passionate about teaching, coaching, and mentoring others
Strong problem-solving skills, including debugging skills, allowing the determination of sources of issues in unfamiliar code or systems, and the ability to recognize and solve problems
Excellent business acumen and interpersonal skills; able to work across business lines at a senior level to influence and effect change to achieve common goals
Ability to describe business use cases/outcomes, data sources and management concepts, and analytical approaches/options
Demonstrated experience delivering business value by structuring and analyzing large, complex data sets
Demonstrated initiative, strong sense of accountability, collaboration, and known as a trusted business partner
PREFERRED QUALIFICATIONS INCLUDE EXPERIENCE WITH:
Manufacturing industry experience, specifically heavy industry, supply chain, and operations
Designing and supporting data integrations with ERP systems such as Oracle or SAP
MAJOR ACCOUNTABILITIES:
Designs, develops, and supports data pipelines for batch and streaming data extraction from various sources (databases, APIs, external systems), transforming data into the desired format and loading it into the appropriate data storage systems
Collaborates with data scientists and analysts to optimize models and algorithms in accordance with data quality, security, and governance policies
Ensures data quality, consistency, and integrity during the integration process, performing data cleansing, aggregation, filtering, and validation as needed
Optimizes data pipelines and data processing workflows for performance, scalability, and efficiency
Monitors and tunes data pipelines, resolving performance bottlenecks
Establishes architecture patterns, design standards, and best practices to accelerate delivery and adoption of solutions
Assists, educates, and trains users to drive self-service enablement, leveraging best practices
Collaborates with business subject matter experts, analysts, and offshore team members to develop and deliver solutions in a timely manner
Establishes and embraces governance of data and algorithms, quality standards, and best practices, ensuring data accuracy
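As a minimal illustration (not part of the posting itself), the batch extract-transform-load flow with data-quality gates described above might be sketched in Python as follows; every field name, rule, and record here is a hypothetical stand-in for a real source system and warehouse:

```python
# Minimal batch ETL sketch: extract -> validate -> transform -> load.
# All field names and quality rules are hypothetical illustrations.

def extract(rows):
    """Stand-in for source extraction (a real pipeline would read a DB or API)."""
    return list(rows)

def validate(rows):
    """Data-quality gate: drop records missing required fields or with bad values."""
    clean = []
    for r in rows:
        if r.get("id") is None or r.get("qty") is None:
            continue                      # completeness check
        if r["qty"] < 0:
            continue                      # validity check
        clean.append(r)
    return clean

def transform(rows):
    """Aggregate quantities per id (a simple filtering/aggregation step)."""
    totals = {}
    for r in rows:
        totals[r["id"]] = totals.get(r["id"], 0) + r["qty"]
    return totals

def load(totals, target):
    """Stand-in for the load step (a real pipeline would write a warehouse table)."""
    target.update(totals)
    return target

source = [{"id": "A", "qty": 2}, {"id": "A", "qty": 3},
          {"id": "B", "qty": -1},         # rejected: negative quantity
          {"id": None, "qty": 5}]         # rejected: missing id
warehouse = {}
load(transform(validate(extract(source))), warehouse)
print(warehouse)  # {'A': 5}
```

The same extract/validate/transform/load separation carries over directly to streaming variants, where each stage operates on micro-batches instead of a full dataset.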
We offer comprehensive health, dental, and vision benefits, along with a 401(k) plan that includes employer matching and profit sharing. Additionally, we offer company-paid life insurance, disability coverage, and paid time off (PTO).
Data Engineer
Data engineer job in Madison, WI
Join our team as a Sr. Data Engineer in a hybrid role based in the Madison, WI area! We're looking for someone passionate about data, with hands-on experience in Databricks, SQL Server, SSIS, and SSRS, to help drive impactful reporting and robust data solutions. If you're energized by solving complex data challenges and enjoy working collaboratively, we encourage you to apply.
The Sr. Data Engineer is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks and strong skills in SQL Server, SSIS and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
* Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
* Troubleshoot and resolve Databricks pipeline errors and performance issues.
* Maintain legacy SSIS packages for ETL processes.
* Troubleshoot and resolve SSIS package errors and performance issues.
* Optimize data flow performance and minimize data latency.
* Implement data quality checks and validations within ETL processes.
Databricks Development
* Develop and maintain Databricks pipelines and datasets using Python, Spark and SQL.
* Migrate legacy SSIS packages to Databricks pipelines.
* Optimize Databricks jobs for performance and cost-effectiveness.
* Integrate Databricks with other data sources and systems.
* Participate in the design and implementation of data lake architectures.
Data Warehousing
* Participate in the design and implementation of data warehousing solutions.
* Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
* Collaborate with business users to understand data requirements for department-driven reporting needs.
* Maintain existing library of complex SSRS reports, dashboards, and visualizations.
* Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
* Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
* Collaborate effectively with business users, data analysts, and other IT teams.
* Communicate technical information clearly and concisely, both verbally and in writing.
* Document all development work and procedures thoroughly.
Continuous Growth
* Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
* Continuously improve skills and knowledge through training and self-learning.
Requirements
* Bachelor's degree in Computer Science, Information Systems, or a related field.
* 7+ years of experience in data integration and reporting.
* Extensive experience with Databricks, including Python, Spark, and Delta Lake.
* Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
* Experience with SSIS (SQL Server Integration Services) development and maintenance.
* Experience with SSRS (SQL Server Reporting Services) report design and development.
* Experience with data warehousing concepts and best practices.
* Experience with Microsoft Azure cloud platform and Microsoft Fabric desirable.
* Strong analytical and problem-solving skills.
* Excellent communication and interpersonal skills.
* Ability to work independently and as part of a team.
* Experience with Agile methodologies.
* Must be legally authorized to work in the United States.
As a holding company with cooperative and private ownership, URUS is a family of businesses at the heart of the dairy and beef industry - Alta Genetics, GENEX, Genetics Australia, Leachman Cattle, Jetstream, PEAK, SCCL, Trans Ova Genetics and VAS. Each organization has its unique identity, products, and services. These companies work globally to provide cutting-edge dairy and beef genetics, customized reproductive services to maximize conceptions, dairy management information to take producers to the frontline of progressive dairy farming, and an array of products and services to help bovines reach their full genetic potential. URUS has 9 brands in 17 retail countries and employs nearly 2,800 people globally.
Sr. Data Engineer
Data engineer job in Madison, WI
Join our team as a Sr. Data Engineer in a hybrid role based in the Madison, WI area! We're looking for someone passionate about data, with hands-on experience in Databricks, SQL Server, SSIS, and SSRS, to help drive impactful reporting and robust data solutions. If you're energized by solving complex data challenges and enjoy working collaboratively, we encourage you to apply.
The Sr. Data Engineer is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks and strong skills in SQL Server, SSIS and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
Troubleshoot and resolve Databricks pipeline errors and performance issues.
Maintain legacy SSIS packages for ETL processes.
Troubleshoot and resolve SSIS package errors and performance issues.
Optimize data flow performance and minimize data latency.
Implement data quality checks and validations within ETL processes.
Databricks Development
Develop and maintain Databricks pipelines and datasets using Python, Spark and SQL.
Migrate legacy SSIS packages to Databricks pipelines.
Optimize Databricks jobs for performance and cost-effectiveness.
Integrate Databricks with other data sources and systems.
Participate in the design and implementation of data lake architectures.
Implement DevOps best practices for data pipelines, including CI/CD, monitoring, observability, and automated testing.
Integrate data ingestion from multiple sources (API, streaming, batch, databases) into centralized data platforms.
Use Terraform/CloudFormation (or similar IaC tools) for provisioning Databricks clusters, cloud infrastructure, and networking components.
Improve system performance and cost efficiency through monitoring, autoscaling, and cluster configurations.
Provide mentorship and technical guidance to junior data engineers and collaborate with cross-functional teams.
Data Warehousing
Participate in the design and implementation of data warehousing solutions.
Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
Collaborate with business users to understand data requirements for department-driven reporting needs.
Maintain existing library of complex SSRS reports, dashboards, and visualizations.
Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
Collaborate effectively with business users, data analysts, and other IT teams.
Communicate technical information clearly and concisely, both verbally and in writing.
Document all development work and procedures thoroughly.
Continuous Growth
Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
Continuously improve skills and knowledge through training and self-learning.
Requirements
Bachelor's degree in Computer Science, Information Systems, or a related field.
7+ years of experience in data integration and reporting.
Extensive experience with Databricks, including Python, Spark, and Delta Lake.
Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
Experience with SSIS (SQL Server Integration Services) development and maintenance.
Experience with SSRS (SQL Server Reporting Services) report design and development.
Experience with data warehousing concepts and best practices.
Experience with Microsoft Azure cloud platform and Microsoft Fabric desirable.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Ability to work independently and as part of a team.
Experience with Agile methodologies.
Must be legally authorized to work in the United States.
Principal Data Engineer - AI Architect
Data engineer job in Madison, WI
As the Principal Data Engineer - AI Architect you will lead the architectural design (conceptual, logical, and physical) of large-scale AI business solutions. As a technical lead, you will guide and align development efforts across multiple teams to ensure cohesive design and delivery.
You will report to the Director, Data Science.
#LI-Hybrid.
Position Compensation Range:
$147,000.00 - $250,000.00
Pay Rate Type:
Salary
Compensation may vary based on the job level and your geographic work location.
Relocation support is offered for eligible candidates.
Primary Accountabilities:
Lead architectural design of scalable, secure, resilient AI systems integrated with primary business platforms (policy administration, CRM, claims management)
Design and implement operational patterns for AI solutions, including real-time API, pub/sub, and batch processing architectures
Develop strategies for applying diverse data sources, from application integrations to analytical data lakes, to power AI models and applications
Provide expertise on AI system architecture, security, and infrastructure; oversee project implementation for production-grade solutions
Establish governance processes for model life cycle management, including architecture reviews, registries, and versioning
Define and promote the strategic direction for AI engineering; maintain a technology roadmap reflecting the latest AI and cloud trends
Be the senior escalation point for complex AI system issues, assessing scope, urgency, and growth potential
Establish and promote engineering best practices, frameworks, and reusable patterns for AI development to promote collaboration and quality
Specialized Knowledge & Skills Requirements:
Demonstrated experience providing customer-driven solutions, support or service.
Demonstrated experience leading software engineering architectures, system/software designs, and system deployments.
Extensive knowledge and understanding of software engineering technologies, enterprise-level network architecture, and application development methodologies.
Demonstrated experience developing solution-delivery and design approaches and solutions.
Demonstrated experience developing complex software/systems using multiple programming languages.
Extensive knowledge and understanding of the systems development life cycle (SDLC).
Extensive knowledge and understanding of infrastructure technologies, operating systems, and the interconnectivity between platforms and software tools.
Extensive knowledge and understanding of integration and migration strategies and technologies.
Demonstrated experience providing technical guidance and leadership to less experienced staff.
Preferred Skills:
Experience architecting and deploying complex, software solutions, with a strong emphasis on AI/ML-powered applications.
Knowledge of AI/ML system design, MLOps principles, and modern application development methodologies. This includes frameworks and tooling for observability and monitoring.
Hands-on experience with Google Cloud Platform (GCP), including deep familiarity with services such as Vertex AI, Cloud Run, Google Kubernetes Engine (GKE), BigQuery, Cloud Storage (GCS), and IAM for building and securing solutions.
5+ years of experience designing and implementing integration strategies for deploying AI models into production environments and core business systems.
5+ years of experience with traditional machine learning and statistical methods, as well as the infrastructure required to support them.
Demonstrated experience with designing scalable Generative AI solutions, including knowledge of concepts such as retrieval-augmented generation (RAG), model context protocol (MCP), prompt management, evaluation frameworks, and AI safety guardrails.
Extensive knowledge of CI/CD for both software and machine learning models. This includes aspects of versioning, change management, monitoring, and visibility.
Demonstrated experience providing technical guidance and mentorship to engineering staff.
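To illustrate the retrieval-augmented generation (RAG) concept named in the preferred skills, here is a deliberately toy sketch in Python: bag-of-words cosine similarity stands in for learned embeddings, and the documents and prompt template are hypothetical. A production system would use an embedding model, a vector store, and the safety guardrails the posting mentions.

```python
# Toy RAG sketch: retrieve the most relevant document for a query, then
# assemble a grounded prompt. Bag-of-words similarity replaces real
# embeddings; DOCS and the template are hypothetical illustrations.
import math
from collections import Counter

DOCS = [
    "claims are processed within 30 days of filing",
    "policy renewals require an updated risk assessment",
    "customer addresses are verified against postal records",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query; return the top k."""
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Ground the generator: answer only from retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how long does claims processing take"))
```

The same retrieve-then-prompt shape is what evaluation frameworks instrument: retrieval quality and answer faithfulness are measured at the two seams this sketch exposes.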
Licenses:
Not Applicable.
Travel Requirements
Up to 15%.
Physical Requirements
Work that primarily involves sitting/standing.
Working Conditions
Not Applicable.
Additional Information
Offer to selected candidate will be made contingent on the results of applicable background checks
Offer to selected candidate is contingent on signing a non-disclosure agreement for proprietary information, trade secrets, and inventions
Sponsorship will not be considered for this position unless specified in the posting
In this hybrid role you will be required to work a minimum of 10 days per month out of our Boston, MA or Madison, WI offices.
Internals are encouraged to apply regardless of location.
We provide benefits that support your physical, emotional, and financial wellbeing. You will have access to comprehensive medical, dental, vision and wellbeing benefits that enable you to take care of your health. We also offer a competitive 401(k) contribution, a pension plan, an annual incentive, 9 paid holidays and a paid time off program (23 days accrued annually for full-time employees). In addition, our student loan repayment program and paid-family leave are available to support our employees and their families. Interns and contingent workers are not eligible for American Family Insurance Group benefits.
We are an equal opportunity employer. It is our policy to comply with all applicable federal, state and local laws pertaining to non-discrimination, non-harassment and equal opportunity. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.
American Family Insurance is committed to the full inclusion of all qualified individuals. If a reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please email *************** to request a reasonable accommodation.
#LI-AB1
Associate Data Engineer
Data engineer job in Milwaukee, WI
Baker Tilly is a leading advisory, tax and assurance firm, providing clients with a genuine coast-to-coast and global advantage in major regions of the U.S. and in many of the world's leading financial centers - New York, London, San Francisco, Los Angeles, Chicago and Boston. Baker Tilly Advisory Group, LP and Baker Tilly US, LLP (Baker Tilly) provide professional services through an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable laws, regulations and professional standards. Baker Tilly US, LLP is a licensed independent CPA firm that provides attest services to its clients. Baker Tilly Advisory Group, LP and its subsidiary entities provide tax and business advisory services to their clients. Baker Tilly Advisory Group, LP and its subsidiary entities are not licensed CPA firms.
Baker Tilly Advisory Group, LP and Baker Tilly US, LLP, trading as Baker Tilly, are independent members of Baker Tilly International, a worldwide network of independent accounting and business advisory firms in 141 territories, with 43,000 professionals and a combined worldwide revenue of $5.2 billion. Visit bakertilly.com or join the conversation on LinkedIn, Facebook and Instagram.
Please discuss the work location status with your Baker Tilly talent acquisition professional to understand the requirements for an opportunity you are exploring.
Baker Tilly is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, gender identity, sexual orientation, or any other legally protected basis, in accordance with applicable federal, state or local law.
Any unsolicited resumes submitted through our website or to Baker Tilly Advisory Group, LP, employee e-mail accounts are considered property of Baker Tilly Advisory Group, LP, and are not subject to payment of agency fees. In order to be an authorized recruitment agency ("search firm") for Baker Tilly Advisory Group, LP, there must be a formal written agreement in place and the agency must be invited, by Baker Tilly's Talent Attraction team, to submit candidates for review via our applicant tracking system.
Job Description:
Associate Data Engineer
As a Senior Consultant - Associate Data Engineer you will design, build, and optimize modern data solutions for our mid‑market and enterprise clients. Working primarily inside the Microsoft stack (Azure, Synapse, and Microsoft Fabric), you will transform raw data into trusted, analytics‑ready assets that power dashboards, advanced analytics, and AI use cases. You'll collaborate with solution architects, analysts, and client stakeholders while sharpening both your technical depth and consulting skills.
Key Responsibilities:
* Data Engineering: Develop scalable, well‑documented ETL/ELT pipelines using T‑SQL, Python, Azure Data Factory/Fabric Data Pipelines, and Databricks; implement best‑practice patterns for performance, security, and cost control.
* Modeling & Storage: Design relational and lakehouse models; create Fabric OneLake shortcuts, medallion‑style layers, and dimensional/semantic models for Power BI.
* Quality & Governance: Build automated data‑quality checks, lineage, and observability metrics; contribute to CI/CD workflows in Azure DevOps or GitHub.
* Client Delivery: Gather requirements, demo iterative deliverables, document technical designs, and translate complex concepts to non‑technical audiences.
* Continuous Improvement: Research new capabilities, share findings in internal communities of practice, and contribute to reusable accelerators. Collaborate with clients and internal stakeholders to design and implement scalable data engineering solutions.
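The medallion-style layering named above can be pictured with a small sketch (not part of the role itself): plain Python dicts stand in for lake tables, and all field names and records are hypothetical. In Fabric or Databricks these layers would be Delta/OneLake tables, but the bronze-to-silver-to-gold flow is the same.

```python
# Medallion-style layering sketch: bronze (raw) -> silver (cleansed/typed)
# -> gold (analytics-ready aggregate). All data is hypothetical.

bronze = [  # raw ingested records, kept exactly as received
    {"order_id": "1", "amount": "10.50", "region": "WI"},
    {"order_id": "2", "amount": "bad",   "region": "WI"},  # unparseable amount
    {"order_id": "3", "amount": "4.25",  "region": "IL"},
]

def to_silver(rows):
    """Cleanse and type the raw layer; quarantine unparseable rows."""
    silver, rejects = [], []
    for r in rows:
        try:
            silver.append({**r, "amount": float(r["amount"])})
        except ValueError:
            rejects.append(r)
    return silver, rejects

def to_gold(rows):
    """Aggregate to an analytics-ready shape: revenue per region."""
    gold = {}
    for r in rows:
        gold[r["region"]] = round(gold.get(r["region"], 0.0) + r["amount"], 2)
    return gold

silver, rejects = to_silver(bronze)
gold = to_gold(silver)
print(gold)          # {'WI': 10.5, 'IL': 4.25}
print(len(rejects))  # 1
```

Keeping bronze immutable and quarantining rejects (rather than silently dropping them) is what makes the lineage and data-quality metrics in the Quality & Governance bullet possible.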
Qualifications:
* Education - Bachelor's in Computer Science, Information Systems, Engineering, or related field (or equivalent experience)
* Experience - 2-3 years delivering production data solutions, preferably in a consulting or client‑facing role.
* Technical Skills:
Strong T‑SQL for data transformation and performance tuning.
Python for data wrangling, orchestration, or notebook‑based development.
Hands‑on ETL/ELT with at least one Microsoft service (ADF, Synapse Pipelines, Fabric Data Pipelines).
* Project experience with Microsoft Fabric (OneLake, Lakehouses, Data Pipelines, Notebooks, Warehouse, Power BI DirectLake) preferred
* Familiarity with Databricks, Delta Lake, or comparable lakehouse technologies preferred
* Exposure to DevOps (YAML pipelines, Terraform/Bicep) and test automation frameworks preferred
* Experience integrating SaaS/ERP sources (e.g., Dynamics 365, Workday, Costpoint) preferred
Data Scientist
Data engineer job in Luxemburg, WI
Your career at Deutsche Börse Group
Your Area of Work
Join Clearstream Fund Services as a Data Scientist to design and prototype data products that empower data monetization and business users through curated datasets, semantic models, and advanced analytics. You'll work across the data stack, from pipelines to visualizations, and contribute to the evolution of AI-driven solutions.
Your Responsibilities
* Prototype data products including curated datasets and semantic models to support data democratization and self-service BI
* Design semantic layers to simplify data access and usability
* Develop and optimize data pipelines using data engineering tools (e.g., Databricks)
* Use SQL, Python, and PySpark for data processing and transformation
* Create Power BI dashboards to support prototyping and reporting
* Apply ML/AI techniques to support early-stage modeling and future product innovation
* Collaborate with data product managers, functional analysts, engineers, and business stakeholders
* Ensure data quality, scalability, and performance in all deliverables
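A semantic layer, as described above, essentially maps business-friendly measure names onto underlying columns and expressions so self-service users never write raw SQL. A toy sketch (table and measure names are hypothetical, not Clearstream's actual model):

```python
# Toy semantic-layer sketch: business measures mapped to SQL expressions.
# All table, column, and measure names are hypothetical illustrations.
SEMANTIC_MODEL = {
    "net asset value": "SUM(fund_positions.market_value)",
    "fund count":      "COUNT(DISTINCT fund_positions.fund_id)",
}

def to_sql(measure, table="fund_positions"):
    """Translate a friendly measure name into an executable query."""
    expr = SEMANTIC_MODEL[measure]
    return f"SELECT {expr} FROM {table}"

print(to_sql("fund count"))
# SELECT COUNT(DISTINCT fund_positions.fund_id) FROM fund_positions
```

Tools like Power BI semantic models do this at scale, adding relationships, row-level security, and caching on top of the same name-to-expression idea.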
Your Profile
* Master's in Data Science, Computer Science, Engineering, or related field
* 3+ years of experience in data pipeline development and prototyping in financial services or fund administration
* Proficiency in SQL, Python, and PySpark
* Hands-on experience with Databricks
* Experience building Power BI dashboards and semantic models
* Strong analytical and communication skills
* Fluent in English
Data Scientist, US Supply Chain
Data engineer job in Milwaukee, WI
What you will do
In this exciting role you will lead the effort to build and deploy predictive and prescriptive analytics in our next-generation decision intelligence platform. The work will require helping to build and maintain a digital twin of our production supply chain, performing optimization and forecasting, and connecting our analytics and ML solutions to enable our people to make the best data-driven decisions possible!
This will require working with predictive and prescriptive analytics and decision intelligence across the US/Canada region at Clarios. You'll apply modern statistics, machine learning, and AI to real manufacturing and supply chain problems, working side-by-side with our business stakeholders and our global analytics team to deploy transformative solutions, not just models.
How you will do it
Build production-ready ML/statistical models (regression/classification, clustering, time series, linear / non-linear optimizations) to detect patterns, perform scenario analytics and generate actionable insights / outcomes.
Wrangle and analyze data with Python and SQL; perform feature engineering, data quality checks, and exploratory analysis to validate hypotheses and model readiness.
Develop digital solutions /visuals in Power BI and our decision intelligence platform to communicate results and monitor performance with business users.
Partner with stakeholders to clarify use cases, translate needs into technical tasks/user stories, and iterate solutions in sprints.
Manage model deployment (e.g., packaging models, basic MLOps) with guidance from Global Analytics
Document and communicate model methodology, assumptions, and results to non-technical audiences; support troubleshooting and continuous improvement of delivered analytics.
Deliver value realization as part of our business analytics team to drive positive business outcomes for our metals team.
Deliver incremental value quickly (first dashboards, baseline models) and iterate with stakeholder feedback.
Balance rigor with practicality-choose the simplest model that solves the problem and can be supported in production.
Keep data quality front-and-center; instrument checks to protect decisions from drift and bad inputs.
Travel: Up to ~10% for plant or stakeholder visits
What we look for
Bachelor's degree in Statistics, Mathematics, Computer Science, Engineering, or related field-or equivalent practical experience.
1-3 years (or strong internship/co-op) applying ML/statistics on business data.
Python proficiency (pandas, scikit-learn, SciPy/statsmodels) and SQL across common platforms (e.g., SQL Server, Snowflake).
Core math/stats fundamentals: probability, hypothesis testing/DoE basics, linear algebra, and the principles behind common ML methods.
Data visualization experience with Power BI / Decision Intelligence Platforms for analysis and stakeholder storytelling.
Ability to work in cross-functional teams and explain technical work clearly to non-technical partners. Candidates must be self-driven, curious, and creative
Preferred
Cloud & big data exposure: Azure (or AWS), Databricks/Spark; Snowpark is a plus.
Understanding of ETL/ELT tools such as ADF, SSIS, Talend, Informatica, or Matillion.
Experience building and deploying models in a decision intelligence platform such as Palantir or Aera.
MLOps concepts (model validation, monitoring, packaging with Docker/Kubernetes).
Deep learning basics (PyTorch/Keras) for the right use cases.
Experience contributing to agile backlogs, user stories, and sprint delivery.
3+ years of experience in data analytics
Master's Degree in Statistics, Economics, Data Science or computer science.
What you get:
Medical, dental and vision care coverage and a 401(k) savings plan with company matching - all starting on date of hire
Tuition reimbursement, perks, and discounts
Parental and caregiver leave programs
All the usual benefits such as paid time off, flexible spending, short-and long-term disability, basic life insurance, business travel insurance, Employee Assistance Program, and domestic partner benefits
Global market strength and worldwide market share leadership
HQ location has earned LEED certification for sustainability and offers a full-service cafeteria and workout facility
Clarios has been recognized as one of 2025's Most Ethical Companies by Ethisphere. This prestigious recognition marks the third consecutive year Clarios has received this distinction.
Who we are:
Clarios is the force behind the world's most recognizable car battery brands, powering vehicles from leading automakers like Ford, General Motors, Toyota, Honda, and Nissan. With 18,000 employees worldwide, we develop, manufacture, and distribute energy storage solutions while recovering, recycling, and reusing up to 99% of battery materials-setting the standard for sustainability in our industry. At Clarios, we're not just making batteries; we're shaping the future of sustainable transportation. Join our mission to innovate, push boundaries, and make a real impact. Discover your potential at Clarios-where your power meets endless possibilities.
Veterans/Military Spouses:
We value the leadership, adaptability, and technical expertise developed through military service. At Clarios, those capabilities thrive in an environment built on grit, ingenuity, and passion-where you can grow your career while helping to power progress worldwide. All qualified applicants will be considered without regard to protected characteristics.
We recognize that people come with a wealth of experience and talent beyond just the technical requirements of a job. If your experience is close to what you see listed here, please apply. Diversity of experience and skills combined with passion is key to challenging the status quo. Therefore, we encourage people from all backgrounds to apply to our positions. Please let us know if you require accommodations during the interview process by emailing Special.Accommodations@Clarios.com. We are an Equal Opportunity Employer and value diversity in our teams in terms of work experience, area of expertise, gender, ethnicity, and all other characteristics protected by laws in the countries where we operate. For more information on our commitment to sustainability, diversity, and equal opportunity, please read our latest report. We want you to know your rights because EEO is the law.
A Note to Job Applicants: please be aware of scams being perpetrated through the Internet and social media platforms. Clarios will never require a job applicant to pay money as part of the application or hiring process.
To all recruitment agencies: Clarios does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Clarios employees or any other company location. Clarios is not responsible for any fees related to unsolicited resumes/CVs.
Data Scientist
Innovizant LLC (headquartered in Chicago, USA) is a leading-edge, global IT services organization with sales and delivery offices in Asia. Innovizant is a full-service IT provider focused on delivering innovative, value-driven business analytics solutions that leverage data science, data engineering, and decision science to produce actionable insights, helping our financial services clients in banking, insurance, and credit unions achieve their business goals.
Innovizant is made up of exceptional data scientists and domain experts with deep industry experience. Our financial services solutions include credit risk insights, customer churn analysis, customer segmentation, fraud detection, asset and liability analysis, channel optimization analytics, financial advisor network analytics, and product bundling analytics.
Our accelerators and solution frameworks that assist our clients include FIN-CDO (pre-delivered data strategies for the office of the Chief Data Officer), BASEL-PRO (for achieving compliance with Basel BCBS 239 requirements), and SmartCECL (risk mitigation strategies that predict default and loss given default).
With data becoming the new "oxygen" of business, many data science consulting firms have emerged in recent years, each offering solutions to modern clients. Today it is easy to find a data science solution; the biggest challenge, however, lies in "value realization". The true measure of success is the ability to turn data science insights into actionable events.
Many organizations have ended up spending a significant chunk of their analytics budget implementing data science solutions with minimal to no returns.
Role: Data Scientist
Location: Madison, WI
Full Time (Direct Hire)
Job description
· 5+ years of total experience, including 2-4 years of in-depth data science experience.
· Experience in statistical modeling.
· The candidate will help the client lead this process or improve upon the existing program.
· Highly proficient in data analysis, data wrangling, model development, software development, A/B testing, and back-testing.
· Highly proficient in Python and R.
· Highly proficient in identifying and applying the right modeling methods; gradient boosting, decision trees, and regression are absolute musts.
· Exposure to insurance (especially consumer/retail insurance) is a plus.
· An MS in AI/Data Science would also be a plus.
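As a concrete illustration of the A/B testing skill called for above, here is a minimal two-proportion z-test sketch using only the Python standard library. The function name and example numbers are purely illustrative, not from any Innovizant project:

```python
import math

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test for an A/B experiment.

    Returns (z_statistic, two_sided_p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (computed via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative example: variant B converts 120/1000 vs. control A's 100/1000
z, p = ab_test_z(100, 1000, 120, 1000)
```

In practice a candidate would reach for a statistics library, but the underlying calculation is exactly this pooled-standard-error comparison.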
Qualifications
Data Science, Analysis, Python, R, Statistical Modeling, Machine Learning
Additional Information
Thanks & Regards,
Aditya Prakash | Resource Manager | Innovizant LLC
Data Engineer
Join our team as a Sr. Data Engineer in a hybrid role based in the Madison, WI area! We're looking for someone passionate about data, with hands-on experience in Databricks, SQL Server, SSIS, and SSRS, to help drive impactful reporting and robust data solutions. If you're energized by solving complex data challenges and enjoy working collaboratively, we encourage you to apply.
The Sr. Data Engineer is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks, strong skills in SQL Server, SSIS, and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
Troubleshoot and resolve Databricks pipeline errors and performance issues.
Maintain legacy SSIS packages for ETL processes.
Troubleshoot and resolve SSIS package errors and performance issues.
Optimize data flow performance and minimize data latency.
Implement data quality checks and validations within ETL processes.
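The data quality checks and validations mentioned above can be sketched in a platform-agnostic way. The rules and field names below are illustrative only; in Databricks this logic would more typically be expressed as Spark DataFrame filters or Delta Live Tables expectations:

```python
from typing import Callable

# Each rule maps a field name to a predicate that a valid record must satisfy
# (field names and thresholds here are hypothetical examples)
RULES: dict[str, Callable[[object], bool]] = {
    "customer_id": lambda v: isinstance(v, str) and len(v) > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate_batch(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a batch into valid rows and quarantined rows tagged with failure reasons."""
    valid, quarantined = [], []
    for rec in records:
        failures = [f for f, check in RULES.items() if not check(rec.get(f))]
        if failures:
            quarantined.append({**rec, "_failed_rules": failures})
        else:
            valid.append(rec)
    return valid, quarantined

batch = [
    {"customer_id": "A1", "amount": 10.0},
    {"customer_id": "", "amount": -5},  # fails both rules
]
ok, bad = validate_batch(batch)
```

Quarantining failed rows rather than dropping them silently is a common design choice, since it preserves the evidence needed to troubleshoot pipeline errors downstream.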
Databricks Development
Develop and maintain Databricks pipelines and datasets using Python, Spark and SQL.
Migrate legacy SSIS packages to Databricks pipelines.
Optimize Databricks jobs for performance and cost-effectiveness.
Integrate Databricks with other data sources and systems.
Participate in the design and implementation of data lake architectures.
Data Warehousing
Participate in the design and implementation of data warehousing solutions.
Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
Collaborate with business users to understand data requirements for department-driven reporting needs.
Maintain existing library of complex SSRS reports, dashboards, and visualizations.
Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
Collaborate effectively with business users, data analysts, and other IT teams.
Communicate technical information clearly and concisely, both verbally and in writing.
Document all development work and procedures thoroughly.
Continuous Growth
Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
Continuously improve skills and knowledge through training and self-learning.
Requirements
Bachelor's degree in Computer Science, Information Systems, or a related field.
7+ years of experience in data integration and reporting.
Extensive experience with Databricks, including Python, Spark, and Delta Lake.
Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
Experience with SSIS (SQL Server Integration Services) development and maintenance.
Experience with SSRS (SQL Server Reporting Services) report design and development.
Experience with data warehousing concepts and best practices.
Experience with Microsoft Azure cloud platform and Microsoft Fabric desirable.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Ability to work independently and as part of a team.
Experience with Agile methodologies.
Must be legally authorized to work in the United States.