
Data engineer jobs in Racine, WI

- 341 jobs
  • Sr Boomi Developer

    Vista Applied Solutions Group Inc. - 4.0 company rating

    Data engineer job in Kenosha, WI

    Responsibilities:
    * Design and architect solutions: bring deep knowledge to the design of stable, reliable, and scalable integration solutions using the Dell Boomi AtomSphere platform and its components (Integration, API Management, MDM, etc.).
    * Hands-on development: design, develop, and implement complex integration processes, workflows, and APIs (REST/SOAP) to connect various applications (on-premises and cloud-based), ERP systems (such as Microsoft Dynamics, Oracle EBS, and SAP), and other data sources.
    * Data transformation: proficiently handle data formats such as XML, JSON, CSV, and database formats, using Boomi's capabilities and scripting languages (such as Groovy or JavaScript) for complex data mapping and transformations.
    * Dell Boomi platform knowledge: proficiency in Dell Boomi is crucial, including components such as connectors, processes, maps, and APIs, and knowing how to design, build, and deploy integrations with them.
    * API development: strong knowledge of RESTful and SOAP APIs; create, consume, and manage APIs within Boomi.
    * Work with team members and business users to understand project requirements and deliver successful design, implementation, and post-implementation support.
    * Work closely with team members to translate business requirements into feasible and efficient technical solutions.
    * Develop and maintain documentation for integration and testing processes.
    * Be highly accurate in activity assessment, effort estimation, and delivery commitments to ensure all project activities are delivered on time without compromising quality.
    * Diagnose complex technical issues and recommend solutions, weighing best practices and the longer-term impact of decisions.
    * Lead/perform third-party testing, performance testing, and UAT coordination.
    * Select the appropriate development platform(s) to execute business requirements and ensure post-implementation success.
    * Serve as technical lead on projects to design, develop, test, document, and deploy robust integration solutions.
    * Work both independently and as part of a team, collaborating closely with other IT and non-IT team members.
    * Assess and troubleshoot production issues of varying priority and complexity.
    * Optimize existing integration solutions and develop new ones to support business requirements.
    * Provide continuous support and management of the integration layer, ensuring the integrity of data and integrations and removing single points of failure.
    * Apply best practices in error handling, logging, and monitoring.
    * Document processes and cross-train team members for support continuity.
    Qualifications:
    * 10-15 years of experience with enterprise integration platforms.
    * Bachelor's degree in computer science.
    * Troubleshooting skills: adept at diagnosing and resolving integration issues; familiarity with Boomi's debugging tools is valuable.
    * Security awareness: knowledge of authentication methods, encryption, and secure data transmission.
    * Experience and a proven track record of implementing integration projects.
    * Extensible Stylesheet Language Transformations (XSLT) experience is a plus.
    * Project management experience is a plus.
    * Experience with ERP systems in a fast-moving wholesale, retail, and e-commerce environment is highly desirable.
    * Experience implementing Boomi with the Microsoft Dynamics ERP system is a plus.
    * Strong communication skills and the ability to work cross-functionally in a fast-paced environment.
    $82k-106k yearly est. 3d ago
  • Healthcare Data Analyst Lead (CMH Health)

    Milliman - 4.6 company rating

    Data engineer job in Brookfield, WI

    Individual(s) must be legally authorized to work in the United States without the need for immigration support or sponsorship from Milliman now or in the future. Milliman is seeking a technically savvy, analytically strong Healthcare Data Analyst Lead to manage analytics and technology projects and to coach and mentor junior analytical staff. The ideal candidate is someone seeking to join a challenging yet rewarding environment focused on delivering world-class analytics across a variety of healthcare-related domains to a variety of healthcare entities.
    Who We Are: Independent for over 75 years, Milliman delivers market-leading services and solutions to clients worldwide. Today, we are helping companies take on some of the world's most critical and complex issues, including retirement funding and healthcare financing, risk management and regulatory compliance, and data analytics and business transformation.
    Job Responsibilities:
    * Lead and manage analytics and technology projects from data ingestion through delivery.
    * Design, develop, and optimize data processes and workflows supporting large healthcare datasets.
    * Perform and oversee ETL, validation, and transformation tasks for claims, eligibility, and pharmacy data.
    * Guide project teams through the full "data-to-deliverable" lifecycle, ensuring accuracy and efficiency.
    * Build analytical models, dashboards, and data pipelines to support consulting engagements.
    * Collaborate with consultants, actuaries, and project managers to interpret results and deliver client insights.
    * Review and approve technical work from peers and junior analysts to ensure quality standards are met.
    * Mentor, coach, and delegate to analytical staff to strengthen technical and professional development.
    * Contribute to innovation and process improvement initiatives, including automation, cloud enablement, and AI integration.
    * Participate in client meetings and presentations, occasionally requiring travel.
    Minimum Requirements:
    * Bachelor's degree required (Computer Science, Management Information Systems, Computer Engineering, Math, Actuarial Science, Data Analytics, or a related degree is preferred).
    * 6+ years of experience in healthcare data analytics or a related technical analytics role.
    * Advanced proficiency with SQL and Microsoft Excel for data analysis, validation, and automation.
    * Strong programming skills in Python, R, or other analytical languages.
    * Experience with data visualization and reporting tools (e.g., Power BI, Tableau, or R Shiny).
    * Solid understanding of healthcare data structures, including claims, eligibility, and provider data.
    * Proven ability to lead multiple projects simultaneously while mentoring and developing junior team members.
    * Experience with cloud data technologies (e.g., Azure Data Factory, Databricks, Snowflake, AWS Redshift, or similar).
    * Exposure to AI or Generative AI tools for data analysis, automation, or insight generation is preferred, but not required.
    Competencies and Behaviors that Support Success in this Role:
    * Deep understanding of database architecture and large-scale healthcare data environments.
    * Strong analytical thinking and the ability to translate complex data into actionable insights.
    * Excellent communication skills, including the ability to explain technical concepts to non-technical audiences.
    * Highly organized, detail-oriented, and able to manage competing priorities.
    * Collaborative and proactive leadership style with a focus on mentorship and knowledge-sharing.
    * Passion for applying analytics to improve healthcare performance, quality, and cost outcomes.
    * Demonstrated accountability for quality, timelines, and client satisfaction.
    * Fast learner who thrives in a dynamic, innovation-driven environment.
    The Team: The Healthcare Data Analyst Lead will join a team that thrives on leveraging data, analytics, and technology to deliver meaningful business value. This is a team with technical aptitude and analytical prowess that enjoys building efficient and scalable products and processes. Ultimately, we are passionate about effecting change in healthcare. We also believe that collaboration and communication are cornerstones of success. The Healthcare Data Analyst Lead will also join a mix of Healthcare Analysts, Leads, Consultants, and Principals. In addition, as part of the broader Milliman landscape, they will work alongside Healthcare Actuaries, Pharmacists, Clinicians, and Physicians. We aim to provide everyone a supportive environment, where we foster learning and growth through rewarding challenges.
    Salary: The overall salary range for this role is $104,900 - $199,065. For candidates residing in:
    * Alaska, California, Connecticut, Illinois, Maryland, Massachusetts, New Jersey, New York City, Newark, San Jose, San Francisco, Pennsylvania, Virginia, Washington, or the District of Columbia, the salary range is $120,635 - $199,065.
    * All other locations, the salary range is $104,900 - $173,100.
    A combination of factors will be considered, including, but not limited to, education, relevant work experience, qualifications, skills, certifications, etc.
    Location: It is preferred that candidates work on-site at our Brookfield, Wisconsin office; however, remote candidates will be considered. The expected application deadline for this job is May 25, 2026.
    Benefits: We offer a comprehensive benefits package designed to support employees' health, financial security, and well-being. Benefits include:
    * Medical, Dental and Vision - Coverage for employees, dependents, and domestic partners.
    * Employee Assistance Program (EAP) - Confidential support for personal and work-related challenges.
    * 401(k) Plan - Includes a company matching program and profit-sharing contributions.
    * Discretionary Bonus Program - Recognizing employee contributions.
    * Flexible Spending Accounts (FSA) - Pre-tax savings for dependent care, transportation, and eligible medical expenses.
    * Paid Time Off (PTO) - Begins accruing on the first day of work. Full-time employees accrue 15 days per year, and employees working less than full-time accrue PTO on a prorated basis.
    * Holidays - A minimum of 10 observed holidays per year.
    * Family Building Benefits - Includes adoption and fertility assistance.
    * Paid Parental Leave - Up to 12 weeks of paid leave for employees who meet eligibility criteria.
    * Life Insurance & AD&D - 100% of premiums covered by Milliman.
    * Short-Term and Long-Term Disability - Fully paid by Milliman.
    Equal Opportunity: All qualified applicants will receive consideration for employment, without regard to race, color, religion, sex, sexual orientation, national origin, disability, or status as a protected veteran.
    $120.6k-199.1k yearly 31d ago
  • Data Engineer - Platform & Product

    Artisan Partners - 4.9 company rating

    Data engineer job in Milwaukee, WI

    We are seeking a skilled and solution-oriented Data Engineer to contribute to the development of our growing Data Engineering function. This role will be instrumental in designing and optimizing data workflows, building domain-specific pipelines, and enhancing platform services. The ideal candidate will help evolve our Snowflake-based data platform into a scalable, domain-oriented architecture that supports business-critical analytics and machine learning initiatives.
    Responsibilities: The candidate is expected to:
    * Design and build reusable platform services, including pipeline frameworks, CI/CD workflows, data validation utilities, data contracts, and lineage integrations.
    * Develop and maintain data pipelines for sourcing, transforming, and delivering trusted datasets into Snowflake.
    * Partner with Data Domain Owners to onboard new sources, implement data quality checks, and model data for analytics and machine learning use cases.
    * Collaborate with the Lead Data Platform Engineer and Delivery Manager to deliver monthly feature releases and support bug remediation.
    * Document and promote best practices for data pipeline development, testing, and deployment.
    Qualifications: The successful candidate will possess strong analytical skills and attention to detail. Additionally, the ideal candidate will possess:
    * 3-6 years of experience in data engineering or analytics engineering.
    * Strong SQL and Python skills; experience with dbt or similar transformation frameworks.
    * Demonstrated experience building pipelines and services on Snowflake or other modern cloud data platforms.
    * Understanding of data quality, validation, lineage, and schema evolution.
    * Background in financial or market data (trading, pricing, benchmarks, ESG) is a plus.
    * Strong collaboration and communication skills, with a passion for enabling domain teams.
    Privacy Notice for California Applicants
    Artisan Partners Limited Partnership is an equal opportunity employer. Artisan Partners does not discriminate on the basis of race, religion, color, national origin, gender, age, disability, marital status, sexual orientation or any other characteristic protected under applicable law. All employment decisions are made on the basis of qualifications, merit and business need. #LI-Hybrid
    $106k-148k yearly est. Auto-Apply 60d+ ago
  • Data Scientist II - Clinical - Looking for only W2

    Isofttek Solutions Inc.

    Data engineer job in North Chicago, IL

    Job Description: Data Scientist II - Clinical - Looking for only W2
    Duration: 12 Months
    Contract Type: W2
    Primary Skills: AWS CloudFormation, R, Data Analysis, Python, SQL
    Position Title: Computational Data Scientist
    Seeking a highly motivated and driven data scientist to join our Quantitative, Translational & ADME Sciences (QTAS) team in North Chicago, IL. The QTAS organization supports the discovery and early clinical pipeline by mechanistically investigating how drug molecules are absorbed, distributed, excreted, metabolized, and transported across the body to predict the duration and intensity of exposure and the pharmacological action of drug candidates in humans. Digital workflows, systems, IT infrastructure, and computational sciences are critical and growing components within the organization that help deliver vital results in the early pipeline. This specific job role is designed to act as an SME (subject matter expert) for data science within the technical organization of QTAS.
    For this role, the successful candidate will have a substantial background in data and computer science, with an emphasis on supporting, developing, and implementing IT solutions for lab-based systems as well as utilizing computational methods. The candidate should possess deep knowledge in AI/ML, with a focus on both supervised (e.g., neural networks, decision trees) and unsupervised learning techniques (e.g., clustering, PCA). They must be adept at applying these methods to large datasets for predictive modeling - in this context, drug properties and discovery patterns in ADME datasets. Proficiency in model validation, optimization, and feature engineering is essential to ensure accuracy and robustness in predictions. The role requires effective collaboration with interdisciplinary teams to integrate AI insights into drug development processes. Strong communication skills are necessary to convey complex AI/ML concepts to a diverse audience.
    Key Responsibilities:
    * Provide business-centric support of IT systems and platforms in support of our scientific operations and processes.
    * Develop, implement, troubleshoot, and support solutions independently for the digital infrastructure and workflows within QTAS, including custom platform/coding solutions, visualization tools, integration of new software/hardware, and analysis and troubleshooting support.
    * Lead the analysis of large ADME-related datasets, contributing to the understanding and optimization of drug absorption, distribution, metabolism, and excretion properties.
    * Apply computational tools and machine learning/deep learning techniques to analyze and interpret complex biological data relevant to drug discovery.
    * Develop predictive models and algorithms for identifying potential drug candidates with desirable ADME properties.
    * Collaborate with teams across biological sciences and drug discovery to integrate computational insights into practical drug development strategies.
    * Communicate findings and strategic input to cross-functional teams, including Translational Science, Medicine, and Late Development groups.
    Qualifications:
    * Bachelor's or Master's degree in Data Science, Computer Science, Computational Chemistry, or a related relevant discipline, typically with 5 to 10 (BS) or 2 to 5 (MS) years of related industry experience.
    * Passion for data analysis, solving technical problems, and applying new technologies to further scientific goals.
    * Strong proficiency in programming (e.g., SQL, Python, R, MATLAB), database technologies (Oracle, MySQL, relational databases; graph databases are a plus), machine learning/deep learning (network architectures are a plus), dimensionality reduction techniques (e.g., PCA), and possibly cheminformatics software suites.
    * Demonstrated experience in the analysis and visualization of large datasets.
    * Proficiency in any of the following technologies is valued: Python (including libraries such as Matplotlib, Seaborn, Plotly, Bokeh), JavaScript, Julia, Java/Scala, or R (including Shiny).
    * Comfortable working in cloud and high-performance computational environments (e.g., AWS and Oracle Cloud).
    * Excellent communication skills and ability to work effectively in interdisciplinary teams.
    * Understanding of the pharma R&D process and challenges in drug discovery is preferred.
    * Proven ability to work well in a collaborative, fast-paced team environment.
    * Excellent oral and written communication skills and the ability to convey IT-related notions to cross-disciplinary scientists.
    * Thorough theoretical and practical understanding of own scientific discipline.
    * Background and/or experience in the biotechnology, pharmaceutical, biology, or chemistry fields is preferred.
    Key Leadership Competencies:
    * Builds strong relationships with peers and cross-functionally with partners outside the team to enable higher performance.
    * Learns fast, grasps the "essence," and can change course quickly where indicated.
    * Raises the bar and is never satisfied with the status quo.
    * Creates a learning environment, open to suggestions and experimentation for improvement.
    * Embraces the ideas of others, nurtures innovation, and manages innovation to reality.
    Kindly share your resume to **********************
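    The posting above twice names dimensionality reduction (PCA) as a core technique. Purely as a hedged illustration, and not drawn from the job description itself, a minimal scikit-learn sketch on a synthetic feature matrix might look like this (the matrix shape and the 95% variance threshold are arbitrary assumptions):

```python
# Minimal PCA sketch on synthetic data; nothing here comes from the posting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))  # stand-in for e.g. 500 compounds x 40 assay features

X_scaled = StandardScaler().fit_transform(X)  # PCA is sensitive to feature scale
pca = PCA(n_components=0.95)  # keep enough components to explain 95% of variance
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)  # (500, k), with k chosen by the variance threshold
```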
    $71k-97k yearly est. Easy Apply 18d ago
  • ETL Conversion Architect

    PCMI LLC - 3.7 company rating

    Data engineer job in Park Ridge, IL

    Who We Are: PCMI (Policy Claim Management International) is a fast-growing, leading provider of integrated software for Extended Warranty Management and Finance and Insurance (F&I) administration. We are a SaaS company that operates in a fast-paced, entrepreneurial environment. Our 3 teams located in the US, Poland, and Thailand work collaboratively around the clock to build our PCRS platform, which automates the full administration lifecycle of all extended warranties, F&I products, and service contracts for our customers.
    What You'll Do: The ETL Conversion Architect will play a critical leadership role in shaping the data conversion strategy for enterprise-scale implementations of PCMI's PCRS platform. Rather than focusing solely on individual project delivery, this role is also responsible for establishing scalable frameworks, validation models, and governance structures that enable the broader Professional Services team to execute conversions with consistency, accuracy, and minimal rework. Acting as the subject matter expert in ETL methodology, this individual will define how legacy data is transformed, validated, and migrated into PCRS, prioritizing transparency, client alignment, and time-to-value. From developing SQL-based source-to-target comparison models to enabling trust-but-verify conversion tracking, this role is essential to building a repeatable, high-confidence conversion process that accelerates client onboarding and elevates delivery quality across the organization.
    In this role, you will own:
    Practice-Level Framework Design:
    * Architect and maintain source-to-target validation frameworks that ensure field-level integrity across legacy and PCRS systems.
    * Design standardized SQL Server validation pipelines (leveraging SSIS) to log and store pre/post-conversion data, enabling clear traceability and trust-but-verify validation with clients.
    * Establish conversion hypothesis protocols, defining expected outcomes from each extract-transform-load (ETL) step before execution begins.
    * Define and govern control total expectations (e.g., contract count, rate bucket totals, claim payments) across major business objects like Contracts, Claims, and Payments.
    Automation & Quality Enablement:
    * Build automated validation assets and exception monitoring templates that the broader team can leverage in Excel, SQL Server, or other tooling.
    * Partner with DevOps/Engineering to maintain a centralized SQL repository of validation rules, transformation logic, and data anomaly flags.
    * Provide field-level transformation and mapping patterns for common edge cases (e.g., address concatenation, rate truncation, legacy formatting inconsistencies).
    Client Alignment & Delivery Readiness:
    * Define standardized conversion preview formats (e.g., before/after field-level reports) to be used in pre-conversion workshops with clients, helping them visualize how legacy data will translate into PCRS before any transformation begins.
    * Serve as a client-facing SME during data onboarding and conversion planning sessions, guiding clients through expected outcomes, resolving mapping discrepancies, and confirming mutual alignment prior to data transformation.
    * Act as a trusted advisor for clients throughout the full conversion lifecycle, from pre-conversion planning through post-load validation, ensuring transparency, accuracy, and accountability are maintained across all data migration milestones.
    * Ensure ongoing data quality assurance by defining scalable approaches for post-load reconciliation, control totals, and referential integrity validations, enabling early detection of discrepancies and reducing reliance on UAT as the primary validation checkpoint.
    Governance & Oversight:
    * Own the conversion data quality strategy for PCRS implementations, including release-specific adjustments based on schema evolution.
    * Govern the process for tracking, storing, and surfacing control totals (financial and transactional) across conversions, ensuring full auditability.
    * Define procedures for flagging scope changes or deviations from standard conversions and provide inputs into project change control discussions.
    Reasonable accommodations may be made to enable individuals with disabilities to perform these essential functions.
    Supervisory Responsibilities: None.
    What You'll Need to Join Our Team:
    * 10+ years of progressive experience in ETL development, data conversion, and SaaS implementation, with a proven track record of designing and delivering scalable, SQL-based data frameworks in complex client environments.
    * Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field, or equivalent hands-on experience with high-volume data conversion projects.
    * Deep expertise in Microsoft SQL Server, including schema design, scripting, data transformation, validation logic, stored procedures, and performance tuning.
    * Proficiency in ETL frameworks and tools (e.g., SSIS, custom SQL-based ETL engines), with a strong ability to architect reusable pipelines and enforce field-level mapping logic.
    * Solid understanding of SaaS platforms, especially highly configurable systems in the following industries: Insurance, F&I, Accounting, Risk Management, or warranty administration platforms.
    * Experience working in Professional Services organizations with a focus on implementation quality, timeline predictability, and cost control.
    * Strong communication skills, with demonstrated ability to translate complex technical designs into digestible summary insights for non-technical audiences, including PMs and clients.
    * Background in mentoring technical consultants and conversion engineers, and establishing standards for scalable data practices across client projects.
    * Familiarity with Agile or Waterfall delivery methodologies, as well as Change Management, Data Governance, and Test Validation frameworks.
    * Experience collaborating with Product and Engineering to identify platform gaps, inform roadmap priorities, and drive customer outcomes through technical innovation.
    Required Skills/Abilities:
    * Expert-level proficiency in ETL processes, frameworks, and source-to-target data mapping.
    * Strong SQL skills and experience working with SQL Server for complex data transformations.
    * Proficient in managing and querying large datasets across staging and production layers.
    * Familiarity with REST APIs, JSON, XML, and batch processing for system integrations.
    * Competent in scripting with PowerShell and Python for ETL automation and validation.
    * Strong analytical mindset with a focus on quality, traceability, and reusability of conversions.
    * Excellent communication skills, capable of translating technical insights for non-technical audiences.
    * Highly organized with the ability to manage multiple priorities under tight deadlines.
    * Comfortable in a high-paced, client-facing environment with evolving business needs.
    Physical Requirements: Prolonged periods of sitting at a desk and working on a computer. Must be able to lift up to 15 pounds at times.
    Travel Requirements: Must be able to travel to client meetings or the PCMI office; up to 10%.
    Why Work For Us:
    * Competitive compensation from $150,000-$170,000 annually*
    * Comprehensive benefit package**, including: Health, Dental & Vision Insurance; Health Savings Account (HSA); Flexible Spending Account (FSA); Short- & Long-Term Disability Insurance; Company-paid Long-Term Disability; Company-paid Life Insurance; Voluntary Life Insurance; Voluntary Accident Insurance; Employee Assistance Program; 401(k) with generous Company Match; Commuter Benefits
    * Paid Time Off accrued per pay period
    * 10 Paid Holidays
    * Paid Parental Leave
    * Annual Bonus Program
    * Professional Development Opportunities
    * Employee Events
    * Wellness Programs
    * Employee Discount Programs
    * Office in Park Ridge, IL - convenient location to the Blue Line
    *Individual compensation packages are based on various factors unique to each candidate, including skill set, experience, qualifications, and other job-related aspects. **Eligible to enroll on the first day of employment for immediate coverage.
    Although the role is remote, PCMI can only hire employees in the following states: AL, CT, FL, GA, IL, KY, MO, NH, NC, OH, PA, TX. Note: This role is required to be in the Park Ridge, IL office 2 days per week if the candidate is located in the Chicagoland area.
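    The framework bullets above center on control-total, trust-but-verify checks between legacy and converted tables. Purely as a hedged sketch of that idea (the table names, columns, and DSN are hypothetical placeholders, not PCMI's schema), a Python pass over paired source/target queries might look like:

```python
# Hedged sketch of a source-to-target control-total check; every table,
# column, and connection name here is a hypothetical placeholder.
import pyodbc

# Paired (legacy, converted) queries, one pair per control total.
CHECKS = {
    "contract_count": (
        "SELECT COUNT(*) FROM legacy_stage.contracts",
        "SELECT COUNT(*) FROM pcrs.contracts",
    ),
    "claim_paid_total": (
        "SELECT COALESCE(SUM(paid_amount), 0) FROM legacy_stage.claims",
        "SELECT COALESCE(SUM(paid_amount), 0) FROM pcrs.claims",
    ),
}

conn = pyodbc.connect("DSN=conversion_db")  # hypothetical SQL Server DSN
cur = conn.cursor()
for name, (src_sql, tgt_sql) in CHECKS.items():
    src = cur.execute(src_sql).fetchval()
    tgt = cur.execute(tgt_sql).fetchval()
    flag = "OK" if src == tgt else "MISMATCH"  # surface mismatches for client sign-off
    print(f"{name}: source={src} target={tgt} -> {flag}")
```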
    $150k-170k yearly Auto-Apply 60d+ ago
  • AI Data Scientist

    Clarios

    Data engineer job in Milwaukee, WI

    What You Will Do: Clarios is seeking a skilled AI Data Scientist to design, develop, and deploy machine learning and AI solutions that unlock insights, optimize processes, and drive innovation across operations, offices, and products. This role focuses on transforming complex, high-volume data into actionable intelligence and enabling predictive and prescriptive capabilities that deliver measurable business impact. The AI Data Scientist will collaborate closely with AI Product Owners and business SMEs to ensure solutions are robust, scalable, and aligned with enterprise objectives. This role requires an analytical, innovative, and detail-oriented team member with a strong foundation in AI/ML and a passion for solving complex problems. The individual must be highly collaborative, an effective communicator, and committed to continuous learning and improvement. This role will be onsite three days a week in Glendale.
    How You Will Do It:
    * Hypothesis Framing & Metric Measurement: Translate business objectives into well-defined AI problem statements with clear success metrics and decision criteria. Prioritize opportunities by ROI, feasibility, risk, and data readiness; define experimental plans and acceptance thresholds to progress solutions from concept to scaled adoption.
    * Data Analysis & Feature Engineering: Conduct rigorous exploratory data analysis to uncover patterns, anomalies, and relationships across heterogeneous datasets. Apply advanced statistical methods and visualization to generate actionable insights; engineer high-value features (transformations, aggregations, embeddings) and perform preprocessing (normalization, encoding, outlier handling, dimensionality reduction). Establish data quality checks, schemas, and data contracts to ensure trustworthy inputs.
    * Model Development & Iteration: Design and build models across classical ML and advanced techniques: deep learning, NLP, computer vision, time-series forecasting, anomaly detection, and optimization. Run statistically sound experiments (cross-validation, holdouts, A/B testing), perform hyperparameter tuning and model selection, and balance accuracy, latency, stability, and cost. Extend beyond prediction to prescriptive decision-making (policy, scheduling, setpoint optimization, reinforcement learning), with domain applications such as OEE improvement, predictive maintenance, production process optimization, and digital twin integration in manufacturing contexts.
    * MLOps & Performance: Develop end-to-end pipelines for ingestion, training, validation, packaging, and deployment using CI/CD, reproducibility, and observability best practices. Implement performance and drift monitoring, automated retraining triggers, rollback strategies, and robust versioning to ensure reliability in dynamic environments. Optimize for scale, latency, and cost; support real-time inference and edge/plant-floor constraints under defined SLAs/SLOs.
    * Collaboration & Vendor Leadership: Partner with AI Product Owners, business SMEs, IT, and operations teams to translate requirements into pragmatic, integrated solutions aligned with enterprise standards. Engage process owners to validate data sources, constraints, and hypotheses; design human-in-the-loop workflows that drive adoption and continuous feedback. Provide technical oversight of external vendors: evaluating capabilities, directing data scientists/engineers/solution architects, validating architectures and algorithms, and ensuring seamless integration, timely delivery, and measurable value. Mentor peers, set coding/modeling standards, and foster a culture of excellence.
    * Responsible AI & Knowledge Management: Ensure data integrity, model explainability, fairness, privacy, and regulatory compliance throughout the lifecycle. Establish model risk controls; maintain documentation (model cards, data lineage, decision logs), audit trails, and objective acceptance criteria for production release. Curate reusable assets (feature catalogs, templates, code libraries) and best-practice playbooks to accelerate delivery while enforcing Responsible AI principles and rigorous quality assurance.
    What We Look For:
    * 5+ years of experience in data science and machine learning, delivering production-grade solutions in corporate or manufacturing environments.
    * Strong proficiency in Python and common data science libraries (e.g., Pandas, NumPy, scikit-learn); experience with deep learning frameworks (TensorFlow, PyTorch) and advanced techniques (NLP, computer vision, time-series forecasting).
    * Hands-on experience with data preprocessing, feature engineering, and EDA for large, complex datasets.
    * Expertise in model development, validation, and deployment, including hyperparameter tuning, optimization, and performance monitoring.
    * Experience interacting with databases and writing SQL queries.
    * Experience using data visualization techniques for analysis and model explanation.
    * Familiarity with MLOps best practices: CI/CD pipelines, containerization (Docker), orchestration, model versioning, and drift monitoring.
    * Knowledge of cloud platforms (e.g., Microsoft Azure, Snowflake) and distributed computing frameworks (e.g., Spark) for scalable AI solutions.
    * Experience with agile methodologies and collaboration tools (e.g., JIRA, Azure DevOps), working in matrixed environments across IT, analytics, and business teams.
    * Strong analytical and business acumen, with the ability to quantify ROI and build business cases for AI initiatives.
    * Excellent communication and stakeholder engagement skills; able to present insights and recommendations to technical and non-technical audiences.
    * Knowledge of LLMs and VLMs is a strong plus.
    * Understanding of manufacturing systems (SCADA, PLCs, MES) and the ability to integrate AI models into operational workflows is a strong plus.
    * Willingness to travel up to 10% as needed. #LI-AL1 #LI-HYBRID
    What You Get:
    * Medical, dental and vision care coverage and a 401(k) savings plan with company matching - all starting on date of hire
    * Tuition reimbursement, perks, and discounts
    * Parental and caregiver leave programs
    * All the usual benefits such as paid time off, flexible spending, short- and long-term disability, basic life insurance, business travel insurance, Employee Assistance Program, and domestic partner benefits
    * Global market strength and worldwide market share leadership
    * HQ location earns LEED certification for sustainability, plus a full-service cafeteria and workout facility
    * Clarios has been recognized as one of 2025's Most Ethical Companies by Ethisphere. This prestigious recognition marks the third consecutive year Clarios has received this distinction.
    Who We Are: Clarios is the force behind the world's most recognizable car battery brands, powering vehicles from leading automakers like Ford, General Motors, Toyota, Honda, and Nissan. With 18,000 employees worldwide, we develop, manufacture, and distribute energy storage solutions while recovering, recycling, and reusing up to 99% of battery materials, setting the standard for sustainability in our industry. At Clarios, we're not just making batteries; we're shaping the future of sustainable transportation. Join our mission to innovate, push boundaries, and make a real impact. Discover your potential at Clarios, where your power meets endless possibilities.
    Veterans/Military Spouses: We value the leadership, adaptability, and technical expertise developed through military service. At Clarios, those capabilities thrive in an environment built on grit, ingenuity, and passion, where you can grow your career while helping to power progress worldwide.
    All qualified applicants will be considered without regard to protected characteristics. We recognize that people come with a wealth of experience and talent beyond just the technical requirements of a job. If your experience is close to what you see listed here, please apply. Diversity of experience and skills combined with passion is key to challenging the status quo. Therefore, we encourage people from all backgrounds to apply to our positions. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, status as a protected veteran, or other characteristics protected by law. As a federal contractor, we are committed to not discriminating against any applicant or employee based on these protected statuses. We will also take affirmative action to ensure equal employment opportunities. Please let us know if you require accommodations during the interview process by emailing Special.Accommodations@Clarios.com.
    We are an Equal Opportunity Employer and value diversity in our teams in terms of work experience, area of expertise, and all characteristics protected by laws in the countries where we operate. For more information on our commitment to sustainability, diversity, and equal opportunity, please read our latest report. We want you to know your rights because EEO is the law.
    A Note to Job Applicants: Please be aware of scams being perpetrated through the Internet and social media platforms. Clarios will never require a job applicant to pay money as part of the application or hiring process.
    To All Recruitment Agencies: Clarios does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Clarios employees, or any other company location. Clarios is not responsible for any fees related to unsolicited resumes/CVs.
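    The MLOps bullet above calls for drift monitoring with automated retraining triggers. As an illustrative, hedged sketch only (not Clarios code), one common building block is a Population Stability Index comparison between training data and live inference data; the distributions and the ~0.2 threshold below are generic assumptions:

```python
# Hedged drift-monitoring sketch: Population Stability Index (PSI) for one
# numeric feature. All data and thresholds here are synthetic assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI of `actual` against quantile bins of `expected` (training data)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # fold outliers into edge bins
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 10_000)  # feature distribution at training time
live = rng.normal(0.3, 1.1, 2_000)    # simulated shifted live feature

print(f"PSI = {psi(train, live):.3f}")  # a common rule of thumb flags drift above ~0.2
```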
    $68k-94k yearly est. Auto-Apply 1d ago
  • Data Engineer

    Tree Top Staffing - 4.7 company rating

    Data engineer job in Wheeling, IL

    Benefits: 401(k), 401(k) matching, dental insurance, health insurance, opportunity for advancement, paid time off, parental leave, training & development, vision insurance.
    Overview: We are looking for a Data Engineer who will use various methods to transform raw data into useful data systems. For example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this Data Engineering position, you should have strong analytical skills and the ability to combine data from different sources.
    Responsibilities:
    * Analyze and organize raw data
    * Build data systems and pipelines
    * Evaluate business needs and objectives
    * Interpret trends and patterns
    * Conduct complex data analysis and report on results
    * Prepare data for prescriptive and predictive modeling
    * Build algorithms and prototypes
    * Combine raw information from different sources
    * Explore ways to enhance data quality and reliability
    * Identify opportunities for data acquisition
    * Develop analytical tools and programs
    * Collaborate with data scientists and architects on several projects
    Qualifications:
    * Previous experience as a data engineer or in a similar role
    * Technical expertise with data models, data mining, and segmentation techniques
    * SSRS & SSAS experience
    * Hands-on experience with SQL database design
    * Great numerical and analytical skills
    * Degree in Computer Science, IT, or a similar field
    * Data Engineering certification (e.g., IBM Certified Data Engineer) is a plus
    Compensation: $75,000.00 - $90,000.00 per year
    Our Story: At Tree Top Staffing, we take pride in helping job seekers find their ideal role and employers find the right candidate for their company. Our organization is made up of experienced professionals providing full-service employment solutions, including contract, contract-to-hire, and direct-hire placements within multiple lines of business.
    Our Mission: We adhere to a set of 4 defining principles: servitude, accountability, integrity, and discipline. If you make a promise, keep it, as your actions prove your greatness. Our goal at Tree Top Staffing is to set our clients and consultants up for success. It is imperative to ensure an all-around fit from both sides for long-term relations to thrive.
    Our Results: Tree Top Staffing utilizes advanced recruiting tools to ensure top talent is presented to our clients when their needs arise. Our success is measured by the success of our clients. It is a privilege to help job seekers find their dream position and employers find the right fit for their company.
    $75k-90k yearly Auto-Apply 60d+ ago
  • Real World Data Scientist, Oncology (Associate Director)

    Astellas Pharma, Inc. - 4.9 company rating

    Data engineer job in Northbrook, IL

    Do you want to be part of an inclusive team that works to develop innovative therapies for patients? Every day, we are driven to develop and deliver innovative and effective new medicines to patients and physicians. If you want to be part of this exciting work, you belong at Astellas!
    Astellas Pharma Inc. is a pharmaceutical company conducting business in more than 70 countries around the world. We are committed to turning innovative science into medical solutions that bring value and hope to patients and their families. Keeping our focus on addressing unmet medical needs and conducting our business with ethics and integrity enables us to improve the health of people throughout the world. For more information on Astellas, please visit our website at *****************
    This position is based in Northbrook, Illinois. Hybrid work from certain states may be permitted in accordance with Astellas' Responsible Flexibility Guidelines. Candidates interested in hybrid work are encouraged to apply.
    Purpose: We are hiring an experienced real-world data scientist to join our Real-World Data Science (RWDS) team. As an Associate Director of RWDS, you will be an analytic researcher informing and conducting Real World Data (RWD) studies at any point in the drug lifecycle. You will work directly within the RWDS team to execute observational studies for internal and external consumption and partner closely with Development, Medical Affairs, and Pharmacovigilance/Pharmacoepidemiology colleagues in their research. Additionally, you will collaborate closely with others in RWDS, Biostatistics, and the broader Quantitative Sciences & Evidence Generation department to enhance our RWD and analytics offerings. RWDS is multidisciplinary and provides RWE strategic input, study design, and statistical and programming support to projects. Team members apply their unique knowledge, skills, and experience in teams to deliver decision-shaping real-world evidence.
    Essential Job Responsibilities:
    * Provide best-in-class data science support to Astellas drug development programs and marketed products in relation to RWD.
    * Design observational studies (primary and/or secondary data).
    * Execute (program and analyze) observational studies using in-house RWD, or oversee vendors or other RWDS staff in executing observational studies.
    * Write, review, or contribute to key study documents to ensure optimal methodological and statistical presentation. These documents include protocols, analysis plans, table and figure (TLF) specifications, study reports, and publications.
    * Ensure efficient planning, execution, and reporting of analyses.
    * Advise as subject matter expert in specific data access partnerships.
    * Represent the company on matters related to RWD analysis at meetings with regulatory authorities, key opinion leaders, and similar experts/bodies as needed.
    * Contribute to vendor selection with partner functions.
    * Participate in the creation and upkeep of best practices, tools/macros, and standards related to methods, data, and data analysis at Astellas.
    * Collaborate with RWDS and Biostatistics colleagues and cross-functional teams in Development, Medical Affairs, and Pharmacovigilance.
    * Mentor and guide junior members of the RWD Analytics team.
    $75k-104k yearly est. 23d ago
  • Big Data Engineer

    Forhyre

    Data engineer job in Rolling Meadows, IL

    Job Description: Looking for an experienced Senior Big Data Developer.
    Experience: 8 - 10 years
    Requirements:
    Primary / Essential Skills: Spark with Scala or Python
    Secondary / Optional Skills: AWS, UNIX & SQL
    The Senior Big Data Developer will be responsible for:
    * Technical leadership in driving solutions and hands-on contributions
    * Building new cloud-based ingestion, transformation, and data movement applications
    * Migrating / modernizing legacy data platforms
    * Contributing to and assisting in translating requirements into high-level and low-level solution designs and working program / code
    * Interacting with business/IT stakeholders and other involved teams to understand requirements, identify dependencies, and suggest and convincingly present solutions to any or all parties involved
    * Performing hands-on work to deliver on commitments, and coordinating with team members both onsite and offshore to enable the team to deliver on commitments
    To be successful in this role, the candidate must have:
    * Good working knowledge of, and a strong grounding in the concepts of, the Spark framework
    * Comfort working with one or more of the following scripting languages, listed in order of preference: Scala, Python, Unix shell scripting
    * Experience / exposure with AWS services (EC2, Lambda, S3, RDS) and related cloud technologies
    * Good understanding of the data space: data integration and building data warehouse solutions
    $75k-100k yearly est. 21d ago
  • Principal Data Engineer

    Sdevops

    Data engineer job in Rolling Meadows, IL

    Key Responsibilities of the Principal Data Engineer:
    * Guide the team towards successful project delivery
    * Provide technical leadership and key decision making
    * Work with and mentor individual team members to meet professional goals
    * Continuously work to automate and increase team efficiency
    * Maintain high standards of software quality via code review, testing, automation, standardization, and tooling
    * Collaborate directly with both developers and business stakeholders
    * Provide estimates and risk-reducing spike stories
    * Assist in the collection and documentation of requirements from the business
    * Assist in planning deployments and migrations
    * Prepare status updates and run the daily standup
    * Analyze data problems and provide solutions
    * Assess opportunities for improvement and optimization
    Required Qualifications of the Principal Data Engineer:
    * Master's degree or equivalent work experience
    * Minimum 12 years of experience working with open source databases
    * Minimum 10 years of experience working with ETL and related data pipeline technologies
    * Highly proficient in open source SQL systems, particularly MySQL and PostgreSQL
    * Proficiency in multiple scripting languages, including Python and Ruby
    * Demonstrable experience with cloud-based ETL tools, including EMR
    * Expertise with distributed data stores, with demonstrable experience using Redshift and optimizing query performance
    * Deep understanding of data structures and schema design
    * Prior work with the AWS ecosystem (particularly RDS, SQS and SNS, Step Functions, and CDK)
    * Exceptional analytical, organizational, interpersonal, and communication (both oral and written) skills
    * Self-motivated, driven, resourceful, and able to get things done
    * Enjoys working in fast-paced, collaborative, Agile environments
    * Ability to adapt and learn quickly is a fundamental necessity
    Benefits: 401(k), dental insurance, health insurance, health savings account, paid time off, professional development assistance, vision insurance
    $75k-100k yearly est. 60d+ ago
  • Lead ETL Architect (No H1B)

    Sonsoft - 3.7 company rating

    Data engineer job in Deerfield, IL

    Sonsoft, Inc. is a USA-based corporation duly organized under the laws of the Commonwealth of Georgia. Sonsoft Inc. is growing at a steady pace, specializing in the fields of Software Development, Software Consultancy, and Information Technology Enabled Services.
    As background, this client has been working with us and internal recruiting to fill this role for a long time. Previously, the client was looking for someone who could both technically lead the team specific to SAG technologies and also work with the business. Two things have changed: 1) the span of control of the team increased from SAG to include the other technologies listed in the job description, and 2) the client has been unsuccessful in finding someone who was both the best technical architect on the team and also had manager qualities. He is now open to lesser capabilities on the technical side, as long as the candidate has the manager qualities. I would target current or former senior managers / directors of integration / SOA, preferably those who were technical at some point in their career but moved into management. With this target, you should be able to access the appropriate talent at this pay rate.
    Lead ETL Architect: The client is seeking a Lead ETL Architect as a key member of their Center of Expertise for Integration Technologies. This consultant will be primarily responsible for leading the demand intake / management process with the business leaders and other IT groups related to managing the demand for the design, build, and ongoing support of new ETL architectures. They also will interact extensively with the team of architects supporting these technologies. Expertise in one or more of the following integration technology areas is required:
    * ETL - DataStage, Ab Initio, Talend (the client is moving from Ab Initio to Talend as their primary ETL tool)
    The overall team is responsible for addressing any architecture impacts and defining technology solution architectures focusing on infrastructure and the logical and physical layout of systems and technologies, including but not limited to hardware, distributed and virtual systems, software, security, data management, storage, backup/recovery, appliances, messaging, networking, workflow, interoperability, flexibility, scalability, monitoring, and reliability (including fault tolerance and disaster recovery), in collaboration with Enterprise Architect(s). Additionally, the team is responsible for the build-out and ongoing run support of these integration platforms.
    Skill Set:
    * Possession of the fundamental skills of a solution architect, with the ability to inquire into and resolve vague requirements
    * Ability to analyze existing systems through an interview process with technology SMEs; does not need to be the SME
    * Takes a holistic view and communicates the enterprise view to others to ensure coherence of all aspects of the project as an integrated system
    * Performs gap analysis between the current state and the future-state architecture to identify single points of failure, capabilities, capacity, fault tolerance, hours of operation (SLA), and change windows
    * Strong verbal and written communication, with proven skills in facilitating design sessions
    * Able to influence, conceptualize, visualize, and communicate the target architecture
    * Able to communicate complex technical or architecture concepts in a simple manner and adapt to different audiences
    * Ability to work independently and promote architecture best practices, guidelines, standards, principles, and patterns while working with infrastructure and technology project teams
    * Ability to document end-to-end application transaction flows through the enterprise
    * Ability to document the technology architecture decision process and, if required, develop relevant templates
    * Resolves conflicts within infrastructure and application teams, business units, and other architects
    * Identifies opportunities to cut cost without sacrificing the overall business goals
    * Ability to estimate the financial impact of solution architecture alternatives / options
    * Knowledge of all components of an enterprise technical architecture
    Additional Information: U.S. Citizens and those who are authorized to work independently in the United States are encouraged to apply. We are unable to sponsor at this time.
    Note: This is a contract job opportunity. Only US Citizen, Green Card Holder, GC-EAD, H4-EAD, L2-EAD, OPT-EAD & TN-Visa candidates can apply. No H1B candidates, please. Please mention your visa status in your email or resume.
    All your information will be kept confidential according to EEO guidelines.
    $87k-114k yearly est. 14h ago
  • BigData Hadoop Developer

    Jobsbridge

    Data engineer job in Des Plaines, IL

    Jobs Bridge Inc is among the fastest growing IT staffing / professional services organizations with its own job portal. Jobs Bridge works extremely closely with a large number of IT organizations in the most in-demand technology skill sets.
    Job Description:
    Skill: Big Data Hadoop Developer
    Location: Des Plaines, IL
    Total Experience: 8 yrs.
    Max Salary: Not mentioned
    Employment Type: Direct Jobs (Full Time)
    Domain: Any
    Description: Green Card holders can apply. Level: 7+ years.
    * Good understanding of Hadoop and its ecosystem
    * Strong experience in managing, monitoring, and troubleshooting Hadoop clusters and environments (IBM BigInsights, HDP)
    * Proficient in MapReduce, Hive, Flume, Sqoop
    * Proficient in debugging Hive, MapReduce, and Sqoop issues
    * Strong Linux administration background and experience troubleshooting and analyzing Linux and resident application issues
    * Good knowledge of scripting (shell, Python)
    * Good analytical skills
    Additional Information: Multiple openings for GC holders / citizens.
    $76k-99k yearly est. 60d+ ago
  • Hadoop Developer

    Info. Services Inc. - 4.2 company rating

    Data engineer job in Riverwoods, IL

    Role: Hadoop Developer
    Duration: Full-time
    BGV will be done for the selected candidates.
    * The Senior/Lead Hadoop Developer is responsible for designing, developing, testing, tuning, and building a large-scale data processing system for data ingestion and data products that allow the client to improve the quality, velocity, and monetization of our data assets for both operational applications and analytical needs.
    * Design, develop, validate, and deploy ETL processes.
    * Must have used Hadoop (Pig, Hive, Sqoop) on the Hortonworks distribution.
    * Responsible for the documentation of all Extract, Transform and Load (ETL) processes.
    * Maintain and enhance ETL code; work with the QA and DBA teams to fix performance issues.
    * Collaborate with the application team to design and develop required ETL processes and performance-tune ETL programs/scripts.
    * Work with business partners to develop business rules and business rule execution.
    * Perform process improvement and re-engineering, with an understanding of technical problems and solutions as they relate to the current and future business environment.
    * Design and develop innovative solutions for demanding business situations.
    * Help drive cross-team design / development via technical leadership / mentoring. Work with an offshore team of developers.
    * Analyze complex distributed production deployments and make recommendations to optimize performance.
    Essential skills:
    * Minimum 3 years of ETL experience with RDBMS and Big Data strongly preferred; may consider experience with Informatica or DataStage as an alternative.
    * Minimum 2+ years of experience creating reports using Tableau.
    * Proficiency with Hortonworks Hadoop distribution components and custom packages.
    * Proven understanding of and related experience with Hadoop, HBase, Hive, Pig, Sqoop, Flume, and/or MapReduce.
    * Excellent RDBMS (Oracle, SQL Server) knowledge for development using SQL/PL SQL.
    * Basic UNIX OS and shell scripting skills.
    * 6+ years' experience in UNIX and shell scripting.
    * 3+ years' experience with job scheduling tools like AutoSys.
    * 3+ years' experience with Pig and Hive queries.
    * 3+ years' hands-on experience with Oozie.
    * 3+ years' experience importing and exporting data with Sqoop between HDFS and relational database systems (mainframe, Teradata) and vice versa.
    * Must have 2+ years' experience working with Spark for data manipulation, preparation, and cleansing.
    Please respond with your Word resume and the requested details:
    Full Name:
    Work Authorization:
    Contact Number:
    Email ID:
    Skype ID:
    Current location:
    Willing to relocate:
    Salary:
    Additional Information: All your information will be kept confidential according to EEO guidelines.
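    The posting pairs Sqoop-style HDFS-to-RDBMS transfers with Spark experience. Purely as a hedged sketch of that same transfer pattern, expressed here in PySpark rather than Sqoop itself, with hypothetical connection details and table/path names:

```python
# Hedged PySpark sketch of the HDFS <-> RDBMS transfer pattern the posting
# describes for Sqoop. The connection string, credentials, and table/path
# names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms-to-hdfs").getOrCreate()

# Import: read a relational table over JDBC (analogous to `sqoop import`).
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")  # hypothetical
    .option("dbtable", "claims.transactions")
    .option("user", "etl_user")
    .option("password", "change_me")
    .option("fetchsize", "10000")  # rows fetched per database round trip
    .load()
)

# Land the data on HDFS as Parquet for downstream Hive/Spark jobs.
df.write.mode("overwrite").parquet("hdfs:///warehouse/claims/transactions")
```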
    $77k-98k yearly est. 14h ago
  • Data Engineer

    Influur

    Data engineer job in Mundelein, IL

    We're building the world's first viral agent: an AI purpose-built for influencer marketing. Not a tool. Not a platform. An autonomous agent that goes from "I need this campaign to break" to "50 influencers live next week" with minimal human intervention.
    Why this matters: We have 3 years of proprietary data on what makes influencer content go viral. Warner, Sony, and Universal trust us to break their biggest artists. We're backed by the biggest names in entertainment, including Sofia Vergara, Karol G, Tommy Mottola, and Tier 1 VCs like Point72 Ventures. We're not building for hypothetical use cases. We're shipping production AI that drives millions in revenue today.
    Why we'll win: We have what no one else has: both the data and the distribution. We have proprietary data on thousands of influencers and what makes content go viral, plus direct relationships with the influencers themselves. In the AI era, having both is the moat. Everyone else has one or the other. We have both.
    Why now: AGI agents will replace middle-level work by 2027. We have a 12 to 18 month window before the market floods with agents. We're building the category-defining social media viral agent right now.
    What we're looking for: Young talent ready to go all in. We're offering significant equity to people who want to build something that matters. This isn't a job. It's an opportunity to define the future of AI in influencer marketing and own a meaningful piece of it.
    Your Skillset:
    * Strong programming with Python and SQL. Comfortable building from scratch and improving existing code.
    * Expertise in data modeling and warehousing, including dimensional modeling and performance tuning.
    * Experience designing and operating ETL and ELT pipelines with tools like Airflow or Dagster, plus dbt for transformations.
    * Hands-on with batch and streaming systems such as Spark and Kafka, and with Lakehouse or warehouse tech on AWS or GCP.
    * Proficiency integrating third-party APIs and datasets, ensuring reliability, lineage, and governance.
    * Familiarity with AI data needs: feature stores, embedding pipelines, vector databases, and feedback loops that close the gap between model and outcome.
    * High standards for code quality, testing, observability, and CI. Comfortable with Docker and modern cloud infra.
    You're the Type Who:
    * Treats data as a product and ships improvements that users feel.
    * Moves fast without breaking trust. You value contracts, schemas, and backward compatibility.
    * Owns problems across the stack, from ingestion to modeling to serving.
    * Communicates clearly with ML engineers, analysts, and business partners.
    * Experiments, measures, and iterates. You set measurable SLAs and keep them green.
    * Sees ambiguity as a chance to design the standard everyone else will follow.
    Gross salary range:
    What We Offer:
    * Competitive equity in a venture-backed company shaping the future of music influencer marketing.
    * A seat at the table as we redefine how the most iconic record labels, artists, and brands go viral (think Bad Bunny), with our tech, support, and strategic guidance.
    * Access to elite tools, AI copilots, and a team that builds daily at top speed.
    * Hybrid flexibility.
    We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
    $75k-100k yearly est. 20d ago
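For context on the orchestration stack the posting above names (Airflow for scheduling, dbt for transformations), here is a minimal illustrative sketch of that pattern; the DAG id, task names, extract logic, and dbt project path are hypothetical placeholders, not details from the posting.

```python
# Minimal Airflow DAG sketch: nightly extract -> dbt transformations.
# All identifiers and paths below are hypothetical, for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_influencer_stats() -> None:
    """Pull raw engagement stats from a (hypothetical) third-party API."""
    ...  # e.g. call the API and land raw JSON in object storage


with DAG(
    dag_id="influencer_stats_nightly",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_raw_stats",
        python_callable=extract_influencer_stats,
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/influencers",
    )
    extract >> transform  # run transformations only after extraction succeeds
```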
  • Data Engineer

    Randa Corp (3.9 company rating)

    Data engineer job in Des Plaines, IL

    Randa Apparel & Accessories is one of the world's leading fashion clothing and accessories companies, operating across 11 countries with a portfolio of 30+ iconic brands, including Haggar, Levi's, Tommy Hilfiger, Calvin Klein, Totes, Isotoner, and Columbia Sportswear. With over 100 years of industry leadership, RAA continues to produce exceptional products and services, delighting customers and empowering partners worldwide. From the #1 dress pant brand in North America to the #1 belt and wallet, RAA sets the standard for excellence and innovation. We empower our associates, create growth opportunities at every level, and strive to make RAA the best place to build a career. Want to work at a diverse, equitable, and inclusive workplace where associates are encouraged to bring their true, authentic selves? Apply today and fashion your future with RAA.

    Location: Rosemont, IL
    Schedule: Hybrid (3 days in-office / 2 days remote)

    Position Summary: The Data Engineer contributes to the design, development, and management of Randa Apparel & Accessories' large-scale database systems while supporting the strategic architectural vision of quality, scalability, performance, and function.

    Essential Job Functions:
    * Develop solutions and contribute to development, leveraging object-oriented programming techniques (.NET), software development lifecycles, unit test techniques, and debugging/analytical techniques.
    * Design and develop data integration solutions that move data between on-premises and cloud systems in both directions.
    * Collaborate with the team to develop database structures that fit into the overall architecture of the systems under development.
    * Code, install, optimize, and debug database queries and stored procedures using appropriate tools and editors.
    * Perform code reviews and provide feedback in a timely manner. Promote collective code ownership so everyone has visibility into the feature codebase.
    * Present technical ideas and concepts in business-friendly language.
    * Provide recommendations, analysis, and evaluation of systems improvements, optimization, development, and maintenance efforts, including capacity planning.
    * Identify and correct performance bottlenecks related to SQL code.
    * Support timely production releases and adherence to release activities.
    * Contribute to the data retention strategy.

    Minimum Qualifications:
    * 1-3 years of proven hands-on AWS development experience covering data processing with near real-time flows and highly available architecture (Lambda, Python, S3 event management, Redshift or similar, Aurora serverless or similar, Redis ElastiCache).
    * 1-3 years in a commercial-grade business applications environment, leveraging SQL Server, T-SQL, SSIS, stored procedures, user-defined functions, and table functions, and managing design risk.
    * 1-3 years leveraging OO programming techniques, software development lifecycles, unit test techniques, and debugging/analytical techniques.

    Required Skills:
    * MS SQL Server - T-SQL
    * SQL Server Integration Services (SSIS)
    * Adept at creating stored procedures, views, user-defined functions, and table functions
    * Python, C#, Java
    * Cloud knowledge - AWS (Glue, Redshift, Lambda, Kinesis)
    * Reporting: Tableau, Power BI
    * Analysis: OLAP
    * Excellent verbal and written communication skills
    * Minimal oversight required

    What We Offer: Competitive base salary. Hybrid work schedule. Three weeks of paid time off within the first year of employment. Company-provided life insurance, short-term disability, long-term disability, and paid parental leave. Health, vision, and dental insurance options with low employee contributions. Commuter benefit plan. Optional supplemental life insurance, pet insurance, and accident & critical illness insurance offered at a group discount rate. 401(k). Unlimited access to our award-winning online fitness and wellness program. A great place to work: fast-paced, with terrific career growth.

    "The statements in this job description are intended to describe the general nature and level of work being performed by people assigned to this work. This is not an exhaustive list of all duties and responsibilities. Randa management reserves the right to amend and change responsibilities to meet business and organizational needs as necessary."
    $69k-93k yearly est. 24d ago
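The AWS qualifications above center on Lambda, Python, and S3 event management. The sketch below shows what that event-driven pattern typically looks like; the bucket contents, the JSON-payload assumption, and the downstream targets are hypothetical, not details from the posting.

```python
# Minimal sketch of the "S3 event management" pattern: an AWS Lambda handler
# that fires on s3:ObjectCreated events and reads the new object.
# Downstream targets (Redshift COPY, Aurora write) are left as comments.
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)  # assumes the upstream drops JSON files
        # ... transform and load, e.g. COPY into Redshift or write to Aurora
        print(f"processed s3://{bucket}/{key} ({len(payload)} records)")
    return {"status": "ok"}
```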
  • Data Engineer

    Charter Manufacturing (4.1 company rating)

    Data engineer job in Mequon, WI

    Charter Manufacturing is a fourth-generation family-owned business where our will to grow drives us to do it better. Join the team and become part of our family! Applicants must be authorized to work for ANY employer in the U.S. Charter Manufacturing is unable to sponsor employment visas at this time. This position is hybrid, 3 days a week in office in Mequon, WI.

    BI&A - Lead Data Engineer

    Charter Manufacturing continues to invest in Data & Analytics. Come join a great team and great culture, leveraging your expertise to drive analytics transformation across Charter's companies. This is a key role in the organization that will provide thought leadership and add substantial value by delivering trusted data pipelines used to develop models and visualizations that tell a story and solve real business needs and problems. This role will collaborate with team members and business stakeholders to leverage data as an asset, driving business outcomes aligned to business strategies. Seven or more years of prior experience developing data pipelines and partnering with team members and business stakeholders to drive adoption will be critical to success in this role.

    MINIMUM QUALIFICATIONS:
    * Bachelor's degree in computer science, data science, software engineering, information systems, or a related quantitative field; master's degree preferred
    * At least seven years of work experience in data management disciplines, including data integration, modeling, optimization, and data quality, or other areas directly relevant to data engineering responsibilities and tasks
    * Proven project experience designing, developing, deploying, and maintaining data pipelines that support AI, ML, and BI using big data solutions (Azure, Snowflake)
    * Strong knowledge of Azure technologies such as Azure Web Application, Azure Data Explorer, Azure DevOps, and Azure Blob Storage for building scalable and efficient data pipelines
    * Strong knowledge of programming languages such as R, Python, and C#, plus Azure Machine Learning Workspace development
    * Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS, Azure, GCP) and modern data warehouse tools (Snowflake, Databricks)
    * Experience with database technologies such as SQL, Oracle, and Snowflake
    * Prior experience with ETL/ELT data ingestion into data lakes/data warehouses for analytics consumption
    * Strong SQL skills
    * Ability to collaborate within and across teams of different technical knowledge to support delivery and educate end users on data products
    * Passion for teaching, coaching, and mentoring others
    * Strong problem-solving and debugging skills, allowing the determination of sources of issues in unfamiliar code or systems, and the ability to recognize and solve problems
    * Excellent business acumen and interpersonal skills; able to work across business lines at a senior level to influence and effect change to achieve common goals
    * Ability to describe business use cases and outcomes, data sources and management concepts, and analytical approaches and options
    * Demonstrated experience delivering business value by structuring and analyzing large, complex data sets
    * Demonstrated initiative, strong sense of accountability, and collaboration; known as a trusted business partner

    PREFERRED QUALIFICATIONS INCLUDE EXPERIENCE WITH:
    * Manufacturing industry experience, specifically heavy industry, supply chain, and operations
    * Designing and supporting data integrations with ERP systems such as Oracle or SAP

    MAJOR ACCOUNTABILITIES:
    * Designs, develops, and supports data pipelines for batch and streaming data, extracting data from various sources (databases, APIs, external systems), transforming it into the desired format, and loading it into the appropriate data storage systems
    * Collaborates with data scientists and analysts to optimize models and algorithms in accordance with data quality, security, and governance policies
    * Ensures data quality, consistency, and integrity during the integration process, performing data cleansing, aggregation, filtering, and validation as needed to ensure accuracy, consistency, and completeness of data
    * Optimizes and monitors data pipelines and data processing workflows for performance, scalability, and efficiency, resolving performance bottlenecks
    * Establishes architecture patterns, design standards, and best practices to accelerate delivery and adoption of solutions
    * Assists, educates, and trains users to drive self-service enablement leveraging best practices
    * Collaborates with business subject matter experts, analysts, and offshore team members to develop and deliver solutions in a timely manner
    * Embraces and establishes governance of data and algorithms, quality, standards, and best practices, ensuring data accuracy

    We offer comprehensive health, dental, and vision benefits, along with a 401(k) plan that includes employer matching and profit sharing. Additionally, we offer company-paid life insurance, disability coverage, and paid time off (PTO).
    $80k-110k yearly est. 58d ago
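The accountabilities above call for cleansing, filtering, and validation before data reaches the warehouse. Here is a minimal sketch of such a pre-load check; the column names and the 1% bad-row threshold are invented for illustration, not taken from Charter's systems.

```python
# Illustrative pre-load validation: completeness and consistency checks
# on a batch before it is written to the warehouse. Hypothetical schema.
import pandas as pd

REQUIRED_COLUMNS = {"order_id", "plant", "quantity", "shipped_at"}


def validate_batch(df: pd.DataFrame) -> pd.DataFrame:
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"batch is missing required columns: {sorted(missing)}")

    # Cleansing/filtering: drop exact duplicates and rows lacking a key.
    df = df.drop_duplicates().dropna(subset=["order_id"])

    # Consistency: quantities must be positive; tolerate <1% bad rows.
    bad = df[df["quantity"] <= 0]
    if len(bad) > 0.01 * len(df):
        raise ValueError(f"{len(bad)} rows have non-positive quantity")
    return df[df["quantity"] > 0]


if __name__ == "__main__":
    batch = pd.DataFrame(
        {"order_id": [1, 2, 2], "plant": ["Mequon"] * 3,
         "quantity": [5, 3, 3], "shipped_at": pd.Timestamp("2024-01-01")}
    )
    print(validate_batch(batch))  # duplicate row is dropped, 2 rows pass
```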
  • AWS Data Engineer

    Tata Consulting Services (4.3 company rating)

    Data engineer job in North Chicago, IL

    Must Have Technical/Functional Skills
    * Strong experience in SDLC delivery, including waterfall, hybrid, and Agile methodologies; experience delivering in an agile environment
    * Experience implementing and delivering data solutions and pipelines on the AWS Cloud Platform using Redshift
    * Strong hands-on Python experience. Not expected to code in depth, but to lead a team of developers and provide thought leadership, guidance, and recommendations
    * Apache Airflow experience is a plus
    * A strong understanding of data modeling, data structures, databases, ETL processes, and data warehousing layers
    * An in-depth understanding of large-scale data sets, including both structured and unstructured data
    * Ability to analyze and troubleshoot complex data/SQL issues
    * Knowledge of ETL (Extract, Transform, Load) processes
    * Understanding of big data concepts
    * Knowledge and experience of delivering CI/CD and DevOps capabilities in a data environment

    Roles & Responsibilities
    * Lead a team of developers; perform code reviews, suggest improvements in code, and provide thought leadership
    * Work on proposals and POCs, developing prototypes and reference models to help tailor offerings and value propositions
    * Lead and mentor other team members working on AWS technologies

    Generic Managerial Skills, If Any
    * Experience leading a technical team

    Salary Range: $100,000 - $130,000 a year

    TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing. #LI-SP1
    $100k-130k yearly 46d ago
  • Data Architect

    Nshs

    Data engineer job in Skokie, IL

    Hourly Pay Range: $41.64 - $64.54 - The hourly pay rate offered is determined by a candidate's expertise and years of experience, among other factors.

    Data Architect, Full Time
    Hours: Monday-Friday, 8am-4:30pm; hybrid schedule; on call (rotating)

    What you will do:
    * Lead discussions with stakeholders to understand and document scope and business requirements, and translate these into the technical design supporting the Enterprise Data Warehouse of Endeavor Health's NorthShore entity.
    * Lead the translation of business needs into feasible and acceptable data-centric semantic layer designs.
    * Create conceptual, logical, and physical data models using Kimball, Inmon, and other modern frameworks, applying both 3NF and de-normalized dimensional design philosophies.
    * Drive and document the source-to-target mapping with the ETL developers.
    * Design highly performant and secure structures such as staging areas, integrated data, data marts, cubes, and operational data stores.
    * Suggest, document, and enforce data warehousing best practices, including overall data warehouse architecture relating to the ODS and ETL layers.
    * Drive adoption of cloud data warehouse and lakehouse solutions such as Snowflake, Databricks, BigQuery, or Azure Fabric.
    * Partner with engineering teams to design ETL/ELT pipelines, data marts, and semantic layers.
    * Ensure data governance, security, lineage, and compliance standards are embedded in the architecture.
    * Estimate and communicate work effort to EDW management and stakeholders, and own deliverables from inception through review and sign-off.
    * Review, improve, and enhance existing models to ensure compliance with best practices around master data management and a 3-layer architecture across technology platforms.
    * Act independently under general direction; may also provide technical consulting on complex projects throughout the organization.
    * Suggest improvements to data quality through data quality frameworks and processes.
    * Define Master Data Management solution architecture, setting technical direction and defining component architecture.

    What you will need:
    * Education: 4-year college degree in computer science/data processing or equivalent work experience
    * Experience: 6+ years supporting and developing software applications; 6+ years of experience as a Data Architect or Data Modeler with a data warehouse (PL/SQL or MS SQL database experience is key; DBA experience not required)

    Unique or Preferred Skills:
    * Strong dimensional modeling and star schema design skills
    * Ability to translate business needs into technical design
    * 2+ years of required experience with a data modeling tool (Erwin preferred)
    * Familiarity with cloud-native architectures, APIs, microservices, and event-driven systems is a plus
    * 1+ year of proven hands-on experience with at least one major cloud data platform: Snowflake, Databricks, Google BigQuery, or Azure Synapse/Fabric
    * Experience with SQL, ETL/ELT frameworks, and data integration tools (Informatica, dbt, Azure Data Factory, Talend, etc.)
    * Understanding of data governance, metadata management, MDM, and data security practices
    * Experience with reporting tools such as Cognos/Power BI is a plus
    * Experience with Epic Clarity and Caboodle databases is a plus
    * Familiarity with the healthcare industry is a plus
    * Experience and detailed understanding of iterative system implementation, programming, integration, data conversion, and testing techniques
    * Strong verbal and written communication, presentation, and customer service skills
    * Ability to solve highly complex technical and operational problems
    * Programming skills and the ability to maintain and understand complex applications

    Benefits (for full-time or part-time positions): Incentive pay for select positions. Opportunity for annual increases based on performance. Career pathways to promote professional growth and development. Various medical, dental, pet, and vision options. Tuition reimbursement. Free parking. Wellness program. Savings plan. Health savings account options. Retirement options with company match. Paid time off and holiday pay. Community involvement opportunities.

    Endeavor Health is a fully integrated healthcare delivery system committed to providing access to quality, vibrant, community-connected care, serving an area of more than 4.2 million residents across six northeast Illinois counties. Our more than 25,000 team members and more than 6,000 physicians aim to deliver transformative patient experiences and expert care close to home across more than 300 ambulatory locations and eight acute care hospitals - Edward (Naperville), Elmhurst, Evanston, Glenbrook (Glenview), Highland Park, Northwest Community (Arlington Heights), Skokie, and Swedish (Chicago) - all recognized as Magnet hospitals for nursing excellence. For more information, visit *********************** When you work for Endeavor Health, you will be part of an organization that encourages its employees to achieve career goals and maximize their professional potential. Please explore our website (***********************) to better understand how Endeavor Health delivers on its mission to "help everyone in our communities be their best". Endeavor Health is committed to working with and providing reasonable accommodation to individuals with disabilities. Please refer to the main career page for more information. Diversity, equity and inclusion is at the core of who we are; being there for our patients and each other with compassion, respect and empathy. We believe that our strength resides in our differences and in connecting our best to provide community-connected healthcare for all. EOE: Race/Color/Sex/Sexual Orientation/Gender Identity/Religion/National Origin/Disability/Vets, VEVRRA Federal Contractor.
    $41.6-64.5 hourly 60d+ ago
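Given the posting's emphasis on dimensional modeling and star schema design, here is a minimal star-schema sketch, expressed in Python with SQLite for portability. The table layout and healthcare-flavored column names are invented for illustration, not drawn from Endeavor Health's warehouse.

```python
# Minimal star-schema sketch: one fact table keyed to two dimensions,
# then a typical dimensional query. Schema is entirely hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE dim_patient (
        patient_key INTEGER PRIMARY KEY,
        age_band    TEXT
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        year     INTEGER,
        month    INTEGER
    );
    -- Fact rows hold foreign keys plus additive measures only.
    CREATE TABLE fact_encounter (
        patient_key INTEGER REFERENCES dim_patient(patient_key),
        date_key    INTEGER REFERENCES dim_date(date_key),
        charge_usd  REAL
    );
    INSERT INTO dim_patient VALUES (1, '40-49'), (2, '60-69');
    INSERT INTO dim_date VALUES (20240115, 2024, 1), (20240220, 2024, 2);
    INSERT INTO fact_encounter VALUES (1, 20240115, 250.0),
                                      (2, 20240115, 90.0),
                                      (2, 20240220, 400.0);
    """
)
# Slice the additive measure by dimension attributes -- the query shape
# star schemas are designed to make fast and simple.
for row in conn.execute(
    """
    SELECT d.year, d.month, p.age_band, SUM(f.charge_usd)
    FROM fact_encounter f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_patient p ON p.patient_key = f.patient_key
    GROUP BY d.year, d.month, p.age_band
    """
):
    print(row)
```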
  • Data-Senior Data Engineer-CL

    Endava (4.2 company rating)

    Data engineer job in Deerfield, IL

    Technology is our how. And people are our why. For over two decades, we have been harnessing technology to drive meaningful change. By combining world-class engineering, industry expertise and a people-centric mindset, we consult and partner with leading brands from various industries to create dynamic platforms and intelligent digital experiences that drive innovation and transform businesses. From prototype to real-world impact - be part of a global shift by doing work that matters.

    Job Description
    Our data team has expertise across engineering, analysis, architecture, modeling, machine learning, artificial intelligence, and data science. This discipline is responsible for transforming raw data into actionable insights, building robust data infrastructures, and enabling data-driven decision-making and innovation through advanced analytics and predictive modeling.

    Responsibilities:
    * Work closely with the Data Analyst/Data Scientist to understand evolving needs and define the data processing flow or interactive reports.
    * Discuss with stakeholders from other teams to better understand how data flows are used within the existing environment.
    * Propose solutions for the cloud-based architecture and deployment flow.
    * Design and build processes, data transformations, and metadata to meet business requirements and platform needs.
    * Design and propose solutions for the relational and dimensional model based on platform capabilities.
    * Develop, maintain, test, and evaluate big data solutions.
    * Focus on production status and data quality of the data environment.
    * Pioneer initiatives around data quality, integrity, and security.

    Qualifications
    Required:
    * 5+ years of experience in Data Engineering.
    * Proficiency in Apache Spark and Python.
    * Some experience leading IT projects and managing stakeholders.
    * Experience implementing ETL/ELT processes and data pipelines.
    * Experience with Snowflake.
    * Strong SQL scripting experience.
    * Background and experience with cloud data technologies and tools.
    * Familiarity with data tools and technologies such as Spark, Hadoop, Apache Beam, Dataproc, or similar; BigQuery, Redshift, or other data warehouse tools; real-time pipelines with Kinesis or Kafka; batch processing; serverless processing.
    * Strong analytic skills for working with both structured and unstructured data.
    * Must be able to work onsite 2-3 days a week.

    Additional Information
    Discover some of the global benefits that empower our people to become the best version of themselves:
    * Finance: Competitive salary package, share plan, company performance bonuses, value-based recognition awards, referral bonus;
    * Career Development: Career coaching, global career opportunities, non-linear career paths, internal development programmes for management and technical leadership;
    * Learning Opportunities: Complex projects, rotations, internal tech communities, training, certifications, coaching, online learning platform subscriptions, pass-it-on sessions, workshops, conferences;
    * Work-Life Balance: Hybrid work and flexible working hours, employee assistance programme;
    * Health: Global internal wellbeing programme, access to wellbeing apps;
    * Community: Global internal tech communities, hobby clubs and interest groups, inclusion and diversity programmes, events and celebrations.

    Additional Employee Requirements:
    * Participation in both internal and external meetings via video calls, as necessary.
    * Ability to go into corporate or client offices to work onsite, as necessary.
    * Prolonged periods of remaining stationary at a desk and working on a computer, as necessary.
    * Ability to bend, kneel, crouch, and reach overhead, as necessary.
    * Hand-eye coordination necessary to operate computers and various pieces of office equipment, as necessary.
    * Vision abilities including close vision, toleration of fluorescent lighting, and adjusting focus, as necessary.
    * For positions that require business travel and/or event attendance, the ability to lift 25 lbs, as necessary.
    * For positions that require business travel and/or event attendance, a valid driver's license and acceptable driving record are required, as driving is an essential job function.

    *If requested, reasonable accommodations will be made to enable employees requiring accommodations to perform the essential functions of their jobs, absent undue hardship.

    USA Benefits (full-time roles only; does not apply to contractor positions):
    * Robust healthcare and benefits including medical, dental, vision, disability coverage, and various other benefit options
    * Flexible Spending Accounts (medical, transit, and dependent care)
    * Employer-paid life insurance and AD&D coverage
    * Health Savings Account paired with our low-cost High Deductible Medical Plan
    * 401(k) Safe Harbor retirement plan with employer match and immediate vesting

    At Endava, we're committed to creating an open, inclusive, and respectful environment where everyone feels safe, valued, and empowered to be their best. We welcome applications from people of all backgrounds, experiences, and perspectives, because we know that inclusive teams help us deliver smarter, more innovative solutions for our customers. Hiring decisions are based on merit, skills, qualifications, and potential. If you need adjustments or support during the recruitment process, please let us know.
    $79k-103k yearly est. 60d+ ago
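The posting above pairs Apache Spark with Python for batch processing. Here is a minimal PySpark sketch of a batch rollup job; the input path, schema, and output location are hypothetical placeholders, not details from the posting.

```python
# Illustrative PySpark batch job: read raw events, apply a basic quality
# filter, aggregate daily, and write a warehouse-ready table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

events = spark.read.json("s3://example-raw/events/")  # hypothetical path

daily = (
    events
    .where(F.col("event_type").isNotNull())           # basic quality filter
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("events"),
         F.countDistinct("user_id").alias("users"))
)

# Partitioned Parquet output, ready to expose as an external table in a
# warehouse such as Snowflake, BigQuery, or Redshift Spectrum.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated/daily_event_rollup/"
)
```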
  • Associate Data Engineer

    Baker Tilly Virchow Krause, LLP (4.6 company rating)

    Data engineer job in Milwaukee, WI

    Baker Tilly is a leading advisory, tax and assurance firm, providing clients with a genuine coast-to-coast and global advantage in major regions of the U.S. and in many of the world's leading financial centers - New York, London, San Francisco, Los Angeles, Chicago and Boston. Baker Tilly Advisory Group, LP and Baker Tilly US, LLP (Baker Tilly) provide professional services through an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable laws, regulations and professional standards. Baker Tilly US, LLP is a licensed independent CPA firm that provides attest services to its clients. Baker Tilly Advisory Group, LP and its subsidiary entities provide tax and business advisory services to their clients. Baker Tilly Advisory Group, LP and its subsidiary entities are not licensed CPA firms. Baker Tilly Advisory Group, LP and Baker Tilly US, LLP, trading as Baker Tilly, are independent members of Baker Tilly International, a worldwide network of independent accounting and business advisory firms in 141 territories, with 43,000 professionals and a combined worldwide revenue of $5.2 billion. Visit bakertilly.com or join the conversation on LinkedIn, Facebook and Instagram.

    Please discuss the work location status with your Baker Tilly talent acquisition professional to understand the requirements for an opportunity you are exploring.

    Baker Tilly is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, gender identity, sexual orientation, or any other legally protected basis, in accordance with applicable federal, state or local law.

    Any unsolicited resumes submitted through our website or to Baker Tilly Advisory Group, LP, employee e-mail accounts are considered property of Baker Tilly Advisory Group, LP, and are not subject to payment of agency fees. In order to be an authorized recruitment agency ("search firm") for Baker Tilly Advisory Group, LP, there must be a formal written agreement in place and the agency must be invited, by Baker Tilly's Talent Attraction team, to submit candidates for review via our applicant tracking system.

    Job Description: Associate Data Engineer

    As a Senior Consultant - Associate Data Engineer, you will design, build, and optimize modern data solutions for our mid-market and enterprise clients. Working primarily inside the Microsoft stack (Azure, Synapse, and Microsoft Fabric), you will transform raw data into trusted, analytics-ready assets that power dashboards, advanced analytics, and AI use cases. You'll collaborate with solution architects, analysts, and client stakeholders while sharpening both your technical depth and consulting skills.

    Key Responsibilities:
    * Data Engineering: Develop scalable, well-documented ETL/ELT pipelines using T-SQL, Python, Azure Data Factory/Fabric Data Pipelines, and Databricks; implement best-practice patterns for performance, security, and cost control.
    * Modeling & Storage: Design relational and lakehouse models; create Fabric OneLake shortcuts, medallion-style layers, and dimensional/semantic models for Power BI.
    * Quality & Governance: Build automated data-quality checks, lineage, and observability metrics; contribute to CI/CD workflows in Azure DevOps or GitHub.
    * Client Delivery: Gather requirements, demo iterative deliverables, document technical designs, and translate complex concepts for non-technical audiences.
    * Continuous Improvement: Research new capabilities, share findings in internal communities of practice, and contribute to reusable accelerators.
    * Collaboration: Work with clients and internal stakeholders to design and implement scalable data engineering solutions.

    Qualifications:
    * Education: Bachelor's in Computer Science, Information Systems, Engineering, or a related field (or equivalent experience)
    * Experience: 2-3 years delivering production data solutions, preferably in a consulting or client-facing role
    * Technical Skills: Strong T-SQL for data transformation and performance tuning; Python for data wrangling, orchestration, or notebook-based development; hands-on ETL/ELT with at least one Microsoft service (ADF, Synapse Pipelines, Fabric Data Pipelines)
    * Project experience with Microsoft Fabric (OneLake, Lakehouses, Data Pipelines, Notebooks, Warehouse, Power BI DirectLake) preferred
    * Familiarity with Databricks, Delta Lake, or comparable lakehouse technologies preferred
    * Exposure to DevOps (YAML pipelines, Terraform/Bicep) and test automation frameworks preferred
    * Experience integrating SaaS/ERP sources (e.g., Dynamics 365, Workday, Costpoint) preferred
    $68k-90k yearly est. 10d ago

Learn more about data engineer jobs

How much does a data engineer earn in Racine, WI?

The average data engineer in Racine, WI earns between $67,000 and $116,000 annually. For comparison, the national salary range for data engineers is $80,000 to $149,000.

Average data engineer salary in Racine, WI

$88,000
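A note on how the $88,000 headline figure relates to the $67,000-$116,000 range above: the page does not state its methodology, and the simple midpoint of the range would be about $91,500, so this is only a guess, but the figure matches the geometric mean of the range endpoints almost exactly:

\[ \sqrt{67{,}000 \times 116{,}000} \approx 88{,}160 \approx \$88{,}000 \]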