
Data engineer jobs in Skokie, IL

1,555 jobs
  • Data Scientist

The Aspen Group (TAG)

    Data engineer job in Chicago, IL

    The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S. It has supported over 20,000 healthcare professionals and team members, with close to 1,500 health and wellness offices across 48 states in four distinct categories: dental care, urgent care, medical aesthetics, and animal health. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Chapter Aesthetic Studio, and Lovet Pet Health Care. Each brand has access to a deep community of experts, tools, and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.

    As a reflection of our current needs and planned growth, we are pleased to offer a new opportunity to join our dedicated team as a Data Scientist, playing a key role in shaping how patients access care across this growing network of brands. In this role, you won't just analyze data: you'll lead end-to-end initiatives that shape how we optimize revenue across offices, days, and even hours. From designing strategies, deploying solutions, and building performance dashboards to partnering with data science on automation and collaborating with teams across Finance, Marketing, Product, Technology, and Operations, you'll have a direct hand in driving measurable results. This role is ideal for someone who excels at turning data into decisions, building repeatable processes, and uncovering insights that drive measurable improvements in revenue performance and patient access. You will lead initiatives across forecasting, scheduling optimization, demand modeling, capacity planning, and revenue strategy, while also shaping how analytics is delivered and scaled across the organization. If you're a builder who loves solving complex problems with data, operational logic, and automation, this opportunity is for you. (A sketch of the kind of demand-forecasting work described here follows this listing.)

    Essential Responsibilities:

    Revenue Strategy & Optimization
    - Lead strategy development for optimizing revenue performance at the office, day, and hour level by leveraging forecasting, scheduling, and demand modeling, while balancing patient access and operational efficiency
    - Build analytical frameworks to support pricing, demand forecasting, scheduling, and access optimization
    - Identify revenue opportunities through data-driven analysis of booking trends, cancellations, no-shows, and utilization
    - Monitor and update demand and schedule availability through analysis of historical and future booking trends, the pricing environment, industry capacity trends, the competitive landscape, and other factors

    Analytics, Insights & Experimentation
    - Develop and maintain forecasting, demand models, dashboards, and scenario analyses
    - Run experiments and structured tests to evaluate new operational and scheduling strategies
    - Create clear, actionable insights that influence senior leaders and cross-functional partners

    Process Building & Automation
    - Map existing manual workflows and identify opportunities to automate recurring analyses or reporting

    Cross-Functional Leadership
    - Work closely with Operations, Finance, Product, Marketing, and Clinical teams to align strategies and execution
    - Help shape and scale the function by building new playbooks, reports, and best practices
    - Act as a subject matter expert in forecasting, demand modeling, and capacity optimization

    Qualifications (Skills-Based): We welcome candidates with diverse academic and career pathways. You may have gained your skills through industry experience, coursework, certificates, or hands-on practice.

    Experience/Education:
    - 5+ years of experience in Revenue Management, Pricing, Operations Research, or Supply/Demand Optimization (airline, travel, healthcare, or multi-location service industries preferred)
    - Bachelor's degree in Business, Finance, Economics, Analytics, or Statistics required; Master's degree a plus
    - Experience working alongside data science/engineering teams to automate and scale analytics processes
    - Exceptional analytical, problem-solving, and communication skills, with the ability to influence senior stakeholders
    - Detail-oriented, self-starter mindset with a passion for driving results
    - Strong analytical and quantitative skills, with experience in forecasting, modeling, or optimization
    - Strong technical proficiency in SQL and a modern BI platform (e.g., Tableau, Looker)
    - Familiarity with scripting (e.g., Python or R) or automation tools (e.g., dbt, Airflow) is not required, but helpful

    Additional Job Description: Base Pay Range: $115,000 - $130,000, plus 10% annual bonus (actual pay may vary based on experience, performance, and qualifications). A generous benefits package that includes paid time off, health, dental, vision, and a 401(k) savings plan with match. If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
    $115k-130k yearly 2d ago
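The TAG posting above centers on daily demand forecasting for appointment-level revenue optimization. As an illustration only (not TAG's actual stack), here is a minimal Python sketch; the column names, weekly seasonality, and Holt-Winters model choice are all assumptions.

```python
# Minimal demand-forecasting sketch (illustrative only: column names,
# seasonality period, and model choice are assumptions, not TAG's stack).
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def forecast_daily_demand(bookings: pd.DataFrame, horizon_days: int = 14) -> pd.Series:
    """Forecast daily appointment demand for one office.

    `bookings` is assumed to have a datetime 'date' column and a boolean
    'completed' flag; cancellations and no-shows are excluded so the model
    sees realized demand only.
    """
    realized = (
        bookings.loc[bookings["completed"]]
        .groupby(pd.Grouper(key="date", freq="D"))
        .size()
        .asfreq("D", fill_value=0)
    )
    # Weekly seasonality (period = 7) is a sensible default for scheduling data.
    model = ExponentialSmoothing(
        realized, trend="add", seasonal="add", seasonal_periods=7
    ).fit()
    return model.forecast(horizon_days)
```

A model like this would feed the office/day/hour availability decisions the posting describes; in practice you would backtest against held-out weeks before trusting the forecasts.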
  • Data Architect

GeoWealth

    Data engineer job in Oak Brook, IL

    GeoWealth is a Chicago-based fintech firm that offers an award-winning digital advisory platform, including Turnkey Asset Management Platform ("TAMP") capabilities. We deliver a comprehensive and fully integrated wealth management technology platform to professionals in the financial services industry.

    OPPORTUNITY: We're looking for a Data Architect to join our Engineering Team. In this role, you will oversee the overall data architecture, helping us deliver our best-in-class solutions to our customers. This role will be key in organizing, designing, and leading our team through well-designed data architecture. If you love architecting complex systems, delivering customer-focused software, designing best-in-class systems, and leading data architecture design, this role is for you.

    RESPONSIBILITIES:
    - Own data architecture and oversee data implementation
    - Set coding/implementation standards
    - Lead our data warehouse design
    - Deliver performant, maintainable, and quality software in collaboration with our teams
    - Improve our database design to reduce replication and increase performance
    - Partner with other architects and engineers to produce better-designed systems

    SKILLS, KNOWLEDGE, AND EXPERIENCE:
    - 5+ years of experience as a Data Architect or in an equivalent role
    - Bachelor's degree in computer science or an equivalent degree
    - Hands-on experience with Oracle
    - Experience designing and implementing a data warehouse
    - Experience with the following is preferred but not required: designing and building monolithic and distributed systems, Postgres, Logi Symphony, PowerBI, Java, and JIRA/Confluence

    COMPANY CULTURE & PERKS - HIGHLIGHTS:

    Investing in Your Growth 🌱
    - Casual work environment with fun, hard-working, and open-minded coworkers
    - Competitive salary with opportunity for a performance-based annual bonus
    - Opportunities to up-skill, explore new responsibilities, and network across departments
    - Defined and undefined career pathways allowing you to grow your own way

    Work/Life Balance 🗓️
    - Flexible PTO and work schedule to ensure our team balances work and life
    - Hybrid work schedule
    - Maternity and paternity leave

    Taking Care of Your Future ♥️
    - Medical, dental, vision, and disability insurance
    - Free access to Spring Health, a comprehensive mental health solution
    - 401(k) with company match and a broad selection of investments
    - Voluntary insurance: short-term disability, long-term disability, and life insurance
    - FSA and transit benefits for employees who contribute pre-tax dollars

    Other Fun Stuff ⭐
    - Free on-site gym and parking
    - Weekly catered lunches in the office, plus monthly happy hours
    - Stocked kitchen with snacks and drinks

    GeoWealth was recognized as a "Best Place to Work" by Purpose Jobs in 2025, 2024, and 2022, and by Built In in 2024, 2023, and 2022.

    SALARY RANGE: Starting at $170,000-$220,000 + benefits + opportunity for performance bonus. This is an estimated range based on the circumstances at the time of posting; however, it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
    $170k-220k yearly 1d ago
  • Data Engineer

    Scaylor

    Data engineer job in Chicago, IL

    Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used. Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust. We're a small team of four: one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.

    The Role
    You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients. You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.

    What You'll Work On (see the pipeline sketch after this listing)
    - Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
    - Design schemas and transformation logic for structured and semi-structured data
    - Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
    - Help connect backend services to our frontend dashboards (React, Node.js, or similar)
    - Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
    - Collaborate with clients to understand their data problems and design workflows that fix them

    You'd Be Great Here If You
    - Have 3-6 years of experience in data engineering, backend, or full-stack roles
    - Write clean, maintainable code in Python + JS
    - Understand ETL, data normalization, and schema mapping
    - Have experience with SQL and working with legacy databases or systems
    - Are comfortable managing cloud services and debugging data pipelines
    - Enjoy solving messy data problems and care about building things that last

    Nice to Have
    - Familiarity with GCP or SQL databases
    - Understanding of enterprise data flows (ERP, CRM, or financial systems)
    - Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
    - Interest in lightweight ML or LLM-assisted data transformation

    Why Join Scaylor
    - Be one of the first five team members shaping the product and the company
    - Work directly with the founders and help define Scaylor's technical direction
    - Build infrastructure that solves real problems for real companies
    - Earn meaningful equity and have a say in how the company grows

    Compensation
    - $130k - $150k, with a raise based on set revenue triggers
    - 0.4% equity
    - Relocation to Chicago, IL required
    $130k-150k yearly 1d ago
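The Scaylor listing describes normalizing spreadsheets that "never quite line up." Here is a minimal pandas sketch of that kind of ingestion step; the column mapping, file layout, and target schema are assumptions for illustration, not Scaylor's code.

```python
# Illustrative Excel-normalization sketch; COLUMN_MAP and the canonical
# schema ("account_id", "revenue_usd") are hypothetical.
import pandas as pd

# Hypothetical mapping from the messy headers clients use to one canonical schema.
COLUMN_MAP = {
    "Acct #": "account_id",
    "Account Number": "account_id",
    "Rev": "revenue_usd",
    "Revenue ($)": "revenue_usd",
}

def normalize_workbook(path: str) -> pd.DataFrame:
    """Load every sheet of a client workbook into one standardized frame."""
    sheets = pd.read_excel(path, sheet_name=None)  # dict: sheet name -> DataFrame
    frames = []
    for name, df in sheets.items():
        df = df.rename(columns=COLUMN_MAP)
        df["source_sheet"] = name  # keep lineage back to the original sheet
        frames.append(df)
    out = pd.concat(frames, ignore_index=True)
    # Legacy spreadsheets mix strings and numbers; coerce instead of crashing.
    if "revenue_usd" in out:
        out["revenue_usd"] = pd.to_numeric(out["revenue_usd"], errors="coerce")
    return out.dropna(subset=["account_id"]) if "account_id" in out else out
```

The lineage column is the design point worth noting: once hundreds of spreadsheets are merged, every row needs to be traceable back to its source.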
  • Data Scientist

    Alined Consulting Group

    Data engineer job in Chicago, IL

    Minimum Qualifications:
    - 5+ years of experience leading data science projects that have a direct impact on a company's objectives, or a PhD in a quantitative field such as Statistics, Data Science, or Computer Science with 3+ years of experience
    - 5+ years of experience utilizing data mining techniques and ML models to assist business decision making
    - Deep expertise in statistical methods and machine learning concepts, with the ability to mentor team members on methodologies, model tuning, and evaluation techniques
    - 2+ years of hands-on experience with deep learning frameworks, LLMs, GenAI tools, and NLP techniques
    - 5+ years of experience using Python to process large, diverse datasets and to develop and deploy predictive models in cloud-based environments and other computing platforms
    - 5+ years of experience in SQL and cloud-hosted data platforms (Google Cloud Platform, AWS, etc.)
    - Demonstrated ability to assist business decision-making through data mining and machine learning
    - Strong communication skills to collaborate effectively with business stakeholders; must be able to interact cross-functionally and drive both business and technical discussions
    - Ability to translate complex business problems into actionable project plans and solve them
    $70k-97k yearly est. 2d ago
  • Senior Data Engineer

Programmers.Io

    Data engineer job in Chicago, IL

    Note: This position requires visa-independent candidates (OPT, CPT, and H1B holders cannot be considered at this time).

    Responsibilities:
    - Design, develop, and maintain scalable ETL pipelines using AWS Glue
    - Collaborate with data engineers and analysts to understand data requirements
    - Build and manage data extraction, transformation, and loading processes
    - Optimize and troubleshoot existing Glue jobs and workflows
    - Ensure data quality, integrity, and security throughout the ETL process
    - Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
    - Maintain documentation of data workflows and processes
    - Stay updated with the latest AWS tools and best practices

    Required Skills:
    - Strong hands-on experience with AWS Glue, PySpark, and Python (see the Glue job sketch after this listing)
    - Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
    - Experience with data warehousing concepts and tools
    - Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
    - Solid understanding of data modeling, data integration, and data management
    - Exposure to AWS Batch, Step Functions, and Data Catalogs
    $81k-112k yearly est. 4d ago
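For context on the AWS Glue work this posting describes, here is the standard skeleton of a Glue PySpark job; the database, table, bucket, and partition names are hypothetical.

```python
# Skeleton of an AWS Glue PySpark job; catalog names, bucket paths, and the
# cleanup logic are placeholders chosen for illustration.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are assumptions).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Drop malformed rows and duplicates via the Spark DataFrame API.
df = source.toDF().dropna(subset=["order_id"]).dropDuplicates(["order_id"])

# Write curated Parquet back to S3, partitioned for Redshift Spectrum/Athena.
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```

The `job.init`/`job.commit` pair is what enables Glue job bookmarks, which is how incremental runs avoid reprocessing already-seen data.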
  • Data Scientist

Talent

    Data engineer job in Chicago, IL

    This role supports a financial services organization by applying advanced data science and machine learning techniques to solve complex business problems using large-scale datasets. The position focuses on end-to-end feature engineering, model development, and writing production-quality code in a fast-paced, collaborative environment. The individual partners closely with product and engineering teams to uncover trends, improve algorithm performance, and drive data-informed decisions.

    Key Responsibilities:
    - Independently analyze and aggregate large, complex datasets to identify anomalies that affect model and algorithm performance
    - Own the full lifecycle of feature engineering, including ideation, development, validation, and selection (a minimal pipeline sketch follows this listing)
    - Develop and maintain production-quality code in a fast-paced, agile environment
    - Solve challenging analytical problems using extremely large (terabyte-scale) datasets
    - Evaluate and apply a range of machine learning techniques to determine the most effective approach for business use cases
    - Collaborate closely with product and engineering partners to identify trends, opportunities, and data-driven solutions
    - Communicate insights, results, and model performance clearly through visualizations and explanations tailored to non-technical stakeholders
    - Adhere to established standards and practices to ensure the security, integrity, and confidentiality of systems and data

    Minimum Qualifications:
    - Bachelor's degree in Mathematics, Statistics, Computer Science, Operations Research, or a related field
    - At least 4 years of professional experience in data science, analytics, engineering, or a closely related discipline
    - Hands-on experience building data science pipelines and workflows using Python, R, or similar programming languages
    - Strong SQL skills, including query development and performance tuning
    - Experience working with large-scale, high-volume (terabyte-scale) datasets
    - Practical experience applying a variety of machine learning methods and understanding the parameters that impact model performance
    - Familiarity with common machine learning libraries (e.g., scikit-learn, Spark ML, or similar)
    - Experience with data visualization tools and techniques
    - Ability to write clean, maintainable, and production-ready code
    - Strong interest in rapid prototyping, experimentation, and proof-of-concept development
    - Proven ability to communicate complex analytical findings to non-technical audiences
    - Ability to meet standard employment screening requirements
    $71k-100k yearly est. 1d ago
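The posting above emphasizes owning feature engineering end to end. A common way to make that reproducible in Python is a scikit-learn pipeline; this sketch is illustrative only, and the features, sample data, and model choice are invented.

```python
# Minimal sketch of a reproducible feature-engineering + modeling pipeline;
# the feature names, data, and model are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

NUMERIC = ["txn_amount", "account_age_days"]    # hypothetical features
CATEGORICAL = ["merchant_category", "channel"]  # hypothetical features

pipeline = Pipeline([
    # Encapsulating feature logic here keeps training and production scoring
    # consistent, which is the point of owning the full feature lifecycle.
    ("features", ColumnTransformer([
        ("num", StandardScaler(), NUMERIC),
        ("cat", OneHotEncoder(handle_unknown="ignore"), CATEGORICAL),
    ])),
    ("model", GradientBoostingClassifier()),
])

# Tiny fabricated sample just to show the pipeline runs end to end.
X = pd.DataFrame({
    "txn_amount": [12.0, 250.0, 33.5, 8.0],
    "account_age_days": [40, 900, 365, 10],
    "merchant_category": ["grocery", "travel", "grocery", "gas"],
    "channel": ["web", "pos", "pos", "web"],
})
y = [0, 1, 0, 0]
pipeline.fit(X, y)
```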
  • Data Engineer

    Acuity Analytics

    Data engineer job in Chicago, IL

    The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation. Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.

    Responsibilities:
    - Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability (a minimal load sketch follows this listing)
    - Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data
    - Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs)
    - Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications
    - Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs
    - Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets
    - Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls
    - Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform

    Requirements:
    - 10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments
    - Expert-level SQL, including complex queries, schema design, and performance optimization
    - Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing
    - Strong Python capabilities for ETL/ELT development, data processing, and workflow automation
    - Experience integrating APIs and working with structured, semi-structured, and unstructured datasets
    - Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus
    - Experience with Domino or willingness to work within a Domino-based model environment
    - Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred
    - Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams
    $75k-100k yearly est. 2d ago
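To make the Snowflake ELT work above concrete, here is a small load step using the snowflake-connector-python package. The account, stage, table, and column names are placeholders; the stage-and-merge pattern shown is one common way to keep retried pipeline runs idempotent.

```python
# Illustrative Snowflake ELT step: COPY into a temp stage table, then MERGE.
# All identifiers and credentials below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder
    user="example_user",         # placeholder
    password="...",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="RESEARCH",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Staging first keeps the load idempotent if the run is retried.
    cur.execute("CREATE TEMPORARY TABLE prices_stage LIKE prices;")
    cur.execute(
        "COPY INTO prices_stage FROM @prices_s3_stage "
        "FILE_FORMAT = (TYPE = PARQUET) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;"
    )
    cur.execute("""
        MERGE INTO prices t USING prices_stage s
          ON t.security_id = s.security_id AND t.price_date = s.price_date
        WHEN MATCHED THEN UPDATE SET t.close_px = s.close_px
        WHEN NOT MATCHED THEN INSERT (security_id, price_date, close_px)
          VALUES (s.security_id, s.price_date, s.close_px);
    """)
finally:
    conn.close()
```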
  • Data Engineer

    Binarybees Business Solutions LLC

    Data engineer job in Itasca, IL

    Primary Location: Itasca, IL (hybrid in Chicago's northwest suburbs: 2 days in-office, 3 days WFH)
    Type: Direct hire / permanent role
    Must be a U.S. citizen or green card holder.

    The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.

    What You Bring to the Role (Ideal Experience):
    - Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
    - 5+ years of experience as a Data Engineer
    - 3+ years of experience with the following:
      - Building and supporting data lakehouse architectures using Delta Lake and change data feeds (a minimal sketch follows this listing)
      - Working with PySpark and Python, with strong Object-Oriented Programming (OOP) experience to extend existing frameworks
      - Designing data warehouse table architecture such as star schema or the Kimball method
      - Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code
      - Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets
    - Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments

    Nice to Haves:
    - Hands-on experience implementing data solutions using Microsoft Fabric
    - Experience with machine learning/ML and data science tools
    - Knowledge of data governance and security best practices
    - Experience in a larger IT environment with 3,000+ users and multiple domains
    - Current industry certifications from Microsoft cloud/data platforms or equivalent; one or more of the following is preferred:
      - Microsoft Certified: Fabric Data Engineer Associate
      - Microsoft Certified: Azure Data Scientist Associate
      - Microsoft Certified: Azure Data Fundamentals
      - Google Professional Data Engineer
      - Certified Data Management Professional (CDMP)
      - IBM Certified Data Architect - Big Data

    What You'll Do (Skills Used in this Position):
    - Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data
    - Extend and enhance existing OOP-based frameworks developed in Python and PySpark
    - Partner with data scientists and analysts to define requirements and design robust data analytics solutions
    - Ensure data quality and integrity through data cleansing, validation, and automated testing procedures
    - Develop and maintain technical documentation, including requirements, design specifications, and test plans
    - Implement and manage data integrations from multiple internal and external sources
    - Optimize data workflows to improve performance and reliability and to reduce cloud consumption
    - Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery
    - Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric
    - Provide technical leadership and coordination for global development and support teams
    - Participate in creating a safe and healthy workplace by adhering to organizational safety protocols
    - Support additional projects and initiatives as assigned by management
    $75k-100k yearly est. 1d ago
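The Delta Lake change data feed mentioned above is worth a concrete look. This PySpark sketch reads a table's change feed and promotes only the latest image of each key; table names, the key column, and the starting version are assumptions, and the source table must have been created with `delta.enableChangeDataFeed = true`.

```python
# Illustrative Delta change-data-feed read; identifiers are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("cdf-demo").getOrCreate()

changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 1)   # assumed: last-processed version + 1
    .table("bronze.customers")      # assumed table with CDF enabled
)

# Delta adds _change_type and _commit_version to every change-feed read;
# keep only the newest post-image per key when promoting bronze -> silver.
w = Window.partitionBy("customer_id").orderBy(F.col("_commit_version").desc())
latest = (
    changes.filter(F.col("_change_type").isin("insert", "update_postimage"))
    .withColumn("rn", F.row_number().over(w))
    .filter("rn = 1")
    .drop("rn")
)
latest.write.format("delta").mode("append").saveAsTable("silver.customer_updates")
```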
  • Senior Data Architect

Hub Group

    Data engineer job in Oak Brook, IL

    We are seeking a highly skilled and strategic Senior Data Solution Architect to join our IT Enterprise Data Warehouse team. This role is responsible for designing and implementing scalable, secure, and high-performing data solutions that bridge business needs with technical execution, including solutions for provisioning data to our cloud data platform using ingestion, transformation, and semantic layer techniques. Additionally, this position provides technical thought leadership and guidance to ensure that data platforms and pipelines effectively support ODS, analytics, reporting, and AI initiatives across the organization.

    Key Responsibilities:

    Architecture & Design
    - Design end-to-end data architecture solutions, including operational data stores, data warehouses, and real-time data pipelines (a streaming sketch follows this listing)
    - Define standards and best practices for data modeling, integration, and governance
    - Evaluate and recommend tools, platforms, and frameworks for data management and analytics

    Collaboration & Leadership
    - Partner with business stakeholders, data engineers, data analysts, and other IT teams to translate business requirements into technical solutions
    - Lead architecture reviews and provide technical guidance to development teams
    - Advocate for data quality, security, and compliance across all data initiatives

    Implementation & Optimization
    - Oversee the implementation of data solutions, ensuring scalability, performance, and reliability
    - Optimize data workflows and storage strategies for cost and performance efficiency
    - Monitor and troubleshoot data systems, ensuring high availability and integrity

    Required Qualifications:
    - Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field
    - 7+ years of experience in data architecture, data engineering, or related roles
    - Strong expertise in cloud platforms (e.g., Azure, AWS, GCP) and modern data stack tools (e.g., Snowflake, Databricks)
    - Proficiency in SQL, Python, and data modeling techniques (e.g., Data Vault 2.0)
    - Experience with ETL/ELT tools, APIs, and real-time streaming technologies (e.g., dbt, Coalesce, SSIS, DataStage, Kafka, Spark)
    - Familiarity with data governance, security, and compliance frameworks

    Preferred Qualifications:
    - Certifications in cloud architecture or data engineering (e.g., SnowPro Advanced: Architect)
    - Strong communication and stakeholder management skills

    Why Join Us?
    - Work on cutting-edge data platforms and technologies
    - Collaborate with cross-functional teams to drive data-driven decision-making
    - Be part of a culture that values innovation, continuous learning, and impact

    ** This is a full-time, W2 position with Hub Group. We are NOT able to provide sponsorship at this time. **

    Salary: $135,000 - $175,000/year base salary + bonus eligibility. This is an estimated range based on the circumstances at the time of posting; however, it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.

    Benefits: We offer a comprehensive benefits plan including medical, dental, vision, a Flexible Spending Account (FSA), an Employee Assistance Program (EAP), life & AD&D insurance, disability, paid time off, and paid holidays.

    BEWARE OF FRAUD! Hub Group has become aware of online recruiting-related scams in which individuals who are not affiliated with or authorized by Hub Group are using Hub Group's name in fraudulent emails, job postings, or social media messages. In light of these scams, please bear the following in mind: Hub Group will never solicit money or credit card information in connection with a Hub Group job application, and Hub Group does not communicate with candidates via online chatrooms such as Signal or Discord or from email accounts such as Gmail or Hotmail. Hub Group job postings are posted on our career site: ********************************

    About Us: Hub Group is the premier, customer-centric supply chain company offering comprehensive transportation and logistics management solutions. Keeping our customers' needs in focus, Hub Group designs, continually optimizes, and applies industry-leading technology to our customers' supply chains for better service, greater efficiency, and total visibility. As an award-winning, publicly traded company (NASDAQ: HUBG) with $4 billion in revenue, our 6,000 employees and drivers across the globe are always in pursuit of "The Way Ahead": a commitment to service, integrity, and innovation. We believe the way you do something is just as important as what you do. For more information, visit ****************
    $135k-175k yearly 3d ago
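The posting lists Kafka and Spark among its real-time streaming technologies. As a generic illustration (broker, topic, schema, and paths are invented, and the spark-sql-kafka connector package must be on the classpath), a Structured Streaming pipeline of the kind it implies looks like this:

```python
# Illustrative Spark Structured Streaming job reading Kafka; all names below
# are placeholders, not Hub Group's systems.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("shipment-events").getOrCreate()

schema = StructType([
    StructField("shipment_id", StringType()),
    StructField("status", StringType()),
    StructField("weight_lbs", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "shipment-events")            # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-ods/shipments/")
    .option("checkpointLocation", "s3://example-ods/_chk/shipments/")
    .start()
)
# query.awaitTermination() would block here in a real job.
```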
  • Sr. Data Engineer - PERM - MUST BE LOCAL

    Resource 1, Inc.

    Data engineer job in Naperville, IL

    Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. The candidate must be local to Illinois because a future hybrid onsite schedule in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities, and a profit-sharing bonus. This position is focused on building modern data pipelines, integrations, and back-end data solutions. The selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts, and other engineers to design and deliver data solutions that power business insights and AI products.

    Responsibilities:
    - Design and develop scalable data pipelines for ingestion, transformation, and integration using AWS services
    - Pull data from PostgreSQL and SQL Server to migrate to AWS (a minimal sketch follows this listing)
    - Create and modify jobs in AWS and modify logic in SQL Server
    - Create SQL queries, stored procedures, and functions in PostgreSQL and Redshift
    - Provide input on data modeling and schema design as needed
    - Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS
    - Support inbound/outbound data flows, including APIs, S3 replication, and secured data
    - Assist with data visualization/reporting as needed
    - Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints

    Qualifications:
    - 5+ years of data engineering experience
    - Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum)
    - Strong experience with SQL, Python, and PySpark
    - A background in supply chain, logistics, or distribution would be a plus
    - Experience with Power BI is a plus
    $75k-100k yearly est. 1d ago
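One common shape for the PostgreSQL-to-AWS migration step described above is extract, stage to S3, then Redshift COPY. This sketch assumes that pattern; every connection string, bucket, table, and the IAM role ARN are placeholders.

```python
# Illustrative PostgreSQL -> S3 -> Redshift COPY step; all identifiers are
# placeholders, not the client's environment.
import boto3
import pandas as pd
from sqlalchemy import create_engine

pg = create_engine("postgresql+psycopg2://user:pw@pg-host:5432/ops")  # placeholder
orders = pd.read_sql(
    "SELECT * FROM orders WHERE updated_at >= %(since)s",
    pg, params={"since": "2025-01-01"},
)

# Stage as Parquet in S3; Redshift COPY can load Parquet directly.
orders.to_parquet("/tmp/orders.parquet", index=False)
boto3.client("s3").upload_file(
    "/tmp/orders.parquet", "example-stage-bucket", "orders/orders.parquet"
)

rs = create_engine("postgresql+psycopg2://user:pw@redshift-host:5439/dw")  # placeholder
with rs.begin() as conn:
    conn.exec_driver_sql("""
        COPY staging.orders
        FROM 's3://example-stage-bucket/orders/orders.parquet'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-copy-role'
        FORMAT AS PARQUET;
    """)
```

COPY from S3 is preferred over row-by-row inserts because Redshift parallelizes the load across slices.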
  • Data Architect

    Mastek

    Data engineer job in Chicago, IL

    Job Title: Architect / Senior Data Engineer

    We are seeking a highly skilled Architect / Senior Data Engineer to design, build, and optimize our modern data ecosystem. The ideal candidate will have deep experience with AWS cloud services, Snowflake, and dbt, along with a strong understanding of scalable data architecture, ETL/ELT development, and data modeling best practices.

    Key Responsibilities:
    - Architect, design, and implement scalable, reliable, and secure data solutions using AWS, Snowflake, and dbt (a dbt CI sketch follows this listing)
    - Develop end-to-end data pipelines (batch and streaming) to support analytics, machine learning, and business intelligence needs
    - Lead the modernization and migration of legacy data systems to cloud-native architectures
    - Define and enforce data engineering best practices, including coding standards, CI/CD, testing, and monitoring
    - Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions
    - Optimize Snowflake performance through query tuning, warehouse sizing, and cost management
    - Establish and maintain data governance, security, and compliance standards across the data platform
    - Mentor and guide junior data engineers, providing technical leadership and direction

    Required Skills & Qualifications:
    - 8+ years of experience in Data Engineering, with at least 3+ years in a cloud-native data environment
    - Hands-on expertise in AWS services such as S3, Glue, Lambda, Step Functions, Redshift, and IAM
    - Strong experience with Snowflake: data modeling, warehouse design, performance optimization, and cost governance
    - Proven experience with dbt (data build tool): model development, documentation, and deployment automation
    - Proficiency in SQL, Python, and ETL/ELT pipeline development
    - Experience with CI/CD pipelines, version control (Git), and workflow orchestration tools (Airflow, Dagster, Prefect, etc.)
    - Familiarity with data governance and security best practices, including role-based access control and data masking
    - Strong understanding of data modeling techniques (Kimball, Data Vault, etc.) and data architecture principles

    Preferred Qualifications:
    - AWS certification (e.g., AWS Certified Data Analytics - Specialty, Solutions Architect)
    - Strong communication and collaboration skills, with a track record of working in agile environments
    $83k-113k yearly est. 3d ago
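The dbt deployment automation this posting asks for is often a small CI wrapper around the dbt CLI. Here is one minimal Python sketch of that pattern; the state-artifact path is an assumption, and `state:modified+` is dbt's standard selector for building only models changed relative to a saved production manifest.

```python
# Minimal dbt CI step: run the changed models, then test them.
# The prod-artifacts/ path is hypothetical; dbt's --state flag expects the
# directory holding the last production manifest.json.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)  # fail the pipeline fast

run(["dbt", "run", "--select", "state:modified+", "--state", "prod-artifacts/"])
run(["dbt", "test", "--select", "state:modified+", "--state", "prod-artifacts/"])
```

Building only modified models keeps Snowflake warehouse costs down on every pull request, which lines up with the cost-governance emphasis in the posting.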
  • Data Architect - Pharma

MathCo

    Data engineer job in Chicago, IL

    MathCo Role: Data/AI Engineering Manager
    Onsite: Chicago, 4 days in office (mandatory)
    Industry: Pharma (mandatory)

    As platform architect/owner, you will:
    - Lead the end-to-end architecture, lifecycle, and governance of the AI/analytics platform, defining standards, reusable components, and integration patterns
    - Partner with AI/data architects to enable scalable model deployment and enhance agentic orchestration
    - Translate business needs into platform features; manage onboarding, documentation, and cross-functional collaboration for platform adoption
    - Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability
    - Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations
    - Stay current on key AI platform developments and assess their impact on architecture and client strategy
    - Coach others, recognize their strengths, and encourage them to take ownership of their personal development

    Skills Required:
    - Experience designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP)
    - Proven ability to carry out complex proofs of concept (POCs), pilot projects, and limited production rollouts for AI use cases, focusing on developing new or improved techniques and procedures
    - Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices

    Years of Experience: Minimum of 8 years of relevant experience, preferably with a consulting background and experience with Pharma clients
    $83k-113k yearly est. 3d ago
  • Principal Data Architect

    Independence Pet Holdings

    Data engineer job in Chicago, IL

    Established in 2021, Independence Pet Holdings is a corporate holding company that manages a diverse and broad portfolio of modern pet health brands and services, including insurance, pet education, lost recovery services, and more throughout North America. We believe pet insurance is more than a financial product and build solutions to simplify the pet parenting journey and help improve the well-being of pets. As a leading authority in the pet category, we operate with a full stack of resources, capital, and services to support pet parents. Our multi-brand and omni-channel approach includes our own insurance carrier, insurance brands, and partner brands.

    Role Overview: In close collaboration with the CDO team, this role leads enterprise data architecture and strategy for IPH, driving the architecture and governance of the unified data platform and enabling advanced analytics and AI/ML capabilities across all pet insurance and non-insurance brands. It combines data platform, architecture, engineering, data governance, and business intelligence architecture to ensure a unified, secure, and scalable data ecosystem.

    Key Focus Areas:
    - Building a unified Customer Data Platform (CDP) across multiple zones/domains/lines of business
    - Enabling data monetization through cross-sell and up-sell insights and customer journey analytics
    - Driving Gen-AI/agentic AI adoption (agents, skills, RAG) with data as the foundation
    - Handling non-standard data from various sources (pet metadata, third-party data with formatting issues)
    - Transforming non-insurance data into actionable pet insurance insights, and vice versa

    Key Responsibilities

    Calandra Data Platform Architecture (30%)
    - Architect, design, and support/govern the implementation of the Azure-based enterprise data platform using cloud-native services
    - Architect a medallion lakehouse (Bronze/Silver/Gold tiers) for structured and unstructured data (a minimal sketch follows this listing)
    - Enable real-time streaming and batch processing for analytics and operational insights
    - Architect a multi-region, multi-AZ architecture with multiple instances per zone for high availability
    - Ensure a unified CDP across multiple zones/domains/lines of business

    Data Governance & Master Data Management (20%)
    - With the CDO team, establish an enterprise data governance framework and implement a data catalog for discovery and lineage
    - Define an MDM strategy to ensure a single source of truth for critical data domains (Customer, Product, Policy)
    - Implement data quality monitoring and remediation processes
    - Handle non-standard input data with formatting issues from various sources

    Analytics & BI Architecture (20%)
    - Build a Customer Data Platform (CDP) for all brands, including enrichment pipelines and analytical models
    - Enable self-service BI through Power BI and semantic modeling
    - Define architecture for reporting and advanced analytics capabilities
    - Drive data monetization strategies for cross-sell/up-sell and customer journey insights

    Integration & Standards (15%)
    - Define data integration patterns, pipeline architecture, and API standards
    - Ensure compliance with SOC 2, PCI DSS, and internal security baselines
    - Align with the Calandra Toolkit and target architecture standards

    Strategic Leadership (15%)
    - With the CDO team, develop a future-state data architecture roadmap aligned with IPH's digital transformation goals
    - Partner with business and technology leaders to drive adoption and maturity of data capabilities
    - Drive the organization toward 90% agentic-driven operations through AI/ML adoption

    Data Architecture Process Workflow (Expected Participation)

    As Solution Architect:
    - Define high-level design, scope, and intended outcomes
    - Align to business goals, governance standards, and target architecture (Calandra Toolkit)
    - Hand off requirements to Data Architecture

    As Data Architect:
    - Translate the solution into data models, integrations, security, and governance patterns
    - Validate scalability, regulatory compliance, and long-term roadmap alignment
    - Produce clear technical specs for engineering

    Architecture Review:
    - Conduct the "Solution Architecture + Data Architecture" review in collaboration with the CDO team
    - Support implementation, design, and go-live
    - Review performance, security, and governance
    - Check reusability and standards compliance
    - Ensure required fixes are addressed before proceeding

    Technical Requirements - Required Platforms & Tools:
    - Data Platforms: Databricks, Azure Synapse, Microsoft Fabric
    - BI & Analytics: Power BI, self-service BI, semantic modeling
    - CDP: Customer Data Platform architecture, enrichment pipelines
    - Cloud: Azure (primary), AWS
    - Streaming: Kafka, Kinesis, Spark Streaming, real-time pipelines
    - Data Governance: data catalog, lineage, MDM, quality frameworks

    Gen-AI & Agentic Requirements (Nice to Have, Becoming Critical):
    - Machine Learning: model building, deployment, ML pipelines
    - Gen-AI Agents & Skills: multi-agent pipelines, agentic workflows
    - RAG: Retrieval Augmented Generation for data insights
    - Data Monetization: end-to-end framework with data as the base and AI on top to deliver customer value
    - 90% Agentic Driven: vision to drive the organization toward agentic automation

    Client Technology Stack (Familiarity Expected):
    - MS Dynamics: internal client CRM
    - New Portal: external client portal (in-house, React + MS stack)
    - DocGen: document generation (MS stack)
    - EIS Insurance Platform: middle layer for Policy/Group Admin
    - Earnix: third-party rating/pricing engine

    General Qualifications:
    - 10+ years of enterprise data architecture experience; deep expertise with Azure data platforms
    - 8+ years in analytics/BI architecture and Power BI at enterprise scale

    Preferred Qualifications:
    - Azure Data Engineer or Solutions Architect Expert certification
    - Experience with Databricks, Synapse, and Microsoft Fabric
    - Strong knowledge of lakehouse patterns, streaming architectures, and data governance frameworks
    - Insurance industry experience is a plus
    - Experience building end-to-end frameworks with data as a foundation and AI on top
    - Hands-on with data, Gen-AI agents, RAG, and agentic workflows

    Key Success Metrics:
    - Unified CDP: single customer 360 view across all brands/zones
    - Data Quality: 95%+ quality scores
    - Agentic Automation: progress toward 90% agentic-driven operations
    - Cross-sell/Up-sell: data-driven insights driving revenue
    - Compliance: SOC 2 and PCI DSS compliance maintained
    - Platform Adoption: self-service BI adoption across business units

    All of our jobs come with great benefits, including healthcare, parental leave, and opportunities for career advancement. Some offerings depend on your work location and can include the following: comprehensive medical, dental, and vision insurance; basic life insurance at no cost to the employee; company-paid short-term and long-term disability; 12 weeks of 100% paid parental leave; Health Savings Account (HSA); Flexible Spending Accounts (FSA); retirement savings plan; personal paid time off; paid holidays and a company-wide Wellness Day off; paid time off to volunteer at nonprofit organizations; pet-friendly office environment; commuter benefits; group pet insurance; on-the-job training and skills development; and an Employee Assistance Program (EAP).
    $83k-113k yearly est. 18h ago
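The medallion (Bronze/Silver/Gold) lakehouse named above has a fairly standard shape on Databricks. This sketch shows the Bronze and Silver tiers using Auto Loader; the storage paths, table names, and claim schema are invented for illustration, and streaming `dropDuplicates` without a watermark is acceptable only for a sketch (state grows unbounded).

```python
# Illustrative Databricks-style medallion pipeline; all identifiers are
# placeholders, not IPH's platform.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw claim events as-is, with an audit column. Auto Loader
# ("cloudFiles", available on Databricks) ingests new files incrementally.
bronze = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .load("abfss://landing@exampleacct.dfs.core.windows.net/claims/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.writeStream.option(
    "checkpointLocation", "/chk/bronze_claims"
).toTable("bronze.claims")

# Silver: validated, deduplicated, typed records ready for analytics/AI.
silver = (
    spark.readStream.table("bronze.claims")
    .filter(F.col("claim_id").isNotNull())
    .dropDuplicates(["claim_id"])
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
)
silver.writeStream.option(
    "checkpointLocation", "/chk/silver_claims"
).toTable("silver.claims")
```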
  • Distinguished Data Engineer - Card Data

Capital One

    Data engineer job in Chicago, IL

    Distinguished Data Engineers are individual contributors who strive to be diverse in thought so that we can fully visualize the problem space. At Capital One, we believe diversity of thought strengthens our ability to influence, collaborate, and provide the most innovative solutions across organizational boundaries. Distinguished Engineers will significantly impact our trajectory and devise clear roadmaps to deliver next-generation technology solutions.

    About the Team: Capital One is seeking a Distinguished Data Engineer to work in our Credit Card Technology Data Engineering Team and build the future of financial services. We are a fast-paced, mission-driven group responsible for managing and leveraging petabytes of sensitive, real-time and batch data that powers everything from fraud detection models and personalized reward systems to regulatory compliance reporting. As a leader in Data Engineering, you won't just move data; you'll architect high-availability systems that directly influence millions of customer experiences and secure billions in transactions daily. You'll own critical data domains end-to-end, working cross-functionally with ML Scientists, Product Managers, Business Analysts, and other teams to solve complex, high-stakes problems with cutting-edge cloud technologies (like Snowflake, Kafka, and AWS). If you thrive on technical challenges, demand data integrity, and want your work to have a clear, measurable impact on the bank's core profitability and security, this is your team. This leader must have the ability to attract and recruit the industry's best talent, and simultaneously have the technical chops to ensure that we build compelling, customer-oriented solutions in an iterative methodology. Success in the role requires an innovative mind, a proven track record of delivering next-generation software and data products, rigorous analytical skills, and a passion for delivering customer value through automation, machine learning, and predictive analytics.
    Our Distinguished Engineers Are:
    - Deep technical experts and thought leaders who help accelerate adoption of the very best engineering practices, while maintaining knowledge of industry innovations, trends, and practices
    - Visionaries, collaborating on Capital One's toughest issues, to deliver on business needs that directly impact the lives of our customers and associates
    - Role models and mentors, helping to coach and strengthen the technical expertise and know-how of our engineering and product community
    - Evangelists, both internally and externally, helping to elevate the Distinguished Engineering community and establish themselves as a go-to resource on given technologies and technology-enabled capabilities

    Responsibilities:
    - Build awareness, increase knowledge, and drive adoption of modern technologies, sharing consumer and engineering benefits to gain buy-in
    - Strike the right balance between lending expertise and providing an inclusive environment where others' ideas can be heard and championed; leverage expertise to grow skills in the broader Capital One team
    - Promote a culture of engineering excellence, using opportunities to reuse and innersource solutions where possible
    - Effectively communicate with and influence key stakeholders across the enterprise, at all levels of the organization
    - Operate as a trusted advisor for a specific technology, platform, or capability domain, helping to shape use cases and implementation in a unified manner
    - Lead the way in creating next-generation talent for Tech, mentoring internal talent and actively recruiting external talent to bolster Capital One's Tech talent

    Basic Qualifications:
    - Bachelor's Degree
    - At least 7 years of experience in data engineering
    - At least 3 years of experience in data architecture
    - At least 2 years of experience building applications in AWS

    Preferred Qualifications:
    - Master's Degree
    - 9+ years of experience in data engineering
    - 3+ years of data modeling experience
    - 2+ years of experience with ontology standards for defining a domain
    - 2+ years of experience using Python, SQL, or Scala
    - 1+ year of experience deploying machine learning models
    - 3+ years of experience implementing big data processing solutions on AWS

    Capital One will consider sponsoring a new qualified applicant for employment authorization for this position.

    The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed-upon number of hours to be regularly worked.
    - Chicago, IL: $239,900 - $273,800 for Distinguished Data Engineer
    - McLean, VA: $263,900 - $301,200 for Distinguished Data Engineer
    - New York, NY: $287,800 - $328,500 for Distinguished Data Engineer
    - Richmond, VA: $239,900 - $273,800 for Distinguished Data Engineer
    - San Francisco, CA: $287,800 - $328,500 for Distinguished Data Engineer

    Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter. This role is also eligible to earn performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI). Incentives could be discretionary or non-discretionary depending on the plan.
    Capital One offers a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being. Learn more at the Capital One Careers website. Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level. This role is expected to accept applications for a minimum of 5 business days. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to . Capital One does not provide, endorse, nor guarantee and is not liable for third-party products, services, educational tools, or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe, and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
    $75k-97k yearly est. 15h ago
  • Senior Back End Developer - Distributed Systems (C# or Golang)

    Access Search, Inc.

    Data engineer job in Chicago, IL

    Our client, a fast-growing organization developing secure, scalable technologies for next-generation AI applications, is seeking a Backend Engineer to join their core platform team. In this role, you'll help build and refine the foundational services that power authentication, observability, data flows, and high-availability systems across a distributed ecosystem. This is an opportunity to work on complex backend challenges while shaping the infrastructure that supports mission-critical applications.

    What You'll Do:
    - Develop, enhance, and support backend services that form the foundation of the platform
    - Build and maintain core authentication and authorization capabilities
    - Apply principles of Domain-Driven Design to guide how services and components evolve over time
    - Architect, extend, and support event-sourced systems to ensure durable, consistent operations at scale
    - Participate in API design and integration efforts across internal and external stakeholders
    - Implement and support messaging frameworks (e.g., NATS) to enable reliable service-to-service communication (a minimal sketch follows this listing)
    - Maintain and improve observability tooling (metrics, tracing, and logging) to ensure healthy system performance
    - Work closely with infrastructure, DevOps, and engineering teams to ensure robust, secure, and maintainable operations

    What You Bring:
    - 3-6+ years of experience as a backend engineer
    - Strong knowledge of distributed systems and microservices
    - Proficiency in at least one modern backend programming language (C#, Go, Rust, etc.)
    - Practical experience with IAM concepts and authentication/authorization frameworks
    - Exposure to event-sourcing patterns, DDD, and common messaging systems (e.g., NATS, Kafka, SNS, RabbitMQ)
    - Familiarity with Redis or similar in-memory caching technologies
    - Experience working with observability tools such as Prometheus, Jaeger, ELK, or Application Insights
    - Understanding of cloud-native environments and deployment workflows (AWS, Azure, or GCP)

    Why This Role Is Compelling: You'll contribute directly to a foundational platform used across an entire organization, impacting performance, reliability, and security at every layer. If you enjoy solving distributed-system challenges and working on complex, high-scale backend services, this is a strong match.

    #BackendEngineering #DistributedSystems #PlatformEngineering #CloudNative #SoftwareJobs
    $90k-117k yearly est. 18h ago
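The role above pairs event sourcing with NATS messaging. Although the team works in C# or Go, here is the idea in Python for brevity, using the nats-py client; the server URL, subject name, and event payload are invented.

```python
# Illustrative NATS publish/subscribe sketch (nats-py client). In an
# event-sourced design, state changes are appended as immutable events and
# consumers rebuild state by replaying them. All names are placeholders.
import asyncio
import json

import nats

async def main() -> None:
    nc = await nats.connect("nats://localhost:4222")  # placeholder server

    async def on_user_event(msg):
        event = json.loads(msg.data)
        # A real consumer would apply the event to a projection/read model.
        print("applying", event["type"], "for", event["user_id"])

    await nc.subscribe("users.events", cb=on_user_event)

    event = {"type": "UserRegistered", "user_id": "u-123"}  # hypothetical event
    await nc.publish("users.events", json.dumps(event).encode())

    await nc.flush()
    await nc.drain()

asyncio.run(main())
```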
  • Lead DevOps Engineer

    Qorali

    Data engineer job in Chicago, IL

    Qorali is seeking a Lead DevOps Engineer to drive the evolution of our cloud and automation strategy. In this role, you'll own the design and delivery of enterprise-scale cloud infrastructure, lead mission-critical DevOps initiatives, and mentor engineers across the organization. We're looking for a hands-on technical leader with deep expertise in AWS, Kubernetes, CI/CD pipelines, Terraform, and Kafka: someone who thrives on solving complex challenges and setting best practices for scalable, secure, and resilient systems.

    Key Responsibilities:
    - Architect and implement highly available, automated cloud solutions on AWS
    - Build and optimize CI/CD pipelines to accelerate software delivery
    - Design, deploy, and manage containerized workloads with Kubernetes (see the health-check sketch after this listing)
    - Lead Kafka platform operations to support real-time, high-throughput applications
    - Champion infrastructure-as-code with Terraform, driving automation and repeatability
    - Provide technical leadership and mentoring, and serve as an escalation point for critical issues
    - Collaborate with development, security, and operations teams to deliver end-to-end DevOps solutions

    Qualifications:
    - 7+ years of experience in DevOps, cloud engineering, or infrastructure automation
    - Proven expertise in AWS, Kubernetes, Terraform, CI/CD (Jenkins/GitHub Actions), Python, and Kafka
    - Experience with configuration management (Ansible, Puppet, or Chef)
    - Strong understanding of cloud security, compliance frameworks (CIS, NIST), and high-availability design
    - Demonstrated leadership experience, guiding technical teams and influencing DevOps best practices

    Compensation & Benefits:
    - $150-180k base salary + 15% bonus
    - 22+ days PTO
    - Health, vision, dental & life insurance
    - 6% 401k matching

    Location: Hybrid, Chicago or Dallas
    $150k-180k yearly 2d ago
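For the Kubernetes operations work above, a small script using the official Kubernetes Python client can spot degraded workloads; the namespace and label selector below are assumptions.

```python
# Illustrative deployment health check using the official kubernetes client;
# the "platform" namespace and "team=devops" selector are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

deployments = apps.list_namespaced_deployment(
    namespace="platform", label_selector="team=devops"
)
for d in deployments.items:
    ready = d.status.ready_replicas or 0
    desired = d.spec.replicas or 0
    flag = "OK" if ready == desired else "DEGRADED"
    print(f"{flag:8} {d.metadata.name}: {ready}/{desired} replicas ready")
```

A check like this typically runs from CI or a cron job and pages the on-call engineer when replicas stay below the desired count.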
  • Azure Cloud & DevOps Engineer

    Sprocket Sports

    Data engineer job in Chicago, IL

    πŸ“± Azure Cloud & DevOps Engineer πŸ“ Chicago, IL | 🏒 Hybrid | πŸ’Ό Full-Time At Sprocket Sports, We are currently seeking an Azure Cloud & DevOps Engineer to join our Team. The ideal candidate has a passion for youth sports and managing a best-in-class software platform that will be used by thousands of youth sports club administrators, coaches, parents and players. About Sprocket Sprocket Sports is a fast-growing technology company based in Chicago and a national leader in the youth sports space. Our software and services help clubs streamline operations, reduce costs, and grow faster, so they can focus on what really matters: kids playing sports. We're also proud to be a certified Great Place to Work 2024, with a culture that balances high standards, accountability, and fun. What You'll Do As an experienced DevOps / cloud engineer you will help us scale and maintain a high-performing, reliable, and cost-effective cloud infrastructure. As an Azure Cloud Engineer, you will be the backbone of our cloud infrastructure, ensuring our platform is always available, fast, and secure for our users. You will manage our resources in Microsoft Azure, focusing heavily on performance optimization, cost control, and proactive system health monitoring. This role is perfect for someone passionate about cloud technology, DevOps principles, and continuous improvement. In this role you will interact with our software engineers, product managers and occasionally with operational stakeholders. We are seeking individuals who like to think creatively and have a passion for continually improving the platform. Responsibilities: Core Azure Cloud Management Resource & Cost Optimization: Manage, provision, and maintain our complete suite of Azure resources (e.g., App Services, Azure Functions, AKS, VMs). Proactively manage and reduce cloud costs by identifying and implementing efficiencies in resource utilization and recommending right-sizing strategies. Security and Compliance: Ensure security best practices are implemented across all Azure services, including network segmentation, access control (IAM), and patching. Performance & Reliability Engineering (SRE Focus) System Health and Monitoring: Ongoing monitoring of application and system performance using Azure and DataDog to detect and diagnose issues before they impact users. Review system logs, metrics, and tracing data to identify areas of concern, bottlenecks, and opportunities for performance tuning. Performance Testing Lead efforts to conduct load testing and performance testing on the system. Database Performance Tuning: Review and optimize SQL performance by analyzing query plans, identifying slow-running queries, and recommending improvements (indexing, schema changes, stored procedures). Manage and monitor our Azure SQL Database resources for optimal health and throughput. Incident Response: Participate in on-call rotation to provide 24/7 support for critical infrastructure incidents and drive root cause analysis (RCA). DevOps Automation Infrastructure as Code (IaC): Implement Infrastructure-as-Code (ARM, Bicep, or Terraform) to maintain consistent, auditable deployments. Continuous Integration / Continuous Delivery (CI/CD): Work closely with the development team to automate and streamline deployment pipelines (CI/CD) using Azure DevOps, ensuring fast and reliable releases. Configuration Management: Implement and manage configuration for applications and infrastructure. 
    What We're Looking For:
    • Bachelor's degree in Computer Science or a related field.
    • 3+ years of professional experience in Cloud Engineering, DevOps, or a similar role, with a strong focus on Microsoft Azure.
    • Deep hands-on experience with core Azure services and strong networking fundamentals.
    • Solid experience with monitoring and observability platforms, specifically Datadog.
    • Scripting proficiency in PowerShell.
    • Demonstrated ability to analyze and optimize relational database performance (SQL/T-SQL).
    • Strong problem-solving skills.
    • Strong communication and interpersonal skills; ability to analytically defend design decisions and take feedback without ego.
    • Strong attention to detail and accountability.

    Why Join Us?
    ✅ Certified Great Place to Work 2024
    🤝 Mission-driven team with a big vision
    🚀 Fast-growing startup with room to grow
    💼 Competitive salary + equity
    📊 401(k) with company match
    🩺 Comprehensive medical and dental
    🎉 A culture built on Higher Standards, Greater Accountability, and More Fun
    $80k-105k yearly est. 5d ago
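Since the Sprocket role leans heavily on T-SQL triage, here is a minimal sketch of the kind of slow-query report the listing describes. It is an illustration under assumptions, not Sprocket's actual tooling: it assumes the pyodbc package and a connection string in a hypothetical AZURE_SQL_CONN environment variable, and it reads SQL Server's sys.dm_exec_query_stats DMV, which Azure SQL Database exposes.

```python
# Minimal slow-query triage against Azure SQL, assuming the pyodbc package
# and a connection string supplied via a hypothetical AZURE_SQL_CONN
# environment variable.
import os

import pyodbc

TOP_SLOW_QUERIES = """
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_us DESC;
"""

def report_slow_queries() -> None:
    # A plain ODBC connection string keeps the sketch short; Azure AD
    # token auth would work here too.
    conn = pyodbc.connect(os.environ["AZURE_SQL_CONN"])
    try:
        for row in conn.execute(TOP_SLOW_QUERIES):
            print(f"{row.avg_elapsed_us:>12,} us  x{row.execution_count}  "
                  f"{row.statement_text[:80]}")
    finally:
        conn.close()

if __name__ == "__main__":
    report_slow_queries()
```

Ranking by average rather than total elapsed time keeps one very hot but fast statement from drowning out genuinely slow queries; either ordering is a defensible starting point for the tuning work described above.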
  • Senior Devops Engineer

    Valuemomentum (3.6 company rating)

    Data engineer job in Naperville, IL

    We are looking for a Senior DevOps Engineer to build and manage CI/CD pipelines and Kubernetes platforms while working directly with clients to improve software delivery, reliability, and security.

    Key Responsibilities

    Technical Responsibilities
    • Design and implement CI/CD pipelines using Azure DevOps and GitHub Actions
    • Build, deploy, and manage containerized workloads on Amazon EKS
    • Automate infrastructure provisioning using Terraform
    • Implement DevSecOps best practices and CI/CD security controls
    • Support release management, production deployments, and platform reliability
    • Monitor and troubleshoot CI/CD pipelines and Kubernetes environments (see the deployment-health sketch after this listing)

    Client-Facing Responsibilities
    • Serve as a trusted DevOps advisor for client engineering teams
    • Collaborate with client developers to design and optimize CI/CD workflows
    • Conduct client workshops and knowledge-transfer sessions
    • Lead DevOps onboarding for new client applications
    • Participate in client architecture reviews and technical deep-dives
    • Support client incident resolution, RCA discussions, and post-mortems
    • Provide documentation, runbooks, and best-practice guidance to clients
    • Communicate progress, risks, and recommendations clearly to client stakeholders

    Required Qualifications
    • 10+ years of DevOps or platform engineering experience
    • Hands-on experience with Azure DevOps and GitHub
    • Strong Kubernetes and Amazon EKS experience
    • Strong Docker, Helm, and Terraform skills
    • Proven client-facing experience with excellent communication skills

    Preferred Qualifications
    • Experience working in MSP or consulting environments
    • Experience with GitOps tools (ArgoCD, Flux)
    • Cloud or Kubernetes certifications
    • Experience supporting multiple client environments simultaneously
    $75k-94k yearly est. 18h ago
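Since this role centers on monitoring and troubleshooting Kubernetes environments, here is a minimal sketch of the deployment-health check referenced in the listing. It assumes the official kubernetes Python client and a kubeconfig already pointed at the EKS cluster; the default namespace is illustrative.

```python
# Minimal deployment-health check, assuming the official kubernetes
# Python client and a kubeconfig that already targets the EKS cluster.
from kubernetes import client, config

def flag_unhealthy_deployments(namespace: str = "default") -> None:
    config.load_kube_config()  # in-cluster: config.load_incluster_config()
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            # A lagging deployment is the usual first symptom of a bad
            # rollout; surface it for the on-call engineer.
            print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")

if __name__ == "__main__":
    flag_unhealthy_deployments()
```

In practice a check like this would feed an alerting pipeline (Datadog, CloudWatch, or similar) rather than print to stdout.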
  • Lead Principal Java Scala Blockchain Software Engineer

    Request Technology, LLC

    Data engineer job in Chicago, IL

    ***This is not a C2C role; permanent W2 direct-hire only. The role is bonus eligible.***

    Prestigious Financial Institution is currently seeking a Lead Principal Java Software Engineer with Scala and blockchain experience. The candidate will be responsible for the development and delivery of business features, integrating a variety of upstream data sources and presenting data through the user interface, all while enriching and advancing the platform. This software must achieve a blend of data-rich presentation, performance, user experience, and the capacity to support the busiest trading days in the world economy with rock-solid reliability. The candidate must be able to solve problems creatively, communicate effectively, and proactively engage in technical decision-making to achieve these objectives.

    Responsibilities:
    • Working alongside experts who are building a next-generation blockchain-based securities lending system and paving the future of digital transformation in the capital markets industry
    • Collaborating with others to deliver complex projects that may involve multiple systems
    • Continuously thinking about the next steps while improving yourself and those around you
    • Developing solutions to complex technical challenges while coding, testing, troubleshooting, debugging, and documenting the systems you develop
    • Optimizing application performance through analysis, code refactoring, and system tuning
    • Recommending technologies and tools that improve the efficiency and quality of the systems and development processes

    Qualifications:
    • [Required] 2+ years of development experience with Scala
    • [Required] 7+ years of experience in software development
    • [Required] 5+ years of experience in Java or related technologies
    • [Required] 3+ years of experience in React JS or similar technologies
    • [Required] 1+ years of experience with distributed application design & blockchain
    • [Required] Experience with Akka or other actor-based systems
    • [Required] Experience with DevOps and CI/CD tools (Git, Jenkins, Docker, Kubernetes, Harness, Rancher)
    • [Required] Ability to write clean, bug-free code that is easy to understand and easily maintainable
    • [Required] Experience with BDD methodologies & automated acceptance testing

    Technical Skills & Background:
    • [Required] Scala-based software development experience
    • [Required] Web/mobile application development experience
    • [Required] Understanding of message brokers, queues, and distributed datastores (Kafka, MQ, Redis, Splunk); see the consumer sketch after this listing
    • [Required] Experience working in Unix/Linux environments, large software system development, security software development, and public-cloud platforms
    • [Required] Fluency in functional programming, object-oriented design, industry best practices, software patterns, and architecture principles
    • [Required] Proficiency in the following types of testing: unit, integration, system, functional, non-functional, regression, performance, security, and acceptance
    • [Required] Deep understanding of performance issues and multi-threaded development
    • [Required] Experience with continuous integration tools and techniques, automating processes, and writing scripts using Python and other languages

    Education:
    • [Required] BS degree in Computer Science or a similar technical field
    • [Preferred] Master's degree preferred
    $97k-129k yearly est. 3d ago
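The listing's stack is Scala and Akka, but it also asks for message-broker understanding and Python scripting, so here is a minimal Kafka consumer sketch in Python. It assumes the kafka-python package; the topic name, consumer group, and broker address are hypothetical placeholders, not details from the role.

```python
# Minimal Kafka consumer sketch, assuming the kafka-python package; the
# topic, group, and broker address below are illustrative placeholders.
import json

from kafka import KafkaConsumer

def consume_trade_events() -> None:
    consumer = KafkaConsumer(
        "trade-events",                       # hypothetical topic
        bootstrap_servers=["localhost:9092"],
        group_id="lending-platform",          # hypothetical consumer group
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for record in consumer:
        # Each record carries partition/offset metadata alongside the
        # payload, which downstream enrichment steps typically key on.
        print(record.partition, record.offset, record.value)

if __name__ == "__main__":
    consume_trade_events()
```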
  • Senior Dotnet Developer

    Hexaware Technologies (4.2 company rating)

    Data engineer job in Chicago, IL

    Required Skills & Experience
    • Strong and practical expertise in .NET development.
    • Solid experience with Microsoft Azure and AI Foundry/AI-related solutions.
    • Strong Python programming skills.
    • Proficiency with low-code/no-code platforms, especially Retool.
    • Ability to quickly prototype, iterate, and convert ideas into workable models (see the prototyping sketch after this listing).
    • Strong debugging skills with a proactive attitude toward fixing code and optimizing performance.

    Personal Attributes
    • Must be a "vibe coder": creative, curious, and passionate about building cool things.
    • A true self-starter who can work independently with minimal supervision.
    • A go-getter who thrives in fast-paced environments.
    • High energy, enthusiasm, and a strong sense of ownership in getting work done.
    $63k-80k yearly est. 18h ago
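The Hexaware listing pairs .NET with Azure AI Foundry and Python prototyping. As a hedged illustration of the "quickly prototype and iterate" skill, here is a minimal sketch using the openai package's AzureOpenAI client; the endpoint and key environment variables, API version, and deployment name are all hypothetical placeholders for values an AI Foundry project would supply.

```python
# Minimal Azure OpenAI prototype sketch, assuming the openai package's
# AzureOpenAI client; all names below are hypothetical placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # hypothetical env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # hypothetical env var
    api_version="2024-06-01",
)

def summarize(text: str) -> str:
    # One-shot call against a deployed chat model; "gpt-4o-mini" stands in
    # for whatever deployment name the Foundry project actually exposes.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content or ""

if __name__ == "__main__":
    print(summarize("Retool dashboards let us wire a UI onto this in hours."))
```

A thin wrapper like this is the kind of workable model the listing asks for: swap the prompt, point a Retool form at it, and iterate.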

Learn more about data engineer jobs

How much does a data engineer earn in Skokie, IL?

The average data engineer in Skokie, IL earns between $66,000 and $114,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Skokie, IL

$87,000

What are the biggest employers of Data Engineers in Skokie, IL?

The biggest employers of Data Engineers in Skokie, IL are:
  1. RANDA Solutions
  2. Berkshire Hathaway
  3. Haggar Clothing Co.
  4. 360-Tsg