Data Engineer
Data engineer job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four - one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases (see the sketch after this list)
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
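For a concrete flavor of the first bullet, here is a minimal pandas sketch of an extract-and-normalize step, assuming a hypothetical wide-format revenue workbook; the file path, sheet name, and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical revenue workbook; path, sheet, and columns are illustrative only.
raw = pd.read_excel("client_revenue.xlsx", sheet_name="FY2024", header=2)

# Standardize headers: strip whitespace, lowercase, snake_case.
raw.columns = raw.columns.str.strip().str.lower().str.replace(" ", "_")

# Assume a wide layout: one row per account, one column per month.
month_cols = [c for c in raw.columns if c != "account_id"]
tidy = raw.melt(id_vars=["account_id"], value_vars=month_cols,
                var_name="month", value_name="amount_usd")

# Coerce amounts and drop the subtotal/blank rows Excel exports often embed.
tidy["amount_usd"] = pd.to_numeric(tidy["amount_usd"], errors="coerce")
tidy = tidy.dropna(subset=["account_id", "amount_usd"])
```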
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python and JavaScript
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founder and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Data Engineer
Data engineer job in Chicago, IL
Data Engineer - Build the Data Engine Behind AI Execution - Starting Salary $150,000
You'll be part architect, part systems designer, part execution partner - someone who thrives at the intersection of engineering precision, scalability, and impact.
As the builder behind the AI data platform, you'll turn raw, fragmented data into powerful, reliable systems that feed intelligent products. You'll shape how data flows, how it scales, and how it powers decision-making across AI, analytics, and product teams.
Your work won't be behind the scenes - it will be the foundation of everything we build.
You'll be joining a company built for builders. Our model combines AI consulting, venture building, and company creation into one execution flywheel. Here, you won't just build data pipelines - you'll build the platforms that power real products and real companies.
You know that feeling when a data system scales cleanly under real-world pressure, when latency drops below target, when complexity turns into clarity - and everything just flows? That's exactly what you'll build here.
Ready to engineer the platform that powers AI execution? Let's talk.
No up-to-date resume required.
Data Engineer
Data engineer job in Chicago, IL
We are seeking a highly skilled Data Engineer with strong expertise in Scala, AWS, and Apache Spark. The ideal candidate will have 7+ years of hands-on experience building scalable data pipelines, distributed processing systems, and cloud-native data solutions.
Key Responsibilities
Design, build, and optimize large-scale data pipelines using Scala and Spark (a sketch follows this list).
Develop and maintain ETL/ELT workflows across AWS services.
Work on distributed data processing using Spark, Hadoop, or similar.
Build data ingestion, transformation, cleansing, and validation routines.
Optimize pipeline performance and ensure reliability in production environments.
Collaborate with cross-functional teams to understand requirements and deliver robust solutions.
Implement CI/CD best practices, testing, and version control.
Troubleshoot and resolve issues in complex data flow systems.
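The role calls for Scala, but the DataFrame API is nearly identical across Spark's language bindings; purely as an illustration, here is a PySpark-style sketch of the ingest-cleanse-validate loop described above, with bucket paths and columns invented:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleanse").getOrCreate()

# Ingest raw JSON landed in S3 (paths and columns are placeholders).
orders = spark.read.json("s3://raw-zone/orders/2024/")

# Cleanse: normalize timestamps, drop malformed rows, deduplicate on the key.
clean = (orders
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
         .dropDuplicates(["order_id"]))

# Validate: fail loudly on an empty batch instead of silently writing nothing.
if clean.limit(1).count() == 0:
    raise ValueError("orders batch is empty after cleansing")

# Write partitioned Parquet to the curated zone.
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-zone/orders/")
```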
Required Skills & Experience
7+ years of Data Engineering experience.
Strong programming experience with Scala (must-have).
Hands-on experience with Apache Spark (core, SQL, streaming).
Solid experience with AWS cloud services (Glue, EMR, Lambda, S3, EC2, IAM, etc.).
High proficiency in SQL and relational/NoSQL data stores.
Strong understanding of data modeling, data architecture, and distributed systems.
Experience with workflow orchestration tools (Airflow, Step Functions, etc.).
Strong communication and problem-solving skills.
Preferred Skills
Experience with Kafka, Kinesis, or other streaming platforms.
Knowledge of containerization tools like Docker or Kubernetes.
Background in data warehousing or modern data lake architectures.
Big Data Consultant
Data engineer job in Chicago, IL
Job Title: Big Data Engineer
Employment Type: W2 Contract
Detailed Job Description:
We are seeking a skilled and experienced Big Data Platform Engineer with 7+ years of experience and a strong background in both development and administration of big data ecosystems. The ideal candidate will be responsible for designing, building, maintaining, and optimizing scalable data platforms that support advanced analytics, machine learning, and real-time data processing.
Key Responsibilities:
Platform Engineering & Administration:
• Install, configure, and manage big data tools such as Hadoop, Spark, Kafka, Hive, HBase, and others.
• Monitor cluster performance, troubleshoot issues, and ensure high availability and reliability.
• Implement security policies, access controls, and data governance practices.
• Manage upgrades, patches, and capacity planning for big data infrastructure.
Development & Data Engineering:
• Design and develop scalable data pipelines using tools like Apache Spark, Flink, NiFi, or Airflow (see the sketch after this list).
• Build ETL/ELT workflows to ingest, transform, and load data from various sources.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with data scientists and analysts to support model deployment and data exploration.
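As a hedged illustration of the development side, here is a minimal Spark Structured Streaming job that consumes a Kafka topic; broker addresses, topic name, schema, and output paths are placeholders, not a prescribed stack:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-events").getOrCreate()

# Read a Kafka topic as a stream (bootstrap servers and topic are placeholders).
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers bytes; parse the value column as JSON with an explicit schema.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("metric", DoubleType()),
])
events = (stream
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land parsed events as Parquet, with checkpointing for fault-tolerant output.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/events")
         .option("checkpointLocation", "/data/_checkpoints/events")
         .start())
```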
Senior Data Engineer
Data engineer job in Chicago, IL
This role requires visa-independent candidates.
Note: OPT, CPT, and H1B holders cannot be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue (a minimal job skeleton follows this list)
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
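For orientation, here is a skeleton of a Glue PySpark job following the standard awsglue boilerplate; the catalog database, table, column mappings, and bucket are placeholders:

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job setup; database/table/bucket names are placeholders.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, remap columns, and write Parquet to S3.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")

mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("orderId", "string", "order_id", "string"),
              ("orderTotal", "double", "amount_usd", "double")])

glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},
    format="parquet")

job.commit()
```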
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Engineer
Data engineer job in Chicago, IL
Job Title: Data Engineer - Workflow Automation
Employment Type: Contract to Hire or Full-Time
Department: Project Scion / Information Management Solutions
Key Responsibilities:
Design, build, and manage workflows using Automic or similar tools such as Autosys, Apache Airflow, or Cybermation (a minimal Airflow example follows this list).
Orchestrate workflows across multi-cloud ecosystems (AWS, Azure, Snowflake, Databricks, Redshift).
Monitor and troubleshoot workflow execution, ensuring high availability, reliability, and performance.
Administer and maintain workflow platforms.
Collaborate with architecture and infrastructure teams to align workflows with cloud strategies.
Support migrations, upgrades, and workflow optimization efforts.
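Since Apache Airflow is one of the named alternatives, a minimal DAG sketch shows the shape of such a workflow; the DAG id, schedule, and task bodies are invented for illustration:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Illustrative only: DAG id, schedule, and task bodies are placeholders.
def extract():
    print("pull source files")

def load():
    print("load into the warehouse")

with DAG(
    dag_id="nightly_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # nightly at 02:00
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```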
Required Skills:
5+ years of experience in IT managing production-grade systems.
Hands-on experience with Automic or similar enterprise workflow automation tools.
Strong analytical and problem-solving skills.
Good communication and documentation skills.
Familiarity with cloud platforms and technologies (e.g., AWS, Azure, Snowflake, Databricks).
Scripting proficiency (e.g., Shell, Python).
Ability to manage workflows across hybrid environments and optimize performance.
Experience managing production operations & support activities
Preferred Skills:
Experience with CI/CD pipeline integration.
Knowledge of cloud-native orchestration tools
Exposure to monitoring and alerting systems.
Snowflake Data Engineer
Data engineer job in Chicago, IL
Join a dynamic team focused on building innovative data solutions that drive strategic insights for the business. This is an opportunity to leverage your expertise in Snowflake, ETL processes, and data integration.
Key Responsibilities
Develop Snowflake-based data models to support enterprise-level reporting.
Design and implement batch ETL pipelines for efficient data ingestion from legacy systems (a minimal sketch follows this list).
Collaborate with stakeholders to gather and understand data requirements.
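As a sketch of the batch-ingestion bullet, here is a stage-and-merge pattern via the Snowflake Python connector; credentials and object names are placeholders:

```python
import snowflake.connector

# Connection parameters and object names are placeholders, not real ones.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING")

cur = conn.cursor()
try:
    # Stage-and-merge: bulk-load files from an external stage,
    # then upsert into the reporting model.
    cur.execute("COPY INTO staging.orders FROM @legacy_stage/orders/")
    cur.execute("""
        MERGE INTO reporting.orders AS t
        USING staging.orders AS s ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """)
finally:
    cur.close()
    conn.close()
```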
Required Qualifications
Hands-on experience with Snowflake for data modeling and schema design.
Proven track record in developing ETL pipelines and understanding transformation logic.
Solid SQL skills to perform complex data transformations and optimization.
If you are passionate about building cutting-edge data solutions and want to make a significant impact, we would love to see your application!
#11290
Data Engineer
Data engineer job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability (see the sketch after this list).
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
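One common incremental-pipeline pattern in Snowflake (and one the requirements below call out) combines streams and tasks; here is a hedged sketch with invented object names:

```python
import snowflake.connector

# Illustrative stream-plus-task pattern for incremental loads; names invented.
conn = snowflake.connector.connect(account="my_account", user="etl_user", password="...")
cur = conn.cursor()

# A stream records row-level changes on the source table since the last read.
cur.execute("CREATE STREAM IF NOT EXISTS pricing_stream ON TABLE raw.pricing")

# A task wakes on a schedule, but only runs when the stream has new data.
cur.execute("""
    CREATE TASK IF NOT EXISTS load_pricing
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('PRICING_STREAM')
    AS
      INSERT INTO curated.pricing
      SELECT * FROM pricing_stream WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK load_pricing RESUME")  # tasks are created suspended
```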
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
Senior Data Architect
Data engineer job in Oak Brook, IL
We are seeking a highly skilled and strategic Senior Data Solution Architect to join our IT Enterprise Data Warehouse team. This role is responsible for designing and implementing scalable, secure, and high-performing data solutions that bridge business needs with technical execution, including solutions for provisioning data to our cloud data platform using ingestion, transformation, and semantic layer techniques. Additionally, this position provides technical thought leadership and guidance to ensure that data platforms and pipelines effectively support ODS, analytics, reporting, and AI initiatives across the organization.
Key Responsibilities:
Architecture & Design:
Design end-to-end data architecture solutions including operational data stores, data warehouses, and real-time data pipelines.
Define standards and best practices for data modeling, integration, and governance.
Evaluate and recommend tools, platforms, and frameworks for data management and analytics.
Collaboration & Leadership:
Partner with business stakeholders, data engineers, data analysts, and other IT teams to translate business requirements into technical solutions.
Lead architecture reviews and provide technical guidance to development teams.
Advocate for data quality, security, and compliance across all data initiatives.
Implementation & Optimization:
Oversee the implementation of data solutions, ensuring scalability, performance, and reliability.
Optimize data workflows and storage strategies for cost and performance efficiency.
Monitor and troubleshoot data systems, ensuring high availability and integrity.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related field.
7+ years of experience in data architecture, data engineering, or related roles.
Strong expertise in cloud platforms (e.g., Azure, AWS, GCP) and modern data stack tools (e.g., Snowflake, Databricks).
Proficiency in SQL, Python, and data modeling techniques (e.g. Data Vault 2.0)
Experience with ETL/ELT tools, APIs, and real-time streaming technologies (e.g., dbt, Coalesce, SSIS, Datastage, Kafka, Spark).
Familiarity with data governance, security, and compliance frameworks
Preferred Qualifications:
Certifications in cloud architecture or data engineering (e.g., SnowPro Advanced: Architect).
Strong communication and stakeholder management skills.
Why Join Us?
Work on cutting-edge data platforms and technologies.
Collaborate with cross-functional teams to drive data-driven decision-making.
Be part of a culture that values innovation, continuous learning, and impact.
** This is a full-time, W2 position with Hub Group - We are NOT able to provide sponsorship at this time **
Salary: $135,000 - $175,000/year base salary
+ bonus eligibility
This is an estimated range based on the circumstances at the time of posting; however, it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand
Benefits
We offer a comprehensive benefits plan including:
Medical
Dental
Vision
Flexible Spending Account (FSA)
Employee Assistance Program (EAP)
Life & AD&D Insurance
Disability
Paid Time Off
Paid Holidays
BEWARE OF FRAUD!
Hub Group has become aware of online recruiting related scams in which individuals who are not affiliated with or authorized by Hub Group are using Hub Group's name in fraudulent emails, job postings, or social media messages. In light of these scams, please bear the following in mind
Hub Group will never solicit money or credit card information in connection with a Hub Group job application.
Hub Group does not communicate with candidates via online chatrooms such as Signal or Discord using email accounts such as Gmail or Hotmail.
Hub Group job postings are posted on our career site: ********************************
About Us
Hub Group is the premier, customer-centric supply chain company offering comprehensive transportation and logistics management solutions. Keeping our customers' needs in focus, Hub Group designs, continually optimizes and applies industry-leading technology to our customers' supply chains for better service, greater efficiency and total visibility. As an award-winning, publicly traded company (NASDAQ: HUBG) with $4 billion in revenue, our 6,000 employees and drivers across the globe are always in pursuit of "The Way Ahead" - a commitment to service, integrity and innovation. We believe the way you do something is just as important as what you do. For more information, visit ****************
Data Engineer
Data engineer job in Itasca, IL
Primary Location: Itasca, IL - Hybrid in Chicago's Northwest Suburbs
2 Days In-Office, 3 Days WFH
TYPE: Direct Hire / Permanent Role
MUST BE a U.S. Citizen or Green Card holder
The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.
What You Bring to the Role (Ideal Experience)
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
5+ years of experience as a Data Engineer.
3+ years of experience with the following:
Building and supporting data lakehouse architectures using Delta Lake and change data feeds (see the sketch after this list).
Working with PySpark and Python, with strong Object-Oriented Programming (OOP) experience to extend existing frameworks.
Designing data warehouse table architecture such as star schema or Kimball method.
Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code.
Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets.
Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments.
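To make the change-data-feed bullet concrete, here is an illustrative PySpark read of a Delta table's change feed; the table names and starting version are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided for you on Databricks

# Illustrative change-data-feed read; table names and version are placeholders.
# CDF must first be enabled on the source table:
#   ALTER TABLE silver.orders SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
changes = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 1001)
           .table("silver.orders"))

# Each row carries _change_type (insert, update_preimage, update_postimage, delete),
# so downstream tables can be maintained incrementally rather than fully rebuilt.
upserts = changes.filter(F.col("_change_type").isin("insert", "update_postimage"))
upserts.write.format("delta").mode("append").saveAsTable("gold.orders_incremental")
```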
Nice to Haves
Hands-on experience implementing data solutions using Microsoft Fabric.
Experience with machine learning (ML) and data science tools.
Knowledge of data governance and security best practices.
Experience in a larger IT environment with 3,000+ users and multiple domains.
Current industry certifications from Microsoft cloud/data platforms or equivalent certifications. One or more of the following is preferred:
Microsoft Certified: Fabric Data Engineer Associate
Microsoft Certified: Azure Data Scientist Associate
Microsoft Certified: Azure Data Fundamentals
Google Professional Data Engineer
Certified Data Management Professional (CDMP)
IBM Certified Data Architect - Big Data
What You'll Do (Skills Used in this Position)
Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
Extend and enhance existing OOP-based frameworks developed in Python and PySpark.
Partner with data scientists and analysts to define requirements and design robust data analytics solutions.
Ensure data quality and integrity through data cleansing, validation, and automated testing procedures.
Develop and maintain technical documentation, including requirements, design specifications, and test plans.
Implement and manage data integrations from multiple internal and external sources.
Optimize data workflows to improve performance, reliability, and reduce cloud consumption.
Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery.
Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric.
Provide technical leadership and coordination for global development and support teams.
Participate in creating a safe and healthy workplace by adhering to organizational safety protocols.
Support additional projects and initiatives as assigned by management.
AWS Data Engineer - PERM - Local to Illinois
Data engineer job in Naperville, IL
Resource 1 is in need of a Sr. AWS Data Engineer for a full-time/permanent position with our client in Naperville, IL. Candidates must be local to Illinois because future hybrid onsite work in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities and a profit-sharing bonus.
This position is focused on building modern data pipelines, integrations and back-end data solutions. Selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts and other engineers to design and deliver data solutions that power business insights and AI products.
Responsibilities:
Design and develop scalable data pipelines for ingestion, transformation and integration using AWS services.
Pull data from PostgreSQL and SQL Server to migrate to AWS (a minimal extract-and-land sketch follows this list).
Create and modify jobs in AWS and modify logic in SQL Server.
Create SQL queries, stored procedures and functions in PostgreSQL and Redshift.
Provide input on data modeling and schema design as needed.
Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
Support inbound/outbound data flows, including APIs, S3 replication and secured data.
Assist with data visualization/reporting as needed.
Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.
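A hedged sketch of the PostgreSQL extract-and-land step mentioned above; the connection string, table, and bucket are placeholders (writing to s3:// paths assumes the s3fs package alongside pandas/pyarrow):

```python
import pandas as pd
from sqlalchemy import create_engine

# Illustrative extract-and-land step; connection string and paths are placeholders.
pg = create_engine("postgresql://etl_user:...@pg-host:5432/orders_db")

# Pull the source table in chunks so large tables don't exhaust memory.
for i, chunk in enumerate(pd.read_sql("SELECT * FROM public.orders", pg, chunksize=100_000)):
    # Each chunk lands as a Parquet part file for downstream loading into Redshift.
    chunk.to_parquet(f"s3://landing-bucket/orders/part-{i:05d}.parquet", index=False)
```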
Qualifications:
5+ years of data engineering experience.
Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
Strong experience with SQL, Python and PySpark.
A background in supply chain, logistics or distribution would be a plus.
Experience with Power BI is a plus.
Junior Data Engineer
Data engineer job in Chicago, IL
Job Title - Junior Data Engineer
Duration - Full-time
Number of Positions - 8
Interview Process - iMocha test & 1 CG interview
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
Design and develop Python scripts for data transformation, automation, and integration tasks.
Develop and optimize SQL queries for data extraction, transformation, and loading.
Collaborate with data scientists, analysts, and business stakeholders
Ensure data integrity, security, and compliance with organizational standards.
Participate in code reviews and contribute to best practices in data engineering
Required Skills
3-5 years of professional experience in data engineering or related roles.
Strong proficiency in Databricks (including Spark-based data processing).
Strong programming skills in Python
Advanced knowledge of SQL for querying and data modeling.
Familiarity with Azure cloud and ADF
Understanding of ETL frameworks, data governance, and performance tuning.
Knowledge of CI/CD practices and version control (Git).
Exposure to BI tools (Power BI, Tableau) for data visualization
Mandatory Skills
Python, Databricks, SQL, ETL, Power BI & Tableau (good to have)
If you're interested, kindly share your resume at
****************************
Life At Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Salary Transparency
Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is $56,186 to $87,556 per year. This role may be eligible for other compensation including variable compensation, bonus, or commission. Full time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
Data Architect - Pharma
Data engineer job in Chicago, IL
MathCo
Role - Data/AI Engineering Manager
Onsite - Chicago - 4 days in office (Mandatory)
Industry - Pharma (Mandatory)
As platform architect/owner, you will:
Lead the end-to-end architecture, lifecycle, and governance of the AI/Analytics platform, defining standards, reusable components, and integration patterns.
Partner with AI/Data architects to enable scalable model deployment and enhance agentic orchestration.
Translate business needs into platform features, manage onboarding, documentation, and cross-functional collaboration for platform adoption.
Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability.
Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations.
Stay current on key AI platform developments and assess their impact on architecture and client strategy.
Coach others, recognize their strengths, and encourage them to take ownership of their personal development.
Skills Required
Experience in designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP)
Proven ability to carry out complex Proof of Concept (POC), pilot projects, and limited production rollouts for AI use-cases, focusing on developing new or improved techniques and procedures.
Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices
Years of Experience
Minimum of 8 years of relevant experience, preferably with a consulting background and experience with Pharma clients.
Senior Back End Developer - Distributed Systems (C# or Golang)
Data engineer job in Chicago, IL
Our client, a fast-growing organization developing secure, scalable technologies for next-generation AI applications, is seeking a Backend Engineer to join their core platform team.
In this role, you'll help build and refine the foundational services that power authentication, observability, data flows, and high-availability systems across a distributed ecosystem. This is an opportunity to work on complex backend challenges while shaping the infrastructure that supports mission-critical applications.
What You'll Do
Develop, enhance, and support backend services that form the foundation of the platform.
Build and maintain core authentication and authorization capabilities.
Apply principles of Domain-Driven Design to guide how services and components evolve over time.
Architect, extend, and support event-sourced systems to ensure durable, consistent operations at scale (a minimal sketch follows this list).
Participate in API design and integration efforts across internal and external stakeholders.
Implement and support messaging frameworks (e.g., NATS) to enable reliable service-to-service communication.
Maintain and improve observability tooling, including metrics, tracing, and logging, to ensure healthy system performance.
Work closely with infrastructure, DevOps, and engineering teams to ensure robust, secure, and maintainable operations.
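Event sourcing is easiest to see in miniature. The role itself targets C# or Go, but the idea is language-agnostic, so here is a toy Python illustration in which state is never stored directly; it is replayed as a fold over an append-only event log:

```python
from dataclasses import dataclass, field

# Toy event-sourcing illustration; real systems persist the log durably.
@dataclass
class Event:
    kind: str
    amount: int

@dataclass
class Account:
    events: list = field(default_factory=list)

    def deposit(self, amount: int):
        self.events.append(Event("deposited", amount))  # append, never mutate history

    def withdraw(self, amount: int):
        self.events.append(Event("withdrew", amount))

    @property
    def balance(self) -> int:
        # Current state is a pure fold over the event stream.
        return sum(e.amount if e.kind == "deposited" else -e.amount
                   for e in self.events)

acct = Account()
acct.deposit(100)
acct.withdraw(30)
assert acct.balance == 70  # replaying the log reproduces the same state
```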
What You Bring
3-6+ years of experience as a backend engineer.
Strong knowledge of distributed systems and microservices.
Proficiency in at least one modern backend programming language (C#, Go, Rust, etc.).
Practical experience with IAM concepts and authentication/authorization frameworks.
Exposure to event-sourcing patterns, DDD, and common messaging systems (e.g., NATS, Kafka, SNS, RabbitMQ).
Familiarity with Redis or similar in-memory caching technologies.
Experience working with observability tools such as Prometheus, Jaeger, ELK, or Application Insights.
Understanding of cloud-native environments and deployment workflows (AWS, Azure, or GCP).
Why This Role Is Compelling
You'll contribute directly to a foundational platform used across an entire organization-impacting performance, reliability, and security at every layer. If you enjoy solving distributed-system challenges and working on complex, high-scale backend services, this is a strong match.
#BackendEngineering #DistributedSystems #PlatformEngineering #CloudNative #SoftwareJobs
AI Software Engineer
Data engineer job in Chicago, IL
Be a part of our success story. Launch offers talented and motivated people the opportunity to do the best work of their lives in a dynamic and growing company. Through competitive salaries, outstanding benefits, internal advancement opportunities, and recognized community involvement, you will have the chance to create a career you can be proud of. Your new trajectory starts here at Launch!
Launch is actively seeking qualified, energetic engineers with passion for building solutions leveraging new and emerging technologies related to AI. This is a software engineering role specializing in applications with use cases powered by AI solutions, especially Generative AI, such as LLM integration, vector embeddings, real-time inference, and semi-automated, human-in-the-loop workflows. This role offers an exciting opportunity to be at the forefront of AI technology, working on diverse projects that drive real-world impact. If you're passionate about AI and have the technical expertise to back it up, this role may be perfect for you!
Responsibilities Include:
Write high-quality, maintainable code in languages such as Python, JavaScript, C#, or others relevant to AI development
Work closely with and in cross-functional teams including software engineers, project managers, designers, QA, data engineers, and data scientists
Integrate with a variety of APIs, services, and technologies to bring pre-trained models and other capabilities to bear, such as cloud-based vector databases (see the sketch after this list)
Develop APIs and interfaces to enable easy interaction between AI models and client applications
Fine-tune and/or customize integration with pre-trained models to meet unique client needs
Handle data preprocessing, cleaning, and augmentation to enhance model performance
Implement strategies for managing and securing sensitive client data
Monitor and optimize the performance of AI model integrations to optimize efficiency and accuracy
Provide technical guidance and support to clients and internal stakeholders
Stay up-to-date with the latest advancements in NLP and machine learning
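A toy illustration of the retrieval step behind many of these integrations: embed documents once, then rank them against a query embedding by cosine similarity. The embed() function here is a random stand-in, not a real model or any particular vendor's API:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder "embedding": deterministic per text, random per seed; a real
    # system would call an embedding model or service here.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = ["refund policy", "shipping times", "warranty terms"]
doc_vecs = [embed(d) for d in docs]

query_vec = embed("how long does delivery take?")
ranked = sorted(zip(docs, doc_vecs), key=lambda dv: cosine(query_vec, dv[1]),
                reverse=True)
print(ranked[0][0])  # best-matching document under this toy embedding
```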
Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field (strongly preferred)
Prior IT digital consulting experience is highly preferred
Proven experience in software development, with a focus on AI and machine learning
Hands-on experience with integrating language models into applications and platforms
Proficiency in programming languages such as Python, JavaScript, C#, or similar
Experience with AI frameworks and libraries (e.g., TensorFlow, PyTorch, Hugging Face Transformers)
Experience with Generative AI tooling (e.g., LangChain, Semantic Kernel)
Knowledge of API development and integration
Strong understanding of NLP concepts and techniques, including language modeling, text generation, and sentiment analysis
Experience with large-scale language models (e.g., GPT, BERT) and their practical applications
Excellent analytical and problem-solving skills with a keen ability to troubleshoot and resolve technical issues
Strong verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders
Proven ability to work effectively in a team environment and manage client relationships
Experience in project management and ability to handle multiple tasks simultaneously
Experience with cloud platforms (e.g., AWS, Azure, GCP) and containerization tools (e.g., Docker) a plus
Familiarity with agile development methodologies and DevOps practices
Innovative and curious, with a passion for emerging technologies and continuous learning
Detail-oriented and committed to delivering high-quality results
Compensation & Benefits:
As an employee at Launch, you will grow your skills and experience through a variety of exciting project work (across industries and technologies) with some of the top companies in the world! Our employees receive full benefits-medical, dental, vision, short-term disability, long-term disability, life insurance, and matched 401k. We also have an uncapped, take-what-you-need PTO policy. The anticipated base wage range for this role is $155,000 - $175,000. Education and experience will be highly considered, and we are happy to discuss your wage expectations in more detail throughout our internal interview process.
Senior DevOps Engineer
Data engineer job in Chicago, IL
The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S. and has supported over 20,000 healthcare professionals and team members with close to 1,500 health and wellness offices across 48 states in four distinct categories: dental care, urgent care, medical aesthetics, and animal health. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Chapter Aesthetic Studio, and Lovet. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.
Job Description:
We are seeking a skilled and proactive Delivery Engineer to join our IT Platform team. As a Delivery Engineer, you will be responsible for ensuring the successful delivery, deployment, and integration of IT platform solutions, aligning them with business needs and technical specifications. The role involves managing the end-to-end deployment process, troubleshooting issues, and collaborating with cross-functional teams to implement robust platform solutions. The ideal candidate will possess a strong background in systems engineering, cloud technologies, and automation, with a focus on delivering scalable and high-performance IT infrastructure.
Key Responsibilities:
Platform Delivery and Deployment:
Coordinate and execute the delivery of IT platform solutions, ensuring smooth implementation in production environments.
Manage end-to-end deployment processes, including configuration, testing, and integration of software and infrastructure solutions.
Collaborate with stakeholders to define deployment requirements and ensure all technical and operational needs are met.
Platform Integration and Automation:
Integrate new platform components with existing infrastructure, ensuring seamless interoperability and performance optimization.
Automate routine delivery and deployment tasks to increase efficiency and minimize manual interventions.
Utilize tools such as CI/CD pipelines, infrastructure as code (IaC), and automation platforms to streamline delivery processes.
Collaboration with Development and Operations Teams:
Work closely with development teams to understand platform requirements and ensure that solutions are deployed efficiently and effectively.
Collaborate with operations teams to ensure that platform deployments are stable, scalable, and meet performance targets.
Serve as a technical liaison between development, operations, and other cross-functional teams to ensure alignment across the delivery lifecycle.
Troubleshooting and Issue Resolution:
Identify, troubleshoot, and resolve issues during the delivery and deployment of platform solutions.
Conduct root cause analysis of delivery failures and work with teams to implement preventive measures.
Provide ongoing support for platform deployments, ensuring stability and performance post-deployment.
Documentation and Reporting:
Maintain detailed documentation of deployment procedures, configuration settings, and troubleshooting steps.
Create deployment reports and provide regular updates on platform delivery status, including risk assessments and mitigation plans.
Develop and maintain knowledge base articles to aid in future deployments and team knowledge sharing.
Performance Monitoring and Optimization:
Monitor the performance of deployed platforms, ensuring that they meet defined SLAs and performance standards.
Analyze performance metrics and recommend optimizations to improve the efficiency and reliability of platform solutions.
Implement monitoring tools and dashboards to provide real-time visibility into platform health and delivery success.
Security and Compliance:
Ensure that all platform deliveries are compliant with security policies, data protection regulations, and industry standards.
Work with the security team to implement and maintain platform security best practices during deployment and post-deployment phases.
Conduct security assessments and audits as part of the deployment process.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
3-5 years of experience in IT platform engineering, systems administration, or a similar role.
Strong knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) and containerization technologies (e.g., Docker, Kubernetes).
Experience with deployment automation tools and CI/CD pipelines (e.g., Jenkins, GitLab, Terraform).
Familiarity with infrastructure as code (IaC) and configuration management tools (e.g., Ansible, Puppet, Chef).
Proficient in scripting and automation languages (e.g., Python, PowerShell, Bash).
Strong troubleshooting and problem-solving skills with a proactive and analytical mindset.
Ability to work collaboratively with cross-functional teams in a fast-paced environment.
Excellent communication skills, both verbal and written, with the ability to convey complex technical concepts to non-technical stakeholders.
Preferred Skills:
Experience with DevOps practices and tools.
Knowledge of networking, load balancing, and database technologies.
Experience in platform performance tuning and optimization.
Familiarity with security practices and compliance frameworks such as ISO 27001, SOC 2, and GDPR.
Relevant certifications such as AWS Certified Solutions Architect, Azure Administrator, or Google Cloud Professional Engineer are a plus.
Annual Salary Range: $125,000-$150,000/year, with a generous benefits package that includes paid time off, health, dental, vision, and 401(k) savings plan with match.
If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
Director of Automation and SDET
Data engineer job in Chicago, IL
***This role is bonus eligible***
Prestigious Financial Institution is currently seeking a Director of Automation and SDET with AI/ML experience. The candidate will be responsible for defining, driving, and scaling enterprise-wide test automation and quality engineering practices. This role will architect and implement advanced automation solutions across applications, data, and platforms, enable adoption of best practices, and establish governance, metrics, and tools. The role combines technical expertise with strong leadership and stakeholder collaboration skills to deliver next-generation automation infused with AI capabilities.
Responsibilities:
Define and execute the enterprise automation strategy aligned with business and technology modernization goals.
Drive adoption of automation in all phases of testing, including automated regression and smoke tests, to improve quality and accelerate testing (a minimal example follows this list).
Implement automated quality gates, pre/post deployment checks and shift-left testing.
Architect scalable, reusable automation frameworks covering UI, API, microservices, data pipelines, Kafka/event driven systems, batch jobs, reports and databases.
Define standards for BDD, contract testing, CI/CD integration, synthetic data generation and environment-agnostic test automation.
Establish tagging and traceability across automation framework, Jira, Confluence, Test management tools, CI/CD pipelines and Splunk.
Introduce and scale synthetic test data management, environment/service virtualization for complex integration testing.
Envision and implement AI/ML and Generative AI infused solutions for test case generation, test data generation, automation script generation and quality insights.
Build quality engineering and automation center of excellence to drive training, reusable asset libraries and knowledge management artifacts.
Partner with development, product, DevOps and Platform Engineering leaders to embed automation into all stages of SDLC.
Define, monitor and report KPIs/OKRs for automation outcomes to executives and product owners.
Drive compliance with industry standards, regulatory requirements, and audit readiness across automation and QE practices.
Manage a team of people managers, individual contributors, and consultants/contractors.
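For flavor, here is a minimal pytest smoke check of the kind such a regression gate might run after deployment; the base URL and endpoints are placeholders:

```python
import pytest
import requests

# Illustrative post-deployment smoke check; the URL is a placeholder.
BASE_URL = "https://api.example.internal"

@pytest.mark.smoke
@pytest.mark.parametrize("path", ["/health", "/ready"])
def test_service_endpoints_respond(path):
    # A smoke gate verifies the service is up before deeper regression runs.
    resp = requests.get(f"{BASE_URL}{path}", timeout=5)
    assert resp.status_code == 200
```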
Qualifications:
Minimum of fifteen (15) years of IT experience with ten (10)+ years in test automation.
Proven track record of leading enterprise-scale automation initiatives in complex, distributed environments (microservices, cloud, batch applications, data, MQ, Kafka event driven systems).
Hands-on experience with service virtualization, synthetic test data management.
Strong hands-on expertise in testing and test automation tools and frameworks including Jira, BDD, Selenium, Cucumber, REST-assured, JMeter, Playwright.
Strong programming experience in Java and Python.
Deep understanding of DevOps, CI/CD pipelines (Jenkins, Harness, GitHub), cloud platforms (AWS, Azure) and containerized environments (Kubernetes, Docker).
Experience with Kafka/event-driven testing, large data set validations
Experience with Agile development processes for enterprise software solutions
Strong background in metrics-driven quality reporting and risk-based decision making.
Strong organizational leadership skills
Ability to manage multiple, competing priorities and make decisions quickly
Knowledgeable about industry trends, best practices, and change management
Strong communication skills with the ability to communicate and interact with a variety of internal/external customers, coworkers, and Executive Management
Strong work ethic, hands-on, detail oriented with a customer service mentality
Team player, self-driven, motivated, and able to work under pressure
Results-oriented and demonstrated record of developing initiatives that impact productivity
Technical Skills:
Proficiency with modern quality engineering tools including Jira, Jenkins, automation frameworks, test management tools.
Software QA methodologies (requirements analysis, test planning, functional testing, usability testing, performance testing, etc.)
Familiarity with AI/ML/GenAI Solutions in QE.
Utilizing best practices in software engineering, software test automation, test management tools, and defect tracking software
Past/current experience of 3+ years working on a large-scale cloud-native project.
Experience with cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS's VPCs, Security Groups, and EC2.
Education and/or Experience:
BS degree in Computer Science or Information Systems Management or a similar technical field
10+ years of experience in Quality Assurance space preferably on complex systems and large programs.
ML Engineer only W2 and Local
Data engineer job in Chicago, IL
Hello
One of my clients is hiring AWS Engineers in Denver, CO and Chicago, IL for a long-term contract.
Role: AWS Engineer with Bedrock & AI/ML
Type: Long-term contract
Pay: On my client's payroll directly - W2 only
Note: Only independent candidates are eligible to apply for this role.
Must Haves:
5+ years of experience in cloud engineering (AWS), including AWS Bedrock and related AI/ML services
AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer)
Experience with networking, security in cloud environments
Experience with Helm charts and Kubernetes deployments
CI/CD pipelines and DevOps experience
Plusses:
Enterprise experience
Knowledge of observability tools (e.g., Prometheus, Grafana)
Day to Day:
Design, deploy, and manage cloud infrastructure primarily on AWS
Configure and maintain AWS Bedrock services for generative AI integrations (a minimal invocation sketch follows this list)
Implement and manage Terraform scripts for infrastructure automation
Utilize Helm charts for Kubernetes-based deployments
Collaborate with vendor teams to ensure smooth SaaS application deployment and integration
Monitor system performance, troubleshoot issues, and optimize cloud resources
Ensure compliance with security best practices and organizational standards
Document architecture, processes, and operational procedures
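A minimal Bedrock invocation sketch using boto3; the model ID and request payload are model-specific, so both are placeholders here:

```python
import json
import boto3

# Minimal Bedrock invocation sketch; model ID and payload schema vary by model,
# so treat both as placeholders for whatever model is approved in your account.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

payload = {"prompt": "\n\nHuman: Summarize our deployment runbook.\n\nAssistant:",
           "max_tokens_to_sample": 300}

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # placeholder model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps(payload))

result = json.loads(response["body"].read())
print(result.get("completion"))
```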
Thanks & Regards
Rakesh kumar
Work : ************ , Ext :1002 | Email : **************************
Web: ***********************
IT Software Engineer 5
Data engineer job in Chicago, IL
Cullerton Group has a new opportunity for an IT Software Engineer 5. The work will be done hybrid in Chicago, IL, with 2 days per week onsite required; candidates must also be comfortable with a future potential transition to 5 days onsite. This is a long-term 12-month position that can lead to permanent employment with our client. Compensation is up to $100.43/hr + full benefits (vision, dental, health insurance, 401k, and holiday pay).
Job Summary
Cullerton Group is seeking two highly experienced Lead Software Engineers to drive the design, development, testing, and deployment of enterprise-scale backend software systems. These engineers will take ownership of complex technical challenges, lead development teams, perform code reviews, and guide best practices in modern application architecture. The role is both hands-on and leadership-focused, with responsibility for mentoring developers, resolving complex system issues, and collaborating with product owners to deliver iterative, high-value features. These positions are ideal for senior engineers with deep backend, distributed systems, and cloud experience who thrive in a hybrid, fast-paced environment.
Key Responsibilities
• Lead design, development, deployment, and testing of backend software systems and enterprise applications
• Mentor and guide junior and mid-level engineers; review code and enforce engineering best practices
• Serve as a technical lead for development and support teams, owning complex programming and project assignments
• Troubleshoot complex application and technical problems, including after-hours or weekend escalations
• Work independently on infrastructure and system components used across multiple applications
• Collaborate with product owners to develop and execute iterative delivery plans
• Drive development of structured application/interface code, documentation, and user guides
• Communicate with internal customers and end users to validate design, functionality, and integration
• Lead development of new functionality in a cross-functional Agile team environment
• Conduct integrated and customer-acceptance testing to ensure quality, accuracy, and completeness of solutions
Required Qualifications
• Bachelor's degree in Computer Science, Electrical Engineering, or related field (required)
• 10+ years of software engineering experience (or 8+ years with a Master's degree)
• 8+ years designing and developing software in Java or Scala
• 7+ years building backend applications using Spring Framework, Hibernate, and enterprise design patterns
• 7+ years working with relational and non-relational databases and caching frameworks
• Strong leadership background, including experience leading development initiatives and mentoring engineers
• Excellent verbal and written communication skills
Preferred Qualifications
• Experience designing, developing, deploying, and maintaining software at scale
• Strong understanding of architectural patterns (Microservices, MVC, event-driven, etc.)
• Experience deploying via CI/CD tools such as Jenkins, GoCD, Azure DevOps, etc.
• Experience with cloud platforms such as AWS or Azure
• Experience with message brokers such as Kafka, RabbitMQ, SQS, Kinesis, SNS
• Experience building and maintaining REST APIs and API gateways (Apigee, AWS API Gateway, Azure API Gateway)
• Knowledge of batch or stream processing (Spark, Flink, Akka, Storm)
• Experience with TDD/BDD, Selenium, Cucumber, and automated pipeline integration
• Experience with datastores such as Postgres, MongoDB, Cassandra, Redis, Elasticsearch, Oracle, MySQL
• Experience working in Linux/Unix environments
• Experience with front-end state management libraries (Redux)
• Strong understanding of computer science fundamentals (data structures, algorithms)
• Demonstrated leadership on small- and medium-scale strategic projects
Why This Role?
This role offers the opportunity to lead highly technical software development initiatives within a collaborative and impactful engineering organization. You will guide modern application development, influence system architecture, and mentor engineers while contributing to solutions used across multiple business areas. Cullerton Group provides a professional environment with long-term stability, strong technical challenges, expansive cloud and backend development exposure, and meaningful opportunities for leadership and career growth.
Software Engineer
Data engineer job in Chicago, IL
TBSCG is a modern consulting and engineering company trusted by well‑known enterprise brands. We design, build, and support digital experiences and platforms across financial services, manufacturing, technology, public sector and global consumer brands. We combine the feel of a close‑knit, supportive team with the impact and credibility of large‑scale, high‑visibility programs.
About the Role
We're looking for a Full‑Stack Engineer who enjoys working across the stack - from backend services and APIs to modern frontends. You'll build end‑to‑end solutions that power digital platforms for enterprise clients, working with both Java/Node.js and React.
What You'll Do
• Build features end‑to‑end across backend and frontend
• Write clean, modular code that is well‑tested and maintainable
• Work with APIs, headless/CMS platforms, cloud services and integrations
• Participate in solution design and contribute to technical choices
• Collaborate with architects, designers and engineers across disciplines
• Help improve engineering standards, tooling and reusable components
Must‑Have
• Solid web fundamentals & API understanding (HTTP, REST, JSON)
• Experience in TypeScript, React, and Next.js
• Git, secure development mindset, and CI/CD familiarity
• Ability to deliver end‑to‑end features with some autonomy
Useful to Have
• Experience with Java.
• Terraform
• SQL/NoSQL; Docker; cloud‑ready development
• Automated testing across front & backend
Bonus
• Integrations with CMS/DXP, DAM, CRM or e‑commerce
• Magnolia CMS + React headless or hybrid experience
• AWS cloud experience (backend or frontend delivery)
• Consulting or client‑facing experience
Please note that TBSCG does not provide visa sponsorship or assistance.
If you would like to know more about how your personal data is used in relation to the recruitment process, please see our Recruitment Privacy Policy (******************************************)
TBSCG participates in the E-Verify program to verify the employment eligibility of all new hires. If you are selected and hired, your eligibility to work in the United States will be verified within the first three days of employment.