Regulatory Engineer
Data engineer job in Cordova, IL
WHO WE ARE
As the nation's largest producer of clean, carbon-free energy, Constellation is focused on our purpose: accelerating the transition to a carbon-free future. We have been the leader in clean energy production for more than a decade, and we are cultivating a workplace where our employees can grow, thrive, and contribute.
Our culture and employee experience make it clear: We are powered by passion and purpose. Together, we're creating healthier communities and a cleaner planet, and our people are the driving force behind our success. At Constellation, you can build a fulfilling career with opportunities to learn, grow and make an impact. By doing our best work and meeting new challenges, we can accomplish great things and help fight climate change. Join us to lead the clean energy future.
The Senior Regulatory Engineer position is based out of our Quad Cities Generating Station in Cordova, IL.
TOTAL REWARDS
Constellation offers a wide range of benefits and rewards to help our employees thrive professionally and personally. We provide competitive compensation and benefits that support both employees and their families, helping them prepare for the future. In addition to highly competitive salaries, we offer a bonus program, a 401(k) with company match, an employee stock purchase program, comprehensive medical, dental, and vision benefits (including a robust wellness program), paid time off for vacation, holidays, and sick days, and much more.
***This Engineering role can be filled at the Mid-level or Senior Engineer level. Please see minimum qualifications list below for each level***
The expected salary range varies based on experience and comes with a comprehensive benefits package that includes a bonus and 401(k).
Mid-Level - $94,500 - $105,000
Sr Level - $124,200 - $138,000
PRIMARY PURPOSE OF POSITION
Performs advanced regulatory/technical problem solving in support of nuclear plant operations. Responsible for regulatory/technical decisions. Possesses excellent knowledge in functional discipline and its practical application and has detailed knowledge of applicable industry codes and regulations.
PRIMARY DUTIES AND ACCOUNTABILITIES
Provide in-depth regulatory/technical expertise to develop, manage and implement regulatory analyses, activities and programs.
Provide regulatory/technical expertise and consultation through direct involvement to identify and resolve regulatory issues.
Provide complete task management of regulatory issues.
Perform regulatory tasks as assigned by supervision.
Accountable for the accuracy, completeness, and timeliness of work, ensuring proper licensing basis management and assuring that standard design criteria, practices, procedures, regulations, and codes are used in the preparation of products.
Perform independent research, reviews, studies and analyses in support of regulatory/technical projects and programs.
Recommend new concepts and techniques to improve performance, simplify operation, reduce costs, reduce regulatory burden, correct regulatory non-compliances, or comply with changes in codes or regulations.
All other job assignments and/or duties pursuant to company policy or as directed by management, including but not limited to: Emergency Response duties and/or coverage, Department duty coverage and/or call-out, and other assigned positions.
MINIMUM QUALIFICATIONS for Mid-level E02 Engineer
Bachelor's degree in Engineering with 1 year of relevant position experience OR
Associate degree in Engineering with a minimum of 3 years of relevant experience OR
High school diploma (or equivalent) with at least 5 years of relevant experience
Effective written and oral communication skills
Maintain minimum access requirement or unescorted access requirements, as applicable, and favorable medical examination and/or testing in accordance with position duties
MINIMUM QUALIFICATIONS for Senior E03 Engineer
Bachelor's degree in Engineering with 5 years of relevant position experience OR
Associate's degree in Engineering with 7 years of experience OR
High School Diploma or Equivalent with 8 years of experience
Effective written and oral communication skills
Maintain minimum access requirement or unescorted access requirements, as applicable, and favorable medical examination and/or testing in accordance with position duties
PREFERRED QUALIFICATIONS
Previous Senior Reactor Operator (SRO) license/certification
1 year of nuclear power experience
NRC experience
Advanced technical degree or related coursework
Regulatory related work experience or previous experience in a military or other government organization
Data Engineer
Data engineer job in Chicago, IL
Data Engineer - Build the Data Engine Behind AI Execution - Starting Salary $150,000
You'll be part architect, part systems designer, part execution partner - someone who thrives at the intersection of engineering precision, scalability, and impact.
As the builder behind the AI data platform, you'll turn raw, fragmented data into powerful, reliable systems that feed intelligent products. You'll shape how data flows, how it scales, and how it powers decision-making across AI, analytics, and product teams.
Your work won't be behind the scenes - it will be the foundation of everything we build.
You'll be joining a company built for builders. Our model combines AI consulting, venture building, and company creation into one execution flywheel. Here, you won't just build data pipelines - you'll build the platforms that power real products and real companies.
You know that feeling when a data system scales cleanly under real-world pressure, when latency drops below target, when complexity turns into clarity - and everything just flows? That's exactly what you'll build here.
Ready to engineer the platform that powers AI execution? Let's talk.
No up-to-date resume required.
Data Engineer
Data engineer job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four - one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases (see the sketch after this list)
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
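To make the first bullet above concrete, here is a minimal sketch of normalizing a client Excel export in Python with pandas. The stack, file layout, and the amount column are assumptions for illustration, not details taken from this posting.

```python
import pandas as pd

def load_and_normalize(path: str) -> pd.DataFrame:
    """Load one client Excel export and normalize it for downstream joins."""
    df = pd.read_excel(path, sheet_name=0)
    # Standardize column names so sheets from different clients line up.
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    # Drop fully empty rows, which Excel exports often carry.
    df = df.dropna(how="all")
    # Coerce a numeric field; bad cells become NaN for later validation.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # hypothetical column
    return df
```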
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python + JS
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founder and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Big Data Consultant
Data engineer job in Chicago, IL
Job Title: Big Data Engineer
Employment Type: W2 Contract
Detailed Job Description:
We are seeking a skilled and experienced Big Data Platform Engineer with 7+ years of experience and a strong background in both development and administration of big data ecosystems. The ideal candidate will be responsible for designing, building, maintaining, and optimizing scalable data platforms that support advanced analytics, machine learning, and real-time data processing.
Key Responsibilities:
Platform Engineering & Administration:
• Install, configure, and manage big data tools such as Hadoop, Spark, Kafka, Hive, HBase, and others.
• Monitor cluster performance, troubleshoot issues, and ensure high availability and reliability.
• Implement security policies, access controls, and data governance practices.
• Manage upgrades, patches, and capacity planning for big data infrastructure.
Development & Data Engineering:
• Design and develop scalable data pipelines using tools like Apache Spark, Flink, NiFi, or Airflow (see the sketch after this list).
• Build ETL/ELT workflows to ingest, transform, and load data from various sources.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with data scientists and analysts to support model deployment and data exploration.
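As a rough illustration of the pipeline item above, a minimal Spark Structured Streaming job in Python that ingests a Kafka topic and lands Parquet files. The broker address, topic name, and storage paths are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Subscribe to a Kafka topic (hypothetical broker and topic).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers binary key/value; cast to strings for downstream parsing.
parsed = raw.select(F.col("key").cast("string"),
                    F.col("value").cast("string").alias("payload"),
                    "timestamp")

# Land the stream as Parquet; a checkpoint makes the job restartable.
query = (parsed.writeStream.format("parquet")
         .option("path", "s3a://lake/raw/events")              # hypothetical path
         .option("checkpointLocation", "s3a://lake/_chk/events")
         .start())
query.awaitTermination()
```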
Data Engineer
Data engineer job in Chicago, IL
We are seeking a highly skilled Data Engineer with strong expertise in Scala, AWS, and Apache Spark. The ideal candidate will have 7+ years of hands-on experience building scalable data pipelines, distributed processing systems, and cloud-native data solutions.
Key Responsibilities
Design, build, and optimize large-scale data pipelines using Scala and Spark.
Develop and maintain ETL/ELT workflows across AWS services.
Work on distributed data processing using Spark, Hadoop, or similar.
Build data ingestion, transformation, cleansing, and validation routines (see the sketch after this list).
Optimize pipeline performance and ensure reliability in production environments.
Collaborate with cross-functional teams to understand requirements and deliver robust solutions.
Implement CI/CD best practices, testing, and version control.
Troubleshoot and resolve issues in complex data flow systems.
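The cleansing/validation item above could look something like the following sketch. The posting asks for Scala; for consistency with the other sketches in this collection it is shown in PySpark, and the same DataFrame calls exist in Spark's Scala API. The input path, column names, and quarantine convention are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse-orders").getOrCreate()

orders = spark.read.parquet("s3a://lake/raw/orders")  # hypothetical input path

# Validation predicate: required key present and amount positive (hypothetical columns).
is_valid = F.col("order_id").isNotNull() & (F.col("amount") > 0)

cleansed = (orders
            .filter(is_valid)
            .dropDuplicates(["order_id"])
            .withColumn("ingested_at", F.current_timestamp()))

# Quarantine failing rows instead of dropping them silently.
orders.filter(~is_valid).write.mode("append").parquet("s3a://lake/quarantine/orders")

cleansed.write.mode("overwrite").parquet("s3a://lake/clean/orders")
```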
Required Skills & Experience
7+ years of Data Engineering experience.
Strong programming experience with Scala (must-have).
Hands-on experience with Apache Spark (core, SQL, streaming).
Solid experience with AWS cloud services (Glue, EMR, Lambda, S3, EC2, IAM, etc.).
High proficiency in SQL and relational/NoSQL data stores.
Strong understanding of data modeling, data architecture, and distributed systems.
Experience with workflow orchestration tools (Airflow, Step Functions, etc.).
Strong communication and problem-solving skills.
Preferred Skills
Experience with Kafka, Kinesis, or other streaming platforms.
Knowledge of containerization tools like Docker or Kubernetes.
Background in data warehousing or modern data lake architectures.
Senior Data Engineer
Data engineer job in Chicago, IL
This position requires visa-independent candidates.
Note: OPT, CPT, and H1B holders cannot be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue (see the sketch after this list)
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
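A minimal skeleton of the Glue duty flagged above, using the standard AWS Glue PySpark job structure. The catalog database, table, column mappings, and S3 path are hypothetical.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table names).
src = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Rename and cast columns on the way through.
mapped = ApplyMapping.apply(frame=src, mappings=[
    ("ORDER_ID", "string", "order_id", "string"),
    ("AMT", "double", "amount", "double"),
])

# Land curated Parquet in S3 (hypothetical path).
glueContext.write_dynamic_frame.from_options(
    frame=mapped, connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},
    format="parquet")

job.commit()
```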
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Engineer
Data engineer job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability (see the sketch after this list).
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
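As one possible shape for the pipeline work above, a short Python sketch that upserts staged pricing data in Snowflake via the snowflake-connector-python package. The account, credentials, warehouse, and table names are placeholders, and nothing in the posting prescribes this exact approach.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",       # hypothetical account identifier
    user="ETL_SVC",
    password="***",          # prefer key-pair auth or a secrets manager in practice
    warehouse="LOAD_WH",
    database="MARKET",
    schema="RAW",
)

# Upsert staged prices into the canonical table (hypothetical schema).
MERGE_SQL = """
MERGE INTO raw.prices t
USING staging.prices s
  ON t.security_id = s.security_id AND t.price_date = s.price_date
WHEN MATCHED THEN UPDATE SET t.close_px = s.close_px
WHEN NOT MATCHED THEN
  INSERT (security_id, price_date, close_px)
  VALUES (s.security_id, s.price_date, s.close_px)
"""

cur = conn.cursor()
try:
    cur.execute(MERGE_SQL)
    print(f"rows affected: {cur.rowcount}")
finally:
    cur.close()
    conn.close()
```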
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
Senior Data Engineer
Data engineer job in Chicago, IL
We are seeking a highly skilled Architect / Senior Data Engineer to design, build, and optimize our modern data ecosystem. The ideal candidate will have deep experience with AWS cloud services, Snowflake, and dbt, along with a strong understanding of scalable data architecture, ETL/ELT development, and data modeling best practices.
Job Title: Architect / Senior Data Engineer
Job location: Chicago or Michigan
Key Responsibilities
Architect, design, and implement scalable, reliable, and secure data solutions using AWS, Snowflake, and dbt.
Develop end-to-end data pipelines (batch and streaming) to support analytics, machine learning, and business intelligence needs.
Lead the modernization and migration of legacy data systems to cloud-native architectures.
Define and enforce data engineering best practices including coding standards, CI/CD, testing, and monitoring.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
Optimize Snowflake performance through query tuning, warehouse sizing, and cost management.
Establish and maintain data governance, security, and compliance standards across the data platform.
Mentor and guide junior data engineers, providing technical leadership and direction.
Required Skills & Qualifications
8+ years of experience in Data Engineering, with at least 3 years in a cloud-native data environment.
Hands-on expertise in AWS services such as S3, Glue, Lambda, Step Functions, Redshift, and IAM.
Strong experience with Snowflake - data modeling, warehouse design, performance optimization, and cost governance.
Proven experience with dbt (data build tool) - model development, documentation, and deployment automation.
Proficient in SQL, Python, and ETL/ELT pipeline development.
Experience with CI/CD pipelines, version control (Git), and workflow orchestration tools (Airflow, Dagster, Prefect, etc.).
Familiarity with data governance and security best practices, including role-based access control and data masking.
Strong understanding of data modeling techniques (Kimball, Data Vault, etc.) and data architecture principles.
Preferred Qualifications
AWS Certification (e.g., AWS Certified Data Analytics - Specialty, Solutions Architect).
Strong communication and collaboration skills, with a track record of working in agile environments.
Snowflake Data Engineer
Data engineer job in Chicago, IL
Join a dynamic team focused on building innovative data solutions that drive strategic insights for the business. This is an opportunity to leverage your expertise in Snowflake, ETL processes, and data integration.
Key Responsibilities
Develop Snowflake-based data models to support enterprise-level reporting.
Design and implement batch ETL pipelines for efficient data ingestion from legacy systems.
Collaborate with stakeholders to gather and understand data requirements.
Required Qualifications
Hands-on experience with Snowflake for data modeling and schema design.
Proven track record in developing ETL pipelines and understanding transformation logic.
Solid SQL skills to perform complex data transformations and optimization.
If you are passionate about building cutting-edge data solutions and want to make a significant impact, we would love to see your application!
#11290
Data Engineer
Data engineer job in Itasca, IL
Primary Location: Itasca, IL (hybrid, in Chicago's Northwest Suburbs)
2 Days In-Office, 3 Days WFH
TYPE: Direct Hire / Permanent Role
MUST BE a U.S. Citizen or Green Card holder
The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.
What You Bring to the Role (Ideal Experience)
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
5+ years of experience as a Data Engineer.
3+ years of experience with the following:
Building and supporting data lakehouse architectures using Delta Lake and change data feeds (see the sketch after this list).
Working with PySpark and Python, with strong Object-Oriented Programming (OOP) experience to extend existing frameworks.
Designing data warehouse table architecture such as star schema or Kimball method.
Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code.
Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets.
Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments.
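A brief sketch of the change-data-feed pattern named in the first item above, as it might look in a Databricks notebook (where `spark` is provided). The table name, starting version, and downstream aggregate are assumptions, and the source table must have `delta.enableChangeDataFeed` set to true.

```python
from pyspark.sql import functions as F

# Read row-level changes committed since version 12 (hypothetical version).
changes = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 12)
           .table("sales.orders"))          # hypothetical table name

# The feed adds _change_type, _commit_version, and _commit_timestamp columns;
# keep inserts and the post-update image of changed rows.
upserts = changes.filter(F.col("_change_type").isin("insert", "update_postimage"))

# Example downstream use: refresh a daily aggregate (hypothetical columns).
daily = upserts.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
```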
Nice to Haves
Hands-on experience implementing data solutions using Microsoft Fabric.
Experience with machine learning/ML and data science tools.
Knowledge of data governance and security best practices.
Experience in a larger IT environment with 3,000+ users and multiple domains.
Current industry certifications from Microsoft cloud/data platforms or equivalent certifications. One or more of the following is preferred:
Microsoft Certified: Fabric Data Engineer Associate
Microsoft Certified: Azure Data Scientist Associate
Microsoft Certified: Azure Data Fundamentals
Google Professional Data Engineer
Certified Data Management Professional (CDMP)
IBM Certified Data Architect - Big Data
What You'll Do (Skills Used in this Position)
Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
Extend and enhance existing OOP-based frameworks developed in Python and PySpark.
Partner with data scientists and analysts to define requirements and design robust data analytics solutions.
Ensure data quality and integrity through data cleansing, validation, and automated testing procedures.
Develop and maintain technical documentation, including requirements, design specifications, and test plans.
Implement and manage data integrations from multiple internal and external sources.
Optimize data workflows to improve performance and reliability and to reduce cloud consumption.
Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery.
Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric.
Provide technical leadership and coordination for global development and support teams.
Participate in creating a safe and healthy workplace by adhering to organizational safety protocols.
Support additional projects and initiatives as assigned by management.
Data Scientist
Data engineer job in Peoria, IL
Typical task breakdown:
- Assist with monthly reporting on team metrics, cost savings and tariff analysis
- Lead development of data analytics to assist category teams in making strategic sourcing decisions
Interaction with team:
- Will work as a support to multiple category teams
Team Structure
- Report to the MC&H Strategy Manager and collaborate with Category Managers and buyers
Work environment:
Office environment
Education & Experience Required:
- Years of experience: 3-5
- Degree requirement: Bachelor's degree
- Do you accept internships as job experience: Yes
Top 3 Skills
· Communicates effectively to develop standard procedures
· Applies problem-solving techniques across diverse procurement scenarios
· Analyzes procurement data to generate actionable insights
Additional Technical Skills
(Required)
- Proficient in Power BI, PROcure, and tools like CICT, Lognet, MRC, PO Inquiry, AoS
- Expertise in Snowflake and data mining
(Desired)
- Prior experience in Procurement
- Familiarity with monthly reporting processes, including ABP (Annual Business Plan) and RBM (Rolling Business Management)
- Demonstrated expertise in cost savings initiatives
- Machine Learning and AI
Soft Skills
(Required)
- Strong written and verbal communication skills
- Balances speed with accuracy in task execution
- Defines problems and evaluates their impact
(Desired)
- Emotional Intelligence
- Leadership and team management capabilities
Junior Data Engineer
Data engineer job in Chicago, IL
Job Title - Junior Data Engineer
Duration - Full-time
No. of Positions - 8
Interview Process - Imocha test & 1 CG Interview
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
Design and develop Python scripts for data transformation, automation, and integration tasks.
Develop and optimize SQL queries for data extraction, transformation, and loading.
Collaborate with data scientists, analysts, and business stakeholders
Ensure data integrity, security, and compliance with organizational standards.
Participate in code reviews and contribute to best practices in data engineering
Required Skills
3-5 years of professional experience in data engineering or related roles.
Strong proficiency in Databricks (including Spark-based data processing).
Strong programming skills in Python
Advanced knowledge of SQL for querying and data modeling.
Familiarity with Azure cloud and ADF
Understanding of ETL frameworks, data governance, and performance tuning.
Knowledge of CI/CD practices and version control (Git).
Exposure to BI tools (Power BI, Tableau) for data visualization
Mandatory Skills
Python, Databricks, SQL, ETL, Power BI & Tableau (good to have)
If you're interested, kindly share your resume at:
****************************
Life At Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Salary Transparency
Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is $56,186 to $87,556 per year. This role may be eligible for other compensation including variable compensation, bonus, or commission. Full-time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
Data Architect - Pharma
Data engineer job in Chicago, IL
MathCo
Role - Data/AI Engineering Manager
Onsite - Chicago - 4 days in office (Mandatory)
Industry - Pharma (Mandatory)
As platform architect/owner, you will:
Lead the end-to-end architecture, lifecycle, and governance of the AI/Analytics platform, defining standards, reusable components, and integration patterns.
Partner with AI/Data architects to enable scalable model deployment and enhance agentic orchestration.
Translate business needs into platform features, manage onboarding, documentation, and cross-functional collaboration for platform adoption.
Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability.
Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations.
Stay current on key AI platform developments and assess their impact on architecture and client strategy.
Coach others, recognize their strengths, and encourage them to take ownership of their personal development.
Skills Required
Experience in designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP)
Proven ability to carry out complex Proof of Concept (POC), pilot projects, and limited production rollouts for AI use-cases, focusing on developing new or improved techniques and procedures.
Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices
Years of Experience
Minimum of 8 years of relevant experience, preferably with a consulting background and experience with Pharma clients.
Engineer 1
Data engineer job in Decatur, IL
About the Role
Our Engineer 1 role is the perfect career launch pad for engineers looking to apply and build their experience. We provide an accelerated learning journey in manufacturing and chemical engineering. You get responsibility right away and the opportunity to make your mark through exposure to a wide range of processes and technologies.
Because we operate a collaborative, team approach, you'll be surrounded by expert colleagues, always ready to share their expertise and mentorship. Before long, you'll be working on your own ideas - taking process ownership and leading on projects. You'll also join our structured development program: the Engineering Ladder. As your skills grow, you will progress into more responsible, impactful roles.
Engineer 1 > Engineer 2 > Engineer 3 > Sr. Engineer > Principal Engineer > Sr. Leadership
Key responsibilities: Engineer 1
Ensure products meet our quality, cost, and efficiency requirements.
Monitor day to day production results.
Design, develop, and implement continuous process improvements.
Assist with process troubleshooting and developing corrective action plans.
Contribute to the design of moderately complex projects and/or handle elements of major projects.
About You
We're looking for engineers who are keen to learn fast in an environment of excellence. You will need:
BSc in Chemical Engineering or any related engineering field.
Strong written and verbal communication skills.
Working knowledge of basic chemical unit operations.
Any relevant internships, placements or work experience are useful but certainly not essential - we can quickly skill you up in chemical processes and production.
Total Rewards
The annual pay range estimated for this position is $80,000 - $99,000 and is bonus eligible.
Please note that while this range reflects the full spectrum of compensation available for this role, individual compensation will be determined based on several factors including your experience, skills, and alignment with the role's responsibilities. During the interview process there will be an opportunity to discuss how your background fits into the pay range.
We offer a comprehensive Total Rewards package that our U.S. colleagues and their families can count on, which includes:
Competitive Pay
Multiple Healthcare plan choices
Dental and vision insurance
A 401(k) plan with company and matching contributions
Short- and Long-Term Disability
Life, AD&D, and Voluntary Insurance plans
Paid holidays & vacation
Floating days off
Parental leave for new parents
Employee resource groups
Learning & development programs
Fun culture where you have an opportunity to shape our future
Senior Back End Developer - Distributed Systems (C# or Golang)
Data engineer job in Chicago, IL
Our client, a fast-growing organization developing secure, scalable technologies for next-generation AI applications, is seeking a Backend Engineer to join their core platform team.
In this role, you'll help build and refine the foundational services that power authentication, observability, data flows, and high-availability systems across a distributed ecosystem. This is an opportunity to work on complex backend challenges while shaping the infrastructure that supports mission-critical applications.
What You'll Do
Develop, enhance, and support backend services that form the foundation of the platform.
Build and maintain core authentication and authorization capabilities.
Apply principles of Domain-Driven Design to guide how services and components evolve over time.
Architect, extend, and support event-sourced systems to ensure durable, consistent operations at scale.
Participate in API design and integration efforts across internal and external stakeholders.
Implement and support messaging frameworks (e.g., NATS) to enable reliable service-to-service communication.
Maintain and improve observability tooling, including metrics, tracing, and logging, to ensure healthy system performance.
Work closely with infrastructure, DevOps, and engineering teams to ensure robust, secure, and maintainable operations.
What You Bring
3-6+ years of experience as a backend engineer.
Strong knowledge of distributed systems and microservices.
Proficiency in at least one modern backend programming language (C#, Go, Rust, etc.).
Practical experience with IAM concepts and authentication/authorization frameworks.
Exposure to event-sourcing patterns, DDD, and common messaging systems (e.g., NATS, Kafka, SNS, RabbitMQ).
Familiarity with Redis or similar in-memory caching technologies.
Experience working with observability tools such as Prometheus, Jaeger, ELK, or Application Insights.
Understanding of cloud-native environments and deployment workflows (AWS, Azure, or GCP).
Why This Role Is Compelling
You'll contribute directly to a foundational platform used across an entire organization, impacting performance, reliability, and security at every layer. If you enjoy solving distributed-system challenges and working on complex, high-scale backend services, this is a strong match.
#BackendEngineering #DistributedSystems #PlatformEngineering #CloudNative #SoftwareJobs
AI Software Engineer
Data engineer job in Chicago, IL
Be a part of our success story. Launch offers talented and motivated people the opportunity to do the best work of their lives in a dynamic and growing company. Through competitive salaries, outstanding benefits, internal advancement opportunities, and recognized community involvement, you will have the chance to create a career you can be proud of. Your new trajectory starts here at Launch!
Launch is actively seeking qualified, energetic engineers with passion for building solutions leveraging new and emerging technologies related to AI. This is a software engineering role specializing in applications with use cases powered by AI solutions, especially Generative AI, such as LLM integration, vector embeddings, real-time inference, and semi-automated, human-in-the-loop workflows. This role offers an exciting opportunity to be at the forefront of AI technology, working on diverse projects that drive real-world impact. If you're passionate about AI and have the technical expertise to back it up, this role may be perfect for you!
Responsibilities Include:
Write high-quality, maintainable code in languages such as Python, JavaScript, C#, or others relevant to AI development
Work closely with and in cross-functional teams including software engineers, project managers, designers, QA, data engineers, and data scientists
Integrate with a variety of APIs, services, and technologies, such as cloud-based vector databases, to bring pre-trained models and other capabilities to bear (see the sketch after this list)
Develop APIs and interfaces to enable easy interaction between AI models and client applications
Fine-tune and/or customize integration with pre-trained models to meet unique client needs
Handle data preprocessing, cleaning, and augmentation to enhance model performance
Implement strategies for managing and securing sensitive client data
Monitor and optimize the performance of AI model integrations to optimize efficiency and accuracy
Provide technical guidance and support to clients and internal stakeholders
Stay up-to-date with the latest advancements in NLP and machine learning
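To ground the vector-embedding item flagged above, a self-contained Python sketch of embedding-based retrieval using an open-source model. The model choice and example strings are illustrative only; this employer's actual stack is not specified.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Open-source embedding model (assumed choice for illustration).
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["Refund policy for enterprise plans", "How to rotate API keys"]
query = "resetting my credentials"

# Normalized embeddings make the dot product equal to cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)
q_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec
best = docs[int(np.argmax(scores))]
print(best)  # expected: "How to rotate API keys"
```

In production, the in-memory similarity scan would typically be replaced by a managed vector database, with the same embed-then-rank pattern.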
Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field (strongly preferred)
Prior IT digital consulting experience is highly preferred
Proven experience in software development, with a focus on AI and machine learning
Hands-on experience with integrating language models into applications and platforms
Proficiency in programming languages such as Python, JavaScript, C#, or similar
Experience with AI frameworks and libraries (e.g., TensorFlow, PyTorch, Hugging Face Transformers)
Experience with Generative AI tooling (e.g., LangChain, Semantic Kernel)
Knowledge of API development and integration
Strong understanding of NLP concepts and techniques, including language modeling, text generation, and sentiment analysis
Experience with large-scale language models (e.g., GPT, BERT) and their practical applications
Excellent analytical and problem-solving skills with a keen ability to troubleshoot and resolve technical issues
Strong verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders
Proven ability to work effectively in a team environment and manage client relationships
Experience in project management and ability to handle multiple tasks simultaneously
Experience with cloud platforms (e.g., AWS, Azure, GCP) and containerization tools (e.g., Docker) a plus
Familiarity with agile development methodologies and DevOps practices
Innovative and curious, with a passion for emerging technologies and continuous learning
Detail-oriented and committed to delivering high-quality results
Compensation & Benefits:
As an employee at Launch, you will grow your skills and experience through a variety of exciting project work (across industries and technologies) with some of the top companies in the world! Our employees receive full benefits-medical, dental, vision, short-term disability, long-term disability, life insurance, and matched 401k. We also have an uncapped, take-what-you-need PTO policy. The anticipated base wage range for this role is $155,000 - $175,000. Education and experience will be highly considered, and we are happy to discuss your wage expectations in more detail throughout our internal interview process.
Director of Automation and SDET
Data engineer job in Chicago, IL
***This role is bonus eligible***
Prestigious Financial Institution is currently seeking a Director of Automation and SDET with AI/ML experience. The candidate will be responsible for defining, driving, and scaling enterprise-wide test automation and quality engineering practices. This role will architect and implement advanced automation solutions across applications, data, and platforms; enable adoption of best practices; and establish the governance, metrics, and tools. The role combines technical expertise with strong leadership and stakeholder collaboration skills to deliver next-generation automation infused with AI capabilities.
Responsibilities:
Define and execute the enterprise automation strategy aligned with business and technology modernization goals.
Drive adoption of automation in all phases of testing including automated regression and smoke tests to improve quality and accelerate testing.
Implement automated quality gates, pre/post deployment checks and shift-left testing.
Architect scalable, reusable automation frameworks covering UI, API, microservices, data pipelines, Kafka/event-driven systems, batch jobs, reports, and databases.
Define standards for BDD, contract testing, CI/CD integration, synthetic data generation and environment-agnostic test automation.
Establish tagging and traceability across automation framework, Jira, Confluence, Test management tools, CI/CD pipelines and Splunk.
Introduce and scale synthetic test data management, environment/service virtualization for complex integration testing.
Envision and implement AI/ML and Generative AI infused solutions for test case generation, test data generation, automation script generation and quality insights.
Build quality engineering and automation center of excellence to drive training, reusable asset libraries and knowledge management artifacts.
Partner with development, product, DevOps and Platform Engineering leaders to embed automation into all stages of SDLC.
Define, monitor and report KPIs/OKRs for automation outcomes to executives and product owners.
Drive compliance with industry standards, regulatory requirements, and audit readiness across automation and QE practices.
Manages a team of people managers, individual contributors, and consultants/contractors
Qualifications:
Minimum of fifteen (15) years of IT experience, with ten (10)+ years in test automation.
Proven track record of leading enterprise-scale automation initiatives in complex, distributed environments (microservices, cloud, batch applications, data, MQ, Kafka event-driven systems).
Hands-on experience with service virtualization, synthetic test data management.
Strong hands-on expertise in testing and test automation tools and frameworks including Jira, BDD, Selenium, Cucumber, REST-assured, JMeter, Playwright.
Strong programming experience in Java and Python.
Deep understanding of DevOps, CI/CD pipelines (Jenkins, Harness, GitHub), cloud platforms (AWS, Azure) and containerized environments (Kubernetes, Docker).
Experience with Kafka/event-driven testing, large data set validations
Experience with Agile development processes for enterprise software solutions
Strong background in metrics-driven quality reporting and risk-based decision making.
Strong organizational leadership skills
Ability to manage multiple, competing priorities and make decisions quickly
Knowledgeable about industry trends, best practices, and change management
Strong communication skills with the ability to communicate and interact with a variety of internal/external customers, coworkers, and Executive Management
Strong work ethic, hands-on, detail oriented with a customer service mentality
Team player, self-driven, motivated, and able to work under pressure
Results-oriented and demonstrated record of developing initiatives that impact productivity
Technical Skills:
Proficiency with modern quality engineering tools including Jira, Jenkins, automation frameworks, test management tools.
Software QA methodologies (requirements analysis, test planning, functional testing, usability testing, performance testing, etc.)
Familiarity with AI/ML/GenAI Solutions in QE.
Utilizing best practices in software engineering, software test automation, test management tools, and defect tracking software
Past/current experience of 3+ years working on a large-scale cloud-native project. Experience with cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS's VPCs, Security Groups, and EC2.
Education and/or Experience:
BS degree in Computer Science or Information Systems Management or a similar technical field
10+ years of experience in Quality Assurance space preferably on complex systems and large programs.
ML Engineer only W2 and Local
Data engineer job in Chicago, IL
Hello
One of my clients is hiring AWS Engineers in Denver, CO & Chicago, IL for a long-term contract.
Role: AWS Engineer with Bedrock & AI/ML
Type: Long-term Contract
Pay: On my client's payroll directly - W2 only
Note: Only independent candidates are eligible to apply for this role.
Must Haves:
5+ years of experience in cloud engineering (AWS), including AWS Bedrock and related AI/ML services
AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer)
Experience with networking, security in cloud environments
Experience with Helm charts and Kubernetes deployments
CI/CD pipelines and DevOps experience
Plusses:
Enterprise experience
Knowledge of observability tools (e.g., Prometheus, Grafana)
Day to Day:
Design, deploy, and manage cloud infrastructure, primarily on AWS
Configure and maintain AWS Bedrock services for generative AI integrations (see the sketch after this list)
Implement and manage Terraform scripts for infrastructure automation
Utilize Helm charts for Kubernetes-based deployments
Collaborate with vendor teams to ensure smooth SaaS application deployment and integration
Monitor system performance, troubleshoot issues, and optimize cloud resources
Ensure compliance with security best practices and organizational standards
Document architecture, processes, and operational procedures
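A minimal boto3 sketch of the Bedrock item flagged above. The region, model ID, and request body follow Bedrock's Anthropic messages format as an assumed example; model access must be enabled in the target account.

```python
import json
import boto3

# Bedrock runtime client (region is a placeholder).
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic messages request body (prompt is illustrative).
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our deployment runbook."}],
})

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID; verify availability
    body=body,
)

# The response body is a stream; parse the JSON payload for the generated text.
print(json.loads(resp["body"].read())["content"][0]["text"])
```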
Thanks & Regards
Rakesh Kumar
Work : ************ , Ext :1002 | Email : **************************
Web: ***********************
Software Engineer
Data engineer job in Chicago, IL
TBSCG is a modern consulting and engineering company trusted by well‑known enterprise brands. We design, build, and support digital experiences and platforms across financial services, manufacturing, technology, public sector and global consumer brands. We combine the feel of a close‑knit, supportive team with the impact and credibility of large‑scale, high‑visibility programs.
About the Role
We're looking for a Full‑Stack Engineer who enjoys working across the stack - from backend services and APIs to modern frontends. You'll build end‑to‑end solutions that power digital platforms for enterprise clients, working with both Java/Node.js and React.
What You'll Do
• Build features end‑to‑end across backend and frontend
• Write clean, modular code that is well‑tested and maintainable
• Work with APIs, headless/CMS platforms, cloud services and integrations
• Participate in solution design and contribute to technical choices
• Collaborate with architects, designers and engineers across disciplines
• Help improve engineering standards, tooling and reusable components
Must‑Have
• Solid web fundamentals & API understanding (HTTP, REST, JSON)
• Experience in TypeScript, React and Next.js
• Git, secure development mindset, and CI/CD familiarity
• Ability to deliver end‑to‑end features with some autonomy
Useful to Have
• Experience with Java.
• Terraform
• SQL/NoSQL; Docker; cloud‑ready development
• Automated testing across front & backend
Bonus
• Integrations with CMS/DXP, DAM, CRM or e‑commerce
• Magnolia CMS + React headless or hybrid experience
• AWS cloud experience (backend or frontend delivery)
• Consulting or client‑facing experience
Please note that TBSCG does not provide visa sponsorship or assistance.
If you would like to know more about how your personal data is used in relation to the recruitment process, please see our Recruitment Privacy Policy (******************************************).
TBSCG participates in the E-Verify program to verify the employment eligibility of all new hires. If you are selected and hired, your eligibility to work in the United States will be verified within the first three days of employment
Software Engineer
Data engineer job in Urbandale, IA
We are seeking a Sr. Software Engineer (Java) for a large insurance company in Des Moines, IA. In this role, the Senior Software Engineer will be deeply involved in the full software development lifecycle, working on the Digital team. They will be focusing on a new build project supporting multiple insurance subsidiaries. Success in this role requires a strong sense of accountability, clear and proactive communication, and a commitment to customer-focused outcomes. The engineer will be expected to contribute to team goals, foster collaboration, and help drive the delivery of high-quality software solutions that meet business needs. Beyond technical execution, this person will mentor junior developers, offering guidance on best practices, code reviews, and architectural decisions.
Qualifications
5+ years of Software Engineering Experience or equivalent
Full Stack web development tech stack (React, Node, Next, & TypeScript)
Experience with GraphQL
2+ years of Mentoring other Engineers
Pay varies depending on experience: $55/hr - $75/hr.