Regulatory Engineer
Requirements engineer job in Cordova, IL
WHO WE ARE
As the nation's largest producer of clean, carbon-free energy, Constellation is focused on our purpose: accelerating the transition to a carbon-free future. We have been the leader in clean energy production for more than a decade, and we are cultivating a workplace where our employees can grow, thrive, and contribute.
Our culture and employee experience make it clear: We are powered by passion and purpose. Together, we're creating healthier communities and a cleaner planet, and our people are the driving force behind our success. At Constellation, you can build a fulfilling career with opportunities to learn, grow and make an impact. By doing our best work and meeting new challenges, we can accomplish great things and help fight climate change. Join us to lead the clean energy future.
The Senior Regulatory Engineer position is based out of our Quad Cities Generating Station in Cordova, IL.
TOTAL REWARDS
Constellation offers a wide range of benefits and rewards to help our employees thrive professionally and personally. We provide competitive compensation and benefits that support both employees and their families, helping them prepare for the future. In addition to highly competitive salaries, we offer a bonus program; a 401(k) with company match; an employee stock purchase program; comprehensive medical, dental, and vision benefits, including a robust wellness program; paid time off for vacation, holidays, and sick days; and much more.
***This Engineering role can be filled at the Mid-level or Senior Engineer level. Please see minimum qualifications list below for each level***
The expected salary range varies based on experience and comes with a comprehensive benefits package that includes a bonus and 401(k).
Mid-Level - $94,500 - $105,000
Sr Level - $124,200 - $138,000
PRIMARY PURPOSE OF POSITION
Performs advanced regulatory/technical problem solving in support of nuclear plant operations. Responsible for regulatory/technical decisions. Possesses excellent knowledge in functional discipline and its practical application and has detailed knowledge of applicable industry codes and regulations.
PRIMARY DUTIES AND ACCOUNTABILITIES
Provide in-depth regulatory/technical expertise to develop, manage and implement regulatory analyses, activities and programs.
Provide regulatory/technical expertise and consultation through direct involvement to identify and resolve regulatory issues.
Provide complete task management of regulatory issues.
Perform regulatory tasks as assigned by supervision.
Accountable for the accuracy, completeness, and timeliness of work ensuring proper licensing basis management and assuring that standard design criteria, practices, procedures, regulations and codes are used in preparation of products.
Perform independent research, reviews, studies and analyses in support of regulatory/technical projects and programs.
Recommend new concepts and techniques to improve performance, simplify operation, reduce costs, reduce regulatory burden, correct regulatory non-compliances, or comply with changes in codes or regulations.
All other job assignments and/or duties pursuant to company policy or as directed by management, including but not limited to Emergency Response duties and/or coverage, and Department duty coverage and/or call-out.
MINIMUM QUALIFICATIONS for Mid-level E02 Engineer
Bachelor's degree in Engineering with 1 year of relevant position experience OR
Associate degree in Engineering with a minimum of 3 years of relevant experience OR
High school diploma (or equivalent) with at least 5 years of relevant experience
Effective written and oral communication skills
Maintain minimum access requirement or unescorted access requirements, as applicable, and favorable medical examination and/or testing in accordance with position duties
MINIMUM QUALIFICATIONS for Senior E03 Engineer
Bachelor's degree in Engineering with 5 years of relevant position experience OR
Associate's degree in Engineering with 7 years of experience OR
High school diploma (or equivalent) with 8 years of experience
Effective written and oral communication skills
Maintain minimum access requirement or unescorted access requirements, as applicable, and favorable medical examination and/or testing in accordance with position duties
PREFERRED QUALIFICATIONS
Previous Senior Reactor Operator (SRO) license/certification
1 year nuclear power experience
NRC experience
Advanced technical degree or related coursework
Regulatory related work experience or previous experience in a military or other government organization
Microsoft 365 Engineer
Requirements engineer job in Chicago, IL
The Microsoft 365 Engineer will serve as the primary administrator and owner of our Microsoft 365 platform. This individual will be responsible for day-to-day operations, end-user support, service enhancements, and feature rollouts across a suite of M365 services used throughout the company. This includes but is not limited to: Exchange Online, Outlook, SharePoint, OneDrive, Teams, Purview, and Intune.
This role is critical to enabling productivity, ensuring data security and compliance, and supporting our continued growth through effective platform management.
The Microsoft 365 Engineer operates in a team environment and will provide input on the feasibility of design solutions through the application of advanced skills obtained through several years of experience solving complex issues. They will recommend improvements and new solutions.
Key Responsibilities
Administration & Operations
Provision, configure, and manage user identities, groups, and licenses in Entra ID (Azure AD). Routinely audit the organization's user and application identities using Entra ID while adhering to Identity and Access Management best practices.
Manage and maintain the Microsoft 365 tenant, including user accounts, licenses, and security settings. This includes the proper backup of the M365 content including email and files.
Administer Exchange Online mailboxes, mail flow rules, hybrid connectivity, and retention policies.
Oversee Outlook on the web/mobile configuration, mailbox policies, and troubleshoot client connectivity.
Configure, secure, and maintain Microsoft Teams and meeting policies.
Manage SharePoint Online sites, site collections, permissions, hub sites, and policies for external sharing.
Implement and maintain data governance and compliance controls with Microsoft Purview (e.g., DLP, Information Protection, Insider Risk Management).
Develop best practices, governance frameworks, and lifecycle management plans for collaboration services.
Security & Compliance
Enforce conditional access, multifactor authentication, and identity protection policies in Entra ID.
Ensure proper integration between on-premises Active Directory and Azure Active Directory for seamless user authentication and access management.
Configure retention labels, sensitivity labels, and compliance policies across Exchange, Teams, and SharePoint via Purview.
Conduct periodic audits, reviews of access, and remediation of security vulnerabilities.
Implement and enforce cloud security protocols to prevent unauthorized access and cyber threats.
Monitoring & Troubleshooting
Utilize Microsoft 365 admin center, Defender portal, and PowerShell to monitor service health, usage, and security alerts.
Investigate and resolve complex issues related to mail routing, client connectivity, Teams meetings, and SharePoint access.
Develop and maintain operational runbooks, automation scripts (PowerShell, Graph API), and dashboards.
Respond to security alerts to eradicate threats within the environment.
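The runbook and automation work described above often comes down to scripting against the Microsoft Graph API. Below is a minimal sketch in Python of the paging loop such scripts share (in practice PowerShell or the Graph SDK is the more common choice); the endpoint and token handling are assumptions for illustration:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def fetch_all(get, url):
    """Collect every page of a Microsoft Graph list call.

    Graph paginates list results; each page may carry an
    '@odata.nextLink' URL pointing at the next page. `get` is any
    callable mapping a URL to a parsed JSON dict, so the paging
    logic can be exercised without a live tenant.
    """
    items = []
    while url:
        page = get(url)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")
    return items

def graph_get(url, token):
    """HTTP getter for Graph; `token` is an OAuth bearer token obtained elsewhere."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_all(lambda u: graph_get(u, token), f"{GRAPH}/users")` would list tenant users; the same loop underpins audits of groups, licenses, and sign-in activity.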
Collaboration & Support
Work with service desk and application teams to integrate third-party tools and enforce change management.
Provide end-user training, documentation, and proactive communications on feature updates and best practices.
Participate in on-call rotation and perform off-hours support and maintenance as required.
Windows Server Management
Manage Windows Server environments to ensure optimal performance, security, and reliability.
Install, configure, and maintain server hardware and software components.
Monitor system performance, troubleshoot issues, and perform regular system updates and patches.
Skills and Experience
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum of 5 years' experience in Information Technology roles.
Minimum of 3 years' experience administering M365 environments with an emphasis on Exchange, SharePoint, Teams, Intune, and Entra ID.
Minimum of 3 years' experience supporting the Azure Cloud Computing Platform with an emphasis on Virtual Machines, Azure Kubernetes Service, SQL managed instance, and Azure firewall.
Advanced knowledge of security and network protocols.
Experience implementing backup and disaster recovery solutions.
Experience with automation and scripting, particularly using PowerShell.
Excellent problem-solving and troubleshooting skills.
Experience leading cross-functional technology projects.
Strong communication and interpersonal skills.
Relevant certifications (e.g., Microsoft Certified: Azure Solutions Architect, Azure Administrator, CCNA) are a plus.
The base salary for this position is $110,000 annually. However, actual compensation offered may vary depending on skills, experience, and other job-related factors permitted by law. This position is also eligible for an annual bonus as part of total compensation.
In addition to base salary and bonus, we offer a comprehensive benefits package, including health insurance, retirement plans, paid time off, and other benefits.
Platform Engineer - Chicago
Requirements engineer job in Chicago, IL
Platform Engineer - AWS/Terraform
Global Brokerage
Salary: $130,000 - $150,000 + benefits + discretionary bonus (20 - 40%)
Hybrid: In office 3 days per week
Join a leading global brokerage in the heart of Chicago and drive innovation. The Platform Engineering team seeks an ambitious Engineer specialising in AWS, Python, Terraform, Git for CI/CD, and Docker to create a cutting-edge platform for rapid, reliable software development and deployment.
What's In It For You
Global Brokerage: Join a dynamic team at a top brokerage.
Continuous Learning: Access paid training for professional growth.
Work-Life Balance: Flexible work arrangements, with 3 days in the central Chicago office.
Collaboration: Work closely with development teams on monitoring metrics and CI/CD pipeline development.
Comprehensive benefits package offering employer-provided insurance (life/AD&D, disability, EAP), health & dental coverage, flexible accounts (HSA/FSA), generous time off (vacation, unlimited sick leave, parental leave), plus retirement plan with a company match - all designed to support employees' wellness, financial security, and work-life balance.
Exciting Projects: Engage in complex, challenging projects.
Role Responsibilities
System Management: Monitor and manage AWS-focused systems and infrastructure.
Platform Operations: Build and maintain a platform for cloud and on-premise application provisioning and operation.
Configuration and Maintenance: Install, configure, test, and maintain operating systems, software, and infrastructure tools.
CI/CD Development: Develop a continuous integration and deployment platform integrated with development tools.
Developer Experience: Enhance developer experience in cloud infrastructure and application deployment.
Policy and Guidelines: Develop policies, standards, and guidelines for Infrastructure as Code (IAC) and CI/CD processes.
Automation: Implement workflow and toolchain improvements for task automation.
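Automation tasks like these often begin as small scripts over the cloud APIs. Here is a hedged sketch, assuming boto3 for the live call; the `Owner` tag requirement is an illustrative policy, and the check itself operates on plain dicts shaped like `describe_instances` output so it can be tested offline:

```python
def untagged_instances(reservations, required_tag="Owner"):
    """Return IDs of EC2 instances missing a required tag.

    `reservations` mirrors the shape of boto3's
    describe_instances()['Reservations'], so this check is pure data.
    """
    missing = []
    for r in reservations:
        for inst in r.get("Instances", []):
            tags = {t["Key"] for t in inst.get("Tags", [])}
            if required_tag not in tags:
                missing.append(inst["InstanceId"])
    return missing

def audit(region="us-east-1"):
    """Live audit; assumes boto3 is installed and AWS credentials are configured."""
    import boto3  # imported lazily so the pure check above stays testable offline
    ec2 = boto3.client("ec2", region_name=region)
    out = []
    for page in ec2.get_paginator("describe_instances").paginate():
        out.extend(untagged_instances(page["Reservations"]))
    return out
```

Keeping the policy check separate from the API call is what makes scripts like this easy to fold into a CI pipeline.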
Required Skills and Experience
Experience: Minimum 5 years in a professional Platform Engineering role.
Cloud Proficiency: Securely deploy and manage AWS applications, with cloud-native tooling and security best practices.
Scripting Skills: Proficiency in Python.
Docker Proficiency: Experience with Docker containerisation for software deployment and scalability.
CI/CD Expertise: Automate software build, test, and deployment pipelines following agile methodologies.
Terraform Exposure: Beneficial experience with Terraform.
Observability Tools: Experience with Grafana and Splunk is beneficial, particularly in developing and applying an observability strategy across a large organisation.
Learn More
For more information, contact George Harris at Harrington Starr for a confidential conversation, or click "Apply" to start your application.
BigID Engineer
Requirements engineer job in Deerfield, IL
W2 Contract
Pay Rate: $65-$70/hr
We are seeking an experienced BigID Engineer to ensure our data security and privacy platform operates at peak performance. This role involves configuring and maintaining BigID, connecting it to various data sources, and monitoring system health through logs and performance metrics.
Key Responsibilities:
Integrate BigID with multiple data sources and validate connectivity.
Review and analyze logs to ensure optimal product performance.
Apply experience with DLP/data classification tools.
Monitor scanning throughput and identify areas for improvement.
Troubleshoot issues and ensure no defects in production.
Perform regular assessments to maintain system integrity.
Nice-to-Have Skills:
Experience with scanning Amazon S3 buckets.
Conducting assessment scans for compliance and security.
Hands-on experience as a BigID Administrator.
Preferred Certifications
CISSP, GIAC (GSEC), or similar cybersecurity certifications.
Cellular Engineer
Requirements engineer job in Edwardsville, IL
Sr. Network Engineer (Cellular SME)
5 days onsite in Edwardsville, IL
12 months contract (will be extended)
Essential Functions:
Senior network engineer with advanced experience working with Arista fabric switching technologies (VxLAN/EBGP)
Ideal candidate would have cross-platform experience with one or more of the following: Versa/Fortinet (SD-WAN), Juniper (Wi-Fi), Forescout eyeSight/eyeSegment (NAC), and/or Cradlepoint/cellular.
Excellent communication and problem-solving skills
Create runbooks for L1
Receive escalations from L1
Identify automation opportunities
Develop reporting
7-10 years of experience
Preferred Certifications:
Juniper JNCIP or JNCIE
Arista ACE or equivalent
SD-WAN certification (Versa or similar) is a plus
ITIL or other relevant process management certification is a plus.
Splunk Engineer
Requirements engineer job in Chicago, IL
Title- Splunk Engineer
Must Have Skills:
At least 3-5 years of hands-on experience with Splunk development, including dashboard creation, query optimization, and alerting.
Strong proficiency in SPL (Search Processing Language) and familiarity with Splunk Enterprise Security or ITSI.
Experience integrating data from various sources (e.g., syslog, APIs, cloud services) into Splunk.
Knowledge of scripting languages such as Python, Bash, or PowerShell for data manipulation and automation.
Familiarity with log management and observability tools beyond Splunk (e.g., ELK stack, Grafana, Prometheus).
Understanding of security and compliance requirements in logging and monitoring.
Ability to work independently and collaboratively in a fast-paced, agile environment.
Strong analytical and problem-solving skills with attention to detail.
Excellent communication skills to translate technical findings into business-relevant insights.
What You'll Do -
As a Splunk Developer, you will play a key role in designing, developing, and maintaining Splunk dashboards, alerts, and reports that provide actionable insights across our systems and applications. You'll collaborate with cross-functional teams to ensure data is collected, parsed, and visualized effectively to support operational and security objectives.
Key Responsibilities:
• Develop and maintain Splunk dashboards, queries, and alerts to monitor system performance, application health, and security events.
• Work with stakeholders to gather requirements and translate them into effective Splunk visualizations and reports.
• Optimize and troubleshoot existing Splunk configurations to improve performance and usability.
• Integrate data sources into Splunk using forwarders, APIs, and custom scripts.
• Support incident response and root cause analysis by providing relevant Splunk data and insights.
• Collaborate with DevOps, Security, and Infrastructure teams to ensure comprehensive logging and monitoring coverage.
• Stay current with Splunk best practices, new features, and industry trends to continuously improve our observability capabilities.
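Alert searches like the ones described above are usually kept in version control and templated rather than hand-edited in the UI. A minimal sketch of building one such SPL query in Python; the index, sourcetype, and field names are placeholders, not any particular environment's schema:

```python
def error_rate_spl(index, service, span="5m", threshold=0.05):
    """Build an SPL search that fires when a service's error rate
    exceeds a threshold over each time bucket.

    Uses the documented stats idiom count(eval(condition)) to count
    only matching events. All names here are illustrative.
    """
    return (
        f'search index={index} sourcetype=access_combined service="{service}" '
        f"| bin _time span={span} "
        "| stats count AS total, count(eval(status>=500)) AS errors BY _time "
        "| eval error_rate = errors / total "
        f"| where error_rate > {threshold}"
    )
```

Generating queries this way keeps naming conventions and thresholds consistent across the dozens of alerts a team typically maintains.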
Preferred Qualifications -
• At least 4-5 years of hands-on experience with Splunk development, including dashboard creation, query optimization, and alerting.
• Strong proficiency in SPL (Search Processing Language) and familiarity with Splunk Enterprise Security or ITSI.
• Experience integrating data from various sources (e.g., syslog, APIs, cloud services) into Splunk.
• Knowledge of scripting languages such as Python, Bash, or PowerShell for data manipulation and automation.
• Familiarity with log management and observability tools beyond Splunk (e.g., ELK stack, Grafana, Prometheus).
• Understanding of security and compliance requirements in logging and monitoring.
• Ability to work independently and collaboratively in a fast-paced, agile environment.
• Strong analytical and problem-solving skills with attention to detail.
• Excellent communication skills to translate technical findings into business-relevant insights.
ServiceNow ITOM Engineer
Requirements engineer job in Chicago, IL
Position focused on executing and maintaining Service Mapping within the ServiceNow platform. The role supports Discover's IT Asset Management modernization and compliance efforts by mapping application and infrastructure dependencies across the enterprise. The Service Mapper will work closely with application owners, infrastructure teams, and network engineers to ensure accurate and complete service maps are built and maintained. This is a hands-on technical role requiring experience with ServiceNow Service Mapping tools and methodologies, including both automated and manual mapping techniques. The candidate must be capable of working independently, managing mapping coverage across scoped applications, and collaborating with cross-functional teams to validate and operationalize service maps.
Key Responsibilities:
Execute End-to-End Service Mapping Lifecycle: Perform all phases of Service Mapping (data gathering, map construction, validation, and operationalization) for scoped applications and infrastructure.
Build and Maintain Accurate Service Maps: Create and maintain service maps in ServiceNow for critical business applications, including Payment Network/PULSE systems, ensuring complete dependency visibility.
Support Network Layer Mapping: Collaborate with network engineering teams to map network dependencies and entry points for business services.
Perform Manual Mapping for Non-Discoverable Assets: Execute manual mapping for legacy systems and restricted environments (e.g., mainframes, Oracle databases, BlackBox appliances) where automated discovery is not feasible.
Validate CI Relationships and Dependencies: Work with application owners, infrastructure SMEs, and CI Class Owners to confirm service map accuracy and completeness.
Remediate Mapping Gaps: Identify and resolve gaps in application-to-CI relationships, ensuring alignment with CMDB lifecycle and regulatory dependency visibility requirements.
Maintain Mapping Documentation and Audit Readiness: Document mapping processes, dependencies, and validation outcomes to support internal governance and external audit requirements.
Analytics Engineer
Requirements engineer job in Chicago, IL
Role: Analytics Engineer / Digital Engineer
We are seeking a Lead Analytics Engineer with strong technical expertise in modern web development and hands-on experience in Customer Experience (CX) and Digital Analytics implementation. This role combines engineering excellence with MarTech and data instrumentation, supporting event tracking, data pipelines, customer identity, and audience activation across platforms such as mParticle, Mixpanel, and paid media ecosystems.
As a lead, you will guide engineering best practices, oversee implementation quality, and collaborate with cross-functional teams to build scalable and reliable analytics foundations. You will ensure that data collection, CDP integrations, and tracking architectures are robust, performant, and aligned with business needs.
Key Responsibilities
Backend & Frontend Development (Light Full Stack Scope)
• Build and maintain backend services and APIs using PHP/Laravel or Node.js.
• Develop lightweight frontend interfaces and internal tools using React.
• Support ingestion pipelines and microservices that enable analytics and CX data flows.
• Ensure secure and scalable data movement across systems.
CX & Digital Analytics Implementation
• Lead the implementation of event instrumentation, user behavior tracking, and data collection strategies for mParticle and Mixpanel.
• Define and configure data schemas, identity resolution rules, and event attributes within CDP platforms.
• Partner with CX, product, and analytics teams to translate tracking requirements into high-quality technical execution.
• Support funnel tracking, conversion measurement, and end-to-end customer journey instrumentation.
Paid Media & Audience Activation
• Build and manage paid media audiences using mParticle/Mixpanel integrations (Meta, Google, TikTok, DV360, etc.).
• Configure audience segmentation, data sync rules, and activation logic.
• Ensure accuracy, governance, and compliance in audience delivery workflows.
Cloud & Deployment
• Deploy and manage applications on AWS or GCP (baseline cloud proficiency).
• Utilize cloud services for hosting, storage, and operational workflows.
• Support CI/CD pipelines and automated deployment processes.
Leadership & Collaboration
• Provide technical guidance to junior engineers and analytics implementation teams.
• Establish best practices for tagging, SDK usage, data quality, and integration architecture.
• Participate in solution design, code reviews, and technical documentation.
• Act as a key liaison across engineering, analytics, CX, and marketing teams.
General Engineering Responsibilities
• Troubleshoot data quality, API, SDK, and integration issues.
• Maintain clean documentation for architectures, tracking setups, and implementation logic.
• Stay updated with modern MarTech, CDP, and analytics engineering trends.
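Tracking-plan governance of the kind described above is easier to enforce with a small validator run in CI against instrumented events. A sketch follows; the event names and required properties are illustrative stand-ins for a real data contract, not an mParticle or Mixpanel API:

```python
# Illustrative tracking plan; real contracts live in your CDP's data plan.
EVENT_SCHEMA = {
    "add_to_cart": {"required": {"user_id", "product_id", "price"}},
    "purchase": {"required": {"user_id", "order_id", "revenue"}},
}

def validate_event(name, props):
    """Return a list of problems with a tracked event, empty if it passes."""
    spec = EVENT_SCHEMA.get(name)
    if spec is None:
        return [f"unknown event: {name}"]
    missing = spec["required"] - props.keys()
    return [f"missing property: {p}" for p in sorted(missing)]
```

Running a check like this over staged events catches schema drift before bad data reaches audience activation downstream.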
Qualifications
• 6-10 years of experience in engineering, analytics implementation, MarTech, or related technical roles.
• Deep understanding of mParticle, Mixpanel, or similar CDP/analytics platforms.
• Understanding of PHP (Laravel) or Node.js, along with basic React proficiency.
• Experience with event schemas, SDK implementation, data contracts, tagging, and tracking strategies.
• Familiarity with paid media audience activation and MarTech ecosystem integrations.
• Knowledge of relational/NoSQL databases and cloud platforms (AWS/GCP).
• Exposure to containerization (Docker; Kubernetes is a plus).
• Strong debugging, analytical, and communication skills.
• Proven ability to lead technical implementation and collaborate across functions.
Education
Bachelor's or Master's Degree in Engineering, Computer Science, Information Systems, Mathematics, or related discipline.
DevOps Engineer
Requirements engineer job in Chicago, IL
About Us
Founded in 2014, we offer the industry's first and only cloud-based, fully-customizable, end-to-end software solution to automate securities-based lending from origination through the life of the loan. By combining thought leadership in suitability and risk management with industry-leading education and the latest technology, Supernova enables advisors to deliver holistic, goals-based advice and to help their clients achieve financial wellness. We partner with the industry's largest banks, most prominent insurance companies and leading online brokerages to democratize access to securities-based lending and better the entire financial ecosystem.
Why Join Supernova?
At Supernova Technology, we believe that the best results come from a team that is passionate, driven, and supported in all aspects of their professional lives. Here, you'll work alongside talented and innovative individuals who are committed to driving the future of securities-based lending technology. We foster a culture of collaboration, continuous learning, and growth, where each person's contributions make a real impact.
JOB DESCRIPTION
We are seeking an experienced DevOps Engineer to lead and enhance our organization's technical infrastructure, AWS environment, and system reliability. This role requires a strategic leader with a strong technical background to ensure our systems are robust, secure, and scalable.
RESPONSIBILITIES:
Lead the design, implementation, and maintenance of infrastructure systems, including AWS cloud services, data centers, co-location facilities, and network connectivity
Ensure the reliability and scalability of software delivery pipelines and disaster recovery systems
Collaborate with development and operations teams to support system migrations, monitor critical systems, and maintain system operations
Implement and maintain disaster recovery and business continuity plans to minimize downtime and data loss
Supervise and mentor the infrastructure team, fostering a culture of continuous improvement and operational excellence
Manage product delivery infrastructure via CI/CD pipelines
QUALIFICATIONS:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum of 5 years of experience in Cloud infrastructure management, with at least 4 years in a leadership role
Proficiency in AWS services is required
Experience in infrastructure as code (e.g., CloudFormation, CDK, troposphere)
Experience with ECS, Lambda, Docker, GitHub workflows, and Python or Java is preferred
Strong understanding of IT operations, platform services, and system reliability
Excellent leadership and team management skills
Familiarity with managing offshore teams and coordinating with international offices is preferred
Our Employee Benefits
At Supernova Technology, we provide a robust benefits package to support the health and well-being of our employees. Our offerings include:
Medical, Dental, and Vision Insurance: Multiple plans with coverage for employees and dependents.
HSA and FSA Accounts: Tax-advantaged accounts for health and dependent care expenses.
Life and Disability Insurance: Employer-paid basic coverage with options for additional voluntary coverage.
Compensation: $140,000 - $180,000 per year
Retirement Savings: 401(k) plan with employer contributions.
Employee Assistance Program (EAP): Confidential support services, including free therapy sessions.
Paid Time Off: Flexible PTO policies.
Additional Perks: Commuter benefits, pet insurance, continuing education assistance, and more.
Note: Actual salary at the time of hire may vary and may be above or below the range based on various factors, including but not limited to, the candidate's relevant qualifications, skills and experience, and the location where this position may be filled.
Our Core Values
Our core values drive everything we do. At Supernova, we...
Form, execute, and communicate new ideas that add value to our employees and customers
Strive through obstacles and failures
Follow-through on promises or commitments to others, accept responsibility, and answer for actions & decisions
Listen to, understand, and support our employees and customers
Act with speed, positive attitude, and flexibility
Exceed expectations and surpass ourselves every day; we embrace a sense of pride and never stop growing
Join us and make an impact while growing your career at Supernova.
Data Engineer
Requirements engineer job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four: one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
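The extraction-and-cleaning work above tends to hinge on a handful of small, well-tested normalization helpers. A sketch in plain Python (a real pipeline would wrap this in pandas or a framework); the header and currency conventions are assumptions about typical spreadsheet mess:

```python
import re

def norm_header(name):
    """Normalize a spreadsheet column header: 'Total Revenue ($)' -> 'total_revenue'."""
    return re.sub(r"[^0-9a-zA-Z]+", "_", name.strip().lower()).strip("_")

def parse_amount(cell):
    """Coerce messy currency cells like ' $1,234.50 ' or '(200)' to float."""
    s = str(cell).strip().replace("$", "").replace(",", "")
    if s.startswith("(") and s.endswith(")"):  # accounting-style negatives
        s = "-" + s[1:-1]
    return float(s) if s else None

def normalize_rows(headers, rows, amount_cols=("revenue",)):
    """Turn raw header/row lists into dicts with clean keys and numeric amounts."""
    keys = [norm_header(h) for h in headers]
    out = []
    for row in rows:
        rec = dict(zip(keys, row))
        for col in amount_cols:
            if col in rec:
                rec[col] = parse_amount(rec[col])
        out.append(rec)
    return out
```

Helpers like these are where most of the "spreadsheets that never quite line up" problems actually get solved, so they repay thorough unit tests.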
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python + JS
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founder and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Sr. Data Engineer - PERM - MUST BE LOCAL
Requirements engineer job in Naperville, IL
Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. Candidates must be local to Illinois because future hybrid onsite work in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities and profit-sharing bonus.
This position is focused on building modern data pipelines, integrations and back-end data solutions. Selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts and other engineers to design and deliver data solutions that power business insights and AI products.
Responsibilities:
Design and develop scalable data pipelines for ingestion, transformation and integration using AWS services.
Pull data from PostgreSQL and SQL Server to migrate to AWS.
Create and modify jobs in AWS and modify logic in SQL Server.
Create SQL queries, stored procedures and functions in PostgreSQL and RedShift.
Provide input on data modeling and schema design as needed.
Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
Support inbound/outbound data flows, including APIs, S3 replication and secured data.
Assist with data visualization/reporting as needed.
Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.
Qualifications:
5+ years of data engineering experience.
Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
Strong experience with SQL, Python and PySpark.
A background in supply chain, logistics or distribution would be a plus.
Experience with Power BI is a plus.
Data Engineer
Requirements engineer job in Itasca, IL
Primary Location: Itasca, IL (Hybrid in Chicago's Northwest Suburbs)
2 Days In-Office, 3 Days WFH
TYPE: Direct Hire / Permanent Role
MUST BE a U.S. Citizen or Green Card holder
The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and integrations that support data analytics and business intelligence across the organization. This role is essential to ensuring high-quality data delivery, optimizing performance, and enabling effective decision-making through reliable data solutions.
What You Bring to the Role (Ideal Experience)
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
5+ years of experience as a Data Engineer.
3+ years of experience with the following:
Building and supporting data lakehouse architectures using Delta Lake and change data feeds.
Working with PySpark and Python, with strong Object-Oriented Programming (OOP) experience to extend existing frameworks.
Designing data warehouse table architecture such as star schema or Kimball method.
Writing and maintaining versioned Python wheel packages to manage dependencies and distribute code.
Creating and managing CI/CD pipelines, especially using Azure DevOps for Microsoft Fabric-related assets.
Experience establishing scalable and maintainable data integrations and pipelines in Databricks environments.
Nice to Haves
Hands-on experience implementing data solutions using Microsoft Fabric.
Experience with machine learning/ML and data science tools.
Knowledge of data governance and security best practices.
Experience in a larger IT environment with 3,000+ users and multiple domains.
Current industry certifications from Microsoft cloud/data platforms or equivalent certifications. One or more of the following is preferred:
Microsoft Certified: Fabric Data Engineer Associate
Microsoft Certified: Azure Data Scientist Associate
Microsoft Certified: Azure Data Fundamentals
Google Professional Data Engineer
Certified Data Management Professional (CDMP)
IBM Certified Data Architect - Big Data
What You'll Do (Skills Used in this Position)
Design and develop scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
Extend and enhance existing OOP-based frameworks developed in Python and PySpark.
Partner with data scientists and analysts to define requirements and design robust data analytics solutions.
Ensure data quality and integrity through data cleansing, validation, and automated testing procedures.
Develop and maintain technical documentation, including requirements, design specifications, and test plans.
Implement and manage data integrations from multiple internal and external sources.
Optimize data workflows to improve performance and reliability and to reduce cloud consumption.
Monitor, troubleshoot, and resolve data pipeline issues to ensure consistent data delivery.
Establish and manage CI/CD pipelines and release processes, particularly using Azure DevOps for Microsoft Fabric.
Provide technical leadership and coordination for global development and support teams.
Participate in creating a safe and healthy workplace by adhering to organizational safety protocols.
Support additional projects and initiatives as assigned by management.
Senior Data Engineer
Requirements engineer job in Chicago, IL
Requires visa-independent candidates.
Note: OPT, CPT, and H1B holders cannot be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Azure Devops Engineer
Requirements engineer job in Chicago, IL
Set up CI/CD pipelines to support automated deployment of resources to cloud environments, all at a medium to high level of complexity
· This is a hands-on role that develops and supports build and release automation pipelines. You will be part of the team that will deploy a highly available full software stack in public/private clouds
· Remediate gaps and support the automation requirements of continuous integration and continuous deployment
· Identify and develop metrics and dashboards to monitor adoption and maturity of DevOps
· Experience in Docker/Containerization and Kubernetes
· Ability to contribute to architecture discussions around technology controls and their implementation in a DevOps/Cloud environment
· Work collaboratively with architecture, security and other engineers to estimate, design, code, deploy and support working software / technology components
· Foster the adoption of DevSecOps culture and capabilities across Agile product delivery teams
· Embed “shift-left” security practices using tools like Checkmarx, SonarQube, PrismaCloud.
· Work in an Agile/Scrum environment; planning, estimating, and completing tasks on time
· Liaise with Agile Delivery Process teams to support necessary configurations/setup in Azure DevOps (ADO) for Agile ceremonies
· Champion the modern SDLC by leading the consistent application of the redesigned SDLC framework, aligning with Agile, DevSecOps, and platform standards
· Work with development and support teams to design improved deployment, provisioning and integration workflows, ensure environments stability and identify areas and plans for improvement
· Contribute to new technology, vendor package and tool road mapping, evaluation and introduction
· Ensure compliance with performance, security, availability, and recoverability standards and policies, and provide monitoring recommendations for tasks of low to medium complexity
· 5+ years of demonstrable software engineering and DevOps experience
· 5+ years working in SCRUM/Agile software development environment
· Experience deploying and administering continuous integration tools such as Azure DevOps is a must
· Experience with Infrastructure cloud tools such as Terraform, Docker, and Aspire etc.
· Experience with automated testing solutions for unit testing, integration and system testing
· Bachelor's Degree or equivalent experience. Computer Science or related field preferred.
· Strong cloud engineering experience primarily with Azure and AWS.
· Experience in working with Terraform, Ansible, and/or Chef for infrastructure automation and configuration
· Experience with Docker and Kubernetes on platforms such as AWS ECS and AWS EKS
· Experience with programming languages such as Python, PowerShell, and C++ is a plus
· Experience with APM, monitoring, and logging tools such as Datadog, SolarWinds, CloudWatch, and Splunk
· Experience with SQL databases such as MySQL; NoSQL databases like AWS DynamoDB and MongoDB; and graph databases such as Neo4j and AWS Neptune
· Experience with project management and workflow tools and concepts such as Jira, Agile, Scrum/Kanban, etc.
· Proficiency in cross-platform scripting languages and build tools (Python, Ant, Artifactory, MSBuild, NuGet)
· Proficiency in OOP software development using C# or similar languages
· Ability to define scalable and secure CI/CD pipelines
· Understanding of deployment strategies using Docker and Podman for containerization
· Experience with pair programming using GitHub Copilot
· Strong communication/presentation skills and the ability to explain standards, processes, and cloud architecture to the team and management.
Azure Cloud & DevOps Engineer
Requirements engineer job in Chicago, IL
📱 Azure Cloud & DevOps Engineer
📍 Chicago, IL | 🏢 Hybrid | 💼 Full-Time
At Sprocket Sports, we are currently seeking an Azure Cloud & DevOps Engineer to join our team. The ideal candidate has a passion for youth sports and for managing a best-in-class software platform that will be used by thousands of youth sports club administrators, coaches, parents and players.
About Sprocket
Sprocket Sports is a fast-growing technology company based in Chicago and a national leader in the youth sports space. Our software and services help clubs streamline operations, reduce costs, and grow faster, so they can focus on what really matters: kids playing sports. We're also proud to be a certified Great Place to Work 2024, with a culture that balances high standards, accountability, and fun.
What You'll Do
As an experienced DevOps / cloud engineer you will help us scale and maintain a high-performing, reliable, and cost-effective cloud infrastructure. As an Azure Cloud Engineer, you will be the backbone of our cloud infrastructure, ensuring our platform is always available, fast, and secure for our users. You will manage our resources in Microsoft Azure, focusing heavily on performance optimization, cost control, and proactive system health monitoring. This role is perfect for someone passionate about cloud technology, DevOps principles, and continuous improvement. In this role you will interact with our software engineers, product managers and occasionally with operational stakeholders. We are seeking individuals who like to think creatively and have a passion for continually improving the platform.
Responsibilities:
Core Azure Cloud Management
Resource & Cost Optimization:
Manage, provision, and maintain our complete suite of Azure resources (e.g., App Services, Azure Functions, AKS, VMs).
Proactively manage and reduce cloud costs by identifying and implementing efficiencies in resource utilization and recommending right-sizing strategies.
Security and Compliance:
Ensure security best practices are implemented across all Azure services, including network segmentation, access control (IAM), and patching.
Performance & Reliability Engineering (SRE Focus)
System Health and Monitoring:
Ongoing monitoring of application and system performance using Azure and DataDog to detect and diagnose issues before they impact users.
Review system logs, metrics, and tracing data to identify areas of concern, bottlenecks, and opportunities for performance tuning.
Performance Testing
Lead efforts to conduct load testing and performance testing on the system.
Database Performance Tuning:
Review and optimize SQL performance by analyzing query plans, identifying slow-running queries, and recommending improvements (indexing, schema changes, stored procedures).
Manage and monitor our Azure SQL Database resources for optimal health and throughput.
Incident Response: Participate in on-call rotation to provide 24/7 support for critical infrastructure incidents and drive root cause analysis (RCA).
DevOps Automation
Infrastructure as Code (IaC):
Implement Infrastructure-as-Code (ARM, Bicep, or Terraform) to maintain consistent, auditable deployments.
Continuous Integration / Continuous Delivery (CI/CD):
Work closely with the development team to automate and streamline deployment pipelines (CI/CD) using Azure DevOps, ensuring fast and reliable releases.
Configuration Management: Implement and manage configuration for applications and infrastructure.
What We're Looking For:
Bachelor's degree in Computer Science or a related field.
3+ years of professional experience in Cloud Engineering, DevOps, or a similar role, with a strong focus on Microsoft Azure.
Deep hands-on experience with core Azure services and strong networking fundamentals.
Solid experience with monitoring and observability platforms, specifically DataDog.
Scripting proficiency in PowerShell.
Demonstrated ability to analyze and optimize relational database performance (SQL/T-SQL).
Strong problem-solving skills.
Strong communication and interpersonal skills; ability to analytically defend design decisions and take feedback without ego.
Strong attention to detail and accountability.
Why Join Us?
✅ Certified Great Place to Work 2024
🤝 Mission-driven team with a big vision
🚀 Fast-growing startup with room to grow
💼 Competitive salary + equity
📊 401(k) with company match
🩺 Comprehensive medical and dental
🎉 A culture built on Higher Standards, Greater Accountability, and More Fun
Senior DevOps Engineer
Requirements engineer job in Chicago, IL
Qorali is excited to share a new role that can take your career to the next level! This role works with modern technologies that integrate deeply with different platforms and operations, and it offers significant opportunities for continuous growth within the company. You will work with teams to implement monitoring practices that enhance the environment and efficiency across both cloud and on-prem spaces.
Expectations for role
Track metrics with alerts, notifications, and runbooks for operational monitoring, availability, and scalability
Implement optimization resolutions for different services with the team
Respond to production incidents while maintaining automation
Lead team improvement through research, retrospectives, and discussion/code reviews
Mentoring junior team members
Maintenance of large-scale systems with the ability to troubleshoot and problem solve.
Technical Skills
6+ years of DevOps experience
AWS (preferred) or Azure
Experience with monitoring environments including tools such as Splunk, AppDynamics, Datadog, Prometheus or Grafana.
Scripting languages (Java, Python)
Containerization creation in Kubernetes, Docker or Rancher
CI/CD experience (Jenkins preferred)
Experience leveraging language models to enhance DevOps automation workflows
Benefits
15% bonus
20+ PTO days
6% 401k match
Health, vision, dental and life plans
Two days of remote working per week
This role is unable to support visa sponsorship or C2C; this is a C2H position.
DevOps Cloud Engineer
Requirements engineer job in Chicago, IL
Duties: You will be responsible for:
1. Designing, deploying, securing, and managing enterprise cloud and hybrid infrastructure across compute, storage, database, networking, and security domains using services within Amazon Web Services (including EC2, Lambda, S3, RDS, VPC, IAM, and related technologies)
2. Implementing and maintaining Infrastructure as Code (IaC) using tools such as GitHub, Pulumi, or AWS CloudFormation to automate provisioning, configuration, and lifecycle management
3. Continuously evaluating and optimizing AWS environments to ensure performance, availability, scalability, cost efficiency, and operational stability
4. Designing, building, and maintaining CI/CD pipelines using GitHub Actions, AWS CodePipeline, or Jenkins, including integration of automated testing, security scanning, and compliance checks (e.g., Orca Security or similar tools)
5. Leveraging automation and AI-based tools to strengthen the efficiency and intelligence of CI/CD and DevOps processes
6. Implementing security best practices across identity and access management, network architecture, encryption, monitoring, logging, and incident response in coordination with the Information Security team
7. Supporting vulnerability management, incident response, remediation, and follow-up to ensure secure and compliant cloud operations
8. Setting up and maintaining monitoring, logging, alerting, and SIEM integrations using platforms such as AWS CloudWatch, LogicMonitor, Splunk, or Orca Security
9. Troubleshooting infrastructure, networking, and deployment issues across hybrid environments and participating in weekly on-call rotation for production support
10. Managing Windows and Linux patching, BC/DR capabilities, and policy governance using AWS Systems Manager, Cloud Custodian, and related tooling
11. Collaborating with developers, system administrators, engineers, and business stakeholders to design and deliver reliable and secure cloud solutions
12. Evaluating, recommending, and implementing new tools, frameworks, and automation opportunities to enhance performance, availability, security, and operational maturity
13. Documenting system standards, architecture diagrams, operating procedures, and best practices to ensure alignment, maintainability, and operational excellence
14. Contributing to a culture of collaboration, agility, innovation, continuous improvement, and cross-team partnership
****Critical Note: This is NOT a traditional DevOps Cloud Engineer role, and traditional DevOps Cloud Engineers should not invest time in applying. The requirements for consideration are listed below this note, but to provide essential insight: ALL applicants must have hands-on experience, at some point in their professional careers, with foundational or traditional (non-cloud) IT infrastructure skills (e.g., non-cloud system administration, network engineering/administration, firewalls/security). Candidates with a background in building, administering, engineering, supporting, or operating on-premises or hybrid IT infrastructures who grew into the DevOps space are highly preferred over pure cloud-only candidates.
Required:
A completed and verifiable Bachelor's degree in Computer Science, Information Systems, or a related STEM field is required.
Must have 3 or more years of professional DevOps and cloud engineering experience, with prior experience as a Systems Engineer, Systems Administrator, or Network Engineer and later experience in DevOps practices, cloud automation, and modern infrastructure. Both components of this requirement are an absolute must-have.
Must have strong, hands-on expertise with AWS compute, storage, networking, database, serverless, and security services, including EC2, Lambda, S3, RDS, CloudFormation, VPC, IAM, and container services such as ECS/EKS.
Must have experience building and managing Infrastructure as Code using Pulumi, Terraform, AWS CloudFormation, and scripting languages such as Python, Bash, or Node.js.
Must have hands-on experience administering and developing CI/CD pipelines using GitHub Actions, AWS CodeCommit/CodePipeline, or equivalent automation platforms.
Must have working knowledge of networking technologies including routing, switching, VPNs, firewalls, and network security principles, along with experience managing hybrid connectivity.
Must have familiarity with IAM, SIEM, SASE, and the integration of security within CI/CD pipelines.
Must have experience with monitoring and observability tools such as AWS CloudWatch, LogicMonitor, Splunk, Orca Security, or similar enterprise platforms.
Must demonstrate strong communication skills, the ability to work closely with peers and stakeholders, and the ability to operate effectively in a fast-paced, dynamic environment.
Pluses: AWS certifications such as AWS Certified Solutions Architect - Associate or AWS Certified DevOps Engineer - Associate. Experience in financial services or other regulated industries. Experience supporting governance, compliance, or cloud security programs.
Data Engineer
Requirements engineer job in Chicago, IL
Job Title: Data Engineer - Workflow Automation
Employment Type: Contract to Hire or Full-Time
Department: Project Scion / Information Management Solutions
Key Responsibilities:
Design, build, and manage workflows using Automic or similar tools such as Autosys, Apache Airflow, or Cybermation.
Orchestrate workflows across multi-cloud ecosystems (AWS, Azure, Snowflake, Databricks, Redshift).
Monitor and troubleshoot workflow execution, ensuring high availability, reliability, and performance.
Administer and maintain workflow platforms.
Collaborate with architecture and infrastructure teams to align workflows with cloud strategies.
Support migrations, upgrades, and workflow optimization efforts
Required Skills:
5+ years of experience in IT managing production-grade systems
Hands-on experience with Automic or similar enterprise workflow automation tools.
Strong analytical and problem-solving skills.
Good communication and documentation skills.
Familiarity with cloud platforms and technologies (e.g., AWS, Azure, Snowflake, Databricks).
Scripting proficiency (e.g., Shell, Python).
Ability to manage workflows across hybrid environments and optimize performance.
Experience managing production operations & support activities
Preferred Skills:
Experience with CI/CD pipeline integration.
Knowledge of cloud-native orchestration tools
Exposure to monitoring and alerting systems.
Data Engineer
Requirements engineer job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability.
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
Snowflake Data Engineer
Requirements engineer job in Chicago, IL
Join a dynamic team focused on building innovative data solutions that drive strategic insights for the business. This is an opportunity to leverage your expertise in Snowflake, ETL processes, and data integration.
Key Responsibilities
Develop Snowflake-based data models to support enterprise-level reporting.
Design and implement batch ETL pipelines for efficient data ingestion from legacy systems.
Collaborate with stakeholders to gather and understand data requirements.
Required Qualifications
Hands-on experience with Snowflake for data modeling and schema design.
Proven track record in developing ETL pipelines and understanding transformation logic.
Solid SQL skills to perform complex data transformations and optimization.
If you are passionate about building cutting-edge data solutions and want to make a significant impact, we would love to see your application!
#11290