Senior Data Scientist (Senior Consultant)
Data scientist job in Chicago, IL
Job Family: Data Science Consulting
Travel Required: Up to 10%
Clearance Required: Ability to Obtain Public Trust
About our AI and Data Capability Team
Our consultants on the AI and Data Analytics Capability team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science services: from data architecture and storage, to data engineering and querying, to data visualization and dashboarding, to predictive analytics, machine learning, and artificial intelligence, as well as intelligent automation. Our services enable our clients to define their information strategy, enable mission-critical insights and data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.
What You Will Do:
Data Collection & Management: Identify, gather, and manage data from primary and secondary sources, ensuring its accuracy and integrity.
Data Cleaning & Preprocessing: Clean raw data by identifying and addressing inconsistencies, missing values, and errors to prepare it for analysis.
Data Analysis & Interpretation: Apply statistical techniques and analytical methods to explore datasets, discover trends, find patterns, and derive insights.
Data Visualization & Reporting: Develop reports, dashboards, and visualizations using tools like Tableau or Power BI to present complex findings clearly to stakeholders.
Collaboration & Communication: Work with cross-functional teams, understand business requirements, and effectively communicate insights to support data-driven decision-making.
Problem Solving: Address specific business challenges by using data to identify underperforming processes, pinpoint areas for growth, and determine optimal strategies.
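For illustration, a minimal pandas sketch of the cleaning and analysis work described above; the file name and column names are hypothetical, not part of the role:

```python
import pandas as pd

# Hypothetical survey extract; all names here are illustrative only.
df = pd.read_csv("survey_responses.csv")

# Address inconsistencies: normalize categorical labels before analysis.
df["state"] = df["state"].str.strip().str.upper()

# Handle missing values: impute numeric gaps with the median, and drop
# records missing the key field the analysis depends on.
df["income"] = df["income"].fillna(df["income"].median())
df = df.dropna(subset=["respondent_id"])

# Address errors: filter impossible values rather than silently keeping them.
df = df[df["age"].between(0, 120)]

# First pass at trends and patterns via summary statistics.
print(df.groupby("state")["income"].describe())
```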
What You Will Need:
US Citizenship is required
Bachelor's degree is required
Minimum three (3) years of experience using Power BI, Tableau, and other visualization tools to develop intuitive, user-friendly dashboards and visualizations.
Skilled in SQL, R, and other languages to assist in database querying and statistical programming.
Strong foundational knowledge and experience in statistics, probability, and experimental design.
Familiarity with cloud platforms (e.g., Amazon Web Services, Azure, or Google Cloud) and containerization (e.g., Docker).
Experience applying data governance concepts and techniques to assure greater data quality and reliability.
The curiosity and creativity to uncover hidden patterns and opportunities.
Strong communication skills to bridge technical and business worlds.
What Would Be Nice To Have:
Hands-on experience with Python, SQL, and modern ML frameworks.
Experience in data and AI system development, with a proven ability to design scalable architectures and implement reliable models.
Expertise in Python or Java for data processing.
Demonstrated work experience within the public sector.
Ability to support business development including RFP/RFQ/RFI responses involving data science / analytics.
The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer and does not discriminate on the basis of protected veteran status, disability, or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Data Scientist
Data scientist job in Chicago, IL
The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S. and has supported over 20,000 healthcare professionals and team members with close to 1,500 health and wellness offices across 48 states in four distinct categories: dental care, urgent care, medical aesthetics, and animal health. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of five consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Chapter Aesthetic Studio, and Lovet Pet Health Care. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale.
As a reflection of our current needs and planned growth, we are very pleased to offer a new opportunity to join our dedicated team as a Data Scientist.
Join us as a Data Scientist and play a key role in shaping how patients access care across a growing network of brands including Aspen Dental, ClearChoice, WellNow, Chapter Aesthetic Studio, and Lovet. In this role, you won't just analyze data - you'll lead end-to-end initiatives that shape how we optimize revenue across offices, days, and even hours. From designing strategies, deploying solutions and building performance dashboards to partnering with data science on automation and collaborating with teams across Finance, Marketing, Product, Technology and Operations, you'll have a direct hand in driving measurable results.
This role is ideal for someone who excels at turning data into decisions, building repeatable processes, and uncovering insights that drive measurable improvements in revenue performance and patient access. You will lead initiatives across forecasting, scheduling optimization, demand modeling, capacity planning, and revenue strategy, while also shaping how analytics is delivered and scaled across the organization. If you're a builder who loves solving complex problems with data, operational logic, and automation, this opportunity is for you.
Essential Responsibilities:
Revenue Strategy & Optimization
Lead strategy development for optimizing revenue performance at the office, day, and hour level by leveraging forecasting, scheduling, and demand modeling - while balancing access for patients and operational efficiency.
Build analytical frameworks to support pricing, demand forecasting, scheduling, and access optimization.
Identify revenue opportunities through data-driven analysis of booking trends, cancellations, no-shows, and utilization.
Monitor and update demand and schedule availability through the analysis of historical and future booking trends, the pricing environment, industry capacity trends, competitive landscape, and other factors.
Analytics, Insights & Experimentation
Develop and maintain forecasting, demand models, dashboards, and scenario analyses.
Run experiments and structured tests to evaluate new operational and scheduling strategies.
Create clear, actionable insights that influence senior leaders and cross-functional partners.
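To illustrate the structured testing mentioned above, here is a minimal sketch that evaluates a hypothetical scheduling experiment with a two-proportion z-test; the groups and counts are invented for illustration:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical test of a new scheduling template: offices on the new
# template (treatment) vs. a control group, compared on no-show rate.
no_shows = np.array([412, 501])    # no-show counts [treatment, control]
booked = np.array([9800, 9750])    # booked appointments per group

z_stat, p_value = proportions_ztest(count=no_shows, nobs=booked)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the difference in no-show rates is unlikely
# to be noise, which is the evidence a rollout decision would rest on.
```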
Process Building & Automation
Map existing manual workflows and identify opportunities to automate recurring analyses or reporting.
Cross-Functional Leadership
Work closely with Operations, Finance, Product, Marketing, and Clinical teams to align strategies and execution.
Help shape and scale the function by building new playbooks, reports, and best practices.
Act as a subject matter expert in forecasting, demand modeling, and capacity optimization.
Qualifications (Skills-Based):
We welcome candidates with diverse academic and career pathways. You may have gained your skills through industry experience, coursework, certificates, or hands-on practice.
Experience/Education:
5+ years of experience in Revenue Management, Pricing, Operations Research, Supply/Demand Optimization (Airline, Travel, Healthcare, or multi-location service industries preferred).
Bachelor's degree in Business, Finance, Economics, Analytics, or Statistics required; Master's degree a plus.
Experience working alongside data science/engineering teams to automate and scale analytics processes.
Exceptional analytical, problem-solving, and communication skills - with the ability to influence senior stakeholders.
Detail-oriented, self-starter mindset with a passion for driving results.
Strong analytical and quantitative skills, with experience in forecasting, modeling, or optimization.
Strong technical proficiency in SQL and a modern BI platform (e.g., Tableau, Looker).
Familiarity with scripting (e.g., Python or R) or automation tools (e.g., DBT, Airflow) - not required, but helpful.
Additional Job Description:
Base Pay Range: $115,000 - $130,000, plus 10% annual bonus (Actual pay may vary based on experience, performance, and qualifications.)
A generous benefits package that includes paid time off, health, dental, vision, and 401(k) savings plan with match.
If you are an applicant residing in California, please view our privacy policy here: *********************************************************************************
Data Scientist
Data scientist job in Chicago, IL
This role supports a financial services organization by applying advanced data science and machine learning techniques to solve complex business problems using large-scale datasets. The position focuses on end-to-end feature engineering, model development, and writing production-quality code in a fast-paced, collaborative environment. The individual partners closely with product and engineering teams to uncover trends, improve algorithm performance, and drive data-informed decisions.
Key Responsibilities
Independently analyze and aggregate large, complex datasets to identify anomalies that affect model and algorithm performance
Own the full lifecycle of feature engineering, including ideation, development, validation, and selection
Develop and maintain production-quality code in a fast-paced, agile environment
Solve challenging analytical problems using extremely large (terabyte-scale) datasets
Evaluate and apply a range of machine learning techniques to determine the most effective approach for business use cases
Collaborate closely with product and engineering partners to identify trends, opportunities, and data-driven solutions
Communicate insights, results, and model performance clearly through visualizations and explanations tailored to non-technical stakeholders
Adhere to established standards and practices to ensure the security, integrity, and confidentiality of systems and data
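For illustration, a minimal scikit-learn sketch of the feature-engineering lifecycle named above (ideation, development, validation, selection); the schema, file name, and target are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical transaction extract; not a real schema.
df = pd.read_parquet("transactions.parquet")

# Feature development: derive behavioral aggregates per account.
features = df.groupby("account_id").agg(
    txn_count=("amount", "size"),
    avg_amount=("amount", "mean"),
    max_amount=("amount", "max"),
)
labels = df.groupby("account_id")["is_fraud"].max()

# Feature validation and selection: score the candidate set with
# cross-validation so decisions rest on out-of-sample performance.
model = GradientBoostingClassifier()
scores = cross_val_score(model, features, labels, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```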
Minimum Qualifications
Bachelor's degree in Mathematics, Statistics, Computer Science, Operations Research, or a related field
At least 4 years of professional experience in data science, analytics, engineering, or a closely related discipline
Hands-on experience building data science pipelines and workflows using Python, R, or similar programming languages
Strong SQL skills, including query development and performance tuning
Experience working with large-scale, high-volume datasets (terabyte-scale)
Practical experience applying a variety of machine learning methods and understanding the parameters that impact model performance
Familiarity with common machine learning libraries (e.g., scikit-learn, Spark ML, or similar)
Experience with data visualization tools and techniques
Ability to write clean, maintainable, and production-ready code
Strong interest in rapid prototyping, experimentation, and proof-of-concept development
Proven ability to communicate complex analytical findings to non-technical audiences
Ability to meet standard employment screening requirements
Data Scientist
Data scientist job in Chicago, IL
Minimum Qualifications:
● 5+ years of experience leading data science projects that have a direct impact on a company's objectives, or a PhD in a quantitative field such as Statistics, Data Science, or Computer Science with 3+ years of experience.
● 5+ years of experience utilizing data mining techniques and ML models to assist business decision-making.
● Deep expertise in statistical methods and machine learning concepts, with the ability to mentor team members on methodologies, model tuning, and evaluation techniques.
● 2+ years of hands-on experience with deep learning frameworks, LLMs, GenAI tools, and NLP techniques.
● 5+ years of experience using Python to process large, diverse datasets and to develop and deploy predictive models in cloud-based environments and other computing platforms.
● 5+ years of experience in SQL and cloud-hosted data platforms (Google Cloud Platform, AWS, etc.).
● Demonstrated ability to assist business decision-making through data mining and machine learning.
● Strong communication skills to collaborate effectively with business stakeholders; must be able to interact cross-functionally and drive both business and technical discussions.
● Ability to translate complex business problems into actionable project plans and solve
Data Scientist
Data scientist job in Chattanooga, TN
BUILT TO CONNECT
At Astec, we believe in the power of connection and the importance of building long-lasting relationships with our employees, customers and the communities we call home. With a team more than 4,000 strong, our employees are our #1 advantage. We invest in skills training and provide opportunities for career development to help you grow along with the business. We offer programs that support physical safety, as well as benefits and resources to enhance total health and wellbeing, so you can be your best at work and at home.
Our equipment is used to build the roads and infrastructure that connects us to each other and to the goods and services we use. We are an industry leader known for delivering innovative solutions that create value for our customers. As our industry evolves, we are using new technology and data like never before.
We're looking for creative problem solvers to build the future with us. Connect with us today and build your career at Astec.
LOCATION: Chattanooga, TN On-site / Hybrid (Role must report on-site regularly)
ABOUT THE POSITION
The Data Scientist will play a key role in establishing the analytical foundation of Astec Smart Services. This individual will lead efforts to build pipelines from source to cloud, define data workflows, build predictive models, and help guide the team's approach to turning data into customer value. He or she will work closely within Smart Services and cross-functionally to ensure insights are actionable and impactful. The role blends data architecture, data engineering, and data science to help build the Smart Services analytical foundation. This person will be instrumental in helping to build Astec's digital transformation and aftermarket strategy.
Deliverables & Responsibilities
Data Engineering:
Build and maintain robust data pipelines for ingestion, transformation, and storage.
Optimize ETL processes for scalability and performance.
Data Architecture:
Design and implement data models that support analytics and operational needs.
Define standards for data governance, security, and integration.
Data Science:
Develop predictive models and advanced analytics to support business decisions.
Apply statistical and machine learning techniques to large datasets.
Strong business acumen to understand decision drivers with internal and external customers
Collaborate with individuals and departments across the company to ensure insights are aligned with customer needs and drive value.
To be successful in this role, your experience and competencies are:
Bachelor's degree in data science, engineering, or related field. (Adv. degrees a plus.)
5+ years of experience in data science, including at least 3 years in industrial or operational environments.
Strong communication and project management skills are critical.
Proficiency in data pipeline tools (e.g., Spark, Airflow) and cloud platforms (Azure, AWS, GCP).
Strong understanding of data modeling principles and database technologies (SQL/NoSQL).
Hands-on experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and statistical analysis.
Ability to work across data architecture design and data science experimentation.
Programming: Python, SQL, and optionally Scala or Java.
Familiarity with distributed systems and big data technologies.
Strong communication skills for translating technical insights into business value.
Ability to work across technical, commercial, and customer-facing teams.
Supervisor and Leadership Expectations
This role will not have supervisory or managerial responsibilities.
This role will have program management responsibilities.
Our Culture and Values
Employees that become part of Astec embody the values below throughout their work.
Continuous devotion to meeting the needs of our customers
Honesty and integrity in all aspects of business
Respect for all individuals
Preserving entrepreneurial spirit and innovation
Safety, quality and productivity as means to ensure success
EQUAL OPPORTUNITY EMPLOYER
As an Equal Opportunity Employer, Astec does not discriminate on the basis of race, creed, color, religion, gender (sex), sexual orientation, gender identity, marital status, national origin, ancestry, age, disability, citizenship status, a person's veteran status or any other characteristic protected by law or executive order.
Senior Data Scientist
Data scientist job in Knoxville, TN
This person is responsible for extracting insights and building predictive models from data. Analyzes curated data, builds ML models, and validates them with experiments. The DS researches and defines business problems and translates them into analytical or ML tasks. Conducts exploratory data analysis to identify patterns, correlations, and anomalies.
They will be in charge of designing, training, and evaluating statistical, ML, or AI models. Works with SMEs to determine use cases, model requirements, and alternative solutions. Defines requirements and specifications from which Data Specialists build data structures. Works closely with Data Engineers and customers to ensure correct implementation.
Required Experience:
Bachelor's in Mathematics, Operations Research, Business, Economics, or a related field preferred.
Minimum 7+ years designing and implementing mathematical and statistical models.
Strong experience with Python/R, PyTorch, Scikit-Learn, Jupyter, visualization, A/B testing.
SQL (Oracle, MySQL, or Hive).
Oracle DB and Hadoop experience preferred.
Ability to work independently and under deadlines.
Strong communication and problem-solving skills.
Clearance Requirement:
Must be a US Citizen with an active DoD Secret security clearance.
Marathon TS is committed to the development of a creative, diverse and inclusive work environment. In order to provide equal employment and advancement opportunities to all individuals, employment decisions at Marathon TS will be based on merit, qualifications, and abilities. Marathon TS does not discriminate against any person because of race, color, creed, religion, sex, national origin, disability, age or any other characteristic protected by law (referred to as "protected status").
Data Architect
Data scientist job in Oak Brook, IL
GeoWealth is a Chicago-based fintech firm that offers an award-winning digital advisory platform, including Turnkey Asset Management Platform (“TAMP”) capabilities. We deliver a comprehensive and fully integrated wealth management technology platform to professionals in the financial services industry.
OPPORTUNITY:
We're looking for a Data Architect to join our Engineering Team. In this role, you will oversee the overall data architecture, helping us deliver our best-in-class solutions to our customers. This role will be key in organizing, designing, and leading our team through well-designed data architecture. If you love architecting complex systems, delivering customer-focused software, designing best-in-class systems, and leading data architecture design, this role is for you.
RESPONSIBILITIES:
Own data architecture and oversee data implementation
Set coding/implementation standards
Lead our data warehouse design
Deliver performant, maintainable, and quality software in collaboration with our teams.
Improve our database design to reduce replication and increase performance
Partner with other architects and engineers to produce better designed systems
SKILLS, KNOWLEDGE, AND EXPERIENCE:
5+ years of experience as Data Architect or equivalent role
Bachelor's degree in computer science or equivalent degree
Hands-on experience with Oracle
Designed and implemented data warehouse
Experience with the following is preferred but not required: designing and building monolithic and distributed systems, Postgres, Logi Symphony, PowerBI, Java and JIRA/Confluence
COMPANY CULTURE & PERKS - HIGHLIGHTS:
Investing in Your Growth 🌱
Casual work environment with fun, hard-working, and open-minded coworkers
Competitive salary with opportunity for performance-based annual bonus
Opportunities to up-skill, explore new responsibilities, and network across departments
Defined and undefined career pathways allowing you to grow your own way
Work/Life Balance 🗓️
Flexible PTO and work schedule to ensure our team balances work and life
Hybrid work schedule
Maternity and paternity leave
Taking Care of Your Future ♥️
Medical, Dental, and Vision, Disability insurance
Free access to Spring Health, a comprehensive mental health solution
401(k) with company match and a broad selection of investments
Voluntary insurance: short-term disability, long-term disability, and life insurance
FSA and transit benefits for employees that contribute pre-tax dollars
Other Fun Stuff ⭐
Free on-site gym and parking
Weekly catered lunches in the office, plus monthly happy hours
Stocked kitchen with snacks and drinks
GeoWealth was recognized as a “Best Place to Work” by Purpose Jobs in 2025, 2024 and 2022
GeoWealth was recognized as “Best Place to Work” by Built In in 2024, 2023 and 2022
SALARY RANGE:
Starting at $170,000-$220,000 + Benefits + Opportunity for Performance Bonus
This is an estimated range based on the circumstances at the time of posting; however, it may change based on a combination of factors, including but not limited to skills, experience, education, market factors, geographical location, budget, and demand.
AI Data Strategy Internship
Data scientist job in Chicago, IL
Chicago-based, no relocation support; we are looking to fill this role ASAP.
*This role is intended to be converted to a full-time position for the right candidate.
*No ChatGPT use during the interview.
The AI Data Strategy Intern will play a key role in advancing data-driven decision-making processes while contributing to the development of AI models and supporting and enhancing the organization's AI environment. Reporting to the Director of Data Strategy, the intern will work with diverse datasets, explore their sources, and utilize the data warehouse for data extraction, transformation, and analysis. This role offers an excellent opportunity to gain hands-on experience in AI and data analytics while supporting the mission to strengthen faith communities and make a meaningful impact.
Essential Functions & Job Performance Criteria
Analyze donor data to uncover insights and patterns.
Utilize SQL to cleanse, transform, and extract donor information.
Assist in developing and maintaining fundraising performance reports using Power BI.
Contribute to the development, maintenance, and optimization of AI models to support predictive fundraising outcomes and other organizational initiatives.
Support and enhance the organization's AI environment, ensuring its effective use in analytics and decision-making.
Perform data management tasks such as deduplication, migration, and addressing data integrity issues.
Adhere to security standards and protocols in handling donor information.
Troubleshoot and resolve data-related challenges effectively.
Identify opportunities for process enhancement and efficiency within fundraising operations.
Assist with other duties as needed to support fundraising and AI-related initiatives.
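As a small illustration of the deduplication and data-integrity tasks listed above, a pandas sketch with invented field names:

```python
import pandas as pd

# Hypothetical donor extract; field names are illustrative only.
donors = pd.read_csv("donor_records.csv")

# Normalize the fields used as the match key so near-duplicates line up.
donors["email"] = donors["email"].str.strip().str.lower()
donors["last_name"] = donors["last_name"].str.strip().str.title()

# Deduplicate: keep the most recently updated record per donor.
donors = (
    donors.sort_values("last_updated", ascending=False)
          .drop_duplicates(subset=["email"], keep="first")
)

# A simple integrity check worth reporting after any migration.
print(f"{donors['donor_id'].isna().sum()} records missing a donor_id")
```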
Qualifications
Pursuing or holding a bachelor's degree in Data Science, Computer Science, or Engineering
Proficiency in data structures and SQL, with strong data manipulation skills.
Familiarity with data analytics concepts and AI/ML fundamentals.
Interest in AI development and its applications.
Strong problem-solving skills with a technical mindset.
Ability to communicate analysis results to both technical and non-technical audiences.
Demonstrates a growth mindset and a positive attitude.
Interest in working with data for performance measurement, decision-making, and AI applications.
Ability to collaborate effectively in a team-oriented, fast-paced environment at our downtown Chicago location.
Strong organizational and prioritization skills.
Capacity to work independently while maintaining open communication.
Self-motivated with excellent organizational abilities.
Demonstrates personal integrity, credibility, and a dedication to the organization's mission.
Appreciation for and understanding of the teachings and traditions of the Roman Catholic Church.
A passion for your field and a sense of humor.
Full-time position, but part-time is optional if need be (students only).
Data Engineer
Data scientist job in O'Fallon, MO
We are seeking an experienced Data Engineer. The ideal candidate should have experience with data engineering tools and technologies, especially in Hadoop environments. The client's data ecosystem is built on an on-premise Hadoop system, and they're transitioning some data to the cloud to support external systems and enhance data sharing.
Here are some of the specific details:
Job Title: Data Engineer
Job Duration: 12 months
Job Location: O'Fallon, MO (3 days onsite).
REQUIRED SKILLS:
8 years of experience in Data Engineering.
Deep experience with Hadoop-based data ecosystems (on-premise).
Hands-on experience with Spark, Python, and SQL.
Experience with ETL tools such as Alteryx, NiFi, and SSIS.
Familiarity with cloud platforms like AWS and Azure, especially for hybrid cloud/on-premise data ecosystems.
A reasonable, good-faith estimate of the minimum and maximum for this position is $65/hr to $75/hr on W2.
Data Engineer
Data scientist job in Chicago, IL
Scaylor helps mid-market and enterprise companies make sense of their data. Most teams past $20M ARR are stuck with fragmented systems, old databases, and hundreds of spreadsheets that never quite line up. We build the pipelines that clean, normalize, and connect that data so it can actually be used.
Our platform handles everything from parsing financial models and reports to mapping tables across legacy databases and modern APIs. The goal is simple: give companies a single source of truth they can trust.
We're a small team of four - one backend engineer, one frontend engineer, and two founders. We're looking for our fifth teammate to help us scale the engine behind it all.
⸻
The Role
You'll work across data engineering and full-stack development, helping us build reliable data infrastructure that powers real workflows. You'll touch everything from ingestion and transformation pipelines to the APIs and dashboards that surface insights to clients.
You'll work directly with the founding team and help make technical decisions that define the next version of Scaylor's core platform.
⸻
What You'll Work On
• Build data pipelines that extract, clean, and standardize information from Excel files, PDFs, APIs, and legacy databases
• Design schemas and transformation logic for structured and semi-structured data
• Develop and maintain backend APIs (Python/FastAPI or Node/Express) for data access and analytics
• Help connect backend services to our frontend dashboards (React, Node.js, or similar)
• Set up and maintain AWS infrastructure (Lambda, S3, ECS, CloudFormation)
• Collaborate with clients to understand their data problems and design workflows that fix them
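For a flavor of the pipeline work above, a minimal pandas sketch that standardizes a hypothetical client Excel export into one canonical schema (the file, headers, and mapping are invented):

```python
import pandas as pd

# Hypothetical: each client sends a differently shaped Excel export.
raw = pd.read_excel("client_gl_export.xlsx", sheet_name=0)

# Map each source's headers onto one canonical schema.
COLUMN_MAP = {"Acct #": "account_id", "Desc": "description", "Amt": "amount"}
clean = raw.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]

# Normalize types so joins across legacy systems actually line up.
clean["account_id"] = clean["account_id"].astype(str).str.strip()
clean["amount"] = pd.to_numeric(clean["amount"], errors="coerce")

# Rows that fail normalization go to a review queue, not the warehouse.
rejects = clean[clean["amount"].isna()]
clean = clean.dropna(subset=["amount"])
clean.to_parquet("normalized/general_ledger.parquet")
```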
⸻
You'd Be Great Here If You
• Have 3-6 years of experience in data engineering, backend, or full-stack roles
• Write clean, maintainable code in Python + JS
• Understand ETL, data normalization, and schema mapping
• Have experience with SQL and working with legacy databases or systems
• Are comfortable managing cloud services and debugging data pipelines
• Enjoy solving messy data problems and care about building things that last
⸻
Nice to Have
• Familiarity with GCP or SQL databases
• Understanding of enterprise data flows (ERP, CRM, or financial systems)
• Experience building and deploying containers (Docker, GitHub Actions, CI/CD)
• Interest in lightweight ML or LLM-assisted data transformation
⸻
Why Join Scaylor
• Be one of the first five team members shaping the product and the company
• Work directly with the founders and help define Scaylor's technical direction
• Build infrastructure that solves real problems for real companies
• Earn meaningful equity and have a say in how the company grows
⸻
Compensation
• $130k - $150k with a raise based on set revenue triggers
• 0.4% equity
• Relocation to Chicago, IL required
Sr. Data Engineer - PERM - MUST BE LOCAL
Data scientist job in Naperville, IL
Resource 1 is in need of a Sr. Data Engineer for a full-time/permanent position with our client in Naperville, IL. The candidate must be local to Illinois because future hybrid onsite work in Naperville is expected. Our client is an employee-owned company with excellent benefits, growth opportunities and a profit-sharing bonus.
This position is focused on building modern data pipelines, integrations and back-end data solutions. Selected individual will work within cross-functional Agile teams, collaborating with product owners, business analysts and other engineers to design and deliver data solutions that power business insights and AI products.
Responsibilities:
Design and develop scalable data pipelines for ingestion, transformation and integration using AWS services.
Pull data from PostgreSQL and SQL Server to migrate to AWS.
Create and modify jobs in AWS and modify logic in SQL Server.
Create SQL queries, stored procedures and functions in PostgreSQL and RedShift.
Provide input on data modeling and schema design as needed.
Manage infrastructure through infrastructure-as-code templates (Serverless Framework), supporting new data products and services in AWS.
Support inbound/outbound data flows, including APIs, S3 replication and secured data.
Assist with data visualization/reporting as needed.
Follow an Agile development methodology, with regular workshops and standup meetings, working in two-week sprints.
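As an illustration of the PostgreSQL-to-AWS work above, a minimal sketch that stages a source table to S3 for downstream AWS jobs; the connection string, table, and bucket names are placeholders:

```python
import boto3
import pandas as pd
from sqlalchemy import create_engine

# Placeholder credentials; in practice these would come from Secrets Manager.
engine = create_engine("postgresql+psycopg2://etl_user:***@source-db/orders")

# Pull the source table in chunks and stage it as Parquet for AWS ingestion.
s3 = boto3.client("s3")
for i, chunk in enumerate(
    pd.read_sql("SELECT * FROM shipments", engine, chunksize=100_000)
):
    path = f"/tmp/shipments_{i}.parquet"
    chunk.to_parquet(path)
    # Staged files can then feed a Glue job or a Redshift COPY downstream.
    s3.upload_file(path, "data-lake-raw", f"shipments/{i}.parquet")
```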
Qualifications:
5+ years of data engineering experience.
Experience with AWS and its associated array of offerings (Glue, Redshift, Athena, S3, Lambda, Spectrum).
Strong experience with SQL, Python and PySpark.
A background in supply chain, logistics or distribution would be a plus.
Experience with Power BI is a plus.
Data Architect
Data scientist job in Chicago, IL
Job Title: Architect / Senior Data Engineer
We are seeking a highly skilled Architect / Senior Data Engineer to design, build, and optimize our modern data ecosystem. The ideal candidate will have deep experience with AWS cloud services, Snowflake, and dbt, along with a strong understanding of scalable data architecture, ETL/ELT development, and data modeling best practices.
Key Responsibilities
Architect, design, and implement scalable, reliable, and secure data solutions using AWS, Snowflake, and dbt.
Develop end-to-end data pipelines (batch and streaming) to support analytics, machine learning, and business intelligence needs.
Lead the modernization and migration of legacy data systems to cloud-native architectures.
Define and enforce data engineering best practices including coding standards, CI/CD, testing, and monitoring.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
Optimize Snowflake performance through query tuning, warehouse sizing, and cost management.
Establish and maintain data governance, security, and compliance standards across the data platform.
Mentor and guide junior data engineers, providing technical leadership and direction.
Required Skills & Qualifications
8+ years of experience in Data Engineering, with at least 3+ years in a cloud-native data environment.
Hands-on expertise in AWS services such as S3, Glue, Lambda, Step Functions, Redshift, and IAM.
Strong experience with Snowflake - data modeling, warehouse design, performance optimization, and cost governance.
Proven experience with dbt (data build tool) - model development, documentation, and deployment automation.
Proficient in SQL, Python, and ETL/ELT pipeline development.
Experience with CI/CD pipelines, version control (Git), and workflow orchestration tools (Airflow, Dagster, Prefect, etc.).
Familiarity with data governance and security best practices, including role-based access control and data masking.
Strong understanding of data modeling techniques (Kimball, Data Vault, etc.) and data architecture principles.
Preferred Qualifications
AWS Certification (e.g., AWS Certified Data Analytics - Specialty, Solutions Architect).
Strong communication and collaboration skills, with a track record of working in agile environments.
Data Architect - Pharma
Data scientist job in Chicago, IL
MathCo
Role - Data/AI Engineering Manager
Onsite - Chicago - 4 days in office (Mandatory)
Industry - Pharma (Mandatory)
As platform architect/owner, you will:
Lead the end-to-end architecture, lifecycle, and governance of the AI/Analytics platform, defining standards, reusable components, and integration patterns.
Partner with AI/Data architects to enable scalable model deployment and enhance agentic orchestration.
Translate business needs into platform features, manage onboarding, documentation, and cross-functional collaboration for platform adoption.
Oversee infrastructure-as-code, CI/CD, observability, and containerized environments to ensure reliability and scalability.
Evaluate complex technical proposals and develop actionable platform roadmaps and architecture recommendations
Stay current on key AI platform developments and assess their impact on architecture and client strategy
Coach others, recognize their strengths, and encourage them to take ownership of their personal development
Skills Required
Experience in designing, architecting, or managing distributed data and AI platforms in cloud environments (AWS, Azure, or GCP)
Proven ability to carry out complex Proof of Concept (POC), pilot projects, and limited production rollouts for AI use-cases, focusing on developing new or improved techniques and procedures.
Strong skills in pipeline/workflow optimization and data processing frameworks to evaluate architectural choices
Years of Experience
Minimum of 8 years in relevant experience, preferably with a consulting
background and experience with Pharma clients
Cloud Data Architect
Data scientist job in Chicago, IL
Must Have Technical/Functional Skills
· The Cloud Data Architect in Data & Analytics will play a key role in driving solution architecture design, evaluation, and selection; buy-vs.-build decisions; and early-phase project estimates which contribute to the business case.
· Partner with Key Roles (Business and Technology) to shape the Cloud Data & Analytics strategy, provide hands on guidance to convert the strategy to architecture/Solutions that are aligned to architecture standards and principles, leverage common solutions and services, and meet the financial targets (costs and benefits) and actionable plan.
· Furthermore, shape large Data and Analytics opportunities for TCS. Prior experience in Banking, Financial Services and Insurance, combined with strong articulation skills, strategic-thinking exposure, leadership traits, strong business acumen, customer focus, and social skills, is essential.
· Must-have skills: Cloud Data Architecture; demonstrated hands-on experience with Databricks (Delta Lake, Unity Catalog), Snowflake, Cortex AI, Fivetran, and dbt.
· On the data front, hands-on experience in Data Quality, Data Governance, MDM, and Data Modeling (Data Products, Data Mesh, Data Vault).
Roles & Responsibilities
· Strong experience in Cloud Data Architecture and Roadmap and effectively engaging various client Stakeholders on the architecture proposition.
· Strong experience in Solution Architecture (Data & Analytics)
· Must have prior experience building large scale enterprise data architectures using commercial and/or open source Data Analytics technologies. Must demonstrate knowledge of data architecture patterns
· Strong cloud architecture expertise (Azure, AWS, GCP) across large projects, with a minimum of 2-3 years of experience delivering cloud projects
· Strong solution implementation experience (platform engineering, developer experience)
· Data modelling and architecting skills including strong foundation in data warehousing concepts, data normalization, dimensional data modelling and a variety of data models including data vault
· Demonstrated experience across different data platform such as Databricks (Delta Lake, Unity Catalogue), Snowflake and Cortex AI, FiveTran, dbt, Data Virtualization tools
· Demonstrated experience in Big Data, data warehouse, data analytics projects, and/or any Information Management related projects
· Lead transformation efforts for large opportunities, i.e., build the strategy and multi-year transformation roadmap for our customers
· Strong ability and experience in engaging with both technical and non-technical stakeholders
· Strong teaming, communications and analytical skills
· Dedicated, resourceful and able to work under pressure
· As part of presales support, should be able to shape large multi-million-value deals, both proactive and reactive, by driving TCS positioning with customers and building business for TCS
· Develop Point of views, solution approaches & pitches for new opportunities
· Define Data and Technology Architecture and patterns on-prem/hybrid/Cloud using native and other partners/tools, with recommendations that suit the customer's context and objectives
· On Data front strong in Data Governance, MDM and Data Modeling
· Drive and Participate in Architecture reviews both internal and client teams
· Oversee data management, warehousing, integration, data movement, and delivery across assigned data initiatives.
· Examine data, processes, and technologies to determine the current state and critical problems of Customer's data and analytics journey
· Research and make recommendations for training, equipment, and technology to improve data use
· Build connections with the client's senior leadership (CXOs/D&A heads) and work with them to develop a clear Data and Analytics strategy and architecture that supports their business goals and objectives.
Data Architect
Data scientist job in Springfield, IL
Krasan Consulting Services is a Chicago boutique management consulting and technology solutions integrator. As a certified BEP WBE with the State of Illinois and a certified WBE, MBE, and DBE recognized by the City of Chicago, Krasan provides complex technology solutions and services for clients in the Public and Commercial sectors. With decades of experience, Krasan specializes in end-to-end technology solutions, integration, and implementation services and is proud to enable the success of our customers.
Role Title: Data Architect
Location - Remote - Springfield, IL
Job Description:
Determine where the data resides.
Create an architectural design for migration.
Plan the migration path from start to finish.
Collaborate with the IEPA during system design.
Confirm data security and integrity during migration.
Design and integrate the business and technical components of the EIMS, including future enhancements.
Confirm the solution meets all IEPA requirements and integrates seamlessly with other environmental information systems.
Provide technical leadership and guidance to development teams.
Confirm system compliance with relevant regulations
Minimum seven (7) years of experience in solution architecture and system integration.
Knowledge of environmental information or permitting systems and interoperability.
Experience with technical leadership and project management.
Knowledge of CROMERR and FedRAMP solution design.
Experience serving as Offeror's business architect (or similar role) for at least two (2) full life cycle implementations of the offered solution end-to-end.
Cloud Data Engineer - Only on W2 (USC or GC)
Data scientist job in Saint Louis, MO
Cloud Data Engineering (Production Support)
St. Louis, MO (Hybrid)
Contract
Local consultants will be considered
Only on W2 (USC or GC)
Core Responsibilities
Production support for Databricks notebooks, Spark streaming apps, and APIs.
Ownership of Level 1 / Bronze batch ingestion for ~60 data sources.
Work includes:
Data ingestion from various relational DBs or AWS → Delta Lake.
Adding new tables/columns, change management, GDPR/purge processes.
Product optimization: upgrades, resiliency work, performance improvements.
Occasional new development for shared-service tools, not business-facing projects.
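As a hedged sketch of the Level 1/Bronze batch ingestion described above; the JDBC source, credentials, and table names are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied by Databricks at runtime

# Ingest one of the ~60 relational sources as-is, adding load metadata.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//source-host:1521/ORCL")
      .option("dbtable", "SALES.ORDERS")
      .option("user", "etl_user")
      .option("password", "***")
      .load()
      .withColumn("_ingested_at", F.current_timestamp()))

# Append to the Bronze Delta table; mergeSchema tolerates new source
# columns, the "adding new tables/columns" change-management case above.
(df.write.format("delta")
   .mode("append")
   .option("mergeSchema", "true")
   .saveAsTable("bronze.orders"))
```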
Required Technical Skills
Python (must have)
Spark fundamentals (architecture, DataFrames, Spark SQL)
Databricks experience
Relational DB experience (Oracle, SQL Server, Postgres; not a DBA)
Understanding of Delta Lake
Nice-to-Haves:
Spark streaming
API development experience
Azure experience preferred; AWS acceptable
Scala not required
Senior Data Engineer
Data scientist job in Chicago, IL
Requires visa-independent candidates.
Note: OPT, CPT, and H1B holders will not be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
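For illustration, a minimal AWS Glue job sketch covering the extract-transform-load flow described above; the catalog database, table, and S3 path are placeholders, not a real environment:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Transform: rename and cast columns to the warehouse schema.
mapped = ApplyMapping.apply(frame=source, mappings=[
    ("order_id", "string", "order_id", "string"),
    ("order_total", "string", "order_total", "double"),
])

# Load: write Parquet to S3 for Redshift Spectrum or other consumers.
glue_context.write_dynamic_frame.from_options(
    frame=mapped, connection_type="s3",
    connection_options={"path": "s3://analytics-curated/orders/"},
    format="parquet")
job.commit()
```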
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Architect
Data scientist job in Saint Louis, MO
Data Engineering Lead/Architect
Contract-to-Hire
Banking Experience is highly preferred
Ideal Candidate: Experienced in a banking data modernization effort previously and can help lead/mentor the data engineering team. This individual needs to have a good solutioning/creative mindset and be willing to speak up.
Data Engineering Lead/Solution Architect
The ideal candidate will have a deep understanding of Microsoft data services, including Azure Fabric, Azure Data Factory (ADF), Azure Synapse, and ETL/ELT processes. This role focuses on designing, developing, and maintaining cloud-based data pipelines and solutions to drive our analytics and business intelligence capabilities.
Key Responsibilities:
Provide technical leadership in modernizing legacy data ingestion, ETL/ELT, and databases to cloud technologies (AWS/Azure).
Demonstrate a self-driven, ownership mindset to navigate ambiguity, resolve constraints, and mitigate risks with minimal supervision.
Implement data access, classification, and security patterns that comply with regulatory standards (PII, locational data, contractual obligations, etc.).
Build strong relationships with technical teams through effective communication, presentation, and collaboration skills.
Collaborate with stakeholders, business analysts, and SMEs to translate business requirements into scalable solutions.
Integrate data from multiple sources into cloud-based architectures, collaborating with cross-functional teams.
Work closely with data scientists, analysts, and stakeholders to meet data requirements with high-quality solutions.
Function within a matrixed team environment, sharing responsibilities across various teams.
Perform data profiling and analysis on both structured and unstructured data.
Design and map ETL/ELT pipelines for new or modified data streams, ensuring integration into on-prem or cloud-based data storage.
Automate, validate and maintain ETL/ELT processes using technologies such as Databricks, ADF, SSIS, Spark, Python, and Scala.
Proactively identify design, scope, or development issues and provide recommendations for improvement.
Conduct unit, system, and integration testing for ETL/ELT solutions, ensuring defects are resolved.
Create detailed documentation for data processes, architectures, and workflows.
Monitor and optimize the performance of data pipelines and databases.
Required Skills and Qualifications:
Experience in designing and implementing data warehouse and analytics solutions (on-premise and cloud).
Expertise in data warehousing concepts (ETL/ELT, data quality management, privacy/security, MDM), with hands-on experience using ADF, Data Factory, SSIS, and related tools.
Experience with cloud data and cloud-native data lakes/warehouses, including Microsoft Azure services (Fabric Lakehouse, ADF, Data Factory, Synapse, etc.).
Experience in Python, Scala, or Java for use with distributed processing and analytics, such as Spark.
Familiarity with CI/CD practices and tools such as Azure DevOps, Git, or Jenkins.
Soft Skills:
Proven ability to mentor team members and guide best practices for data engineering.
Strong problem-solving skills with high attention to detail.
Excellent communication skills for effective collaboration with diverse teams.
Nice to Have:
Experience with Snowflake, Databricks, AWS
Experience with containerization, microservices, streaming, and event-sourcing architecture patterns.
Knowledge of Kafka and Eventstream architectures.
Experience with Microsoft Purview
Previous experience in the financial or banking sector.
Familiarity with machine learning concepts and frameworks.
Experience with reporting tools such as Power BI or Tableau.
Education:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Data Engineer
Data scientist job in Chicago, IL
The Data Engineer will design, build, and optimize the data pipelines and models that support the firm's evolving research, analytics, and systematic portfolio construction environment. This role is central to enabling data-driven investment processes, including quantitative research, AI/ML capabilities, and front-office automation.
Candidates must have deep expertise with Snowflake, strong SQL skills, and experience integrating diverse datasets used across investment organizations. The role is highly collaborative and requires comfort working in an iterative, fast-moving environment where data needs evolve rapidly based on stakeholder input.
Responsibilities
Design, build, and enhance ETL/ELT pipelines in Snowflake, ensuring high performance, reliability, and scalability.
Integrate internal and external datasets, including pricing, research content, economic releases, market data, and security reference data.
Support real-time or near-real-time data flows where needed (e.g., pricing, indicative quotes, market-sensitive inputs).
Collaborate closely with Product Leads, Quant Developers, and UI/UX teams to ensure data structures meet the requirements of research workflows, analytical models, and user-facing applications.
Partner with front-office stakeholders to rapidly iterate on evolving analytical and data needs.
Implement data validation, monitoring, and quality frameworks to ensure accuracy and reliability across critical datasets.
Translate prototype pipelines into production-ready workflows with appropriate documentation, standards, and controls.
Contribute to data modeling standards, metadata frameworks, and data governance practices across the platform.
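For illustration, a minimal sketch of one Snowflake ELT step using streams and tasks via the Python connector; the account, warehouse, and object names are placeholders:

```python
import snowflake.connector

# Placeholder connection details; real credentials would come from a vault.
conn = snowflake.connector.connect(
    account="myorg-myaccount", user="ETL_SVC", password="***",
    warehouse="TRANSFORM_WH", database="RESEARCH")

with conn.cursor() as cur:
    # A stream captures new rows landing in the raw pricing table...
    cur.execute("""
        CREATE STREAM IF NOT EXISTS RAW.PRICES_STREAM
        ON TABLE RAW.PRICES""")
    # ...and a scheduled task merges them into the curated layer, one way
    # to keep near-real-time pricing flows fresh.
    cur.execute("""
        CREATE TASK IF NOT EXISTS RAW.LOAD_PRICES
          WAREHOUSE = TRANSFORM_WH
          SCHEDULE = '5 MINUTE'
        AS
          INSERT INTO CURATED.PRICES
          SELECT security_id, price_ts, price
          FROM RAW.PRICES_STREAM""")
    cur.execute("ALTER TASK RAW.LOAD_PRICES RESUME")
conn.close()
```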
Requirements
10+ years of data engineering experience within investment management, financial technology, or similar data-intensive environments.
Expert-level SQL, including complex queries, schema design, and performance optimization.
Deep hands-on experience with Snowflake, including advanced features such as tasks, streams, performance tuning, and secure data sharing.
Strong Python capabilities for ETL/ELT development, data processing, and workflow automation.
Experience integrating APIs and working with structured, semi-structured, and unstructured datasets.
Familiarity with NLP or AI/ML-oriented datasets (e.g., textual research content, PDFs) is a plus.
Experience with Domino or willingness to work within a Domino-based model environment.
Working knowledge of investment data structures (holdings, benchmarks, pricing, exposures) is highly preferred.
Ability to thrive in a rapid prototyping environment with evolving requirements and close partnership with front-office teams.
Data Engineer
Data scientist job in Chicago, IL
Design, develop, and maintain robust ETL/ELT data pipelines
Write and optimize complex SQL queries for analytics use cases
Work with GCP services such as BigQuery, GCS, and Dataflow
Build and manage scalable data solutions using Databricks and Apache Spark
Ensure data quality, reliability, and performance
Collaborate with data analysts, data scientists, and business stakeholders
Troubleshoot and resolve data pipeline and performance issues
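As a small illustration of the SQL and BigQuery work above, a sketch of a partition-pruned analytics query via the Python client; the project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="analytics-prod")  # placeholder project

# Filtering on the partitioning column first lets BigQuery prune
# partitions instead of scanning the whole table.
sql = """
    SELECT store_id, DATE(order_ts) AS day, SUM(total) AS revenue
    FROM `analytics-prod.sales.orders`
    WHERE order_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY store_id, day
"""
for row in client.query(sql).result():
    print(row.store_id, row.day, row.revenue)
```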
Required Skills
Strong expertise in SQL
Hands-on experience with Google Cloud Platform (GCP)
Experience with Databricks / Apache Spark
Proficiency in Python
Experience with data warehousing and data modeling concepts
Nice to Have
Experience with Airflow or other orchestration tools
Exposure to streaming technologies (Kafka, Pub/Sub)
Knowledge of CI/CD, Git, and DevOps practices