Data Scientist
Data scientist job in Indianapolis, IN
We are seeking a Junior Data Scientist to join our large Utility client in downtown Indianapolis. This position will be hired as a Full-Time employee. This entry-level position is perfect for individuals eager to tackle real-world energy challenges through data exploration, predictive modeling, and collaborative problem-solving. As part of our team, you'll work closely with seasoned data scientists, analysts, architects, engineers, and governance specialists to generate insights that power smarter decisions and help shape the future of energy.
Key Responsibilities
Partner cross-functionally with data scientists, data architects and engineers, machine learning engineers, data analysts, and data governance experts to deliver integrated data solutions.
Collaborate with business stakeholders and analysts to define clear project requirements.
Collect, clean, and preprocess both structured and unstructured data from utility systems (e.g., meter data, customer data).
Conduct exploratory data analysis to uncover trends, anomalies, and opportunities to enhance grid operations and customer service.
Apply traditional machine learning techniques and generative AI tools to build predictive models that address utility-focused challenges, particularly in the customer domain (e.g., outage restoration, program adoption, revenue assurance).
Present insights to internal stakeholders in a clear, compelling format, including data visualizations that drive predictive decision-making.
Document methodologies, workflows, and results to ensure transparency and reproducibility.
Serve as a champion of data and AI across all levels of the client's US Utilities organization.
Stay informed on emerging industry trends in utility analytics and machine learning.
Requirements
Bachelor's degree in data science, statistics, computer science, engineering, or a related field. Master's degree or Ph.D. is preferred.
1-3 years of experience in a data science or analytics role.
Strong applied analytics and statistics skills, such as distributions, statistical testing, regression, etc.
Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn.
Proficiency in machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVMs), random forests, gradient-boosted trees, and neural networks such as convolutional neural networks (CNNs).
Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows.
Proficiency in SQL, with experience writing complex queries and working with relational data structures. Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc.
Proficient in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects. Experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred.
Experience with cloud computing platforms (GCP preferred).
Ability to manage multiple priorities in a fast-paced environment.
Interest in learning more about the customer-facing side of the utility industry.
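In practice, the pandas/NumPy/scikit-learn stack listed above comes together in short modeling workflows. The sketch below is purely illustrative: the feature names (e.g. "avg_daily_kwh") and the synthetic data are invented for the example, not drawn from any utility system.

```python
# Illustrative pandas + scikit-learn workflow: predict program adoption from
# synthetic meter/customer features. All names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "avg_daily_kwh": rng.normal(30, 8, n),     # synthetic meter data
    "tenure_months": rng.integers(1, 120, n),  # synthetic customer data
})
# Synthetic target: higher-usage customers are more likely to adopt a program.
df["adopted_program"] = (df["avg_daily_kwh"] + rng.normal(0, 5, n) > 32).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["avg_daily_kwh", "tenure_months"]], df["adopted_program"],
    test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # held-out accuracy
```

Real work in this role would add proper feature engineering, validation, and the documentation/reproducibility practices the responsibilities above call for.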
Compensation: Up to $130,000 annual salary. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
Senior Data Engineer
Data engineer job in Indianapolis, IN
Senior Data Engineer - Azure Data Warehouse (5-7+ Years Experience)
Long-term renewing contract supporting Azure-based data warehouse and dashboarding initiatives.
You will work alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.
Key Responsibilities
· Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server
· Apply Medallion architecture principles and best practices for data lake and warehouse design
· Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
· Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
· Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
· Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
· Support dashboarding initiatives with Power BI and wireframe collaboration
· Ensure auditability, lineage, and performance across SQL Server and Oracle environments
Required Skills & Experience
· 5-7+ years in data engineering, data warehouse design, and ETL development
· Strong expertise in Azure Data Factory, Databricks, and Python
· Deep understanding of SQL Server, Oracle, PostgreSQL, and Cosmos DB, along with data modeling standards
· Proven experience with Medallion architecture and data lakehouse best practices
· Hands-on with CI/CD, DevOps, and deployment automation
· Agile mindset with ability to manage multiple priorities and deliver on time
· Excellent communication and documentation skills
Bonus Skills
· Experience with GCP or AWS
· Familiarity with Jira, Confluence, and AppDynamics
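The Medallion (bronze/silver/gold) layering named above can be shown in miniature. On Databricks this would be PySpark over Delta tables; the pandas sketch below only illustrates the layering idea, and the table and column names are hypothetical.

```python
# Minimal pandas illustration of the Medallion pattern. Real pipelines would
# use PySpark/Delta Lake; names and data here are invented for the sketch.
import pandas as pd

# Bronze: raw ingested records, kept as-is (duplicates and bad rows included).
bronze = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "amount": ["10.5", "10.5", "oops", "7.0"],
    "region": ["IN", "IN", "TX", "IN"],
})

# Silver: cleansed and conformed; de-duplicate, enforce types, drop bad rows.
silver = bronze.drop_duplicates(subset="order_id").copy()
silver["amount"] = pd.to_numeric(silver["amount"], errors="coerce")
silver = silver.dropna(subset=["amount"])

# Gold: business-level aggregate ready for dashboards (e.g. Power BI).
gold = silver.groupby("region", as_index=False)["amount"].sum()
```

Each layer stays auditable: bronze preserves the raw input, silver records the cleansing rules, and gold holds only curated, consumption-ready aggregates.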
Senior Data Engineer
Data engineer job in Indianapolis, IN
Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. The successful candidate will be responsible for supporting a large-scale data modernization initiative and operationalizing the platform moving forward.
RESPONSIBILITIES:
Design, develop, and refine BI-focused data architecture and data platforms
Work with internal teams to gather requirements and translate business needs into technical solutions
Build and maintain data pipelines supporting transformation
Develop technical designs, data models, and roadmaps
Troubleshoot and resolve data quality and processing issues
Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
Mentor and support junior team members
REQUIREMENTS:
5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
5+ years of experience across end-to-end data analysis and development
Experience using Git version control
Advanced SQL skills
Strong experience with AWS cloud
PREFERRED SKILLS:
Experience with Snowflake
Experience with Python or R
Bachelor's degree in an IT-Related field
TERMS:
This is a direct hire opportunity with a salary up to $130K based on experience. They offer benefits including medical, dental, and vision coverage along with generous PTO, 401(k) matching, wellness programs, and other benefits.
Configuration Engineer
Data engineer job in Indianapolis, IN
This is a contract role available on a W2 basis. NOT AVAILABLE ON C2C.
Ability to pass a Public Trust Clearance REQUIRED
You MUST be located within 50 miles or 1 hour of one of the below locations:
Indianapolis, IN
Denison, TX
Baltimore, MD
Harrisburg, PA
Syracuse, NY
Portland, ME
Hingham, MA
This role offers the opportunity to influence large-scale systems, optimize deployment processes, and solve complex challenges in a fast-paced environment.
Required Skills
5+ years in DevOps, cloud engineering, or infrastructure as code roles
Extensive experience with AWS, including serverless services (Lambda, API Gateway, CloudFront)
Strong knowledge of Windows (2019+) and Linux systems, scripting (shell, Python), and networking
Proficiency with configuration management (Ansible, Jenkins) and orchestration tools (Terraform, Kubernetes)
Experience designing and troubleshooting container deployments, pods, and manifests
Familiarity with CI/CD pipelines, Jira, Git, and Confluence
Willingness to obtain AWS certifications if not already certified
Nice to Have Skills
Knowledge of service meshes, Helm, GitOps, and cluster security
Experience designing complex system architecture and operational workflows
Contributions to open-source projects or public repositories
Sr. Software Engineer (.NET/C#)
Data engineer job in Greenfield, IN
We are seeking a highly skilled Senior .NET Developer with a strong background in manufacturing environments and MES (Manufacturing Execution Systems) to design, develop, and maintain our internally developed plant production and business systems. This role is hands-on, yet also requires the ability to independently lead projects from concept to completion. The ideal candidate excels at cross-functional collaboration, stakeholder communication, and the entire software development lifecycle, ensuring robust and scalable solutions that meet plant and business needs.
Key Responsibilities:
Lead the design, development, and maintenance of our custom Plant Production Systems (MES, Data Collection, and traceability).
Own project scope, deadlines, and execution, including communication of status updates, risks, and deliverables to internal stakeholders and management.
Maintain and evolve our Corporate and Plant Production Systems' software code base, adhering to best practices in .NET development.
Ensure compliance with IT security policies and regulatory standards.
Provide on-call support for plant floor systems, troubleshooting issues and driving root-cause analysis.
Collaborate with IT infrastructure and server teams on networking, servers, and security.
Qualifications and Experience:
5+ years of progressive experience in .NET software development (C#, .NET Framework, .NET Core).
Proven track record in independently leading complex software projects, from requirements gathering to deployment.
Demonstrated experience with lean manufacturing concepts and supporting MES, SCADA, or traceability systems in a production environment.
Hands-on experience with SAP (or other ERP systems), ServiceNow, Ignition, and Leading2Lean is highly desirable.
Strong understanding of IT Security and Business Risk Controls.
Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
Skills and Abilities:
Technical Leadership: Ability to define technical roadmaps, architect solutions, and lead development efforts.
Excellent Debugging Skills: Capable of diagnosing complex issues spanning multiple systems or components.
Database Proficiency: Demonstrated ability to write efficient SQL queries, stored procedures, and manage database objects (SQL Server, Oracle).
Project Management: Skilled at stakeholder communication, setting realistic timelines, and adapting to shifting priorities.
Collaborative Mindset: Proven success working with cross-functional teams (e.g., Production, Operations, QA).
Strong Communication: Adept at conveying technical concepts to non-technical audiences; capable of producing clear technical documentation.
Core Technologies (Preferred):
.NET Ecosystem: C#, VB.NET, ASP.NET, .NET Core
Web Development: HTML, CSS, JavaScript
Web Services: WebAPI, RESTful, JSON
Databases: SQL Server, Oracle
DevOps & Source Control: Git
Manufacturing Tools: Inductive Automation Ignition, Leading2Lean, PTC Kepware, Telit Devicewise
Enterprise Systems: SAP, ServiceNow
Collaboration: Microsoft Office, SharePoint
Working conditions:
Location: This role is based on-site in our Greenfield, Indiana facility. While some hybrid flexibility may be available, the position requires regular presence at the manufacturing plant to effectively support production and IT operations.
Physical Demands: Required to sit or stand for long periods of time. Ability to work in a manufacturing environment, including 24/7 production facilities where associate may be on-call. Visual ability to work accurately with detailed information and computer screens.
Travel: May require occasional domestic (and possibly international) travel to other facilities.
C&Q Engineer
Data engineer job in Lebanon, IN
Process Alliance is a leading engineering consultancy firm dedicated to delivering innovative solutions in engineering, automation, manufacturing services, and medical devices. With a commitment to being a better model of problem solving, we have been at the forefront of providing cutting-edge engineering services to clients across the life science industry. Our team of experts thrives on solving complex challenges and driving technological advancements to meet the evolving needs of our clients.
Overview:
We are seeking a C&Q Engineer to support the successful startup, commissioning, and operational handover of new or upgraded pharmaceutical manufacturing systems. This role is ideal for an engineer with strong project execution skills and hands-on experience in commissioning, qualification, and operations support in GMP environments.
Key Responsibilities:
Support operational readiness planning for new equipment, utilities, and manufacturing areas.
Own or support Commissioning & Qualification (C&Q) activities including protocol development, execution, and issue resolution.
Assist with manufacturing equipment and process startup, troubleshooting issues during early batches, and improving operational reliability.
Develop and update SOPs, batch records, and operational readiness documentation.
Coordinate operator training, readiness checklists, and manufacturing process walkdowns.
Drive operational improvements and ensure compliance with GMP, safety, and regulatory requirements.
Required Qualifications:
Bachelor's degree in Engineering (Chemical, Mechanical, Biomedical, Industrial, or related discipline required).
3-5 years of engineering experience in pharmaceutical, biotech, or other regulated life sciences manufacturing.
Project management experience supporting capital projects, equipment installation, or cross-functional readiness activities.
Hands-on experience with startups, C&Q activities, FAT/SAT, or equipment installation.
Strong understanding of GMP operations, documentation, and validation principles.
Ability to work collaboratively across engineering, quality, and operations teams.
About Our Culture:
At Process Alliance, we strive to be a better model for how problems are solved, and solutions are delivered. We believe in providing a supportive and inclusive work environment where employees can thrive both personally and professionally. Join our team and be part of a company that is shaping the future of engineering solutions.
Learn more about us:
Visit our website at *********************** to explore our projects, expertise, and the impact we make in the engineering and consultancy space.
Process Alliance is an equal opportunity employer. We encourage applications from candidates of all backgrounds and experiences
Backend Software Engineer
Data engineer job in Indianapolis, IN
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
We enable #HumanFirstDIGITAL
Backend Software Engineer
Who We Are
We are passionate about transforming patients' lives, and we are bold in both decision and action - we believe that good business means a better world. That is why we come to work every single day. We commit ourselves to scientific rigor, unassailable ethics, and access to medical innovations for all. We do this today to build a better tomorrow.
Role Purpose (Summary of position)
Developing software is great, but developing software with a purpose is even better! As a Principal Backend Software Engineer, you'll work on a product that helps people with the most precious thing they have - their health. In collaborative teams of engineers, designers, product owners, and QA experts, you'll experience best-in-class software development practices daily and contribute to software that meets the highest expectations - we do not put our users' lives at risk!
Here's what we're looking for:
We are looking for an experienced, motivated Principal Backend Software Engineer who will work closely with their backend colleagues, and who ideally has built digital products and platforms. As a code-magician, you will support our efforts to improve the digital health ecosystem. You will contribute with your knowledge of Java, Spring Boot, relational databases & REST within our agile and cross-functional teams. As a flexible and open-minded person with a passion for clean code you will be a perfect addition to our team. We are committed to quality, dedicating time to code reviews, test coverage, quality days and CI/CD principles. If this resonates with you, we would love to hear from you!
You will be part of the Platform Engineering chapter working on our navify platform.
Essentials skills for your mission:
You have the required years of experience as specified by your educational background:
At least 10 years of experience working as a software engineer with a Bachelor's degree, including 7-8 years as a backend engineer.
At least 6 years of experience working as a software engineer with a Master's degree, including 5 years as a backend engineer.
At least 3 years of experience working as a software engineer for candidates with a PhD.
Equivalent work experience, which includes at least 8 years as a software engineer and 5 years as a backend engineer.
You are familiar with the following backend technologies: Java 21+ and frameworks like Spring Boot 3+
SQL and relational databases (e.g. PostgreSQL) are second nature to you
You have experience with the OpenID Connect standard and Keycloak, or another open-source identity and access management product that enables single sign-on
You enjoy developing clean, stable, testable, and performant backend code, serving our beautiful applications
You are passionate about solid technical design, clean code, and future-proof architectures
You have experience with Amazon Web Services (AWS) or other cloud providers
You enjoy guiding and sharing your knowledge with other engineers
Great written and verbal communication in English
Bonus skills:
Experienced in automated testing with Selenium or Selenide
Knowledge of Infrastructure as Code, Terraform, and GitHub Actions
Understanding of medical, security, and privacy regulations
Knowledge of the diabetes industry or other comparable health industries
Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy Policy here: Job Applicant Privacy Policy (apexon.com)
Manager, Data Scientist, DMP
Data scientist job in Indiana
Apply now
Work Type: Office Working
Employment Type: Permanent
Job Description: This role sits within the Deposit Pricing Analytics team in SCMAC. The primary focus of the role is:
* To develop AI solutions that are fit for purpose by leveraging advanced data and analytical tools and technology within WRB. The individual will be responsible for end-to-end analytics solution development, deployment, and performance assessment, and for producing high-quality data science conclusions, backed by results, for the WRB business.
* Takes end-to-end responsibility for translating business questions into data science requirements and actions. Ensures model governance, including documentation, validation, maintenance, etc.
* Responsible for performing the AI solution development and delivery for enabling high impact marketing use cases across products, segments in WRB markets.
* Responsible for alignment with country product, segment and Group product and segment teams on key business use cases to address with AI solutions, in accordance with the model governance framework.
* Responsible for development of pricing and optimization solutions for markets
* Responsible for conceptualizing and building high impact use cases for deposits portfolio
* Responsible for implementation and tracking of use case in markets and leading discussions with governance team on model approvals
Key Responsibilities
Business
* Analyse and agree on the solution Design for Analytics projects
* On the agreed methodology develop and deliver analytical solutions and models
* Partner creating implementation plan with Project owner including models benefit
* Support on the deployment of the initiatives including scoring or implementation though any system
* Consolidate or Track Model performance for periodic model performance assessment
* Create the technical and review documents for approval
* Client Lifecycle Management (Acquire, Activation, Cross-Sell/Up-Sell, Retention & Win-back)
* Enable scientific "test and learn" for direct to client campaigns
* Pricing analytics and optimization
* Digital analytics including social media data analytics for any new methodologies
* Channel optimization
* Client wallet utilization prediction both off-us and on-us
* Client and product profitability prediction
Processes
* Continuously improve the operational efficiency and effectiveness of processes
* Ensure effective management of operational risks within the function and compliance with applicable internal policies, and external laws and regulations
Key stakeholders
* Group/Region Analytics teams
* Group / Region/Country Product & Segment Teams
* Group / Region / Country Channels/distribution
* Group / Region / Country Risk Analytics Teams
* Group / Regional / Country Business Teams
* Support functions including Finance, Technology, Analytics Operation
Skills and Experience
* Data Science
* Anti-Money Laundering policies & procedures
* Modelling: Data, Process, Events, Objects
* Banking Product
* 2-4 years of experience (overall)
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
* Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
* Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
* Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term
What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
* Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
* Time-off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days.
* Flexible working options based around home and office locations, with flexible working patterns.
* Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
* A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
* Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
Data Scientist
Data scientist job in Indianapolis, IN
**Who We Are and What We Do**
At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture - leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
We are seeking a highly skilled **Data Scientist** with experience in bioprocess control to join our global Data Science team. This role will focus on applying artificial intelligence, machine learning, and statistical modeling to develop active bioprocess control algorithms, optimize bioprocess workflows, improve operational efficiency, and enable data-driven decision-making in manufacturing and R&D environments.
**What You'll Do:**
+ **Design and implement active process control strategies** , leveraging online measurements and dynamic parameter adjustment for improved productivity and safety.
+ **Develop and deploy predictive models** for bioprocess optimization, including fermentation and downstream processing.
+ Partner with engineers and scientists to **translate process insights into actionable improvements** , such as yield enhancement and cost reduction.
+ **Analyze high-complexity datasets** from bioprocess operations, including sensor data, batch records, and experimental results.
+ **Collaborate with cross-functional teams** to integrate data science solutions into plant operations, ensuring scalability and compliance.
+ **Communicate findings** through clear visualizations, reports, and presentations to technical and non-technical stakeholders.
+ **Contribute to continuous improvement** of data pipelines and modeling frameworks for bioprocess control.
**What Skills You Need:**
+ M.S. + 3 years' experience or Ph.D. in Data Science, Computer Science, Chemical Engineering, Bioprocess Engineering, Statistics, or related quantitative field.
+ Strong foundation in machine learning, statistical modeling, and process control.
+ Proficiency in Python, R, or another programming language.
+ Excellent communication and collaboration skills; ability to work in multidisciplinary teams.
**Preferred Skills:**
+ Familiarity with bioprocess workflows, fermentation, and downstream processing.
+ Hands-on experience with bioprocess optimization models and active process control strategies.
+ Experience with industrial data systems and cloud platforms.
+ Knowledge of reinforcement learning or adaptive experimentation for process improvement.
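Active process control, at its simplest, closes the loop between an online measurement and a parameter adjustment. The toy proportional controller below is purely illustrative: the setpoint, gain, and first-order plant response are invented for the sketch and stand in for whatever a real bioprocess model would provide.

```python
# Hedged sketch of a proportional controller driving a measurement toward a
# setpoint (e.g. holding a fermentation temperature). All numbers are made up.
def p_controller(setpoint, measurement, kp=0.5):
    """Return a control adjustment proportional to the current error."""
    return kp * (setpoint - measurement)

temp = 25.0       # initial measurement
history = []
for _ in range(50):
    u = p_controller(30.0, temp)  # control action from the latest reading
    temp += 0.5 * u               # toy first-order plant response
    history.append(temp)
# temp converges toward the 30.0 setpoint without overshoot for these gains
```

Production bioprocess control would layer model-based or adaptive strategies (including the reinforcement learning mentioned above) on top of this basic feedback idea.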
**Benefits - How We'll Support You:**
+ Numerous development opportunities offered to build your skills
+ Be part of a company with a higher purpose and contribute to making the world a better place
+ Health benefits for you and your family on your first day of employment
+ Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
+ Excellent parental leave which includes a minimum of 16 weeks for mother and father
+ Future planning with our competitive retirement savings plan and tuition reimbursement program
+ Learn more about our total rewards package here - Corteva Benefits (*******************************************************************************)
+ Check out life at Corteva! *************************************
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to embracing our differences to enrich lives, advance innovation, and boost company performance. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, military or veteran status, pregnancy related conditions (including pregnancy, childbirth, or related medical conditions), disability or any other protected status in accordance with federal, state, or local laws.
Corteva Agriscience is an equal opportunity employer. We are committed to boldly embracing the power of inclusion, diversity, and equity to enrich the lives of our employees and strengthen the performance of our company, while advancing equity in agriculture. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability or any other protected class. Discrimination, harassment and retaliation are inconsistent with our values and will not be tolerated. If you require a reasonable accommodation to search or apply for a position, please visit:Accessibility Page for Contact Information
For US Applicants: See the 'Equal Employment Opportunity is the Law' poster. To all recruitment agencies: Corteva does not accept unsolicited third party resumes and is not responsible for any fees related to unsolicited resumes.
Join the Squad | Now Hiring a DataOps Consultant
Data engineer job in Indianapolis, IN
Onebridge, a Marlabs Company, is an AI and data analytics consulting firm that strives to improve outcomes for the people we serve through data and technology. We have served some of the largest healthcare, life sciences, manufacturing, financial services, and government entities in the U.S. since 2005. We have an exciting opportunity for a highly skilled DataOps Consultant to join an innovative and dynamic group of professionals at a company rated among the top “Best Places to Work” in Indianapolis since 2015.
DataOps Consultant | About You
As a DataOps Consultant, you are responsible for ensuring the seamless integration, automation, and optimization of data pipelines and infrastructure. You excel at collaborating with cross-functional teams to deliver scalable and efficient data solutions that meet business needs. With expertise in cloud platforms, data processing tools, and version control, you maintain the reliability and performance of data operations. Your focus on data integrity, quality, and continuous improvement drives the success of data workflows. Always proactive, you are committed to staying ahead of industry trends and solving complex challenges to enhance the organization's data ecosystem.
DataOps Consultant | Day-to-Day
Develop, deploy, and maintain scalable and efficient data pipelines that handle large volumes of data from various sources.
Collaborate with Data Engineers, Data Scientists, and Business Analysts to ensure that data solutions meet business requirements and optimize workflows.
Monitor, troubleshoot, and optimize data pipelines to ensure high availability, reliability, and performance.
Implement and maintain automation for data ingestion, transformation, and deployment processes to improve efficiency.
Ensure data quality by implementing validation checks and continuous monitoring to detect and resolve issues.
Document data processes, pipeline configurations, and troubleshooting steps to maintain clarity and consistency across teams.
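The validation-check responsibility above is often implemented as a set of small, composable rules applied to each record. A minimal, stdlib-only sketch (all field and rule names here are illustrative, not tied to any particular stack; production teams typically use a framework such as Great Expectations or dbt tests):

```python
# Minimal sketch of rule-based data-quality checks for a pipeline stage.
# Field names and rules are invented for illustration.

def validate_records(records, rules):
    """Apply each named rule to every record; return a list of issues found."""
    issues = []
    for i, record in enumerate(records):
        for name, rule in rules.items():
            if not rule(record):
                issues.append({"row": i, "rule": name, "record": record})
    return issues

# Example rules: a required field must be present, a metric must be non-negative.
rules = {
    "meter_id_present": lambda r: bool(r.get("meter_id")),
    "kwh_non_negative": lambda r: isinstance(r.get("kwh"), (int, float)) and r["kwh"] >= 0,
}

records = [
    {"meter_id": "M-001", "kwh": 42.5},
    {"meter_id": "", "kwh": 10.0},       # fails meter_id_present
    {"meter_id": "M-003", "kwh": -3.0},  # fails kwh_non_negative
]

issues = validate_records(records, rules)
print(len(issues))  # 2
```

Checks like these can run inline in an ingestion step or as a separate monitoring job, with the issue list routed to alerting.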
DataOps Consultant | Skills & Experience
5+ years of experience working in DataOps or related fields, with strong hands-on experience in cloud platforms (AWS, Azure, Google Cloud) for data storage, processing, and analytics.
Proficiency in programming languages such as Python, Java, or Scala for building and maintaining data pipelines.
Experience with data orchestration tools like Apache Airflow, Azure Data Factory, or similar workflow-automation tooling.
Expertise in big data processing frameworks (e.g., Apache Kafka, Apache Spark, Hadoop) for handling large volumes of data.
Hands-on experience with version control systems such as Git for managing code and deployment pipelines.
Solid understanding of data governance, security best practices, and regulatory compliance standards (e.g., GDPR, HIPAA).
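Orchestration tools like the Apache Airflow and Azure Data Factory mentioned above model a pipeline as a directed acyclic graph of tasks and run them in dependency order. A toy, stdlib-only illustration of that ordering (task names are hypothetical; a real Airflow DAG would use operators and a scheduler):

```python
# Toy illustration of DAG-based task ordering, the core idea behind
# orchestration tools such as Airflow. Task names are invented.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['extract', 'validate', 'transform', 'load', 'report']
```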
A Best Place to Work in Indiana since 2015
Associate Client Data Consultant
Data engineer job in Indianapolis, IN
As an employee-owned company, DMA prioritizes employees. Low turnover rates and tenured teams are living proof:
2025 Great Places to Work Certified
Employee stock ownership program eligibility begins on day one of employment (ESOP contribution is targeted at 6% of your annual compensation)
Company paid parental leave
Generous time off package
Multiple benefit plans, eligibility begins on day one of employment
Culturally focused on work/life balance, mental health, and the overall wellness of our employees
This will be a hybrid position based in our Indianapolis office, with a two-days-per-week in-office requirement.
Position Summary
Support the Transaction Tax division by providing technical data support to Tax Managers and Directors. Convert client data to usable formats for use by Transaction Tax Associates, Managers, and Directors.
Essential Duties and Responsibilities
Convert clients' data files
Verify clients' data files, including identification of shifting data, etc.
Add data fields to existing client files
Obtain and review clients' SAP, Oracle, JDE, or other ERP tables as appropriate and determine how they relate
Handle specific data requests from Transaction Tax Associates, Managers, and Directors
Manage large workloads in an efficient manner
Prepare and maintain working notes for all client data projects
Track time spent on client data projects for ROI analysis and client billings
Non-essential Duties and Responsibilities
Purge old data files based on DMA's retention policy
Perform other duties as assigned
Education and Qualifications
Associate degree required, Bachelor's degree preferred
2-4 years related professional experience
Advanced skills in Microsoft Excel and beginner skills in Word and Access
Prior experience with IDEA, SQL, SAP, OmniPage, and UltraEdit preferred
Strong organizational skills and attention to detail
Excellent verbal and written communication skills
Ability to successfully work under deadlines and comfortable with multi-tasking
Must be authorized to work in the U.S. without the need for employment-based visa sponsorship now or in the future. This position does not qualify for employment-based sponsorship.
#LI-HYBRID
#LI-AL1
The Company is an equal employment opportunity employer and is committed to providing equal employment opportunities to its applicants and employees. The Company does not discriminate in employment opportunities or practices on the basis of race, color, religion, gender, national origin, citizenship, age, disability, veteran status, genetic information, or any other category covered by applicable federal, state, or local law. This equal employment opportunity policy applies to all employment policies, procedures, and practices, including but not limited to hiring, promotion, compensation, training, benefits, work assignments, discipline, termination, and all other terms and conditions of employment.
It is DMA's policy to make reasonable accommodations for qualified individuals with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please contact our Human Resources team at *********************** or ************ and choose selection 6.
Advisory, Data Scientist - CMC Data Products
Data engineer job in Indianapolis, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview: The Bioproduct Research and Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS- S3, RDS, Lambda/Glue, Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
Additional Preferences
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (********************************************************) for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is $126,000 - $244,200.
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities).Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly
Data Engineer (No Sponsorship Available)
Data engineer job in Indianapolis, IN
Build Your Career at Heritage Construction + Materials!
We are looking for a highly motivated, strategic-thinking, and data-driven technical expert to join our team as a HC+M Data Engineer. This individual will help develop and implement strategic data initiatives. They will be responsible for collecting and analyzing data, implementing technical solutions/algorithms, and developing and maintaining data pipelines.
This position requires U.S. work authorization
Essential Functions
Develop and support data pipelines within our Cloud Data Platform Databricks
Monitor and optimize Databricks cluster performance, ensuring cost-effective scaling and resource utilization
Design, implement, and maintain Delta Lake tables and pipelines to ensure optimized storage, data reliability, performance, and versioning
Python application development
Automate CI/CD pipelines for data workflows using Azure DevOps
Integrate Databricks with other Azure services, such as Azure Data Factory and ADLS
Design and implement monitoring solutions, leveraging Databricks monitoring tools, Azure Log Analytics, or custom dashboards for cluster and job performance
Collaborate with cross-functional teams to support data governance using Databricks Unity Catalog
Additional duties and responsibilities as assigned, including but not limited to continuously growing in alignment with the Company's core values, competencies, and skills.
Education Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field required
Experience Qualifications
Expertise with programming languages such as Python and/or SQL
Experience with Python Data Science Libraries (PySpark, Pandas, etc.)
Experience with Test-Driven Development (TDD)
Automated CI/CD Pipelines
Linux / Bash / Docker
Database Schema Design and Optimization
Experience working in a cloud environment (Azure preferred) with strong understanding of cloud data architecture
Hands-on experience with Databricks or Snowflake Cloud Data Platforms
Experience with workflow orchestration (e.g., Databricks Jobs, or Azure Data Factory pipelines)
Skills and Abilities
Strong analytical and problem-solving skills
Experience with programming languages such as Python, Java, R, or SQL
Experience with Databricks or Snowflake Cloud Data Platforms
Knowledge of statistical analysis and data visualization tools, such as PowerBI or Tableau
Proficient in Microsoft Office
About Heritage Construction + Materials
Heritage Construction + Materials (HC+M) is part of The Heritage Group, a privately held, family-owned business headquartered in Indianapolis. HC+M has core capabilities in infrastructure building. Its collection of companies provides innovative road construction and materials services across the Midwest. HC+M companies, including Asphalt Materials, Inc., Evergreen Roadworks, Milestone Contractors and US Aggregates, proudly employ 3,000 people at 68 locations across seven states. Learn more at ***********************
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
#HeritageConstruction+Materials
Data Engineer Lead
Data engineer job in Evansville, IN
Old National Bank has been serving clients and communities since 1834. With over $70 billion in total assets, we are a regional powerhouse deeply rooted in the communities we serve. As a trusted partner, we thrive on helping our clients achieve their goals and dreams, and we are committed to social responsibility and investing in our communities through volunteering and charitable giving.
We continually seek highly motivated and talented individuals as our people are critical to our success. In return, we offer competitive compensation with our salary and incentive program, in addition to medical, dental, and vision insurance. 401K, continuing education opportunities and an employee assistance program are also included in our benefit suite. Old National also offers a variety of Impact Network Groups led by team members who are passionate about driving engagement, creating awareness of diverse backgrounds and experiences, and building inclusion across the organization. We offer a unique opportunity to join a growing, community and client-focused company that is firmly rooted in its core values.
Responsibilities
Position Summary
A Cloud Data Engineer Lead is a high-level professional responsible for designing, developing, and maintaining the data architecture in our organization. This role is responsible for overseeing the technical aspects of data management, including data integration, processing, storage, and retrieval.
The Cloud Data Engineer Lead provides technical leadership to a team of Data Engineers in the enablement and support of our cloud data platform, creation and maintenance of data pipelines, data product development, and data governance patterns to ensure the timeliness and quality of data. The role is responsible for ensuring that data is accurate, reliable, and available to Analysts, Data Scientists, and other stakeholders within our organization.
Our Data Engineering teams work in a highly collaborative environment, engage in daily standups, operate within Agile development practices, and participate in a GitOps development model in which everything is code.
Salary Range
The salary range for this position is $98,400 - $199,000 per year. Final compensation will be determined by location, skills, experience, qualifications and the career level at which the position is filled.
Key Accountabilities
Key Accountability 1: Data Engineering and Architecture Strategic Leadership
* Provide direction for the organization's data engineering and architecture strategy.
* Make final decisions on data infrastructure design and tool selection, balancing business needs with technical feasibility.
* Collaborate on requirements gathering while owning the architectural vision and execution.
* Champion data management best practices across the organization.
Key Accountability 2: Engineering Execution and Technical Implementation
* Write production-quality code and implement scalable ETL pipelines using tools like AWS, S3, Glue, Lambda, and Databricks.
* Apply business intelligence best practices such as dimensional modeling and distributed data processing.
* Design, build, and maintain modern cloud-based data platforms to support enterprise data needs including integrations, analytics, and AI.
Key Accountability 3: Team Development and Cross-Functional Collaboration
* Bootstrap and grow the data engineering team, setting standards and mentoring new members.
* Translate business requirements into technical solutions, ensuring alignment with broader organizational goals.
* Enable data-driven decision-making by delivering reliable analytics pipelines and insights.
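The dimensional modeling called out under Accountability 2 organizes data into fact tables (measurable events) and dimension tables (descriptive context) joined by surrogate keys. A toy, stdlib-only illustration of resolving fact rows against a dimension and aggregating, as a reporting query would (all table contents are invented for illustration):

```python
# Toy star-schema lookup: a fact table of transactions joined to a
# customer dimension by surrogate key. All data is invented.

dim_customer = {1: {"name": "Acme Corp", "segment": "Commercial"}}

fact_transactions = [
    {"customer_key": 1, "date_key": 20240105, "amount": 250.0},
    {"customer_key": 1, "date_key": 20240105, "amount": 100.0},
]

# Denormalize each fact row via its dimension key, then aggregate.
total_by_segment = {}
for row in fact_transactions:
    segment = dim_customer[row["customer_key"]]["segment"]
    total_by_segment[segment] = total_by_segment.get(segment, 0.0) + row["amount"]

print(total_by_segment)  # {'Commercial': 350.0}
```

In a warehouse this same join-and-aggregate shape runs in SQL over partitioned tables; the dict lookup stands in for the key join.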
Qualifications and Education Requirements
* B.S. degree in a related field and 7 to 9 years of professional experience, OR a total of 7+ years' experience as a Senior Data Engineer, Senior Software Engineer, or equivalent.
* 5 plus years working in AWS Cloud (AWS Solution Architect Certification is a plus) and hybrid cloud environments.
* 7 plus years working with data warehouse platforms.
* Coding experience in SQL and scripting languages such as Python and Bash and expert working knowledge of the Spark platform, preferably in a cloud data platform environment such as Databricks.
* Experience working with large and complex data sets with multi-terabyte scale is a plus.
* Knowledge of Agile and GitOps methodologies
* Banking experience / knowledge is a plus
* Ability to communicate effectively, both orally and in writing, with clients and groups of employees to build and maintain positive relationships
* Strong leadership and communication skills. You must be able to collaborate with other teams, effectively manage projects, and communicate technical information to non-technical stakeholders.
Data Engineer
Data engineer job in Carmel, IN
Insight Global is seeking a talented Azure Data Engineer to join one of our large utility clients on-site in Carmel, Indiana. Please find more details below, we look forward to connecting with you!
This client works closely with the US Government, so candidates need to be eligible to receive a Secret Clearance or higher.
Title: Azure Data Engineer
Client: Utilities Administration Company
Location: Carmel, IN 46032
Schedule: Hybrid onsite - 4 days per week (Monday - Thursday)
Skills Needed:
Ideally, 5+ years of prior Data Engineering experience
Expertise in Azure Cloud (experience with Azure Monitor is a plus)
Experience with the following: Azure Data Factory, Azure Synapse, PySpark, Python and SQL
Bachelor's Degree (or higher) in a related STEM discipline
Willingness to work in-office 4 days per week in Carmel, IN
Compensation: $60/hour to $75/hour. Exact compensation may vary based on several factors, including skills, experience, and education.
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
Data Engineer
Data engineer job in Indiana
Apply now | Work Type: Office Working | Employment Type: Permanent
We are seeking a skilled and motivated Data Engineer to design, implement, and optimise data systems and pipelines that support critical business operations. The ideal candidate will have expertise in database management, data modeling, and distributed systems, with a strong focus on both relational and non-relational databases. Additionally, experience in cloud infrastructure, DevOps practices, and automation is highly desirable.
Key Responsibilities
* Design, develop, and maintain scalable and efficient data pipelines for structured and unstructured data.
* Manage and optimize relational databases (PostgreSQL, RDS, Aurora) and NoSQL databases (MongoDB).
* Implement and maintain search solutions using Elasticsearch.
* Develop and manage messaging systems using Kafka for real-time data streaming, and event stores like Axon Server
* Collaborate with cross-functional teams to ensure data integrity, security, and availability.
* Monitor and troubleshoot database performance, ensuring high availability and reliability.
* Automate database operations using scripting (Shell, Liquibase) and tools like Ansible.
* Support CloudOps initiatives across AWS, Azure, and on-premise environments.
* Implement CI/CD pipelines for data workflows and infrastructure provisioning.
* Drive engagement and motivate the team to success;
* Be committed to continuously challenging yourself to improve;
* Deliver in small, consistent increments that bring value and impact to the business;
* Bring Agile and growth mindsets, and strive for excellence.
Strategy
* Awareness and understanding of the Group's business strategy and model appropriate to the role.
Business
* Awareness and understanding of Trade Finance, the wider business, economic, and market environment in which the Group operates.
Processes
* Awareness and understanding of the Bank's change management processes and DevOps practices.
People & Talent
* Lead through example and build the appropriate culture and values. Set appropriate tone and expectations from their team and work in collaboration with risk and control partners.
* Ensure the provision of ongoing training and development of people and ensure that holders of all critical functions are suitably skilled and qualified for their roles ensuring that they have effective supervision in place to mitigate any risks.
* Employ, engage, and retain high quality people, with succession planning for critical roles.
* Responsibility to review team structure/capacity plans.
* Set and monitor job descriptions and objectives for direct reports and provide feedback and rewards in line with their performance against those responsibilities and objectives.
Risk Management
* Awareness and understanding of the Bank's risk management processes and practices.
Governance
* Awareness and understanding of the Bank's governance frameworks applicable to Trade Finance and practices that support them.
Regulatory & Business Conduct
* Display exemplary conduct and live by the Group's Values and Code of Conduct.
* Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
* Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
* Lead to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
* Serve as a Director of the Board
* Exercise authorities delegated by the Board of Directors and act in accordance with Articles of Association (or equivalent)
Key stakeholders
* CIB Trade ITO Team
Qualifications
Primary Skills
* Strong experience as a DBA with expertise in RDBMS (PostgreSQL, RDS, Aurora).
* Proficiency in NoSQL databases (MongoDB).
* Hands-on experience with Elasticsearch for search and indexing.
* Knowledge of messaging systems like Kafka for distributed data processing, and event stores like Axon Server.
Secondary Skills
* Familiarity with CloudOps across AWS, Azure, or on-premise platforms.
* Experience with DevOps practices, including CI/CD pipelines.
* Automation skills using Shell scripting, Liquibase, and configuration management tools like Ansible.
Skills and Experience
* System Administration (GNU/Linux, *nix)
* On-demand Infrastructure/Cloud Computing, Storage, and Infrastructure (SaaS, PaaS, IaaS)
* Virtualization, Containerisation, and Orchestration (Docker, Podman, EKS, AKS, Kubernetes)
* Continuous Integration/Deployment (CI/CD) and Automation (Jenkins, Ansible)
* Project Management
* Infrastructure/service monitoring and log aggregation design and implementation (AppDynamics, ELK, Grafana, Prometheus, etc.)
* Distributed data processing frameworks (Hadoop, Spark, etc), big data platforms (EMR, HDInsight, etc), and event stores (Axon Server)
* NoSQL and RDBMS Design and Administration (MongoDB, PostgreSQL, Elasticsearch)
* Change Management Coordination
* Software Development
* DevOps Process Design and Implementation
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
* Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
* Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
* Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term
What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
* Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
* Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combined come to a minimum of 30 days.
* Flexible working options based around home and office locations, with flexible working patterns.
* Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits
* A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
* Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
Apply now
Metabolic Modeling Data Scientist
Data engineer job in Indianapolis, IN
Who We Are and What We Do At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture - leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
Corteva Agriscience has an exciting opportunity for a **Metabolic Modeling Data Scientist** to develop and deploy predictive genome-scale metabolic models that accelerate microbial strain optimization and bioprocess development. This role will apply their expertise to innovate nature-inspired solutions to global challenges in agriculture. They will join a strong molecular data science team and work closely with cross-functional teams from early discovery to downstream process engineering to generate actionable hypotheses, quantify flux bottlenecks, and translate multiomics and process data into decisions for Crop Health R&D.
**What You'll Do:**
+ Develop and maintain genome‑scale metabolic models for model and non‑model organisms; implement flux balance analysis (FBA/dFBA) and related approaches.
+ Integrate multi‑omics datasets (genomics, transcriptomics, proteomics, metabolomics) into metabolic models to generate testable hypotheses.
+ Apply AI/ML tools (e.g., ML‑assisted flux prediction, pathway inference etc.), perform metabolic flux analysis and estimate theoretical yields to inform media, feeding strategies, and optimization of titer/rate/yield.
+ Reconstruct and validate metabolic pathways, including completing missing steps and proposing/assessing novel routes to increased metabolite productivity.
+ Collaborate closely with strain engineering, biochemistry, and process engineering to ensure models are experimentally grounded and actionable.
+ Communicate results via clear summaries and decision‑ready recommendations for program reviews and R&D planning.
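The flux balance analysis (FBA) referenced above is typically posed as a linear program over the organism's stoichiometric matrix; in standard notation:

```latex
% Standard flux balance analysis (FBA) formulation:
% S is the m-by-n stoichiometric matrix (metabolites x reactions),
% v the reaction flux vector, c the objective weights (e.g., the
% biomass reaction), and lb/ub bounds encoding reaction
% reversibility and nutrient uptake limits.
\begin{aligned}
\max_{v}\quad & c^{\top} v \\
\text{s.t.}\quad & S v = 0 && \text{(steady-state mass balance)} \\
& lb_i \le v_i \le ub_i, && i = 1, \dots, n
\end{aligned}
```

Frameworks such as CobraPy solve exactly this program for genome-scale reconstructions, with the multi-omics integrations above typically entering through the flux bounds or the objective.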
What Skills You Need:
+ PhD in Systems Biology, Computational Biology, Bioinformatics, Chemical/Biochemical Engineering, Microbiology, or related fields.
+ Demonstrated experience building genome scale metabolic models, performing flux analysis, and integrating multi‑omics (such as genomics, transcriptomics, metabolomics) with experimental data for model refinement.
+ Strong programming skills in Python (and R) for model development and data integration
**Preferred Skills:**
+ Experience with industrially relevant non‑model microbes and fermentation datasets.
+ Proficiency with current models and techniques in this space, including tools such as the COBRA Toolbox, COBRApy, KBase, or similar frameworks.
+ Experience applying ML/AI to predictive metabolic modeling, pathway prediction, and/or flux estimation.
+ Familiarity with version control (Git), workflow/pipeline tools, and reproducible model development practices.
+ Ability to develop, improve, and apply metabolic modeling techniques to non‑model organisms.
+ Experience collaborating in cross‑disciplinary teams and ability to translate metabolic modeling insights into well-designed strain engineering experiments for validation in partnership with wet‑lab teams.
#LI-BB1
**Benefits - How We'll Support You:**
+ Numerous development opportunities offered to build your skills
+ Be part of a company with a higher purpose and contribute to making the world a better place
+ Health benefits for you and your family on your first day of employment
+ Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
+ Excellent parental leave which includes a minimum of 16 weeks for mother and father
+ Future planning with our competitive retirement savings plan and tuition reimbursement program
+ Learn more about our total rewards package here - Corteva Benefits
+ Check out life at Corteva!
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to boldly embracing the power of inclusion, diversity, and equity to enrich the lives of our employees and strengthen the performance of our company, while advancing equity in agriculture. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability or any other protected class. Discrimination, harassment and retaliation are inconsistent with our values and will not be tolerated. If you require a reasonable accommodation to search or apply for a position, please visit our Accessibility Page for contact information.
For US Applicants: See the 'Equal Employment Opportunity is the Law' poster. To all recruitment agencies: Corteva does not accept unsolicited third party resumes and is not responsible for any fees related to unsolicited resumes.
Join the Squad | Now Hiring a Snowflake Data Engineer
Data engineer job in Indianapolis, IN
Onebridge, a Marlabs Company, is a global AI and Data Analytics Consulting Firm that empowers organizations worldwide to drive better outcomes through data and technology. Since 2005, we have partnered with some of the largest healthcare, life sciences, financial services, and government entities across the globe. We have an exciting opportunity for a highly skilled Snowflake Data Engineer to join our innovative and dynamic team.
Snowflake Data Engineer | About You
As a Snowflake Data Engineer, you are responsible for transforming raw data into trusted, authoritative sources that drive business decisions. You thrive in a fast-paced, agile environment and enjoy solving complex data integration challenges. You're passionate about data commercialization and enabling self-service analytics across the organization. Your ability to collaborate across teams and leverage modern cloud technologies makes you a key player in our data strategy. You also bring a personal passion for AI and coding into your professional life, exploring cutting-edge tools like Anthropic's Claude Code, Kizer, Cursor, and more. This curiosity fuels your innovation and keeps you ahead of emerging trends in data engineering and automation.
Snowflake Data Engineer | Day-to-Day
Design, build, and maintain scalable data pipelines using Snowflake and Azure Service Bus to support enterprise-wide data initiatives.
Partner with business and technical stakeholders to develop authoritative data sources that enable data commercialization and strategic insights.
Manage and track work using Agile tools such as Jira and ServiceNow, contributing to sprint planning, backlog grooming, and daily stand-ups.
Support and optimize data integration processes, including transitioning away from legacy tools like SnowMirror to more modern solutions.
Collaborate with reporting teams to enable self-service analytics using platforms like Power BI and Sigma, while maintaining data integrity and accessibility.
Participate in code reviews, documentation efforts, and continuous improvement activities to enhance data engineering standards and practices.
Leverage personal experience with AI-powered coding assistants and automation tools to streamline development workflows and explore innovative solutions.
Snowflake Data Engineer | Skills & Experience
5+ years of experience in data engineering, data architecture, or a related technical field.
Strong hands-on experience with Snowflake and cloud-based data platforms, preferably within the Azure ecosystem.
Familiarity with Agile methodologies and experience using tools such as Jira and ServiceNow for work management and team collaboration.
Experience with data integration and messaging systems, including Azure Service Bus and similar technologies used in enterprise environments.
Working knowledge of reporting and visualization tools such as Power BI and Sigma, with an understanding of self-service analytics principles and implementation.
Exposure to Python and a willingness to collaborate with developers who use it for data-related tasks, automation, and scripting.
Enthusiasm for AI-assisted coding and experimentation with tools like Claude Code, Codex, GPT-5, Climb, Kizer, and Cursor to enhance productivity and innovation.
A Best Place to Work in Indiana since 2015
Advisory, Data Scientist - CMC Data Products
Data engineer job in Gas City, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview: The Bioproduct Research and Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS: S3, RDS, Lambda, Glue; Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
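The "hierarchical batch/lot data" modeling called out above can be pictured with a small sketch: a batch contains unit operations, each carrying time-series sensor readings. All names and fields here are hypothetical illustrations, not any Lilly or LIMS schema.

```python
# Illustrative-only sketch of a hierarchical batch/lot data model with
# time-series sensor readings attached to each unit operation.
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    timestamp_s: float
    tag: str        # hypothetical process-sensor tag, e.g. "TEMP_01"
    value: float

@dataclass
class UnitOperation:
    name: str       # e.g. "fermentation", "chromatography"
    readings: list = field(default_factory=list)

    def series(self, tag):
        """Extract one sensor's trace as (timestamp, value) pairs."""
        return [(r.timestamp_s, r.value) for r in self.readings if r.tag == tag]

@dataclass
class Batch:
    lot_id: str
    operations: list = field(default_factory=list)

# Build a toy batch and pull one sensor's time series out of the hierarchy.
ferm = UnitOperation("fermentation", [
    SensorReading(0.0, "TEMP_01", 36.8),
    SensorReading(60.0, "TEMP_01", 37.1),
    SensorReading(60.0, "PH_01", 7.0),
])
batch = Batch("LOT-0001", [ferm])
print(batch.operations[0].series("TEMP_01"))  # -> [(0.0, 36.8), (60.0, 37.1)]
```

In practice the same hierarchy would live in a warehouse or lakehouse table design rather than in-memory objects, but the batch → unit operation → sensor-trace nesting is the structural point.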
Additional Preferences
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form for further assistance. Please note this form is for individuals to request an accommodation as part of the application process; any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is $126,000 - $244,200.
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly
Data Scientist
Data engineer job in Indianapolis, IN
Who We Are and What We Do At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture - leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
We are seeking a highly skilled Data Scientist with experience in bioprocess control to join our global Data Science team. This role will focus on applying artificial intelligence, machine learning, and statistical modeling to develop active bioprocess control algorithms, optimize bioprocess workflows, improve operational efficiency, and enable data-driven decision-making in manufacturing and R&D environments.
What You'll Do:
* Design and implement active process control strategies, leveraging online measurements and dynamic parameter adjustment for improved productivity and safety.
* Develop and deploy predictive models for bioprocess optimization, including fermentation and downstream processing.
* Partner with engineers and scientists to translate process insights into actionable improvements, such as yield enhancement and cost reduction.
* Analyze high-complexity datasets from bioprocess operations, including sensor data, batch records, and experimental results.
* Collaborate with cross-functional teams to integrate data science solutions into plant operations, ensuring scalability and compliance.
* Communicate findings through clear visualizations, reports, and presentations to technical and non-technical stakeholders.
* Contribute to continuous improvement of data pipelines and modeling frameworks for bioprocess control.
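The "active process control" responsibilities above boil down to a feedback loop: read an online measurement, adjust a parameter, repeat. The sketch below is a deliberately toy proportional controller holding a fed-batch substrate setpoint; the gains, dynamics, and units are hypothetical, not a validated process model.

```python
# Minimal sketch of an active control loop for a fed-batch fermentation:
# a proportional controller adjusts feed rate from an online substrate
# measurement. All constants are illustrative toy values.
SETPOINT = 2.0      # target substrate concentration (g/L), assumed
KP = 0.8            # proportional gain, assumed
CONSUMPTION = 0.5   # max substrate consumption rate (g/L per step), toy
DT = 1.0            # control interval (one step)

def feed_rate(measured, setpoint=SETPOINT, kp=KP):
    """Proportional control: feed more when below setpoint, never negative."""
    return max(0.0, kp * (setpoint - measured))

substrate = 0.5  # starting concentration (g/L)
history = []
for step in range(50):
    u = feed_rate(substrate)
    # Toy mass balance: feed adds substrate; the culture consumes a
    # saturating (Monod-like) amount.
    consumed = CONSUMPTION * substrate / (substrate + 1.0)
    substrate = max(0.0, substrate + DT * (u - consumed))
    history.append(substrate)

print(f"final substrate: {history[-1]:.2f} g/L")
```

The loop settles slightly below the setpoint, the classic steady-state offset of pure proportional control; adding integral action (PI/PID) or a model-predictive layer, as in real bioprocess control, removes it.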
What Skills You Need:
* M.S. + 3 years' experience or Ph.D. in Data Science, Computer Science, Chemical Engineering, Bioprocess Engineering, Statistics, or related quantitative field.
* Strong foundation in machine learning, statistical modeling, and process control.
* Proficiency in Python, R, or another programming language.
* Excellent communication and collaboration skills; ability to work in multidisciplinary teams.
Preferred Skills:
* Familiarity with bioprocess workflows, fermentation, and downstream processing.
* Hands-on experience with bioprocess optimization models and active process control strategies.
* Experience with industrial data systems and cloud platforms.
* Knowledge of reinforcement learning or adaptive experimentation for process improvement.
#LI-BB1
Benefits - How We'll Support You:
* Numerous development opportunities offered to build your skills
* Be part of a company with a higher purpose and contribute to making the world a better place
* Health benefits for you and your family on your first day of employment
* Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
* Excellent parental leave which includes a minimum of 16 weeks for mother and father
* Future planning with our competitive retirement savings plan and tuition reimbursement program
* Learn more about our total rewards package here - Corteva Benefits
* Check out life at Corteva!
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to embracing our differences to enrich lives, advance innovation, and boost company performance. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, military or veteran status, pregnancy related conditions (including pregnancy, childbirth, or related medical conditions), disability or any other protected status in accordance with federal, state, or local laws.