Circle is a financial technology company at the epicenter of the emerging internet of money, where value can finally travel like other digital data - globally, nearly instantly and less expensively than legacy settlement systems. This ground-breaking new internet layer opens up previously unimaginable possibilities for payments, commerce and markets that can help raise global economic prosperity and enhance inclusion. Our infrastructure - including USDC, a blockchain-based dollar - helps businesses, institutions and developers harness these breakthroughs and capitalize on this major turning point in the evolution of money and technology.
What you'll be part of:
Circle is committed to visibility and stability in everything we do. As we grow as an organization, we're expanding into some of the world's strongest jurisdictions. Speed and efficiency are motivators for our success and our employees live by our company values: High Integrity, Future Forward, Multistakeholder, Mindful, and Driven by Excellence. We have built a flexible and diverse work environment where new ideas are encouraged and everyone is a stakeholder.
What you'll be responsible for:
Circle is more than USDC. Circle builds a variety of tools to make the crypto ecosystem more efficient, from programmable wallets to CCTP to our upcoming ARC chain. These products are critical to many participants: developers, blockchains, applications, exchanges, traders, individuals, and protocols, to name a few. We need to understand how our products and our competitors' products are being used, by whom, and develop sound hypotheses about why. When we build new products like ARC, we need to be sure that the system works for all participants. We are looking for a Staff Data Scientist with deep roots in crypto to help us do this. This role will be critical to how we make decisions across the company (product, engineering, business development, marketing, compliance, etc.).
What you'll work on:
Help address a variety of questions across ARC, CCTP, Chain Expansion, programmable wallets and a suite of products we might build in the future.
Partner with and mentor a team of eager, thoughtful, and hard-working data analysts and data scientists working to improve our products and serve the ecosystem.
Build models and frameworks to understand problems around token distribution, competitive intelligence, USDC dispersion, and the decentralized ecosystem.
Keep up with blockchain research: Work closely with Blockchain Researchers across Circle to share insights and collaborate on transaction tracing, blockchain forensics, network analysis, and anomaly detection. This collaboration will help uncover deeper insights, identify fraudulent activity, improve security, and enhance compliance procedures.
What you'll bring to Circle:
Deep experience working with data in the crypto space.
6+ years of experience in data analysis or data science, with 3+ years focusing on blockchain data.
Proficiency in SQL and at least one programming language, such as R or Python, for data manipulation and analysis.
Strong understanding of statistics and experience applying statistical methods to interpret blockchain data.
Proven experience in planning and driving analytical projects.
Excellent analytical and problem-solving skills, with attention to detail.
Strong communication skills to convey complex findings and insights to both technical and non-technical partners.
Strong preference for candidates with deep knowledge and/or experience in:
Blockchain administration and tokenomics
Blockchain transaction analysis and the ability to conduct thorough investigations.
Blockchain technologies and protocols (e.g., Ethereum, Bitcoin).
Blockchain research methods, including transaction tracing, network analysis, and anomaly detection.
Blockchain analytics tools and platforms (e.g., Arkham, TRM Labs, Chainalysis, Elliptic)
Maintaining labels and tags for onchain addresses.
Stablecoin mechanics and practical knowledge of analyzing their movement on blockchains.
Join us as a Data Scientist to unlock the potential of blockchain data and contribute to the growth and security of our organization in this constantly evolving space.
Circle is on a mission to create an inclusive financial future, with transparency at our core. We consider a wide variety of elements when crafting our compensation ranges and total compensation packages.
Starting pay is determined by various factors, including but not limited to: relevant experience, skill set, qualifications, and other business and organizational needs. Please note that compensation ranges may differ for candidates in other locations.
Base Pay Range: $172,500 - $227,500
We are an equal opportunity employer and value diversity at Circle. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Additionally, Circle participates in the E-Verify Program in certain locations, as required by law.
Should you require accommodations or assistance in our interview process because of a disability, please reach out to accommodations@circle.com for support. We respect your privacy and will connect with you separately from our interview process to accommodate your needs.
#LI-Remote
Data Scientist (Technical Leadership)
Meta
Indianapolis, IN
We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
**Data Scientist (Technical Leadership) Responsibilities:**
1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies
3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
5. Build long-term vision and strategy for programs and products
6. Collaborate with executives to define and develop data platforms and instrumentation
7. Effectively communicate insights and recommendations to stakeholders
8. Define success metrics, forecast changes, and set team goals
9. Support developing roadmaps and coordinate analytics efforts across teams
**Minimum Qualifications:**
10. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
11. 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
12. 8+ years of work experience leading analytics work in IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization and strategy development
13. Experience working effectively with and leading complex analytics projects across multiple stakeholders and cross-functional teams, including Engineering, PM/TPM, Analytics and Finance
14. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
15. Experience communicating complex technical topics in a clear, precise, and actionable manner
**Preferred Qualifications:**
16. 10+ years of experience communicating the results of analyses to leadership teams to influence the strategy
17. Master's or Ph.D. degree in a quantitative field
18. Bachelor's Degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
19. 10+ years of experience doing complex quantitative analysis in product analytics
**Public Compensation:**
$210,000/year to $281,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Data Scientist, NLP
Datavant
Indianapolis, IN
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
We are looking for a motivated Data Scientist to help Datavant revolutionize the healthcare industry with AI. This is a critical role where the right candidate will have the ability to work on a wide range of problems in the healthcare industry with an unparalleled amount of data.
You'll join a team focused on deep medical document understanding, extracting meaning, intent, and structure from unstructured medical and administrative records. Our mission is to build intelligent systems that can reliably interpret complex, messy, and high-stakes healthcare documentation at scale.
This role is a unique blend of applied machine learning, NLP, and product thinking. You'll collaborate closely with cross-functional teams to:
+ Design and develop models to extract entities, detect intents, and understand document structure
+ Tackle challenges like long-context reasoning, layout-aware NLP, and ambiguous inputs
+ Evaluate model performance where ground truth is partial, uncertain, or evolving
+ Shape the roadmap and success metrics for replacing legacy document processing systems with smarter, scalable solutions
We operate in a high-trust, high-ownership environment where experimentation and shipping value quickly are key. If you're excited by building systems that make healthcare data more usable, accurate, and safe, please reach out.
**Qualifications**
+ 3+ years of experience with data science and machine learning in an industry setting, particularly in designing and building NLP models.
+ Proficiency with Python
+ Experience with the latest in language models (transformers, LLMs, etc.)
+ Proficiency with standard data analysis toolkits such as SQL, Numpy, Pandas, etc.
+ Proficiency with deep learning frameworks like PyTorch (preferred) or TensorFlow
+ Industry experience shepherding ML/AI projects from ideation to delivery
+ Demonstrated ability to influence company KPIs with AI
+ Demonstrated ability to navigate ambiguity
**Bonus Experience**
+ Experience with document layout analysis (using vision, NLP, or both).
+ Experience with Spark/PySpark
+ Experience with Databricks
+ Experience in the healthcare industry
**Responsibilities**
+ Play a key role in the success of our products by developing models for document understanding tasks.
+ Perform error analysis, data cleaning, and other related tasks to improve models.
+ Collaborate with your team by making recommendations for the development roadmap of a capability.
+ Work with other data scientists and engineers to optimize machine learning models and insert them into end-to-end pipelines.
+ Understand product use-cases and define key performance metrics for models according to business requirements.
+ Set up systems for long-term improvement of models and data quality (e.g. active learning, continuous learning systems, etc.).
**After 3 Months, You Will...**
+ Have a strong grasp of technologies upon which our platform is built.
+ Be fully integrated into ongoing model development efforts with your team.
**After 1 Year, You Will...**
+ Be independent in reading literature and doing research to develop models for new and existing products.
+ Have ownership over models internally, communicating with product managers, customer success managers, and engineers to make the model and the encompassing product succeed.
+ Be a subject matter expert on Datavant's models and a source from which other teams can seek information and recommendations.
#LI-BC1
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$136,000-$170,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement and Know Your Rights, and explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request. Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy.
Data Scientist, Generative AI
Amira Learning
Indiana
REMOTE / FULL TIME
Amira Learning accelerates literacy outcomes by delivering the latest reading science and neuroscience with AI. As the leader in third-generation edtech, Amira listens to students read out loud, assesses mastery, helps teachers supplement instruction and delivers 1:1 tutoring. Validated by independent university and SEA efficacy research, Amira is the only AI literacy platform proven to achieve gains surpassing 1:1 human tutoring, consistently delivering effect sizes over 0.4.
Rooted in over thirty years of research, Amira is the first, foremost, and only proven Intelligent Assistant for teachers and AI Reading Tutor for students. The platform serves as a school district's Intelligent Growth Engine, driving instructional coherence by unifying assessment, instruction, and tutoring around the chosen curriculum.
Unlike any other edtech tool, Amira continuously identifies each student's skill gaps and collaborates with teachers to build lesson plans aligned with district curricula, pulling directly from the district's high-quality instructional materials. Teachers can finally differentiate instruction with evidence and ease, and students get the 1:1 practice they specifically need, whether they are excelling or working below grade level.
Trusted by more than 2,000 districts and working in partnership with twelve state education agencies, Amira is helping 3.5 million students worldwide become motivated and masterful readers.
About this role:
We are seeking a Data Scientist with expertise in reading science, education, literacy, and NLP, and with practical experience building and using Gen AI (LLM, image, and/or video) models. You will help create Gen AI-based apps that will power the most widely used Intelligent Assistant in U.S. schools, already helping more than 2 million children.
We are looking for strong, education-focused engineers who have a background using the latest generative AI models, with experience in areas such as prompt engineering, model evaluation, data processing for training and fine-tuning, model alignment, and human-feedback-based model training.
Responsibilities include:
* Design methods, tools, and infrastructure to enable Amira to interact with students and educators in novel ways.
* Define approaches to content creation that will enable Amira to safely assist students to build their reading skills. This includes defining internal pipelines to interact with our content team.
* Contribute to experiments, including designing experimental details and hypothesis testing, writing reusable code, running evaluations, and organizing and presenting results.
* Work hands on with large, complex codebases, contributing meaningfully to enhance the capabilities of the machine learning team.
* Work within a fully distributed (remote) team.
* Find mechanisms to make the use of Gen AI economically viable given the limited budgets of public schools.
Who You Are:
* You have a background in early education, reading science, literacy, and/or NLP.
* You have at least one year of experience working with LLMs and Gen AI models.
* You have a degree in computer science or a related technical area.
* You are a proficient Python programmer.
* You have created performant Machine Learning models.
* You want to continue to be hands-on with LLMs and other Gen AI models over the next few years.
* You have a desire to be at a Silicon Valley start-up, with the desire and commitment that requires.
* You are able to enjoy working on a remote, distributed team and are a natural collaborator.
* You love writing code - creating good products means a lot to you. Working is fun - not a passport to get to the next weekend.
Qualifications
* Bachelor's degree, and/or relevant experience
* 1+ years of Gen AI experience - preferably in the Education SaaS industry
* Ability to operate in a highly efficient manner by multitasking in a fast-paced, goal-oriented environment.
* Exceptional organizational, analytical, and detail-oriented thinking skills.
* Proven track record of meeting/exceeding goals and targets.
* Great interpersonal, written and oral communication skills.
* Experience working across remote teams.
Amira's Culture
* Flexibility - We encourage and support you to live and work where you desire. Amira works as a truly distributed team. We worked remotely before COVID and we'll be working remotely after the pandemic is long gone. Our office is Slack. Our coffee room is Zoom. Our team works hard but we work when we want, where we want.
* Collaboration - We work together closely, using collaborative tools and periodic face to face get togethers. We believe great software is like movie-making. Lots of talented people with very different skills have to band together to build a great experience.
* Lean & Agile -- We believe in ownership and continuous feedback. Yes, we employ Scrum ceremonies. But, what we're really after is using data and learning to be better and to do better for our teachers, students, and players.
* Mission-Driven - What's important to us is helping kids. We're about tangible, measured impact.
Benefits:
* Competitive Salary
* Medical, dental, and vision benefits
* 401(k) with company matching
* Flexible time off
* Stock option ownership
* Cutting-edge work
* The opportunity to help children around the world reach their full potential
Commitment to Diversity:
Amira Learning serves a diverse group of students and educators across the United States and internationally. We believe every student should have access to a high-quality education and that it takes a diverse group of people with a wide range of experiences to develop and deliver a product that meets that goal. We are proud to be an equal opportunity employer.
The posted salary range reflects the minimum and maximum base salary the company reasonably expects to pay for this role. Salary ranges are determined by role, level, and location. Individual pay is based on location, job-related skills, experience, and relevant education or training. We are an equal opportunity employer. We do not discriminate on the basis of race, religion, color, ancestry, national origin, sex, sexual orientation, gender identity or expression, age, disability, medical condition, pregnancy, genetic information, marital status, military service, or any other status protected by law.
Lead Data Scientist
Here Holding
Indiana
What's the role?
At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity and foster inclusion to improve people's lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube Channel.
The Lead Data Scientist in this team must be passionate about innovating and developing machine learning and data analytics solutions to build our industry-leading map. We provide the opportunity to collaborate with an energetic and dedicated team that works on cutting-edge technology to create tools and services. The candidate will work with researchers, developers, architects, and IT to develop, deploy, and maintain applications in multiple environments.
Responsibilities:
Help design and build the next iteration of process automation in HERE Core Map processes employing a highly scalable Big Data infrastructure and machine learning as applied to global-scale digital map-making.
Build and test analytic and statistical models to improve a wide variety of both internal data-driven processes for map-making data decisions and system control needs.
Act as an expert and evangelist in areas of data mining, machine learning, statistics, and predictive analysis and modeling.
Function as a predictive modeling or application team lead on Core Map projects.
Who are you?
MS or PhD in a discipline such as Statistics, Applied Mathematics, Computer Science, or Econometrics with an emphasis or thesis work on one or more of the following: computational statistics/science/engineering, data mining, machine learning, and optimization.
Minimum of 8+ years related professional experience.
Knowledge of data mining and analytic methods such as regression, classifiers, clustering, association rules, decision trees, Bayesian network analysis, etc. Should have expert-level knowledge in one or more of these areas.
Knowledge of Computer Vision, Deep Learning and Point Cloud Processing algorithms.
Proficiency with a statistical analysis package and associated scripting language such as Python, R, Matlab, SAS, etc.
Programming experience with SQL, shell script, Python, etc.
Knowledge of and ideally some experience with tools such as Pig, Hive, etc., for working with big data in Hadoop and/or Spark for data extraction and data prep for analysis.
Experience with and demonstrated capability to effectively interact with both internal and external customer executives, technical and non-technical to explain uses and value of predictive systems and techniques.
Demonstrated proficiency with understanding, specifying and explaining predictive modeling solutions and organizing teams of other data scientists and engineers to execute projects delivering those solutions.
Preferred Qualifications:
Development experience with Scala
Development experience with Docker
Development experience with GIS data
Development experience with NoSQL (e.g., DynamoDB)
What You'll Get:
Challenging problems to solve
Opportunities to learn cool new things
Work that makes a difference in the world
Freedom to decide how to perform your work
Variety in the types of projects
Feedback so you will know how well you are doing
Collaborative, Supportive Colleagues
Who are we?
HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes - from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely.
At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity and foster inclusion to improve people's lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us. Watch Video
Manager, Data Scientist, DMP
Standard Chartered
Indiana
Work Type: Office Working. Employment Type: Permanent.
Job Description: This role sits within the Deposit pricing analytics team in SCMAC. The primary focus of the role is:
* To develop AI solutions that are fit for purpose by leveraging advanced data and analytical tools and technology within WRB. The individual will be responsible for end-to-end analytics solution development, deployment, and performance assessment, and for producing high-quality data science conclusions, backed by results, for the WRB business.
* Takes end-to-end responsibility for translating business questions into data science requirements and actions. Ensures model governance, including documentation, validation, and maintenance.
* Responsible for performing the AI solution development and delivery for enabling high impact marketing use cases across products, segments in WRB markets.
* Responsible for alignment with country product, segment and Group product and segment teams on key business use cases to address with AI solutions, in accordance with the model governance framework.
* Responsible for development of pricing and optimization solutions for markets
* Responsible for conceptualizing and building high impact use cases for deposits portfolio
* Responsible for implementation and tracking of use cases in markets and leading discussions with the governance team on model approvals
Key Responsibilities
Business
* Analyse and agree on the solution design for analytics projects
* Develop and deliver analytical solutions and models based on the agreed methodology
* Partner with the project owner to create the implementation plan, including model benefits
* Support the deployment of initiatives, including scoring or implementation through any system
* Consolidate and track model performance for periodic model performance assessment
* Create the technical and review documents for approval
* Client Lifecycle Management (Acquire, Activation, Cross-Sell/Up-Sell, Retention & Win-back)
* Enable scientific "test and learn" for direct to client campaigns
* Pricing analytics and optimization
* Digital analytics including social media data analytics for any new methodologies
* Channel optimization
* Client wallet utilization prediction both off-us and on-us
* Client and product profitability prediction
Processes
* Continuously improve the operational efficiency and effectiveness of processes
* Ensure effective management of operational risks within the function and compliance with applicable internal policies, and external laws and regulations
Key stakeholders
* Group/Region Analytics teams
* Group / Region/Country Product & Segment Teams
* Group / Region / Country Channels/distribution
* Group / Region / Country Risk Analytics Teams
* Group / Regional / Country Business Teams
* Support functions including Finance, Technology, Analytics Operation
Skills and Experience
* Data Science
* Anti Money Laundering Policies & procedures
* Modelling: Data, Process, Events, Objects
* Banking Product
* 2-4 years of experience (overall)
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
* Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
* Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
* Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term
What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
* Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
* Time-off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combined total a minimum of 30 days.
* Flexible working options based around home and office locations, with flexible working patterns.
* Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits.
* A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
* Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
Apply now
Information at a Glance
$61k-83k yearly est. 6d ago
Engineer II
Blue Chip Casino Hotel Spa
Data engineer job in Michigan City, IN
Boyd Gaming Corporation has been successful in each gaming jurisdiction in which we operate in the United States and is one of the premier casino entertainment companies in the country. Never content to rest upon our successes, we will continue to evolve and retain a position of leadership in our industry. Our past success, our current business philosophies and our sound business planning combine to position Boyd Gaming Corporation to maximize value for our shareholders, our team members and our communities.
Job Description
Perform maintenance duties within the specified trade skill in all casino/hotel facilities. The Engineer II incumbent will specialize in painting, carpentry, drywall, and wallpaper.
Qualifications
Plans project layouts.
Utilize proper personal protective equipment.
Responsible for dry-wall repair and preparing surfaces for painting.
Job tasks must be finished in a timely manner according to department standards.
Follow preventative maintenance schedules on equipment and rooms in assigned areas and document work performed in a database using a computer.
Respond to guest calls concerning maintenance issues in hotel rooms, on the casino floor, and other areas of the property.
Respond to calls from employees concerning maintenance issues in offices, shops, kitchen, laundry, the hotel, and casino.
Perform repair and maintenance of equipment.
Maintain good housekeeping practices in shops and work areas.
Assist Chief Engineer with maintaining machinery and safety items on the vessel.
Knowledge of general maintenance procedures.
Apprentice-level knowledge of trade skills. Knowledge of state and local codes as they apply to his or her trade skill.
Ability to apply mathematical concepts such as fractions, percentages, ratios, and proportions to practical situations.
Ability to work with mathematical concepts such as probability and statistical inference and fundamentals of plane and solid geometry and trigonometry.
Ability to read and interpret documents such as operating and maintenance instructions, procedures manuals, safety rules, blueprints, or schematics.
Ability to solve practical problems and deal with a variety of variables in situations where only limited standardization exists.
Ability to interpret a variety of instructions furnished in written, oral and diagram form.
Additional Information
Boyd Gaming is proud to be an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any other federal, state, or local protected class.
$60k-80k yearly est. 5d ago
Junior Data Scientist
AES Us 4.8
Data engineer job in Indianapolis, IN
Are you ready to be part of a company that's not just talking about the future, but actively shaping it? Join The AES Corporation (NYSE: AES), a Fortune 500 company that's leading the charge in the global energy revolution. With operations spanning 14 countries, AES is committed to shaping a future through innovation and collaboration. Our dedication to innovation has earned us recognition as one of the Top Ten Best Workplaces for Innovators by Fast Company in 2022. And with our certification as a Great Place to Work, you can be confident that you're joining a company that values its people just as much as its groundbreaking ideas.
AES is proudly ranked #1 globally in renewable energy sales to corporations, and with $12.7B in revenues in 2023, we have the resources and expertise to make a significant impact as we provide electricity to 25 million customers worldwide. As the world moves towards a net-zero future, AES is committed to meeting the Paris Agreement's goals by 2050. Our innovative solutions, such as 24/7 carbon-free energy for data centers, are setting the pace for rapid, global decarbonization.
If you're ready to be part of a company that's not just adapting to change, but driving it, AES is the place for you. We're not just building a cleaner, more sustainable future - we're powering it. Apply now and energize your career with a true leader in the global energy transformation.
Position Summary
The Junior Data Scientist will support AES's US Utilities operations by applying data science techniques to improve grid reliability, customer experience, and operational efficiency. This entry-level role is ideal for candidates passionate about solving real-world energy challenges through data exploration, predictive modeling, and cross-functional collaboration. You'll work alongside our team of experienced data scientists, data analysts, data architects and engineers, and data governance experts to deliver insights that drive smarter decisions and accelerate the future of energy.
Key Responsibilities
Work cross-functionally within the team of data scientists, data architects & engineers, machine learning engineers, data analysts, and data governance experts to support integrated data solutions.
Collaborate with business stakeholders and business analysts to define project requirements.
Collect, clean, and preprocess structured and unstructured data from utility systems (e.g., meter data, customer data).
Perform exploratory data analysis to identify trends, anomalies, and opportunities for improvement in grid operations and customer service.
Use both traditional machine learning methods and generative AI tools to create predictive models that solve utilities-focused problems, particularly in the customer space (e.g., outage restoration, customer program adoption, revenue assurance).
Present data-driven insights to internal stakeholders in a clear, concise manner, including visualizing data to provide predictive insights and drive decision making.
Document methodologies, workflows, and results to ensure reproducibility and transparency.
Be a champion of data and AI at all levels of the AES US Utilities organization.
Stay current with industry trends in utility analytics and machine learning.
Qualifications
Must Have
Bachelor's degree in data science, statistics, computer science, engineering, or a related field. Master's degree or Ph.D. is preferred.
5 years of experience in a data science or analytics role.
Strong applied analytics and statistics skills, such as distributions, statistical testing, regression, etc.
Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn.
Proficiency in traditional machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVM), convolutional neural networks (CNN), random forest, gradient-boosted trees, etc.
Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows.
Proficiency in SQL, with experience writing complex queries and working with relational data structures. Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc.
Proficient in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects. Experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred.
Proficiency in data visualization tools (e.g., Power BI, Tableau, Looker).
Strong analytical thinking and problem-solving skills.
Excellent communication and storytelling abilities.
Experience with cloud computing platforms (GCP preferred).
Ability to manage multiple priorities in a fast-paced environment.
Interest in learning more about the customer-facing side of the utility industry.
Nice to Have
Experience in the energy and/or utilities sectors.
Understanding of utility-specific data sources (e.g., SCADA, CIS, GIS).
Familiarity with SAP (especially IS-U and ERP modules).
Exposure to data governance tools (e.g., enterprise data catalog and data quality tools).
Physical Responsibilities
Ability to sit or stand for extended periods while working at a computer.
Capable of walking through office environments and occasionally visiting field or operational sites.
Must be able to communicate effectively with colleagues in person, via video conferencing, and through written formats.
Occasional travel may be required to attend meetings or support project work.
Valid driver's license may be required depending on team assignment.
AES is an Equal Opportunity Employer who is committed to building strength and delivering long-term sustainability through diversity and inclusion. Respecting all backgrounds, differences and perspectives enables us to improve the lives of our people, customers, suppliers, contractors, and the communities in which we live and work. All qualified applicants will receive consideration for employment without regard to sex, sexual orientation, gender, gender identity and/or expression, race, national origin, ethnicity, age, religion, marital status, physical or mental disability, pregnancy, childbirth, or related medical condition, military or veteran status, or any other characteristic protected under applicable law. E-Verify Notice: AES will provide the Social Security Administration (SSA) and if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization.
$63k-81k yearly est. Auto-Apply 15d ago
Data Scientist/Engineer
Cayuse Shared Services
Data engineer job in Indiana
Job Title: Data Scientist/Engineer
Type: Corp to Corp
Contract Length: 12 months - potential conversion to FTE
We are seeking a highly skilled and motivated Data Scientist/Engineer to join our dynamic and innovative team. The ideal candidate will have hands-on experience designing, building, and maintaining scalable data processing pipelines, implementing machine learning solutions, and ensuring data quality across the organization. This role requires a strong technical foundation in Azure cloud platforms, data engineering, and applied data science to support critical business decisions and technological advancements.
Responsibilities
Data Engineering
Build and Maintain Data Pipelines: Develop and manage scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, or Azure Databricks to process large volumes of data.
Data Quality and Transformation: Ensure the transformation, cleansing, and ingestion of data from a wide range of structured and unstructured sources with appropriate error handling.
Optimize Data Storage: Utilize and optimize data storage solutions, such as Azure Data Lake and Blob Storage, to ensure cost-effective and efficient data storage practices.
Machine Learning Support
Collaboration with ML Engineers and Architects: Work with Machine Learning Engineers and Solution Architects to seamlessly deploy machine learning models into production environments.
Automated Retraining Pipelines: Build automated systems to monitor model performance, detect model drift, and trigger retraining processes as needed.
Experiment Reproducibility: Ensure reproducibility of ML experiments by maintaining proper version control for models, data, and code.
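The drift monitoring described above is often implemented by comparing the live score distribution against the training-time baseline; the Population Stability Index (PSI) is one common choice. The function and thresholds below are an illustrative sketch, not a prescribed Azure implementation:

```python
from math import log

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live score sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # floor each proportion so empty bins do not produce log(0)
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]            # training-time scores
live_shifted = [0.5 + i / 200 for i in range(100)]  # drifted distribution
# a PSI above roughly 0.25 is a common trigger for retraining
drifted = psi(baseline, live_shifted) > 0.25
```

In a pipeline, crossing the threshold would raise an alert or kick off the automated retraining job rather than just setting a flag.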
Data Analysis and Preprocessing
Data Ingestion and Exploration: Ingest, explore, and preprocess both structured and unstructured data with tools such as:
Azure Data Lake Storage
Azure Synapse Analytics
Azure Data Factory
Exploratory Data Analysis (EDA): Perform exploratory data analysis using notebooks like Azure Machine Learning Notebooks or Azure Databricks to derive actionable insights.
Data Quality Assessments: Identify data anomalies, evaluate data quality, and recommend appropriate data cleansing or remediation strategies.
General Responsibilities
Pipeline Monitoring and Optimization: Continuously monitor the performance of data pipelines and workloads, identifying opportunities for optimization and improvement.
Collaboration and Communication: Communicate findings and technical requirements effectively with cross-functional teams, including data scientists, software engineers, and business stakeholders.
Documentation: Document all data workflows, experiments, and model implementations to facilitate knowledge sharing and maintain continuity of operations.
Qualifications
Proven experience in building and managing data pipelines using Azure Data Factory, Azure Synapse Analytics, or Databricks.
Strong knowledge of Azure storage solutions, including Azure Data Lake and Blob Storage.
Familiarity with data transformation, ingestion techniques, and data quality methodologies.
Proficiency in programming languages such as Python or Scala for data processing and ML integration.
Experience in exploratory data analysis and working with notebooks like Jupyter, Azure Machine Learning Notebooks, or Azure Databricks.
Solid understanding of machine learning lifecycle management and model deployment in production environments.
Strong problem-solving skills with experience detecting and addressing data anomalies.
Other Duties:
Please note this job description is not designed to cover or contain a comprehensive list of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.
Cayuse is an Equal Opportunity Employer. All employment decisions are based on merit, qualifications, skills, and abilities. All qualified applicants will receive consideration for employment in accordance with any applicable federal, state, or local law.
Pay Range USD $28.00 - USD $30.00 /Hr.
$28-30 hourly Auto-Apply 35d ago
Data Scientist
Corteva, Inc. 3.7
Data engineer job in Indianapolis, IN
Who We Are and What We Do At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture - leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
We are seeking a highly skilled Data Scientist with experience in bioprocess control to join our global Data Science team. This role will focus on applying artificial intelligence, machine learning, and statistical modeling to develop active bioprocess control algorithms, optimize bioprocess workflows, improve operational efficiency, and enable data-driven decision-making in manufacturing and R&D environments.
What You'll Do:
* Design and implement active process control strategies, leveraging online measurements and dynamic parameter adjustment for improved productivity and safety.
* Develop and deploy predictive models for bioprocess optimization, including fermentation and downstream processing.
* Partner with engineers and scientists to translate process insights into actionable improvements, such as yield enhancement and cost reduction.
* Analyze high-complexity datasets from bioprocess operations, including sensor data, batch records, and experimental results.
* Collaborate with cross-functional teams to integrate data science solutions into plant operations, ensuring scalability and compliance.
* Communicate findings through clear visualizations, reports, and presentations to technical and non-technical stakeholders.
* Contribute to continuous improvement of data pipelines and modeling frameworks for bioprocess control.
What Skills You Need:
* M.S. + 3 years' experience or Ph.D. in Data Science, Computer Science, Chemical Engineering, Bioprocess Engineering, Statistics, or a related quantitative field.
* Strong foundation in machine learning, statistical modeling, and process control.
* Proficiency in Python, R, or another programming language.
* Excellent communication and collaboration skills; ability to work in multidisciplinary teams.
Preferred Skills:
* Familiarity with bioprocess workflows, fermentation, and downstream processing.
* Hands-on experience with bioprocess optimization models and active process control strategies.
* Experience with industrial data systems and cloud platforms.
* Knowledge of reinforcement learning or adaptive experimentation for process improvement.
#LI-BB1
Benefits - How We'll Support You:
* Numerous development opportunities offered to build your skills
* Be part of a company with a higher purpose and contribute to making the world a better place
* Health benefits for you and your family on your first day of employment
* Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
* Excellent parental leave which includes a minimum of 16 weeks for mother and father
* Future planning with our competitive retirement savings plan and tuition reimbursement program
* Learn more about our total rewards package here - Corteva Benefits
* Check out life at Corteva! *************************************
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to embracing our differences to enrich lives, advance innovation, and boost company performance. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, military or veteran status, pregnancy related conditions (including pregnancy, childbirth, or related medical conditions), disability or any other protected status in accordance with federal, state, or local laws.
$67k-89k yearly est. 27d ago
Join the Squad | Now Hiring a DataOps Consultant
Onebridge 4.3
Data engineer job in Indianapolis, IN
Onebridge, a Marlabs Company, is an AI and data analytics consulting firm that strives to improve outcomes for the people we serve through data and technology. We have served some of the largest healthcare, life sciences, manufacturing, financial services, and government entities in the U.S. since 2005. We have an exciting opportunity for a highly skilled DataOps Consultant to join an innovative and dynamic group of professionals at a company rated among the top “Best Places to Work” in Indianapolis since 2015.
DataOps Consultant | About You
As a DataOps Consultant, you are responsible for ensuring the seamless integration, automation, and optimization of data pipelines and infrastructure. You excel at collaborating with cross-functional teams to deliver scalable and efficient data solutions that meet business needs. With expertise in cloud platforms, data processing tools, and version control, you maintain the reliability and performance of data operations. Your focus on data integrity, quality, and continuous improvement drives the success of data workflows. Always proactive, you are committed to staying ahead of industry trends and solving complex challenges to enhance the organization's data ecosystem.
DataOps Consultant | Day-to-Day
Develop, deploy, and maintain scalable and efficient data pipelines that handle large volumes of data from various sources.
Collaborate with Data Engineers, Data Scientists, and Business Analysts to ensure that data solutions meet business requirements and optimize workflows.
Monitor, troubleshoot, and optimize data pipelines to ensure high availability, reliability, and performance.
Implement and maintain automation for data ingestion, transformation, and deployment processes to improve efficiency.
Ensure data quality by implementing validation checks and continuous monitoring to detect and resolve issues.
Document data processes, pipeline configurations, and troubleshooting steps to maintain clarity and consistency across teams.
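The validation checks mentioned above can be as simple as a rule table applied at ingestion, routing failures to a reject queue for remediation. The schema and rules here are hypothetical, purely to illustrate the pattern:

```python
# Hypothetical record schema for illustration: each record is a dict.
RULES = {
    "customer_id": lambda v: isinstance(v, str) and v != "",
    "amount":      lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency":    lambda v: v in {"USD", "EUR", "GBP"},
}

def validate(records):
    """Split records into clean rows and rejects with their failed rules."""
    clean, rejects = [], []
    for rec in records:
        failures = [field for field, rule in RULES.items()
                    if field not in rec or not rule(rec[field])]
        if failures:
            rejects.append((rec, failures))
        else:
            clean.append(rec)
    return clean, rejects

good = {"customer_id": "c1", "amount": 10.0, "currency": "USD"}
bad = {"customer_id": "", "amount": -5, "currency": "JPY"}
clean, rejects = validate([good, bad])
```

Continuous monitoring then becomes a matter of tracking the reject rate per rule over time and alerting when it moves.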
DataOps Consultant | Skills & Experience
5+ years of experience working in DataOps or related fields, with strong hands-on experience in cloud platforms (AWS, Azure, Google Cloud) for data storage, processing, and analytics.
Proficiency in programming languages such as Python, Java, or Scala for building and maintaining data pipelines.
Experience with data orchestration tools such as Apache Airflow, Azure Data Factory, or similar tools for automating data workflows.
Expertise in big data processing frameworks (e.g., Apache Kafka, Apache Spark, Hadoop) for handling large volumes of data.
Hands-on experience with version control systems such as Git for managing code and deployment pipelines.
Solid understanding of data governance, security best practices, and regulatory compliance standards (e.g., GDPR, HIPAA).
A Best Place to Work in Indiana since 2015
$60k-81k yearly est. Auto-Apply 60d+ ago
Data Scientist Manufacturing Quality - Duneland
Monosol 4.3
Data engineer job in Portage, IN
Join MonoSol's forward-thinking Quality team and turn raw manufacturing data into actionable insights that raise product quality and process efficiency. You'll sit at the intersection of data science and manufacturing engineering, mining high-frequency production data to build predictive models and guiding teams on the levers that control critical quality attributes. You'll be an internal expert who bridges advanced data science with practical plant operations, and a critical member of the team driving digital transformation across the company.
Responsibilities
* Leadership and Direction
* Champion best practices in data governance, reproducibility, and experiment design; contribute to MonoSol's growing analytics community.
* Data Engineering
* Connect to historians/MES (manufacturing execution systems) and write efficient data pipelines that profile large time-series and batch datasets to discover the key factors driving manufacturing defects and variability.
* Predictive Modeling & Analytics
* Train, test, and maintain regression/classification models (e.g. linear regression, XGBoost, TensorFlow). Emphasize and effectively communicate model outputs to key stakeholders.
* Performance Improvement through ML
* Integrate predictive models with real-time dashboards or control-room alerts that have a positive financial impact.
* Efficient Reporting with Visualization & Storytelling
* Build clear dashboards (e.g. Power BI, Spotfire, or custom) and present findings to production, maintenance, and leadership teams.
* Model Deployment & Monitoring
* Package models for deployment in production environments using either cloud-based or on-premises infrastructure as needed. Set up dashboards and alerts to provide near-real-time insights and leading indicators for operators and engineers.
* Data and Analytics Strategy
* Make recommendations to improve data and analytics systems and platforms, contributing to the continuous improvement and refinement of data and analytics strategy at MonoSol.
* Data Architecture
* Help define data standards (naming, sampling, governance) for projects.
* Continuous Improvement & Collaboration
* Partner across manufacturing teams to translate model outputs into actions; coach colleagues on data-driven methods. Translate model findings into root-cause actions.
* Personal Development
* Stay current on manufacturing analytics, MLOps, and Six Sigma best practices; pursue certifications or conferences as needed.
Typical Tasks
* Extract, clean, and feature-engineer high-velocity plant data from manufacturing systems.
* Design and extract meaningful features from time-series, batch, and categorical data to improve model performance and interpretability.
* Build and evaluate predictive models (regularized linear regression, gaussian process regression, gradient boosting, neural networks); create simulation notebooks for process scenarios.
* Deploy models in production environments (cloud or on-premises) using APIs or integrated systems; monitor performance for drift and retrain as needed to maintain accuracy.
* Perform root cause analysis using data-driven techniques to identify sources of defects or process inefficiencies.
* Create intuitive dashboards that link inputs to predicted quality metrics.
* Work closely with quality engineers, process engineers, and production teams to define the problem, validate model results, and co-develop solutions that are both technically sound and operationally practical.
* Document methodologies, models, and findings in a clear and reproducible manner.
* Present insights to operators, engineers, and executives.
Qualifications
* Education
* Bachelor's degree in Chemical Engineering, Mechanical Engineering, Data Science, Statistics, Computer Science, or a related field (required).
* Master's degree or graduate certificate in Data Science, Analytics, or a related field (preferred).
* Experience
* 3+ years of statistical modeling, applied machine learning, data science, or advanced analytics preferably in process control and manufacturing.
* Proven success in improving yield, uptime, or quality with statistical or ML models.
* Familiarity with statistical process control (SPC), control charts, and quality metrics.
* Six Sigma Green Belt or higher (preferred).
* Technical Skills
* Proficiency in Python or R for data analysis and modeling (e.g., pandas/scikit-learn or dplyr/caret/tidymodels).
* Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
* Strong SQL skills and working with relational databases; experience with cloud platforms (AWS SageMaker, Azure ML) is a plus.
* Soft Skills
* Ability to clearly communicate and translate model outputs into actionable recommendations for manufacturing-floor teams, engineering staff, and executive leadership.
* Comfortable working in cross-functional settings and driving change.
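The statistical process control experience listed above reduces to a few lines of arithmetic: a center line at the process mean with control limits at plus and minus three sigma. This is a simplified sketch using the sample standard deviation (production SPC individuals charts usually estimate sigma from the average moving range, and the data here is invented):

```python
from statistics import mean, stdev

def control_limits(samples, k=3):
    """Individuals control chart: center line with +/- k sigma limits."""
    center = mean(samples)
    sigma = stdev(samples)  # sample standard deviation as a rough sigma estimate
    return center - k * sigma, center, center + k * sigma

def out_of_control(samples, lcl, ucl):
    """Flag points that fall outside the control limits."""
    return [x for x in samples if x < lcl or x > ucl]

# invented in-control history for a film-thickness measurement
history = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0]
lcl, cl, ucl = control_limits(history)
alerts = out_of_control(history + [12.5], lcl, ucl)
```

Points flagged this way would feed the control-room alerts and root-cause analysis described in the responsibilities above.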
Applicable only to applicants applying to a position in any location with a pay disclosure requirements under state or local law:
The compensation range described below is the base pay that the company believes in good faith it will pay for this role at the time of posting, based on the job grade for the position. Individual compensation within this range depends on many factors, such as years of experience, so the company might pay more or less than the posted range, and the range may be modified in the future.
In addition to base compensation, MonoSol provides a yearly incentive compensation bonus, a profit sharing bonus when eligible, a comprehensive benefits package including medical, dental, vision insurances, short term disability, long term disability, accidental death and dismemberment, term life insurance, voluntary term life insurance, transit flexible spending account (if applicable), employee assistance program, identity theft protection, 401k and paid time off (vacation and sick days).
Compensation range - $86,907.87 - $146,142.27
Incentive Compensation Bonus Target - 10%
Paid time off amount - 15 days
Closing
The above statements are intended to describe the general nature and level of the work being performed by employees assigned to this position. This is not intended as an exhaustive list of all responsibilities, duties, and skills required. MonoSol, LLC reserves the right to make changes to the job description whenever necessary.
Disclaimer
As part of MonoSol, LLC's employment process, finalist candidates will be required to complete a drug / alcohol test, physical, and background check prior to employment commencing. MonoSol, LLC is an equal opportunity employer. All qualified applicants will be considered without regard to race, national origin, gender, age, disability, sexual orientation, veteran status, or marital status.
$86.9k-146.1k yearly 35d ago
Advisor, Data Scientist - CMC Data Products
Eli Lilly and Company 4.6
Data engineer job in Indianapolis, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview:
The Bioproduct Research & Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
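The "data archetypes" and reusable-data-model ideas above can be made concrete with a small sketch. Assuming a simplified ISA-88-style batch hierarchy (all class, field, and parameter names here are invented for illustration, not Lilly's actual models):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an ISA-88-style batch hierarchy as a reusable data
# model. Names and parameters are illustrative only.
@dataclass
class Phase:
    name: str
    parameters: dict = field(default_factory=dict)

@dataclass
class UnitProcedure:
    unit: str
    phases: List[Phase] = field(default_factory=list)

@dataclass
class BatchRecord:
    batch_id: str
    product: str
    unit_procedures: List[UnitProcedure] = field(default_factory=list)

    def phase_parameters(self, phase_name: str) -> list:
        """Collect parameters for every phase with a given name, e.g. to feed a CQA model."""
        return [p.parameters
                for up in self.unit_procedures
                for p in up.phases if p.name == phase_name]

batch = BatchRecord(
    batch_id="B-001", product="mAb-X",
    unit_procedures=[UnitProcedure(
        unit="Bioreactor-1",
        phases=[Phase("inoculate", {"volume_L": 2.0}),
                Phase("ferment", {"temp_C": 37.0, "pH": 7.0})])])
print(batch.phase_parameters("ferment"))  # [{'temp_C': 37.0, 'pH': 7.0}]
```

A reusable model along these lines is what lets downstream AI/ML pipelines ask for "all fermentation phases across batches" without parsing bespoke batch records.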
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS: S3, RDS, Lambda/Glue; Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
Additional Preferences:
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (********************************************************) for further assistance. Please note this is for individuals to request an accommodation as part of the application process; any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is
$126,000 - $244,200
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly
$85k-109k yearly est.
Data Engineer (No Sponsorship Available)
Heritage Construction + Materials 3.6
Data engineer job in Indianapolis, IN
Build Your Career at Heritage Construction + Materials!
We are looking for a highly motivated, strategic-thinking, and data-driven technical expert to join our team as a HC+M Data Engineer. This individual will help develop and implement strategic data initiatives. They will be responsible for collecting and analyzing data, implementing technical solutions/algorithms, and developing and maintaining data pipelines.
This position requires U.S. work authorization
Essential Functions
Develop and support data pipelines within Databricks, our cloud data platform
Monitor and optimize Databricks cluster performance, ensuring cost-effective scaling and resource utilization
Design, implement, and maintain Delta Lake tables and pipelines to ensure optimized storage, data reliability, performance, and versioning
Python application development
Automate CI/CD pipelines for data workflows using Azure DevOps
Integrate Databricks with other Azure services, such as Azure Data Factory and ADLS
Design and implement monitoring solutions, leveraging Databricks monitoring tools, Azure Log Analytics, or custom dashboards for cluster and job performance
Collaborate with cross-functional teams to support data governance using Databricks Unity Catalog
Additional duties and responsibilities as assigned, including but not limited to continuously growing in alignment with the Company's core values, competencies, and skills.
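As a rough illustration of the Delta Lake pipeline work described above: a bronze-to-silver cleaning step, sketched here with pandas to stay self-contained (on Databricks this would be PySpark writing Delta tables; column names are hypothetical):

```python
import pandas as pd

# Bronze -> silver cleaning step for a hypothetical ticketing feed; a minimal
# sketch only. On Databricks this would be PySpark writing Delta tables.
bronze = pd.DataFrame({
    "ticket_id": [1, 2, 2, 3],
    "tons": ["10.5", "8.0", "8.0", None],   # raw strings, one duplicate row, one null
    "site": ["plant_a", "plant_b", "plant_b", "plant_a"],
})

silver = (
    bronze
    .drop_duplicates(subset="ticket_id")    # makes re-runs idempotent on the key
    .assign(tons=lambda df: pd.to_numeric(df["tons"], errors="coerce"))
    .dropna(subset=["tons"])                # drop rows that fail the type check
    .reset_index(drop=True)
)
print(silver)
```

The same dedupe-then-validate shape carries over to Delta merges, where `MERGE INTO` on the key gives the idempotency that `drop_duplicates` provides here.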
Education Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field required
Experience Qualifications
Expertise with programming languages such as Python and/or SQL
Experience with Python Data Science Libraries (PySpark, Pandas, etc.)
Experience with Test-Driven Development (TDD)
Automated CI/CD Pipelines
Linux / Bash / Docker
Database Schema Design and Optimization
Experience working in a cloud environment (Azure preferred) with strong understanding of cloud data architecture
Hands-on experience with Databricks or Snowflake Cloud Data Platforms
Experience with workflow orchestration (e.g., Databricks Jobs, or Azure Data Factory pipelines)
Skills and Abilities
Strong analytical and problem-solving skills
Experience with programming languages such as Python, Java, R, or SQL
Experience with Databricks or Snowflake Cloud Data Platforms
Knowledge of statistical analysis and data visualization tools, such as PowerBI or Tableau
Proficient in Microsoft Office
About Heritage Construction + Materials
Heritage Construction + Materials (HC+M) is part of The Heritage Group, a privately held, family-owned business headquartered in Indianapolis. HC+M has core capabilities in infrastructure building. Its collection of companies provides innovative road construction and materials services across the Midwest. HC+M companies, including Asphalt Materials, Inc., Evergreen Roadworks, Milestone Contractors and US Aggregates, proudly employ 3,000 people at 68 locations across seven states. Learn more at ***********************
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
#HeritageConstruction+Materials
$68k-92k yearly est.
Data Engineer Lead
Old National Bank 4.4
Data engineer job in Evansville, IN
Old National Bank has been serving clients and communities since 1834. With over $70 billion in total assets, we are a regional powerhouse deeply rooted in the communities we serve. As a trusted partner, we thrive on helping our clients achieve their goals and dreams, and we are committed to social responsibility and investing in our communities through volunteering and charitable giving.
We continually seek highly motivated and talented individuals as our people are critical to our success. In return, we offer competitive compensation with our salary and incentive program, in addition to medical, dental, and vision insurance. 401K, continuing education opportunities and an employee assistance program are also included in our benefit suite. Old National also offers a variety of Impact Network Groups led by team members who are passionate about driving engagement, creating awareness of diverse backgrounds and experiences, and building inclusion across the organization. We offer a unique opportunity to join a growing, community and client-focused company that is firmly rooted in its core values.
Responsibilities
Salary Range
The salary range for this position is $98,400 - $199,000 per year. Final compensation will be determined by location, skills, experience, qualifications and the career level at which the position is filled.
We are currently seeking a Cloud Data Engineer Lead, a high-level professional responsible for designing, developing, and maintaining the data architecture in our organization. This role oversees the technical aspects of data management, including data integration, processing, storage, and retrieval.
The Cloud Data Engineer Lead provides technical leadership to a team of Data Engineers in the enablement and support of our cloud data platform, the creation and maintenance of data pipelines, data product development, and data governance patterns to ensure the timeliness and quality of data. The role is responsible for ensuring that data is accurate, reliable, and available to Analysts, Data Scientists, and other stakeholders within our organization.
Our Data Engineering teams work in a highly collaborative environment, engage in daily standups, operate within Agile development practices, and participate in a GitOps development model in which everything is code.
We are an in-office working environment. For this role we are considering the following locations: Evansville IN; Chicago at 8750 W. Bryn Mawr (near Rosemont and the Cumberland Blue Line station); Lake Elmo MN; St. Paul MN; Minneapolis MN; and St. Louis Park MN.
Key Accountabilities
Key Accountability 1: Data Engineering and Architecture Strategic Leadership
* Provide direction for the organization's data engineering and architecture strategy.
* Make final decisions on data infrastructure design and tool selection, balancing business needs with technical feasibility.
* Collaborate on requirements gathering while owning the architectural vision and execution.
* Champion data management best practices across the organization.
Key Accountability 2: Engineering Execution and Technical Implementation
* Write production-quality code and implement scalable ETL pipelines using AWS services such as S3, Glue, and Lambda, along with Databricks.
* Apply business intelligence best practices such as dimensional modeling and distributed data processing.
* Design, build, and maintain modern cloud-based data platforms to support enterprise data needs including integrations, analytics, and AI.
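The dimensional-modeling practice named in Key Accountability 2 can be sketched as a toy star schema: one fact table keyed to a date dimension. Table and column names are invented, and SQLite stands in for the warehouse:

```python
import sqlite3

# Toy star schema: a fact table of transactions joined to a date dimension,
# then aggregated by month. Names are illustrative, not a bank's real model.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_txn (txn_id INTEGER, date_key INTEGER, amount REAL);
INSERT INTO dim_date VALUES (20250101, '2025-01'), (20250201, '2025-02');
INSERT INTO fact_txn VALUES (1, 20250101, 100.0), (2, 20250101, 50.0), (3, 20250201, 75.0);
""")
rows = con.execute("""
SELECT d.month, SUM(f.amount)
FROM fact_txn f JOIN dim_date d USING (date_key)
GROUP BY d.month ORDER BY d.month
""").fetchall()
print(rows)  # [('2025-01', 150.0), ('2025-02', 75.0)]
```

Keeping descriptive attributes in dimensions and measures in facts is what lets the same fact table answer many slice-and-dice questions without restructuring.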
Key Accountability 3: Team Development and Cross-Functional Collaboration
* Bootstrap and grow the data engineering team, setting standards and mentoring new members.
* Translate business requirements into technical solutions, ensuring alignment with broader organizational goals.
* Enable data-driven decision-making by delivering reliable analytics pipelines and insights.
Key Competencies for Position -
* Promotes Change - Actively seeks information to understand the rationale, implications, and impact for changes. Remains agile by quickly modifying daily behavior, leveraging resources, and trying new approaches to effectively embrace change. Willing to act quickly, learn and adjust as needed. Identifies and recommends changes to leadership to improve performance. Aligns activities to meet individual, team and organizational goals
* Strategy in Action - Breaks down larger goals into smaller achievable goals and communicates how they are contributing to the broader goal. Actively seeks to understand factors and trends that may influence role. Anticipates risks and develops contingency plans to manage risks. Identifies opportunities for improvement and seeks insights from other sources to generate potential solutions. Aligns activities to meet individual, team and organizational goals.
* Compelling Communication - Effectively and transparently shares information and ideas with others. Tailors the delivery of communication in a way that engages the audience and that is easy to understand and retain. Unites others towards common goal. Asks for others' opinions and ideas and listens actively to gain their support when clarifying expectations, agreeing on a solution and checking for satisfaction.
* Makes Decisions & Solves Problems - Takes ownership of the problem while collaborating with others on a resolution with an appropriate level of urgency. Collaborates and seeks to understand the root causes of problems. Evaluates the implications of new information or events and recommends solutions using decisions that are sound based on what is known at the time. Takes action that is consistent with available facts, constraints and probable consequences.
Qualifications and Education Requirements
* B.S. degree in a related field and 7 to 9 years of professional experience, OR 7+ years of experience as a Senior Data Engineer, Senior Software Engineer, or equivalent.
* 5+ years working in AWS Cloud (AWS Solutions Architect certification is a plus) and hybrid cloud environments.
* 7+ years working with data warehouse platforms.
* Coding experience in SQL and scripting languages such as Python and Bash and expert working knowledge of the Spark platform, preferably in a cloud data platform environment such as Databricks.
* Experience working with large and complex data sets with multi-terabyte scale is a plus.
* Knowledge of Agile and GitOps methodologies
* Banking experience / knowledge is a plus
* Ability to communicate effectively, both orally and in writing, with clients and groups of employees to build and maintain positive relationships
* Strong leadership and communication skills. You must be able to collaborate with other teams, effectively manage projects, and communicate technical information to non-technical stakeholders.
Old National is proud to be an equal opportunity employer focused on fostering an inclusive workplace and committed to hiring a workforce comprised of diverse backgrounds, cultures and thinking styles.
As such, all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, protected veteran status, status as a qualified individual with disability, sexual orientation, gender identity or any other characteristic protected by law.
We do not accept resumes from external staffing agencies or independent recruiters for any of our openings unless we have an agreement signed by the Director of Talent Acquisition, SVP, to fill a specific position.
Our culture is firmly rooted in our core values.
We are optimistic. We are collaborative. We are inclusive. We are agile. We are ethical.
We are Old National Bank. Join our team!
$70k-88k yearly est.
Data Scientist, Product Analytics
Meta 4.8
Data engineer job in Indianapolis, IN
As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.

**Product leadership:** You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.

**Analytics:** You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.

**Communication and influence:** You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
**Required Skills:**
Data Scientist, Product Analytics Responsibilities:
1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions
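One hedged example of the experimentation toolkit these responsibilities imply: a two-proportion z-test comparing a conversion metric between an A/B experiment's control and treatment groups. The numbers are made up, and a real product analysis would involve far more than a single test:

```python
from math import sqrt, erf

# Two-proportion z-test for an A/B experiment on a conversion metric.
# Inputs: conversion counts and sample sizes for groups A (control) and B (treatment).
def two_prop_ztest(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 12.0% vs 13.2% conversion on 10k users per arm.
z, p = two_prop_ztest(conv_a=1200, n_a=10000, conv_b=1320, n_b=10000)
print(round(z, 2), round(p, 4))
```

In practice the test is only the last step; defining the metric, sizing the experiment, and checking for interference matter at least as much.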
**Minimum Qualifications:**
6. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
7. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent
8. 4+ years of work experience in analytics, data querying languages such as SQL, scripting languages such as Python, and/or statistical mathematical software such as R (minimum of 2 years with a Ph.D.)
9. 4+ years of experience solving analytical problems using quantitative approaches, understanding ecosystems, user behaviors and long-term product trends, and leading data-driven projects from definition to execution (including defining metrics, experiment design, and communicating actionable insights)
**Preferred Qualifications:**
10. Master's or Ph.D. Degree in a quantitative field
**Public Compensation:**
$147,000/year to $208,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
$147k-208k yearly
Data Engineer
Standard Chartered 4.8
Data engineer job in Indiana
Work Type: Office Working | Employment Type: Permanent
We are seeking a skilled and motivated Data Engineer to design, implement, and optimise data systems and pipelines that support critical business operations. The ideal candidate will have expertise in database management, data modeling, and distributed systems, with a strong focus on both relational and non-relational databases. Additionally, experience in cloud infrastructure, DevOps practices, and automation is highly desirable.
Key Responsibilities
* Design, develop, and maintain scalable and efficient data pipelines for structured and unstructured data.
* Manage and optimize relational databases (PostgreSQL, RDS, Aurora) and NoSQL databases (MongoDB).
* Implement and maintain search solutions using Elasticsearch.
* Develop and manage messaging systems using Kafka for real-time data streaming, and event stores like Axon Server
* Collaborate with cross-functional teams to ensure data integrity, security, and availability.
* Monitor and troubleshoot database performance, ensuring high availability and reliability.
* Automate database operations using scripting (Shell, Liquibase) and tools like Ansible.
* Support CloudOps initiatives across AWS, Azure, and on-premise environments.
* Implement CI/CD pipelines for data workflows and infrastructure provisioning.
* Drive engagement and motivate the team to success;
* Be committed to continuously challenge self for the better;
* Adhere to small and consistent increments that can deliver value and impact to the business;
* Have Agile and growth mindsets; and strive for excellence.
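To illustrate the real-time streaming responsibility above in miniature: Kafka's at-least-once delivery means consumers must tolerate redelivered events. This sketch tracks the last offset processed per key and skips replays. No real broker is involved, the event shapes are hypothetical, and real Kafka offsets are per partition rather than per key:

```python
from collections import defaultdict

# Idempotent consumption sketch: events may be redelivered, so remember the
# last offset applied per key and skip anything already processed.
events = [
    {"key": "txn-1", "offset": 0, "amount": 100},
    {"key": "txn-2", "offset": 1, "amount": 50},
    {"key": "txn-1", "offset": 0, "amount": 100},   # redelivered duplicate
    {"key": "txn-1", "offset": 2, "amount": -20},
]

last_offset = defaultdict(lambda: -1)
balances = defaultdict(int)
for e in events:
    if e["offset"] <= last_offset[e["key"]]:
        continue                          # already applied: skip the replay
    last_offset[e["key"]] = e["offset"]
    balances[e["key"]] += e["amount"]

print(dict(balances))  # {'txn-1': 80, 'txn-2': 50}
```

The same skip-if-seen discipline applies whether state lives in PostgreSQL, MongoDB, or an event store such as Axon Server.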
Strategy
* Awareness and understanding of the Group's business strategy and model appropriate to the role.
Business
* Awareness and understanding of Trade Finance, the wider business, economic, and market environment in which the Group operates.
Processes
* Awareness and understanding of the Bank's change management processes and DevOps practices.
People & Talent
* Lead through example and build the appropriate culture and values. Set appropriate tone and expectations from their team and work in collaboration with risk and control partners.
* Ensure the provision of ongoing training and development of people and ensure that holders of all critical functions are suitably skilled and qualified for their roles ensuring that they have effective supervision in place to mitigate any risks.
* Employ, engage, and retain high quality people, with succession planning for critical roles.
* Responsibility to review team structure/capacity plans.
* Set and monitor job descriptions and objectives for direct reports and provide feedback and rewards in line with their performance against those responsibilities and objectives.
Risk Management
* Awareness and understanding of the Bank's risk management processes and practices.
Governance
* Awareness and understanding of the Bank's governance frameworks applicable to Trade Finance and practices that support them.
Regulatory & Business Conduct
* Display exemplary conduct and live by the Group's Values and Code of Conduct.
* Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
* Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
* Lead to achieve the outcomes set out in the Bank's Conduct Principles: [Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.]
* Serve as a Director of the Board
* Exercise authorities delegated by the Board of Directors and act in accordance with Articles of Association (or equivalent)
Key stakeholders
* CIB Trade ITO Team
Qualifications
Primary Skills
* Strong experience as a DBA with expertise in RDBMS (PostgreSQL, RDS, Aurora).
* Proficiency in NoSQL databases (MongoDB).
* Hands-on experience with Elasticsearch for search and indexing.
* Knowledge of messaging systems like Kafka for distributed data processing, and event stores like Axon Server.
Secondary Skills
* Familiarity with CloudOps across AWS, Azure, or on-premise platforms.
* Experience with DevOps practices, including CI/CD pipelines.
* Automation skills using Shell scripting, Liquibase, and configuration management tools like Ansible.
Skills and Experience
* System Administration (GNU/Linux, *nix)
* On-demand Infrastructure/Cloud Computing, Storage, and Infrastructure (SaaS, PaaS, IaaS)
* Virtualization, Containerisation, and Orchestration (Docker, Podman, EKS, AKS, Kubernetes)
* Continuous Integration/Deployment (CI/CD) and Automation (Jenkins, Ansible)
* Project Management
* Infrastructure/service monitoring and log aggregation design and implementation (Appdynamics, ELK, Grafana, Prometheus etc)
* Distributed data processing frameworks (Hadoop, Spark, etc), big data platforms (EMR, HDInsight, etc), and event stores (Axon Server)
* NoSQL and RDBMS Design and Administration (MongoDB, PostgreSQL, Elasticsearch)
* Change Management Coordination
* Software Development
* DevOps Process Design and Implementation
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
* Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
* Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
* Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term
What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
* Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
* Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combined amount to a minimum of 30 days.
* Flexible working options based around home and office locations, with flexible working patterns.
* Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits
* A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
* Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
$70k-96k yearly est.
Data Scientist
Corteva Agriscience 3.7
Data engineer job in Indianapolis, IN
**Who We Are and What We Do**
At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture - leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
We are seeking a highly skilled **Data Scientist** with experience in bioprocess control to join our global Data Science team. This role will focus on applying artificial intelligence, machine learning, and statistical modeling to develop active bioprocess control algorithms, optimize bioprocess workflows, improve operational efficiency, and enable data-driven decision-making in manufacturing and R&D environments.
**What You'll Do:**
+ **Design and implement active process control strategies** , leveraging online measurements and dynamic parameter adjustment for improved productivity and safety.
+ **Develop and deploy predictive models** for bioprocess optimization, including fermentation and downstream processing.
+ Partner with engineers and scientists to **translate process insights into actionable improvements** , such as yield enhancement and cost reduction.
+ **Analyze high-complexity datasets** from bioprocess operations, including sensor data, batch records, and experimental results.
+ **Collaborate with cross-functional teams** to integrate data science solutions into plant operations, ensuring scalability and compliance.
+ **Communicate findings** through clear visualizations, reports, and presentations to technical and non-technical stakeholders.
+ **Contribute to continuous improvement** of data pipelines and modeling frameworks for bioprocess control.
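As a toy illustration of the active process control described above: a proportional controller nudging a simulated temperature toward a setpoint using online measurements. The plant dynamics and gain are invented, and a real bioprocess controller would be considerably more sophisticated:

```python
# Proportional control loop sketch: each step reads the current temperature,
# computes the error against the setpoint, and applies a proportional heater
# action. The plant model (simple heat loss toward ambient 25 C) is made up.
def simulate(setpoint=37.0, temp=30.0, kp=0.4, steps=30):
    history = []
    for _ in range(steps):
        error = setpoint - temp
        heater = kp * error                       # control action from the live measurement
        temp += heater - 0.05 * (temp - 25.0)     # plant response + heat loss
        history.append(temp)
    return history

traj = simulate()
print(round(traj[-1], 2))  # settles near 35.67, below the 37.0 setpoint
```

Note the steady-state offset: proportional-only control settles below the setpoint, which is the classic motivation for adding integral action (PI/PID) or the model-based and reinforcement-learning approaches mentioned in the preferred skills.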
**What Skills You Need:**
+ M.S. + 3 years' experience or Ph.D. in Data Science, Computer Science, Chemical Engineering, Bioprocess Engineering, Statistics, or related quantitative field.
+ Strong foundation in machine learning, statistical modeling, and process control.
+ Proficiency in Python, R, or another programming language.
+ Excellent communication and collaboration skills; ability to work in multidisciplinary teams.
**Preferred Skills:**
+ Familiarity with bioprocess workflows, fermentation, and downstream processing.
+ Hands-on experience with bioprocess optimization models and active process control strategies.
+ Experience with industrial data systems and cloud platforms.
+ Knowledge of reinforcement learning or adaptive experimentation for process improvement.
\#LI-BB1
**Benefits - How We'll Support You:**
+ Numerous development opportunities offered to build your skills
+ Be part of a company with a higher purpose and contribute to making the world a better place
+ Health benefits for you and your family on your first day of employment
+ Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
+ Excellent parental leave which includes a minimum of 16 weeks for mother and father
+ Future planning with our competitive retirement savings plan and tuition reimbursement program
+ Learn more about our total rewards package here - Corteva Benefits
+ Check out life at Corteva!
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to embracing our differences to enrich lives, advance innovation, and boost company performance. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, military or veteran status, pregnancy-related conditions (including pregnancy, childbirth, or related medical conditions), disability or any other protected status in accordance with federal, state, or local laws. Discrimination, harassment and retaliation are inconsistent with our values and will not be tolerated. If you require a reasonable accommodation to search or apply for a position, please visit the Accessibility Page for contact information.
For US Applicants: See the 'Equal Employment Opportunity is the Law' poster. To all recruitment agencies: Corteva does not accept unsolicited third party resumes and is not responsible for any fees related to unsolicited resumes.
$67k-89k yearly est. 26d ago
Advisor, Data Scientist - CMC Data Products
Eli Lilly and Company 4.6
Data engineer job in Gas City, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview:
The Bioproduct Research & Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS: S3, RDS, Lambda/Glue; Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
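The hierarchical batch/lot time-series modeling named in the requirements can be illustrated with a small sketch: batches belong to lots, and each batch carries time-ordered sensor readings. This is a hypothetical toy model; the class and field names are invented for illustration and do not reflect any Lilly system.

```python
# Illustrative sketch of a hierarchical batch/lot data model with
# time-series sensor readings attached to each batch. All names are
# hypothetical.
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    timestamp: float        # seconds from batch start
    tag: str                # e.g. "temperature", "pH"
    value: float

@dataclass
class Batch:
    batch_id: str
    lot_id: str
    readings: list = field(default_factory=list)

    def series(self, tag: str) -> list:
        """Return time-ordered values for one sensor tag."""
        return [r.value
                for r in sorted(self.readings, key=lambda r: r.timestamp)
                if r.tag == tag]

batch = Batch("B-001", "LOT-42")
batch.readings += [SensorReading(10.0, "pH", 7.1),
                   SensorReading(0.0, "pH", 7.0),
                   SensorReading(5.0, "temperature", 37.0)]
```

A production schema would follow standards like ISA-88 (procedure/unit/operation hierarchy) and store readings in a time-series or lakehouse table rather than in-memory objects; the sketch only shows the batch-to-readings hierarchy and per-tag extraction.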
Additional Preferences:
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form for further assistance. Please note this is for individuals to request an accommodation as part of the application process; any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is
$126,000 - $244,200
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion, and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly
$85k-109k yearly est. Auto-Apply 34d ago
Data Engineer
Old National Bank 4.4
Data engineer job in Evansville, IN
Old National Bank has been serving clients and communities since 1834. With over $70 billion in total assets, we are a regional powerhouse deeply rooted in the communities we serve. As a trusted partner, we thrive on helping our clients achieve their goals and dreams, and we are committed to social responsibility and investing in our communities through volunteering and charitable giving.
We continually seek highly motivated and talented individuals as our people are critical to our success. In return, we offer competitive compensation with our salary and incentive program, in addition to medical, dental, and vision insurance. 401K, continuing education opportunities and an employee assistance program are also included in our benefit suite. Old National also offers a variety of Impact Network Groups led by team members who are passionate about driving engagement, creating awareness of diverse backgrounds and experiences, and building inclusion across the organization. We offer a unique opportunity to join a growing, community and client-focused company that is firmly rooted in its core values.
Responsibilities
Salary Range
The salary range for this position is $62,300 - $199,000 per year. The base salary indicated for this position reflects the compensation range applicable to all levels of the role across the United States. Actual salary offers within this range may vary based on a number of factors, including the specific responsibilities of the position, the candidate's relevant skills and professional experience, educational qualifications, and geographic location.
We are currently seeking a Data Engineer. This position will require knowledge of enterprise data governance, data analysis, business analysis, and the technical tools to execute on those roadmaps. You will assist with creating a unified data model that will be central to all Data Governance activities and helping to set the foundation of our Master Data Management strategy. This includes crucial external and internal reporting, line of business specific needs, and system integration. Activities will include working with stakeholders, subject matter experts, business analysts, and technical teams to execute, maintain, and own the Old National Core Data Model.
Key Accountabilities
Key Accountability 1: Create and Maintain Data Governance Data Quality Framework
* Build technical frameworks within the tech stack that enable monitoring, reporting, and remediation of data quality issues
* Partner with business stakeholders, data owners, and information owners to implement appropriate data quality rules
* Document procedure-level documents that support technical teams' and business partners' adherence to the Data Quality Framework
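A data quality rule of the kind this framework would monitor can be expressed both as a business-facing statement and as code. The rule, field names, and sample records below are purely illustrative, not Old National data or systems.

```python
# Minimal sketch of one data quality rule, stated for business users and
# implemented as a check. Rule and field names are illustrative only.
#
# Business rule: every customer record must have a non-empty tax ID,
# and account balances must not be negative.

def check_record(record: dict) -> list:
    """Return the list of rule violations for one record (empty = clean)."""
    violations = []
    if not record.get("tax_id"):
        violations.append("missing tax_id")
    if record.get("balance", 0) < 0:
        violations.append("negative balance")
    return violations

records = [
    {"tax_id": "123-45-6789", "balance": 100.0},
    {"tax_id": "", "balance": -5.0},
]
# Report only the records that violate at least one rule.
report = {i: check_record(r) for i, r in enumerate(records) if check_record(r)}
```

In practice such rules would run inside a profiling/quality tool (e.g., Informatica or SQL-based checks) against source tables, feeding the monitoring and remediation reporting described above; the sketch only shows the rule-to-violation-report shape.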
Key Accountability 2: Data Analysis and Profiling
* Support business partners with proper profiling techniques to better understand the data.
* Drive data accountability to understand data gaps and needs within systems and business processes
Key Accountability 3: Data Cleansing Framework
* Direct the framework and strategy to support data cleansing based on the Data Quality Framework and Profiling activities.
* Partner with IT to design a technical framework that enables business partners and data analysts to transform their data in a consistent and traceable way.
Key Competencies for Position
* Promotes Change - Actively seeks information to understand the rationale, implications, and impact for changes. Remains agile by quickly modifying daily behavior, leveraging resources, and trying new approaches to effectively embrace change. Willing to act quickly, learn and adjust as needed. Identifies and recommends changes to leadership to improve performance. Aligns activities to meet individual, team and organizational goals
* Strategy in Action - Breaks down larger goals into smaller achievable goals and communicates how they are contributing to the broader goal. Actively seeks to understand factors and trends that may influence role. Anticipates risks and develops contingency plans to manage risks. Identifies opportunities for improvement and seeks insights from other sources to generate potential solutions. Aligns activities to meet individual, team and organizational goals.
* Compelling Communication - Effectively and transparently shares information and ideas with others. Tailors the delivery of communication in a way that engages the audience and that is easy to understand and retain. Unites others towards common goal. Asks for others' opinions and ideas and listens actively to gain their support when clarifying expectations, agreeing on a solution and checking for satisfaction.
* Makes Decisions & Solves Problems - Takes ownership of the problem while collaborating with others on a resolution with an appropriate level of urgency. Collaborates and seeks to understand the root causes of problems. Evaluates the implications of new information or events and recommends solutions using decisions that are sound based on what is known at the time. Takes action that is consistent with available facts, constraints and probable consequences.
Qualifications and Education Requirements
* Bachelor's degree in a related field; advanced coursework/training related to data management, computer science, or management information systems
* 3+ years' experience working with data analysis/profile tools, including but not limited to Informatica, SQL Server, MS Power Query, etc.
* 3+ years working with data transformations and various ETL tools, such as dbt, Fivetran, and Azure Data Factory
* 3+ years' experience working with a variety of data models, such as ODS, relational, and fact/dimension
* Understanding of reporting and data analytics to ensure overall accuracy
* Knowledge of industry leading practices in Analytics and Data Movement
* Strong written, verbal, and interpersonal skills to advance a data-driven culture and work priorities
* Working knowledge of data governance principles and frameworks
* Source system profiling via a variety of tools and methods
* Ability to write data quality rules (business facing and pseudo code)
* Data lineage/data catalog knowledge
* Master data management
* Exposure to data platforms such as AWS, GCP, Azure, and/or Databricks
Old National is proud to be an equal opportunity employer focused on fostering an inclusive workplace and committed to hiring a workforce comprised of diverse backgrounds, cultures and thinking styles.
As such, all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, protected veteran status, status as a qualified individual with disability, sexual orientation, gender identity or any other characteristic protected by law.
We do not accept resumes from external staffing agencies or independent recruiters for any of our openings unless we have an agreement signed by the Director of Talent Acquisition, SVP, to fill a specific position.
Our culture is firmly rooted in our core values.
We are optimistic. We are collaborative. We are inclusive. We are agile. We are ethical.
We are Old National Bank. Join our team!