Good with MATLAB/Simulink/Model-in-Loop
Core2, AUTOSAR software application-layer model development/CSAR layer modeling experience
Capability in MATLAB scripting, Simulink, and Stateflow
MIL, SIL, and Polyspace testing experience
Calterm and Polyspace testing experience
Windchill and ClearCase understanding
Requirements management
Demonstrable capability in OBD, aftertreatment controls, and MATLAB/Simulink to develop, maintain, test, and debug algorithms used in typical control systems (e.g., lookup tables, filters, PI/PID loops, timers/counters, fixed-point arithmetic)
Testing on open loop test benches
Debugging/problem solving skills
Auto code generation and build process
Unit Testing/Integration Testing/HIL Testing
Good knowledge of fuel systems and air handling systems; performance features, customer features, cruise control, and ADAS domain experience; powertrain automotive experience
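The control-system building blocks listed above (lookup tables, PI loops, fixed-point arithmetic) would normally be modeled in Simulink; as a rough illustration only, a discrete PI loop with output saturation and a 1-D calibration lookup table might look like the following in Python. All names, gains, and the toy plant model are hypothetical.

```python
# Illustrative sketch only: a 1-D calibration lookup table with linear
# interpolation, and a discrete PI controller with clamping anti-windup.

def interp_1d(x, xs, ys):
    """Linear interpolation in a lookup table (xs must be ascending)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

class PIController:
    """Discrete PI loop with output saturation and clamping anti-windup."""
    def __init__(self, kp, ki, dt, out_min, out_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if u > self.out_max:      # saturate and undo integration (anti-windup)
            self.integral -= error * self.dt
            u = self.out_max
        elif u < self.out_min:
            self.integral -= error * self.dt
            u = self.out_min
        return u

# Toy closed loop: the setpoint comes from the calibration table, and the
# plant is a simple first-order model invented for this example.
table_x, table_y = [0, 50, 100], [10.0, 60.0, 90.0]
setpoint = interp_1d(75, table_x, table_y)   # halfway between 60 and 90
ctrl = PIController(kp=0.8, ki=0.5, dt=0.1, out_min=-100, out_max=100)
y = 0.0
for _ in range(200):
    u = ctrl.step(setpoint, y)
    y += 0.1 * (u - 0.2 * y)   # hypothetical first-order plant
```

After 200 steps the loop output settles near the table-derived setpoint, which is the behavior MIL/SIL testing of such a block would verify.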
$62k-83k yearly est. 3d ago
Data Scientist, Analytics (Technical Leadership)
Meta 4.8
Data engineer job in Indianapolis, IN
We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
**Required Skills:**
Data Scientist, Analytics (Technical Leadership) Responsibilities:
1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies
3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
5. Build long-term vision and strategy for programs and products
6. Collaborate with executives to define and develop data platforms and instrumentation
7. Effectively communicate insights and recommendations to stakeholders
8. Define success metrics, forecast changes, and set team goals
9. Support developing roadmaps and coordinate analytics efforts across teams
**Minimum Qualifications:**
10. Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field (e.g., Mathematics, Statistics, Operations Research), or equivalent practical experience
11. 5+ years of experience with data querying languages (e.g., SQL), scripting languages (e.g., Python), or statistical/mathematical software (e.g., R, SAS, MATLAB)
12. 8+ years of work experience leading analytics work in IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization and strategy development
13. Experience working effectively with and leading complex analytics projects across multiple stakeholders and cross-functional teams, including Engineering, PM/TPM, Analytics and Finance
14. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
15. Experience communicating complex technical topics in a clear, precise, and actionable manner
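The experimentation and causal-inference skills listed above can be illustrated with a small, generic example (not Meta's tooling): a two-proportion z-test for an A/B experiment, using only the Python standard library. The conversion counts are invented.

```python
# Hypothetical A/B test: did variant A convert at a different rate than B?
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: p_a == p_b (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented data: 620/5000 conversions vs 550/5000.
z, p = two_proportion_ztest(conv_a=620, n_a=5000, conv_b=550, n_b=5000)
```

With these made-up counts the test yields z around 2.2 and p just under 0.05, the kind of result an analyst would then sanity-check against power and multiple-comparison considerations.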
**Preferred Qualifications:**
16. 10+ years of experience communicating the results of analyses to leadership teams to influence the strategy
17. Masters or Ph.D. Degree in a quantitative field
18. Bachelor's Degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
19. 10+ years of experience doing complex quantitative analysis in product analytics
**Public Compensation:**
$210,000/year to $281,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@meta.com.
$210k-281k yearly 60d+ ago
Data Scientist, Privacy
Datavant
Data engineer job in Indianapolis, IN
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
As part of the Privacy Science team within Privacy Hub, you will play a crucial role in ensuring that patient privacy is safeguarded in the modern world of data sharing. As well as working on real data, you will be involved in exciting research that keeps us industry leaders in this area, and in stimulating discussions on re-identification risk. You will be supported in developing and consolidating data analysis and coding skills to become proficient in the analysis of large health-related datasets.
**You Will:**
+ Critically analyze large health datasets using standard and bespoke software libraries
+ Discuss your findings and progress with internal and external stakeholders
+ Produce high-quality reports that summarize your findings
+ Contribute to research activities as we explore novel and established sources of re-identification risk
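One standard, simplified way to reason about re-identification risk (a generic illustration, not necessarily Datavant's methodology) is to count equivalence classes over quasi-identifiers, i.e., k-anonymity: a record is risky when its combination of quasi-identifiers is shared by few other records. A minimal sketch with invented records:

```python
# Simplified k-anonymity check: the smallest equivalence-class size over a
# set of quasi-identifiers. All records and field names are hypothetical.
from collections import Counter

def smallest_class_size(records, quasi_identifiers):
    """Return k such that every quasi-identifier combination occurs >= k times."""
    classes = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(classes.values())

records = [
    {"zip3": "462", "age_band": "30-39", "sex": "F", "dx": "J45"},
    {"zip3": "462", "age_band": "30-39", "sex": "F", "dx": "E11"},
    {"zip3": "462", "age_band": "40-49", "sex": "M", "dx": "I10"},
]
# The third record is unique on (zip3, age_band, sex), so k == 1 here,
# i.e., at least one individual is potentially re-identifiable.
k = smallest_class_size(records, ["zip3", "age_band", "sex"])
```

Real analyses weigh many additional risk sources (sampling, external datasets, sensitive attributes), but a minimum class size of 1 is the canonical red flag.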
**What You Will Bring to the Table:**
+ Excellent communication skills. Meticulous attention to detail in the production of comprehensive, well-presented reports
+ A good understanding of statistical probability distributions, bias, error and power as well as sampling and resampling methods
+ A drive to understand real-world data in context rather than considering it in the abstract
+ Familiarity or proficiency with programmable data analysis software such as R or Python, and the desire to develop deeper expertise in its language
+ Application of scientific methods to practical problems through experimental design, exploratory data analysis and hypothesis testing to reach robust conclusions
+ Strong time management skills and demonstrable experience of prioritising work to meet tight deadlines
+ Initiative and ability to independently explore and research novel topics and concepts as they arise, to expand Privacy Hub's knowledge base
+ An appreciation of the need for effective methods in data privacy and security, and an awareness of the relevant legislation
+ Familiarity with Amazon Web Services cloud-based storage and computing facilities
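The sampling and resampling methods mentioned above can be sketched with a percentile bootstrap confidence interval for the mean, using only the standard library. The sample values are invented and the random seed is fixed for repeatability.

```python
# Percentile bootstrap CI for the mean: resample with replacement many
# times, then read off quantiles of the resampled means.
import random
import statistics

def bootstrap_ci(data, n_boot=2000, alpha=0.05, seed=0):
    """Return an approximate (1 - alpha) percentile bootstrap CI for the mean."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [4.1, 5.0, 4.7, 5.3, 4.9, 5.8, 4.4, 5.1, 4.8, 5.2]
lo, hi = bootstrap_ci(sample)   # interval bracketing the sample mean (4.93)
```

The bootstrap's appeal for privacy work is that it needs no distributional assumptions, which matters when the quantity of interest (e.g., a risk score) has no convenient closed-form variance.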
**Bonus Points If You Have:**
+ Experience creating documents using LaTeX
+ Detailed knowledge of one or more types of health information, e.g., genomics, disease, health images
+ Experience working with or supporting public sector organizations, such as federal agencies (e.g., CMS, NIH, VA, CDC), state health departments, or public health research partners. Familiarity with government data environments, procurement processes, or privacy frameworks in regulated settings is highly valued.
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$104,000-$130,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement here (************************************************** . Know Your Rights (*********************************************************************** , explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it here, (************************************************************** Id=**********48790029&layout Id=**********48795462) by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request, you can find instructions for locating it here (******************************************************************************************************* . Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy (**************************************** .
$104k-130k yearly 24d ago
Join the Squad | Now Hiring a DataOps Consultant
Onebridge 4.3
Data engineer job in Indianapolis, IN
Onebridge, a Marlabs Company, is an AI and data analytics consulting firm that strives to improve outcomes for the people we serve through data and technology. We have served some of the largest healthcare, life sciences, manufacturing, financial services, and government entities in the U.S. since 2005. We have an exciting opportunity for a highly skilled DataOps Consultant to join an innovative and dynamic group of professionals at a company rated among the top “Best Places to Work” in Indianapolis since 2015.
DataOps Consultant | About You
As a DataOps Consultant, you are responsible for ensuring the seamless integration, automation, and optimization of data pipelines and infrastructure. You excel at collaborating with cross-functional teams to deliver scalable and efficient data solutions that meet business needs. With expertise in cloud platforms, data processing tools, and version control, you maintain the reliability and performance of data operations. Your focus on data integrity, quality, and continuous improvement drives the success of data workflows. Always proactive, you are committed to staying ahead of industry trends and solving complex challenges to enhance the organization's data ecosystem.
DataOps Consultant | Day-to-Day
Develop, deploy, and maintain scalable and efficient data pipelines that handle large volumes of data from various sources.
Collaborate with Data Engineers, Data Scientists, and Business Analysts to ensure that data solutions meet business requirements and optimize workflows.
Monitor, troubleshoot, and optimize data pipelines to ensure high availability, reliability, and performance.
Implement and maintain automation for data ingestion, transformation, and deployment processes to improve efficiency.
Ensure data quality by implementing validation checks and continuous monitoring to detect and resolve issues.
Document data processes, pipeline configurations, and troubleshooting steps to maintain clarity and consistency across teams.
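The validation checks described above can be sketched as a minimal ingest/transform/validate stage. In practice such a stage would run inside an orchestrator (e.g., Airflow); the field names and rules below are hypothetical.

```python
# Hypothetical pipeline stage: validate raw rows, quarantine bad ones,
# then normalize the good ones. Stdlib only for illustration.

def validate(rows, required_fields):
    """Split rows into (good, bad) based on presence of required fields."""
    good, bad = [], []
    for row in rows:
        if all(row.get(f) not in (None, "") for f in required_fields):
            good.append(row)
        else:
            bad.append(row)   # quarantined for monitoring/alerting
    return good, bad

def transform(rows):
    """Normalize a raw feed: cast amount to float, uppercase the currency."""
    return [
        {**r, "amount": float(r["amount"]), "currency": r["currency"].upper()}
        for r in rows
    ]

raw = [
    {"id": "1", "amount": "19.99", "currency": "usd"},
    {"id": "2", "amount": "", "currency": "eur"},   # fails validation
]
good, bad = validate(raw, ["id", "amount", "currency"])
clean = transform(good)
```

Keeping validation as its own step, with rejected rows routed to a quarantine rather than dropped, is what makes the continuous-monitoring requirement above actionable.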
DataOps Consultant | Skills & Experience
5+ years of experience working in DataOps or related fields, with strong hands-on experience in cloud platforms (AWS, Azure, Google Cloud) for data storage, processing, and analytics.
Proficiency in programming languages such as Python, Java, or Scala for building and maintaining data pipelines.
Experience with data orchestration tools like Apache Airflow, Azure Data Factory, or similar automated data workflows.
Expertise in big data processing frameworks (e.g., Apache Kafka, Apache Spark, Hadoop) for handling large volumes of data.
Hands-on experience with version control systems such as Git for managing code and deployment pipelines.
Solid understanding of data governance, security best practices, and regulatory compliance standards (e.g., GDPR, HIPAA).
A Best Place to Work in Indiana since 2015
$60k-81k yearly est. Auto-Apply 60d+ ago
Metabolic Modeling Data Scientist
Corteva Agriscience 3.7
Data engineer job in Indianapolis, IN
Who We Are and What We Do
At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture, leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
Corteva Agriscience has an exciting opportunity for a **Metabolic Modeling Data Scientist** to develop and deploy predictive genome-scale metabolic models that accelerate microbial strain optimization and bioprocess development. This role will apply metabolic modeling expertise to innovate nature-inspired solutions to global challenges in agriculture. The successful candidate will join a strong molecular data science team and work closely with cross-functional teams, from early discovery to downstream process engineering, to generate actionable hypotheses, quantify flux bottlenecks, and translate multi-omics and process data into decisions for Crop Health R&D.
**What You'll Do:**
+ Develop and maintain genome‑scale metabolic models for model and non‑model organisms; implement flux balance analysis (FBA/dFBA) and related approaches.
+ Integrate multi‑omics datasets (genomics, transcriptomics, proteomics, metabolomics) into metabolic models to generate testable hypotheses.
+ Apply AI/ML tools (e.g., ML‑assisted flux prediction, pathway inference), perform metabolic flux analysis, and estimate theoretical yields to inform media, feeding strategies, and optimization of titer/rate/yield.
+ Reconstruct and validate metabolic pathways, including completing missing steps and proposing/assessing novel routes to increased metabolite productivity.
+ Collaborate closely with strain engineering, biochemistry, and process engineering to ensure models are experimentally grounded and actionable.
+ Communicate results via clear summaries and decision‑ready recommendations for program reviews and R&D planning.
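Flux balance analysis, mentioned in the duties above, is at heart a linear program: maximize an objective flux c·v subject to steady state S·v = 0 and bounds on each reaction flux v. As a toy sketch only (the two-reaction network is invented; production work would use CobraPy or a similar framework, and this example assumes SciPy is available):

```python
# Toy FBA: one metabolite A, two reactions.
#   R1: -> A   (uptake, capped at 10 flux units)
#   R2: A ->   (drain; the objective to maximize)
# Steady state forces v1 == v2, so the optimum is pinned by the uptake cap.
from scipy.optimize import linprog

S = [[1.0, -1.0]]            # stoichiometric matrix (rows: metabolites)
c = [0.0, -1.0]              # linprog minimizes, so negate to maximize v2
bounds = [(0.0, 10.0),       # R1: uptake bound
          (0.0, None)]       # R2: unbounded above

res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
optimal_flux = res.x[1]      # limited by the uptake bound, so 10.0
```

Genome-scale models simply scale this structure to thousands of reactions; the "flux bottlenecks" in the job description are exactly the active bounds in the solved program.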
**What Skills You Need:**
+ PhD in Systems Biology, Computational Biology, Bioinformatics, Chemical/Biochemical Engineering, Microbiology, or related fields.
+ Demonstrated experience building genome-scale metabolic models, performing flux analysis, and integrating multi‑omics data (such as genomics, transcriptomics, metabolomics) with experimental data for model refinement.
+ Strong programming skills in Python (and R) for model development and data integration.
**Preferred Skills:**
+ Experience with industrially relevant non‑model microbes and fermentation datasets.
+ Strong familiarity and proficiency with the latest models and techniques in this space, including tools such as the COBRA Toolbox, CobraPy, KBase, or similar frameworks.
+ Experience applying ML/AI to predictive metabolic modeling, pathway prediction, and/or flux estimation.
+ Familiarity with version control (Git), workflow/pipeline tools, and reproducible model development practices.
+ Ability to develop, improve, and apply metabolic modeling techniques to non‑model organisms.
+ Experience collaborating in cross‑disciplinary teams and ability to translate metabolic modeling insights into well-designed strain engineering experiments for validation in partnership with wet‑lab teams.
**Benefits - How We'll Support You:**
+ Numerous development opportunities offered to build your skills
+ Be part of a company with a higher purpose and contribute to making the world a better place
+ Health benefits for you and your family on your first day of employment
+ Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
+ Excellent parental leave which includes a minimum of 16 weeks for mother and father
+ Future planning with our competitive retirement savings plan and tuition reimbursement program
+ Learn more about our total rewards package here - Corteva Benefits (*******************************************************************************
+ Check out life at Corteva! *************************************
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to embracing our differences to enrich lives, advance innovation, and boost company performance. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, military or veteran status, pregnancy related conditions (including pregnancy, childbirth, or related medical conditions), disability or any other protected status in accordance with federal, state, or local laws.
Corteva Agriscience is an equal opportunity employer. We are committed to boldly embracing the power of inclusion, diversity, and equity to enrich the lives of our employees and strengthen the performance of our company, while advancing equity in agriculture. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability or any other protected class. Discrimination, harassment and retaliation are inconsistent with our values and will not be tolerated. If you require a reasonable accommodation to search or apply for a position, please visit:Accessibility Page for Contact Information
For US Applicants: See the 'Equal Employment Opportunity is the Law' poster. To all recruitment agencies: Corteva does not accept unsolicited third party resumes and is not responsible for any fees related to unsolicited resumes.
$67k-89k yearly est. 41d ago
Principal Data Scientist
Maximus 4.3
Data engineer job in Indianapolis, IN
Description & Requirements Maximus has an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work).
This is a remote position.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Essential Duties and Responsibilities:
- Develop, collaborate, and advance the applied and responsible use of AI, ML, simulation, and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments and their applicability for use in production/operational environments
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements (required skills that align with contract LCAT, verifiable, and measurable):
- 10+ years of relevant Software Development + AI / ML / DS experience
- Professional programming experience (e.g., Python, R)
- Experience with AI / Machine Learning
- Experience working as a contributor on a team
- Experience leading AI/DS/or Analytics teams
- Experience mentoring Junior Staff
- Experience with Modeling and Simulation
- Experience with program management
Preferred Skills and Qualifications:
- Master's degree in a quantitative discipline (Math, Operations Research, Computer Science, etc.)
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors
- Ability to leverage statistics to identify true signals from noise or clutter
- Experience working as an individual contributor in AI or modeling and simulation
- Use of state-of-the-art technology to solve operational problems in AI, Machine Learning, or Modeling and Simulation spheres
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions
- Use and development of program automation, CI/CD, DevSecOps, and Agile
- Experience managing technical teams delivering technical solutions for clients.
- Experience working with optimization problems like scheduling
- Experience with Data Analytics and Visualizations
- Cloud certifications (AWS, Azure, or GCP)
- 10+ yrs of related experience in AI, advanced analytics, computer science, or software development
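The "true signals from noise" ability listed above can be sketched as a simple z-score detector that flags samples far from the baseline; the data and threshold here are invented for illustration, and real detection work would use matched filters or model-based methods.

```python
# Hypothetical signal detection: flag samples whose z-score against the
# overall sample baseline exceeds a threshold. Stdlib only.
import statistics

def detect_signals(samples, threshold=3.0):
    """Return indices whose absolute z-score exceeds the threshold."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return [
        i for i, x in enumerate(samples)
        if abs(x - mu) / sigma > threshold
    ]

# Ten low-amplitude noise samples plus one injected "signal" at the end.
noise = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05, 0.1, -0.15, 0.05]
hits = detect_signals(noise + [5.0])   # flags only the injected sample
```

Even this crude detector illustrates the statistical framing: a detection is a sample unlikely under the noise distribution, and the threshold trades false alarms against missed detections.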
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary
$156,640.00
Maximum Salary
$234,960.00
$66k-92k yearly est. Easy Apply 2d ago
Sr Data Engineer (MFT - IBM Sterling)
The Hertz Corporation 4.3
Data engineer job in Indianapolis, IN
**A Day in the Life:** The IBM Sterling Senior MFT Engineer will have expertise in Sterling B2B Integrator, Sterling File Gateway, Sterling Secure Proxy, and Control Center. This position requires expert-level knowledge of these technologies. You'll provide third-level support for core hardware, software, data, and infrastructure components within the B2B integration team. The software focus areas will include software infrastructure components, continuous improvement, and automation related to middleware and enterprise data movement within a mission-critical, multi-site enterprise environment.
The ideal candidate will have a passion for technology and the ability to create change and facilitate this transformation. They will have experience designing, developing, and implementing solutions in support of data quality and automation. They will need to understand the requirements of a project, test the performance of the infrastructure, and verify that the requirements have been successfully met.
We expect the starting salary to be around $135k but will be commensurate with experience.
**What You'll Do:**
TECHNICAL LEADERSHIP
+ Communication with internal and external business users on Sterling Integrator mappings
+ Making changes to existing partner integrations to meet internal and external requirements
+ Design, develop and implement solutions based on standards and processes that establish consistency across the enterprise data, reduce risks, and promote efficiencies in support of the organizational goals and objectives.
+ Diagnose and troubleshoot complex issues, restore services and perform root cause analysis.
+ Facilitate the review and vetting of these designs with the architecture governance bodies, as required.
+ Be aware of all aspects of security related to the Sterling environment and integrations
INNOVATIVE INFRASTRUCTURE & PROBLEM SOLVING
+ Strive for engineering excellence by simplifying, optimizing, and automating processes and workflows.
TEAMWORK & COMMUNICATION
+ Superior and demonstrated team-building and development skills to build strong, effective teams
+ Ability to communicate effectively with different levels of seniority within the organization
+ Provide timely updates so that progress against each individual incident can be updated as required
+ Write and review high quality technical documentation
CONTROL & AUDIT
+ Ensures their workstation and all processes and procedures follow organization standards
CONTINUOUS IMPROVEMENT
+ Encourages and maintains a best-practice-sharing culture, always striving to find ways to improve service and change mindsets.
**What We're Looking For:**
+ Bachelor's degree in Engineering, Statistics, Computer Science, or another quantitative field, required
+ 5+ years of IT experience
+ 4+ years' experience working with Sterling B2B Integrator, Sterling File Gateway, PCM (preferred)
+ 3+ years' experience with scripting to enable automation of standard activities (example: Ansible, Python, Bash, Java)
+ Strong interpersonal and communication skills with Agile/Scrum experience.
+ Strong problem solving and critical thinking skills with a proven record for identifying and diagnosing problems, and solving complex problems with simple, logical solutions with the ability to develop custom setups.
+ Outstanding verbal, written, presentation, facilitation, and interaction skills, including ability to effectively communicate technical and non-technical issues and concepts to multiple organization levels.
+ Travel, transportation, or hospitality experience preferred
+ Experience designing application data models for mobile or web applications preferred
+ Excellent written and verbal communication skills.
+ Flexibility in scheduling which may include nights, weekends, and holidays
**What You'll Get:**
+ Up to 40% off the base rate of any standard Hertz Rental
+ Paid Time Off
+ Medical, Dental & Vision plan options
+ Retirement programs, including 401(k) employer matching
+ Paid Parental Leave & Adoption Assistance
+ Employee Assistance Program for employees & family
+ Educational Reimbursement & Discounts
+ Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
+ Perks & Discounts -Theme Park Tickets, Gym Discounts & more
The Hertz Corporation operates the Hertz, Dollar Car Rental, and Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, the Caribbean, Latin America, Africa, the Middle East, Asia, Australia, and New Zealand. The Hertz Corporation is one of the largest worldwide airport general-use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.
**US EEO STATEMENT**
At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company.
Individuals are encouraged to apply for positions because of the characteristics that make them unique.
EOE, including disability/veteran
$135k yearly 60d+ ago
Advisor, Data Scientist - CMC Data Products
Eli Lilly and Company 4.6
Data engineer job in Indianapolis, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
Organizational & Position Overview:
The Bioproduct Research & Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues.
We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.
Responsibilities:
Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows.
Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD).
Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access.
AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A.
Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.
Deliverables include:
Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance.
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing
Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness
Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development
Basic Requirements:
Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field
8+ years of product management experience focused on data products, data platforms, or scientific data systems, and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming)
Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS- S3, RDS, Lambda/Glue, Azure)
Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains
Proficiency with SQL, Python, and data visualization tools
Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors)
Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control
Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data
Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns
Additional Preferences:
Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies
Experience implementing data mesh architectures in scientific organizations
Knowledge of MLOps practices and model deployment in validated environments
Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications
Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (******************************************************** for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is
$126,000 - $244,200
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities).Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly
$85k-109k yearly est. Auto-Apply 21d ago
Data Engineer
Insight Global
Data engineer job in Indianapolis, IN
Insight Global is looking for a Data Engineer to support a large healthcare client in a remote capacity. The Data Engineer will be responsible for developing and maintaining new and existing data ETL pipelines and data source marts for internal use under general supervision. Additionally, they will provide support on development projects and participate in the full development life cycle, including requirements analysis.
We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to Human Resources Request Form (****************************************** Og4IQS1J6dRiMo) . The EEOC "Know Your Rights" Poster is available here (*********************************************************************************************** .
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: *************************************************** .
Skills and Requirements
3-5 years of experience in an information technology field
Experience with Python, Terraform, and Databricks
Understanding of JSON, jinja templates, and Lakehouse Data Architecture
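The JSON and templating skills named above can be illustrated with a minimal Python sketch. This is purely illustrative and not part of the listing; it uses the stdlib `string.Template` as a simple stand-in for Jinja-style placeholders, and the config keys (`table`, `layer`) are hypothetical.

```python
import json
from string import Template  # stdlib stand-in for a Jinja-style template engine

# Hypothetical pipeline configuration serialized as JSON
config = json.loads('{"table": "orders", "layer": "bronze"}')

# Render a parameterized lakehouse path the way a template might
tmpl = Template("lakehouse/${layer}/${table}")
path = tmpl.substitute(config)
print(path)  # lakehouse/bronze/orders
```

In a real Databricks/Lakehouse setup, the same idea (parameterized paths driven by JSON config) is typically handled with Jinja templates rather than `string.Template`.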
$69k-92k yearly est. 60d+ ago
Cloud Data Engineer (Central Indiana Residents Only)
Cspring
Data engineer job in Indianapolis, IN
We are seeking a Cloud Data Engineer - an execution-focused engineer with strong design instincts - to help build and evolve our modern Azure-based data platform. You will contribute directly to the design and implementation of data pipelines, transformations, and analytics-ready datasets, while helping reinforce sound engineering patterns and practices through your day-to-day work. You will write high-quality code and collaborate closely with teammates so that data solutions are reliable, scalable, and easy to extend. This is an ideal role for a strong data engineer who wants to grow technically while working within a modern Azure data ecosystem, enabling current and future teammates to "fall into the pit of success" when building data solutions on Azure.
Key Responsibilities:
* Build: Design, implement, and maintain reliable Azure-based data pipelines that support analytics, AI/ML, and enterprise integration use cases.
* Hands-on Engineering: Actively contribute code across the data platform - developing pipelines, writing transformations, optimizing SQL, and supporting production workloads.
* Component Ownership: Own specific areas of the data platform (pipelines, datasets, transformations, or integrations), ensuring quality, performance, and maintainability.
* Engineering Best Practices: Apply established patterns, frameworks, and standards when building data solutions, helping reinforce consistency and quality across the platform.
* Collaboration: Work closely with data engineers, analysts, product owners, and stakeholders to translate business requirements into effective technical solutions.
* Platform Awareness: Stay informed on Azure data services and evolving platform capabilities, applying new tools or approaches when appropriate.
* Quality & Reliability: Contribute to CI/CD practices, testing, monitoring, and operational readiness for data pipelines and downstream consumers.
Core Technologies (experience with some or all of the following is expected):
* Microsoft Fabric
* Azure Data Factory / Synapse Pipelines
* Databricks
* PySpark
* Microsoft Synapse Analytics
* SQL Server / Azure SQL
* Modern ELT / ETL practices
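The "modern ELT" pattern in the list above (load raw data first, then transform inside the warehouse with SQL) can be sketched in a few lines. This is a toy illustration using stdlib `sqlite3` in place of a real warehouse; the table names and sample rows are hypothetical.

```python
import sqlite3

# Toy "extract": raw rows from a hypothetical source feed
raw_rows = [("2024-01-01", "widget", 3), ("2024-01-01", "gadget", 5)]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "Load" the raw data first (ELT: load before transform)
cur.execute("CREATE TABLE raw_sales (day TEXT, item TEXT, qty INTEGER)")
cur.executemany("INSERT INTO raw_sales VALUES (?, ?, ?)", raw_rows)

# "Transform" inside the database with SQL, producing an analytics-ready table
cur.execute("""
    CREATE TABLE daily_totals AS
    SELECT day, SUM(qty) AS total_qty FROM raw_sales GROUP BY day
""")
total = cur.execute("SELECT total_qty FROM daily_totals").fetchone()[0]
print(total)  # 8
```

In Azure Data Factory or Fabric, the same shape appears as copy activities landing raw data, followed by SQL or Spark transformations producing curated datasets.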
Required Qualifications:
* 4+ years of hands-on experience engineering data pipelines and data platforms.
* Demonstrated experience building and maintaining cloud-based data solutions in Azure (or comparable cloud environments).
* Practical, hands-on exposure to data lakes, data marts, and data warehouses; data pipelines and orchestration; and modern ELT/ETL patterns.
* Strong coding ability using modern data engineering tools and frameworks.
* Solid SQL skills for data transformation, optimization, and downstream consumption.
* Clear communication skills and the ability to collaborate effectively within cross-functional teams.
* Bachelor's degree in Computer Science, Data Science, Information Systems, or related field.
* Located in or around Indianapolis, IN.
Preferred Qualifications:
* Azure certifications (e.g., Azure Data Engineer Associate / DP-203) or equivalent cloud credentials.
* Experience with CI/CD pipelines for data platforms.
* Familiarity with Agile delivery models and iterative development practices.
* Exposure to streaming, near-real-time data, or observability/monitoring for data systems.
* Interest in continuous learning and technical growth within the data engineering discipline.
Come Collaborate with Us!
At CSpring, we believe that unlocking potential in others unlocks our own. You'll join a collaborative, values-driven community where curiosity and connection thrive. We come together regularly for team-building, service events, learning lunches, and more. We celebrate wins, support one another through challenges, and continually invest in each other's growth. If you're ready to join a positive, energetic, and purpose-driven team where your work truly matters - apply now and help us build what's next.
$69k-92k yearly est. 2d ago
Data Scientist
Alliance for Cooperativ
Data engineer job in Carmel, IN
Carmel, IN
This position will be responsible for providing analysis and insight into the performance of client portfolios, as well as building tools and automation for efficiency and accuracy. They will act as a liaison between information technology and other departments to ensure accurate interpretation and evaluation of user and client requests. This position performs operational data analysis, regular reporting, ad hoc analysis, gap analysis, and technical research, and develops numerous forecasting models. Be innovative by creating new performance metrics, new methods of analysis, and new tools to provide internal and external clients further insight into the data and patterns. Display large amounts of complex data in an easy-to-read and understandable manner. They will interface frequently with the power traders but must be flexible enough to work with the entire company.
Duties and Responsibilities:
Create and maintain renewable and gas generation models for power traders, including ongoing data analysis
Create and maintain preschedule and real-time position tools for power and gas traders
Visually display large amounts of complex data in an easy-to-read and understandable manner
Acts as a liaison between the business and IT in the analysis, design, configuration, testing, and maintenance of systems
Functions as a subject matter expert in data, analytics, design, build, maintenance, and distribution of reports
Assist traders with analytical needs, design and creation of new analysis and performance measures
Coordinate with Information Delivery when new business requirements arise, which includes defining business requirements, being involved in the design and development of the business process, and testing solutions for correctness
Analyzes problems, determines root cause, and offers multiple solutions based upon outcome trade-offs
Perform creative ad-hoc analysis to address complex questions regarding energy markets and client portfolios
Continually monitor and refine workflow and business processes to enhance performance
Partner with cross-functional teams to inform and execute business strategies
Software integration and implementation
Adheres to and is supportive of all ACES corporate policies and complies with all regulatory requirements including but not limited to NERC, FERC and relevant state regulations as applicable to the position
Any additional responsibilities assigned by management
Required Qualifications:
Bachelor's Degree in Mathematics, Computer Science, or related degree
Programming experience in SQL, VBA, R, Java, and/or C is preferred
Familiarity with data practices including database structure, working with APIs, and data analysis from multiple systems
Pattern recognition and predictive modeling
High degree of analytical and computer skills, and the ability to build complex forecasting models involving interrelated factors and market movement
Design and develop data visualizations, dashboards, KPIs, and reports that present large amounts of complex data in an easy-to-read and understandable manner
Excellent interpretive and interpersonal skills
Mathematical and statistical experience
Ability to identify, prioritize and solve multiple problems in a timely and effective manner
Ability to understand, assimilate, and communicate information effectively in both an oral and written manner
Ability to elicit requirements, understand current operational procedures, identify current and future problems, perform gap analysis, and ensure the solution will satisfy the objectives.
Working knowledge of power systems operations, wholesale energy transactions, and portfolio optimization is preferred
$65k-89k yearly est. Auto-Apply 3d ago
Data Engineer
Schwarz Partners 3.9
Data engineer job in Carmel, IN
Schwarz Partners has an exciting opportunity available for a Data Engineer in Carmel, IN! Data Engineers build data pipelines that transform raw, unstructured data into formats that can be used for analysis. They are responsible for creating and maintaining the analytics infrastructure that enables almost every other data function, including architectures such as databases, servers, and large-scale processing systems. A Data Engineer uses different technologies to collect and map an organization's data landscape to help decision-makers find cost savings and optimization opportunities. In addition, Data Engineers use this data to display trends in collected analytics information, encouraging transparency with stakeholders.
Schwarz Partners is one of the largest independent manufacturers of corrugated sheets and packaging materials in the U.S. Through our family of companies, we continuously build and strengthen our capabilities. You'll find our products wherever goods are packaged, shipped, and sold-from innovative retail packaging to colorful in-store displays at pharmacies and grocers. You also may have spotted our trucks on the highway. Schwarz Partners is built around the idea that independence and innovation go hand in hand. Our structure allows us to adapt to change quickly, get new ideas off the ground, and thrive in the marketplace. Our people are empowered to tap into their talents, build their skills, and grow with us.
ESSENTIAL JOB FUNCTIONS FOR THIS POSITION:
Build and maintain the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies.
Assemble large, complex sets of data that meet non-functional and functional business requirements.
Identify, design, and implement internal data-related process improvements.
Work with stakeholders, including data, design, product, and executive teams, and assist them with data-related technical issues.
Conduct configuration and design of applications to better leverage the enterprise.
Prepare data for prescriptive and predictive modeling.
Use effective communication to work with application vendors.
Assist in the creation and quality assurance review of design documents and test results to ensure all project requirements are satisfied.
Advise on and implement improvements to data warehousing and data workflow architecture.
Think outside the box and come up with improvement and efficiency opportunities to streamline business and operational workflows.
Document high-level business workflows and transform into low-level technical requirements.
Ability to analyze complex information sets and communicate that information in a clear, well-thought-out, and well-organized manner.
Ability to communicate at varying levels of detail (30,000 ft. view, 10,000 ft. view, granular level) and to produce corresponding documentation at varying levels of abstraction.
Be an advocate for best practices and continued learning.
Ability to communicate with business stakeholders on status of projects/issues.
Ability to prioritize and multi-task between duties at any given time.
Solid communication and interpersonal skills.
Comply with company policies and procedures and all applicable laws and regulations.
General DBA work as needed.
Maintain and troubleshoot existing ETL processes.
Create and maintain BI reports.
Additional duties as assigned.
REQUIRED EDUCATION / EXPERIENCE:
Bachelor's degree in Computer Science or 4+ years' experience in related field.
PREFERRED EDUCATION / EXPERIENCE:
Experience developing data workflows.
Ability to perform prescriptive and predictive modeling.
REQUIRED SKILLS:
Demonstrated experience with SQL in a large database environment.
Direct experience utilizing SQL to develop queries or profile data.
Experience in quantitative and qualitative analysis of data.
Experienced-level skills in Systems Analysis.
Experienced-level skills in Systems Engineering.
Ability to function as a self-starter.
REQUIRED MICROSOFT FABRIC SKILLS:
Strong grasp of OneLake concepts: lakehouses vs. warehouses, shortcuts, mirroring, item/workspace structure.
Hands-on with Delta Lake (Parquet, Delta tables, partitioning, V-ordering, Z-ordering, Vacuum retention).
Understanding of Direct Lake, Import, and DirectQuery trade-offs and when to use each.
Experience designing star schemas and modern medallion architectures (bronze/silver/gold).
Spark/PySpark notebooks (jobs, clusters, caching, optimization, broadcast joins).
Data Factory in Fabric (Pipelines): activities, triggers, parameterization, error handling/retries.
Dataflows Gen2 (Power Query/M) for ELT, incremental refresh, and reusable transformations.
Building/optimizing semantic models; DAX (measures, calculation groups, aggregations).
Ability to multi-task, think on their feet and react, apply attention to detail and follow-up, and work effectively and collegially with management staff and end users.
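The "error handling/retries" expected for Data Factory pipelines above follow a common retry-with-backoff pattern, sketched here in generic Python. This is illustrative only; `run_with_retries` and `flaky_copy` are hypothetical names, not part of Fabric's API, and real pipelines configure retries declaratively on activities.

```python
import time

def run_with_retries(activity, max_attempts=3, base_delay=0.01):
    """Retry a pipeline activity with exponential backoff between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical flaky activity: fails twice, then succeeds
calls = {"n": 0}
def flaky_copy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "copied"

print(run_with_retries(flaky_copy))  # copied
```

Fabric pipeline activities expose retry count and retry interval settings that implement essentially this behavior without custom code.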
$73k-99k yearly est. 18d ago
Analytics Data Engineer II/III
Indiana Health Information Exchange 4.0
Data engineer job in Indianapolis, IN
Department
Business & Product Development
Employment Type
Full Time
Location
Indianapolis, Indiana
Workplace type
Hybrid
Reporting To
Adam Fair - Director, Business Intelligence
About Indiana Health Information Exchange: The Indiana Health Information Exchange (IHIE) has been around for over a decade, with its roots dating back 30 years. We have the largest and broadest participant community in the nation, and we are at the center of where digital health, payment reform, and patient care intersect.
We enable hospitals, physicians, laboratories, payers, and other health service providers to avoid redundancy and deliver faster, more efficient, higher-quality healthcare to patients in Indiana. Today, by making information available to approximately 50,000 healthcare providers in Indiana and neighboring states, we deliver services that make a real difference in health and healthcare.
$72k-99k yearly est. 15d ago
Data Engineer (No Sponsorship Available)
Heritage Construction + Materials 3.6
Data engineer job in Indianapolis, IN
Build Your Career at Heritage Construction + Materials!
We are looking for a highly motivated, strategic-thinking, and data-driven technical expert to join our team as an HC+M Data Engineer. This individual will help develop and implement strategic data initiatives. They will be responsible for collecting and analyzing data, implementing technical solutions and algorithms, and developing and maintaining data pipelines.
This position requires U.S. work authorization
Essential Functions
Develop and support data pipelines within our cloud data platform, Databricks
Monitor and optimize Databricks cluster performance, ensuring cost-effective scaling and resource utilization
Design, implement, and maintain Delta Lake tables and pipelines to ensure optimized storage, data reliability, performance, and versioning
Python application development
Automate CI/CD pipelines for data workflows using Azure DevOps
Integrate Databricks with other Azure services, such as Azure Data Factory and ADLS
Design and implement monitoring solutions, leveraging Databricks monitoring tools, Azure Log Analytics, or custom dashboards for cluster and job performance
Collaborate with cross-functional teams to support data governance using Databricks Unity Catalog
Additional duties and responsibilities as assigned, including but not limited to continuously growing in alignment with the Company's core values, competencies, and skills.
Education Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field required
Experience Qualifications
Expertise with programming languages such as Python and/or SQL
Experience with Python Data Science Libraries (PySpark, Pandas, etc.)
Experience with Test-Driven Development (TDD)
Automated CI/CD Pipelines
Linux / Bash / Docker
Database Schema Design and Optimization
Experience working in a cloud environment (Azure preferred) with strong understanding of cloud data architecture
Hands-on experience with Databricks or Snowflake Cloud Data Platforms
Experience with workflow orchestration (e.g., Databricks Jobs, or Azure Data Factory pipelines)
Skills and Abilities
Strong analytical and problem-solving skills
Experience with programming languages such as Python, Java, R, or SQL
Experience with Databricks or Snowflake Cloud Data Platforms
Knowledge of statistical analysis and data visualization tools, such as PowerBI or Tableau
Proficient in Microsoft Office
About Heritage Construction + Materials
Heritage Construction + Materials (HC+M) is part of The Heritage Group, a privately held, family-owned business headquartered in Indianapolis. HC+M has core capabilities in infrastructure building. Its collection of companies provides innovative road construction and materials services across the Midwest. HC+M companies, including Asphalt Materials, Inc., Evergreen Roadworks, Milestone Contractors and US Aggregates, proudly employ 3,000 people at 68 locations across seven states. Learn more at ***********************
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
#HeritageConstruction+Materials
$68k-92k yearly est. Auto-Apply 60d+ ago
Data Scientist
Corteva Agriscience 3.7
Data engineer job in Indianapolis, IN
**Who We Are and What We Do** At Corteva Agriscience, you will help us grow what's next. No matter your role, you will be part of a team that is building the future of agriculture - leading breakthroughs in the innovation and application of science and technology that will better the lives of people all over the world and fuel the progress of humankind.
We are seeking a highly skilled **Data Scientist** with experience in bioprocess control to join our global Data Science team. This role will focus on applying artificial intelligence, machine learning, and statistical modeling to develop active bioprocess control algorithms, optimize bioprocess workflows, improve operational efficiency, and enable data-driven decision-making in manufacturing and R&D environments.
**What You'll Do:**
+ **Design and implement active process control strategies** , leveraging online measurements and dynamic parameter adjustment for improved productivity and safety.
+ **Develop and deploy predictive models** for bioprocess optimization, including fermentation and downstream processing.
+ Partner with engineers and scientists to **translate process insights into actionable improvements** , such as yield enhancement and cost reduction.
+ **Analyze high-complexity datasets** from bioprocess operations, including sensor data, batch records, and experimental results.
+ **Collaborate with cross-functional teams** to integrate data science solutions into plant operations, ensuring scalability and compliance.
+ **Communicate findings** through clear visualizations, reports, and presentations to technical and non-technical stakeholders.
+ **Contribute to continuous improvement** of data pipelines and modeling frameworks for bioprocess control.
**What Skills You Need:**
+ M.S. plus 3 years' experience, or Ph.D., in Data Science, Computer Science, Chemical Engineering, Bioprocess Engineering, Statistics, or a related quantitative field.
+ Strong foundation in machine learning, statistical modeling, and process control.
+ Proficiency in Python, R, or another programming language.
+ Excellent communication and collaboration skills; ability to work in multidisciplinary teams.
**Preferred Skills:**
+ Familiarity with bioprocess workflows, fermentation, and downstream processing.
+ Hands-on experience with bioprocess optimization models and active process control strategies.
+ Experience with industrial data systems and cloud platforms.
+ Knowledge of reinforcement learning or adaptive experimentation for process improvement.
#LI-BB1
**Benefits - How We'll Support You:**
+ Numerous development opportunities offered to build your skills
+ Be part of a company with a higher purpose and contribute to making the world a better place
+ Health benefits for you and your family on your first day of employment
+ Four weeks of paid time off and two weeks of well-being pay per year, plus paid holidays
+ Excellent parental leave which includes a minimum of 16 weeks for mother and father
+ Future planning with our competitive retirement savings plan and tuition reimbursement program
+ Learn more about our total rewards package here - Corteva Benefits (*******************************************************************************
+ Check out life at Corteva! *************************************
Are you a good match? Apply today! We seek applicants from all backgrounds to ensure we get the best, most creative talent on our team.
Corteva Agriscience is an equal opportunity employer. We are committed to embracing our differences to enrich lives, advance innovation, and boost company performance. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, military or veteran status, pregnancy related conditions (including pregnancy, childbirth, or related medical conditions), disability or any other protected status in accordance with federal, state, or local laws.
Corteva Agriscience is an equal opportunity employer. We are committed to boldly embracing the power of inclusion, diversity, and equity to enrich the lives of our employees and strengthen the performance of our company, while advancing equity in agriculture. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability or any other protected class. Discrimination, harassment and retaliation are inconsistent with our values and will not be tolerated. If you require a reasonable accommodation to search or apply for a position, please visit:Accessibility Page for Contact Information
For US Applicants: See the 'Equal Employment Opportunity is the Law' poster. To all recruitment agencies: Corteva does not accept unsolicited third party resumes and is not responsible for any fees related to unsolicited resumes.
$67k-89k yearly est. 42d ago
Sr Data Engineer, Palantir
The Hertz Corporation 4.3
Data engineer job in Indianapolis, IN
**A Day in the Life:** We are seeking a talented **Sr Data Engineer, Palantir (experience required)** to join our Strategic Data & Analytics team working on Hertz's strategic applications and initiatives. This role will work in multi-disciplinary teams rapidly building high-value products that directly impact our financial performance and customer experience. You'll build cloud-native, large-scale, employee-facing software using modern technologies including React, Python, Java, AWS, and Palantir Foundry.
The ideal candidate will have strong development skills across the full stack, a growth mindset, and a passion for building software at a sustainable pace in a highly productive engineering culture. Experience with Palantir Foundry is highly preferred but not required - we're looking for engineers who are eager to learn and committed to engineering excellence.
We expect the starting salary to be around $135k, commensurate with experience.
**What You'll Do:**
Day-to-Day Responsibilities
+ Work in balanced teams consisting of Product Managers, Product Designers, and engineers
+ Test first - We strive for Test-Driven Development (TDD) for all production code
+ CI (Continuous Integration) everything - Automation is core to our development process
+ Architect user-facing interfaces and design functions that help users visualize and interact with their data
+ Contribute to both frontend and backend codebases to enhance and develop projects
+ Build software at a sustainable pace to ensure longevity, reliability, and higher quality output
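The "test first" bullet above is the TDD loop: write a failing test that specifies the behavior you want, then write the minimal code that makes it pass. An illustrative Python sketch (`discounted_rate` and the 40% member discount are hypothetical, not a Hertz API):

```python
import unittest

# 1) The test is written first and specifies the desired behavior
class TestRentalDiscount(unittest.TestCase):
    def test_member_discount_applied(self):
        self.assertAlmostEqual(discounted_rate(100.0, member=True), 60.0)

    def test_non_member_pays_base_rate(self):
        self.assertAlmostEqual(discounted_rate(100.0, member=False), 100.0)

# 2) The minimal implementation is written second, just enough to go green
def discounted_rate(base_rate, member):
    """Apply a (hypothetical) 40% member discount to a base rental rate."""
    return base_rate * 0.6 if member else base_rate
```

In practice the test file is committed first and fails ("red") until the implementation lands ("green"), after which the code is refactored with the tests as a safety net.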
Frontend Development
+ Design and develop responsive, intuitive user interfaces using React and modern JavaScript/TypeScript
+ Build reusable component libraries and implement best practices for frontend architecture
+ Generate UX/UI designs (no dedicated UX/UI designers on team) with considerations for usability and efficiency
+ Optimize applications for maximum speed, scalability, and accessibility
+ Develop large-scale web and mobile software utilizing appropriate technologies for use by our employees
Backend Development
+ Develop and maintain RESTful APIs and backend services using Python or Java
+ Design and implement data models and database schemas
+ Deploy to cloud environments (primarily AWS)
+ Integrate with third-party services and APIs
+ Write clean, maintainable, and well-documented code
Palantir Foundry Development (Highly Preferred)
+ Build custom applications and integrations within the Palantir Foundry platform
+ Develop Ontology-based applications leveraging object types, link types, and actions
+ Create data pipelines and transformations using Python transforms
+ Implement custom widgets and user experiences using the Foundry SDK
+ Design and build functions that help users visualize and interact with their data
Product Development & Delivery
+ Research problems and break them into deliverable parts
+ Work with a Lean mindset and deliver value quickly
+ Participate in all stages of the product development and deployment lifecycle
+ Conduct code reviews and provide constructive feedback to team members
+ Work with product managers and stakeholders to define requirements and deliverables
+ Contribute to architectural decisions and technical documentation
**What We're Looking For:**
+ Experience with Palantir Foundry platform, required
+ 5+ years in web front-end or mobile development
+ Bachelor's or Master's degree in Computer Science or other related field, preferred
+ Strong proficiency in React, JavaScript/TypeScript, HTML, and CSS for web front-end development
+ Strong knowledge of one or more Object Oriented Programming or Functional Programming languages such as JavaScript, Typescript, Java, Python, or Kotlin
+ Experience with RESTful API design and development
+ Experience deploying to cloud environments (AWS preferred)
+ Understanding of version control systems, particularly GitHub
+ Experience with relational and/or NoSQL databases
+ Familiarity with modern frontend build tools and package managers (e.g., Webpack, npm, yarn)
+ Experience with React, including React Native for mobile app development, preferred
+ Experience in Android or iOS development, preferred
+ Experience with data visualization libraries (e.g., D3.js, Plotly, Chart.js), preferred
+ Familiarity with CI/CD pipelines and DevOps practices, preferred
+ Experience with Spring framework, preferred
+ Working knowledge of Lean, User Centered Design, and Agile methodologies
+ Strong communication skills and ability to collaborate effectively across teams
+ Growth mindset - Aptitude and willingness to learn new technologies
+ Empathy - Kindness and empathy when building software for end users
+ Pride - Takes pride in engineering excellence and quality craftsmanship
+ Customer obsession - Obsessed with the end user experience of products
+ Strong problem-solving skills and attention to detail
+ Ability to work independently and as part of a balanced, multi-disciplinary team
+ Self-motivated with a passion for continuous learning and improvement
**What You'll Get:**
+ Up to 40% off the base rate of any standard Hertz Rental
+ Paid Time Off
+ Medical, Dental & Vision plan options
+ Retirement programs, including 401(k) employer matching
+ Paid Parental Leave & Adoption Assistance
+ Employee Assistance Program for employees & family
+ Educational Reimbursement & Discounts
+ Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
+ Perks & Discounts - Theme Park Tickets, Gym Discounts & more
The Hertz Corporation operates the Hertz, Dollar Car Rental, and Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, the Caribbean, Latin America, Africa, the Middle East, Asia, Australia, and New Zealand. The Hertz Corporation is one of the largest worldwide airport general-use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.
**US EEO STATEMENT**
At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company.
Individuals are encouraged to apply for positions because of the characteristics that make them unique.
EOE, including disability/veteran
$135k yearly 56d ago
Cloud Data Engineer (Central Indiana Residents Only)
Cspring
Data engineer job in Indianapolis, IN
Description
We are seeking a Cloud Data Engineer - an execution-focused engineer with strong design instincts - to help build and evolve our modern Azure-based data platform. You will contribute directly to the design and implementation of data pipelines, transformations, and analytics-ready datasets, while helping reinforce sound engineering patterns and practices through your day-to-day work. You will be writing high-quality code and collaborating closely with teammates so that data solutions are reliable, scalable, and easy to extend. This is an ideal role for a strong data engineer who wants to grow technically while working within a modern Azure data ecosystem, enabling current and future teammates to “fall into the pit of success” when building data solutions on Azure.
Key Responsibilities:
Build: Design, implement, and maintain reliable Azure-based data pipelines that support analytics, AI/ML, and enterprise integration use cases.
Hands-on Engineering: Actively contribute code across the data platform - developing pipelines, writing transformations, optimizing SQL, and supporting production workloads.
Component Ownership: Own specific areas of the data platform (pipelines, datasets, transformations, or integrations), ensuring quality, performance, and maintainability.
Engineering Best Practices: Apply established patterns, frameworks, and standards when building data solutions, helping reinforce consistency and quality across the platform.
Collaboration: Work closely with data engineers, analysts, product owners, and stakeholders to translate business requirements into effective technical solutions.
Platform Awareness: Stay informed on Azure data services and evolving platform capabilities, applying new tools or approaches when appropriate.
Quality & Reliability: Contribute to CI/CD practices, testing, monitoring, and operational readiness for data pipelines and downstream consumers.
Core Technologies (experience with some or all of the following is expected):
Microsoft Fabric
Azure Data Factory / Synapse Pipelines
Databricks
PySpark
Microsoft Synapse Analytics
SQL Server / Azure SQL
Modern ELT / ETL practices
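The ELT pattern listed above (land raw data first, then transform inside the engine with SQL) can be shown in miniature with Python's stdlib `sqlite3` standing in for Synapse or Databricks. Table and column names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract + Load: land raw records as-is in a staging table (strings and all)
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, region TEXT)")
raw = [("o1", "19.99", "IN"), ("o2", "5.00", "IN"), ("o3", "12.50", "OH")]
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw)

# Transform inside the engine: cast, clean, and aggregate into an
# analytics-ready table (the step that would run in Synapse/Databricks)
conn.execute("""
    CREATE TABLE orders_by_region AS
    SELECT region, SUM(CAST(amount AS REAL)) AS total_amount
    FROM raw_orders
    GROUP BY region
""")

print(conn.execute("SELECT * FROM orders_by_region ORDER BY region").fetchall())
```

The point of the pattern is the same at any scale: loading raw data untouched keeps the pipeline simple and replayable, and pushing transformations into the SQL engine keeps them declarative and easy to optimize.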
Required Qualifications:
4+ years of hands-on experience engineering data pipelines and data platforms.
Demonstrated experience building and maintaining cloud-based data solutions in Azure (or comparable cloud environments).
Practical, hands-on exposure to data lakes, data marts, and data warehouses, data pipelines and orchestration, and modern ELT/ETL patterns.
Strong coding ability using modern data engineering tools and frameworks.
Solid SQL skills for data transformation, optimization, and downstream consumption.
Clear communication skills and the ability to collaborate effectively within cross-functional teams.
Bachelor's degree in Computer Science, Data Science, Information Systems, or related field.
Located in or around Indianapolis, IN.
Preferred Qualifications:
Azure certifications (e.g., Azure Data Engineer / DP-203) or equivalent cloud credentials.
Experience with CI/CD pipelines for data platforms.
Familiarity with Agile delivery models and iterative development practices.
Exposure to streaming, near-real-time data, or observability/monitoring for data systems.
Interest in continuous learning and technical growth within the data engineering discipline.
Come Collaborate with Us!
At CSpring, we believe that unlocking potential in others unlocks our own. You'll join a collaborative, values-driven community where curiosity and connection thrive. We come together regularly for team-building, service events, learning lunches, and more. We celebrate wins, support one another through challenges, and continually invest in each other's growth.
If you're ready to join a positive, energetic, and purpose-driven team where your work truly matters - apply now and help us build what's next.
$69k-92k yearly est. 4d ago
Join the Squad | Now Hiring a Data Engineer - Adobe Experience Platform
Onebridge 4.3
Data engineer job in Indianapolis, IN
Onebridge, a Marlabs Company, is a global AI and Data Analytics Consulting Firm that empowers organizations worldwide to drive better outcomes through data and technology. Since 2005, we have partnered with some of the largest healthcare, life sciences, financial services, and government entities across the globe. We have an exciting opportunity for a highly skilled Data Engineer - Adobe Experience Platform to join our innovative and dynamic team.
Data Engineer - Adobe Experience Platform | About You
As a Data Engineer - Adobe Experience Platform, you are responsible for delivering scalable experimentation, personalization, and data workflows across enterprise digital ecosystems. You have a strong understanding of AEP components, including XDM schemas, identity management, Real-Time Customer Profile, and Alloy.js, and know how to apply them to real-world use cases. You work effectively across technical and business teams, including onshore stakeholders and offshore development partners, to bring high-quality solutions from concept to production. You excel at diagnosing and resolving issues across SDK implementations, tagging systems, APIs, and data pipelines to ensure accuracy and reliability. You thrive in fast-paced, high-visibility environments where personalization, experimentation, and data integrity drive meaningful customer impact.
Data Engineer - Adobe Experience Platform | Day-to-Day
Build and deploy A/B tests, multivariate tests, and personalization campaigns across digital channels using Adobe Target and modern rendering frameworks.
Design and maintain AEP components including XDM schemas, identities, Real-Time Customer Profile, and Adobe Web SDK (Alloy.js) configurations.
Implement and manage Adobe Tags (Launch) and event tracking needed for data collection, targeting, and analytics workflows.
Collaborate with onshore stakeholders and offshore development teams in India to deliver high-quality experimentation and personalization solutions.
Troubleshoot issues across audience qualification, test delivery, APIs, tags, and data flows, ensuring accurate and reliable execution.
Participate in Agile processes, daily standups, sprint planning, and documentation, using Jira and Confluence while managing multiple priorities in a fast-moving environment.
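Adobe Target performs variant assignment internally, but the core technique behind the A/B testing described above, deterministic hash-based bucketing so a returning visitor always sees the same experience, can be sketched with the Python stdlib. The function name and 50/50 split are hypothetical, not part of any Adobe API:

```python
import hashlib

def assign_variant(visitor_id, experiment, split=0.5):
    """Deterministically bucket a visitor into 'control' or 'treatment'.

    Hashing (experiment, visitor_id) means the same pair always lands in
    the same bucket, which keeps an A/B test consistent across page views
    without storing any per-visitor state.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform in [0, 1]
    return "treatment" if bucket < split else "control"

# Same visitor, same experiment: same variant on every call
print(assign_variant("visitor-123", "homepage-hero-test"))
```

Seeding the hash with the experiment name also re-randomizes assignment between experiments, so a visitor in treatment for one test is not systematically in treatment for the next.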
Data Engineer - Adobe Experience Platform | Skills & Experience
5+ years of development experience in digital marketing/data platforms and 1-2 years of hands-on experience with AEP (XDM schemas, identities, RTCP, Alloy.js, segmentation).
Strong JavaScript development skills and experience with Adobe Target libraries and APIs (at.js, Alloy.js, Delivery APIs, Recommendations APIs).
Experience implementing experimentation and personalization in SPA, SSR, and hybrid architectures.
Comfortable working with REST APIs, JSON structures, authentication tokens, and Adobe I/O integrations.
Enterprise-level experience with strong collaboration skills, including working across onshore teams and offshore development partners.
Experienced in Agile workflows with the ability to manage multiple tasks in a high-visibility, fast-paced environment.
$72k-100k yearly est. Auto-Apply 6d ago
Data Engineer - Lilly Medicines Foundry
Eli Lilly and Company 4.6
Data engineer job in Lebanon, IN
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
This is an opportunity you don't want to miss!
Lilly is entering an exciting period of growth, and we are committed to delivering innovative medicines to patients around the world. LRL has increasing needs for in-house manufacture of material for clinical supplies and will therefore construct a new campus to manufacture Clinical Trial (CT) Active Pharmaceutical Ingredient (API) to meet needs for an expanding portfolio (more and new areas), to accelerate development timelines, and to enhance supply chain robustness.
The brand-new facility, also known as the Lilly Medicines Foundry (LMF), will utilize the latest technology to augment the current clinical supply chain for small molecules (SM), oligonucleotides, peptides, Antibody Drug Conjugates (ADCs), monoclonal antibodies, and bioconjugates, and add new capabilities including mRNA.
The new site will be built using the latest high-tech equipment, advanced highly integrated and automated manufacturing systems, and have a focus on minimizing the impact to our environment.
What You'll Be Doing:
As a Data Engineer, you will be responsible for engaging with business stakeholders to design, develop, and maintain the data pipelines and data solutions that ensure the availability and quality of data sets and actionable insights for the Foundry. This includes data capture, integration, acquisition, contextualization, and harmonization, leading to the delivery of data-as-a-product and reusable data domains and products. The focus is to integrate IT/OT systems with a cloud data lakehouse architecture (AWS/Azure) to enable advanced analytics and AI/ML capabilities while ensuring data integrity and compliance with relevant regulatory standards and best practices.
The Data Engineer will work closely with the Data Architect and Data Scientists. They also work with business and IT groups beyond the data sphere, understanding the enterprise infrastructure and the many source systems.
How You'll Succeed:
Bring a foundational set of knowledge in: communication, leadership, teamwork, problem solving skills, solution definition, business acumen, architectural processes (e.g. blueprinting, reference architecture, governance, etc.), technical standards, project delivery, and industry knowledge.
Provide Business Analysis and Technical Leadership including:
Engaging with business and proactively seeking opportunities to deliver business value.
Understanding business requirements and effectively translating business needs and process into technical terms, and vice versa
Eliciting and defining requirements.
Participating in design reviews to ensure traceability of requirements.
Seeking opportunities to reuse existing processes and services to streamline support and implementation of key systems.
Staying abreast of tools and technologies to influence Tech at Lilly strategy so that it provides best usage opportunities for business
Analyze large, complex data domains and craft practical solutions for subsequent data exploitation via analytics.
Design, develop and maintain data solutions for data capture, storage, integration and analytics in partnership with Tech at Lilly teams.
Review and provide practical recommendations on design patterns, performance considerations & optimization, database versions, and database deployment strategies
Ensure that Data solutions adhere to regulatory requirements including FDA guidelines and Good Manufacturing Practices (GMP).
Knowledgeable in data functions such as Data Governance, Master Data Management, and Business Intelligence
Your Basic Qualifications:
Bachelor's degree in Computer Science, Data Science, Engineering or related field
At least 3 years of experience in several of the following disciplines: statistical methods, data modeling, ETL/ELT, ontology development, semantic graph construction and linked data, relational schema design.
At least 1 year of experience in a pharmaceutical GxP or Scientific environment.
Qualified applicants must be authorized to work in the United States on a full-time basis. Lilly will not provide support or sponsor work authorization and/or visas for this role
Additional Skills / Preferences:
1-3 years of experience designing large scale data models for functional, operational, and analytical environments (Conceptual, Logical, Physical & Dimensional).
Demonstrated SQL and data modeling proficiency.
Experience with data modeling tools such as ER/Studio, Erwin, or TOAD.
Experience with cloud platforms (e.g., AWS, Azure).
Experience with AI/ML/LLM concepts and tools, and with building agentic AI solutions.
Experience with data integration such as data streaming and Industrial IoT, using MQTT, AMQP, Kafka, and related protocols.
Understanding of modern data architecture, data lakehouse, data warehousing and/or big data concepts.
Experience with security models and development on large data sets.
Experience with multiple database solutions (e.g. Postgres, Redshift, Aurora, Athena, graph databases like Neptune, NoSQL stores like DynamoDB and MongoDB) and formal database designs (3NF, Dimensional Models).
Experience with Agile development, CI/CD, GitHub, and automation platforms.
Demonstrated ability to analyze large, complex data domains and craft practical solutions for subsequent data exploitation via analytics.
Ability to review and provide practical recommendations on design patterns, performance considerations & optimization, database versions, and database deployment strategies.
Knowledgeable in data functions such as Data Governance, Master Data Management, and Business Intelligence.
Prior work experience working in pharma or other GMP setting.
Solid knowledge of Computer System Validation process.
Demonstrated ability to analyze, anticipate, and resolve complex issues through sound problem-solving skills.
Demonstrated learning agility and curiosity.
Desire and ability to communicate using a variety of methods in diverse forums.
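The dimensional modeling called out above pairs a fact table of measurements with descriptive dimension tables, in contrast to fully normalized 3NF designs. A minimal star-schema sketch using stdlib `sqlite3`; the batch/yield schema is invented for illustration, not Lilly's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension: descriptive attributes of each manufacturing batch
    CREATE TABLE dim_batch (
        batch_key INTEGER PRIMARY KEY,
        product   TEXT,
        site      TEXT
    );
    -- Fact: one row per measurement, keyed to its dimension row
    CREATE TABLE fact_yield (
        batch_key INTEGER REFERENCES dim_batch(batch_key),
        yield_kg  REAL
    );
    INSERT INTO dim_batch VALUES (1, 'API-A', 'Lebanon'), (2, 'API-B', 'Lebanon');
    INSERT INTO fact_yield VALUES (1, 12.5), (1, 13.1), (2, 9.8);
""")

# The typical analytical query shape: aggregate the facts,
# sliced by attributes that live only in the dimension
rows = conn.execute("""
    SELECT d.product, SUM(f.yield_kg)
    FROM fact_yield f JOIN dim_batch d USING (batch_key)
    GROUP BY d.product ORDER BY d.product
""").fetchall()
print(rows)
```

The design trade-off is the one the bullet implies: 3NF minimizes redundancy for operational writes, while the dimensional form keeps analytical queries down to one join per dimension.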
Additional Information:
Must be willing to relocate to the Indianapolis/Lebanon, IN area
Work schedule is hybrid: At least three days onsite.
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is
$64,500 - $158,400
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly
$64.5k-158.4k yearly Auto-Apply 7d ago
How much does a data engineer earn in Indianapolis, IN?
The average data engineer in Indianapolis, IN earns between $60,000 and $106,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Indianapolis, IN
$80,000
What are the biggest employers of Data Engineers in Indianapolis, IN?
The biggest employers of Data Engineers in Indianapolis, IN are: