
Senior data scientist jobs in Fort Wayne, IN

- 761 jobs
  • Data Scientist

    Insight Global

    Senior data scientist job in Indianapolis, IN

    We are seeking a Junior Data Scientist to join our large Utility client in downtown Indianapolis. This position will be hired as a Full-Time employee. This entry-level position is perfect for individuals eager to tackle real-world energy challenges through data exploration, predictive modeling, and collaborative problem-solving. As part of our team, you'll work closely with seasoned data scientists, analysts, architects, engineers, and governance specialists to generate insights that power smarter decisions and help shape the future of energy. Key Responsibilities Partner cross-functionally with data scientists, data architects and engineers, machine learning engineers, data analysts, and data governance experts to deliver integrated data solutions. Collaborate with business stakeholders and analysts to define clear project requirements. Collect, clean, and preprocess both structured and unstructured data from utility systems (e.g., meter data, customer data). Conduct exploratory data analysis to uncover trends, anomalies, and opportunities to enhance grid operations and customer service. Apply traditional machine learning techniques and generative AI tools to build predictive models that address utility-focused challenges, particularly in the customer domain (e.g., outage restoration, program adoption, revenue assurance). Present insights to internal stakeholders in a clear, compelling format, including data visualizations that drive predictive decision-making. Document methodologies, workflows, and results to ensure transparency and reproducibility. Serve as a champion of data and AI across all levels of the client's US Utilities organization. Stay informed on emerging industry trends in utility analytics and machine learning. Requirements Bachelor's degree in data science, statistics, computer science, engineering, or a related field. Master's degree or Ph.D. is preferred. 1-3 years of experience in a data science or analytics role. Strong applied analytics and statistics skills, such as distributions, statistical testing, regression, etc. Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn. Proficiency in traditional machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVM), convolutional neural networks (CNN), random forest, gradient-boosted trees, etc. Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows. Proficiency in SQL, with experience writing complex queries and working with relational data structures. Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc. Proficient in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects. Experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred. Experience with cloud computing platforms (GCP preferred). Ability to manage multiple priorities in a fast-paced environment. Interest in learning more about the customer-facing side of the utility industry. Compensation: Up to $130,000 per year annual salary. Exact compensation may vary based on several factors, including skills, experience, and education. 
Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
    $130k yearly 1d ago
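The posting above outlines a classical machine learning workflow (pandas, NumPy, scikit-learn, gradient-boosted trees) applied to utility customer data. The sketch below is a minimal, hypothetical illustration of that workflow; the feature columns, target, and synthetic data are assumptions for illustration only, not details from the posting.

```python
# Minimal sketch of the classical-ML workflow described above, on synthetic
# meter-style data. Feature names and the target are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "avg_daily_kwh": rng.gamma(2.0, 15.0, n),    # consumption feature
    "outage_count_12m": rng.poisson(1.5, n),     # reliability feature
    "tenure_years": rng.uniform(0, 30, n),       # customer tenure
})
# Hypothetical target: whether a customer adopts a demand-response program.
adoption_prob = 1 / (1 + np.exp(-(0.02 * df["avg_daily_kwh"] - 1)))
df["adopted_program"] = (rng.random(n) < adoption_prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="adopted_program"), df["adopted_program"],
    test_size=0.2, random_state=0,
)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("hold-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```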
  • Data Scientist with Hands On development experience with R, SQL & Python

    Central Point Partners (3.7 company rating)

    Senior data scientist job in Columbus, OH

    *Per the client, No C2Cs!* Central Point Partners is currently interviewing candidates in the Columbus, OH area for a large client. Only GCs and USCs. This position is Hybrid (4 Days onsite)! Only candidates who are local to Columbus, OH will be considered. Data Scientist with Hands On development experience with R, SQL & Python Summary: Our client is seeking a passionate, data-savvy Senior Data Scientist to join the Enterprise Analytics team to fuel our mission of growth through data-driven insights and opportunity discovery. This dynamic role uses a consultative approach with the business segments to dive into our customer, product, channel, and digital data to uncover opportunities for consumer experience optimization and customer value delivery. You will also enable stakeholders with actionable, intuitive performance insights that provide the business with direction for growth. The ideal candidate will have a robust mix of technical and communication skills, with a passion for optimization, data storytelling, and data visualization. You will collaborate with a centralized team of data scientists as well as teams across the organization including Product, Marketing, Data, Finance, and senior leadership. This is an exciting opportunity to be a key influencer to the company's strategic decisions and to learn and grow with our Analytics team. Notes from the manager The skills that will be critical will be Python or R and a firm understanding of SQL along with foundationally understanding what data is needed to perform studies now and in the future. For a high-level summary that should help describe what this person will be asked to do alongside their peers: I would say this person will balance analysis with development, knowing when to jump in and knowing when to step back to lend their expertise. Feature & Functional Design Data scientists are embedded in the teams designing the feature. Their main job here is to define the data tracking needed to evaluate the business case-things like event logging, Adobe tagging, third-party data ingestion, and any other tracking requirements. They are also meant to consult and outline if/when business should be bringing data into the bank and will help connect business with CDAO and IT warehousing and data engineering partners should new data need to be brought forward. Feature Engineering & Development The same data scientists stay involved as the feature moves into execution. They support all necessary functions (Amigo, QA, etc.) to ensure data tracking is in place when the feature goes live. They also begin preparing to support launch evaluation and measurement against experimentation design or business case success criteria. Feature Rollout & Performance Evaluation They own tracking the rollout, running A/B tests, and conducting impact analysis for all features they have been involved with during the Feature & Functional Design and Feature Engineering & Development stages. They provide an unbiased view of how the feature performs against the original business case along with making objective recommendations that will provide direction for business. They will roll off once the feature has matured through business case/experiment design and evaluation. In addition to supporting feature rollouts…
These initiatives may include designing experiments, conducting exploratory analyses, developing predictive models, or identifying new opportunities for impact. For more information about this opportunity, please contact Bill Hart at ************ AND email your resume to **********************************!
    $58k-73k yearly est. 3d ago
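The rollout and performance-evaluation duties above center on A/B testing. Below is a minimal sketch of the kind of launch evaluation described, a two-proportion z-test; the conversion counts are invented for illustration.

```python
# Minimal sketch of an A/B launch evaluation: a two-proportion z-test comparing
# conversion between control and treatment. Counts are illustrative only.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 468]      # successes: control, treatment
exposures = [10000, 10000]    # users exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]
print(f"absolute lift = {lift:.4f}, z = {z_stat:.2f}, p = {p_value:.4f}")
```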
  • Senior Data Engineer

    Pinnacle Partners, Inc. (4.4 company rating)

    Senior data scientist job in Indianapolis, IN

    Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. This successful resource will be responsible for supporting the large-scale data modernization initiative and operationalize the platform moving forward. RESPONSIBILITIES: Design, develop, and refine BI focused data architecture and data platforms Work with internal teams to gather requirements and translate business needs into technical solutions Build and maintain data pipelines supporting transformation Develop technical designs, data models, and roadmaps Troubleshoot and resolve data quality and processing issues Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows Mentor and support junior team members REQUIREMENTS: 5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling 5+ years of experience across end-to-end data analysis and development Experience using GIT version control Advanced SQL skills Strong experience with AWS cloud PREFERRED SKILLS: Experience with Snowflake Experience with Python or R Bachelor's degree in an IT-Related field TERMS: This is a direct hire opportunity with a salary up to $130K based on experience. They offer benefits including medical, dental, and vision along with generous PTO, 401K matching, wellness programs, and other benefits.
    $130k yearly 1d ago
  • Sr Data Engineer

    Emerald Resource Group

    Senior data scientist job in Beachwood, OH

    Rate: Up to $75/hr The Opportunity: Emerald Resource Group is exclusively partnering with a Fortune 500-level Manufacturing & Technology Leader to identify a Senior Data Engineer. This organization operates globally and is currently investing heavily in a massive digital transformation to modernize how they utilize R&D and manufacturing data. This is a rare opportunity to join a stable, high-revenue enterprise environment where you will build the "data plumbing" that supports critical analytics for global operations. The Role: Architect & Build: You will design and implement robust, scalable data pipelines using the Microsoft Azure stack, ensuring data flows seamlessly from legacy on-prem sources to the cloud. Data Strategy: Partner with the Agile Data Project Manager to translate complex business requirements into technical data models. Performance Tuning: Serve as the Subject Matter Expert (SME) for query optimization and database performance, handling massive datasets generated by global labs and factories. Responsibilities: Develop and maintain ETL/ELT processes using Azure Data Factory (ADF) and Databricks. Write advanced, high-efficiency SQL queries and stored procedures. Design data lakes and data warehouses that support Power BI reporting and advanced analytics. Collaborate with Data Scientists to prepare raw data for machine learning models. Mentor junior engineers and ensure code quality through rigorous peer reviews. Requirements (Senior/Principal Level): 8+ years of hands-on experience in Data Engineering or Database Development. Deep expertise in the Azure Data Stack (Azure SQL, Azure Data Factory, Azure Synapse/Data Warehouse, Databricks). Mastery of SQL (T-SQL) and experience with Python or Scala for data manipulation. Proven experience migrating on-premise data (from ERPs like SAP) to the Cloud. Preferred: Experience in Manufacturing or Process Industries (Chemical/Pharma). Knowledge of SAP data structures (extracting data from SAP ECC or S/4HANA). Familiarity with DevOps practices (CI/CD pipelines for data).
    $75 hourly 4d ago
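The posting above describes ETL/ELT in Azure Data Factory and Databricks, moving data from legacy on-prem sources to the cloud. A minimal Databricks-style sketch of that ingestion pattern follows; the JDBC endpoint, credentials, and table names are placeholders, not details from the posting.

```python
# Minimal sketch: pull a table from a SQL Server source over JDBC in Databricks
# and land it as a Delta table. Endpoint, credentials, and names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

raw = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-server.database.windows.net;database=erp")
    .option("dbtable", "dbo.production_orders")
    .option("user", "etl_user")
    .option("password", "<from-secret-scope>")
    .load()
)

# Light cleanup before landing in the lake: drop exact duplicates, stamp load time.
cleaned = raw.dropDuplicates().withColumn("_loaded_at", F.current_timestamp())
cleaned.write.format("delta").mode("overwrite").saveAsTable("bronze.production_orders")
```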
  • Data Engineer

    Agility Partners (4.6 company rating)

    Senior data scientist job in Columbus, OH

    We're seeking a skilled Data Engineer based in Columbus, OH, to support a high-impact data initiative. The ideal candidate will have hands-on experience with Python, Databricks, SQL, and version control systems, and be comfortable building and maintaining robust, scalable data solutions. Key Responsibilities Design, implement, and optimize data pipelines and workflows within Databricks. Develop and maintain data models and SQL queries for efficient ETL processes. Partner with cross-functional teams to define data requirements and deliver business-ready solutions. Use version control systems to manage code and ensure collaborative development practices. Validate and maintain data quality, accuracy, and integrity through testing and monitoring. Required Skills Proficiency in Python for data engineering and automation. Strong, practical experience with Databricks and distributed data processing. Advanced SQL skills for data manipulation and analysis. Experience with Git or similar version control tools. Strong analytical mindset and attention to detail. Preferred Qualifications Experience with cloud platforms (AWS, Azure, or GCP). Familiarity with enterprise data lake architectures and best practices. Excellent communication skills and the ability to work independently or in team environments.
    $95k-127k yearly est. 5d ago
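One responsibility above is validating data quality, accuracy, and integrity through testing and monitoring. A minimal pandas sketch of assertion-style checks is below; the column names and thresholds are assumptions for illustration.

```python
# Minimal sketch of batch data-quality checks run before a dataset is published.
# Columns and thresholds are illustrative assumptions.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    if df["amount"].lt(0).any():
        failures.append("negative amounts found")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1% threshold")
    return failures

batch = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [10.0, -5.0, 7.5],
    "customer_id": ["a", None, "c"],
})
print(run_quality_checks(batch))
```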
  • Data Engineer

    Iqventures

    Senior data scientist job in Dublin, OH

    The Data Engineer is a technical leader and hands-on developer responsible for designing, building, and optimizing data pipelines and infrastructure to support analytics and reporting. This role will serve as the lead developer on strategic data initiatives, ensuring scalable, high-performance solutions are delivered effectively and efficiently. The ideal candidate is self-directed, thrives in a fast-paced project environment, and is comfortable making technical decisions and architectural recommendations. The ideal candidate has prior experience in modern data platforms, most notably Databricks and the “lakehouse” architecture. They will work closely with cross-functional teams, including business stakeholders, data analysts, and engineering teams, to develop data solutions that align with enterprise strategies and business goals. Experience in the financial industry is a plus, particularly in designing secure and compliant data solutions. Responsibilities: Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data. Optimize data storage, retrieval, and processing for performance, security, and cost-efficiency. Ensure data integrity and governance by implementing robust validation, monitoring, and compliance processes. Consume and analyze data from the data pipeline to infer, predict, and recommend actionable insights, which will inform operational and strategic decision making to produce better results. Empower departments and internal consumers with metrics and business intelligence to operate and direct our business, better serving our end customers. Determine technical and behavioral requirements, identify strategies as solutions, and select solutions based on resource constraints. Work with the business, process owners, and IT team members to design data and advanced analytics solutions. Perform data modeling and prepare data in databases for analysis and reporting through various analytics tools. Play a technical specialist role in championing data as a corporate asset. Provide technical expertise in collaborating with project and other IT teams, internal and external to the company. Contribute to and maintain system data standards. Research and recommend innovative and, where possible, automated approaches for system data administration tasks. Identify approaches that leverage our resources and provide economies of scale. Engineer systems that balance and meet performance, scalability, recoverability (including backup design), maintainability, security, and high availability requirements and objectives. Skills: Databricks and related - SQL, Python, PySpark, Delta Live Tables, Data pipelines, AWS S3 object storage, Parquet/Columnar file formats, AWS Glue. Systems Analysis - The application of systems analysis techniques and procedures, including consulting with users, to determine hardware, software, platform, or system functional specifications. Time Management - Managing one's own time and the time of others. Active Listening - Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times. Critical Thinking - Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems. Active Learning - Understanding the implications of new information for both current and future problem-solving and decision-making.
Writing - Communicating effectively in writing as appropriate for the needs of the audience. Speaking - Talking to others to convey information effectively. Instructing - Teaching others how to do something. Service Orientation - Actively looking for ways to help people. Complex Problem Solving - Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions. Troubleshooting - Determining causes of operating errors and deciding what to do about it. Judgment and Decision Making - Considering the relative costs and benefits of potential actions to choose the most appropriate one. Experience and Education: High School Diploma (or GED or High School Equivalence Certificate). Associate degree or equivalent training and certification. 5+ years of experience in data engineering including SQL, data warehousing, cloud-based data platforms. Databricks experience. 2+ years Project Lead or Supervisory experience preferred. Must be legally authorized to work in the United States. We are unable to sponsor or take over sponsorship at this time.
    $76k-103k yearly est. 3d ago
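The skills list above names the lakehouse building blocks (AWS S3 object storage, Parquet/columnar files, Delta tables on Databricks). A minimal PySpark sketch of that ingestion step follows; bucket paths and columns are placeholders.

```python
# Minimal sketch: read raw Parquet from S3 and append it to a curated Delta table.
# Paths and columns are placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Raw columnar files landed in S3 by an upstream process.
payments = spark.read.parquet("s3://example-bucket/raw/payments/")

# Standardize types and partition the curated table by load date.
curated = (
    payments
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
    .withColumn("load_date", F.to_date(F.col("ingested_at")))
)
(curated.write.format("delta")
    .mode("append")
    .partitionBy("load_date")
    .save("s3://example-bucket/curated/payments_delta/"))
```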
  • Data Engineer (Databricks)

    ComResource (3.6 company rating)

    Senior data scientist job in Columbus, OH

    ComResource is searching for a highly skilled Data Engineer with a background in SQL and Databricks that can handle the design and construction of scalable management systems, ensure that all data systems meet company requirements, and also research new uses for data acquisition. Requirements: Design, construct, install, test and maintain data management systems. Build high-performance algorithms, predictive models, and prototypes. Ensure that all systems meet the business/company requirements as well as industry practices. Integrate up-and-coming data management and software engineering technologies into existing data structures. Develop set processes for data mining, data modeling, and data production. Create custom software components and analytics applications. Research new uses for existing data. Employ an array of technological languages and tools to connect systems together. Recommend different ways to constantly improve data reliability and quality. Qualifications: 5+ years data quality engineering Experience with Cloud-based systems, preferably Azure Databricks and SQL Server testing Experience with ML tools and LLMs Test automation frameworks Python and SQL for data quality checks Data profiling and anomaly detection Documentation and quality metrics Healthcare data validation experience preferred Test automation and quality process development Plus: Azure Databricks Azure Cognitive Services integration Databricks Foundational model Integration Claude API implementation a plus Python and NLP frameworks (spaCy, Hugging Face, NLTK)
    $79k-102k yearly est. 1d ago
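The qualifications above emphasize data profiling and anomaly detection for data quality. Below is a minimal sketch of one common approach, flagging daily load volumes with large z-scores; the numbers and the 2-sigma threshold are illustrative assumptions.

```python
# Minimal sketch of volume-based anomaly detection for a daily data load.
# The counts and the 2-sigma threshold are illustrative assumptions.
import pandas as pd

counts = pd.Series(
    [10210, 10180, 10400, 10050, 10330, 4300, 10290],
    index=pd.date_range("2025-01-01", periods=7, name="load_date"),
    name="row_count",
)

z_scores = (counts - counts.mean()) / counts.std()
anomalies = counts[z_scores.abs() > 2]  # days whose volume looks suspicious
print(anomalies)
```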
  • Data Engineer / Architect

    CBTS (4.9 company rating)

    Senior data scientist job in Cincinnati, OH

    Role: Data Engineer / Architect Contract Must Have Skills: Business Intelligence - Data Engineering Data Stage DBT Snowflake SQL JOB DESCRIPTION: Bachelor's degree in Computer Science/Information Systems or equivalent combination of education and experience. Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources in language that is appropriate for each group. Four+ years of relevant IT experience in data engineering or related disciplines. Significant experience with at least one major relational database management system (RDBMS). Experience working with and supporting Unix/Linux and Windows systems. Proficiency in relational database modeling concepts and techniques. Solid conceptual understanding of distributed computing principles and scalable data architectures. Working knowledge of application and data security concepts, best practices, and common vulnerabilities. Experience in one or more of the following disciplines preferred: scalable data platforms and modern data architectures technologies and distributions, metadata management products, commercial ETL tools, data reporting and visualization tools, messaging systems, data warehousing, major version control systems, continuous integration/delivery tools, infrastructure automation and virtualization tools, major cloud platforms (AWS, Azure, GCP), or rest API design and development. Previous experience working with offshore teams desired. Financial industry experience, especially Regulatory Reporting, is a plus.
    $67k-95k yearly est. 2d ago
  • Senior Data Architect

    Intelliswift-An LTTS Company

    Senior data scientist job in Marysville, OH

    4 days onsite - Marysville, OH Skillset: Bachelor's degree in computer science, data science, engineering, or related field 10 years minimum relevant experience in design and implementation of data models (Erwin) for enterprise data warehouse initiatives Experience leading projects involving cloud data lakes, data warehousing, data modeling, and data analysis Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS), real-time data distribution (Kinesis, Kafka, Dataflow), and modern data warehouse tools (Redshift, Snowflake, Databricks) Experience with various database platforms, including DB2, MS SQL Server, PostgreSQL, Couchbase, MongoDB, etc. Understanding of entity-relationship modeling, metadata systems, and data security, quality tools and techniques Ability to design traditional/relational and modern big-data architecture based on business needs Experience with business intelligence tools and technologies such as Informatica, Power BI, and Tableau Exceptional communication and presentation skills Strong analytical and problem-solving skills Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders Ability to guide solution design and architecture to meet business needs.
    $93k-125k yearly est. 1d ago
  • Senior Data Engineer

    Brooksource (4.1 company rating)

    Senior data scientist job in Indianapolis, IN

    Senior Data Engineer - Azure Data Warehouse (5-7+ Years Experience) Long-term renewing contract supporting Azure-based data warehouse and dashboarding initiatives. Work alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices. Key Responsibilities · Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server · Apply Medallion architecture principles and best practices for data lake and warehouse design · Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions · Develop and maintain CI/CD pipelines for data workflows and dashboard deployments · Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments · Mentor junior team members and promote best practices in data modeling, cleansing, and promotion · Support dashboarding initiatives with Power BI and wireframe collaboration · Ensure auditability, lineage, and performance across SQL Server and Oracle environments Required Skills & Experience · 5-7+ years in data engineering, data warehouse design, and ETL development · Strong expertise in Azure Data Factory, Databricks, and Python · Deep understanding of SQL Server, Oracle, PostgreSQL & Cosmos DB and data modeling standards · Proven experience with Medallion architecture and data lakehouse best practices · Hands-on with CI/CD, DevOps, and deployment automation · Agile mindset with ability to manage multiple priorities and deliver on time · Excellent communication and documentation skills Bonus Skills · Experience with GCP or AWS · Familiarity with Jira, Confluence, and AppDynamics
    $77k-104k yearly est. 4d ago
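The posting above calls for Medallion (bronze/silver/gold) architecture on Databricks. A minimal PySpark sketch of that layering is below; the table names and cleaning rules are assumptions for illustration.

```python
# Minimal sketch of Medallion layering: bronze (raw) -> silver (cleaned) ->
# gold (business aggregate). Table names and rules are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw records exactly as ingested.
bronze = spark.read.table("bronze.claims_raw")

# Silver: conformed records (typed columns, duplicates removed, nulls filtered).
silver = (
    bronze.dropDuplicates(["claim_id"])
    .withColumn("claim_amount", F.col("claim_amount").cast("double"))
    .filter(F.col("claim_amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.claims")

# Gold: business-level aggregate ready for Power BI dashboards.
gold = silver.groupBy("region").agg(F.sum("claim_amount").alias("total_claims"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.claims_by_region")
```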
  • Data Engineer

    Dataexl Information LLC

    Senior data scientist job in Cincinnati, OH

    Title: Azure Data Engineer (Only W2) Duration: 1 Year Contract (potential for conversion/extension) We need a strong Azure Data Engineer with expertise in Databricks (including Unity Catalog experience), strong PySpark, and an understanding of CI/CD and Infrastructure as Code (IaC) using Terraform. Requirements • 7+ years of experience as a Data Engineer • Hands-on experience with Azure Databricks, Spark, and Python • Experience with Delta Live Tables (DLT) and Databricks SQL • Strong SQL and database background • Experience with Azure Functions, messaging services, or orchestration tools • Familiarity with data governance, lineage, or cataloging tools (e.g., Purview, Unity Catalog) • Experience monitoring and optimizing Databricks clusters or workflows • Experience working with Azure cloud data services and understanding how they integrate with Databricks and enterprise data platforms • Experience with Terraform for cloud infrastructure provisioning • Experience with GitHub and GitHub Actions for version control and CI/CD automation • Strong understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior) • Familiarity with SDLC and modern engineering practices • Ability to balance multiple priorities, work independently, and stay organized
    $75k-101k yearly est. 2d ago
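The posting above asks for Delta Live Tables (DLT) experience. Below is a minimal sketch of a DLT pipeline written with the Databricks Python API; the landing path and expectation rule are assumptions, and the module only runs inside a Databricks DLT pipeline (where `spark` is provided), not as a standalone script.

```python
# Minimal sketch of a Delta Live Tables pipeline (Databricks Python API).
# Source path and expectation rule are placeholders; runs only inside DLT.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw payment events loaded incrementally from cloud storage")
def payments_bronze():
    return (
        spark.readStream.format("cloudFiles")  # Auto Loader; spark provided by DLT
        .option("cloudFiles.format", "json")
        .load("abfss://landing@examplestorage.dfs.core.windows.net/payments/")
    )

@dlt.table(comment="Validated payments with a processing timestamp")
@dlt.expect_or_drop("positive_amount", "amount > 0")  # drop rows failing the rule
def payments_silver():
    return dlt.read_stream("payments_bronze").withColumn(
        "processed_at", F.current_timestamp()
    )
```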
  • Senior Data Engineer

    Vista Applied Solutions Group Inc. (4.0 company rating)

    Senior data scientist job in Cincinnati, OH

    Data Engineer III About the Role We're looking for a Data Engineer III to play a key role in a large-scale data migration initiative within Client's commercial lending, underwriting, and reporting areas. This is a hands-on engineering role that blends technical depth with business analysis, focused on transforming legacy data systems into modern, scalable pipelines. What You'll Do Analyze legacy SQL, DataStage, and SAS code to extract business logic and identify key data dependencies. Document current data usage and evaluate the downstream impact of migrations. Design, build, and maintain data pipelines and management systems to support modernization goals. Collaborate with business and technology teams to translate requirements into technical solutions. Improve data quality, reliability, and performance across multiple environments. Develop backend solutions using Python, Java, or J2EE, and integrate with tools like DataStage and dbt. What You Bring 5+ years of experience with relational and non-relational databases (SQL, Snowflake, DB2, MongoDB). Strong background in legacy system analysis (SQL, DataStage, SAS). Experience with Python or Java for backend development. Proven ability to build and maintain ETL pipelines and automate data processes. Exposure to AWS, Azure, or GCP. Excellent communication and stakeholder engagement skills. Financial domain experience-especially commercial lending or regulatory reporting-is a big plus. Familiarity with Agile methodologies preferred.
    $74k-97k yearly est. 2d ago
  • GCP Data Engineer

    Miracle Software Systems, Inc. (4.2 company rating)

    Senior data scientist job in Dearborn, MI

    Experience Required: 8+ years Work Status: Hybrid We're seeking an experienced GCP Data Engineer who can build cloud analytics platform to meet ever expanding business requirements with speed and quality using lean Agile practices. You will work on analyzing and manipulating large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in the Google Cloud Platform (GCP). You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications to GCP. Experience with large scale solution and operationalization of data warehouses, data lakes and analytics platforms on Google Cloud Platform or other cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design right solutions with appropriate combination of GCP and 3rd party technologies for deploying on Google Cloud Platform. You will: Work in collaborative environment including pairing and mobbing with other cross-functional engineers Work on a small agile team to deliver working, tested software Work effectively with fellow data engineers, product owners, data champions and other technical experts Demonstrate technical knowledge/leadership skills and advocate for technical excellence Develop exceptional Analytics data products using streaming, batch ingestion patterns in the Google Cloud Platform with solid Data Warehouse principles Be the Subject Matter Expert in Data Engineering and GCP tool technologies Skills Required: Big Query Skills Preferred: N/A Experience Required: In-depth understanding of Google's product technology (or other cloud platform) and underlying architectures 5+ years of analytics application development experience required 5+ years of SQL development experience 3+ years of Cloud experience (GCP preferred) with solution designed and implemented at production scale Experience working in GCP based Big Data deployments (Batch/Real-Time) leveraging Terraform, Big Query, Google Cloud Storage, PubSub, Dataflow, Dataproc, Airflow, etc. 2 + years professional development experience in Java or Python, and Apache Beam Extracting, Loading, Transforming, cleaning, and validating data Designing pipelines and architectures for data processing 1+ year of designing and building CI/CD pipelines Experience Preferred: Experience building Machine Learning solutions using TensorFlow, BigQueryML, AutoML, Vertex AI Experience in building solution architecture, provision infrastructure, secure and reliable data-centric services and application in GCP Experience with DataPlex is preferred Experience with development eco-system such as Git, Jenkins and CICD Exceptional problem solving and communication skills Experience in working with DBT/Dataform Experience in working with Agile and Lean methodologies Team player and attention to detail Performance tuning experience Education Required: Bachelor's Degree Education Preferred: Master's Degree Additional Safety Training/Licensing/Personal Protection Requirements: Additional Information: ***POSITION IS HYBRID*** Primary Skills Required: Experience in working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. 
Implement methods for automation of all parts of the pipeline to minimize labor in development and production Experience in analyzing complex data, organizing raw data and integrating massive datasets from multiple data sources to build subject areas and reusable data products Experience in working with architects to evaluate and productionalize appropriate GCP tools for data ingestion, integration, presentation, and reporting Experience in working with all stakeholders to formulate business problems as technical data requirement, identify and implement technical solutions while ensuring key business drivers are captured in collaboration with product management This includes designing and deploying a pipeline with automated data lineage. Identify, develop, evaluate and summarize Proof of Concepts to prove out solutions. Test and compare competing solutions and report out a point of view on the best solution. Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, DataFlow, Pub/Sub, BigTable, Data Fusion, DataProc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine. Additional Skills Preferred: Strong drive for results and ability to multi-task and work independently Self-starter with proven innovation skills Ability to communicate and work with cross-functional teams and all levels of management Demonstrated commitment to quality and project timing Demonstrated ability to document complex systems Experience in creating and executing detailed test plans Additional Education Preferred GCP Professional Data Engineer Certified In-depth software engineering knowledge "
    $71k-94k yearly est. 5d ago
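The posting above names the streaming GCP stack (Pub/Sub, Dataflow, BigQuery, Apache Beam). Below is a minimal Beam sketch of that pattern; the project, topic, table, and schema are placeholders invented for illustration.

```python
# Minimal sketch: an Apache Beam streaming pipeline reading Pub/Sub messages and
# writing rows to BigQuery (runnable on Dataflow). All names are placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True, runner="DataflowRunner", project="example-project",
    region="us-central1", temp_location="gs://example-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/vehicle-events")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example-project:analytics.vehicle_events",
            schema="vehicle_id:STRING,speed_mps:FLOAT,event_time:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```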
  • Senior Data Scientist - Metrics

    May Mobility (3.9 company rating)

    Senior data scientist job in Ann Arbor, MI

    May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that literally reimagines the way AVs think. Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We're building the world's best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces. Since our founding in 2017, we've given more than 300,000 autonomy-enabled rides to real people around the globe. And we're just getting started. We're hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us. May Mobility is experiencing a period of significant growth as we expand our autonomous shuttle and mobility services nationwide. As we advance toward widespread deployment, the ability to measure safety and comfort objectively, accurately, and at scale is critical. The Senior Data Scientist in this role will shape how we evaluate AV performance, uncover system vulnerabilities, and ensure that every driving decision meets the highest standards of safety and passenger experience. Your work will directly influence product readiness, inform engineering priorities, and accelerate the path to building trustworthy, human-centered autonomous driving systems. Responsibilities Develop and refine safety and comfort metrics for evaluating autonomous vehicle performance across real-world and simulation data. Build ML and non-ML models to detect unsafe, uncomfortable, or anomalous behaviors. Analyze large-scale drive logs and simulation datasets to identify patterns, regressions, and system gaps. Collaborate with perception, prediction, behavior, and simulation teams to integrate metrics into workflows. Communicate insights and recommendations to engineering leaders and cross-functional teams. Skills Success in this role typically requires the following competencies: Strong proficiency in Python, SQL, and data analysis tools (e.g., Pandas, NumPy, Spark). Strong understanding of vehicle dynamics, kinematics, agent interactions, and road/traffic elements. Expertise in analyzing high-dimensional or time-series data from sensors, logs, and simulation systems. Excellent technical communication skills with the ability to clearly present complex model designs and results to both technical and non-technical stakeholders. Detail-oriented with a focus on validation, testing, and error detection. Qualifications and Experience Required B.S, M.S. or Ph.D. Degree in Engineering, Data Science, Computer Science, Math, or a related quantitative field. 5+ years of experience in data science, applied machine learning, robotics, or autonomous systems. 2+ years working in AV, ADAS, robotics, or another safety-critical domain involving vehicle behavior analysis. Demonstrated experience developing or evaluating safety and/or comfort metrics for autonomous or robotic systems. Hands-on experience working with real-world driving logs and/or simulation data. Desired Background in motion planning, behavior prediction, or multi-agent interaction modeling. Experience designing metric-driven development, KPIs, and automated triaging pipelines. 
Benefits and Perks Comprehensive healthcare suite including medical, dental, vision, life, and disability plans. Domestic partners who have been residing together at least one year are also eligible to participate. Health Savings and Flexible Spending Healthcare and Dependent Care Accounts available. Rich retirement benefits, including an immediately vested employer safe harbor match. Generous paid parental leave as well as a phased return to work. Flexible vacation policy in addition to paid company holidays. Total Wellness Program providing numerous resources for overall wellbeing Don't meet every single requirement? Studies have shown that women and/or people of color are less likely to apply to a job unless they meet every qualification. At May Mobility, we're committed to building a diverse, inclusive, and authentic workforce, so if you're excited about this role but your previous experience doesn't align perfectly with every qualification, we encourage you to apply anyway! You may be the perfect candidate for this or another role at May. Want to learn more about our culture & benefits? Check out our website! May Mobility is an equal opportunity employer. All applicants for employment will be considered without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity or expression, veteran status, genetics or any other legally protected basis. Below, you have the opportunity to share your preferred gender pronouns, gender, ethnicity, and veteran status with May Mobility to help us identify areas of improvement in our hiring and recruitment processes. Completion of these questions is entirely voluntary. Any information you choose to provide will be kept confidential, and will not impact the hiring decision in any way. If you believe that you will need any type of accommodation, please let us know. Note to Recruitment Agencies: May Mobility does not accept unsolicited agency resumes. Furthermore, May Mobility does not pay placement fees for candidates submitted by any agency other than its approved partners. Salary Range$163,477-$240,408 USD
    $163.5k-240.4k yearly Auto-Apply 11d ago
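The role above involves turning logged vehicle dynamics into comfort metrics. A minimal sketch of one such metric, longitudinal jerk computed from an acceleration time series, is below; the signal and the comfort threshold are invented for illustration.

```python
# Minimal sketch of a comfort metric: longitudinal jerk (d(accel)/dt) from a
# 10 Hz drive log, flagged against an assumed comfort threshold.
import numpy as np
import pandas as pd

log = pd.DataFrame({
    "t": np.arange(0, 5, 0.1),  # seconds
    "accel_mps2": np.concatenate([
        np.zeros(25),                 # cruising
        np.linspace(0.0, -3.0, 5),    # hard braking onset
        np.full(20, -3.0),            # sustained braking
    ]),
})

log["jerk_mps3"] = np.gradient(log["accel_mps2"], log["t"])
COMFORT_JERK_LIMIT = 2.0  # m/s^3, assumed threshold
uncomfortable = log[log["jerk_mps3"].abs() > COMFORT_JERK_LIMIT]
print(f"{len(uncomfortable)} of {len(log)} samples exceed the jerk limit")
```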
  • Principal Data Scientist

    Maximus (4.3 company rating)

    Senior data scientist job in Fort Wayne, IN

    Description & Requirements We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team. You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes. This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work.) This position requires occasional travel to the DC area for client meetings. Essential Duties and Responsibilities: - Make deep dives into the data, pulling out objective insights for business leaders. - Initiate, craft, and lead advanced analyses of operational data. - Provide a strong voice for the importance of data-driven decision making. - Provide expertise to others in data wrangling and analysis. - Convert complex data into visually appealing presentations. - Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners. - Understand the importance of automation and look to implement and initiate automated solutions where appropriate. - Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects. - Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects. - Guide operational partners on product performance and solution improvement/maturity options. - Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization. - Learn new skills in advanced analytics/AI/ML tools, techniques, and languages. - Mentor more junior data analysts/data scientists as needed. - Apply strategic approach to lead projects from start to finish; Job-Specific Minimum Requirements: - Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation. - Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital. - Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning. - Contribute to the development of mathematically rigorous process improvement procedures. - Maintain current knowledge and evaluation of the AI technology landscape and emerging. developments and their applicability for use in production/operational environments. Minimum Requirements - Bachelor's degree in related field required. - 10-12 years of relevant professional experience required. 
Job-Specific Minimum Requirements: - 10+ years of relevant Software Development + AI / ML / DS experience. - Professional Programming experience (e.g. Python, R, etc.). - Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML. - Experience with API programming. - Experience with Linux. - Experience with Statistics. - Experience with Classical Machine Learning. - Experience working as a contributor on a team. Preferred Skills and Qualifications: - Masters or BS in quantitative discipline (e.g. Math, Physics, Engineering, Economics, Computer Science, etc.). - Experience developing machine learning or signal processing algorithms: - Ability to leverage mathematical principles to model new and novel behaviors. - Ability to leverage statistics to identify true signals from noise or clutter. - Experience working as an individual contributor in AI. - Use of state-of-the-art technology to solve operational problems in AI and Machine Learning. - Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles. - Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions. - Ability to build reference implementations of operational AI & Advanced Analytics processing solutions. Background Investigations: - IRS MBI - Eligibility #techjobs #VeteransPage EEO Statement Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics. Pay Transparency Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances. Accommodations Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process-including accessing job postings, completing assessments, or participating in interviews,-please contact People Operations at **************************. Minimum Salary $ 156,740.00 Maximum Salary $ 234,960.00
    $67k-95k yearly est. Easy Apply 9d ago
  • Senior Data Scientist

    Cardinal Health (4.4 company rating)

    Senior data scientist job in Indianapolis, IN

    **What Data Science contributes to Cardinal Health** The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytics products, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling. Data Science applies base, scientific methodologies from various disciplines, techniques and tools that extracts knowledge and insight from data to solve complex business problems on large data sets, integrating multiple systems. This role will support the Major Rugby business unit, a legacy supplier of multi-source, generic pharmaceuticals for over 60 years. Major Rugby provides over 1,000 high-quality, Rx, OTC and vitamin, mineral and supplement products to the acute, retail, government and consumer markets. This role will focus on leveraging advanced analytics, machine learning, and optimization techniques to solve complex challenges related to demand forecasting, inventory optimization, logistics efficiency and risk mitigation. Our goal is to uncover insights and drive meaningful deliverables to improve decision making and business outcomes. **Responsibilities:** + Leads the design, development, and deployment of advanced analytics and machine learning models to solve complex business problems + Collaborates cross-functionally with product, engineering, operations, and business teams to identify opportunities for data-driven decision-making + Translates business requirements into analytical solutions and delivers insights that drive strategic initiatives + Develops and maintains scalable data science solutions, ensuring reproducibility, performance, and maintainability + Evaluates and implements new tools, frameworks, and methodologies to enhance the data science toolkit + Drives experimentation and A/B testing strategies to optimize business outcomes + Mentors junior data scientists and contributes to the development of a high-performing analytics team + Ensures data quality, governance, and compliance with organizational and regulatory standards + Stays current with industry trends, emerging technologies, and best practices in data science and AI + Contributes to the development of internal knowledge bases, documentation, and training materials **Qualifications:** + 8-12 years of experience in data science, analytics, or a related field (preferred) + Advanced degree (Master's or Ph.D.) 
in Data Science, Computer Science, Engineering, Operations Research, Statistics, or a related discipline preferred + Strong programming skills in Python and SQL; + Proficiency in data visualization tools such as Tableau, or Looker, with a proven ability to translate complex data into clear, actionable business insights + Deep understanding of machine learning, statistical modeling, predictive analytics, and optimization techniques + Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is highly desirable + Excellent communication and storytelling skills, with the ability to influence stakeholders and present findings to both technical and non-technical audiences + Experience in Supervised and Unsupervised Machine Learning including Classification, Forecasting, Anomaly Detection, Pattern Detection, Text Mining, using variety of techniques such as Decision trees, Time Series Analysis, Bagging and Boosting algorithms, Neural Networks, Deep Learning and Natural Language processing (NLP). + Experience with PyTorch or other deep learning frameworks + Strong understanding of RESTful APIs and / or data streaming a big plus + Required experience of modern version control (GitHub, Bitbucket) + Hands-on experience with containerization (Docker, Kubernetes, etc.) + Experience with product discovery and design thinking + Experience with Gen AI + Experience with supply chain analytics is preferred **Anticipated salary range:** $123,400 - $176,300 **Bonus eligible:** Yes **Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being. + Medical, dental and vision coverage + Paid time off plan + Health savings account (HSA) + 401k savings plan + Access to wages before pay day with my FlexPay + Flexible spending accounts (FSAs) + Short- and long-term disability coverage + Work-Life resources + Paid parental leave + Healthy lifestyle programs **Application window anticipated to close:** 12/02/2025 *if interested in opportunity, please submit application as soon as possible. The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity. \#LI-Remote \#LI-AP4 _Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._ _Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._ _To read and review this privacy notice click_ here (***************************************************************************************************************************
    $123.4k-176.3k yearly 60d+ ago
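The posting above focuses on demand forecasting and inventory optimization. Below is a minimal sketch of a seasonal demand forecast with Holt-Winters exponential smoothing; the monthly series is synthetic and stands in for real order history.

```python
# Minimal sketch of a demand forecast: Holt-Winters on a synthetic monthly series,
# projecting the next quarter. A real model would use actual order history.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
months = pd.date_range("2021-01-01", periods=48, freq="MS")
demand = pd.Series(
    1000 + 5 * np.arange(48)                        # trend
    + 120 * np.sin(2 * np.pi * np.arange(48) / 12)  # yearly seasonality
    + rng.normal(0, 40, 48),                        # noise
    index=months,
)

model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(model.forecast(3).round(1))  # expected demand for the next three months
```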
  • Manager, Data Scientist, DMP

    Standard Chartered (4.8 company rating)

    Senior data scientist job in Indiana

    Work Type: Office Working Employment Type: Permanent Job Description: This is a role within the Deposit pricing analytics team in SCMAC. The primary focus of the role is: * To develop AI solutions that are fit for purpose by leveraging advanced data & analytical tools and technology within WRB. The individual will be responsible for end-to-end analytics solution development, deployment, performance assessment and to produce high-quality data science conclusions, backed up by results for WRB business. * Takes end-to-end responsibility for translating business questions into data science requirements and actions. Ensures model governance, including documentation, validation, maintenance, etc. * Responsible for performing the AI solution development and delivery for enabling high impact marketing use cases across products, segments in WRB markets. * Responsible for alignment with country product, segment and Group product and segment teams on key business use cases to address with AI solutions, in accordance with the model governance framework. * Responsible for development of pricing and optimization solutions for markets * Responsible for conceptualizing and building high impact use cases for deposits portfolio * Responsible for implementation and tracking of use case in markets and leading discussions with governance team on model approvals Key Responsibilities Business * Analyse and agree on the solution Design for Analytics projects * On the agreed methodology develop and deliver analytical solutions and models * Partner creating implementation plan with Project owner including models benefit * Support on the deployment of the initiatives including scoring or implementation through any system * Consolidate or Track Model performance for periodic model performance assessment * Create the technical and review documents for approval * Client Lifecycle Management (Acquire, Activation, Cross Sell/Up Sell, Retention & Win-back) * Enable scientific "test and learn" for direct to client campaigns * Pricing analytics and optimization * Digital analytics including social media data analytics for any new methodologies * Channel optimization * Client wallet utilization prediction both off-us and on-us * Client and product profitability prediction Processes * Continuously improve the operational efficiency and effectiveness of processes * Ensure effective management of operational risks within the function and compliance with applicable internal policies, and external laws and regulations Key stakeholders * Group/Region Analytics teams * Group / Region/Country Product & Segment Teams * Group / Region / Country Channels/distribution * Group / Region / Country Risk Analytics Teams * Group / Regional / Country Business Teams * Support functions including Finance, Technology, Analytics Operation Skills and Experience * Data Science * Anti Money Laundering Policies & procedures * Modelling: Data, Process, Events, Objects * Banking Product * 2-4 years of experience (Overall) About Standard Chartered We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you.
You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. Together we: * Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do * Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well * Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term What we offer In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. * Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations. * Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holiday, which is combined to 30 days minimum. * Flexible working options based around home and office locations, with flexible working patterns. * Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits * A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. * Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
    $61k-83k yearly est. 23d ago
  • Data Scientist

    Manifest Solutions 4.6 company rating

    Senior data scientist job in Fort Wayne, IN

    This Data Scientist role is based in Fort Wayne, IN. The position is responsible for conducting basic analysis on various data types to uncover hidden patterns and unknown correlations to support business and management decisions. This includes data mining, data auditing, aggregation, validation, and reconciliation. The role builds basic reports for management and conducts basic hypothesis tests. It utilizes analytics and statistical software such as SQL, R, Python, Excel, Hadoop, Power BI, Cognos, and others to perform analysis and interpret data, and assists with the development of basic analytical tools to solve high-value business problems. Utilize analytical tools and data management skills to provide actionable business intelligence. Build business knowledge across the organization and utilize that knowledge to assist in the development of meaningful analyses. Help identify, acquire, and provide management of data needed for various analytic studies. Assist in the development of components of predictive and prescriptive systems to optimize operations and support strategic initiatives. Apply data visualization tools to large data sets to enhance data understanding. Acquire knowledge of business processes and data acquisition practices from business units using disparate systems. Collaborate with technology partners to develop data acquisition and data management practices. Help develop concise, meaningful written reports, oral presentations of information, and reporting dashboards that meet business needs for status, trends, and variances. Collaborate with technology and business units to operationalize meaningful reports and dashboards. MINIMUM REQUIREMENTS: Bachelor's degree in mathematics, operations research, statistics, economics, data science, computer science, or a related technical field is required. Distribution experience preferred.
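As a hypothetical illustration of the validation and reconciliation work described above (the file names and column names below are invented, not from the posting), a basic pandas reconciliation between a source extract and a warehouse extract might look like this:

    import pandas as pd

    # Hypothetical reconciliation between two extracts of the same data.
    source = pd.read_csv("source_extract.csv")        # e.g., transactional system export
    warehouse = pd.read_csv("warehouse_extract.csv")  # e.g., reporting database export

    checks = {
        "row_count_match": len(source) == len(warehouse),
        "amount_total_diff": source["amount"].sum() - warehouse["amount"].sum(),
        "duplicate_keys_in_source": int(source["record_id"].duplicated().sum()),
        "null_amounts_in_warehouse": int(warehouse["amount"].isna().sum()),
    }
    print(pd.Series(checks))

Checks like these are the routine groundwork behind the reporting and hypothesis testing the role lists.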
    $62k-86k yearly est. 22d ago
  • Principal Data Scientist: Product to Market (P2M) Optimization

    The Gap 4.4 company rating

    Senior data scientist job in Groveport, OH

    About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials. This simple idea, that we all deserve to belong, and on our own terms, is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team. About the Role Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization. The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels. What You'll Do * Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges. * Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals. * Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment. * Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility. * Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly. * Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights. * Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success. * Quantify business benefits from deployed solutions using rigorous statistical and causal inference methods, ensuring that model outcomes translate into measurable value. * Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP. * Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality. Who You Are * Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline. 
* 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains. * Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes. * Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation (see the illustrative sketch below). * Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib. * Hands-on experience with enterprise platforms such as Databricks and cloud environments. * Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions. * Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences. * Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations. * Experience managing code quality, CI/CD pipelines, and GitHub-based workflows. Preferred Qualifications * Experience shaping and executing multi-year analytics strategies in retail or supply chain domains. * Proven ability to balance long-term innovation with short-term deliverables. * Background in agile product development and stakeholder alignment for enterprise-scale initiatives. Benefits at Gap Inc. * Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees. * One of the most competitive Paid Time Off plans in the industry.* * Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.* * Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.* * Employee stock purchase plan.* * Medical, dental, vision and life insurance.* * See more of the benefits we offer. * For eligible employees Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
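The mixed-integer programming expertise this listing asks for can be illustrated with a deliberately tiny allocation problem. This is a hypothetical sketch only: the channels, quantities, and margins are invented, and OR-Tools is just one of the solvers the posting names.

    from ortools.linear_solver import pywraplp

    # Toy problem: allocate 500 units of one style across two channels to maximize
    # margin while honoring a minimum online allocation. All numbers are invented.
    solver = pywraplp.Solver.CreateSolver("SCIP")

    store = solver.IntVar(0, 500, "units_store")
    online = solver.IntVar(0, 500, "units_online")

    solver.Add(store + online <= 500)  # total inventory available
    solver.Add(online >= 100)          # service-level floor for the online channel

    # Illustrative per-unit margins: $12 in stores, $9 online after shipping
    solver.Maximize(12 * store + 9 * online)

    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        print("store:", store.solution_value(), "online:", online.solution_value())
        print("margin:", solver.Objective().Value())

Real P2M problems add thousands of SKU-location pairs, stochastic demand, and business rules, but the modeling pattern of decision variables, constraints, and an objective is the same.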
    $71k-102k yearly est. 24d ago
  • Advisory, Data Scientist - CMC Data Products

    Eli Lilly and Company 4.6 company rating

    Senior data scientist job in Indianapolis, IN

    At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world. Organizational & Position Overview: The Bioproduct Research and Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues. We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence. Responsibilities: Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows. Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD). Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access. AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A. Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products. Deliverables include: Scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance. 
Unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing. Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness. Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development. Basic Requirements: Master's degree in Computer Science, Data Science, Machine Learning, AI, or related technical field. 8+ years of product management experience focused on data products, data platforms, or scientific data systems, and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming). Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS: S3, RDS, Lambda/Glue; Azure). Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains. Proficiency with SQL, Python, and data visualization tools. Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors). Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control. Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data (see the illustrative sketch below). Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns. Additional Preferences: Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies. Experience implementing data mesh architectures in scientific organizations. Knowledge of MLOps practices and model deployment in validated environments. Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications. Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications. Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (******************************************************** for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response. Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status. Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women's Initiative for Leading at Lilly (WILL), en Able (for people with disabilities). Learn more about all of our groups. Actual compensation will depend on a candidate's education, experience, skills, and geographic location. 
The anticipated wage for this position is $126,000 - $244,200. Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees. #WeAreLilly
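As a hypothetical sketch of the time-series and batch-data modeling this listing emphasizes (the file and column names below are invented, not Lilly's), reshaping a wide process-historian export into an analysis-ready long table with pandas might look like this:

    import pandas as pd

    # Hypothetical wide export: one row per (batch_id, timestamp), one column per sensor.
    wide = pd.read_csv("process_historian_export.csv", parse_dates=["timestamp"])

    # Long, analysis-ready form keyed by batch, timestamp, and parameter name
    long_form = wide.melt(
        id_vars=["batch_id", "timestamp"],
        var_name="parameter",   # e.g., temperature, pH, feed_rate
        value_name="value",
    )

    # Simple per-batch summary features that a downstream CQA model could consume
    summary = (
        long_form.groupby(["batch_id", "parameter"])["value"]
                 .agg(["mean", "std", "min", "max"])
                 .reset_index()
    )
    print(summary.head())

The same pattern, applied consistently across instruments and batches, is what turns raw CMC data into the reusable, AI-ready data products the role describes.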
    $85k-109k yearly est. Auto-Apply 10d ago

Learn more about senior data scientist jobs

How much does a senior data scientist earn in Fort Wayne, IN?

The average senior data scientist in Fort Wayne, IN earns between $63,000 and $114,000 annually. This compares to the national average senior data scientist range of $90,000 to $170,000.

Average senior data scientist salary in Fort Wayne, IN

$84,000