
Data scientist jobs in Elkhart, IN

- 1,108 jobs
  • Data Scientist

    Insight Global

    Data scientist job in Indianapolis, IN

    We are seeking a Junior Data Scientist to join our large utility client in downtown Indianapolis. This position will be hired as a full-time employee. This entry-level role is perfect for individuals eager to tackle real-world energy challenges through data exploration, predictive modeling, and collaborative problem-solving. As part of our team, you'll work closely with seasoned data scientists, analysts, architects, engineers, and governance specialists to generate insights that power smarter decisions and help shape the future of energy.

    Key Responsibilities: Partner cross-functionally with data scientists, data architects and engineers, machine learning engineers, data analysts, and data governance experts to deliver integrated data solutions. Collaborate with business stakeholders and analysts to define clear project requirements. Collect, clean, and preprocess both structured and unstructured data from utility systems (e.g., meter data, customer data). Conduct exploratory data analysis to uncover trends, anomalies, and opportunities to enhance grid operations and customer service. Apply traditional machine learning techniques and generative AI tools to build predictive models that address utility-focused challenges, particularly in the customer domain (e.g., outage restoration, program adoption, revenue assurance); a minimal sketch of this kind of workflow follows the listing. Present insights to internal stakeholders in a clear, compelling format, including data visualizations that drive predictive decision-making. Document methodologies, workflows, and results to ensure transparency and reproducibility. Serve as a champion of data and AI across all levels of the client's US Utilities organization. Stay informed on emerging industry trends in utility analytics and machine learning.

    Requirements: Bachelor's degree in data science, statistics, computer science, engineering, or a related field; a master's degree or Ph.D. is preferred. 1-3 years of experience in a data science or analytics role. Strong applied analytics and statistics skills (distributions, statistical testing, regression, etc.). Proficiency in Python or R, with experience using libraries such as pandas, NumPy, and scikit-learn. Proficiency in traditional machine learning algorithms and techniques, including k-nearest neighbors (k-NN), naive Bayes, support vector machines (SVM), convolutional neural networks (CNN), random forests, gradient-boosted trees, etc. Familiarity with generative AI tools and techniques, including large language models (LLMs) and Retrieval-Augmented Generation (RAG), with an understanding of how these can be applied to enhance contextual relevance and integrate enterprise data into intelligent workflows. Proficiency in SQL, with experience writing complex queries and working with relational data structures; Google BigQuery experience is preferred, including the use of views, tables, materialized views, stored procedures, etc. Proficiency in Git for version control, including repository management, branching, merging, and collaborating on code and notebooks in data science projects; experience integrating Git with CI/CD pipelines to automate testing and deployment is preferred. Experience with cloud computing platforms (GCP preferred). Ability to manage multiple priorities in a fast-paced environment. Interest in learning more about the customer-facing side of the utility industry.

    Compensation: Up to $130,000 annual salary. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
    $130k yearly 2d ago
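
To make the "traditional machine learning" requirement concrete, here is a minimal sketch of the kind of workflow the listing describes, using pandas and scikit-learn on synthetic data. All feature names and values are invented for illustration, not drawn from the client's systems.

```python
# Minimal traditional-ML sketch: pandas + scikit-learn on synthetic
# utility-style data. Every column and value here is a made-up placeholder.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000
df = pd.DataFrame({
    "avg_daily_kwh": rng.gamma(2.0, 15.0, n),    # hypothetical meter feature
    "outage_count_12mo": rng.poisson(1.5, n),    # hypothetical reliability feature
    "tenure_years": rng.uniform(0, 30, n),       # hypothetical customer feature
})
# Synthetic target: whether a customer adopts an efficiency program.
df["adopted_program"] = ((df["avg_daily_kwh"] > 30) & (df["tenure_years"] > 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="adopted_program"), df["adopted_program"],
    test_size=0.2, random_state=42,
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In practice the features would come from SQL/BigQuery extracts rather than a random generator, but the fit/evaluate loop is the same.
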
  • Data Scientist with Hands On development experience with R, SQL & Python

    Central Point Partners 3.7 company rating

    Data scientist job in Columbus, OH

    *Per the client, no C2Cs!* Central Point Partners is currently interviewing candidates in the Columbus, OH area for a large client. Only GCs and USCs. This position is hybrid (4 days onsite)! Only candidates who are local to Columbus, OH will be considered.

    Summary: Our client is seeking a passionate, data-savvy Senior Data Scientist to join the Enterprise Analytics team to fuel our mission of growth through data-driven insights and opportunity discovery. This dynamic role uses a consultative approach with the business segments to dive into our customer, product, channel, and digital data to uncover opportunities for consumer-experience optimization and customer-value delivery. You will also enable stakeholders with actionable, intuitive performance insights that give the business direction for growth. The ideal candidate will have a robust mix of technical and communication skills, with a passion for optimization, data storytelling, and data visualization. You will collaborate with a centralized team of data scientists as well as teams across the organization, including Product, Marketing, Data, Finance, and senior leadership. This is an exciting opportunity to be a key influencer on the company's strategic decisions and to learn and grow with our Analytics team.

    Notes from the manager: The critical skills will be Python or R and a firm understanding of SQL, along with a foundational understanding of what data is needed to perform studies now and in the future. As a high-level summary of what this person will be asked to do alongside their peers: this person will balance analysis with development, knowing when to jump in and when to step back to lend their expertise.

    Feature & Functional Design: Data scientists are embedded in the teams designing the feature. Their main job here is to define the data tracking needed to evaluate the business case: things like event logging, Adobe tagging, third-party data ingestion, and any other tracking requirements. They also consult on if and when the business should bring data into the bank, and help connect the business with CDAO and IT warehousing and data engineering partners should new data need to be brought forward.

    Feature Engineering & Development: The same data scientists stay involved as the feature moves into execution. They support all necessary functions (Amigo, QA, etc.) to ensure data tracking is in place when the feature goes live. They also begin preparing to support launch evaluation and measurement against the experimentation design or business-case success criteria.

    Feature Rollout & Performance Evaluation: They own tracking the rollout, running A/B tests, and conducting impact analysis for all features they supported in the design and development stages (a minimal A/B-test sketch follows this listing). They provide an unbiased view of how the feature performs against the original business case, along with objective recommendations that give the business direction. They roll off once the feature has matured through business-case/experiment design and evaluation.

    In addition to supporting feature rollouts, data scientists on the team are also encouraged to pursue self-driven initiatives during periods when they are not actively supporting other projects. These initiatives may include designing experiments, conducting exploratory analyses, developing predictive models, or identifying new opportunities for impact. For more information about this opportunity, please contact Bill Hart at ************ and email your resume to **********************************!
    $58k-73k yearly est. 1d ago
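
The rollout-evaluation stage above centers on A/B testing. A minimal sketch of the comparison step, assuming a simple two-proportion z-test via statsmodels; the counts are hypothetical:

```python
# Hedged A/B-test sketch: compare conversion in treatment vs. control with
# a two-proportion z-test. All counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 380]    # successes in treatment, control (hypothetical)
exposures = [5_000, 5_000]  # users per arm (hypothetical)

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at the 5% level.")
else:
    print("No significant difference detected; consider a longer test.")
```

A real evaluation would also pre-register the metric, check sample size and power, and guard against peeking; this shows only the final comparison.
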
  • Data Architect

    Optech 4.6 company rating

    Data scientist job in Cincinnati, OH

    THIS IS A W2 (NOT C2C OR REFERRAL-BASED) CONTRACT OPPORTUNITY. MOSTLY REMOTE WITH 1 DAY/MO ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE. RATE: $75-85/HR WITH BENEFITS.

    We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.

    Responsibilities: Design and maintain scalable, secure, and high-performing data architectures. Lead migration and modernization projects in heavily used production systems. Develop and optimize data models, schemas, and integration strategies. Implement data governance, security, and compliance standards. Collaborate with business stakeholders to translate requirements into technical solutions. Ensure data quality, consistency, and accessibility across systems.

    Required Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field. Proven experience as a Data Architect or in a similar role. Strong proficiency in SQL (query optimization, stored procedures, indexing). Hands-on experience with Azure cloud services for data management and analytics. Knowledge of data modeling, ETL processes, and data warehousing concepts. Familiarity with security best practices and compliance frameworks.

    Preferred Skills: Understanding of Electronic Health Records systems. Understanding of Big Data technologies and modern data platforms outside the scope of this project.
    $75-85 hourly 3d ago
  • Data Analyst

    Acro Service Corp 4.8 company rating

    Data scientist job in Warren, MI

    The main function of a Data Analyst is to coordinate changes to computer databases, and to test and implement the database, applying knowledge of database management systems.

    Job Responsibilities:
    • Work with senior management, technical, and client teams to determine data requirements, business data implementation approaches, best practices for advanced data manipulation, and storage and analysis strategies
    • Write and code logical and physical database descriptions and specify identifiers of the database to the management system, or direct others in coding descriptions
    • Design, implement, automate, and maintain large-scale enterprise data ETL processes (a compact ETL sketch follows this listing)
    • Modify existing databases and database management systems and/or direct programmers and analysts to make changes
    • Test programs or databases, correct errors, and make necessary modifications

    Qualifications:
    • Bachelor's degree in a technical field such as computer science, computer engineering, or a related field required
    • 2-4 years of applicable experience required
    • Experience with database technologies
    • Knowledge of the ETL process
    • Knowledge of at least one scripting language
    • Strong written and oral communication skills
    • Strong troubleshooting and problem-solving skills
    • Demonstrated history of success
    • Desire to work with data and help businesses make better data-driven decisions
    $66k-94k yearly est. 1d ago
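
As a concrete picture of the ETL work this role describes, here is a compact extract-transform-load sketch using the standard library's sqlite3 and pandas. Table and column names are hypothetical.

```python
# Tiny ETL sketch: extract from a source table, enforce types, drop invalid
# rows, and load a curated table. Names and data are placeholders.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "19.99", "MI"), (2, "5.00", "MI"), (3, None, "OH")],
)

df = pd.read_sql("SELECT * FROM raw_orders", conn)           # extract
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # transform: type cast
clean = df.dropna(subset=["amount"])                         # transform: validate
clean.to_sql("orders_clean", conn, index=False, if_exists="replace")  # load
print(pd.read_sql(
    "SELECT region, SUM(amount) AS total FROM orders_clean GROUP BY region", conn
))
```
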
  • Senior Data Engineer (only W2)

    CBTS 4.9 company rating

    Data scientist job in Columbus, OH

    Bachelor's Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, or Java. Proficiency with Azure data services, such as Azure Data Lake, Azure Data Factory, and Databricks. Expertise using cloud security (e.g., Active Directory, network security groups, and encryption services). Proficient in Python for developing and maintaining data solutions. Experience with optimizing or managing technology costs. Ability to build and maintain a data architecture supporting both real-time and batch processing. Ability to implement industry-standard programming techniques by mastering advanced fundamental concepts, practices, and procedures, and having the ability to analyze and solve problems in existing systems. Expertise with unit testing, integration testing, and performance/stress testing. Database management skills and an understanding of legacy and contemporary data modeling and system architecture. Demonstrated leadership skills, team spirit, and the ability to work cooperatively and creatively across an organization. Experience on teams leveraging Lean or Agile frameworks.
    $68k-95k yearly est. 4d ago
  • Data Engineer

    Agility Partners 4.6 company rating

    Data scientist job in Columbus, OH

    We're seeking a skilled Data Engineer based in Columbus, OH, to support a high-impact data initiative. The ideal candidate will have hands-on experience with Python, Databricks, SQL, and version control systems, and be comfortable building and maintaining robust, scalable data solutions.

    Key Responsibilities: Design, implement, and optimize data pipelines and workflows within Databricks (a minimal pipeline sketch follows this listing). Develop and maintain data models and SQL queries for efficient ETL processes. Partner with cross-functional teams to define data requirements and deliver business-ready solutions. Use version control systems to manage code and ensure collaborative development practices. Validate and maintain data quality, accuracy, and integrity through testing and monitoring.

    Required Skills: Proficiency in Python for data engineering and automation. Strong, practical experience with Databricks and distributed data processing. Advanced SQL skills for data manipulation and analysis. Experience with Git or similar version control tools. Strong analytical mindset and attention to detail.

    Preferred Qualifications: Experience with cloud platforms (AWS, Azure, or GCP). Familiarity with enterprise data lake architectures and best practices. Excellent communication skills and the ability to work independently or in team environments.
    $95k-127k yearly est. 1d ago
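
A minimal sketch of the Databricks-style pipeline work described above, written in plain PySpark so it also runs locally. Paths and the schema are placeholders; on Databricks the `spark` session already exists and the write would typically target a Delta table.

```python
# PySpark pipeline sketch: ingest, apply a data-quality gate, transform via
# SQL, and write the result. All names and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = spark.createDataFrame(
    [(1, "2024-01-05", 19.99), (2, "2024-01-06", None)],
    ["order_id", "order_date", "amount"],
)
valid = raw.filter(F.col("amount").isNotNull())  # quarantine bad rows upstream
valid.createOrReplaceTempView("orders")

daily = spark.sql(
    "SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date"
)
daily.write.mode("overwrite").parquet("/tmp/daily_revenue")  # placeholder path
```
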
  • Data Architect

    ATC 4.4 company rating

    Data scientist job in Cincinnati, OH

    We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.

    Responsibilities: Design and maintain scalable, secure, and high-performing data architectures. Lead migration and modernization projects in heavily used production systems. Develop and optimize data models, schemas, and integration strategies. Implement data governance, security, and compliance standards. Collaborate with business stakeholders to translate requirements into technical solutions. Ensure data quality, consistency, and accessibility across systems.

    Required Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field. Proven experience as a Data Architect or in a similar role. Strong proficiency in SQL (query optimization, stored procedures, indexing). Hands-on experience with Azure cloud services for data management and analytics. Knowledge of data modeling, ETL processes, and data warehousing concepts. Familiarity with security best practices and compliance frameworks. Understanding of Electronic Health Records systems.

    Preferred Skills: Understanding of Big Data technologies and modern data platforms outside the scope of this project.
    $83k-117k yearly est. 2d ago
  • Systems Data Analyst

    Beacon Hill 3.9 company rating

    Data scientist job in Indianapolis, IN

    Beacon Hill Technologies is seeking a proactive, data-driven analyst with strong initiative: someone who can expand on existing frameworks, validate data, and independently build tools that elevate team performance. You communicate clearly, think critically, and enjoy transforming complex technical information into meaningful business insights. You thrive in fast-paced environments and are comfortable working hands-on with evolving data systems. This position is hybrid!

    Required Skills: Bachelor's degree in Information Systems, Business Analytics, IT, or a related field (or equivalent experience). 3-5 years of experience in data analysis, IT operations, or an A/V-adjacent environment. Proficiency with Tableau (strongly preferred; the team's primary tool), ServiceNow reporting, Excel (advanced formulas, macros), and Python (especially for Tableau-based scripting). Experience working with large datasets and multiple data sources. Ability to validate, test, and ensure data accuracy and integrity. Strong communication skills; able to translate technical data into clear business insights. Demonstrated ability to independently build new reports, dashboards, or tools when standard solutions are not available.

    Desired Skills: Experience with Cisco Spaces, digital room-utilization analytics, or space-management tools. Familiarity with A/V environments, technologies, or governance frameworks (a big plus, but not required). Experience developing or managing lifecycle models, performance metrics, or executive-level reporting dashboards. Knowledge of AI-assisted reporting or automation tools. Experience with procurement forecasting, budgeting data, or operational strategy analytics.

    Beacon Hill is an equal opportunity employer, and individuals with disabilities and/or protected veterans are encouraged to apply. California residents: qualified applications with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. If you would like to complete our voluntary self-identification form, please click here or copy and paste the following link into an open window in your browser: ***************************************** Completion of this form is voluntary and will not affect your opportunity for employment or the terms or conditions of your employment. This form will be used for reporting purposes only and will be kept separate from all other records.

    Company Profile: Beacon Hill Technologies, a premier national information technology staffing group, provides world-class technology talent across all industries utilizing a complete suite of staffing services. Beacon Hill Technologies' dedicated team of recruiting and staffing experts consistently delivers quality IT professionals to solve our customers' technical and business needs. Beacon Hill Technologies covers a broad spectrum of IT positions, including Project Management and Business Analysis, Programming/Development, Database, Infrastructure, Quality Assurance, Production/Support, and ERP roles. Learn more about Beacon Hill and our specialty divisions, Beacon Hill Associates, Beacon Hill Financial, Beacon Hill HR, Beacon Hill Legal, Beacon Hill Life Sciences, and Beacon Hill Technologies, by visiting *************

    Benefits Information: Beacon Hill offers a robust benefits package including, but not limited to, medical, dental, vision, and federal and state leave programs as required by applicable agency regulations to those that meet eligibility. Upon hire, details will be provided related to our benefit offerings. We look forward to working with you. Beacon Hill. Employing the Future™
    $49k-76k yearly est. 3d ago
  • Data Architect

    Millennium Software and Staffing Inc. 4.2 company rating

    Data scientist job in Detroit, MI

    Millennium Software is looking for a Data Architect for one of its direct clients based in Michigan. This is an onsite role. Title: Data Architect. Tax term: W2 only, no C2C. Description (all of the below are must-haves): Senior Data Architect with 12+ years of experience in data modeling. Develop conceptual, logical, and physical data models. Experience with GCP Cloud.
    $91k-117k yearly est. 5d ago
  • Data Analyst

    Applus+ Laboratories 4.2 company rating

    Data scientist job in Troy, MI

    Data Analyst & Backend Developer with AI. The Digital Business Team develops promising digital solutions for global products and processes. It aims to organize Applus+ Laboratories' information, making it useful, fast, and reliable for both the Applus+ Group and its clients. The team's mission is to be recognized as the most digital, innovative, and customer-oriented company, reducing digital operations costs while increasing the value and portfolio of services. We are looking for a Data Science / AI Engineer to join our Digital team and contribute to the development of evolving data products and applications.

    Responsibilities: Collect, organize, and analyze structured and unstructured data to extract actionable insights, generate code, train models, validate results, and draw meaningful conclusions. Demonstrate advanced proficiency in Power BI, including data modeling, DAX, the creation of interactive dashboards, and connecting to diverse data sources. Possess a strong mathematical and statistical foundation, with experience in numerical methods and a wide range of machine learning algorithms. Exhibit hands-on experience in Natural Language Processing (NLP) and foundation models, with a thorough understanding of transformers, tokenization, and encoding-decoding processes. Apply Explainable AI (XAI) techniques using Python libraries such as SHAP, LIME, or similar tools. Develop and integrate AI models into backend systems utilizing frameworks such as FastAPI, Flask, or Django (a minimal serving sketch follows this listing). Demonstrate logical and organized thinking with attention to detail, capable of identifying data or code anomalies and effectively communicating findings through clear documentation and well-commented code. Maintain a proactive mindset for optimizing analytical workflows and continuously improving models and tools. Exhibit creativity and innovation in problem-solving, with a practical and results-oriented approach.

    Technical Requirements: Demonstrated experience coding in Python, specifically on data science projects using the most common ML libraries. Previous experience with generative AI and explainable AI is welcome. Expertise in Power BI: data modeling, DAX, dashboards, integration with multiple sources. Proficient in SQL for querying, transforming, and optimizing databases. Experienced in Python for data analysis, automation, and machine learning. Knowledgeable in data analytics, KPIs, and business intelligence practices. Skilled in translating business requirements into insights and visualizations. Our current tech stack is: Power BI (DAX), SQL, Python, commonly used ML/AI libraries, and Azure AI (OpenAI).

    Education: Degree in Computer Science, Software Engineering, Applied Mathematics, or a related field. A master's degree in Data Science or AI Engineering is an advantage. Languages: English. If you are passionate about analytics, advanced AI algorithms, and challenging yourself, this is the right job for you!
    $59k-90k yearly est. 5d ago
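
For the "integrate AI models into backend systems" line, a minimal FastAPI sketch; the endpoint, field names, and the stand-in "model" are all invented for illustration:

```python
# Hedged model-serving sketch with FastAPI. A real service would load an
# actual NLP model; here a trivial rule stands in for inference.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-model-api")

class ScoreRequest(BaseModel):
    text: str

class ScoreResponse(BaseModel):
    label: str
    confidence: float

@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    positive = "good" in req.text.lower()  # placeholder for model inference
    return ScoreResponse(
        label="positive" if positive else "negative",
        confidence=0.9 if positive else 0.6,
    )

# Run locally (assuming this file is main.py): uvicorn main:app --reload
```
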
  • Data Engineer

    Iqventures

    Data scientist job in Dublin, OH

    The Data Engineer is a technical leader and hands-on developer responsible for designing, building, and optimizing data pipelines and infrastructure to support analytics and reporting. This role will serve as the lead developer on strategic data initiatives, ensuring scalable, high-performance solutions are delivered effectively and efficiently. The ideal candidate is self-directed, thrives in a fast-paced project environment, and is comfortable making technical decisions and architectural recommendations. The ideal candidate has prior experience with modern data platforms, most notably Databricks and the “lakehouse” architecture. They will work closely with cross-functional teams, including business stakeholders, data analysts, and engineering teams, to develop data solutions that align with enterprise strategies and business goals. Experience in the financial industry is a plus, particularly in designing secure and compliant data solutions.

    Responsibilities: Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data. Optimize data storage, retrieval, and processing for performance, security, and cost-efficiency. Ensure data integrity and governance by implementing robust validation, monitoring, and compliance processes. Consume and analyze data from the data pipeline to infer, predict, and recommend actionable insights, which will inform operational and strategic decision-making to produce better results. Empower departments and internal consumers with metrics and business intelligence to operate and direct our business, better serving our end customers. Determine technical and behavioral requirements, identify strategies as solutions, and select solutions based on resource constraints. Work with the business, process owners, and IT team members to design solutions for data and advanced analytics. Perform data modeling and prepare data in databases for analysis and reporting through various analytics tools. Play a technical-specialist role in championing data as a corporate asset. Provide technical expertise in collaborating with project and other IT teams, internal and external to the company. Contribute to and maintain system data standards. Research and recommend innovative and, where possible, automated approaches to system data administration tasks. Identify approaches that leverage our resources and provide economies of scale. Engineer systems that balance and meet performance, scalability, recoverability (including backup design), maintainability, security, and high-availability requirements and objectives.

    Skills: Databricks and related tools - SQL, Python, PySpark, Delta Live Tables, data pipelines, AWS S3 object storage, Parquet/columnar file formats (a short Parquet sketch follows this listing), AWS Glue. Systems Analysis - the application of systems analysis techniques and procedures, including consulting with users, to determine hardware, software, platform, or system functional specifications. Time Management - managing one's own time and the time of others. Active Listening - giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times. Critical Thinking - using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions, or approaches to problems. Active Learning - understanding the implications of new information for both current and future problem-solving and decision-making. Writing - communicating effectively in writing as appropriate for the needs of the audience. Speaking - talking to others to convey information effectively. Instructing - teaching others how to do something. Service Orientation - actively looking for ways to help people. Complex Problem Solving - identifying complex problems and reviewing related information to develop and evaluate options and implement solutions. Troubleshooting - determining causes of operating errors and deciding what to do about them. Judgment and Decision Making - considering the relative costs and benefits of potential actions to choose the most appropriate one.

    Experience and Education: High school diploma (or GED or high school equivalence certificate). Associate degree or equivalent training and certification. 5+ years of experience in data engineering, including SQL, data warehousing, and cloud-based data platforms. Databricks experience. 2+ years of project lead or supervisory experience preferred. Must be legally authorized to work in the United States; we are unable to sponsor or take over sponsorship at this time.
    $76k-103k yearly est. 4d ago
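
On the Parquet/columnar point in the skills list, a small sketch of why columnar files matter: you can read back only the columns you need. The S3 path is illustrative; with s3fs installed, pandas can target s3:// URIs directly.

```python
# Parquet round-trip sketch with pandas + pyarrow. File name and columns
# are placeholders; swap the local path for an s3:// URI in practice.
import pandas as pd

df = pd.DataFrame({"loan_id": [1, 2], "balance": [1200.0, 350.5]})
df.to_parquet("loans.parquet", engine="pyarrow")
roundtrip = pd.read_parquet("loans.parquet", columns=["balance"])  # column pruning
print(roundtrip)
```
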
  • Senior Data Engineer

    Brooksource 4.1 company rating

    Data scientist job in Indianapolis, IN

    Senior Data Engineer - Azure Data Warehouse (5-7+ years of experience). Long-term renewing contract supporting Azure-based data warehouse and dashboarding initiatives. Work alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.

    Key Responsibilities:
    · Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server
    · Apply Medallion architecture principles and best practices for data lake and warehouse design (a bronze/silver/gold sketch follows this listing)
    · Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
    · Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
    · Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
    · Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
    · Support dashboarding initiatives with Power BI and wireframe collaboration
    · Ensure auditability, lineage, and performance across SQL Server and Oracle environments

    Required Skills & Experience:
    · 5-7+ years in data engineering, data warehouse design, and ETL development
    · Strong expertise in Azure Data Factory, Databricks, and Python
    · Deep understanding of SQL Server, Oracle, PostgreSQL, and Cosmos DB, and data modeling standards
    · Proven experience with Medallion architecture and data lakehouse best practices
    · Hands-on with CI/CD, DevOps, and deployment automation
    · Agile mindset with the ability to manage multiple priorities and deliver on time
    · Excellent communication and documentation skills

    Bonus Skills:
    · Experience with GCP or AWS
    · Familiarity with Jira, Confluence, and AppDynamics
    $77k-104k yearly est. 5d ago
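
A bronze/silver/gold sketch of the Medallion pattern named above, in plain PySpark. Paths are placeholders; on Databricks these layers would usually be Delta tables rather than Parquet directories.

```python
# Medallion sketch: raw (bronze) -> typed/validated (silver) -> aggregated
# (gold). All paths, names, and data are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

bronze = spark.createDataFrame(
    [("2024-01-05", "a", "10"), ("2024-01-05", "a", "x"), ("2024-01-06", "b", "7")],
    ["event_date", "site", "reading"],
)
bronze.write.mode("overwrite").parquet("/tmp/bronze/readings")

silver = (
    spark.read.parquet("/tmp/bronze/readings")
    .withColumn("reading", F.col("reading").cast("double"))  # "x" becomes null
    .filter(F.col("reading").isNotNull())
)
silver.write.mode("overwrite").parquet("/tmp/silver/readings")

gold = silver.groupBy("event_date").agg(F.sum("reading").alias("total_reading"))
gold.write.mode("overwrite").parquet("/tmp/gold/daily_totals")
```
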
  • Cloud Data Engineer

    GHR Healthcare 3.7 company rating

    Data scientist job in Columbus, OH

    This is a 6-month contract and could be two different positions, or one if someone has experience in both. This is a hybrid onsite role, so candidates need to be local. Cloud Data Engineer at S4 level:
    · Candidates should have a minimum of 5 years of hands-on cloud data engineering experience (especially on Azure, Databricks, and MS Fabric) and a minimum of 10 to 15 years overall.
    · Hands-on experience with ELT and ETL pipeline development, data modeling, AI/ML pipeline development, and Unity Catalog and Purview engineering.
    · Azure cloud certifications would be preferred.
    $93k-135k yearly est. 2d ago
  • Data Engineer

    Dataexl Information LLC

    Data scientist job in Cincinnati, OH

    Title: Azure Data Engineer (W2 only). Duration: 1-year contract (potential for conversion/extension). We need a strong Azure Data Engineer with expertise in Databricks (including Unity Catalog experience), strong PySpark skills, and an understanding of CI/CD and Infrastructure as Code (IaC) using Terraform.

    Requirements:
    • 7+ years of experience as a Data Engineer
    • Hands-on experience with Azure Databricks, Spark, and Python
    • Experience with Delta Live Tables (DLT) and Databricks SQL
    • Strong SQL and database background
    • Experience with Azure Functions, messaging services, or orchestration tools
    • Familiarity with data governance, lineage, or cataloging tools (e.g., Purview, Unity Catalog)
    • Experience monitoring and optimizing Databricks clusters or workflows
    • Experience working with Azure cloud data services and understanding how they integrate with Databricks and enterprise data platforms
    • Experience with Terraform for cloud infrastructure provisioning
    • Experience with GitHub and GitHub Actions for version control and CI/CD automation
    • Strong understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior)
    • Familiarity with the SDLC and modern engineering practices
    • Ability to balance multiple priorities, work independently, and stay organized
    $75k-101k yearly est. 1d ago
  • Data Architect

    Costrategix 3.7 company rating

    Data scientist job in Blue Ash, OH

    Since 2006, CoStrategix has defined and implemented digital transformation initiatives, data and analytics capabilities, and digital commerce solutions for Fortune 500 and mid-market customers. CoStrategix provides thought leadership, strategy, and comprehensive end-to-end technology execution to help organizations transform and stay competitive in today's digital world. As a Gixer (employee) at CoStrategix, you will have broad exposure to diverse industries and technologies. You will work on leading-edge digital projects in the areas of Data Engineering, Data Governance, Data Strategy, AI, and Cloud. Gixers operate at the leading edge of technologies, and our projects require compelling human interfaces and modern data platforms. This role is based at our culture hub in Blue Ash, Ohio.

    About this role: As a Data Architect at CoStrategix, you will define, orchestrate, and implement modern data platforms and architectures. This role is about understanding the current state of data ecosystems, mapping existing data flows and structures, creating an architectural blueprint, and then implementing data strategies and governance frameworks in rapid cycles. In this role, you will provide the following:

    Strategic & Consultative Responsibilities: Act as a trusted data advisor to client stakeholders, clearly communicating trade-offs, guiding decision-making, and influencing the adoption of modern data practices. Lead stakeholder interviews and working sessions to elicit requirements, clarify use cases, and align on priorities, scope, and success metrics. Create phased data roadmaps with clear milestones, dependencies, and value outcomes (e.g., time-to-insight, cost reduction, risk reduction) and track progress against them. Provide architectural input into the scoping and pricing of data engagements; ensure solutions balance value, risk, and cost, and support delivery teams in staying aligned to scope and architectural guardrails. Work closely with sales and account teams to understand customer objectives and translate them into practical, scalable data architecture and solution designs. Participate in pre-sales engagements, discovery workshops, proposal development, client presentations, and proof-of-concept activities to showcase solution feasibility and value.

    Data Governance, Quality & Operating Model: Bring consultative competencies around data governance and data quality, helping clients define guiding principles, policies, and operating models. Lead development of comprehensive data strategies for clients that align with their business priorities. Design processes for metadata management, lineage tracking, and master data management (MDM), including stewardship roles and workflows. Establish and maintain data quality standards, metrics, and monitoring processes to ensure accurate, complete, and timely data across critical domains. Develop semantic models and curated datasets, and guide adoption of data cataloging and data literacy programs.

    Enterprise & Solution Architecture: Design and maintain conceptual, logical, and physical data architectures to support enterprise, analytical, and operational systems. Assess and recommend data platforms, cloud services, and emerging technologies to meet business needs, while collaborating with Cloud, DevOps, and Security Architects to ensure architectural alignment. Partner with Data Analysts, BI Developers, and Data Scientists to ensure data architectures enable analytics, visualization, and AI/ML initiatives. Define non-functional requirements (performance, scalability, resilience, cost, security, and compliance) for data solutions, and ensure they are addressed in the architecture and design. Maintain architecture decision records, reference architectures, and reusable patterns; define and promote standards and best practices for data modeling, integration, and consumption across teams.

    Implementation, Delivery & Enablement: Lead the implementation of scalable, secure, and high-performing data and transformation frameworks that unify data across platforms and enable real-time, batch, and event-driven use cases. Define and enforce data design standards, patterns, and best practices during implementation to ensure consistency, maintainability, and performance. Mentor and coach engineering and analytics teams in data design principles, governance frameworks, and architectural discipline to ensure consistency and quality in delivery.

    Qualifications: Bachelor's degree in Math, Statistics, Computer Science, Information Technology, or a related field. 8+ years of experience in data management and architecture roles. 3 to 5 years leading data strategy, governance, or modernization efforts. 3 to 5 years of pre-sales, client solutioning, and/or consulting engagement in data management. Experience designing and implementing modern data architectures. Current understanding of best practices regarding data security, governance, and regulatory compliance. Experience in data modeling, data engineering, and analytics platform architecture. Experience with data engineering tooling such as Databricks, Snowflake, Synapse, BigQuery, Kafka, and dbt. Experience with software development, DevOps best practices, and automation methodologies. Excellent leadership and negotiation skills, necessary to work effectively with colleagues at various levels of the organization and across multiple locations. Ability to communicate complex issues crisply and concisely to various levels of management. Coaching and mentoring skills, with the ability to adapt to all levels of the organization. Strong collaboration skills and excellent verbal and written communication skills.

    About CoStrategix: We make CoStrategix an awesome place to work, offering a total rewards package that includes comprehensive benefits starting on day one. Benefits include medical, dental, vision, disability, and life insurance, as well as an EAP and 401(k) retirement plan. We are a flexible hybrid workplace committed to a culture of curiosity, collaboration, learning, self-improvement, and, above all, fun. We have been named a finalist for the Cincinnati Business Courier's Best Places to Work Awards for four consecutive years. Do the Right Thing. Always. At CoStrategix, we are passionate about our core values, and diversity, equity & inclusion (DE&I) are part of them. Every Gixer (employee) has an opportunity for success regardless of their race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Creating an environment where everyone, from any background, can do their best work is the right thing to do.
    $75k-104k yearly est. 3d ago
  • GCP Data Architect

    Systems Technology Group, Inc. (STG) 4.0 company rating

    Data scientist job in Dearborn, MI

    Title: GCP Data Architect. Description: STG is a fast-growing digital transformation services company providing Fortune 500 companies with digital transformation, mobility, analytics, and cloud integration services in both information technology and engineering product lines. STG has a 98% repeat business rate from existing clients and has achieved industry awards and recognition for our services.

    Responsibilities: Data Modeling and Design: Develop conceptual, logical, and physical data models for business intelligence, analytics, and reporting solutions. Transform requirements into scalable, flexible, and efficient data structures that can support advanced analytics. Requirement Analysis: Collaborate with business analysts, stakeholders, and subject matter experts to gather and interpret requirements for new data initiatives. Translate business questions into data models that can answer those questions. Data Integration: Work closely with data engineers to integrate data from multiple sources, ensuring consistency, accuracy, and reliability. Map data flows and document relationships between datasets. Database Architecture: Design and optimize database schemas using the medallion architecture, which includes relational, star-schema, and denormalized datasets for BI and ML data consumers. Metadata Management: Team with the data governance group so that detailed documentation on data definitions, data lineage, and data quality statistics is available to data consumers. Data Quality Assurance: Establish master data management data modeling so that the history of how customer, provider, and other party data is consolidated into a single version of the truth is preserved. Collaboration and Communication: Serve as a bridge between technical teams and business units, clearly communicating the value and limitations of various data sources and structures. Governance and Compliance: Ensure that data models and processes adhere to regulatory standards and organizational policies regarding privacy, access, and security.

    Experience Required: Specialist experience: 10+ years in IT; 7+ years as a Data Architect. PowerBuilder, PostgreSQL, and GCP BigQuery (a small BigQuery query sketch follows this listing).

    The GCP Data Architect position is based at our corporate office located in Dearborn, Michigan, offering a great opportunity to experience the corporate environment while growing your career. Resume Submittal Instructions: Interested/qualified candidates should email their Word-formatted resumes to Ms. Shweta Huria at ********************** and/or call ************. In the subject line of the email, please include: First and Last Name (GCP Data Architect). For more information about STG, please visit us at **************
    $86k-125k yearly est. 5d ago
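
For the GCP BigQuery requirement, a small query sketch against a hypothetical star schema using the official google-cloud-bigquery client. Project, dataset, and table names are invented, and credentials are assumed to be configured in the environment.

```python
# Hedged BigQuery sketch: join a fact table to a dimension and aggregate.
# `demo_project.sales.*` is a made-up dataset; replace with real tables.
from google.cloud import bigquery

client = bigquery.Client()  # project/credentials come from the environment
sql = """
    SELECT d.region, SUM(f.amount) AS revenue
    FROM `demo_project.sales.fact_orders` AS f
    JOIN `demo_project.sales.dim_customer` AS d USING (customer_id)
    GROUP BY d.region
"""
for row in client.query(sql).result():
    print(row.region, row.revenue)
```
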
  • Senior Data Engineer

    Vista Applied Solutions Group Inc. 4.0 company rating

    Data scientist job in Cincinnati, OH

    Data Engineer III. About the Role: We're looking for a Data Engineer III to play a key role in a large-scale data migration initiative within the client's commercial lending, underwriting, and reporting areas. This is a hands-on engineering role that blends technical depth with business analysis, focused on transforming legacy data systems into modern, scalable pipelines.

    What You'll Do: Analyze legacy SQL, DataStage, and SAS code to extract business logic and identify key data dependencies. Document current data usage and evaluate the downstream impact of migrations. Design, build, and maintain data pipelines and management systems to support modernization goals. Collaborate with business and technology teams to translate requirements into technical solutions. Improve data quality, reliability, and performance across multiple environments. Develop backend solutions using Python, Java, or J2EE, and integrate with tools like DataStage and dbt.

    What You Bring: 5+ years of experience with relational and non-relational databases (SQL, Snowflake, DB2, MongoDB). Strong background in legacy system analysis (SQL, DataStage, SAS). Experience with Python or Java for backend development. Proven ability to build and maintain ETL pipelines and automate data processes. Exposure to AWS, Azure, or GCP. Excellent communication and stakeholder engagement skills. Financial domain experience, especially commercial lending or regulatory reporting, is a big plus. Familiarity with Agile methodologies preferred.
    $74k-97k yearly est. 3d ago
  • GCP Data Engineer

    Miracle Software Systems, Inc. 4.2 company rating

    Data scientist job in Dearborn, MI

    Experience Required: 8+ years. Work Status: Hybrid. We're seeking an experienced GCP Data Engineer who can build a cloud analytics platform to meet ever-expanding business requirements with speed and quality using lean Agile practices. You will work on analyzing and manipulating large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in the Google Cloud Platform (GCP). You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications to GCP. Experience with large-scale solutions and the operationalization of data warehouses, data lakes, and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design the right solutions with an appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

    You will: Work in a collaborative environment, including pairing and mobbing with other cross-functional engineers. Work on a small agile team to deliver working, tested software. Work effectively with fellow data engineers, product owners, data champions, and other technical experts. Demonstrate technical knowledge and leadership skills and advocate for technical excellence. Develop exceptional analytics data products using streaming and batch ingestion patterns in the Google Cloud Platform with solid data warehouse principles. Be the subject matter expert in data engineering and GCP tool technologies.

    Skills Required: BigQuery. Skills Preferred: N/A.

    Experience Required: In-depth understanding of Google's product technology (or another cloud platform) and underlying architectures. 5+ years of analytics application development experience required. 5+ years of SQL development experience. 3+ years of cloud experience (GCP preferred) with solutions designed and implemented at production scale. Experience working in GCP-based big data deployments (batch/real-time) leveraging Terraform, BigQuery, Google Cloud Storage, Pub/Sub, Dataflow, Dataproc, Airflow, etc. (a minimal Apache Beam pipeline sketch follows this listing). 2+ years of professional development experience in Java or Python, and Apache Beam. Extracting, loading, transforming, cleaning, and validating data. Designing pipelines and architectures for data processing. 1+ year of designing and building CI/CD pipelines.

    Experience Preferred: Experience building machine learning solutions using TensorFlow, BigQueryML, AutoML, or Vertex AI. Experience in building solution architecture, provisioning infrastructure, and securing reliable data-centric services and applications in GCP. Experience with Dataplex is preferred. Experience with development ecosystems such as Git, Jenkins, and CI/CD. Exceptional problem-solving and communication skills. Experience working with dbt/Dataform. Experience working with Agile and Lean methodologies. Team player and attention to detail. Performance tuning experience.

    Education Required: Bachelor's degree. Education Preferred: Master's degree.

    Additional Information: ***POSITION IS HYBRID*** Primary Skills Required: Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Implement methods for automating all parts of the pipeline to minimize labor in development and production. Experience analyzing complex data, organizing raw data, and integrating massive datasets from multiple data sources to build subject areas and reusable data products. Experience working with architects to evaluate and productionize appropriate GCP tools for data ingestion, integration, presentation, and reporting. Experience working with all stakeholders to formulate business problems as technical data requirements, and to identify and implement technical solutions while ensuring key business drivers are captured in collaboration with product management; this includes designing and deploying a pipeline with automated data lineage. Identify, develop, evaluate, and summarize proofs of concept to prove out solutions. Test and compare competing solutions and report a point of view on the best solution. Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine.

    Additional Skills Preferred: Strong drive for results and ability to multi-task and work independently. Self-starter with proven innovation skills. Ability to communicate and work with cross-functional teams and all levels of management. Demonstrated commitment to quality and project timing. Demonstrated ability to document complex systems. Experience creating and executing detailed test plans. Additional Education Preferred: GCP Professional Data Engineer certification. In-depth software engineering knowledge.
    $71k-94k yearly est. 1d ago
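
Since the listing leans on Dataflow and Apache Beam, here is a minimal Beam pipeline sketch. It runs locally on the DirectRunner; pointed at Dataflow, the same code would scale out. The in-memory source stands in for a real input.

```python
# Minimal Apache Beam sketch: read, validate, aggregate, emit. beam.Create
# stands in for a real source such as Pub/Sub or GCS files.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.Create(["5", "12", "oops", "7"])
        | "ToInt" >> beam.Map(lambda s: int(s) if s.isdigit() else None)
        | "DropBad" >> beam.Filter(lambda x: x is not None)  # simple validation
        | "Sum" >> beam.CombineGlobally(sum)
        | "Print" >> beam.Map(print)
    )
```
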
  • Advisory, Data Scientist - CMC Data Products

    Eli Lilly and Company 4.6 company rating

    Data scientist job in Indianapolis, IN

    At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.

    Organizational & Position Overview: The Bioproduct Research and Development (BR&D) organization strives to deliver creative medicines to patients by developing and commercializing insulins, monoclonal antibodies, novel therapeutic proteins, peptides, oligonucleotide therapies, and gene therapy systems. This multidisciplinary group works collaboratively with our discovery and manufacturing colleagues. We are seeking an exceptional Data Scientist with deep data expertise in the pharmaceutical domain to lead the development and delivery of enterprise-scale data products that power AI-driven insights, process optimization, and regulatory compliance. In this role, you'll bridge pharmaceutical sciences with modern data engineering to transform complex CMC, PAT, and analytical data into strategic assets that accelerate drug development and manufacturing excellence.

    Responsibilities: Data Product Development: Define the roadmap and deliver analysis-ready and AI-ready data products that enable AI/ML applications, PAT systems, near-time analytical testing, and process intelligence across CMC workflows. Data Archetypes & Modern Data Management: Define pharmaceutical-specific data archetypes (process, analytical, quality, CMC submission) and create reusable data models aligned with industry standards (ISA-88, ISA-95, CDISC, eCTD). Modern Data Management for Regulated Environments: Implement data frameworks that ensure 21 CFR Part 11, ALCOA+, and data integrity compliance, while enabling scientific innovation and self-service access. AI/ML-ready Data Products: Build training datasets for lab automation, process optimization, and predictive CQA models, and support generative AI applications for knowledge management and regulatory Q&A. Cross-Functional Leadership: Collaborate with analytical R&D, process development, manufacturing science, quality, and regulatory affairs to standardize data products.

    Deliverables include: A scalable data integration platform that automates compilation of technical-review-ready and submission-ready data packages with demonstrable quality assurance. A unified CMC data repository supporting current process and analytical method development while enabling future AI/ML applications across R&D and manufacturing. Data flow frameworks that enable self-service access while maintaining GxP compliance and audit readiness. Comprehensive documentation, standards, and training programs that democratize data access and accelerate product development.

    Basic Requirements: Master's degree in Computer Science, Data Science, Machine Learning, AI, or a related technical field. 8+ years of product management experience focused on data products, data platforms, or scientific data systems, and a strong grasp of modern data architecture patterns (data warehouses, data lakes, real-time streaming). Knowledge of modern data stack technologies (Microsoft Fabric, Databricks, Airflow) and cloud platforms (AWS S3, RDS, Lambda/Glue, Azure). Demonstrated experience designing data products that support AI/ML workflows and advanced analytics in scientific domains. Proficiency with SQL, Python, and data visualization tools. Experience with analytical instrumentation and data systems (HPLC/UPLC, spectroscopy, particle characterization, process sensors). Knowledge of pharmaceutical manufacturing processes, including batch and continuous manufacturing, unit operations, and process control. Expertise in data modeling for time-series, spectroscopic, chromatographic, and hierarchical batch/lot data (a small time-series sketch follows this listing). Experience with laboratory data management systems (LIMS, ELN, SDMS, CDS) and their integration patterns.

    Additional Preferences: Understanding of Design of Experiments (DoE), Quality by Design (QbD), and process validation strategies. Experience implementing data mesh architectures in scientific organizations. Knowledge of MLOps practices and model deployment in validated environments. Familiarity with regulatory submissions (eCTD, CTD) and how analytical data supports marketing applications. Experience with CI/CD pipelines (GitHub Actions, CloudFormation) for scientific applications.

    Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (********************************************************) for further assistance. Please note this is for individuals to request an accommodation as part of the application process, and any other correspondence will not receive a response. Lilly is proud to be an EEO employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status. Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network; Black Employees at Lilly; Chinese Culture Network; Japanese International Leadership Network (JILN); Lilly India Network; Organization of Latinx at Lilly (OLA); PRIDE (LGBTQ+ Allies); Veterans Leadership Network (VLN); Women's Initiative for Leading at Lilly (WILL); and enAble (for people with disabilities). Learn more about all of our groups.

    Actual compensation will depend on a candidate's education, experience, skills, and geographic location. The anticipated wage for this position is $126,000 - $244,200. Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision, and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion, and Lilly's compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees. #WeAreLilly
    $85k-109k yearly est. 6d ago
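
On the time-series modeling requirement, a small pandas sketch of shaping raw process-sensor readings into a uniform, analysis-ready table. Sensor names, rates, and values are hypothetical.

```python
# Time-series shaping sketch: resample 1 Hz sensor readings to 1-minute
# mean/std features. All names and numbers are placeholders.
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", periods=600, freq="s")  # 10 min of 1 Hz data
raw = pd.DataFrame(
    {"temp_c": 25 + np.random.default_rng(0).normal(0, 0.2, len(idx))},
    index=idx,
)
features = raw.resample("1min").agg(["mean", "std"])  # AI-ready feature grid
print(features.head())
```
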
  • Data Engineer

    Graham Allen Partners

    Data scientist job in South Bend, IN

    Position Overview: At the heart of our Platform team are our super-talented Data Engineers. Data Engineers are data experts who dive right into new client projects and make it their job to understand how a client's data fits into our data models. Utilizing this knowledge and the industry's newest technologies, they create high-performance databases that become the very foundation of the work we do. Critical at all stages of the data science process, Data Engineers work cross-functionally with both external and internal teams: from business analysts to data scientists, web app developers to platform engineers, IT teams to high-level executives. Data Engineers also provide valuable feedback to our software team that helps to shape the development of Aunsight, our proprietary end-to-end cloud analytics platform, and the development of our proprietary web applications. The best Data Engineers are patient, persistent, focused, creative, and incredibly curious. They love to learn and seek out opportunities to identify unexpected solutions or develop alternate ways to solve challenging problems.

    Essential Duties & Responsibilities: Build and own “one source of truth” data sets to facilitate consistency and efficiency in extracting and analyzing data from disparate data sources. Ensure data integrity by developing and executing necessary processes and controls around the flow of data. Innovate and improve the efficiency of managing data to allow for greater speed and accuracy in producing analyses, metrics, and insights. Collaborate with internal and external teams to understand business needs and issues, troubleshoot problems, conduct root-cause analysis, and develop cost-effective resolutions for data anomalies. Provide input into data governance initiatives to enhance current systems, ensure development of efficient application systems, influence the development of data policy, and support overall corporate and business goals. Utilize technology to analyze data from applicable systems to review data processes, identify issues, and determine actions to resolve or escalate problems that require data, system, or process improvement. Verify the accuracy of table changes and data transformation processes, and test changes prior to deployment as appropriate. Recommend and implement enhancements that standardize and streamline processes, assure data quality and reliability, and reduce processing time to meet client expectations. Communicate progress and completion to the project team, and escalate roadblocks that may impact the delivery schedule. Stay up to date on data engineering and data science trends and developments. Follow company policies and procedures that protect sensitive data and maintain compliance with established security standards and best practices. Additional duties as assigned to ensure client and company success.

    Required Skills: Bachelor's degree in Informatics, Computer Science, Computer Engineering, Mathematics, or a related field, or 3+ years of relevant work experience. Experience working with relational database structures, SQL, and/or flat files, and performing table joins, web crawling, and web development. Proficiency in one or more of the following programming languages: Java, C#, C++, or Python, and familiarity with Node.js. Experience working with commercial relational database systems such as electronic medical records or other clinical systems, client relationship management software, or accounting systems is a plus. Familiarity with various data management methodologies, data exploration techniques, data quality assurance practices, and data discovery/visualization tools. Natural curiosity about what's hidden in the data through exploration, attention to detail, and the ability to see the big picture (similar to putting together a 10,000-piece puzzle). Ability to communicate your ideas (verbal and written) so that team members and clients can understand them. Share our values: growth, relationships, integrity, and passion.

    What's in it for You? The opportunity to work in the booming field of cloud, data management, and analytics alongside some of the brightest minds in the industry. The opportunity to work with cutting-edge technology in a casual, fun environment. The opportunity to be part of a local company committed to making a difference in our community. The chance to work with a rapidly expanding tech company. Flexible schedule and paid time off. Free snacks and an unlimited supply of coffee. Social events such as happy hours, game nights, holiday parties, birthday celebrations, movie days, ice cream sundae bars, fancy coffee carts, a company softball team, etc. Competitive salary and benefits package, including health, vision, dental, and life insurance and a 401(k) plan.
    $70k-94k yearly est. 60d+ ago

Learn more about data scientist jobs

How much does a data scientist earn in Elkhart, IN?

A data scientist in Elkhart, IN typically earns between $57,000 and $105,000 annually, compared with a national range for data scientists of $75,000 to $148,000.

Average data scientist salary in Elkhart, IN

$77,000