Lead Data Scientist
Remote data scientist job
Title: Data Scientist Lead
Duration: Full-time
Location: Cleveland, OH (Remote). This position is 100% remote; however, you must reside within driving distance of our offices in Independence, OH, to attend meetings and team collaboration days.
Salary Range: $140k - $175k annually
US Citizens and Green Card holders only.
Our Client is in search of a Data Scientist Lead for a Direct Hire position.
No Corp to Corp / No Sponsorship / W2 Only
As our Lead Data Scientist, you will act as an internal business consultant to help optimize every facet of this organization. You will solve problems and answer questions - using data - for other departments to help us reduce costs, reduce errors, and be a better organization for our 17 million members.
Requirements:
To thrive in this role, you must have a solid foundation of statistics and data analysis. You must be able to describe a project where you pulled data, analyzed it, identified trends or patterns, determined significance, and delivered insights or recommendations to the business. What was your approach? What tools did you use and why? Did you get to the root cause of the issue? Did you confirm or refute a hypothesis? You must be able to speak to your involvement and decisions at each stage.
For pulling data, we use SQL, so you must be proficient with it. To curate and analyze data, we use a variety of languages and programs: Python, R, and even Excel. You can choose whatever tool helps you best, but you must be proficient in either Python or R.
Excellent communication skills and an attitude of flexibility are a must. Very often, we get into the data and some aspect takes longer than we thought, or we have to pivot and change how we approach a problem. A dashboard works better than a model, perhaps. Then you have to explain to the stakeholder or the team why you made that change or why the deliverable is taking longer than expected. It's all about being creative in your approach and communicating that frequently and clearly to people who may be less tech or data-savvy than you.
You have worked previously with an Agile team or understand these concepts. You expect to participate in daily standup meetings, you'll complete your projects or stories during our bimonthly sprints, and you're ready to meet frequent deployment deadlines.
Responsibilities:
Every day, you will help solve business problems presented by stakeholders using data. You will get a scenario or problem, such as, “We need to reduce cycle times,” and then pull, analyze, and interpret data relevant to that scenario.
You will pull data from SQL databases. You will analyze and interpret data using Python, R, or other tools. You will determine patterns or trends and their statistical significance to the problem. To visualize the data, you will use Tableau.
Once you have completed your analysis, you will pair with a data engineer to present your findings to the stakeholder and confirm, refute, or simply acknowledge a hypothesis. Your work may be implemented or may develop into something else. This role is remote, but you must be able to report to the Independence, OH office for regular collaboration days.
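The workflow this posting describes (pull data with SQL, analyze it, and test whether a pattern is statistically significant) can be sketched roughly as follows. The table, column names, and numbers are invented for illustration, with the stdlib sqlite3 module standing in for a production SQL database:

```python
import sqlite3
import math

# Hypothetical example: an in-memory table standing in for the client's
# SQL database. Table and column names are illustrative, not from the posting.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, cycle_days REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("east", d) for d in (5, 6, 7, 6, 8, 7, 6)] +
                 [("west", d) for d in (9, 10, 8, 11, 9, 10, 12)])

def fetch(region):
    # Pull the raw data with SQL, as the posting describes.
    rows = conn.execute(
        "SELECT cycle_days FROM orders WHERE region = ?", (region,)).fetchall()
    return [r[0] for r in rows]

def mean(xs):
    return sum(xs) / len(xs)

def welch_z(a, b):
    # Large-sample z statistic for a difference in means (Welch-style SE).
    va = sum((x - mean(a)) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (mean(a) - mean(b)) / se

def two_sided_p(z):
    # Normal-approximation p-value via the error function (stdlib only).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

east, west = fetch("east"), fetch("west")
z = welch_z(east, west)
print(f"east mean={mean(east):.2f}, west mean={mean(west):.2f}, p={two_sided_p(z):.4f}")
```

In practice scipy or R would supply the test; the point is only the shape of the work: query, summarize, quantify significance, then communicate the result.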
Lead Data Scientist
Data scientist job in Columbus, OH
Candidates MUST go on-site at one of the following locations:
Columbus, OH
Cincinnati, OH
Cleveland, OH
Indianapolis, IN
Hagerstown, MD
Chicago, IL
Detroit, MI
Minnetonka, MN
Houston, TX
Charlotte, NC
Akron, OH
Experience:
Master's degree and 5+ years of related work experience using statistics and machine learning to solve complex business problems; experience conducting statistical analysis with advanced statistical software, scripting languages, and packages; experience with big data analysis tools and techniques; and experience building and deploying predictive models, web scraping, and scalable data pipelines
Expert understanding of statistical methods and skills such as Bayesian network inference; linear and non-linear regression; and hierarchical, mixed, and multi-level modeling
Python, R, or SAS; SQL; and some lending experience (e.g., HELOC, mortgage) are most important
Excellent communication skills
Candidates with credit card experience (e.g., Discover or Bread Financial) are an A+ fit!
Education:
Master's degree or PhD in computer science, statistics, economics, or related fields
Responsibilities:
Prioritizes analytical projects based on business value and technological readiness
Performs large-scale experimentation and builds data-driven models to answer business questions
Conducts research on cutting-edge techniques and tools in machine learning/deep learning/artificial intelligence
Evangelizes best practices to analytics and products teams
Acts as the go-to resource for machine learning across a range of business needs
Owns the entire model development process, from identifying the business requirements through data sourcing, model fitting, presenting results, and production scoring
Provides leadership, coaching, and mentoring to team members and develops the team to work with all areas of the organization
Works with stakeholders to ensure that business needs are clearly understood and that services meet those needs
Anticipates and analyzes trends in technology while assessing the emerging technology's impact(s)
Coaches individuals through change and serves as a role model
Skills:
Up-to-date knowledge of machine learning and data analytics tools and techniques
Strong knowledge in predictive modeling methodology
Experienced at leveraging both structured and unstructured data sources
Willingness and ability to learn new technologies on the job
Demonstrated ability to communicate complex results to technical and non-technical audiences
Strategic, intellectually curious thinker with focus on outcomes
Professional image with the ability to form relationships across functions
Ability to train more junior analysts regarding day-to-day activities, as necessary
Proven ability to lead cross-functional teams
Strong experience with Cloud Machine Learning technologies (e.g., AWS SageMaker)
Strong experience with machine learning environments (e.g., TensorFlow, scikit-learn, caret)
Demonstrated expertise with at least one Data Science environment (R/RStudio, Python, SAS) and at least one database architecture (SQL, NoSQL)
Financial Services background preferred
Data Scientist with Hands On development experience with R, SQL & Python
Data scientist job in Columbus, OH
*Per the client, No C2C's!*
Central Point Partners is currently interviewing candidates in the Columbus, OH area for a large client.
Only GCs and USCs.
This position is Hybrid (4 days onsite)! Only candidates who are local to Columbus, OH will be considered.
Data Scientist with Hands On development experience with R, SQL & Python
Summary:
Our client is seeking a passionate, data-savvy Senior Data Scientist to join the Enterprise Analytics team to fuel our mission of growth through data-driven insights and opportunity discovery. This dynamic role uses a consultative approach with the business segments to dive into our customer, product, channel, and digital data to uncover opportunities for consumer experience optimization and customer value delivery. You will also enable stakeholders with actionable, intuitive performance insights that provide the business with direction for growth. The ideal candidate will have a robust mix of technical and communication skills, with a passion for optimization, data storytelling, and data visualization. You will collaborate with a centralized team of data scientists as well as teams across the organization including Product, Marketing, Data, Finance, and senior leadership. This is an exciting opportunity to be a key influencer to the company's strategic decisions and to learn and grow with our Analytics team.
Notes from the manager
The skills that will be critical will be Python or R and a firm understanding of SQL along with foundationally understanding what data is needed to perform studies now and in the future. For a high-level summary that should help describe what this person will be asked to do alongside their peers:
I would say this person will balance analysis with development, knowing when to jump in and knowing when to step back to lend their expertise.
Feature & Functional Design
Data scientists are embedded in the teams designing the feature. Their main job here is to define the data tracking needed to evaluate the business case: things like event logging, Adobe tagging, third-party data ingestion, and any other tracking requirements. They also consult on and outline if/when the business should bring data into the bank, and will help connect the business with CDAO and IT warehousing and data engineering partners should new data need to be brought forward.
Feature Engineering & Development
The same data scientists stay involved as the feature moves into execution. They support all necessary functions (Amigo, QA, etc.) to ensure data tracking is in place when the feature goes live. They also begin preparing to support launch evaluation and measurement against experimentation design or business case success criteria.
Feature Rollout & Performance Evaluation
Owns tracking the rollout, running A/B tests, and conducting impact analysis for all features they have been involved with during the Feature & Functional Design and Feature Engineering & Development stages. They provide an unbiased view of how the feature performs against the original business case, along with making objective recommendations that provide direction for the business. They will roll off once the feature has matured through business case/experiment design and evaluation.
In addition to supporting feature rollouts…
Data scientists on the team are also encouraged to pursue self-driven initiatives during periods when they are not actively supporting other projects. These initiatives may include designing experiments, conducting exploratory analyses, developing predictive models, or identifying new opportunities for impact.
For more information about this opportunity, please contact Bill Hart at ************ and email your resume to **********************************!
Data Engineer- ETL/ELT - Hybrid/Remote
Remote data scientist job
Crown Equipment Corporation is a leading innovator in world-class forklift and material handling equipment and technology. As one of the world's largest lift truck manufacturers, we are committed to providing the customer with the safest, most efficient and ergonomic lift truck possible to lower their total cost of ownership.
Indefinite US Work Authorization Required.
Primary Responsibilities
Design, build and optimize scalable data pipelines and stores.
Clean, prepare and optimize data for consumption in applications and analytics platforms.
Participate in peer code reviews to uphold internal standards.
Ensure procedures are thoroughly tested before release.
Write unit tests and record test results.
Detect, define and debug programs whenever problems arise.
Provide training to users and knowledge transfer to support personnel and other staff members as required.
Prepare system and programming documentation in accordance with internal standards.
Interface with users to extract functional needs and determine requirements.
Conduct detailed systems analysis to define scope and objectives and design solutions.
Work with Business Analyst to help develop and write system requirements.
Establish project plans and schedules and monitor progress providing status reports as required.
Qualifications
Bachelor's degree in Computer Science, Software/Computer Engineering, Information Systems, or related field is required.
4+ years' experience in SQL, ETL, ELT, and SAP data is required.
Python, Databricks, and Snowflake experience preferred.
Strong written, verbal, analytical and interpersonal skills are necessary.
Remote Work: Crown offers hybrid remote work for this position. A reasonable commute is necessary as some onsite work is required. Relocation assistance is available.
Work Authorization:
Crown will only employ those who are legally authorized to work in the United States. This is not a position for which sponsorship will be provided. Individuals with temporary visas or who need sponsorship for work authorization now or in the future, are not eligible for hire.
No agency calls please.
Compensation and Benefits:
Crown offers an excellent wage and benefits package for full-time employees including Health/Dental/Vision/Prescription Drug Plan, Flexible Benefits Plan, 401K Retirement Savings Plan, Life and Disability Benefits, Paid Parental Leave, Paid Holidays, Paid Vacation, Tuition Reimbursement, and much more.
EOE Veterans/Disabilities
Senior Data Engineer.
Data scientist job in Columbus, OH
Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in Columbus, OH (Remote). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-95277
Pay Range: $70 - $71 /hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Work with Marketing data partners to build data pipelines that automate data feeds from the partners to internal systems on Snowflake.
Work with Data Analysts to understand their data needs and prepare datasets for analytics.
Work with Data Scientists to build the infrastructure to deploy the models, monitor the performance, and build the necessary audit infrastructure.
Key Requirements and Technology Experience:
Key skills: Snowflake, Python, and AWS
Experience with building data pipelines, data pipeline infrastructure and related tools and environments used in analytics and data science (ex: Python, Unix)
Experience in developing analytic workloads with AWS Services, S3, Simple Queue Service (SQS), Simple Notification Service (SNS), Lambda, EC2, ECR and Secrets Manager.
Strong proficiency in Python, SQL, Linux/Unix shell scripting, GitHub Actions or Docker, Terraform or CloudFormation, and Snowflake.
Order of importance: Terraform, Docker, GitHub Actions or Jenkins
Experience with orchestration tools such as Prefect, dbt, or Airflow.
Experience automating data ingestion, processing, and reporting/monitoring.
Experience with other relevant tools used in data engineering (e.g., SQL, Git)
Ability to set up environments (Dev, QA, and Prod) using GitHub repos and GitHub rules/methodologies, and to maintain them (via SQL coding and proper versioning)
Our client is a leader in the insurance industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Senior Data Engineer(only W2)
Data scientist job in Columbus, OH
Bachelor's Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, or Java.
Proficiency with Azure data services, such as Azure Data Lake, Azure Data Factory and Databricks.
Expertise using Cloud Security (e.g., Active Directory, network security groups, and encryption services).
Proficient in Python for developing and maintaining data solutions.
Experience with optimizing or managing technology costs.
Ability to build and maintain a data architecture supporting both real-time and batch processing.
Ability to implement industry standard programming techniques by mastering advanced fundamental concepts, practices, and procedures, and having the ability to analyze and solve problems in existing systems.
Expertise with unit testing, integration testing and performance/stress testing.
Database management skills and understanding of legacy and contemporary data modeling and system architecture.
Demonstrated leadership skills, team spirit, and the ability to work cooperatively and creatively across an organization
Experience on teams leveraging Lean or Agile frameworks.
Junior Data Engineer
Data scientist job in Columbus, OH
Contract-to-Hire
Columbus, OH (Hybrid)
Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes.
Job Responsibilities
Automate key processes and enhance data quality
Improve injection processes and enhance machine learning capabilities
Manage substitutions and allocations to streamline product ordering
Work on logistics-related data engineering tasks
Build and maintain ML models for predictive analytics
Interface with various customer systems
Collaborate on integrating AI models into customer service
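Where this posting mentions building ML models for predictive analytics on inventory and logistics data, one of the simplest possible demand forecasts is simple exponential smoothing. This toy sketch uses invented demand figures and an arbitrary smoothing factor, purely to illustrate the idea:

```python
def exp_smooth_forecast(history, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    `history` is a list of past demand values; `alpha` weights recent
    observations more heavily as it approaches 1.
    """
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

demand = [100, 120, 110, 130, 125]  # invented weekly order counts
forecast = exp_smooth_forecast(demand)
print(f"next-week forecast: {forecast:.1f}")
```

A production model would of course use richer features and a proper library; this only shows the smallest version of the predictive-analytics loop the role describes.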
Qualifications
Bachelor's degree in related field
0-2 years of relevant experience
Proficiency in SQL and Python
Understanding of GCP/BigQuery (or any cloud experience; basic certifications a plus)
Knowledge of data science concepts
Business acumen and understanding (corporate experience or internship preferred)
Familiarity with Tableau
Strong analytical skills
Aptitude for collaboration and knowledge sharing
Ability to present confidently in front of leaders
Why Should You Apply?
You will be part of custom technical training and professional development through our Elevate Program!
Start your career with a Fortune 15 company!
Access to cutting-edge technologies
Opportunity for career growth
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Senior Data Engineer
Data scientist job in Columbus, OH
Responsible for understanding, preparing, processing, and analyzing data to make it valuable and useful for operations decision support.
Accountabilities in this role include:
Partnering with Business Analysis and Analytics teams.
Demonstrating problem-solving ability for effective and timely resolution of system issues, including production outages.
Developing and supporting standard processes to harvest data from various sources and perform data blending to develop advanced data sets, analytical cubes, and data exploration.
Utilizing queries, data exploration and transformation, and basic statistical methods.
Creating Python scripts.
Developing Microsoft SQL Server Integration Services Workflows.
Building Microsoft SQL Server Analysis Services Tabular Models.
Focusing on SQL database work with a blend of strong technical and communication skills.
Demonstrating ability to learn and navigate in large complex environments.
Exhibiting Excel acumen to develop complex spreadsheets, formulas, create macros, and understand VBA code within the modules.
Required Skills:
Experience with MS SQL
Proficiency in Python
Desired Skills:
Experience with SharePoint
Advanced Excel Skills (formulas, VBA, Power Pivot, Pivot Table)
Senior Data Analytics Engineer
Data scientist job in Columbus, OH
We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products, spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization.
You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.
Key Responsibilities
Data Engineering & Pipeline Development
Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services.
Build end-to-end solutions to move large-scale big data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).
Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
Implement best practices for data quality, validation, auditing, and error-handling within pipelines.
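The data-quality bullet above (validation, auditing, and error handling within pipelines) can be sketched minimally as row-level rule checks with a quarantine path for auditing. The rule names, fields, and example rows below are invented, not taken from the client's stack:

```python
# Invented validation rules; field names are illustrative only.
RULES = [
    ("non_null_id",     lambda r: r.get("id") is not None),
    ("amount_positive", lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] > 0),
    ("known_status",    lambda r: r.get("status") in {"new", "shipped", "returned"}),
]

def validate(rows):
    """Split rows into (clean, quarantined); quarantined rows carry an audit note."""
    clean, quarantined = [], []
    for row in rows:
        failed = [name for name, check in RULES if not check(row)]
        if failed:
            # Keep the bad row with a record of what failed, for auditing.
            quarantined.append({**row, "_failed_rules": failed})
        else:
            clean.append(row)
    return clean, quarantined

rows = [
    {"id": 1, "amount": 19.99, "status": "new"},       # passes all rules
    {"id": None, "amount": 5.0, "status": "shipped"},  # fails non_null_id
    {"id": 3, "amount": -2.0, "status": "lost"},       # fails two rules
]
clean, quarantined = validate(rows)
print(len(clean), len(quarantined))
```

In an AWS pipeline the same pattern would typically run inside a Glue or PySpark job, with quarantined records landing in a separate S3 prefix or audit table.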
Analytics Solution Design
Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.
Cross-Functional Collaboration
Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
Participate in Daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
Provide mentorship and guidance to junior engineers and analysts as needed.
Engineering (Supporting Skills)
Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application layer needs.
Implement scripts, utilities, and micro-services as needed to support analytics workloads.
Required Qualifications
5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
Expert-level proficiency (10/10) in:
Python
PySpark
Strong working knowledge of:
SQL and other programming languages
Demonstrated experience designing and delivering big-data ingestion and transformation solutions through AWS.
Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
Strong problem-solving skills and the ability to work independently in a fast-paced environment.
Preferred Qualifications
Experience with BI/Visualization tools such as Tableau
Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
Familiarity with machine learning workflows or MLOps frameworks.
Knowledge of metadata management, data governance, and data lineage tools.
Data Engineer
Remote data scientist job
We are looking for a Data Engineer in Austin, TX (fully remote - MUST work CST hours).
Job Title: Data Engineer
Contract: 12 Months
Hourly Rate: $75- $82 per hour (only on W2)
Additional Notes:
Fully remote - MUST work CST hours
• Key skills: SQL, Python, dbt
• Utilize geospatial data tools (PostGIS, ArcGIS/ArcPy, QGIS, GeoPandas, etc.) to optimize and normalize spatial data storage, and run spatial queries and processes to power analysis and data products
• Design, create, refine, and maintain data processes and pipelines used for modeling, analysis, and reporting using SQL (ideally Snowflake and PostgreSQL), Python, and pipeline and transformation tools like Airflow and dbt
• Conduct detailed data research on internal and external geospatial data (POI, geocoding, map layers, geometric shapes), identify changes over time, and maintain geospatial data (shapefiles, polygons, and metadata)
• Operationalize data products with detailed documentation, automated data quality checks and change alerts
• Support data access through various sharing platforms, including dashboard tools
• Troubleshoot failures in data processes, pipelines, and products
• Communicate and educate consumers on data access and usage, managing transparency in metric and logic definitions
• Collaborate with other data scientists, analysts, and engineers to build full-service data solutions
• Work with cross-functional business partners and vendors to acquire and transform raw data sources
• Provide frequent updates to the team on progress and status of planned work
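As a stand-in for the spatial queries the role describes (normally run through PostGIS or GeoPandas), here is a minimal pure-Python point-in-polygon check using the classic ray-casting algorithm; the polygon coordinates are invented, and a real pipeline would delegate this to a geospatial library:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count edge crossings of a rightward ray from (x, y).

    `polygon` is a list of (x, y) vertices; an odd crossing count means inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# Invented example: a unit square and two test points.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # inside
print(point_in_polygon(1.5, 0.5, square))  # outside
```

This is the primitive underneath spatial joins such as "which POIs fall inside this polygon", which tools like PostGIS implement at scale with spatial indexes.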
About us:
Harvey Nash is a national, full-service talent management firm specializing in technology positions. Our company was founded with a mission to serve as the talent partner of choice for the information technology industry.
Our company vision has led us to incredible growth and success in a relatively short period of time and continues to guide us today. We are committed to operating with the highest possible standards of honesty, integrity, and a passionate commitment to our clients, consultants, and employees.
We are part of Nash Squared Group, a global professional services organization with over forty offices worldwide.
For more information, please visit us at ******************************
Harvey Nash will provide benefits; please review: 2025 Benefits - Corporate
Regards,
Dinesh Soma
Recruiting Lead
Data Engineer
Data scientist job in Columbus, OH
We're seeking a skilled Data Engineer based in Columbus, OH, to support a high-impact data initiative. The ideal candidate will have hands-on experience with Python, Databricks, SQL, and version control systems, and be comfortable building and maintaining robust, scalable data solutions.
Key Responsibilities
Design, implement, and optimize data pipelines and workflows within Databricks.
Develop and maintain data models and SQL queries for efficient ETL processes.
Partner with cross-functional teams to define data requirements and deliver business-ready solutions.
Use version control systems to manage code and ensure collaborative development practices.
Validate and maintain data quality, accuracy, and integrity through testing and monitoring.
Required Skills
Proficiency in Python for data engineering and automation.
Strong, practical experience with Databricks and distributed data processing.
Advanced SQL skills for data manipulation and analysis.
Experience with Git or similar version control tools.
Strong analytical mindset and attention to detail.
Preferred Qualifications
Experience with cloud platforms (AWS, Azure, or GCP).
Familiarity with enterprise data lake architectures and best practices.
Excellent communication skills and the ability to work independently or in team environments.
Data Engineer
Remote data scientist job
This is a fully remote 12+ month contract position. No C2C or 3rd party candidates will be considered.
Data Engineer (AI & Automation)
We are seeking a Data Engineer with hands-on experience using AI-driven tools to support automation, system integrations, and continuous process improvement across internal business systems. This role will focus on building and maintaining scalable data pipelines, enabling intelligent workflows, and improving data accessibility and reliability.
Key Responsibilities
Design, build, and maintain automated data pipelines and integrations across internal systems
Leverage AI-enabled tools to streamline workflows and drive process improvements
Develop and orchestrate workflows using Apache Airflow and n8n AI
Model, transform, and optimize data in Snowflake and Azure SQL Data Warehouse
Collaborate with business and technical teams to identify automation opportunities
Ensure data quality, reliability, and performance across platforms
Required Qualifications
Experience as a Data Engineer or similar role
Hands-on experience with Apache Airflow and modern workflow orchestration tools
Strong experience with Snowflake and Azure SQL Data Warehouse
Familiarity with AI-driven automation and integration tools (e.g., n8n AI)
Strong SQL skills and experience building scalable data pipelines
Preferred Qualifications
Experience integrating multiple internal business systems
Background in process improvement or operational automation
Experience working in cloud-based data environments (Azure preferred)
Data Engineer
Data scientist job in Dublin, OH
The Data Engineer is a technical leader and hands-on developer responsible for designing, building, and optimizing data pipelines and infrastructure to support analytics and reporting. This role will serve as the lead developer on strategic data initiatives, ensuring scalable, high-performance solutions are delivered effectively and efficiently.
The ideal candidate is self-directed, thrives in a fast-paced project environment, and is comfortable making technical decisions and architectural recommendations. The ideal candidate also has prior experience with modern data platforms, most notably Databricks and the “lakehouse” architecture. They will work closely with cross-functional teams, including business stakeholders, data analysts, and engineering teams, to develop data solutions that align with enterprise strategies and business goals.
Experience in the financial industry is a plus, particularly in designing secure and compliant data solutions.
Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
Optimize data storage, retrieval, and processing for performance, security, and cost-efficiency.
Ensure data integrity and governance by implementing robust validation, monitoring, and compliance processes.
Consume and analyze data from the data pipeline to infer, predict, and recommend actionable insights, which will inform operational and strategic decision-making to produce better results.
Empower departments and internal consumers with metrics and business intelligence to operate and direct our business, better serving our end customers.
Determine technical and behavioral requirements, identify strategies as solutions, and select solutions based on resource constraints.
Work with the business, process owners, and IT team members to design solutions for data and advanced analytics solutions.
Perform data modeling and prepare data in databases for analysis and reporting through various analytics tools.
Play a technical specialist role in championing data as a corporate asset.
Provide technical expertise in collaborating with project and other IT teams, internal and external to the company.
Contribute to and maintain system data standards.
Research and recommend innovative and, where possible, automated approaches for system data administration tasks. Identify approaches that leverage our resources and provide economies of scale.
Engineer systems that balance and meet performance, scalability, recoverability (including backup design), maintainability, security, and high-availability requirements and objectives.
Skills:
Databricks and related - SQL, Python, PySpark, Delta Live Tables, Data pipelines, AWS S3 object storage, Parquet/Columnar file formats, AWS Glue.
Systems Analysis - The application of systems analysis techniques and procedures, including consulting with users, to determine hardware, software, platform, or system functional specifications.
Time Management - Managing one's own time and the time of others.
Active Listening - Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
Critical Thinking - Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
Active Learning - Understanding the implications of new information for both current and future problem-solving and decision-making.
Writing - Communicating effectively in writing as appropriate for the needs of the audience.
Speaking - Talking to others to convey information effectively.
Instructing - Teaching others how to do something.
Service Orientation - Actively looking for ways to help people.
Complex Problem Solving - Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
Troubleshooting - Determining causes of operating errors and deciding what to do about them.
Judgment and Decision Making - Considering the relative costs and benefits of potential actions to choose the most appropriate one.
Experience and Education:
High School Diploma (or GED or High School Equivalence Certificate).
Associate degree or equivalent training and certification.
5+ years of experience in data engineering, including SQL, data warehousing, and cloud-based data platforms.
Databricks experience.
2+ years Project Lead or Supervisory experience preferred.
Must be legally authorized to work in the United States. We are unable to sponsor or take over sponsorship at this time.
Data Engineer (Databricks)
Data scientist job in Columbus, OH
ComResource is searching for a highly skilled Data Engineer with a background in SQL and Databricks who can handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and research new uses for data acquisition.
Requirements:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of technological languages and tools to connect systems together.
Recommend different ways to constantly improve data reliability and quality.
Qualifications:
5+ years of data quality engineering experience
Experience with Cloud-based systems, preferably Azure
Databricks and SQL Server testing
Experience with ML tools and LLMs
Test automation frameworks
Python and SQL for data quality checks
Data profiling and anomaly detection
Documentation and quality metrics
Healthcare data validation experience preferred
Test automation and quality process development
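The "data profiling and anomaly detection" and "Python and SQL for data quality checks" items above can be sketched with a simple z-score screen in plain Python. This is an illustration only; production checks for this role would typically run inside Databricks against the warehouse:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Return the values lying more than `threshold` sample standard
    deviations from the mean -- a basic profiling-style anomaly screen."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

A real quality suite would layer checks like this into a test automation framework with documented metrics, per the qualifications above.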
Plus:
Azure Databricks
Azure Cognitive Services integration
Databricks Foundation Model integration
Claude API implementation
Python and NLP frameworks (spaCy, Hugging Face, NLTK)
Sr. Biostatistician
Remote data scientist job
Please no third party applicants
A pharmaceutical company is looking for a Senior Biostatistician for a 6-month renewable project. This consultant must be very hands-on and have proven experience supporting regulatory submissions. In addition, they must possess strong programming skills (TLFs) and CDISC expertise. Strong communication is critical.
EXPERIENCE & QUALIFICATIONS
8-10+ years of biostatistics experience in the pharmaceutical industry with recent Sponsor side experience
Minimum of MS degree in Biostatistics/Statistics
Must have recent hands-on statistical experience such as drafting SAPs and conducting programmatic TLF reviews
Proven track record with regulatory submissions
Ability to analyze data and provide guidance to the statistical programming team if needed
Excellent communication skills to interpret and explain complex results to the study team
LOCATION:
Work will be performed remotely; candidates should be able to accommodate PST core working hours.
Principal Data Scientist
Remote data scientist job
At ServiceLink, we believe in pushing the limits of what's possible through innovation. We're looking for a high-achieving AI enthusiast to lead ground-breaking initiatives that redefine our industry. As our Principal Data Scientist, you'll harness cutting-edge technologies, from advanced machine learning and deep learning to generative AI, Large Language Models, and Agentic AI, to create production-ready systems that solve real-world challenges. This is your opportunity to shape strategy, mentor top talent, and turn ambitious ideas into transformative solutions in an environment that champions bold thinking and continuous innovation.
Applicants must be currently authorized to work in the United States on a full-time basis and must not require sponsorship for employment visa status now or in the future.
A DAY IN THE LIFE
In this role, you will…
Transform complex business challenges into innovative AI solutions that leverage deep learning, LLMs, and autonomous Agentic AI frameworks.
Lead projects end to end, from ideation and data gathering to model design, fine-tuning, deployment, and continuous improvement using full MLOps practices.
Collaborate closely with business stakeholders, Data Engineering, Product, and Infrastructure teams to ensure our AI solutions are powerful, secure, and scalable.
Drive both research and production by designing experiments, publishing state-of-the-art work in high-impact journals, and protecting strategic intellectual property.
Mentor and inspire our next generation of data scientists, sharing insights on emerging trends and best practices in AI.
WHO YOU ARE
You are …
A visionary leader with an advanced degree (Master's or Ph.D.) in Computer Science, Engineering, or a related field, backed by 10+ years of progressive experience in AI and data science.
A technical powerhouse with a solid track record in statistical analysis, machine learning, deep learning, and building production-grade models using transformer architectures and Agentic AI systems.
Proficient in Python-and comfortable with other modern programming environments-armed with real-world experience in cloud platforms (preferably Microsoft Azure) and end-to-end AI development (CRISP-DM and ML-Ops).
An exceptional communicator who can distill complex technical ideas into strategic insights for diverse audiences, from the boardroom to the lab.
A proactive problem solver and collaborative team player who thrives in a fast-paced, interdisciplinary setting, ready to balance innovative risk with practical execution.
Responsibilities
Strategize with leadership and stakeholders to align AI innovations with business objectives, identifying risks, seizing opportunities, and driving measurable outcomes.
Architect and lead the development of next-generation AI solutions, with a special focus on Agentic AI, deep learning models, and transformer-based LLMs.
Build automated MLOps pipelines to ensure continuous integration, deployment, and monitoring of models across diverse data environments.
Act as both a thought leader and an active contributor, publishing in high-impact journals, representing ServiceLink at industry events, and safeguarding our IP.
Collaborate cross-functionally to ensure our AI systems are secure, scalable, and cost-effective, continuously refining them based on rigorous performance metrics.
Mentor and empower your peers, fostering a culture of innovation, resilience, and learning.
All other duties as assigned.
Qualifications
Advanced degree (Master's or Ph.D.) in Computer Science, Engineering, or a related quantitative discipline, backed by 10+ years of relevant industry experience.
Demonstrated expertise in Python and practical experience deploying advanced ML/AI solutions, including deep learning, LLMs, and Agentic AI, in production environments.
Proficiency with modern cloud platforms (preferably Microsoft Azure) and a proven record of operationalizing AI via MLOps best practices.
Strong ability to balance innovation with practicality, evaluating technical capabilities versus business and cost considerations.
Excellent communicator with a knack for translating intricate technical strategies into clear, actionable plans.
A collaborative mindset with a history of mentoring teams and building high-impact technology solutions.
Lead Data Scientist
Remote data scientist job
May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that literally reimagines the way AVs think.
Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We're building the world's best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces. Since our founding in 2017, we've given more than 300,000 autonomy-enabled rides to real people around the globe. And we're just getting started. We're hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us.
Lead Data Scientist
May Mobility is experiencing a period of significant growth as we expand our autonomous shuttle and mobility services nationwide. We are seeking talented data scientists and machine learning engineers to develop automated methods for tagging data collected by our autonomous vehicles. This will enable us to generate valuable insights from our data, making it easily searchable for triaging issues, creating test sets, and building datasets for autonomy improvements. Join us and make a crucial impact on our development and business decisions!
Responsibilities
Work independently with cross-functional teams to develop software and system requirements.
Design, implement, and deploy state-of-the-art machine learning models.
Monitor the performance of the auto-tagging system and drive continuous improvement.
Lead team code quality activities including design and code reviews.
Communicate complex analytical findings and model performance metrics to both technical and non-technical stakeholders through clear visualizations and presentations.
Provide technical guidance to team members.
Skills
Expertise in deep learning, with hands-on experience in the design, training, and evaluation of a wide range of algorithms.
Ability to build and productionize machine learning models and large-scale systems.
Awareness of the latest advancements in the field, with the ability to translate innovative concepts into practical solutions for May.
Excellent problem-solving skills with a meticulous approach to model architecture and optimization.
Ability to provide individual and team mentorship, including technical leadership for complex projects.
Strong understanding of data labeling best practices, label consistency, and performance metrics specifically relevant to large-scale auto-tagging accuracy and dataset curation.
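The "label consistency" point above is commonly quantified with inter-annotator agreement. A minimal Cohen's kappa sketch in plain Python (illustrative only, not May Mobility's tooling; the maneuver labels in the usage below are hypothetical):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' labels for the same items.
    1.0 = perfect agreement; 0.0 = agreement at chance level."""
    if len(labels_a) != len(labels_b):
        raise ValueError("label sequences must align item-for-item")
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement from each annotator's marginal label frequencies.
    expected = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / (n * n)
    if expected == 1.0:  # both annotators used one identical label throughout
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

Tracking a metric like this over time is one way an auto-tagging system's labels can be audited against human annotation.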
Qualifications and Experience
Required
B.S., M.S., or Ph.D. degree in Engineering, Data Science, Computer Science, Math, or a related quantitative field.
10+ years of hands-on experience as a Data Scientist or ML Engineer with a strong focus on algorithmic design and deep learning.
Expert-level programming skills in Python with extensive use of modern deep learning frameworks like TensorFlow or PyTorch.
Demonstrated experience in building and deploying production-level machine learning systems from conception to delivery.
Experience working with multimodal data like visual data (images/video), structured perception and behavior outputs (e.g., agent tracks, vehicle state estimation, motion planner outputs).
Demonstrated expertise in databases for data extraction, transformation, and analysis.
Prior experience in mentoring and supporting junior engineers.
Desirable
Background in robotics or autonomous systems.
Experience with multi-modal deep learning models, transformers, vision-language models (VLMs), etc.
Experience with classifying driving maneuvers and traffic interactions using machine learning methods.
Solid understanding of ML deployment lifecycle, MLOps practices, and cloud computing platforms (e.g., AWS, GCP).
Expertise in PySpark/Apache Spark for handling large-scale data processing.
Physical Requirements
Standard office working conditions, which include but are not limited to:
Prolonged sitting
Prolonged standing
Prolonged computer use
Lift up to 50 pounds
Remote role based out of Ann Arbor, MI.
Remote employees work primarily from home or an alternative work space.
Travel requirements - 0%
The salary range provided is based on a position located in the state of Michigan. Our salary ranges can vary across different locations in the United States.
Benefits and Perks
Comprehensive healthcare suite including medical, dental, vision, life, and disability plans. Domestic partners who have been residing together at least one year are also eligible to participate.
Health Savings and Flexible Spending Healthcare and Dependent Care Accounts available.
Rich retirement benefits, including an immediately vested employer safe harbor match.
Generous paid parental leave as well as a phased return to work.
Flexible vacation policy in addition to paid company holidays.
Total Wellness Program providing numerous resources for overall wellbeing
Don't meet every single requirement? Studies have shown that women and/or people of color are less likely to apply to a job unless they meet every qualification. At May Mobility, we're committed to building a diverse, inclusive, and authentic workforce, so if you're excited about this role but your previous experience doesn't align perfectly with every qualification, we encourage you to apply anyway! You may be the perfect candidate for this or another role at May.
Want to learn more about our culture & benefits? Check out our website!
May Mobility is an equal opportunity employer. All applicants for employment will be considered without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity or expression, veteran status, genetics or any other legally protected basis. Below, you have the opportunity to share your preferred gender pronouns, gender, ethnicity, and veteran status with May Mobility to help us identify areas of improvement in our hiring and recruitment processes. Completion of these questions is entirely voluntary. Any information you choose to provide will be kept confidential, and will not impact the hiring decision in any way. If you believe that you will need any type of accommodation, please let us know.
Note to Recruitment Agencies:
May Mobility does not accept unsolicited agency resumes. Furthermore, May Mobility does not pay placement fees for candidates submitted by any agency other than its approved partners.
Salary Range: $167,000 - $190,000 USD
Staff Data Scientist
Remote data scientist job
Role Description
We're looking for a Staff Data Scientist to partner with product, engineering, and design teams to answer key questions and drive impact in the Core Experience and Artificial Intelligence (AI) areas. This area focuses on improving key parts of the core product by re-envisioning the home experience and cross-platform experience, improving user onboarding, building new functionality, and launching high-impact initiatives. We solve challenging problems and boost business growth through a deep understanding of user behaviors with applied analytics techniques and business insights. An ideal candidate should have robust knowledge of the consumer lifecycle, behavior analysis, and customer segmentation. We're looking for someone who can bring opinions and strong narrative framing to proactively influence the business.
Responsibilities
Perform analytical deep-dives to analyze problems and opportunities, identify the hypothesis and design & execute experiments
Inform future experimentation design and roadmaps by performing exploratory analysis to understand user engagement behavior and derive insights
Create personalized segmentation strategies leveraging propensity models to enable targeting of offers and experiences based on user attributes
Identify key trends and build automated reporting & executive-facing dashboards to track the progress of acquisition, monetization, and engagement trends.
Identify opportunities, advocate for new solutions, and build momentum cross-functionally to move forward ideas that are grounded in data.
Monitor and analyze a high volume of experiments designed to optimize the product for user experience and revenue & promote best practices for multivariate experiments
Translate complex concepts into implications for the business via excellent communication skills, both verbal and written
Understand what matters most and prioritize ruthlessly
Work with cross-functional teams (including Data Science, Marketing, Product, Engineering, Design, User Research, and senior executives) to rapidly execute and iterate
Requirements
Bachelor's degree or above in a quantitative discipline: Statistics, Applied Mathematics, Economics, Computer Science, Engineering, or a related field
8+ years experience using analytics to drive key business decisions; examples include business/product/marketing analytics, business intelligence, strategy consulting
Proven track record of being able to work independently and proactively engage with business stakeholders with minimal direction
Significant experience with SQL
Deep understanding of statistical analysis, experimentation design, and common analytical techniques like regression, decision trees
Solid background in running multivariate experiments to optimize a product or revenue flow
Strong verbal and written communication skills
Strong leadership and influence skills
Proficiency in programming/scripting and knowledge of statistical packages like R or Python is a plus
Preferred Qualifications
Product analytics experience in a SaaS company
Master's degree or above in a quantitative discipline: Statistics, Applied Mathematics, Economics, Computer Science, Engineering, or a related field
Compensation
US Zone 1
This role is not available in Zone 1
US Zone 2: $197,400 - $267,000 USD
US Zone 3: $175,400 - $237,400 USD
Staff Data Scientist - Product Experimentation & Evaluation - US
Remote data scientist job
You will be a strategic partner to product, engineering, and trust and safety teams, responsible for defining evaluation frameworks, leading experiments (A/B, quasi-experiments, etc.), and turning offline and live model performance into product improvements. This role requires a strong track record in startup-style experimentation (moving quickly with scrappy but rigorous methods) and product experimentation at scale. The ideal candidate will also bring proven experience in leading and managing teams to deliver high-impact data science work.
100% remote
Salary Range $120,000 - $260,000
Essential Job Functions
● Lead end-to-end experimentation: hypothesis generation, metric design, experiment design (A/B, multivariate, sequential, etc.), analysis, and interpretation.
● Build and maintain evaluation frameworks for LLMs: correctness, consistency, safety, hallucination detection, bias/fairness, etc.
● Develop predictive models, classification/ranking systems, and heuristics to improve product features related to AI/language generation.
● Collaborate with prompt engineers & model builders to test prompt strategies, fine-tuning, or model selection; work on failure modes/error analysis.
● Automate experiment pipelines: dashboards, monitoring, alerting, instrumentation. Ensure data quality & measurement integrity.
● Use causal inference / observational studies when randomized experiments are not feasible.
● Present findings and recommendations to both technical and non-technical leadership; influence roadmap decisions.
● Drive experimentation in startup-like environments: rapid iteration, learning from limited data, and balancing speed with rigor.
● Shape large-scale product experimentation: define frameworks for experimentation at scale and integrate results into product strategy.
● Lead and mentor teams of data scientists, analysts, and engineers; set best practices for experiment design and AI product evaluation.
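The experimentation loop above bottoms out in analysis like the following. A minimal two-proportion z-test in plain Python (a hedged sketch of one standard A/B analysis; real experimentation platforms add variance reduction, sequential corrections, and multiple-testing controls):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z, p_value); positive z means variant B converts higher."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value
```

For example, 100/1000 conversions in control versus 150/1000 in treatment yields a clearly significant lift, while 100 versus 102 does not.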
Requirements
~8-12+ years of experience in data science / ML roles, ideally with experiment design/product analytics.
Proven track record in both startup-style and large-scale product experimentation.
Experience leading teams, setting strategy, and driving execution in cross-functional environments.
Strong background with statistical methods, causal inference, and rigorous measurement.
Experience using LLMs / NLP / AI / prompt engineering or a closely related field.
Excellent coding skills in Python (or similar), strong SQL, experience building and deploying models or analytic pipelines.
Ability to work in cross-functional teams and translate technical results into business or product changes.
Strong communication skills; ability to explain complex analyses to non-technical stakeholders.
Preferred Qualifications
Experience fine-tuning or working with multiple LLM providers / APIs.
Experience with experiment platforms or building internal tooling for experimentation & model evaluation.
Experience in voice / ASR or other multi-modal data.
Benefits
Health Care Plan (Medical, Dental & Vision)
Retirement Plan (401k)
Life Insurance (Basic, Voluntary & AD&D)
Flexible Paid Time Off
Family Leave (Maternity, Paternity)
Short Term & Long Term Disability
Training & Development
Work From Home
Stock Option Plan
Data Scientist
Remote data scientist job
The Data Scientist will contribute to the development of AI and Generative AI (GenAI) models and workflows tailored for the healthcare domain. This role will work on building solutions that address real-world challenges. This role offers an exciting opportunity to apply knowledge of GenAI, machine learning, and healthcare-specific requirements while collaborating with a multidisciplinary team of experts.
Develop workflows to enhance the efficiency and accuracy of note-taking in healthcare settings.
Collaborate with data engineers, product teams, and clinical experts to ensure solutions align with healthcare workflows and standards.
Address healthcare-specific challenges such as HIPAA compliance, medical terminology integration, and adherence to standards like ICD, CPT, and HL7/FHIR.
Stay updated with the latest advancements in GenAI, NLP, computer vision, and healthcare AI technologies.
Perform other duties that support the overall objective of the position.
Education Required:
Bachelor's degree (or higher) in Computer Science, Data Science, Artificial Intelligence, or a related field.
Or, any combination of education and experience which would provide the required qualifications for the position.
Experience Required:
Experience working with multiple GenAI models, including open-source.
Experience with natural language processing (NLP) and computer vision techniques.
Experience in data preprocessing and model deployment.
Knowledge, Skills & Abilities:
Knowledge of: Strong foundational knowledge of generative AI techniques and frameworks (e.g., Transformers, GPT, diffusion models). Proficiency in machine learning tools and frameworks such as TensorFlow, PyTorch, or Scikit-learn. Familiarity with healthcare data structures, workflows, and challenges (e.g., EMR/EHR integration, ICD/CPT coding, HL7/FHIR). Knowledge of voice recognition and transcription technologies is a plus.
Skill in: Strong programming skills in Python or similar languages. Strong problem-solving skills and a growth mindset.
Ability to: Learn and adapt eagerly in a fast-paced, mission-driven environment.
The company has reviewed this to ensure that essential functions and basic duties have been included. It is intended to provide guidelines for job expectations and the employee's ability to perform the position described. It is not intended to be construed as an exhaustive list of all functions, responsibilities, skills and abilities. Additional functions and requirements may be assigned by supervisors as deemed appropriate. This document does not represent a contract of employment, and the company reserves the right to change this job description and/or assign tasks for the employee to perform, as the company may deem appropriate.
NextGen Healthcare is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.