Data Scientist
Data scientist job in Phoenix, AZ
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments
IBM Data Power Consultant || Only USC and Green Card
Data scientist job in Phoenix, AZ
IBM Data Power Consultant
Duration: 12+ Months
Only US Citizens and Green Card holders
Job Details:
Mandatory Skill Set
• IBM DataPower: WSP, XFW, MPGW, FSH
• XI52, IDG, X3 devices
• Certificate and Encryption policy experience
• XSLT, XML, SOA, WSDL, REST, Schema, JSON, WTX
• Splunk and ELF experience
Detailed Job Description
• Must be a genuine candidate able to clear background verification (BGV) at both the Infosys and client ends.
• At least 8 years of relevant Information Technology experience and a minimum of 5 years of hands-on working experience in IBM DataPower, with 4 years of experience in application development and production support.
• Worked on XI52, IDG, and X3 devices. Working knowledge of other devices like XB50/52 and XC10 is an added advantage.
• Must have experience in creating WSP, XFW, MPGW, FSH and Log targets.
• Able to understand and work with AAA policy.
• Should have worked with Certificates and Encryption.
• Programming in XSLT; proficient in XML; sound knowledge of SOA, JavaScript, Web services, SOAP, WSDL, and REST.
• Familiarity with JSON-to-XML, XML-to-JSON, and JSON-to-SOAP transformations using XSLT.
• Creating XML schemas and JSON schemas; development using XSLT and WebSphere Transformation Extender (WTX); and configuration using DataPower.
• Error Handling and troubleshooting in DataPower
• DataPower extension functions.
• Should have knowledge in SoapUI, Postman and other testing tools for DataPower. Knowledge in JMeter is an added advantage.
• A background in software lifecycle management and an understanding of the testing process.
• Working experience on Splunk and ELF.
• Design, build, test, document, and implement software applications
• Troubleshoot issues
• Collaborate with developers and stakeholders
• Participate in training and assessments
• Provide development, production support, maintenance and technical consulting for software components & infrastructure
• Should be able to work flexible hours and participate in production support activities.
• This position requires working from the client location; no remote work.
Minimum years of experience needed: 5 years on the required skill set.
Certifications needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute
• Work directly with the client
Thank You
Aakash Dubey
Data Engineer
Data scientist job in Tempe, AZ
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Engineer
Data scientist job in Scottsdale, AZ
📍 Scottsdale, AZ (Hybrid 3 days a week in office)
About the Opportunity
A leading renewable energy organization is seeking a Data Engineer to join its high-growth Performance Engineering team. This is an exceptional role for someone who wants to work at the intersection of data engineering, analytics, and clean energy, supporting a portfolio of utility-scale solar, energy-storage, and solar-plus-storage assets across the U.S.
If you thrive in an environment focused on teamwork, continuous improvement, and driving real operational impact, this role offers both challenge and meaningful purpose.
What You'll Do
As an Associate Data Engineer, you'll help optimize the performance of a large fleet of renewable energy assets by designing and maintaining modern data architectures. Your work will turn vast amounts of operational data into actionable insights for engineering and asset management teams.
Key responsibilities include:
Build and maintain scalable data pipelines using Snowflake or Databricks
Integrate large, diverse datasets from performance systems, CMMS platforms, and drone inspection imagery
Analyze asset performance data to detect underperformance, quantify energy losses, and support predictive maintenance modeling
Manage the full data lifecycle from ingestion (S3) to processing, analysis, and visualization
Evaluate and improve systems, processes, and workflows across engineering teams
Develop metadata documentation and support strong data governance practices
What We're Looking For
Bachelor's degree in Data Science, Computer Science, Engineering, Statistics, or a related quantitative field
3-4 years of experience in a data-focused role
Strong hands-on expertise with Snowflake or Databricks, plus cloud experience with AWS (S3, EC2, Glue, SageMaker)
Experience with Apache Spark for distributed computing (highly preferred)
Expert-level SQL and strong Python skills (Pandas, NumPy)
Experience in statistical modeling, ML, and mathematical modeling
Experience working with aerial or geospatial imagery (OpenCV, Scikit-image, GeoPandas, PyTorch, TensorFlow)
Ability to collaborate effectively, take ownership, and drive process improvements
Strong communication skills and the ability to align technical work with business goals
Why You'll Love Working Here
This organization invests heavily in the well-being, growth, and success of its team members. You can expect:
Flexible, hybrid work environment
Generous PTO
401(k) with 6% company match
Tuition reimbursement
Paid parental & caregiver leave
Inspiring, mission-driven culture
Strong opportunities for professional growth and development
Data Architect
Data scientist job in Phoenix, AZ
The Senior Data Engineer & Test in Phoenix 85029 will play a pivotal role in delivering major data engineering initiatives within the Data & Advanced Analytics space. This position requires hands-on expertise in building, deploying, and maintaining robust data pipelines using Python, PySpark, and Airflow, as well as designing and implementing CI/CD processes for data engineering projects.
Key Responsibilities
1. Data Engineering: Design, develop, and optimize scalable data pipelines using Python and PySpark for batch and streaming workloads.
2. Workflow Orchestration: Build, schedule, and monitor complex workflows using Airflow, ensuring reliability and maintainability.
3. CI/CD Pipeline Development: Architect and implement CI/CD pipelines for data engineering projects using GitHub, Docker, and cloud-native solutions.
4. Testing & Quality: Apply test-driven development (TDD) practices and automate unit/integration tests for data pipelines.
5. Secure Development: Implement secure coding best practices and design patterns throughout the development lifecycle.
6. Collaboration: Work closely with Data Architects, QA teams, and business stakeholders to translate requirements into technical solutions.
7. Documentation: Create and maintain technical documentation, including process/data flow diagrams and system design artifacts.
8. Mentorship: Lead and mentor junior engineers, providing guidance on coding, testing, and deployment best practices.
9. Troubleshooting: Analyze and resolve technical issues across the data stack, including pipeline failures and performance bottlenecks.
10. Cross-Team Knowledge Sharing: Cross-train team members outside the project team (e.g., operations support) for full knowledge coverage. Includes all of the above skills, plus the following:
· Minimum of 10+ years overall IT experience
· Experienced in waterfall, iterative, and agile methodologies
Technical Requirements:
1. Hands-on Data Engineering: Minimum of 5+ years of practical experience building production-grade data pipelines using Python and PySpark.
2. Airflow Expertise: Proven track record of designing, deploying, and managing Airflow DAGs in enterprise environments.
3. CI/CD for Data Projects: Ability to build and maintain CI/CD pipelines for data engineering workflows, including automated testing and deployment.
4. Cloud & Containers: Experience with containerization (Docker) and cloud platforms (GCP) for data engineering workloads. Appreciation for twelve-factor design principles.
5. Python Fluency: Ability to write object-oriented Python code, manage dependencies, and follow industry best practices.
6. Version Control: Proficiency with Git for source code management and collaboration (commits, branching, merging, GitHub/GitLab workflows).
7. Unix/Linux: Strong command-line skills in Unix-like environments.
8. SQL: Solid understanding of SQL for data ingestion and analysis.
9. Collaborative Development: Comfortable with code reviews, pair programming, and using remote collaboration tools effectively.
10. Engineering Mindset: Writes code with an eye for maintainability and testability; excited to build production-grade software.
11. Education: Bachelor's or graduate degree in Computer Science, Data Analytics or related field, or equivalent work experience.
GCP DATA ENGINEER
Data scientist job in Phoenix, AZ
Key Skills Required:
6+ years of experience in Data Engineering with an emphasis on Data Warehousing and Data Analytics.
4+ years of experience with Python with working knowledge on Notebooks.
4+ years of experience in the design and build of scalable data pipelines that handle extraction, transformation, and loading.
4+ years of experience with one of the leading public clouds, including 2+ years with GCP.
2+ years hands on experience on GCP Cloud data implementation projects (Dataflow, DataProc, Cloud Composer, Big Query, Cloud Storage, GKE, Airflow, etc.).
2+ years of experience with Kafka, Pub/Sub, Docker, Kubernetes
Architecture design and documentation experience of 2+ years
Troubleshoot, optimize data platform capabilities
Ability to work independently, solve problems, and update stakeholders.
Analyze, design, develop and deploy solutions as per business requirements.
Strong understanding of relational and dimensional data modeling.
Experience in DevOps and CI/CD related technologies.
Excellent written, verbal communication skills, including experience in technical documentation and ability to communicate with senior business managers and executives.
Data Engineer
Data scientist job in Phoenix, AZ
Hybrid - 2-3 days on site
Phoenix, AZ
We're looking for a Data Engineer to help build the cloud-native data pipelines that power critical insights across our organization. You'll work with modern technologies, solve real-world data challenges, and support analytics and reporting systems that drive smarter decision-making in the transportation space.
What You'll Do
Build and maintain data pipelines using Databricks, Azure Data Factory, and Microsoft Fabric
Implement incremental and real-time ingestion using medallion architecture
Develop and optimize complex SQL and Python transformations
Support legacy platforms (SSIS, SQL Server) while contributing to modernization efforts
Troubleshoot data quality and integration issues
Participate in proof-of-concepts and recommend technical solutions
What You Bring
5+ years designing and building data solutions
Strong SQL and Python skills
Experience with ETL pipelines and Data Lake architecture
Ability to collaborate and adapt in a fast-moving environment
Preferred: Azure services, cloud ETL tools, Power BI/Tableau, event-driven systems, NoSQL databases
Bonus: Experience with Data Science or Machine Learning
Benefits
Medical, dental, and vision from day one · PTO & holidays · 401(k) with match · Lifestyle account · Tuition reimbursement · Voluntary benefits · Employee Assistance Program · Well-being & culture programs · Professional development support
Data Engineer
Data scientist job in Phoenix, AZ
Hi,
We have a job opportunity for a Data Engineer Analyst role.
Data Analyst / Data Engineer
Expectations: Our project is data analysis heavy, and we are looking for someone who can grasp business functionality and translate that into working technical solutions.
Job location: Phoenix, Arizona.
Type - Hybrid model (3 days a week in office)
Job Description: Data Analyst / Data Engineer (6+ years of relevant experience with the required skill set)
Summary:
We are seeking a Data Analyst Engineer with a minimum of 6 years in data engineering, data analysis, and data design. The ideal candidate will have strong hands-on expertise in Python and relational databases such as Postgres, SQL Server, or MySQL, and a good understanding of data modeling theory and normalization forms.
Required Skills:
6+ years of experience in data engineering, data analysis, and data design
Good proficiency in Python
Strong experience with relational databases: Postgres, SQL Server, or MySQL
Expertise in writing complex SQL queries and optimizing database performance
Solid understanding of data modeling theory and normalization forms
Good communicator with the ability to articulate business problems for technical solutions
Screening Questions:
What was your approach to data analysis in your previous/current role, and what methods or techniques did you use to extract insights from large datasets?
Do you have any formal training or education in data modeling? If so, please provide details about the course, program, or certification you completed, including when you received it.
What are the essential factors that contribute to a project's success, and how do you plan to leverage your skills and expertise to ensure our project meets its objectives?
Key Responsibilities:
Analyze complex datasets to derive actionable insights and support business decisions.
Model data solutions for high performance and reliability.
Work extensively with Python for data processing and automation.
Develop and optimize SQL queries for Postgres, SQL Server, or MySQL databases.
Ensure data integrity, security, and compliance across all data solutions.
Collaborate with cross-functional teams to understand data requirements and deliver solutions.
Communicate effectively with stakeholders and articulate business problems to drive technical solutions.
Secondary Skills:
Experience deploying applications in Kubernetes.
API development using FastAPI or Django.
Familiarity with containerization (Docker) and CI/CD tools.
Regards,
Suhas Gharge
Senior Data Engineer
Data scientist job in Phoenix, AZ
As the Senior Data Engineer, you will help build, maintain, and optimize the data infrastructure that powers our decision-making and product development. You'll work with modern tools like Snowflake, Metabase, Mage, Airbyte, and MySQL to enable data visualization, data mining, and efficient access to high-quality insights across our GradGuard ecosystem. This is a key opportunity for someone with around five years of experience who's passionate about turning data into impact.
This position is based in Phoenix, AZ.
Challenges You'll Focus On:
Design, build, and maintain scalable data pipelines and architectures using Mage (or similar orchestrators) and Airbyte for ELT processes.
Ensure efficient and reliable data ingestion, transformation, and loading into Snowflake.
Perform data mining and exploratory data analysis to uncover trends, patterns, and business opportunities.
Ensure the quality, consistency, and reliability of the underlying data.
Promote best practices and quality standards for the data engineering team.
Partner with Data Science, Business Intelligence, and Product teams to define data needs and ensure data infrastructure supports strategic initiatives.
Optimize SQL queries and data models for performance and scalability.
Contribute to improving data standards, documentation, and governance across all data systems.
Help ensure compliance with data security, privacy, and regulatory requirements in the insurance domain.
The person we're looking for has a proven, successful background with:
5+ years of experience as a Data Engineer, Data Analyst, or similar role.
Experience leading and mentoring a data engineering team.
Proficient with SQL for data transformation, querying, and performance optimization.
Proficiency with Python or other languages like Java, JavaScript, and/or Scala.
Proficiency in connecting with APIs for data loading.
Hands-on experience with:
Snowflake (Data Warehousing). Beyond basic SQL, you must understand Snowflake's unique architecture and features, including data warehouse design, performance optimization, and data loading. Knowledge of advanced Snowflake features is nice-to-have.
Mage (Data Pipeline Orchestration) or experience with other orchestration tools.
Airbyte (ELT Processes) or experience with ELT tools.
Metabase (Data Visualization & Dashboards) or familiarity with other visualization BI tools.
Comfort with data modeling, ETL/ELT best practices, and cloud-based data architectures (preferably AWS).
Excellent problem-solving skills, attention to detail, and ability to work cross-functionally.
Prior experience working in an insurance, fintech, or highly regulated industry is a plus.
Beyond a fulfilling and challenging role, you'll get:
A competitive salary.
Opportunity to enroll in comprehensive health, dental, and vision insurance. We pay 100% of employee premiums and 75% of your family's premiums.
A lifestyle spending account where you can receive up to $400 in reimbursements for wellness activities.
401(K) retirement plan with company matching up to 5% of compensation deferred. Employee and employer contributions are 100% vested.
Student loan and education assistance, after one year of employment at GradGuard. We're learners and embrace education.
Unlimited PTO after completing the 30-day introductory period. Plus, 12 paid holidays and paid parental leave.
About GradGuard
As the leader in college tuition and renters insurance, GradGuard serves more than 1.7 million students across 1,900+ institutions.
Our national technology platform embeds innovative insurance protections into the enrollment processes of over 650 institutional partners, empowering schools to increase college completion rates and reduce the financial impact of preventable losses.
GradGuard supports College Life Protected, a social purpose entity that promotes research, professional development, and best practices that strengthen campus communities, families, society and the economic competitiveness of our nation.
Recognized as one of the Top 100 Financial Technology Companies of 2024 by The Financial Technology Report, a RISE Internship Award winner, and a Phoenix Business Journal Best Places to Work finalist, GradGuard remains committed to innovation, excellence, and supporting students and families.
Hear from our students, families, and partners: **********************************
Those that succeed at our company:
Make it happen by turning challenges into opportunities.
Do the right thing even when it's difficult.
Demand excellence from yourself and others.
Learn for life and stay curious.
Enjoy the journey, not just the results.
The above just so happen to be our core values. These values are at the heart of our mission to educate and protect students from the risks of college life, empowering us to create meaningful experiences and make a positive impact.
GradGuard is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Data Engineer
Data scientist job in Phoenix, AZ
Akkodis is seeking a Data Engineer II who will work on all phases of the software development lifecycle, supporting a variety of projects and applications. This is a mid-level position, with an expectation of continuing to build technical skills with increasing complexity and responsibility on projects. Candidates for this position will have prior experience delivering efficient, secure, high-performance, and easy-to-support solutions, primarily within the Microsoft suite of technologies.
Hybrid 3 days a week onsite in Phoenix, AZ
Pay: $48 - $55/hr
Essential Functions:
Analyze, fix, and test production support tickets for a variety of systems
Build ETL which loads complex data warehouse objects including dimensions, hierarchies, fact tables, and aggregates
Provide clear and concise explanation and interpretation of the business needs to reach agreement and ensure understanding of needs
Work with a cross-functional team to understand business issues or identify opportunities, and gather design and system requirements
Build and deploy customer-facing BI solutions (Analysis cubes, Power BI) and build the reporting metadata layer
Develop and support data marts and cubes to support functional area data consumers
Support and create data pipelines
Gather, analyze, and document user reporting and/or data feed requirements
Provide escalated support for junior team members
Perform any other assigned tasks deemed necessary by management
Knowledge, Skills and Abilities:
Bachelor's degree in Computer Science or equivalent work experience
3+ years of experience with Microsoft SQL and Microsoft Azure technologies including Power BI
1+ years of experience with Azure Data Lake and Informatica
Strong verbal and written communication skills, analytical and decision-making skills, and ability to work on cross-functional teams
Solid understanding and usage of source control solutions
Change and release management skills
Requirements gathering and documentation skills
Technical Skills:
Databases - MS SQL Server
Tools - Visual Studio, SQL Server Management Studio, Azure DevOps, Redgate, Power BI, Azure Data Lake, Azure API Gateway, Informatica
Languages - SQL, DAX, PowerShell
Other - SharePoint, Microsoft Office Suite, Visio
Core Competencies:
Communication: Convey information, ideas, and feedback clearly and concisely in an engaging manner that helps others understand and retain the message; listening actively to others.
Customer Focus: Place a high priority on the customer's perspective when making decisions and taking action; implementing service practices that meet the customers' and own organization's needs.
Driving for Results: Set SMART goals and measure progress; tenaciously working to meet or exceed goals and making continuous improvement. Seeking innovative ways to solve problems that result in unique and differentiated solutions.
Positive Approach: Demonstrate a positive attitude in the face of difficult or challenging situations; provide an uplifting (yet realistic) outlook on what the future holds and the opportunities it might present.
Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits, and a 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State, or local law; and Holiday pay upon meeting eligibility criteria.
Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs which are direct hire to a client.
Data Scientist
Data scientist job in Arizona
Join us for an exciting career with the leading provider of supplemental benefits!
Our Promise
Through skill-building, leadership development and philanthropic opportunities, we provide opportunities to build communities and grow your career, surrounded by diverse colleagues with high ethical standards.
The Data Scientist is responsible for collecting, analyzing, and interpreting large datasets to help organizations make data-driven decisions.
Job Summary:
The Data Scientist is responsible for leveraging their expertise in statistics, computer science and business acumen to extract meaningful insights from complex datasets. This person will play a crucial role in helping organizations make data-driven decisions, solve challenging problems and drive data innovation.
Competencies:
Functional:
Collaborate with colleagues in other departments to improve business outcomes
Identify and mine reliable internal and external data sources
Design custom tools to optimize data mining, cleaning, validation and analysis tasks
Develop and apply custom data models and algorithms to data sets
Develop tools and testing models to ensure data accuracy
Create and present reports that detail your findings, recommendations and solutions
Core:
Data Acquisition and Preparation: Identifying relevant data sources, collecting, cleaning, processing, and transforming raw data into a usable format. This often involves dealing with both structured and unstructured data.
Exploratory Data Analysis (EDA): Investigating datasets to uncover patterns, trends, relationships, and anomalies. This stage helps in formulating hypotheses and understanding the data's potential.
Statistical Analysis and Modeling: Applying statistical methods and building mathematical models to analyze data, test hypotheses, and make predictions. This includes a strong understanding of statistical concepts, regressions, and probability.
Machine Learning and Predictive Modeling: Developing, training, and deploying machine learning algorithms and predictive models to forecast outcomes, classify data, and automate processes. This involves selecting appropriate algorithms, feature engineering, and model evaluation.
Data Mining and Pattern Recognition: Utilizing techniques to discover hidden patterns and insights within large datasets, which can be used for tasks like fraud detection or customer behavior prediction.
Experimentation and Hypothesis Testing: Designing and conducting experiments (e.g., A/B testing) to validate hypotheses, measure the impact of changes, and optimize solutions.
Data Visualization and Communication: Presenting complex findings and insights in a clear, concise, and engaging manner to both technical and non-technical stakeholders. This often involves creating reports, dashboards, and compelling visual representations of data.
Behavioral:
Collegiality: building strong relationships company-wide; being approachable and helpful; ability to mentor and support team growth.
Initiative: readiness to lead or take action to achieve goals.
Communicative: ability to relay issues, concepts, and ideas to others easily orally and in writing.
Member-focused: going above and beyond to make our members feel seen, valued, and appreciated.
Detail-oriented and thorough: managing and completing details of assignments without too much oversight.
Flexible and responsive: managing new demands, changes, and situations.
Critical Thinking: effectively troubleshoot complex issues, problem solve and multi-task.
Integrity & responsibility: acting with a clear sense of ownership for actions, decisions and to keep information confidential when required.
Collaborative: ability to represent your own interests while being fair to those representing other or competing ideas in search of a workable solution for all parties.
Minimum Qualifications:
Bachelor's degree in Computer Science, Data Science, Math, or a related field.
5+ years' experience working in healthcare
Demonstrated expertise in statistics, computer science and business acumen to extract meaningful insights from complex datasets.
Expertise in designing multi-cloud or hybrid data ecosystems.
5+ years' experience using Java, JavaScript, Python, R or SQL
Experience with business intelligence and data analytics tools such as PowerBI
Familiarity with machine learning techniques and when they should be used
Familiarity with statistical and data mining techniques and when it is appropriate to use them
Strong problem-solving skills
Preferred Qualifications:
Master's degree in Data Science, Computer Science, or a related field.
Experience with AI-driven data architecture and advanced analytics solutions.
Proficiency in cloud-native database management and DevOps practices.
At Avēsis, we strive to design equitable and competitive compensation programs. Base pay within the range is ultimately determined by a candidate's skills, expertise, or experience. In the United States, we have three geographic pay zones. For this role, our current pay ranges for new hires in each zone are:
Zone A: $81,650.00 - $136,090.00
Zone B: $89,060.00 - $148,440.00
Zone C: $95,840.00 - $159,730.00
FLSA Status: Salary/Exempt
This role may also be eligible for benefits, bonuses, and commission.
Please visit Avesis Pay Zones for more information on which locations are included in each of our geographic pay zones. However, please confirm the zone for your specific location with your recruiter.
We Offer
Meaningful and challenging work opportunities to accelerate innovation in a secure and compliant way.
Competitive compensation package.
Excellent medical, dental, supplemental health, life and vision coverage for you and your dependents with no wait period.
Life and disability insurance.
A great 401(k) with company match.
Tuition assistance, paid parental leave and backup family care.
Dynamic, modern work environments that promote collaboration and creativity to develop and empower talent.
Flexible time off, dress code, and work location policies to balance your work and life in the ways that suit you best.
Employee Resource Groups that advocate for inclusion and diversity in all that we do.
Social responsibility in all aspects of our work. We volunteer within our local communities, create educational alliances with colleges, and drive a variety of sustainability initiatives.
How To Stay Safe
Avēsis is aware of fraudulent activity by individuals falsely representing themselves as Avēsis recruiters. In some instances, these individuals may even contact applicants with a job offer letter, ask applicants to make purchases (e.g., a laptop or gift cards) from a designated vendor, have applicants fill out W-2 forms, or ask that applicants ship or send packages of goods to the company.
Avēsis would never make such requests to applicants at any time throughout our job application process. We also would never ask applicants for personal information, such as passport numbers, bank account numbers, or social security numbers, during our process. Our recruitment process takes place by phone and via trusted business communication platforms (e.g., Zoom, Webex, Microsoft Teams). Any emails from Avēsis recruiters will come from a verified email address ending in @Avēsis.com.
We urge all applicants to exercise caution. If something feels off about your interactions, we encourage you to suspend or cease communications. If you are unsure of the legitimacy of a communication you have received, please reach out to ********************.
To learn more about protecting yourself from fraudulent activity, please refer to this article link (**************************************************). If you believe you were a victim of fraudulent activity, please contact your local authorities or file a complaint (Link: *******************************) with the Federal Trade Commission. Avēsis is not responsible for any claims, losses, damages, or expenses resulting from individuals unaffiliated with the company or their fraudulent activity.
Equal Employment Opportunity
At Avēsis, We See You. We celebrate differences and are building a culture of inclusivity and diversity. We are proud to be an Equal Employment Opportunity employer that considers all qualified applicants and does not discriminate against any person based on ancestry, age, citizenship, color, creed, disability, familial status, gender, gender expression, gender identity, marital status, military or veteran status, national origin, race, religion, sexual orientation, or any other characteristic. At Avēsis, we believe that, to operate at the peak of excellence, our workforce needs to represent a rich mixture of diverse people, all focused on providing a world-class experience for our clients. We focus on recruiting, training and retaining those individuals that share similar goals. Come Dare to be Different at Avēsis, where We See You!
Data Scientist, NLP & Language Models
Data scientist job in Phoenix, AZ
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
Datavant is looking for an enthusiastic and meticulous Data Scientist to join our growing team, which builds machine learning models for use across Datavant in multiple verticals and for multiple customer types.
As part of the Data Science team, you will play a crucial role in developing new product features and automating existing internal processes to drive innovation across Datavant. You will work with tens of millions of patients' worth of healthcare data to develop models, contributing to the entirety of the model development lifecycle from ideation and research to deployment and monitoring. You will collaborate with an experienced team of Data Scientists and Machine Learning Engineers along with application Engineers and Product Managers across the company to achieve Datavant's AI-enabled future.
**You Will:**
+ Play a key role in the success of our products by developing models for NLP (and other) tasks.
+ Perform error analysis, data cleaning, and other related tasks to improve models.
+ Collaborate with your team by making recommendations for the development roadmap of a capability.
+ Work with other data scientists and engineers to optimize machine learning models and insert them into end-to-end pipelines.
+ Understand product use-cases and define key performance metrics for models according to business requirements.
+ Set up systems for long-term improvement of models and data quality (e.g. active learning, continuous learning systems, etc.).
**What You Will Bring to the Table:**
+ Advanced degree in computer science, data science, statistics, or a related field, or equivalent work experience.
+ 4+ years of experience with data science and machine learning in an industry setting.
+ 4+ years' experience with Python.
+ Experience designing and building NLP models for tasks such as classification, named-entity recognition, and dependency parsing.
+ Proficiency with standard data analysis toolkits such as SQL, Numpy, Pandas, etc.
+ Proficiency with deep learning frameworks like PyTorch (preferred) or TensorFlow.
+ Demonstrated ability to drive results in a team environment and contribute to team decision-making in the face of ambiguity.
+ Strong time management skills and demonstrated experience prioritizing work to meet tight deadlines.
+ Initiative and ability to independently explore and research novel topics and concepts as they arise.
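As a rough illustration of the stack named above (PyTorch for an NLP-style classification task), the sketch below trains a toy bag-of-words classifier. The vocabulary, labels, and training examples are invented for illustration and do not reflect Datavant's actual models.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and two document classes (hypothetical data)
vocab = {"fever": 0, "cough": 1, "invoice": 2, "payment": 3}

def bag_of_words(tokens):
    """Encode a token list as a fixed-length count vector."""
    vec = torch.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1.0
    return vec

X = torch.stack([bag_of_words(["fever", "cough"]),
                 bag_of_words(["invoice", "payment"])])
y = torch.tensor([0, 1])  # 0 = clinical note, 1 = billing document

# A linear classifier is the simplest possible "model" here
model = nn.Linear(len(vocab), 2)
opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

preds = model(X).argmax(dim=1)
print(preds.tolist())  # the separable toy set is recovered: [0, 1]
```

In practice a role like this would involve transformer-based models, error analysis, and production monitoring rather than a linear toy, but the train/evaluate loop has the same shape.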
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$136,000-$170,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement here (************************************************** . Know Your Rights (*********************************************************************** , explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it here, (************************************************************** Id=**********48790029&layout Id=**********48795462) by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request, you can find instructions for locating it here (******************************************************************************************************* . Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy (**************************************** .
Data Scientist
Data scientist job in Arizona City, AZ
Java Full Stack Developer (Job Code: J2EE)
3 to 10 years of experience developing web-based applications in Java/J2EE technologies
Knowledge of RDBMS and NoSQL data stores and polyglot persistence (Oracle, MongoDB etc.)
Knowledge of event sourcing and distributed message systems (Kafka, RabbitMQ)
AngularJS, React, Backbone or other client-side MVC experience
Experience with JavaScript build tools and dependency management (npm, bower, grunt, gulp)
Experience creating responsive designs (Bootstrap, mobile, etc.)
Experience with unit and automation testing (Jasmine, Protractor, JUnit)
Expert knowledge of build tools and dependency management (gradle, maven)
Knowledge of Domain-Driven Design concepts and microservices
Participate in software design and development using modern Java and web technology stack.
Should be proficient in Spring Boot and Angular
Sound understanding of microservices architecture
Good understanding of event-driven architecture
Experience building Web Services (REST/SOAP)
Experience in writing JUnit tests
Good to have experience in TDD
Expert in developing highly responsive web applications using Angular 4 or above
Good Knowledge of HTML/HTML5/CSS, JavaScript/AJAX, and XML
Good understanding of SQL and relational databases, and NoSQL databases
Familiarity with design patterns and should be able to design small to medium complexity modules independently
Experience with Agile or similar development methodologies
Experience with a versioning system (e.g., CVS/SVN/Git)
Experience with agile development methodologies including TDD, Scrum and Kanban
Strong verbal communications, cross-group collaboration skills, analytical, structured and strategic thinking.
Great interpersonal skills, cultural awareness, belief in teamwork
Collaborating with product owners, stakeholders and potentially globally distributed teams
Work cross-functional in an Agile environment
Excellent problem-solving, organizational and analytical skills
Qualification: BE / B.Tech / MCA / ME / M.Tech
****************************
Junior Data Scientist
Data scientist job in Scottsdale, AZ
Job Title: Junior Data Scientist - Multi-Agent Systems, Analytics, and Insights
Seniority Level: Intern to Junior
Industry: AI and Healthcare SaaS
Employment Type: Full-time
Mission
Transform the way patients and providers communicate through intelligent, data-driven systems.
2025 Goal
Support one million patients with real-time AI-powered communication and operational insight.
About Peerlogic
Peerlogic is redefining front office operations for dental and veterinary practices. Our conversational AI assistant, Aimee, turns every call into a revenue opportunity. We are building the most intelligent patient engagement platform in healthcare, and we're looking for builders who want to help ship high-impact product features powered by data.
Role Overview
We're hiring a Junior Data Scientist to join our multi-agent systems and product analytics team. This is a product-oriented role, not a research-focused one: you'll build features, monitor system performance, recommend improvements, and collaborate with engineering in an agile, fast-paced environment. You'll use data to improve how our AI agents work and scale across thousands of practices.
What You Will Own
• Build and enhance product features based on LangGraph multi-agent orchestration
• Monitor system performance and behavior in production
• Investigate and triage bugs or regressions through data analysis
• Recommend improvements and automation opportunities
• Collaborate with engineers, designers, and product managers in an agile product team
• Focus on delivering high-quality, reliable solutions that impact real users
Requirements
• Master's degree in Data Science, Computer Science, AI, or related field preferred
• Equivalent hands-on experience in applied analytics or data-driven product work also considered
• Strong Python skills (pandas, NumPy, seaborn, matplotlib); working knowledge of SQL
• Comfortable working with production data, debugging with metrics, and making actionable recommendations
• Experience with agile development and cross-functional collaboration
• Familiarity with LangGraph, LangChain, or AI orchestration tools is a plus
• Exceptional communication and a bias toward execution and product delivery
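As a minimal sketch of the monitoring work described above, the pandas snippet below computes per-practice metrics from a call log. The schema, column names, and numbers are hypothetical, not Peerlogic's actual data model.

```python
import pandas as pd

# Hypothetical call log for an AI voice agent
calls = pd.DataFrame({
    "practice_id": [1, 1, 2, 2, 2],
    "handled_by_agent": [True, False, True, True, False],
    "duration_sec": [120, 300, 90, 60, 240],
})

# Per-practice containment rate (share of calls the agent handled
# end-to-end) and average handle time, two typical production metrics
summary = (calls.groupby("practice_id")
                .agg(containment_rate=("handled_by_agent", "mean"),
                     avg_duration=("duration_sec", "mean")))
print(summary)
```

Debugging a regression often starts exactly here: a metric like `containment_rate` drifts for one segment, and the analysis drills down from the aggregate to the raw rows.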
Why Peerlogic
• Work on real features that ship into production and improve patient experiences
• In-office mentorship, fast feedback loops, and close collaboration with senior talent
• Competitive compensation and equity
• Defined salary progression at 3, 6, and 12 months
Data Scientist
Data scientist job in Mesa, AZ
Title: Data Scientist
Department: Baseball Operations
Reporting to: Assistant GM, Baseball Development & Technology
Job Classification: Full-time, Exempt
Full-time Location (City, State): Mesa, AZ
About the A's:
The A's are a baseball team founded in 1901. They have a rich history, having won nine World Series championships and 15 American League pennants. The A's are known for pioneering the "Moneyball" approach to team-building, which focuses on using statistical analysis to identify undervalued players.
In addition to their success on the field, the A's have a positive and dynamic work culture, and have twice been recognized by Front Office Sports as one of the Best Employers in Sports.
The A's are defined by their core pillars of being Dynamic, Innovative, and Inclusive. Working for the A's offers the opportunity to be part of an innovative organization that values its employees and strives to create a positive work environment.
Description:
The A's are hiring for a full-time Data Scientist for the Baseball Operations Department. The Data Scientist will construct statistical models that inform decision-making in all facets of Baseball Operations. This position requires strong experience in statistics, data analytics, and computer science. This position is primarily based out of Mesa, AZ.
Responsibilities:
Design, build, and maintain predictive models to support player evaluation, acquisition, development, and performance optimization.
Collaborate with Baseball Analytics staff to integrate analytical findings into decision-making tools and ensure seamless implementation.
Analyze and synthesize large-scale data, creating actionable insights for stakeholders within Baseball Operations.
Research and implement advanced statistical methods, including time series modeling, spatial statistics, boosting models, and Bayesian regression, to stay on the cutting edge of sabermetric research.
Develop and maintain robust data modeling pipelines and workflows in cloud environments to ensure scalability and reliability of analytical outputs.
Produce clear, concise written reports and compelling data visualizations to communicate insights effectively across diverse audiences.
Stay current with advancements in data science, statistical methodologies, and player evaluation techniques to identify and propose new opportunities for organizational improvement.
Mentor team members within the Baseball Operations department, fostering a collaborative and innovative research environment.
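As a hedged sketch of the predictive-modeling work listed above, the snippet below fits a boosting model with scikit-learn (an assumed toolkit; the posting names Python but no specific library). The features and target are synthetic and carry no baseball meaning.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic features and a nonlinear target with light noise
X = rng.normal(size=(200, 2))
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

# Shallow trees boosted sequentially, as in the "boosting models"
# the responsibilities mention
model = GradientBoostingRegressor(n_estimators=100, max_depth=2)
model.fit(X, y)

r2 = model.score(X, y)  # in-sample fit is strong on this easy target
```

A production version would validate out of sample, tune hyperparameters, and ship through the cloud pipelines the responsibilities describe; this only shows the fit/score core.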
Requirements:
PhD in Mathematics, Statistics, Computer Science, or a related quantitative field.
Proficiency in SQL, R, Python, or other similar programming languages.
Strong understanding of modern statistical and machine learning methods, including experience with predictive modeling techniques.
Proven experience productionizing machine learning models in cloud environments.
Ability to communicate complex analytical concepts effectively to both technical and non-technical audiences.
Demonstrated ability to independently design, implement, and present rigorous quantitative research.
Passion for sabermetric research and baseball analytics with a deep understanding of player evaluation methodologies.
Strong interpersonal and mentoring skills with a demonstrated ability to work collaboratively in a team-oriented environment.
Preferred Qualifications:
Expertise in time series modeling, spatial statistics, boosting models, and Bayesian regression.
Previous experience in sports analytics, ideally baseball, is a plus.
Familiarity with integrating biomechanical data into analytical frameworks.
The A's Diversity Statement:
Diversity, Equity, and Inclusion are in our organizational DNA. Our commitment to these values is unwavering, on and off the field. Together, we continue to build an inclusive, innovative, and dynamic culture that encourages, supports, and celebrates belonging and amplifies diverse voices. Combining a collaborative and innovative work environment with talented and diverse team members, we've created a workforce in which every team member has the tools to reach their full potential.
Equal Opportunity Consideration:
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, age, disability, gender identity, marital or veteran status, or any other protected class.
Data Scientist, Product Analytics
Data scientist job in Phoenix, AZ
As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
Product leadership: You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.
Analytics: You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
**Required Skills:**
Data Scientist, Product Analytics Responsibilities:
1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions
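The goal-setting and measurement work above ultimately rests on experimentation. A minimal sketch of an A/B readout with SciPy follows; the metric, sample sizes, and lift are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic per-user engagement metric for control vs. treatment
control = rng.normal(loc=10.0, scale=2.0, size=5000)
treatment = rng.normal(loc=10.2, scale=2.0, size=5000)

# Welch's t-test: does the observed lift clear the noise floor?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() / control.mean() - 1

print(f"lift: {lift:.1%}, p-value: {p_value:.2e}")
```

Real product experiments add guardrail metrics, pre-registered decision criteria, and corrections for multiple comparisons on top of this basic significance test.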
**Minimum Qualifications:**
6. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
7. A minimum of 6 years of work experience in analytics (minimum of 4 years with a Ph.D.)
8. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent practical experience
9. Experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), and/or statistical/mathematical software (e.g. R)
**Preferred Qualifications:**
10. Master's or Ph.D. Degree in a quantitative field
**Public Compensation:**
$173,000/year to $242,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Data Scientist
Data scientist job in Phoenix, AZ
Insight Global is looking to hire a Data Scientist to join one of the largest non-profit healthcare systems in the US. This position is responsible for ensuring the seamless operation and optimization of our Supply & Service Resource Management (SSRM) data-driven insights, solutions, and processes. In this role, you will be responsible for applying your expertise in data analytics, data engineering, machine learning, and modeling to strengthen SSRM data solutions and processes. Your work will directly contribute to improving patient care, operational efficiency, and decision-making within our healthcare organization. Other responsibilities include:
Technical Skill Set
-Data Preprocessing: Clean, transform, and prepare data for development and analysis within a data warehouse composed of supply chain information. Ensure data quality, integrity, and consistency.
-Development: Use Python and SQL to build and maintain automation processes, such as RPA and email bots, utilizing tools like relay servers and workflow schedulers for internal use by a wide range of supply chain professionals.
-Data Integration: Design, code, and maintain data pipelines that bring together disparate healthcare data sources, including Electronic Health Records (EHR) systems, Enterprise Resource Planning (ERP) systems, and external data feeds from SFTP servers and APIs with a heavy emphasis on cloud-based solutions. Ensure that data is collected, stored, and processed efficiently for analysis.
-Data Analysis: Analyze large and complex healthcare datasets to identify patterns, trends, and anomalies. Use statistical techniques to draw meaningful conclusions from data.
-Error Handling: Provide advanced technical support on application performance issues, determine root causes, and implement system fixes and improvements.
-Data Architecture: Build data storage, data warehousing, cloud integration, and data modeling strategies to support data analytic initiatives effectively.
-Data Visualization: Create compelling data visualizations and dashboards to communicate findings and insights to non-technical stakeholders.
-Machine Learning: Develop and implement predictive models, classification algorithms, and other machine learning techniques to solve supply chain oriented problems, such as automating supply chain processes and resource allocation optimization.
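A minimal sketch of the preprocessing step described above, using pandas; the feed schema, column names, and values are hypothetical, not the organization's actual supply-chain data.

```python
import pandas as pd

# Hypothetical raw supply-chain feed with typical quality problems:
# missing keys, unparseable costs, stray whitespace, duplicate rows
raw = pd.DataFrame({
    "item_id": ["A1", "A1", "B2", "B2", None],
    "unit_cost": ["12.50", "12.50", "bad", "8.00", "3.25"],
    "facility": [" Phoenix ", " Phoenix ", "Mesa", "Mesa", "Tucson"],
})

clean = (raw.dropna(subset=["item_id"])            # drop rows missing keys
            .assign(unit_cost=lambda d: pd.to_numeric(d["unit_cost"],
                                                      errors="coerce"),
                    facility=lambda d: d["facility"].str.strip())
            .dropna(subset=["unit_cost"])          # drop unparseable costs
            .drop_duplicates())                    # remove exact repeats

print(clean.to_dict("records"))
```

The same quality, integrity, and consistency checks would run inside the data-warehouse pipelines the bullets describe, just at much larger scale and on ERP/EHR sources rather than an in-memory frame.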
Development Practices
-Coding Standards: Write code that meets internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review.
-Agile: Work in an agile environment across developers, analysts, project managers, leaders, and stakeholders utilizing Azure DevOps or a similar project management system.
-Pair Programming: Actively participate in pair programming sessions with fellow data scientists, involving a driver (the person who writes the code) and a navigator (the person who reviews each line of code).
-Dynamic Environment: Function independently on multiple programming projects with competing priorities and adapt quickly to new ideas and disparate data environments like Oracle and Redshift.
-Data Security and Compliance: Ensure that all data handling and analysis comply with relevant healthcare regulations and protect patient privacy.
-Continuous Improvement/Continuous Development (CI/CD): Adopt a continuous improvement culture, seeking to make good processes better and to elevate those that need work.
-Research and Development: Stay up-to-date with the latest advancements in healthcare analytics and data science, and actively contribute to the development of innovative solutions.
Collaboration and Communication
-Collaboration: Collaborate with cross-functional teams, such as data engineers, SMEs, and IT professionals, to understand data requirements, business objectives, and supply chain workflows.
-Stakeholder Interaction: Work collaboratively with various stakeholders of differing backgrounds. Being able to explain complex technical concepts in simple terms.
-Presentation: Tell stories with data in a coherent manner to communicate the significance of analysis and findings.
-Understanding Business Objectives: Comprehend business requirements behind analysis, which involves actively engaging with stakeholders to grasp their needs, challenges, and objectives.
-Reporting: Collaborate to develop and refine reports and ad-hoc queries for presentations that convey key findings and their potential impact on healthcare operations so stakeholders can make informed decisions.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************
Skills and Requirements
-Experience with Lawson S3 or equivalent procurement system experience.
-Proficient in Python and SQL programming languages.
-Knowledge of the supply chain and healthcare operations.
-Experience with software engineering best practices, such as version control (Git), testing and continuous integrations, to analytics code.
-Proficiency in data manipulation and analysis using tools like Pandas and NumPy.
-Ability to extract, combine, and organize large data sets and knowledge of databases.
-Understanding of machine learning algorithms, model development, and evaluation.
-Proficiency in data visualization tools like Tableau, Google Looker, or similar software.
-Excellent communication and teamwork skills, with the ability to convey complex technical information to non-technical stakeholders.
-5+ years of relevant experience or a Bachelor's Degree in Data Science, Computer Science, Statistics, or a related field.
-Master's Degree
-Experience with healthcare data, including electronic health records (EHR), supply chain data, and clinical data.
-Must be ok working 8:30am - 3:30pm MT.
Data Scientist
Data scientist job in Tucson, AZ
Our client, a world leader in diagnostics and life sciences, is looking for a "Data Scientist" based out of Tucson, AZ.
Job Duration: Long Term Contract (Possibility Of Further Extension)
Pay Rate: $60/hr on W2
Company Benefits: Medical, Dental, Vision, Paid Sick leave, 401K
Work closely with the assay and algorithm development teams to ensure seamless data integration and maintain data integrity across complex digital pathology projects. Design, develop, and implement robust solutions to automate manual steps within the algorithm development and data management workflows, enhancing efficiency and reproducibility. Perform sophisticated image processing, data analysis, and predictive modeling using large-scale digital pathology datasets. Build, test, and deploy computational tools and algorithms, primarily using Python or similar languages, adhering to high standards of code quality, efficiency, and maintainability.
Leverage your skills in digital pathology, machine learning (ML), artificial intelligence (AI), and GUI development to create impactful solutions and improve algorithm performance. Meticulously maintain the traceability and integrity of all data, including images and associated metadata, ensuring compliance with relevant standards and supporting contractual obligations.
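The duties above include automating manual workflow steps and maintaining the traceability and integrity of images and metadata. One common pattern for that kind of task is a checksum manifest; the sketch below is an illustrative example only (the function names, directory layout, and manifest format are assumptions, not part of the posting):

```python
import hashlib
import json
from pathlib import Path

def file_sha256(path):
    # Hash file contents so later pipeline stages can verify integrity
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(image_dir, manifest_path):
    # Record one checksum per file; re-running this and diffing the saved
    # manifest flags any image that changed or went missing
    manifest = {p.name: file_sha256(p)
                for p in sorted(Path(image_dir).glob("*")) if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest
```

In a real pipeline the manifest would typically also carry metadata (scanner ID, stain, acquisition date) alongside each checksum to support audit and compliance requirements.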
Minimum Qualifications:
Bachelor's degree or higher in Computer Science, Data Science, Engineering, Statistics, or a related quantitative field with 7+ years of relevant experience.
Proven experience developing, testing, and deploying solutions using Python or similar programming languages within a software development lifecycle.
Solid understanding and practical experience in image analysis, machine learning/deep learning techniques, and AI concepts.
Demonstrated ability to perform complex data modeling, processing, and analysis on large or intricate datasets.
Experience developing graphical user interfaces (GUIs) for technical applications or data visualization.
Preferred Qualifications:
Experience working within the life sciences, biotechnology, or pharmaceutical industry, particularly with digital pathology or medical imaging data.
Familiarity with regulatory requirements and compliance standards applicable to medical devices or software as a medical device (SaMD) (e.g., FDA guidance, IVDR).
Proficiency with cloud computing platforms (e.g., AWS, Azure, GCP) and associated services.
Experience utilizing High-Performance Computing (HPC) environments for computationally intensive tasks.
Knowledge of bioinformatics or molecular modeling techniques.
If interested, please send your updated resume to hr@dawarconsulting.com/***************************
Senior Data Scientist (Experimentation & Machine Learning)
Data scientist job in Scottsdale, AZ
Recognized as the No. 1 site trusted by real estate professionals, Realtor.com has been at the forefront of online real estate for over 25 years, connecting buyers, sellers, and renters with trusted insights and expert guidance to find their perfect home. Through its robust suite of tools, Realtor.com makes a significant impact not only on the real estate industry at large but also on consumers navigating the biggest purchase they will make in their lives, by providing a user experience that is easy to use, easy to understand, and, most of all, easy to make decisions with.
Join us on our mission to empower more people to find their way home by breaking barriers to entry, making the right connections, and building confidence through expert guidance.
Senior Data Scientist
The Data Science and Analytics organization at ****************** sits at the heart of our mission. We process and analyze terabytes of data every day that enable decisions for millions of home buyers, sellers, renters, dreamers, and real estate professionals. Our goal is to use this data to make the home buying experience a breeze for our consumers. We empower them with the most up-to-date information on properties, help them find their dream homes in the least amount of time, and match them with the most suitable realtor to meet their unique, individual needs.
Role Description
We are seeking a Senior Data Scientist with a strong background in experimentation, media analytics, and cross-functional stakeholder support to join our Client Analytics team. In this role, you will analyze large-scale product and media experiments, consolidate and rebuild business-critical metrics, and deliver clear recommendations that inform high-impact decisions across product, media, and finance. The ideal candidate is a detail-oriented executor who thrives on repeatable analytics, enjoys collaborating with partners in product and media, and brings expertise in Python, SQL, and Amplitude (or similar analytics platforms).
Responsibilities
* Partner with business stakeholders to translate experiment questions and business needs into actionable analytics, providing timely and accurate answers on A/B test outcomes, media impact, and product changes.
* Analyze and report on dozens of product and media experiments each quarter, using Python (pandas, numpy, scipy), SQL, and Excel to clean, aggregate, and interpret data from sources such as Google Ad Manager and Amplitude.
* Apply standard statistical testing (e.g. t-tests) to assess significance and produce clear, actionable recommendations (including "no effect" findings) for product, media, and business teams.
* Set up, monitor, and analyze live experiments in Amplitude or similar product analytics platforms, ensuring correct instrumentation, sample assignment, and data quality.
* Lead metric consolidation and calculation projects joining multiple data sources and building SQL pipelines for executive-ready business metrics.
* Document methodologies, assumptions, and recommendations clearly for both technical and non-technical audiences.
* Respond to ad hoc and recurring requests for experiment analysis, media reporting, and metric deep-dives with precision, speed, and reliability.
* Balance high experiment throughput with ad hoc media reporting, regularly prioritizing work across multiple stakeholders.
* Foster a culture of accountability and transparency by ensuring reproducibility, traceability, and clear code documentation in all analytics work.
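The experiment-analysis workflow described above (comparing A/B test arms with standard statistical tests such as t-tests) might look roughly like the following sketch; the synthetic data, metric values, and 0.05 threshold are illustrative assumptions, not details from the posting:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical per-user conversion metric for control and treatment arms
control = rng.normal(loc=0.10, scale=0.03, size=5000)
treatment = rng.normal(loc=0.11, scale=0.03, size=5000)

# Welch's t-test: does not assume equal variances between arms
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

lift = treatment.mean() - control.mean()
significant = p_value < 0.05  # conventional alpha; adjust when running many tests
print(f"lift={lift:.4f}, t={t_stat:.2f}, p={p_value:.4g}, significant={significant}")
```

Reporting the "no effect" case is just the `significant=False` branch of the same analysis; the role explicitly calls for delivering those findings as clearly as positive ones.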
Minimum Qualifications
* Typically requires a minimum of 5 years of related experience with a Bachelor's degree; or 3 years and a Master's degree; or a PhD without experience; or equivalent work experience.
* Degree in a quantitative field (e.g., Statistics, Data Science, Applied Mathematics, Economics, Engineering, Computer Science).
* Relevant experience as a Data Scientist, Data Analyst, Product Analyst, or similar role, using SQL and Python (pandas, numpy, scipy).
* Proven track record managing and delivering on high-volume experimentation, media analytics, or product analytics projects with multiple stakeholders.
* Experience with Amplitude, Mixpanel, or similar product analytics platforms.
* Strong SQL skills, including experience building and joining complex pipelines; ability to handle large, messy, multi-source data.
* Proficient in Excel/Google Sheets for quick reporting and ad hoc analysis.
* Sound understanding of statistical methods for experimentation (randomization, t-tests, confidence intervals, etc.).
* Excellent written and verbal communication skills, with experience presenting to diverse technical and business audiences.
* Self-motivated and self-managing, with strong time management, documentation, and organizational skills.
Preferred Qualifications
* Master's or Ph.D. degree in a quantitative field (e.g., Statistics, Data Science, Applied Mathematics, Economics, Engineering, Computer Science).
* Experience in ad tech, media analytics, or digital advertising environments.
* Familiarity with revenue or monetization analytics.
* Experience with dashboarding tools (e.g., Tableau, Looker).
* Exposure to real estate, marketplace, or consumer product analytics.
Do the best work of your life at Realtor.com
Here, you'll partner with a diverse team of experts as you use leading-edge tech to empower everyone to meet a crucial goal: finding their way home. And you'll find your way home too. At Realtor.com, you'll bring your full self to work as you innovate with speed, serve our consumers, and champion your teammates. In return, we'll provide you with a warm, welcoming, and inclusive culture; intellectual challenges; and the development opportunities you need to grow.
Diversity is important to us; therefore, Realtor.com is an Equal Opportunity Employer regardless of age, color, national origin, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, marital status, status as a disabled veteran and/or veteran of the Vietnam Era or any other characteristic protected by federal, state or local law. In addition, Realtor.com will provide reasonable accommodations for otherwise qualified disabled individuals.
Senior Data Scientist
Data scientist job in Phoenix, AZ
**_What Data Science contributes to Cardinal Health_**
The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze, and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms; the access, design, and implementation of reporting/business intelligence solutions; and the application of advanced quantitative modeling.
Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets that integrate multiple systems.
At Cardinal Health's Artificial Intelligence Center of Excellence (AI CoE), we are pushing the boundaries of healthcare with cutting-edge Data Science and Artificial Intelligence (AI). Our mission is to leverage the power of data to create innovative solutions that improve patient outcomes, streamline operations, and enhance the overall healthcare experience.
We are seeking a highly motivated and experienced Senior Data Scientist to join our team as a thought leader and architect of our AI strategy. You will play a critical role in fulfilling our vision through delivery of impactful solutions that drive real-world change.
**_Responsibilities_**
+ Lead the Development of Innovative AI solutions: Be responsible for designing, implementing, and scaling sophisticated AI solutions that address key business challenges within the healthcare industry by leveraging your expertise in areas such as Machine Learning, Generative AI, and RAG Technologies.
+ Develop advanced ML models for forecasting, classification, risk prediction, and other critical applications.
+ Explore and leverage the latest Generative AI (GenAI) technologies, including Large Language Models (LLMs), for applications like summarization, generation, classification and extraction.
+ Build robust Retrieval Augmented Generation (RAG) systems to integrate LLMs with vast repositories of healthcare and business data, ensuring accurate and relevant outputs.
+ Shape Our AI Strategy: Work closely with key stakeholders across the organization to understand their needs and translate them into actionable AI-driven or AI-powered solutions.
+ Act as a champion for AI within Cardinal Health, influencing the direction of our technology roadmap and ensuring alignment with our overall business objectives.
+ Guide and mentor a team of Data Scientists and ML Engineers by providing technical guidance, mentorship, and support to a team of skilled and geographically distributed data scientists, while fostering a collaborative and innovative environment that encourages continuous learning and growth.
+ Embrace an AI-Driven Culture: Foster a culture of data-driven decision-making, promoting the use of AI insights to drive business outcomes and improve customer experience and patient care.
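The Retrieval Augmented Generation (RAG) responsibilities above center on retrieving the most relevant passages from a document store and feeding them to an LLM. The core retrieval step can be sketched with a toy example; the bag-of-words "embedding", sample documents, and helper names here are illustrative stand-ins (real pipelines use a learned embedding model, a vector database, and an actual LLM call):

```python
import numpy as np

def embed(text, vocab):
    # Toy bag-of-words embedding, normalized to unit length; a production
    # RAG system would call a learned embedding model instead
    words = text.lower().split()
    v = np.array([words.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

docs = [
    "prior authorization workflow for specialty pharmacy claims",
    "cold chain logistics for vaccine distribution",
    "patient demographics reporting requirements",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
doc_vecs = np.stack([embed(d, vocab) for d in docs])

def retrieve(query, k=1):
    # Cosine similarity reduces to a dot product on unit vectors
    scores = doc_vecs @ embed(query, vocab)
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

# The retrieved passages would then be prepended to the LLM prompt as context
context = retrieve("pharmacy claims authorization")
```

Evaluating such retrievals without ground truth, as the qualifications below mention, typically relies on proxy signals such as LLM-as-judge scoring or answer-grounding checks rather than labeled relevance data.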
**_Qualifications_**
+ 8-12 years of experience, with a minimum of 4 years in data science and a strong track record of success in developing and deploying complex AI/ML solutions, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ GenAI Proficiency: Deep understanding of Generative AI concepts, including LLMs, RAG technologies, embedding models, prompting techniques, and vector databases, along with evaluating retrievals from RAGs and GenAI models without ground truth
+ Experience building production-ready Generative AI applications involving RAG, LLMs, vector databases, and embedding models.
+ Extensive knowledge of healthcare data, including clinical data, patient demographics, and claims data. Understanding of HIPAA and other relevant regulations, preferred.
+ Experience working with cloud platforms like Google Cloud Platform (GCP) for data processing, model training, evaluation, monitoring, deployment and support preferred.
+ Proven ability to lead data science projects, mentor colleagues, and effectively communicate complex technical concepts to both technical and non-technical audiences preferred.
+ Proficiency in Python, statistical programming languages, machine learning libraries (Scikit-learn, TensorFlow, PyTorch), cloud platforms, and data engineering tools preferred.
+ Experience in Cloud Functions, Vertex AI, MLflow, Storage Buckets, IAM principles, and Service Accounts preferred.
+ Experience in building end-to-end ML pipelines, from data ingestion and feature engineering to model training, deployment, and scaling preferred.
+ Experience in building and implementing CI/CD pipelines for ML models and other solutions, ensuring seamless integration and deployment in production environments preferred.
+ Familiarity with RESTful API design and implementation, including building robust APIs to integrate your ML models and GenAI solutions with existing systems preferred.
+ Working understanding of software engineering patterns, solutions architecture, information architecture, and security architecture with an emphasis on ML/GenAI implementations preferred.
+ Experience working in Agile development environments, including Scrum or Kanban, and a strong understanding of Agile principles and practices preferred.
+ Familiarity with DevSecOps principles and practices, incorporating coding standards and security considerations into all stages of the development lifecycle preferred.
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $121,600 - $173,700
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before payday with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 11/05/2025
*If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************