Develop tools, metrics, and assessment methods for performance management and predictive modeling. Develop dashboards for product management and executives to drive faster, better decision making. Create accountability models for DPG-wide quality, I&W, inventory, product management KPIs, and business operations.
Consistently improve DPG-wide quality, install and warranty, and inventory performance, moving from awareness to prioritization to action through the availability of common data.
Collaborate with quality, install and warranty, and inventory program managers to analyze trends and patterns in data that drive required improvements in key performance indicators (KPIs). Foster the growth and utility of Cost of Quality within the company through correlation of I&W data and ECOGS, identification of causal relationships for quality events, and discovery of hidden costs throughout the network.
Improve data utilization via AI and automation, enabling real-time resolution and speeding systemic action.
Lead and/or advise on multiple projects simultaneously and demonstrate organizational, prioritization, and time management proficiencies.
Bachelor's degree with 8+ years of experience; or master's degree with 5+ years' experience; or equivalent experience.
Basic understanding of AI and machine learning, and the ability to work with data scientists to apply AI to complex, challenging problems, driving efficiency and effectiveness improvements.
Ability to define problem statements and objectives, develop an analysis approach, and execute the analysis.
Basic knowledge of Lean Six Sigma processes, statistics, or quality systems experience.
Ability to work on multiple problems simultaneously.
Ability to present conclusions and recommendations to executive audiences.
Ownership mindset to drive solutions and positive outcomes.
Excellent communication and presentation skills with the ability to present to audiences at multiple levels in the Company.
Willingness to adapt best practices via benchmarking.
Experience in Semiconductor fabrication, Semiconductor Equipment Operations, or related industries is a plus.
Demonstrated ability to change processes and methodologies for capturing and interpreting data.
Demonstrated success in using structured problem-solving methodologies and quality tools to solve complex problems.
Knowledge of programming environments such as Python, R, MATLAB, SQL, or equivalent.
Experience in structured problem-solving methodologies such as PDCA, DMAIC, 8D and quality tools.
Our commitment
We believe it is important for every person to feel valued, included, and empowered to achieve their full potential.
By bringing unique individuals and viewpoints together, we achieve extraordinary results.
Lam is committed to and reaffirms support of equal opportunity in employment and non-discrimination in employment policies, practices and procedures on the basis of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, age, sexual orientation, or military and veteran status or any other category protected by applicable federal, state, or local laws.
It is the Company's intention to comply with all applicable laws and regulations.
Company policy prohibits unlawful discrimination against applicants or employees.
Lam offers a variety of work location models based on the needs of each role.
Our hybrid roles combine the benefits of on-site collaboration with colleagues and the flexibility to work remotely and fall into two categories - On-site Flex and Virtual Flex.
In 'On-site Flex' roles, you'll work 3+ days per week on-site at a Lam or customer/supplier location, with the opportunity to work remotely for the balance of the week.
In 'Virtual Flex' roles, you'll work 1-2 days per week on-site at a Lam or customer/supplier location, and remotely the rest of the time.
Senior Data Scientist
Coinbase
Senior data scientist job in Salem, OR
***************** is planning to bring a million developers and a billion users onchain. We need your help to make that happen. At Base, we live by our https://x.com/jessepollak/status/***********32673997, where our team rises to the challenge, embraces hard weeks, and makes small to significant personal tradeoffs when necessary to drive impact and innovation.
Data Science is an integral component of Coinbase's product and decision making process: we work in partnership with Product, Engineering and Design to influence the roadmap and better understand our users. With a deep expertise in experimentation, analytics and advanced modeling, we produce insights which directly move the company's bottom line.
*What you'll be doing:*
* Conduct analysis and deep dives on ambiguous problems for our business. Your work will result in insights and recommendations that guide the team's decision making.
* Act as owner for a broad scope of data and metrics, from core logging to presentation of data visualizations.
* Guide code reviews, provide SQL and Python expertise, and create well-maintained ETL jobs.
* Maintain a high bar for statistical rigor on your team. Ensure that we're conducting experimentation and causal analyses that build confidence with your stakeholders.
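The statistical rigor described above often comes down to analyzing experiments correctly. As a minimal illustration (the conversion counts are invented, not from any Coinbase data), here is a two-proportion z-test comparing conversion rates between an A/B test's control and treatment groups, using only the Python standard library:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (lift, z, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF, Phi(z) = 0.5*(1 + erf(z/sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 480/10,000 control conversions vs 560/10,000 treatment.
lift, z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"lift={lift:.4f}  z={z:.2f}  p={p:.4f}")
```

In practice a pooled z-test like this is only the starting point; sequential testing, variance reduction, and causal methods build the stakeholder confidence the role calls for.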
*What we look for in you:*
* A BA/BS in a quantitative field (e.g., Math, Stats, Physics, or Computer Science) with 5+ years of relevant experience, or a PhD in a quantitative field with 3+ years of relevant experience
* Demonstrated experience in driving impactful data science projects that tackle ambiguous problem spaces.
* Ability to influence external stakeholders by synthesizing data learnings into compelling stories.
* Expertise in applying complex modeling frameworks to practical business problems.
* Professional experience using SQL and Python.
* Experience working with digital products in an iterative development cycle.
* Demonstration of our core cultural values: clear communication, positive energy, continuous learning, and efficient execution.
Disclaimer: Applying for a specific role does not guarantee consideration for that exact position. Leveling and team matching are assessed throughout the interview process.
ID: G2462
*Pay Transparency Notice:* Depending on your work location, the target annual salary for this position can range as detailed below. Full time offers from Coinbase also include bonus eligibility + equity eligibility + benefits (including medical, dental, vision and 401(k)).
Pay Range:
$180,370-$212,200 USD
Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying.
Commitment to Equal Opportunity
Coinbase is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the *********************************************** in certain locations, as required by law.
Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations***********************************
Global Data Privacy Notice for Job Candidates and Applicants
Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available ****************************************************************
AI Disclosure
For select roles, Coinbase is piloting an AI tool based on machine learning technologies to conduct initial screening interviews to qualified applicants. The tool simulates realistic interview scenarios and engages in dynamic conversation. A human recruiter will review your interview responses, provided in the form of a voice recording and/or transcript, to assess them against the qualifications and characteristics outlined in the job description.
For select roles, Coinbase is also piloting an AI interview intelligence platform to transcribe and summarize interview notes, allowing our interviewers to fully focus on you as the candidate.
*The above pilots are for testing purposes and Coinbase will not use AI to make decisions impacting employment*. To request a reasonable accommodation due to disability, please contact accommodations[at]coinbase.com.
Senior Data Scientist - Marketing
Mercury
Senior data scientist job in Portland, OR
In the 1840s, Charles Babbage and Ada Lovelace worked on an early version of the computer known as the “Analytics Engine”. In the words of computing historian Doron Swade, “What Lovelace saw...was that numbers could represent entities other than quantity” and together they laid the foundation for general-purpose computing.
While much has changed since then, the importance of numbers in building great technology remains. We're looking for Data Scientists who can help us build our analytics engine by making meaning from data and identifying opportunities for improvement.
As a Marketing Data Scientist at Mercury, you will partner with senior Marketing leaders and the Performance Marketing team, as well as Brand Marketing, Product Marketing, and Lifecycle Marketing to acquire, engage, and convert Mercury customers around the globe. You will develop various skills as a full-stack Data Scientist working on projects end-to-end and build deep domain expertise at the intersection of Data Science and Marketing. You will set the direction for our marketing measurement strategy and ensure it fits within Mercury's broader growth, product, and company goals.
Here are some things you'll do on the job:
Collaborate with Marketing stakeholders and other cross-functional partners to identify impactful business questions, conduct deep-dive analysis, and communicate findings and actionable recommendations to audiences at all levels to inform data-driven decisions.
Collaborate with other Data Scientists and Data Engineers to build and improve different marketing measurement capabilities.
Develop privacy-resilient measurement strategies using techniques like synthetic control methods and incrementality testing to maintain attribution quality as the industry shifts away from third-party cookies and device identifiers.
Develop and apply marketing measurement capabilities such as A/B Testing, Causal Inference, Marketing Mix Modeling (MMM), and Multi-touch Attribution (MTA) to evaluate the performance of our marketing effort.
Build and deploy machine learning and statistical models such as Customer Lifetime Value, Lead Scoring, Segmentation, and time-series forecasting end to end.
Influence and partner with engineering, design, and business teams to implement data-based recommendations that will improve entrepreneurs' lives and generate revenue for Mercury.
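The incrementality-testing responsibility above can be sketched concretely. In a geo holdout test, regions are split into a treated group (marketing on) and a holdout group (marketing off), and the observed lift is checked against a permutation null. Everything below is illustrative: the region counts and signup numbers are made up, not Mercury data.

```python
import random
from statistics import mean

def incrementality_lift(treated, holdout, n_permutations=10_000, seed=0):
    """Observed lift and permutation p-value for a geo holdout test."""
    observed = mean(treated) - mean(holdout)
    pooled = treated + holdout
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_permutations):
        # Randomly re-split the pooled regions and recompute the lift.
        rng.shuffle(pooled)
        perm = mean(pooled[:len(treated)]) - mean(pooled[len(treated):])
        if abs(perm) >= abs(observed):
            extreme += 1
    return observed, extreme / n_permutations

# Hypothetical weekly signups per region.
treated = [132, 148, 121, 139, 150, 127]   # marketing on
holdout = [118, 125, 110, 122, 119, 116]   # marketing off
lift, p = incrementality_lift(treated, holdout)
print(f"incremental lift per region: {lift:.1f} signups (p={p:.3f})")
```

Synthetic-control methods extend this idea by constructing a weighted combination of untreated regions as the counterfactual, rather than a simple holdout mean.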
You should:
Have 5+ years of experience working with marketing teams across full funnel measurement from brand awareness and content marketing to product adoption and customer retention.
Have expertise in marketing measurement strategies including brand lift studies, geo-experiments, survey-based measurement, cross-channel attribution, causal impact analysis, and experimentation design to identify growth opportunities.
Have fluency in SQL and other statistical programming languages (e.g., Python, R).
Have experience with marketing analytics tools such as Google Analytics, Amplitude, social listening platforms, email/CRM analytics (e.g., Salesforce, HubSpot), and customer data platforms.
Have experience crafting data pipelines and dashboards, and understand different database structures.
Be super organized and communicative. You will need to prioritize and manage projects to maximize impact, supporting multiple stakeholders with varying quantitative skill levels.
The total rewards package at Mercury includes base salary, equity (stock options), and benefits.
Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate's experience, expertise, geographic location, and internal pay equity relative to peers.
Our target new hire base salary ranges for this role are the following:
US employees (any location): $200,700 - $250,900
Canadian employees (any location): CAD 189,700 - 237,100
*Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC.
Mercury values diversity & belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic. We are committed to providing reasonable accommodations throughout the recruitment process for applicants with disabilities or special needs. If you need assistance, or an accommodation, please let your recruiter know once you are contacted about a role.
We use Covey as part of our hiring and / or promotional process for jobs in NYC and certain features may qualify it as an AEDT. As part of the evaluation process we provide Covey with job requirements and candidate submitted applications. We began using Covey Scout for Inbound on January 22, 2024.
[Please see the independent bias audit report covering our use of Covey for more information.]
Data Scientist, Product Analytics
Meta
Senior data scientist job in Salem, OR
As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance, and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
Product leadership: You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.
Analytics: You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
**Required Skills:**
Data Scientist, Product Analytics Responsibilities:
1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions
**Minimum Qualifications:**
6. Bachelor's degree in Computer Science, Mathematics, Statistics, a relevant technical field, or equivalent practical experience
7. 4+ years of work experience in analytics, data querying languages such as SQL, scripting languages such as Python, and/or statistical/mathematical software such as R (minimum of 2 years with a Ph.D.)
8. 4+ years of experience solving analytical problems using quantitative approaches, understanding ecosystems, user behaviors, and long-term product trends, and leading data-driven projects from definition to execution (including defining metrics, experiment design, and communicating actionable insights)
**Preferred Qualifications:**
10. Master's or Ph.D. Degree in a quantitative field
**Public Compensation:**
$147,000/year to $208,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Data Scientist, Generative AI
Amira Learning
Senior data scientist job in Oregon
REMOTE / FULL TIME Amira Learning accelerates literacy outcomes by pairing the latest reading science and neuroscience with AI. As the leader in third-generation edtech, Amira listens to students read out loud, assesses mastery, helps teachers supplement instruction and delivers 1:1 tutoring. Validated by independent university and SEA efficacy research, Amira is the only AI literacy platform proven to achieve gains surpassing 1:1 human tutoring, consistently delivering effect sizes over 0.4.
Rooted in over thirty years of research, Amira is the first, foremost, and only proven Intelligent Assistant for teachers and AI Reading Tutor for students. The platform serves as a school district's Intelligent Growth Engine, driving instructional coherence by unifying assessment, instruction, and tutoring around the chosen curriculum.
Unlike any other edtech tool, Amira continuously identifies each student's skill gaps and collaborates with teachers to build lesson plans aligned with district curricula, pulling directly from the district's high-quality instructional materials. Teachers can finally differentiate instruction with evidence and ease, and students get the 1:1 practice they specifically need, whether they are excelling or working below grade level.
Trusted by more than 2,000 districts and working in partnership with twelve state education agencies, Amira is helping 3.5 million students worldwide become motivated and masterful readers.
About this role:
We are seeking a Data Scientist with expertise in reading science, education, literacy, and NLP, and with practical experience building and using Gen AI (LLM, image, and/or video) models. You will help create Gen AI-based apps that will power the most widely used Intelligent Assistant in U.S. schools, already helping more than 2 million children.
We are looking for strong, education-focused engineers with a background in the latest generative AI models and experience in areas such as prompt engineering, model evaluation, data processing for training and fine-tuning, model alignment, and human-feedback-based model training.
Responsibilities include:
* Design methods, tools, and infrastructure to enable Amira to interact with students and educators in novel ways.
* Define approaches to content creation that will enable Amira to safely assist students to build their reading skills. This includes defining internal pipelines to interact with our content team.
* Contribute to experiments, including designing experimental details and hypothesis testing, writing reusable code, running evaluations, and organizing and presenting results.
* Work hands on with large, complex codebases, contributing meaningfully to enhance the capabilities of the machine learning team.
* Work within a fully distributed (remote) team.
* Find mechanisms for making the use of Gen AI economically viable given the limited budgets of public schools.
Who You Are:
* You have a background in early education, reading science, literacy, and/or NLP.
* You have at least one year of experience working with LLMs and Gen AI models.
* You have a degree in computer science or a related technical area.
* You are a proficient Python programmer.
* You have created performant Machine Learning models.
* You want to continue to be hands-on with LLMs and other Gen AI models over the next few years.
* You have a desire to be at a Silicon Valley start-up, with the desire and commitment that requires.
* You are able to enjoy working on a remote, distributed team and are a natural collaborator.
* You love writing code - creating good products means a lot to you. Working is fun - not a passport to get to the next weekend.
Qualifications
* Bachelor's degree, and/or relevant experience
* 1+ years of Gen AI experience - preferably in the Education SaaS industry
* Ability to operate in a highly efficient manner by multitasking in a fast-paced, goal-oriented environment.
* Exceptional organizational, analytical, and detail-oriented thinking skills.
* Proven track record of meeting/exceeding goals and targets.
* Great interpersonal, written and oral communication skills.
* Experience working across remote teams.
Amira's Culture
* Flexibility - We encourage and support you to live and work where you desire. Amira works as a truly distributed team. We worked remotely before COVID and we'll be working remotely after the pandemic is long gone. Our office is Slack. Our coffee room is Zoom. Our team works hard but we work when we want, where we want.
* Collaboration - We work together closely, using collaborative tools and periodic face to face get togethers. We believe great software is like movie-making. Lots of talented people with very different skills have to band together to build a great experience.
* Lean & Agile -- We believe in ownership and continuous feedback. Yes, we employ Scrum ceremonies. But, what we're really after is using data and learning to be better and to do better for our teachers, students, and players.
* Mission-Driven - What's important to us is helping kids. We're about tangible, measured impact.
Benefits:
* Competitive Salary
* Medical, dental, and vision benefits
* 401(k) with company matching
* Flexible time off
* Stock option ownership
* Cutting-edge work
* The opportunity to help children around the world reach their full potential
Commitment to Diversity:
Amira Learning serves a diverse group of students and educators across the United States and internationally. We believe every student should have access to a high-quality education and that it takes a diverse group of people with a wide range of experiences to develop and deliver a product that meets that goal. We are proud to be an equal opportunity employer.
The posted salary range reflects the minimum and maximum base salary the company reasonably expects to pay for this role. Salary ranges are determined by role, level, and location. Individual pay is based on location, job-related skills, experience, and relevant education or training. We are an equal opportunity employer. We do not discriminate on the basis of race, religion, color, ancestry, national origin, sex, sexual orientation, gender identity or expression, age, disability, medical condition, pregnancy, genetic information, marital status, military service, or any other status protected by law.
Junior Data Scientist
Leo
Senior data scientist job in Oregon
Looking for an extremely bright and budding Data Scientist who can work closely with a solid core Engineering team.
KEY RESPONSIBILITIES
Provide data-based solutions to core problems with the help of AI and Machine Learning tools.
Take ownership of building data modelling pipelines for scalable and continuous systems.
Closely monitor and provide expertise on creating an industry-best Data Science curriculum and exciting project problem statements.
KEY SKILLS
Sharp problem-solving skills and the urgency to deliver best-quality products, aligned with the long-term vision of the company.
Strong hands-on experience with Python and SQL (PostgreSQL).
Good working knowledge of backend development with REST frameworks (Django).
Good knowledge of and hands-on experience with different Machine Learning tools and modelling frameworks (Pandas, Keras/TensorFlow, scikit-learn, NLP).
Excellent interpersonal skills to communicate and present ideas to different verticals and stakeholders. Good written and verbal communication skills.
Working knowledge of data pipelines and data-science model deployment.
Bonus Points:
Ability to write great documentation
Ability to make data-driven decisions, even for small things
Our Way Of Working
An opportunity to work on something that really matters.
A fast-paced environment to learn and grow.
High transparency in decision making.
High autonomy; freedom to take risks, to experiment, and to fail.
We promise a meaningful journey with smart people, with opportunities to learn & grow. Plus, you can sleep peacefully knowing you are impacting lives in a big way, every day!
Data Scientist, Privacy
Datavant
Senior data scientist job in Salem, OR
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
As part of the Privacy Science team within Privacy Hub, you will play a crucial role in ensuring that the privacy of patients is safeguarded in the modern world of data sharing. As well as working on real data, you will be involved in exciting research to keep us industry leaders in this area, and in stimulating discussions on re-identification risk. You will be supported in developing and consolidating data analysis and coding skills to become proficient in the analysis of large health-related datasets.
**You Will:**
+ Critically analyze large health datasets using standard and bespoke software libraries
+ Discuss your findings and progress with internal and external stakeholders
+ Produce high quality reports which summarise your findings
+ Contribute to research activities as we explore novel and established sources of re-identification risk
**What You Will Bring to the Table:**
+ Excellent communication skills. Meticulous attention to detail in the production of comprehensive, well-presented reports
+ A good understanding of statistical probability distributions, bias, error and power as well as sampling and resampling methods
+ A drive to understand real-world data in context rather than consider it in abstraction
+ Familiarity or proficiency with programmable data analysis software such as R or Python, and the desire to develop deeper expertise in its language
+ Application of scientific methods to practical problems through experimental design, exploratory data analysis and hypothesis testing to reach robust conclusions
+ Strong time management skills and demonstrable experience of prioritising work to meet tight deadlines
+ Initiative and ability to independently explore and research novel topics and concepts as they arise, to expand Privacy Hub's knowledge base
+ An appreciation of the need for effective methods in data privacy and security, and an awareness of the relevant legislation
+ Familiarity with Amazon Web Services cloud-based storage and computing facilities
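The resampling methods mentioned above can be illustrated with a small bootstrap sketch. The sample below is invented for illustration: the idea is to resample a dataset with replacement many times to put a confidence interval on a statistic such as the median, using only the Python standard library.

```python
import random
from statistics import median

def bootstrap_ci(data, stat=median, n_boot=5_000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    # Recompute the statistic on many resamples drawn with replacement.
    boots = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

ages = [23, 25, 28, 31, 34, 35, 38, 41, 44, 52, 57, 61]  # invented sample
lo, hi = bootstrap_ci(ages)
print(f"95% bootstrap CI for the median: [{lo}, {hi}]")
```

In re-identification risk work the same pattern applies to less familiar statistics, such as the proportion of records unique on a set of quasi-identifiers.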
**Bonus Points If You Have:**
+ Experience creating documents using LaTeX
+ Detailed knowledge of one or more types of health information, e.g., genomics, disease, health images
+ Experience working with or supporting public sector organizations, such as federal agencies (e.g., CMS, NIH, VA, CDC), state health departments, or public health research partners. Familiarity with government data environments, procurement processes, or privacy frameworks in regulated settings is highly valued.
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$104,000-$130,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement here (************************************************** . Know Your Rights (*********************************************************************** , explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it here, (************************************************************** Id=**********48790029&layout Id=**********48795462) by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request, you can find instructions for locating it here (******************************************************************************************************* . Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy (**************************************** .
$104k-130k yearly 14d ago
Data Scientist 1 - Healthcare
Baylor Scott & White Health 4.5
Senior data scientist job in Salem, OR
Value-Based Care (VBC) Analytics is an independent organization covering the analytical and data science needs of the Baylor Scott & White Health Plan (Payer) and Baylor Scott & White Quality Alliance (Accountable Care Organization). We are seeking a customer-facing Healthcare Data Scientist who works closely with key business stakeholders within the value-based care team to develop use cases around complex, difficult-to-solve business challenges. The ideal candidate will build machine learning models using appropriate techniques to derive predictive insights that enable stakeholders to take action and improve business outcomes.
**ESSENTIAL FUNCTIONS OF THE ROLE**
+ Communication and Consulting: Summarize and effectively communicate complex data science concepts and data-driven recommendations to non-technical audiences in order to inform stakeholders, gain approval, or prompt action.
+ Applied Machine Learning: Implement machine learning solutions within production environments at scale. Apply appropriate machine learning techniques that directly impact HEDIS/Stars initiatives.
+ Data Collection and Optimization: Collect and analyze data from a variety of SQL environments (Snowflake, SQL Server) and other data sources, including vendor-derived data, electronic health records, and claims data.
+ Analyze Healthcare Data: Conduct detailed analyses of complex healthcare datasets to identify trends, patterns, and insights within HEDIS/Stars and utilization data that support value-based care initiatives, particularly quality and adherence to standards of care.
+ Stay Informed: Stay up to date on the latest advancements in data science and healthcare analytics to continuously improve our methodologies and tools.
**KEY SUCCESS FACTORS**
The ideal candidate will have some of the following skills and an eagerness to learn the rest.
+ Healthcare Knowledge: Understanding of and prior experience handling data pertaining to HEDIS, Stars measures, and regulatory specifications. Experience with administrative claims data sources such as medical/pharmacy claims, social determinants of health (SDOH), and electronic health records is also required.
+ Education: Bachelor's or advanced degree in mathematics, statistics, data science, Public Health or another quantitative field.
+ Effective Communication: Experienced in communicating findings and recommendations directly to Executive-level customers and healthcare professionals.
+ Analytics Skills: Academic or professional experience conducting analytics and experimentation using algorithms associated with advanced analytics topics, including binary classification algorithms, regression algorithms, Neural Network frameworks, Natural Language Processing, etc.
+ Technical Skills: Proficiency in common AI/ML languages and tools such as Python and PySpark. Understanding of software engineering topics, including version control, CI/CD, and unit tests.
+ Problem Solving: A passion for solving puzzles and digging into data.
+ Technology Stack: Familiarity with deploying data science products at scale in a cloud environment such as Snowflake, Databricks, or Azure AI/ML Studio.
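The binary classification work named above can be illustrated with a minimal, self-contained sketch: logistic regression trained by batch gradient descent to flag a hypothetical gap in care. The features, data, and target below are invented for demonstration; real HEDIS/Stars work would use claims/EHR features with a library such as scikit-learn or Spark ML.

```python
import math

def sigmoid(z):
    # Clamp to avoid math.exp overflow for extreme inputs
    if z < -60:
        return 0.0
    if z > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights (w[0] is the bias) by batch gradient descent."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) >= 0.5

# Hypothetical features: [days since last visit / 100, open refill gaps]
X = [[0.1, 0], [0.2, 1], [0.9, 3], [1.2, 4], [0.3, 0], [1.0, 2]]
y = [0, 0, 1, 1, 0, 1]
w = train_logistic(X, y)
```

The same pattern scales to real claims data by swapping the toy arrays for engineered features and the hand-rolled trainer for a library estimator.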
**BENEFITS**
Our competitive benefits package includes the following:
+ Immediate eligibility for health and welfare benefits
+ 401(k) savings plan with dollar-for-dollar match up to 5%
+ Tuition reimbursement
+ PTO accrual beginning Day 1
Note: Benefits may vary based upon position type and/or level
**QUALIFICATIONS**
- EDUCATION - Master's degree, or Bachelor's degree plus 2 years of work experience above the minimum qualification
- EXPERIENCE - 3 years of experience
As a health care system committed to improving the health of those we serve, we are asking our employees to model the same behaviors that we promote to our patients. As of January 1, 2012, Baylor Scott & White Health no longer hires individuals who use nicotine products. We are an equal opportunity employer committed to ensuring a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
$81k-108k yearly est. 3d ago
Data Scientist
Eyecarecenterofsalem
Senior data scientist job in Portland, OR
Job Description
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products that extract valuable business insights. In this role, you should be highly analytical, with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Responsibilities
Identify valuable data sources and automate collection processes
Undertake preprocessing of structured and unstructured data
Analyze large amounts of information to discover trends and patterns
Build predictive models and machine-learning algorithms
Combine models through ensemble modeling
Present information using data visualization techniques
Propose solutions and strategies to business challenges
Collaborate with engineering and product development teams
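The ensemble-modeling responsibility above can be sketched as a simple hard-voting combiner. The three "models" here are hypothetical rule stubs standing in for trained classifiers (e.g., a tree, a linear model, a kNN), so the feature names are invented.

```python
from collections import Counter

# Each base model votes 0 or 1; the ensemble returns the majority label.
def model_a(x): return 1 if x["spend"] > 100 else 0
def model_b(x): return 1 if x["visits"] > 3 else 0
def model_c(x): return 1 if x["spend"] + 10 * x["visits"] > 120 else 0

def vote(models, x):
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

models = [model_a, model_b, model_c]
high = vote(models, {"spend": 150, "visits": 5})  # all three agree -> 1
low = vote(models, {"spend": 20, "visits": 1})    # all three agree -> 0
```

Majority voting is the simplest ensemble; stacking or averaging predicted probabilities are common refinements when the base models expose scores.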
Requirements and skills
Proven experience as a Data Scientist or Data Analyst
Experience in data mining
Understanding of machine learning and operations research
Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset
Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
Analytical mind and business acumen
Strong math skills (e.g. statistics, algebra)
Problem-solving aptitude
Excellent communication and presentation skills
BSc/BA in Computer Science, Engineering, or relevant field; a graduate degree in Data Science or other quantitative field is preferred
$73k-104k yearly est. 6d ago
Senior Data Scientist, Navista
Cardinal Health 4.4
Senior data scientist job in Salem, OR
At Navista, our mission is to empower community oncology practices to deliver patient-centered cancer care. Navista, a Cardinal Health company, is an oncology practice alliance co-created with oncologists and practice leaders that offers advanced support services and technology to help practices remain independent and thrive. True to our name, our experienced team is passionate about helping oncology practices navigate the future.
We are seeking an innovative and highly skilled **Senior Data Scientist** with specialized expertise in Generative AI (GenAI), Large Language Models (LLMs), and Agentic Systems to join the Navista - Data & Advanced Analytics team supporting the growth of our Navista Application Suite and the Integrated Oncology Network (IoN). In this critical role, you will be at the forefront of designing, developing, and deploying advanced AI solutions that leverage the power of generative models and intelligent agents to transform our products and operations. You will be responsible for pushing the boundaries of what's possible, from foundational research to production-ready applications, working with diverse datasets and complex problem spaces, particularly within the oncology domain.
The ideal candidate will possess a deep theoretical understanding and practical experience in building, fine-tuning, and deploying LLMs, as well as architecting and implementing agentic frameworks. You will play a key role in shaping our AI strategy, mentoring junior team members, and collaborating with cross-functional engineering and product teams to bring groundbreaking AI capabilities to life, including developing predictive models from complex, often unstructured, oncology data.
**_Responsibilities_**
+ **Research & Development:** Lead the research, design, and development of novel Generative AI models and algorithms, including but not limited to LLMs, diffusion models, GANs, and VAEs, to address complex business challenges.
+ **LLM Expertise:** Architect, fine-tune, and deploy Large Language Models for various applications such as natural language understanding, generation, summarization, question-answering, and code generation, with a focus on extracting insights from unstructured clinical and research data.
+ **Agentic Systems Design:** Design and implement intelligent agentic systems capable of autonomous decision-making, planning, reasoning, and interaction within complex environments, leveraging LLMs as core components.
+ **Predictive Modeling:** Develop and deploy advanced predictive models and capabilities using both structured and unstructured data, particularly within the oncology space, to forecast outcomes, identify trends, and support clinical or commercial decision-making.
+ **Prompt Engineering & Optimization:** Develop advanced prompt engineering strategies and techniques to maximize the performance and reliability of LLM-based applications.
+ **Data Strategy for GenAI:** Work with data engineers to define and implement data collection, preprocessing, and augmentation strategies specifically tailored for training and fine-tuning generative models and LLMs, including techniques for handling and enriching unstructured oncology data (e.g., clinical notes, pathology reports).
+ **Model Evaluation & Deployment:** Develop robust evaluation metrics and methodologies for generative models, agentic systems, and predictive models. Oversee the deployment, monitoring, and continuous improvement of these models in production environments.
+ **Collaboration & Leadership:** Collaborate closely with machine learning engineers, software engineers, and product managers to integrate AI solutions into our products. Provide technical leadership and mentorship to junior data scientists.
+ **Innovation & Thought Leadership:** Stay abreast of the latest advancements in GenAI, LLMs, and agentic AI research. Proactively identify new opportunities and technologies that can enhance our capabilities and competitive advantage.
+ **Ethical AI:** Ensure the responsible and ethical development and deployment of AI systems, addressing potential biases, fairness, and transparency concerns.
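The agentic pattern described above (an LLM planning and invoking tools in a loop) can be reduced to a small skeleton. Everything here is a stand-in: `fake_llm` is a canned planner rather than a real model call, and the single tool and its output are invented.

```python
import json

# One hypothetical tool the "agent" can call.
TOOLS = {
    "lookup_measure": lambda name: {"measure": name, "rate": 0.72},
}

def fake_llm(history):
    # Canned planner: call the tool once, then answer from its observation.
    if not any(turn["role"] == "tool" for turn in history):
        return json.dumps({"action": "lookup_measure", "input": "screening"})
    return json.dumps({"action": "final", "input": "Screening rate is 72%."})

def run_agent(question, max_steps=5):
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = json.loads(fake_llm(history))
        if step["action"] == "final":
            return step["input"]
        # Execute the requested tool and feed the observation back.
        observation = TOOLS[step["action"]](step["input"])
        history.append({"role": "tool", "content": json.dumps(observation)})
    return None

answer = run_agent("What is our screening rate?")
```

Real agent frameworks (LangChain, LlamaIndex) wrap exactly this loop with model calls, tool schemas, and error handling.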
**_Qualifications_**
+ 8-12 years of experience as a Data Scientist or Machine Learning Engineer, with a significant focus on deep learning and natural language processing, preferred
+ Bachelor's degree in related field, or equivalent work experience, preferred
+ Proven hands-on experience with Generative AI models (e.g., Transformers, GANs, VAEs, Diffusion Models) and their applications.
+ Extensive experience working with Large Language Models (LLMs), including fine-tuning, prompt engineering, RAG (Retrieval Augmented Generation), and understanding various architectures (e.g., GPT, Llama, BERT, T5).
+ Demonstrated experience in designing, building, and deploying agentic systems or multi-agent systems, including concepts like planning, reasoning, and tool use.
+ Strong experience working with unstructured data, particularly in the oncology domain (e.g., clinical notes, pathology reports, genomic data, imaging reports), and extracting meaningful features for analysis.
+ Demonstrated ability to create and deploy predictive capabilities and models from complex datasets, including those with unstructured components.
+ Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
+ Experience with relevant libraries and tools (e.g., Hugging Face Transformers, LangChain, LlamaIndex).
+ Strong understanding of machine learning fundamentals, statistical modeling, and experimental design.
+ Experience with at least one cloud platform (e.g., GCP, Azure) for training and deploying large-scale AI models.
+ Excellent problem-solving skills, with the ability to tackle complex, ambiguous problems and drive solutions.
+ Strong communication and presentation skills, capable of explaining complex concepts to varied audiences.
+ Experience in the healthcare or life sciences industry, specifically with oncology data and research, highly preferred
+ Experience with MLOps practices for deploying and managing large-scale AI models, highly preferred
+ Familiarity with distributed computing frameworks (e.g., Spark, Dask), highly preferred
+ Experience contributing to open-source AI projects, highly preferred
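The RAG experience called for above can be illustrated with a bare-bones sketch of the retrieval half: rank a small corpus by token overlap with the question, then assemble a grounded prompt. Production systems would use embeddings and a vector store; the documents below are invented examples.

```python
import re

DOCS = [
    "Prior authorization is required for PET imaging orders.",
    "Clinic hours are 8am to 5pm Monday through Friday.",
    "PET imaging orders must include the referring oncologist's NPI.",
]

def tokens(text):
    # Lowercased word set; a stand-in for an embedding-based similarity.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, docs, k=2):
    q = tokens(question)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(question, docs, k=2):
    context = "\n".join(f"- {d}" for d in retrieve(question, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is required for PET imaging orders?", DOCS)
```

The assembled prompt is what gets sent to the LLM, so generation is constrained to the retrieved context rather than the model's parametric memory.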
**_What is expected of you and others at this level_**
+ Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
+ Participates in the development of policies and procedures to achieve specific goals
+ Recommends new practices, processes, metrics, or models
+ Works on or may lead complex projects of large scope
+ Projects may have significant and long-term impact
+ Provides solutions which may set precedent
+ Independently determines method for completion of new projects
+ Receives guidance on overall project objectives
+ Acts as a mentor to less experienced colleagues
**Anticipated salary range:** $123,400 - $176,300
**Bonus eligible:** Yes
**Benefits:** Cardinal Health offers a wide variety of benefits and programs to support health and well-being.
+ Medical, dental and vision coverage
+ Paid time off plan
+ Health savings account (HSA)
+ 401k savings plan
+ Access to wages before payday with myFlexPay
+ Flexible spending accounts (FSAs)
+ Short- and long-term disability coverage
+ Work-Life resources
+ Paid parental leave
+ Healthy lifestyle programs
**Application window anticipated to close:** 02/15/2026. If interested in this opportunity, please submit your application as soon as possible.
The salary range listed is an estimate. Pay at Cardinal Health is determined by multiple factors including, but not limited to, a candidate's geographical location, relevant education, experience and skills and an evaluation of internal pay equity.
\#LI-Remote
_Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply._
_Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal_ _Opportunity/Affirmative_ _Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law._
_To read and review this privacy notice click_ here (***************************************************************************************************************************
$123.4k-176.3k yearly 35d ago
AWS Data Migration Consultant
Slalom 4.6
Senior data scientist job in Portland, OR
Candidates can live within commutable distance to any Slalom office in the US. We have a hybrid and flexible environment.
Who You'll Work With
As a modern technology company, we've never met a technical challenge we didn't like. We enable our clients to learn from their data, create incredible digital experiences, and make the most of new technologies. We blend design, engineering, and analytics expertise to build the future. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
We are seeking an experienced Cloud Data Migration Architect with deep expertise in SQL Server, Oracle, DB2, or a combination of these platforms, to lead the design, migration, and optimization of scalable database solutions in the AWS cloud. This role will focus on modernizing on-premises database systems by architecting high-performance, secure, and reliable AWS-hosted solutions.
As a key technical leader, you will work closely with data engineers, cloud architects, and business stakeholders to define data strategies, lead complex database migrations, build out ETL pipelines, and optimize performance across legacy and cloud-native environments.
What You'll Do
* Design and optimize database solutions on AWS, including Amazon RDS, EC2-hosted instances, and advanced configurations like SQL Server Always On or Oracle RAC (Real Application Clusters).
* Lead and execute cloud database migrations using AWS Database Migration Service (DMS), Schema Conversion Tool (SCT), and custom automation tools.
* Architect high-performance database schemas, indexing strategies, partitioning models, and query optimization techniques.
* Optimize complex SQL queries, stored procedures, functions, and views to ensure performance and scalability in the cloud.
* Implement high-availability and disaster recovery (HA/DR) strategies including Always-On, Failover Clusters, Log Shipping, and Replication, tailored to each RDBMS.
* Ensure security best practices are followed including IAM-based access control, encryption, and compliance with industry standards.
* Collaborate with DevOps teams to implement Infrastructure-as-Code (IaC) using tools like Terraform, CloudFormation, or AWS CDK.
* Monitor performance using tools such as AWS CloudWatch, Performance Insights, Query Store, Dynamic Management Views (DMVs), or Oracle-native tools.
* Work with software engineers and data teams to integrate cloud databases into enterprise applications and analytics platforms.
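The indexing and query-optimization work described above can be demonstrated in miniature: the same lookup goes from a full table scan to an index search once a suitable index exists. SQLite stands in for SQL Server/Oracle/DB2 here, and the table is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the plan detail in the last column.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # without an index: a full scan of orders
conn.execute("CREATE INDEX ix_orders_customer ON orders(customer_id)")
after = plan(query)   # with the index: a search using ix_orders_customer
```

On the commercial engines the tooling differs (Query Store, DMVs, Oracle execution plans), but the before/after comparison is the same workflow.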
What You'll Bring
* 5+ years of experience in database architecture, design, and administration with at least one of the following: SQL Server, Oracle, or DB2.
* Expertise in one or more of the following RDBMS platforms: Microsoft SQL Server, Oracle, DB2.
* Hands-on experience with AWS database services (RDS, EC2-hosted databases).
* Strong understanding of HA/DR solutions and cloud database design patterns.
* Experience with ETL development and data integration, using tools such as SSIS, AWS Glue, or custom solutions.
* Familiarity with AWS networking components (VPCs, security groups) and hybrid cloud connectivity.
* Strong troubleshooting and analytical skills to resolve complex database and performance issues.
* Ability to work independently and lead database modernization initiatives in collaboration with engineering and client stakeholders.
Nice to Have
* AWS certifications such as AWS Certified Database - Specialty or AWS Certified Solutions Architect - Professional.
* Experience with NoSQL databases or hybrid data architectures.
* Knowledge of analytics and big data tools (e.g., Snowflake, Redshift, Athena, Power BI, Tableau).
* Familiarity with containerization (Docker, Kubernetes) and serverless technologies (AWS Lambda, Fargate).
* Experience with DB2 on-premise or cloud-hosted environments.
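As a small illustration of the Infrastructure-as-Code approach named in the responsibilities, here is a CloudFormation-style declaration of an RDS SQL Server instance with Multi-AZ, expressed in Python for readability. The logical name, instance sizing, and username are placeholders; real work would author this directly in Terraform, CloudFormation, or CDK.

```python
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {  # placeholder logical name
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "sqlserver-se",
                "LicenseModel": "license-included",
                "DBInstanceClass": "db.m5.large",
                "AllocatedStorage": "200",
                "MultiAZ": True,           # HA via a synchronous standby
                "StorageEncrypted": True,  # encryption at rest
                "MasterUsername": "admin", # placeholder; prefer Secrets Manager
            },
        }
    },
}

rendered = json.dumps(template, indent=2)
```

Declaring HA and encryption in the template, rather than clicking them on later, is what makes the configuration reviewable and repeatable across environments.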
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, a 401(k) with a match, a range of choices for highly subsidized health, dental, and vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position, the target base salary pay ranges are as follows:
In Boston, Houston, Los Angeles, Orange County, Seattle, San Diego, Washington DC, New York, and New Jersey: $105,000-$147,000 for the Consultant level, $120,000-$169,000 for the Senior Consultant level, and $133,000-$187,000 for the Principal level.
In all other markets: $96,000-$135,000 for the Consultant level, $110,000-$155,000 for the Senior Consultant level, and $122,000-$172,000 for the Principal level.
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We will accept applications until 1/31/2026 or until the positions are filled.
$133k-187k yearly 3d ago
Data Engineer (Bend, OR)
Solv Energy, LLC
Senior data scientist job in Bend, OR
SOLV Energy is an engineering, procurement, construction (EPC) and solar services provider for utility solar, high voltage substation and energy storage markets across North America.
As a Data Engineer, you will play a crucial role in our data engineering team. Your primary responsibility will be to help design, develop, and maintain data pipelines and ETL (Extract, Transform, Load) processes. You will work to ensure efficient data flow, data quality, usability, and system reliability.
This role is based full-time in our office in Bend, OR. Specific location details and expectations will be discussed during the interview process.
* This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned
Position Responsibilities and Duties:
Data Pipeline Development
Collaborate with the team lead to design and implement data pipelines that extract, transform, and load (ETL) data from various sources into the data warehouse.
Optimize data pipelines for performance, scalability, and reliability.
Monitor and troubleshoot pipeline issues, ensuring data consistency and accuracy.
Document ETL/ELT processes and data lineage.
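The extract-transform-load flow above can be sketched end to end in a few lines. The source rows, field names, and unit conversion are invented for illustration, and SQLite stands in for the real warehouse target.

```python
import sqlite3

# Extract: a list of dicts standing in for an API response or file read.
source_rows = [
    {"site": "plant-a", "output_kw": "1500", "ts": "2024-01-01"},
    {"site": "plant-b", "output_kw": "bad", "ts": "2024-01-01"},  # rejected
    {"site": "plant-c", "output_kw": "2200", "ts": "2024-01-01"},
]

def transform(rows):
    # Normalize units (kW -> MW) and drop unparseable rows.
    for r in rows:
        try:
            yield (r["site"], int(r["output_kw"]) / 1000.0, r["ts"])
        except ValueError:
            continue  # production: route to a dead-letter table and alert

# Load: insert the cleaned rows into the warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE site_output (site TEXT, output_mw REAL, ts TEXT)")
conn.executemany("INSERT INTO site_output VALUES (?, ?, ?)", transform(source_rows))
loaded = conn.execute("SELECT COUNT(*) FROM site_output").fetchone()[0]
```

Orchestration tools add scheduling, retries, and lineage around this core, but the extract/transform/load shape is the same.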
Collaboration and Communication
Assist in documenting technical specifications, data catalogs, and process workflows.
Communicate effectively with other teams across the organization, including technical & business teams.
Data Quality Assurance
Validate data quality by implementing data validation checks and monitoring data anomalies.
Administer change management policies and procedures and conduct tabletop exercises to assess ETL process effectiveness.
Work closely with data analysts and business stakeholders to understand data requirements and ensure accurate data delivery.
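The data validation checks mentioned above typically cover nulls, out-of-range values, and duplicates before a batch is loaded. A minimal sketch, with column names and bounds invented for illustration:

```python
def validate(rows):
    """Return (row_index, problem) pairs for every failed check."""
    errors = []
    seen_ids = set()
    for i, r in enumerate(rows):
        # Null and duplicate checks on the key column.
        if r.get("id") is None:
            errors.append((i, "null id"))
        elif r["id"] in seen_ids:
            errors.append((i, "duplicate id"))
        else:
            seen_ids.add(r["id"])
        # Range check on a measure column (bounds are illustrative).
        if not (0 <= r.get("output_mw", -1) <= 500):
            errors.append((i, "output_mw out of range"))
    return errors

batch = [
    {"id": 1, "output_mw": 1.5},
    {"id": 1, "output_mw": 2.2},   # duplicate id
    {"id": 2, "output_mw": 9000},  # out of range
]
issues = validate(batch)
```

In practice these checks run as a gate in the pipeline, with failures logged or quarantined rather than silently loaded.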
Data Platform Management
Assist in data model design and schema creation.
Perform maintenance on databases and other data stores, such as backups, indexing, and query optimization.
Implement, troubleshoot and maintain the data engineering tools and environments that are necessary for the team's success - ETL tools, documentation/collaboration platforms, servers, etc.
Continuous Learning and Skill Development
Stay up to date with industry trends, best practices, and emerging technologies related to data engineering.
Participate in training sessions and workshops to enhance your technical skills.
Minimum Skills or Experience Requirements:
Required: Bachelor's degree in Computer Science, Data Science, Business Analytics, Information Systems, or an equivalent technical degree
Experience with SQL and relational databases (e.g., PostgreSQL or MySQL)
2+ years in a professional business data engineering role (5+ to be considered for a Senior Data Engineer position)
Communicating with all corporate functions and understanding diverse data needs
Technical documentation, especially around data; proficiency with related tools such as Confluence
Conceptual data modeling for an organization, master data management, data quality testing, and/or other data governance processes and methods; proficiency with related tools such as Microsoft Purview
Dimensional data modeling and schema design
Data pipelining/ETL tools, especially those in Microsoft Fabric
Cloud solution technologies such as Azure or AWS, and their products
Building data warehouses, data lakes and other data stores for analytics
Business intelligence/reporting; proficiency with related tools, especially Power BI
Code control/version control systems (e.g., Git)
Proficiency with Microsoft Office tools, including Word, Excel, and Outlook
Experience using task management tools such as Jira or Microsoft Planner
Excellent verbal and written communication skills
Strong attention to detail
Eagerness to learn and grow in a dynamic data engineering environment
SOLV Energy Is an Equal Opportunity Employer
At SOLV Energy we celebrate the power of our differences. We are committed to building diverse, equitable, and inclusive workplaces that improve our communities. SOLV Energy prohibits discrimination and harassment of any kind against an employee or applicant based on race, color, age, religion, sex, sexual orientation, gender identity or expression, marital status, national origin, or ethnicity, mental or physical disability, veteran status, parental status, or any other characteristic protected by law.
Benefits:
Employees (and their families) are eligible for medical, dental, vision, basic life and disability insurance. Employees can enroll in our company's 401(k) plan and are provided vacation, sick and holiday pay.
Compensation Range:
$103,430.00 - $129,288.00
Pay Rate Type:
Salary
SOLV Energy does not accept unsolicited candidate introductions, referrals or resumes from third-party recruiters or staffing agencies. We require all third-party recruiters to communicate exclusively with our internal talent acquisition team. SOLV Energy will not pay a placement fee to any third-party recruiter or agency that has not coordinated their recruiting activity with the appropriate member of our internal talent acquisition team.
In addition, candidate introductions or resumes can only be submitted to our internal talent acquisition recruiting team if a signed vendor agreement is already on file and the third-party recruiter or agency has received formal instructions from our internal talent acquisition team to submit candidates for a particular job posting.
Any unsolicited candidate introductions, referrals or resumes sent by third-party recruiters to SOLV Energy or directly to any of our employees, or received through our website or career portal, will be considered property of SOLV Energy and will not be eligible for a placement fee. In the event a third-party recruiter submits a resume or refers a candidate without a previously signed vendor agreement, SOLV Energy explicitly reserves the right to pursue and hire the candidate(s) without financial liability to such third-party recruiter.
Job Number: J12342
If you're interested in a meaningful career with a brighter future, join the SOLV Energy Team.
$103.4k-129.3k yearly Auto-Apply 11d ago
Data Engineer III
Urgenci
Senior data scientist job in Bend, OR
Data Engineers provide data services to stakeholders within portfolios and throughout the enterprise by building and managing data resources to support reporting, business intelligence, analytics, data science projects, and operational applications. They engage with data-centric product managers and developers, technology teams, analysts, and business partners to understand capability requirements and define and construct data structures and solutions based on priorities.
The Data Engineer III has advanced proficiency in the design, development, and operational tasks required to provide accurate, timely, high-quality data critical to feed data products (i.e., reports, analyses, and data science models) and support downstream applications. The Data Engineer III has a deep understanding of available business data resources and how to craft robust, practical data solutions, and works with business stakeholders and technical counterparts on best practices for stewardship, building, and maintaining data resources.
This role is within the Information and Digital Services organization at Les Schwab headquarters in Bend, OR.
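The relational and dimensional design work this role describes can be illustrated with a toy star schema: one fact table keyed to two dimensions, queried with a typical star join. Table and column names are invented, and SQLite stands in for the production database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date  (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE dim_store (store_key INTEGER PRIMARY KEY, store_name TEXT, region TEXT);
CREATE TABLE fact_sales (
    date_key   INTEGER REFERENCES dim_date(date_key),
    store_key  INTEGER REFERENCES dim_store(store_key),
    units_sold INTEGER,
    revenue    REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', '2024-01')")
conn.execute("INSERT INTO dim_store VALUES (1, 'Bend', 'Central OR')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 4, 799.96)")

# Star-join query: facts aggregated by a dimension attribute.
row = conn.execute("""
    SELECT s.region, SUM(f.revenue)
    FROM fact_sales f JOIN dim_store s ON f.store_key = s.store_key
    GROUP BY s.region
""").fetchone()
```

Keeping descriptive attributes in dimensions and measures in the fact table is what lets reporting tools slice the same facts by date, store, region, and so on without restructuring.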
GENERAL % OF WORK TIME
PRIMARY RESPONSIBILITIES/FUNCTIONS
20%
Operational Support
● Build and deliver ad hoc data sets to support business analysis, data analysis, analytics, proofs-of-concept, and other use cases
● Create and review complex SQL scripts and queries in support of reporting and analytics applications
● Evaluate and implement appropriate technologies and methods to automate data preparation and data movement with Les Schwab standard data stores, tools, and platforms
● Monitor and troubleshoot manual and automated data preparation and data movement processes
● Solve complex technical problems and mentor/support other technical staff in developing workarounds and resolving operational issues
● Create, maintain, and review operational procedures and related documentation
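The "complex SQL scripts and queries" this role calls for typically lean on analytical (window) functions. A minimal sketch of that kind of query, using Python's built-in sqlite3 as a stand-in engine; the table and column names are invented for illustration:

```python
import sqlite3

# In-memory database standing in for the reporting warehouse.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (store TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('Bend', '2024-01', 100.0), ('Bend', '2024-02', 120.0),
        ('Prineville', '2024-01', 80.0), ('Prineville', '2024-02', 90.0);
""")

# Window function: running total of revenue per store, ordered by month.
rows = con.execute("""
    SELECT store, month, revenue,
           SUM(revenue) OVER (PARTITION BY store ORDER BY month) AS running_total
    FROM sales
    ORDER BY store, month
""").fetchall()
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over to most warehouse engines.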
Data Structure and Solution Design (30%)
● Collaborate with business systems analysts, data analysts, business stakeholders, and analytics practitioners to understand data product and downstream system data requirements
● Collaborate with data stewards and data source managers to understand data definitions and business rules relevant to data structure design
● Perform detailed design of data structures from inception through production support
● Create data models for relational and dimensional database schemas for a range of use cases, from targeted reporting solutions to support of downstream applications
● Conduct information modeling, including conceptual models, relational database designs, and message models
● Perform and review designs and solutions for data manipulation, data preparation, and data movement processes for a variety of scenarios, from simple file-based export/import to enterprise-grade ETL workflows connecting multiple structured and unstructured endpoints
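The dimensional-schema work described above usually means a star schema: a fact table at a declared grain joined to descriptive dimensions. A minimal sketch with executable DDL via sqlite3; the grain, tables, and columns are hypothetical, not Les Schwab's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Dimension tables carry descriptive attributes.
    CREATE TABLE dim_date  (date_key INTEGER PRIMARY KEY, calendar_date TEXT, fiscal_month TEXT);
    CREATE TABLE dim_store (store_key INTEGER PRIMARY KEY, store_name TEXT, region TEXT);

    -- The fact table holds measures at a declared grain (one row per store per day),
    -- with foreign keys back to each dimension.
    CREATE TABLE fact_daily_sales (
        date_key   INTEGER REFERENCES dim_date(date_key),
        store_key  INTEGER REFERENCES dim_store(store_key),
        units_sold INTEGER,
        revenue    REAL,
        PRIMARY KEY (date_key, store_key)
    );
""")

tables = {name for (name,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
```

Declaring the grain up front (here, store-day) is what keeps the fact table's measures additive across dimensions.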
Data Structure and Solution Development (20%)
● Catalog existing data sources and enable access to resident and external data sources
● Develop programming, data modeling, and data integration solutions that support task automation
● Create physical database tables, views, and flat files for analytics research projects, reporting, analytics applications, and for publication to downstream subscribers
● Solve complex technical problems and mentor/support other technical staff in developing data models, data structures, and ETL solutions
● Create and maintain data dictionaries, data model diagrams, data mapping documents, data security and quality requirements, and related data platform documentation
● Work in an agile team environment to deliver timely analytics solutions and insights within a dynamic learning organization
Data Platform Quality and Governance (15%)
● Develop a quality assurance framework to ensure the delivery of high-quality data and data structure analyses to stakeholders
● Collaborate with Digital Services colleagues to select and adopt best practices within a culture of data management excellence
● Implement and continuously improve development best practices, version control, and deployment strategies to ensure product quality, agility, and recoverability
● Implement and test data access roles and permissions to ensure "least privileged" access to enterprise data and reduce the enterprise risk of data exposure
● Proactively identify opportunities to improve data workflows and query performance
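One common reading of the quality assurance framework described above: declarative checks run against each delivered data set before it reaches stakeholders. A stdlib-only sketch; the check names, record shape, and thresholds are illustrative, not the company's actual rules:

```python
def run_checks(rows, checks):
    """Apply each named predicate to the data set; return the names that fail."""
    return [name for name, predicate in checks.items() if not predicate(rows)]

# Hypothetical delivery: list of (customer_id, amount) records.
delivery = [(1, 19.99), (2, 5.00), (3, 42.50)]

checks = {
    "non_empty":        lambda rows: len(rows) > 0,
    "no_null_ids":      lambda rows: all(r[0] is not None for r in rows),
    "amounts_positive": lambda rows: all(r[1] > 0 for r in rows),
}

failures = run_checks(delivery, checks)
```

Keeping checks as named predicates makes the framework extensible: a new rule is one dictionary entry, and the failure list feeds naturally into alerting.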
Stakeholder Relationship and Vendor Management (10%)
● Support Data Stewards to establish and enforce guidelines for data collection, integration, and maintenance
● Provide expert advice to empower Information and Digital Services colleagues to understand and utilize Enterprise Data Platform Services and Data resources
● Collaborate with Portfolios across the enterprise to ensure that roles for data access follow "least privilege" principles while maintaining high data literacy and awareness of enterprise data available for use
● Make all stakeholders ethically aware of the unintended consequences of the use of data, and identify risks and opportunities to be communicated throughout the enterprise
Resource Development: Mentoring and Best Practices (5%)
● Provide mentoring and expert advice for the development of complex and high-performance SQL
● Promote best practices for building quality into data structures, ETL, and data solutions to improve efficiency and lower the probability of defects within the data platform and downstream
● Move beyond the scope of individual projects and promote, guard, and guide the organization toward common semantics and proper use of metadata
● Take responsibility for streamlining data pipelines across the enterprise, ensuring they are coordinated, consistent, efficient, and production-ready
Qualifications
Required Technical Skills/Knowledge:
● Expert knowledge of data modeling
● Expert developer of Advanced SQL (analytical functions)
● Deep knowledge of query performance tuning
● Deep knowledge of data analysis techniques for testing and troubleshooting
● Deep knowledge of ETL process development
● Expert proficiency in writing and maintaining data management documentation such as data dictionaries, data catalogs, and integration data maps
● Understanding of stakeholder processes for reporting and data analytics to serve business decision-making
● Understanding of data stewardship concepts
● Proficiency and demonstrated experience with a programming language such as Python, R, JavaScript, Java, C#, Go, or similar; advanced procedure or function development in T-SQL, Oracle PL/SQL, or equivalent is also acceptable
● Proficiency and demonstrated professional experience working with flat file data formats including delimited files, XML, and JSON
● Practical experience using solution delivery collaboration software such as ServiceNow, Jira, TFS, or similar
Additional Information
Educational/Experience Requirements:
● Bachelor's degree (BS or BA) in a STEM-related discipline with a major in Computer Science, Information Management, or Database Development and Analysis, or equivalent disciplines, or equivalent experience with appropriate time-in-role
● 6+ years of experience with data warehouse technical architectures, ETL/ELT, reporting/analytic tools, and scripting
● Experience with AWS services including S3, Data Pipeline, and cloud-based data warehouses
● Experience with data visualization tools such as Tableau or Birst is a plus
$84k-118k yearly est. 16h ago
DATA Engineer
Infinity Outsourcing
Senior data scientist job in Oregon
Requisition Title : DATA Engineer
Duration : 3-6 Months
Pay Rate : $55-70/hr W2 / C2C
Client : Manufacturers Bank
Experience : 7 + years
Domain - Actimize on Cloud
Must have skills:
Experience in any Compliance Technology ( AML Transaction Monitoring, CDD, Sanctions Screening etc.)
Experience with Actimize SAM on Cloud is a BIG PLUS or Actimize on Prem
Nice to have skills:
Experience with GCP/Azure, ETL, Operations Data Store (ODS)
Experience with any of the core banking platforms - FIS, Mission Lane, Fircosoft
Role : DATA Engineer + (Cloud or GCP) + Any compliance technology + Actimize
Compliance Technology (AML Transaction Monitoring, CDD, Sanctions Screening etc.)
JOB DESCRIPTION :
This urgent position needs your immediate attention and profiles; details are mentioned below.
The requirements highlighted below are mandatory: we need a Data Engineer with experience in any of the compliance technologies listed, plus GCP.
Senior Specialist (Data Engineering) role
$55-70 hourly 60d+ ago
BigData Engineer / Architect
Nitor Infotech
Senior data scientist job in Portland, OR
The hunt is for a strong Big Data Professional, a team player with the ability to manage effective relationships with a wide range of stakeholders (customers & team members alike). Incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved both within the internal teams, and with partner organizations and suppliers.
Role: Big Data Engineer
Location: Portland OR.
Duration: Full Time
Skill Matrix:
MapReduce - Required
Apache Spark - Required
Informatica PowerCenter - Required
Hive - Required
Apache Hadoop - Required
Core Java / Python - Highly Desired
Healthcare Domain Experience - Highly Desired
Job Description
Responsibilities and Duties
Participate in technical planning & requirements gathering phases including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications.
Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and troubleshooting of any existing issues.
Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models, to solve large-scale processing problems.
Design, enhance, and implement the ETL/data ingestion platform on the cloud.
Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for an enterprise-level data warehouse.
Capable of investigating, familiarizing with, and mastering new data sets quickly.
Strong troubleshooting and problem-solving skills in large data environments.
Experience with building data platforms on the cloud (AWS or Azure).
Experience using Python, Java, or another language to solve data problems.
Experience implementing SDLC best practices and Agile methods.
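The MapReduce model this role requires can be sketched without a cluster: map each record to key/value pairs, shuffle by key, then reduce each group. A pure-Python word-count sketch of the same pattern Hadoop or Spark distributes across machines:

```python
from collections import defaultdict
from functools import reduce

def map_phase(line):
    # Emit a (word, 1) pair for every word in the input line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word.
    return {key: reduce(lambda a, b: a + b, values) for key, values in groups.items()}

lines = ["big data big plans", "big wins"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
```

The framework's value is running the map and reduce phases in parallel over partitioned input; the per-record logic stays this simple.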
Qualifications
Required Skills:
Data architecture/ Big Data/ ETL environment
Experience with ETL design using tools Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi or equivalent
Big Data & Analytics solutions Hadoop, Pig, Hive, Spark, Spark SQL Storm, AWS (EMR, Redshift, S3, etc.)/Azure (HDInsight, Data Lake Design)
Building and managing hosted big data architecture; toolkit familiarity with Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi
Foundational data management concepts - RDM and MDM
Experience in working with JIRA/Git/Bitbucket/JUNIT and other code management toolsets
Strong hands-on knowledge of/using solutioning languages like: Java, Scala, Python - any one is fine
Healthcare Domain knowledge
Required Experience, Skills and Qualifications
Qualifications:
Bachelor's Degree with a minimum of 6 to 9+ years of relevant experience, or equivalent.
Extensive experience in data architecture/Big Data/ ETL environment.
Additional Information
All your information will be kept confidential according to EEO guidelines.
$84k-118k yearly est. 60d+ ago
Sr. Data Engineer
It Vision Group
Senior data scientist job in Portland, OR
Job Description
Title : Sr. Data Engineer
Duration: 12 Months+
Roles & Responsibilities
Perform data analysis according to business needs
Translate functional business requirements into high-level and low-level technical designs
Design and implement distributed data processing pipelines using Apache Spark, Apache Hive, Python, and other tools and languages prevalent in a modern analytics platform
Create and schedule workflows using Apache Airflow or similar job orchestration tooling
Build utilities, functions, and frameworks to better enable high-volume data processing
Define and build data acquisitions and consumption strategies
Build and incorporate automated unit tests, participate in integration testing efforts
Work with teams to resolve operational & performance issues
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and followed.
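The "automated unit tests" responsibility above is easiest to honor when pipeline transformations are pure functions. A sketch with a hypothetical cleansing step factored out of a Spark/Hive pipeline so it can be asserted on directly, without a cluster:

```python
def normalize_record(record):
    """Cleansing step a pipeline might apply per row:
    trim whitespace, lowercase emails, and coerce amounts to float."""
    return {
        "name": record["name"].strip(),
        "email": record["email"].strip().lower(),
        "amount": float(record["amount"]),
    }

# Exercise the transformation in isolation; the same function could be
# wrapped in a Spark UDF in the real pipeline.
raw = {"name": "  Ada Lovelace ", "email": " ADA@Example.COM ", "amount": "12.50"}
clean = normalize_record(raw)
```

Keeping the business logic framework-free is what makes the unit tests fast enough to run on every commit.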
Tech Stack
Apache Spark
Apache Spark Streaming using Apache Kafka
Apache Hive
Apache Airflow
Python
AWS EMR and S3
Snowflake
SQL
Other Tools & Technologies: PyCharm, Jenkins, GitHub.
Apache Nifi (Optional)
Scala (Optional)
$84k-118k yearly est. 2d ago
Senior Data Engineer
Advance Local 3.6
Senior data scientist job in Portland, OR
**Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position combines deep technical expertise in data engineering with team leadership responsibilities, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with data product teams across business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into the central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations.
The base salary range is $120,000 - $140,000 per year.
**What you'll be doing:**
+ Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake.
+ Partner with platform owners across business units to establish and maintain data integrations from third party systems into the central data platform.
+ Architect and maintain data infrastructure using infrastructure as code (IaC), ensuring reproducibility, version control, and disaster recovery capabilities.
+ Design and implement API integrations and event-driven data flows to support real time and batch data requirements.
+ Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities.
+ Partner with the Data Architect and data product to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs.
+ Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
+ Support rapid prototyping of new data products in collaboration with data product by building flexible, reusable data infrastructure components.
+ Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability.
+ Collaborate with data product teams, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization.
+ Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources.
+ Develop and maintain comprehensive documentation for data engineering processes and systems, architecture, integration patterns, and runbooks.
+ Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact.
+ Stay current with the emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.
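The event-driven integrations described above typically start with a webhook payload that must be validated and normalized before landing in the warehouse. A stdlib sketch; the payload shape and field names are invented for illustration, not any particular CDP's format:

```python
import json
from datetime import datetime, timezone

REQUIRED = {"event_id", "event_type", "occurred_at"}

def normalize_event(payload: str) -> dict:
    """Parse a webhook body, enforce required fields, and stamp load metadata."""
    event = json.loads(payload)
    missing = REQUIRED - event.keys()
    if missing:
        raise ValueError(f"event missing required fields: {sorted(missing)}")
    event["loaded_at"] = datetime.now(timezone.utc).isoformat()
    return event

body = json.dumps({
    "event_id": "evt-123",
    "event_type": "page_view",
    "occurred_at": "2025-01-01T00:00:00Z",
})
record = normalize_event(body)
```

Rejecting malformed events at the edge, rather than downstream in Snowflake, is what keeps the later quality monitoring signal clean.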
**Our ideal candidate will have the following:**
+ Bachelor's degree in computer science, engineering, or a related field
+ Minimum of seven years of experience in data engineering, with at least two years in a lead or senior technical role
+ Expert proficiency in Snowflake data engineering patterns
+ Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform
+ Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs)
+ Proven ability to work with third party APIs, webhooks, and data exports
+ Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure
+ Proven ability to design and implement API integrations and event-driven architecture
+ Experience with data modeling, data warehousing, and ETL processes at scale
+ Advanced proficiency in Python and SQL for data pipeline development
+ Experience with data orchestration tools (Airflow, dbt, Snowflake Tasks)
+ Strong understanding of data security, access controls, and compliance requirements
+ Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms
+ Excellent problem-solving skills and attention to detail
+ Strong communication and collaboration skills
**Additional Information**
Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.
Advance Local Media is one of the largest media groups in the United States, which operates the leading news and information companies in more than 20 cities, reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ******************** .
Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.
_Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._
_If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._
Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
$120k-140k yearly 60d+ ago
Data Engineer
Rxcloud
Senior data scientist job in Oregon
Data Engineer Job Description
A Data Engineer is a data professional who uses their expertise in data engineering and programming to build systems that collect, manage, and convert raw data into usable information for business analysts.
Requirements and skills
Previous 7+ years of experience as a data engineer or in a similar role
Technical expertise with data models, data mining, and segmentation techniques
Knowledge of programming languages (e.g., Java and Python)
Hands-on experience with SQL database design
Great numerical and analytical skills
Degree in Computer Science, IT, or a similar field; a master's degree is a plus.
Data engineering certification (e.g., IBM Certified Data Engineer) is a plus.
Experience with big data technologies such as Hadoop, Spark, and Kafka.
Familiarity with cloud platforms such as AWS, Google Cloud, and Azure.
Proficiency in SQL, Excel, Tableau, or other BI tools; a good understanding of statistical analysis and modelling techniques; and business acumen.
Responsibilities
Analyse and organize raw data.
Build data systems and pipelines.
Evaluate business needs and objectives.
Interpret trends and patterns.
Conduct complex data analysis and report on results.
Prepare data for prescriptive and predictive modelling.
Build algorithms and prototypes.
Combine raw information from different sources.
Explore ways to enhance data quality and reliability.
Identify opportunities for data acquisition.
Develop analytical tools and programs.
Collaborate with data scientists and architects on several projects.
$84k-118k yearly est. 60d+ ago
AWS Data Engineer Snowflake
Kynite
Senior data scientist job in Myrtle Point, OR
Orchestrated by adept technical architects with over fifty years of applied expertise, KYNITE is an advanced technology company specializing in the disciplines of: Blockchain, Cloud Services, Big Data & Analytics, Artificial Intelligence, Enterprise, Staff Augmentation and Managed Services
We are BigData Experts
We are Cloud Experts
We are Enterprise Architects
We are Artificial Intelligence Innovators
We are Technological Evangelists
We are Doers
We are Kynite
Additional Information
This job is only for individuals residing in the US.
US Citizens, Green Card holders, and EAD holders can apply.
W2, C2C.
All your information will be kept confidential according to EEO guidelines.
$86k-122k yearly est. 60d+ ago
Sr. Data Engineer
Concoracredit
Senior data scientist job in Beaverton, OR
As a Sr. Data Engineer, you'll help drive Concora Credit's Mission to enable customers to Do More with Credit - every single day.
The impact you'll have at Concora Credit:
We are seeking a Sr. Data Engineer with deep expertise in Azure and Databricks to lead the design, development, and optimization of scalable data pipelines and platforms. You'll be responsible for building robust data solutions that power analytics, reporting, and machine learning across the organization using Azure cloud services and Databricks.
We hire people, not positions. That's because, at Concora Credit, we put people first, including our customers, partners, and Team Members. Concora Credit is guided by a single purpose: to help non-prime customers
do more
with credit. Today, we have helped millions of customers access credit. Our industry leadership, resilience, and willingness to adapt ensure we can help our partners responsibly say yes to millions more. As a company grounded in entrepreneurship, we're looking to expand our team and are looking for people who foster innovation, strive to make an impact, and want to Do More! We're an established company with over 20 years of experience, but now we're taking things to the next level. We're seeking someone who wants to impact the business and play a pivotal role in leading the charge for change.
Responsibilities
As our Sr. Data Engineer, you will:
Design and develop scalable, efficient data pipelines using Azure Databricks
Build and manage data ingestion, transformation, and storage solutions leveraging Azure Data Factory, Azure Data Lake, and Delta Lake
Implement CI/CD for data workflows using tools like Azure DevOps, Git, and Terraform
Optimize performance and cost efficiency across large-scale distributed data systems
Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver reliable, reusable datasets
Provide guidance and mentor junior engineers and actively contribute to data platform best practices
Monitor, troubleshoot, and optimize existing pipelines and infrastructure to ensure reliability and scalability
These duties must be performed with or without reasonable accommodation.
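Much of the Delta Lake pipeline work listed above reduces to MERGE (upsert) semantics: match incoming rows to the target on a key, update matches, insert the rest. A pure-Python sketch of that logic; in the real pipeline this would be a Databricks MERGE INTO statement, and the keys and columns here are hypothetical:

```python
def merge_upsert(target, updates, key="id"):
    """Delta-style MERGE: update rows whose key matches, insert the rest."""
    merged = {row[key]: row for row in target}
    for row in updates:
        # Matched rows are overlaid field by field; unmatched rows are inserted.
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return list(merged.values())

target = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
updates = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
result = merge_upsert(target, updates)
```

Because the merge is keyed, re-running the same batch produces the same result, which is the idempotency property reliable pipelines depend on.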
We know experience comes in many forms and that many skills are transferable. If your experience is close to what we're looking for, consider applying. Diversity has made us the entrepreneurial and innovative company that we are today.
Qualifications
Requirements:
5+ years of experience in data engineering, with a strong focus on Azure cloud technologies
Experience with Azure Databricks, Azure Data Lake, and Data Factory, including PySpark, SQL, Python, and Delta Lake
Strong proficiency in Databricks and Apache Spark
Solid understanding of data warehousing, ETL/ELT, and data modeling best practices
Experience with version control, CI/CD pipelines, and infrastructure as code
Knowledge of Spark performance tuning, partitioning, and job orchestration
Excellent problem-solving skills and attention to detail
Strong communication and collaboration abilities across technical and non-technical teams
Ability to work independently and lead in a fast-paced, agile environment
Passion for delivering clean, high-quality, and maintainable code
Preferred Qualifications:
Experience with Unity Catalog, Databricks Workflows, and Delta Live Tables
Familiarity with DevOps practices or Terraform for Azure resource provisioning
Understanding of data security, RBAC, and compliance in cloud environments
Experience integrating Databricks with Power BI or other analytics platforms
Exposure to real-time data processing using Kafka, Event Hubs, or Structured Streaming
What's In It For You:
Medical, Dental and Vision insurance for you and your family
Relax and recharge with Paid Time Off (PTO)
6 company-observed paid holidays, plus 3 paid floating holidays
401k (after 90 days) plus employer match up to 4%
Pet Insurance for your furry family members
Wellness perks including onsite fitness equipment at both locations, EAP, and access to the Headspace App
We invest in your future through Tuition Reimbursement
Save on taxes with Flexible Spending Accounts
Peace of mind with Life and AD&D Insurance
Protect yourself with company-paid Long-Term Disability and voluntary Short-Term Disability
Concora Credit provides equal employment opportunities to all Team Members and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Employment-based visa sponsorship is not available for this role.
Concora Credit is an equal opportunity employer (EEO).
Please see the Concora Credit Privacy Policy for more information on how Concora Credit processes your personal information during the recruitment process and, if applicable, based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact caprivacynotice@concoracredit.com.
How much does a senior data scientist earn in Bend, OR?
The average senior data scientist in Bend, OR earns between $91,000 and $180,000 annually. This compares to the national average senior data scientist range of $90,000 to $170,000.