
Senior data scientist jobs in Vancouver, WA - 44 jobs

  • Data Scientist 4

    Lam Research 4.6company rating

    Senior data scientist job in Tualatin, OR

    Responsibilities:
    * Develop tools, metric measurement, and assessment methods for performance management and predictive modeling.
    * Develop dashboards for product management and executives to drive faster and better decision making.
    * Create accountability models for DPG-wide quality, I&W, inventory, product management KPIs, and business operations.
    * Improve DPG-wide quality, install and warranty (I&W), and inventory performance consistently from awareness, prioritization, and action through the availability of common data.
    * Collaborate with quality, install and warranty, and inventory program managers to analyze trends and patterns in data that drive required improvement in key performance indicators (KPIs).
    * Foster growth and utility of Cost of Quality within the company through correlation of I&W data and ECOGS, identification of causal relationships for quality events, and discovery of hidden costs throughout the network.
    * Improve data utilization via AI and automation, leading to real-time resolution and faster systemic action.
    * Lead and/or advise on multiple projects simultaneously and demonstrate organizational, prioritization, and time management proficiencies.

    Qualifications:
    * Bachelor's degree with 8+ years of experience; or master's degree with 5+ years of experience; or equivalent experience.
    * Basic understanding of AI and machine learning, and the ability to work with Data Scientists to use AI to solve complex, challenging problems, leading to efficiency and effectiveness improvements.
    * Ability to define problem statements and objectives, develop an analysis solution approach, and execute the analysis.
    * Basic knowledge of Lean Six Sigma processes, statistics, or quality systems experience.
    * Ability to work on multiple problems simultaneously.
    * Ability to present conclusions and recommendations to executive audiences.
    * Ownership mindset to drive solutions and positive outcomes.
    * Excellent communication and presentation skills, with the ability to present to audiences at multiple levels in the company.
    * Willingness to adapt best practices via benchmarking.
    * Experience in semiconductor fabrication, semiconductor equipment operations, or related industries is a plus.
    * Demonstrated ability to change processes and methodologies for capturing and interpreting data.
    * Demonstrated success in using structured problem-solving methodologies and quality tools to solve complex problems.
    * Knowledge of programming environments such as Python, R, MATLAB, SQL, or equivalent.
    * Experience in structured problem-solving methodologies such as PDCA, DMAIC, and 8D, and quality tools.

    Our commitment: We believe it is important for every person to feel valued, included, and empowered to achieve their full potential. By bringing unique individuals and viewpoints together, we achieve extraordinary results. Lam is committed to and reaffirms support of equal opportunity in employment and non-discrimination in employment policies, practices and procedures on the basis of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, age, sexual orientation, or military and veteran status, or any other category protected by applicable federal, state, or local laws. It is the Company's intention to comply with all applicable laws and regulations. Company policy prohibits unlawful discrimination against applicants or employees.

    Lam offers a variety of work location models based on the needs of each role. Our hybrid roles combine the benefits of on-site collaboration with colleagues and the flexibility to work remotely, and fall into two categories: On-site Flex and Virtual Flex. In an "On-site Flex" role you'll work 3+ days per week on-site at a Lam or customer/supplier location, with the opportunity to work remotely for the balance of the week. In a "Virtual Flex" role you'll work 1-2 days per week on-site at a Lam or customer/supplier location, and remotely the rest of the time.
    $71k-91k yearly est. 30d ago

  • Senior Data Scientist - Marketing

    Mercury 3.5company rating

    Senior data scientist job in Portland, OR

    In the 1840s, Charles Babbage and Ada Lovelace worked on an early version of the computer known as the "Analytical Engine". In the words of computing historian Doron Swade, "What Lovelace saw...was that numbers could represent entities other than quantity", and together they laid the foundation for general-purpose computing. While much has changed since then, the importance of numbers in building great technology remains. We're looking for Data Scientists who can help us build our analytics engine by making meaning from data and identifying opportunities for improvement. As a Marketing Data Scientist at Mercury, you will partner with senior Marketing leaders and the Performance Marketing team, as well as Brand Marketing, Product Marketing, and Lifecycle Marketing, to acquire, engage, and convert Mercury customers around the globe. You will develop a broad skill set as a full-stack Data Scientist working on projects end to end, and build deep domain expertise at the intersection of Data Science and Marketing. You will set the direction for our marketing measurement strategy and ensure it fits within Mercury's broader growth, product, and company goals.

    Here are some things you'll do on the job:
    * Collaborate with Marketing stakeholders and other cross-functional partners to identify impactful business questions, conduct deep-dive analyses, and communicate findings and actionable recommendations to audiences at all levels to inform data-driven decisions.
    * Collaborate with other Data Scientists and Data Engineers to build and improve marketing measurement capabilities.
    * Develop privacy-resilient measurement strategies using techniques like synthetic control methods and incrementality testing to maintain attribution quality as the industry shifts away from third-party cookies and device identifiers.
    * Develop and apply marketing measurement capabilities such as A/B testing, causal inference, Marketing Mix Modeling (MMM), and Multi-touch Attribution (MTA) to evaluate the performance of our marketing efforts.
    * Build and deploy machine learning and statistical models, such as customer lifetime value, lead scoring, segmentation, and time-series forecasting, end to end.
    * Influence and partner with engineering, design, and business teams to implement data-based recommendations that will improve entrepreneurs' lives and generate revenue for Mercury.

    You should:
    * Have 5+ years of experience working with marketing teams across full-funnel measurement, from brand awareness and content marketing to product adoption and customer retention.
    * Have expertise in marketing measurement strategies, including brand lift studies, geo-experiments, survey-based measurement, cross-channel attribution, causal impact analysis, and experimentation design, to identify growth opportunities.
    * Have fluency in SQL and other statistical programming languages (e.g., Python, R).
    * Have experience with marketing analytics tools such as Google Analytics, Amplitude, social listening platforms, email/CRM analytics (e.g., Salesforce, HubSpot), and customer data platforms.
    * Have experience crafting data pipelines and dashboards, and understand different database structures.
    * Be super organized and communicative. You will need to prioritize and manage projects to maximize impact, supporting multiple stakeholders with varying quantitative skill levels.

    The total rewards package at Mercury includes base salary, equity (stock options), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate's experience, expertise, geographic location, and internal pay equity relative to peers. Our target new hire base salary ranges for this role are: US employees (any location): $200,700 - $250,900; Canadian employees (any location): CAD 189,700 - 237,100.

    *Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC.

    Mercury values diversity & belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic. We are committed to providing reasonable accommodations throughout the recruitment process for applicants with disabilities or special needs. If you need assistance or an accommodation, please let your recruiter know once you are contacted about a role. We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT. As part of the evaluation process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on January 22, 2024. [Please see the independent bias audit report covering our use of Covey for more information.] #LI-AC1
    $200.7k-250.9k yearly Auto-Apply 4d ago
  • AI & GenAI Data Scientist-Senior Associate

    PwC 4.8company rating

    Senior data scientist job in Portland, OR

    Industry/Sector: Not Applicable
    Specialism: Data, Analytics & AI
    Management Level: Senior Associate

    At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

    Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

    Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
    * Respond effectively to the diverse perspectives, needs, and feelings of others.
    * Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
    * Use critical thinking to break down complex concepts.
    * Understand the broader objectives of your project or role and how your work fits into the overall strategy.
    * Develop a deeper understanding of the business context and how it is changing.
    * Use reflection to develop self-awareness, enhance strengths and address development areas.
    * Interpret data to inform insights and recommendations.
    * Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

    Minimum Degree Required: Bachelor's Degree
    Minimum Year(s) of Experience: 4 year(s)

    Demonstrates thorough-level abilities and/or a proven record of success managing the identification and addressing of client needs:
    * Building GenAI and AI solutions, including but not limited to analytical model development and implementation, prompt engineering, general all-purpose programming (e.g., Python), testing, communication of results, front-end and back-end integration, and iterative development with clients
    * Documenting and analyzing business processes for AI and Generative AI opportunities, including gathering of requirements, creation of initial hypotheses, and development of a GenAI and AI solution approach
    * Collaborating with client teams to understand their business problems and select the appropriate analytical models and approaches for AI and GenAI use cases
    * Designing and solutioning AI/GenAI architectures for clients, specifically for plugin-based solutions (i.e., ChatClient applications with plugins) and custom AI/GenAI application builds
    * Processing unstructured and structured data to be consumed as context for LLMs, including but not limited to embedding of large text corpora, generative development of SQL queries, and building connectors to structured databases
    * Supporting management of the daily operations of a global data and analytics team on client engagements, reviewing developed models, providing feedback, and assisting in analysis
    * Directing data engineers and other data scientists to deliver efficient solutions that meet client requirements
    * Leading and contributing to the development of proofs of concept, pilots, and production use cases for clients while working in cross-functional teams
    * Structuring, writing, communicating, and facilitating client presentations
    * Directing associates through coaching, providing feedback, and guiding work performance

    Demonstrates thorough abilities and/or a proven record of success learning and performing in functional and technical capacities, including the following areas:
    * Managing AI/GenAI application development teams, including back-end and front-end integrations
    * Using Python (e.g., Pandas, NLTK, Scikit-learn, Keras), common LLM development frameworks (e.g., LangChain, Semantic Kernel), relational storage (SQL), and non-relational storage (NoSQL)
    * Experience in analytical techniques such as machine learning, deep learning, and optimization
    * Vectorization and embedding, prompt engineering, and RAG (retrieval-augmented generation) workflow development
    * Understanding of, or hands-on experience with, Azure, AWS, and/or Google Cloud platforms
    * Experience with Git version control; unit, integration, and end-to-end testing; CI/CD; release management; etc.

    Travel Requirements: Up to 80%
    Job Posting End Date:

    Learn more about how we work: **************************

    PwC does not intend to hire experienced or entry level job seekers who will need, now or in the future, PwC sponsorship through the H-1B lottery, except as set forth within the following policy: ***********************************

    As PwC is an equal opportunity employer, all qualified applicants will receive consideration for employment at PwC without regard to race; color; religion; national origin; sex (including pregnancy, sexual orientation, and gender identity); age; disability; genetic information (including family medical history); veteran, marital, or citizenship status; or any other status protected by law. For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, the San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for employment in accordance with these laws. At PwC, we recognize that conviction records may have a direct, adverse, and negative relationship to responsibilities such as accessing sensitive company or customer information, handling proprietary assets, or collaborating closely with team members. We evaluate these factors thoughtfully to establish a secure and trusted workplace for all.

    Applications will be accepted until the position is filled or the posting is removed, unless otherwise set forth on the following webpage. Please visit this link for information about anticipated application deadlines: ***************************************

    The salary range for this position is $77,000 - $202,000. Actual compensation within the range will be dependent upon the individual's skills, experience, qualifications and location, and applicable employment laws. All hired individuals are eligible for an annual discretionary bonus. PwC offers a wide range of benefits, including medical, dental, vision, 401k, holiday pay, vacation, personal and family sick leave, and more. To view our benefits at a glance, please visit the following link: ***********************************
    $77k-202k yearly Auto-Apply 11d ago
  • Data Scientist, Senior

    Chopine Analytic Solutions

    Senior data scientist job in Lewisville, WA

    Job Name: Data Scientist
    Level: Senior
    Remote Work: No
    Required Clearance: TS/SCI
    Pending funding

    RESPONSIBILITIES:
    * Work with large structured/unstructured data in a modeling and analytical environment to define and create streamlined processes for evaluating unique datasets and solving challenging intelligence issues
    * Lead and participate in the design of solutions and the refinement of pre-existing processes
    * Work with customer stakeholders, program managers, and product owners to translate roadmap features into components/tasks, estimate timelines, identify resources, suggest solutions, and recognize possible risks
    * Use exploratory data analysis techniques to identify meaningful relationships, patterns, or trends in complex data
    * Combine applied mathematics, programming skills, analytical techniques, and data to provide impactful insights for decision makers
    * Research and implement optimization models, strategies, and methods to inform data management activities and analysis
    * Apply big data analytic tools to large, diverse sets of data to deliver impactful insights and assessments
    * Conduct peer reviews to improve the quality of workflows, procedures, and methodologies
    * Help build high-performing teams; mentor team members, providing development opportunities to increase their technical skills and knowledge

    REQUIRED QUALIFICATIONS:
    * TS/SCI clearance with the ability to obtain a CI/Poly
    * 10+ years of relevant experience (a combination of years of experience and professional certifications/trainings can be used in lieu of years of experience)
    * Experience supporting IC operations
    * Expert-level knowledge to manipulate and analyze structured/unstructured data
    * Demonstrated experience in data mining and developing/maintaining/manipulating databases
    * Demonstrated experience in identifying potential systems enhancements, new capabilities, concept demonstrators, and capability business cases
    * Demonstrated experience using GOTS data processing and analytics capabilities to modernize analytic methodologies
    * Demonstrated experience using COTS statistical software (MapLarge, Tableau, MATLAB) for advanced statistical analysis of operational tools and for data visualization that enables large datasets to be interrogated, revealing patterns, relationships, and anticipatory behavioral likelihoods that may not be apparent using traditional single-discipline means
    * Knowledge of advanced analytic methodologies, and experience implementing and executing those methodologies to enable customer satisfaction
    * Demonstrated experience directing the activities of highly skilled technical and analytical teams responsible for developing solutions to highly complex analytical/intelligence problems
    * Experience conducting multi-INT and technology-specific research to support mission operations
    * Effective communication skills; capable of providing highly detailed information in an easy-to-understand format

    DESIRED QUALIFICATIONS:
    * Master's degree in Data Science or a related technical field
    * Experience developing and working with Artificial Intelligence and Machine Learning (AI/ML)
    * Demonstrated experience with advanced programming techniques, using one or more of the following: HTML5/JavaScript, ArcObjects, Python, Model Builder, Oracle, SQL, GIScience, Geospatial Analysis, Statistics, ArcGIS Desktop, ArcGIS Server, ArcSDE, ArcIMS
    * Experience using .NET, Python, C++, and/or Java programming for web interface development and geodatabase development
    * Experience building and maintaining databases of GEOINT, SIGINT, or OSINT data related to area-of-interest needs
    * Data visualization experience, which may include matrix analytics, network analytics, and graphing data, that assists the analytical workforce in generating common operational pictures depicting fused intelligence and information to support informal assessments and finished products

    Chopine Analytic Solutions is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy and gender identity), national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, any other non-merit factor, or any other characteristic protected by law.
    $117k-166k yearly est. 60d+ ago
  • Lead Data Scientist GenAI, Strategic Analytics - Data Science

    Deloitte 4.7company rating

    Senior data scientist job in Portland, OR

    Deloitte is at the leading edge of GenAI innovation, transforming Strategic Analytics and shaping the future of Finance. We invite applications from highly skilled and experienced Lead Data Scientists ready to drive the development of our next-generation GenAI solutions.

    The Team
    Strategic Analytics is a dynamic part of our Finance FP&A organization, dedicated to empowering executive leaders across the firm, as well as our partners in financial and operational functions. Our team harnesses the power of cloud computing, data science, AI, and strategic expertise, combined with deep institutional knowledge, to deliver insights that inform our most critical business decisions and fuel the firm's ongoing growth. GenAI is at the forefront of our innovation agenda and a key strategic priority for our future. We are rapidly developing groundbreaking products and solutions poised to transform both our organization and our clients. As part of our team, the selected candidate will play a pivotal role in driving the success of these high-impact initiatives. Recruiting for this role ends on January 31, 2026.

    Work You'll Do
    Client Engagement & Solution Scoping
    * Partner with stakeholders to analyze business requirements, pain points, and objectives relevant to GenAI use cases.
    * Facilitate workshops to identify, prioritize, and scope impactful GenAI applications (e.g., text generation, code synthesis, conversational agents).
    * Clearly articulate GenAI's value proposition, including efficiency gains, risk mitigation, and innovation.

    Solution Architecture & Design
    * Architect holistic GenAI solutions, selecting and customizing appropriate models (GPT, Llama, Claude, Zora AI, etc.).
    * Design scalable integration strategies for embedding GenAI into existing client systems (ERP, CRM, KM platforms).
    * Define and govern reliable, ethical, and compliant data sourcing and management.

    Development & Customization
    * Lead model fine-tuning, prompt engineering, and customization for client-specific needs.
    * Oversee the development of GenAI-powered applications and user-friendly interfaces, ensuring robustness and exceptional user experience.
    * Drive thorough validation, testing, and iteration to ensure quality and accuracy.

    Implementation, Deployment & Change Management
    * Manage solution rollout, including cloud setup, configuration, and production deployment.
    * Guide clients through adoption: deliver training, create documentation, and provide enablement resources for users.

    Risk, Ethics & Compliance
    * Lead efforts in responsible AI, ensuring safeguards against bias, privacy breaches, and unethical outcomes.
    * Monitor performance, implement KPIs, and manage model retraining and auditing processes.

    Stakeholder Communication
    * Prepare executive-level reports, dashboards, and demos to summarize progress and impact.
    * Coordinate across internal teams, tech partners, and clients for effective project delivery.

    Continuous Improvement & Thought Leadership
    * Stay current on GenAI trends, best practices, and emerging technologies; share insights across teams.
    * Mentor junior colleagues, promote knowledge transfer, and contribute to reusable methodologies.

    Qualifications
    Required:
    * Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, or a related field.
    * 5+ years of hands-on experience delivering machine learning or AI solutions, preferably including generative AI.
    * Independent thinker who can create the vision and execute on transforming data into high-end client products.
    * Demonstrated accomplishments in the following areas:
    * Deep understanding of GenAI models and approaches (LLMs, transformers, prompt engineering).
    * Proficiency in Python (PyTorch, TensorFlow, Hugging Face), Databricks, ML pipelines, and cloud-based deployment (Azure, AWS, GCP).
    * Experience integrating AI into enterprise applications, building APIs, and designing scalable workflows.
    * Knowledge of solution architecture, risk assessment, and mapping technology to business goals.
    * Familiarity with agile methodologies and iterative delivery.
    * Commitment to responsible AI, including data ethics, privacy, and regulatory compliance.
    * Ability to travel 0-10%, on average, based on the work you do and the clients and industries/sectors you serve.
    * Limited immigration sponsorship may be available.

    Preferred:
    * Relevant certifications: may include Google Cloud Professional ML Engineer, Microsoft Azure AI Engineer, AWS Certified Machine Learning, or specialized GenAI/LLM credentials.
    * Experience with data visualization tools such as Tableau.

    The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Deloitte, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $102,500 - $188,900. You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
    Information for applicants with a need for accommodation: ************************************************************************************************************ EA_FA_ExpHire

    Recruiting tips
    From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

    Benefits
    At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

    Our people and culture
    Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

    Our purpose
    Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Learn more.

    Professional development
    From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

    As used in this posting, "Deloitte" means Deloitte Services LP, a subsidiary of Deloitte LLP. Please see ************************* for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.

    Requisition code: 316523
    Job ID: 316523
    $102.5k-188.9k yearly 13d ago
  • Principal, Data Scientist, Nike Sport Research Lab

    Nike 4.7company rating

    Senior data scientist job in Beaverton, OR

    The Nike Sport Research Lab (NSRL) is a multidisciplinary team of researchers, innovators, and scientists who lead with science to make athletes* measurably better. We deliver validated insights and innovative capabilities to drive the future of Nike products and services. (*If you have a body, you are an athlete.)

    WHO YOU'LL WORK WITH
    The Principal Data Scientist partners with research scientists in physiology, biomechanics, perception, and behavioral science, as well as other data scientists, engineers, product managers, designers, and solution architects. This role reports to the Director of Data Science and collaborates across dynamic, multi-functional teams to accelerate innovative product development. The team is recognized for its culture of curiosity, collaboration, and impact within Nike and the broader industry.

    WHO WE ARE LOOKING FOR
    Nike Sport Research Lab seeks a Principal Data Scientist who combines deep technical expertise with a passion for advancing human performance. The ideal candidate is a hands-on creative problem-solver, a collaborative leader, and a strategic thinker who thrives in multidisciplinary environments. This individual brings a proven track record of leveraging science and technology to drive innovation in sport science, product development, or related fields.

    Required Qualifications:
    * PhD or Master's in Statistics, Computer Science, Electrical/Biomedical Engineering, Economics, or a related field; any suitable combination of education, experience, and training will be accepted.
    * 7-9+ years of demonstrated experience in Data Science, including expertise in data synthesis, machine learning, statistical modeling, causal inference, or signal processing.
    * Demonstrated expertise with Python and data analysis stacks (such as NumPy, SciPy, pandas, Spark, etc.).
    * Experience with AWS infrastructure, Spark, Databricks, and SQL.
    * Demonstrated practice of software engineering best practices within technical orgs (git for version control, structured code reviews, automated tests, CI/CD pipelines, and maintainable, reproducible build environments).
    * Proven ability to define, initiate, and supervise analytics and modeling efforts.
    * Strong track record of translating business needs into research requirements and strategy.
    * Ability to interpret and implement methods described in research papers and articles in signal processing, machine learning, deep learning, mathematical modeling, and related fields.
    * Excellent interpersonal and communication skills, with experience communicating technical information in both written and verbal formats.

    Desired Qualifications:
    * Experience with experimental design and statistical inference, including adaptive experimentation, A/B testing, bandit optimization, and causal inference.
    * Proficiency in advanced machine learning methods, including classification, regression, deep learning, and computer vision algorithms.
    * Familiarity with time-series analysis, predictive modeling, and forecasting using large-scale, real-world datasets (e.g., sensor, image, or behavioral data).
    * Experience developing and deploying models that learn from expert input and emulate expert decision-making.
    * Understanding of, or working knowledge in, cardiovascular physiology, biomechanics, perception science, or related fields.
    * Experience with bio-signals, wearables, and inertial measurement technologies.
    * Peer-reviewed machine learning publications or other demonstrable contributions (e.g., GitHub, apps, products).
    * Experience leading or mentoring teams of data scientists and engineers in research or product settings.
    * Experience designing and delivering clear, impactful data visualizations for technical and non-technical audiences.
    * Experience building and maintaining robust data pipelines, ensuring data quality, integrity, and accessibility for analysis and modeling.

    WHAT YOU'LL WORK ON
    As Principal Data Scientist, you will play a pivotal role in advancing Nike's sport science capabilities through innovative data science and machine intelligence. You'll leverage your expertise to solve complex problems, drive impactful research, and deliver scalable solutions that power athlete and product innovation. You will:
    * Apply expertise in signal processing, machine learning, sensor fusion, computer vision, and other areas of machine intelligence to solve complex problems using large, diverse datasets.
    * Develop and deploy predictive models in production environments to power personalized digital experiences for athletes and consumers.
    * Drive the development of large-scale experimentation capabilities and help identify causal factors influencing athletic performance and behavior.
    * Lead the creation, acquisition, and maintenance of high-quality datasets that enable ongoing exploration and unlock value for NSRL and partner organizations.
    * Investigate and evaluate new technologies, software, machine learning techniques, and capabilities to keep NSRL at the forefront of innovation.
    * Communicate technical information, methods, and findings clearly and concisely to technical and non-technical teams, partners, and senior management.
    * Create specifications and documentation for external professional services firms to achieve team objectives efficiently.
    * Stay connected with machine intelligence innovations across the industry and share insights with team members and partners throughout Nike.

    We offer a number of accommodations to complete our interview process, including screen readers, sign language interpreters, accessible and single-location in-person interviews, closed captioning, and other reasonable modifications as needed. If you discover, as you navigate our application process, that you need assistance or an accommodation due to a disability, please complete the Candidate Accommodation Request Form.
    $100k-124k yearly est. Auto-Apply 4d ago
  • Data Scientist

    Eyecarecenterofsalem

    Senior data scientist job in Portland, OR

    Job Description: We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math, and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research. Your goal will be to help our company analyze trends to make better decisions. Responsibilities Identify valuable data sources and automate collection processes Undertake preprocessing of structured and unstructured data Analyze large amounts of information to discover trends and patterns Build predictive models and machine-learning algorithms Combine models through ensemble modeling Present information using data visualization techniques Propose solutions and strategies to business challenges Collaborate with engineering and product development teams Requirements and skills Proven experience as a Data Scientist or Data Analyst Experience in data mining Understanding of machine learning and operations research Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop) Analytical mind and business acumen Strong math skills (e.g. statistics, algebra) Problem-solving aptitude Excellent communication and presentation skills BSc/BA in Computer Science, Engineering, or relevant field; a graduate degree in Data Science or other quantitative field is preferred
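The "combine models through ensemble modeling" responsibility above can be sketched in miniature. This is an illustrative example only, not part of the posting: the model outputs and the 0.5 threshold are invented, and in practice you would combine fitted models from a library such as scikit-learn rather than hand-written probability lists.

```python
# Illustrative only: combining several classifiers' probability estimates
# by simple averaging, one basic form of ensemble modeling.

def average_ensemble(predictions):
    """Combine per-model probability estimates by simple averaging."""
    n_models = len(predictions)
    n_samples = len(predictions[0])
    return [sum(p[i] for p in predictions) / n_models for i in range(n_samples)]

# Hypothetical outputs from three classifiers for four samples:
model_a = [0.9, 0.2, 0.6, 0.1]
model_b = [0.8, 0.3, 0.5, 0.2]
model_c = [0.7, 0.1, 0.7, 0.3]

combined = average_ensemble([model_a, model_b, model_c])
labels = [1 if p >= 0.5 else 0 for p in combined]
```

Averaging is the simplest ensemble; stacking or weighted voting follow the same pattern with a different combination rule.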
    $73k-104k yearly est. 6d ago
  • Supply Chain Data Scientist

    HP Inc. 4.9 company rating

    Senior data scientist job in Vancouver, WA

    HP is the world's leading personal systems and printing company, dedicated to creating technology that enriches lives everywhere. Our innovation thrives on the passion, talent, and curiosity of our people, each bringing unique perspectives, expertise, and ideas to shape the future of how the world works and lives. **Join HP's Strategic Planning and Modeling (SPaM) Data Science Team - Where Vision Meets Impact** Are you a visionary thinker with a passion for learning and innovation? Join our **Strategic Planning and Modeling (SPaM) Data Science team** , where we tackle complex supply chain challenges and drive strategic decision-making through advanced analytics. As a member of SPaM, you will: ✅ **Evaluate** supply chain investment and improvement opportunities. ✅ **Develop** cutting-edge predictive, prescriptive, and generative AI models to optimize business outcomes. ✅ **Deliver** strategic recommendations that shape HP's global supply chain. ✅ **Collaborate** with senior executives to set direction and drive innovation. ✅ **Lead** with technical and analytical expertise to translate theory into impactful business solutions through strong execution and cross-functional orchestration. At SPaM, we bring over **35 years of supply chain innovation** . We value flexibility, invest in your growth through **formal training and mentorship** , and foster a culture where creativity and data-driven insights power the future. **If pioneering data-based innovations to transform HP's supply chain excites you, we want to hear from you!** **Responsibilities** + **Collaborate & Strategize:** Work closely with SPaM team members and cross-functional partners to define business challenges, develop analytical frameworks, gather key data, evaluate solutions, and deliver well-supported recommendations to leadership.
+ **Drive Supply Chain Innovation:** Develop and implement cutting-edge solutions in inventory management, working capital optimization, network design, forecasting, complexity reduction, product planning, manufacturing, and distribution to enhance efficiency and decision-making. + **Transform Data into Impact:** Leverage data visualization techniques to craft compelling narratives that communicate insights, drive action, and demonstrate business value. + **Build & Scale Advanced Analytics Solutions:** Design, deploy, and support predictive, prescriptive, and generative analytics models that drive measurable impact at scale. + **Enable Change & Adoption:** Apply proven change management strategies to facilitate smooth adoption of new analytics-driven solutions across the organization. + **Stay Ahead of the Curve:** Continuously enhance expertise in data science and supply chain management by staying informed on the latest advancements and industry trends. **Education and Experience Required** + MS/PhD in a business analytics field (e.g.,
Industrial Engineering, Management Science, Operations Research, Statistics, Data Science) or MBA with Supply Chain or Operations concentration + Operational experience in one of the following: inventory optimization, demand forecasting, network design, planning, forecasting, procurement, logistics, warranty, product quality, product variety management, or similar supply chain domain area + 10+ years of experience **Knowledge and Skills** + Expertise with inventory optimization math and deployment of the mathematical formulas into a business process + Applied experience with machine learning and optimization on large datasets + Proficient with Python, SQL, GitHub Copilot, PowerBI, Excel + Experience with GenAI + Passion for data science & delivering data-driven outcomes + Demonstrated innovation across the supply chain domains (Plan, Source, Make, Deliver, Return) + Demonstrated practical experience in an operational environment + Experience implementing and handing off operational analytical solutions to business users + Demonstrated ability to deal well with ambiguity and work collaboratively in a fast-paced environment + Demonstrated ability to communicate concisely with diplomacy, credibility, and confidence + Experience with data visualization and data presentation in a clear and effective manner + Experience leading, collaborating with, and delivering results in cross-functional teams + Familiarity with statistics, forecasting, simulation, and spreadsheet modeling + A desire to be a part of the HP team with a commitment to build upon our brand that is synonymous with innovation, trust, reliability, and sustainability The base pay range for this role is **$147,050 to $232,850** annually with additional opportunities for pay in the form of bonus and/or equity (applies to US candidates only). Pay varies by work location, job-related knowledge, skills, and experience.
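As one concrete instance of the "inventory optimization math" the skills list names, here is a minimal sketch of the textbook safety-stock and reorder-point formulas under normally distributed daily demand. All numbers are invented for illustration; a real deployment would calibrate service levels, demand statistics, and lead times from data, and this is not HP's actual methodology.

```python
# A hedged sketch of classic inventory optimization math:
# safety stock = z * sigma_d * sqrt(L) for demand variability over lead time L.
import math

def safety_stock(z, demand_std_per_day, lead_time_days):
    """Buffer stock for a given service level z under daily demand noise."""
    return z * demand_std_per_day * math.sqrt(lead_time_days)

def reorder_point(mean_demand_per_day, lead_time_days, ss):
    """Reorder when inventory falls to expected lead-time demand plus safety stock."""
    return mean_demand_per_day * lead_time_days + ss

z_95 = 1.645  # approximate z-score for a 95% cycle service level
ss = safety_stock(z_95, demand_std_per_day=40, lead_time_days=9)
rop = reorder_point(mean_demand_per_day=250, lead_time_days=9, ss=ss)
```

Deploying such a formula "into a business process", as the posting puts it, mostly means wiring these inputs to live demand and lead-time data and surfacing the reorder point to planners.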
**Benefits** HP offers a comprehensive benefits package for this position, including: + Health insurance + Dental insurance + Vision insurance + Long term/short term disability insurance + Employee assistance program + Flexible spending account + Life insurance + Generous time off policies, including: + 4-12 weeks fully paid parental leave based on tenure + 11 paid holidays + Additional flexible paid vacation and sick leave (US benefits overview (********************************** ) The compensation and benefits information are accurate as of the date of this posting. The Company reserves the right to modify this information at any time, with or without notice, subject to applicable law. **\#LI-POST** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
    $147.1k-232.9k yearly 10d ago
  • Lead Data Analyst

    Robert Half 4.5 company rating

    Senior data scientist job in Newberg, OR

    As a Lead Data Analyst, you will be responsible for understanding business needs, translating them into data analytics solutions, and taking on a leadership role in executing projects requiring expertise in data analytics and relevant technologies. You will also serve as a resource to other data analysts to define and advance best practices. Key Responsibilities + Drive data analytics initiatives and provide technical leadership for individual projects and deliverables. + Lead discussions across enterprise functions to leverage domain expertise and capabilities. + Coordinate with business process owners for SAP objects to ensure data integrity and alignment with business requirements. + Support data migration projects, including data mapping and validation processes. + Lead and assist with data validation efforts across multiple SAP objects. + Manage individual tasks independently and provide leadership for owned projects. + Engage with business stakeholders to identify opportunities for applying data analytics concepts to real-world business challenges. + Apply expertise to select appropriate data analytics methods and tools for specific use cases. + Deliver presentations and communications that build data science credibility with medium-sized groups. + Lead root cause analysis and advanced troubleshooting efforts. + Translate stakeholder requirements into actionable data analytics solutions. + Execute end-to-end data analytics projects: from problem definition to code/model validation and deployment. Requirements Required Qualifications + Bachelor's Degree in Computer Science, Data Analytics, Business Analytics, Data Science, Computer Information Systems, or similar technical discipline. + 4+ years of work experience. + Minimum 3 years of SQL experience (scripting, creating views/tables for dashboards). + Experience with complex SQL queries in fast-paced environments. + Skilled in data transformation, cleansing, relational database, and data modeling concepts.
+ Experience with SAP data structures and business process integration. + Knowledge of data governance concepts (definitions, calculations, sources, lineage). + Skilled in BI and analytics platforms (Power BI, Tableau). Preferred Qualifications + Master's or higher degree in a related discipline. + Experience with Power Query. + Experience with data migration projects and ETL processes. Technology Doesn't Change the World, People Do. Robert Half is the world's first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great companies. We offer contract, temporary and permanent placement solutions for finance and accounting, technology, marketing and creative, legal, and administrative and customer support roles. Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity - whenever you choose - even on the go. Download the Robert Half app (https://www.roberthalf.com/us/en/mobile-app) and get 1-tap apply, notifications of AI-matched jobs, and much more. All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information. © 2025 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans. By clicking "Apply Now," you're agreeing to Robert Half's Terms of Use (https://www.roberthalf.com/us/en/terms) and Privacy Notice (https://www.roberthalf.com/us/en/privacy) .
    $99k-138k yearly est. 33d ago
  • Senior Data Architect

    Wizeline 4.3 company rating

    Senior data scientist job in Beaverton, OR

    Senior Data Architect The Company Wizeline is a global digital services company helping mid-size to Fortune 500 companies build, scale, and deliver high-quality digital products and services. We thrive in solving our customers' challenges through human-centered experiences, digital core modernization, and intelligence everywhere (AI/ML and data). We help them succeed in building digital capabilities that bring technology to the core of their business. Your Day-to-Day IMPORTANT: This position is based in Beaverton, Oregon, and requires office attendance. Partner with information architecture to build logical/canonical, dimensional and physical data models for Databricks, Snowflake and Hive big data implementations to support data, BI Analytics and AI/ML products aligned with conceptual data models Work with product teams to define data sources by partnering with information and enterprise architecture. Work with product teams to define Key Performance Indicators and supporting data attributes and business rules Work with product teams to ensure data meets business requirements Optimize data designs and rationalize data objects across data products to reduce data redundancies and duplication Analyze existing data stores and interfaces to support current state and future data models Work with engineering squads to create physical data designs and business rule implementation Partner with enterprise data management and strategy on business glossary implementation and governance and stewardship Work with report developers to align dataset design with report design Partner with information architecture to provide feedback on conceptual models and review logical and physical data models Partner with information architecture on naming standards Provide data consistency across multiple scrum teams within a capability Facilitate data requirements elicitation and modeling sessions, including the use of interviews, document analysis, workshops, etc.
Provide guidance and partner with squads by applying knowledge of data design Assist with the development and enforcement of data modeling standards Participate in data definition and data management reviews Partner with source system on new initiatives and enhancements Interface with transformational initiatives Partner with engineering and information architect on integrated/aggregated layer guidelines Establish and maintain processes to support and grow data modelling practices Are You a Fit? Sounds awesome, right? Now, let's make sure you're a good fit for the role. Must-have Skills Bachelor's degree in Business, Computer Science or equivalent 7+ years of experience working in technology, preferably in BI and Analytics 5 years' experience in Data Modelling and design 5 years of experience in data analysis, business analysis or system analysis Experience using CA Erwin, Hackolade, or another similar modelling tool Strong knowledge of relational and dimensional data modelling concepts Strong knowledge of data architecture and database structure Strong data analysis skills Strong SQL skills Experience with big data and cloud with Databricks & Snowflake is preferred Experience, interest and adaptability to working in an Agile delivery environment. Ability to work in a fast-paced environment where change is a constant Ability to handle ambiguous requirements Exceptional interpersonal and communication skills (written and verbal) Self-motivated and collaborative About Us Wizeline prioritizes a culture of diversity and development for its nearly 2,000-person team spread across the globe. We believe great technology comes from a mix of talents and perspectives. Our core values of ownership, innovation, community, and inclusivity are central to our work. Wizeline is invested in its employees' growth, offering opportunities to create personalized career paths and develop in-demand skills.
We even have a free education program, Wizeline Academy, to help both employees and the broader community upskill in tech. Apply now!
    $128k-169k yearly est. Auto-Apply 60d+ ago
  • BigData Engineer / Architect

    Nitor Infotech

    Senior data scientist job in Portland, OR

    The hunt is for a strong Big Data Professional, a team player with the ability to manage effective relationships with a wide range of stakeholders (customers & team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers. Role: Big Data Engineer Location: Portland, OR Duration: Full Time Skill Matrix: Map Reduce - Required Apache Spark - Required Informatica PowerCenter - Required Hive - Required Apache Hadoop - Required Core Java / Python - Highly Desired Healthcare Domain Experience - Highly Desired Job Description Responsibilities and Duties Participate in technical planning & requirements gathering phases including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications. Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and troubleshooting any existing issues. Implementation, troubleshooting, and optimization of distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elastic Search, Storm, Kafka, etc. in both on-premise and cloud deployment models to solve large-scale processing problems Design, enhance and implement ETL/data ingestion platform on the cloud. Strong Data Warehousing skills, including: data clean-up, ETL, ELT and handling scalability issues for enterprise-level data warehouses Capable of investigating, familiarizing and mastering new data sets quickly Strong troubleshooting and problem-solving skills in large data environments Experience with building data platforms on cloud (AWS or Azure) Experience in using Python, Java or any other language to solve data problems Experience in implementing SDLC best practices and Agile methods.
Qualifications Required Skills: Data architecture / Big Data / ETL environment Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi or equivalent Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.)/Azure (HDInsight, Data Lake Design) Building and managing hosted big data architecture, toolkit familiarity in: Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi Foundational data management concepts - RDM and MDM Experience in working with JIRA/Git/Bitbucket/JUnit and other code management toolsets Strong hands-on knowledge of/using solutioning languages like: Java, Scala, Python - any one is fine Healthcare Domain knowledge Required Experience, Skills and Qualifications Qualifications: Bachelor's Degree with a minimum of 6 to 9+ years' relevant experience or equivalent. Extensive experience in data architecture/Big Data/ETL environment. Additional Information All your information will be kept confidential according to EEO guidelines.
    $84k-118k yearly est. 60d+ ago
  • Sr. Data Engineer

    It Vision Group

    Senior data scientist job in Portland, OR

    Job Description Title: Sr. Data Engineer Duration: 12 Months+ Roles & Responsibilities Perform data analysis according to business needs Translate functional business requirements into high-level and low-level technical designs Design and implement distributed data processing pipelines using Apache Spark, Apache Hive, Python, and other tools and languages prevalent in a modern analytics platform Create and schedule workflows using Apache Airflow or similar job orchestration tooling Build utilities, functions, and frameworks to better enable high-volume data processing Define and build data acquisition and consumption strategies Build and incorporate automated unit tests, participate in integration testing efforts Work with teams to resolve operational & performance issues Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and followed. Tech Stack: Apache Spark Apache Spark Streaming using Apache Kafka Apache Hive Apache Airflow Python AWS EMR and S3 Snowflake SQL Other Tools & Technologies: PyCharm, Jenkins, GitHub. Apache NiFi (Optional) Scala (Optional)
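The Airflow-style orchestration this role describes rests on running tasks in dependency order. The sketch below is not the Apache Airflow API; it is a toy illustration of the underlying DAG-scheduling idea using Python's standard-library graphlib, with invented task names standing in for real pipeline steps.

```python
# Toy illustration of DAG-ordered task execution (the concept behind
# orchestrators like Apache Airflow), using only the standard library.
from graphlib import TopologicalSorter

def run_dag(dependencies, tasks):
    """Execute callables in an order that respects their dependencies."""
    results = {}
    # static_order() yields each task only after all its predecessors.
    for name in TopologicalSorter(dependencies).static_order():
        results[name] = tasks[name]()
    return results

# Hypothetical extract -> transform -> load pipeline:
deps = {"transform": {"extract"}, "load": {"transform"}}
order = []
tasks = {
    "extract": lambda: order.append("extract"),
    "transform": lambda: order.append("transform"),
    "load": lambda: order.append("load"),
}
results = run_dag(deps, tasks)
```

Real orchestrators add scheduling, retries, and backfills on top of exactly this dependency-ordering core.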
    $84k-118k yearly est. 2d ago
  • Senior Data Engineer

    Advance Local 3.6 company rating

    Senior data scientist job in Portland, OR

    **Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position will combine your deep technical expertise in data engineering with team leadership responsibilities for data engineering, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with data product teams across business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into the central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations. The base salary range is $120,000 - $140,000 per year. **What you'll be doing:** + Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake. + Partner with platform owners across business units to establish and maintain data integrations from third-party systems into the central data platform. + Architect and maintain data infrastructure using IaC, ensuring reproducibility, version control, and disaster recovery capabilities. + Design and implement API integrations and event-driven data flows to support real-time and batch data requirements. + Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities. + Partner with the Data Architect and data product teams to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs. + Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
+ Support rapid prototyping of new data products in collaboration with data product teams by building flexible, reusable data infrastructure components. + Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability. + Collaborate with data product teams, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization. + Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources. + Develop and maintain comprehensive documentation for data engineering processes and systems, architecture, integration patterns, and runbooks. + Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact. + Stay current with emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.
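The data-quality validation and monitoring responsibility above can be sketched minimally. This is an illustrative stand-alone check, not Advance Local's actual tooling; the field names, batch, and threshold are invented for illustration.

```python
# A minimal sketch of batch data-quality validation: row-count and
# required-field checks on incoming records before loading them downstream.

def validate_batch(rows, required_fields, min_rows=1):
    """Return a list of human-readable data-quality failures (empty = pass)."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"expected at least {min_rows} row(s), got {len(rows)}")
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append(f"row {i}: missing required field {field!r}")
    return failures

# Hypothetical event batch with one bad record:
batch = [
    {"user_id": "u1", "event": "pageview"},
    {"user_id": None, "event": "click"},
]
issues = validate_batch(batch, required_fields=["user_id", "event"])
```

In a pipeline, a non-empty failure list would typically fire an alert and quarantine the batch rather than load it.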
**Our ideal candidate will have the following:** + Bachelor's degree in computer science, engineering, or a related field + Minimum of seven years of experience in data engineering with at least two years in a lead or senior technical role + Expert proficiency in Snowflake data engineering patterns + Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform + Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs) + Proven ability to work with third-party APIs, webhooks, and data exports + Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure + Proven ability to design and implement API integrations and event-driven architecture + Experience with data modeling, data warehousing, and ETL processes at scale + Advanced proficiency in Python and SQL for data pipeline development + Experience with data orchestration tools (Airflow, dbt, Snowflake tasks) + Strong understanding of data security, access controls, and compliance requirements + Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms + Excellent problem-solving skills and attention to detail + Strong communication and collaboration skills **Additional Information** Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.
Advance Local Media is one of the largest media groups in the United States, which operates the leading news and information companies in more than 20 cities, reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ******************** . Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext. _Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._ _If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._ Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
    $120k-140k yearly 60d+ ago
  • Google Cloud Data & AI Engineer

    Slalom 4.6 company rating

    Senior data scientist job in Portland, OR

    Who You'll Work With As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies. You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients. What You'll Do * Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more. * Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs. * Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem. * Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps). * Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud. * Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices. What You'll Bring * Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.). * Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance. * Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML. * Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment. * Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts. * Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects. * Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously. * Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud. * Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe. About Us Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. 
We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.

Compensation and Benefits

Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance. Slalom is committed to fair and equitable compensation practices. For this position, the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.

East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500

San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500

All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000

EEO and Accommodations

Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applicants with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process. We are accepting applications until 12/31. #LI-FB1
    $145k-217.5k yearly 30d ago
  • Need Sr Big Data Engineer at Beaverton, OR Only W2

    USM 4.2company rating

    Senior data scientist job in Beaverton, OR

    Hi, we have an immediate opportunity with our direct client; please send your resume as soon as possible if you are interested. Thank you.

Sr Big Data Engineer
Duration: Long Term

Typical Office: This is a typical office job, with no special physical requirements or unusual work environment.

Core responsibilities: Expert data engineers will work with product teams across the client to help automate and integrate a variety of data domains with a wide range of data profiles (different scale, cadence, and volatility) into the client's next-gen data and analytics platform. This is an opportunity to work across multiple subject areas and source platforms to ingest, organize, and prepare data through cloud-native processes.

Required skills/experience:
- 5+ years of professional development experience in either Python (preferred) or Scala/Java; familiarity with both is ideal
- 5+ years of data-centric development with a focus on efficient data access and manipulation at multiple scales
- 3+ years of experience with the HDFS ecosystem of tools (any distro; Spark experience prioritized)
- 3+ years of significant experience developing within the broader AWS ecosystem of platforms and services
- 3+ years of experience optimizing data access and analysis in non-HDFS data platforms (traditional RDBMSs, NoSQL / KV stores, etc.)
- Direct task development and/or configuration experience with a remote workflow orchestration tool - Airflow (preferred), Amazon Data Pipeline, Luigi, Oozie, etc.
- Intelligence, strong problem-solving ability, and the ability to communicate effectively with partners from a broad spectrum of experiential backgrounds

Several of the following skills are also desired:
- A demonstrably strong understanding of security and credential management between application / platform components
- A demonstrably strong understanding of core considerations when working with data at scale, in both file-based and database contexts, including SQL optimization
- Direct experience with Netflix Genie is another huge plus
- Prior experience with the operational backbone of a CI/CD environment (pipeline orchestration + configuration management) is useful
- Clean coding practices, a passion for development, being a good team player, and experience with GitHub are always nice

Keys to Success:
- Deliver exceptional customer service
- Demonstrate accountability and integrity
- Be willing to learn and adapt every day
- Embrace change

Regards,
Nithya

Additional Information: All your information will be kept confidential according to EEO guidelines. Please send profiles to ************************* and contact No# ************.
    $93k-132k yearly est. Easy Apply 60d+ ago
  • Sr. Data Engineer

    Concora Credit

    Senior data scientist job in Beaverton, OR

    As a Sr. Data Engineer, you'll help drive Concora Credit's Mission to enable customers to Do More with Credit - every single day. The impact you'll have at Concora Credit: We are seeking a Sr. Data Engineer with deep expertise in Azure and Databricks to lead the design, development, and optimization of scalable data pipelines and platforms. You'll be responsible for building robust data solutions that power analytics, reporting, and machine learning across the organization using Azure cloud services and Databricks. We hire people, not positions. That's because, at Concora Credit, we put people first, including our customers, partners, and Team Members. Concora Credit is guided by a single purpose: to help non-prime customers do more with credit. Today, we have helped millions of customers access credit. Our industry leadership, resilience, and willingness to adapt ensure we can help our partners responsibly say yes to millions more. As a company grounded in entrepreneurship, we're looking to expand our team and are looking for people who foster innovation, strive to make an impact, and want to Do More! We're an established company with over 20 years of experience, but now we're taking things to the next level. We're seeking someone who wants to impact the business and play a pivotal role in leading the charge for change. Responsibilities As our Sr. 
Data Engineer, you will:
* Design and develop scalable, efficient data pipelines using Azure Databricks
* Build and manage data ingestion, transformation, and storage solutions leveraging Azure Data Factory, Azure Data Lake, and Delta Lake
* Implement CI/CD for data workflows using tools like Azure DevOps, Git, and Terraform
* Optimize performance and cost efficiency across large-scale distributed data systems
* Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver reliable, reusable datasets
* Provide guidance and mentorship to junior engineers and actively contribute to data platform best practices
* Monitor, troubleshoot, and optimize existing pipelines and infrastructure to ensure reliability and scalability

These duties must be performed with or without reasonable accommodation. We know experience comes in many forms and that many skills are transferable. If your experience is close to what we're looking for, consider applying. Diversity has made us the entrepreneurial and innovative company that we are today.
Qualifications

Requirements:
* 5+ years of experience in data engineering, with a strong focus on Azure cloud technologies
* Experience with Azure Databricks, Azure Data Lake, and Data Factory, including PySpark, SQL, Python, and Delta Lake
* Strong proficiency in Databricks and Apache Spark
* Solid understanding of data warehousing, ETL/ELT, and data modeling best practices
* Experience with version control, CI/CD pipelines, and infrastructure as code
* Knowledge of Spark performance tuning, partitioning, and job orchestration
* Excellent problem-solving skills and attention to detail
* Strong communication and collaboration abilities across technical and non-technical teams
* Ability to work independently and lead in a fast-paced, agile environment
* Passion for delivering clean, high-quality, and maintainable code

Preferred Qualifications:
* Experience with Unity Catalog, Databricks Workflows, and Delta Live Tables
* Familiarity with DevOps practices or Terraform for Azure resource provisioning
* Understanding of data security, RBAC, and compliance in cloud environments
* Experience integrating Databricks with Power BI or other analytics platforms
* Exposure to real-time data processing using Kafka, Event Hubs, or Structured Streaming

What's In It For You:
* Medical, Dental and Vision insurance for you and your family
* Relax and recharge with Paid Time Off (PTO)
* 6 company-observed paid holidays, plus 3 paid floating holidays
* 401k (after 90 days) plus employer match up to 4%
* Pet Insurance for your furry family members
* Wellness perks including onsite fitness equipment at both locations, EAP, and access to the Headspace App
* We invest in your future through Tuition Reimbursement
* Save on taxes with Flexible Spending Accounts
* Peace of mind with Life and AD&D Insurance
* Protect yourself with company-paid Long-Term Disability and voluntary Short-Term Disability

Concora Credit provides equal employment opportunities to all Team Members and applicants for employment and prohibits discrimination and harassment
of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Employment-based visa sponsorship is not available for this role. Concora Credit is an equal opportunity employer (EEO). Please see the Concora Credit Privacy Policy for more information on how Concora Credit processes your personal information during the recruitment process and, if applicable, based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact caprivacynotice@concoracredit.com.
    $84k-118k yearly est. Auto-Apply 18d ago
  • Azure Data Engineer - 6013916

    Accenture 4.7company rating

    Senior data scientist job in Beaverton, OR

    Accenture Flex offers you the flexibility of local fixed-duration project-based work powered by Accenture, a leading global professional services company. Accenture is consistently recognized on FORTUNE's 100 Best Companies to Work For and DiversityInc's Top 50 Companies for Diversity lists. As an Accenture Flex employee, you will apply your skills and experience to help drive business transformation for leading organizations and communities. In addition to delivering innovative solutions for Accenture's clients, you will work with a highly skilled, diverse network of people across Accenture businesses who are using the latest emerging technologies to address today's biggest business challenges. You will receive competitive rewards and access to benefits programs and world-class learning resources. Accenture Flex employees work in their local metro area onsite at the project, significantly reducing and/or eliminating the demands of travel.

Job Description: Join our dynamic team and embark on a journey where you will be empowered to perform independently and grow into a subject matter expert (SME). Active participation and contribution in team discussions will be key as you provide solutions to work-related problems. Let's work together to achieve greatness!

Responsibilities:
* Create new data pipelines leveraging existing data ingestion frameworks and tools
* Orchestrate data pipelines using the Azure Data Factory service.
* Develop/enhance data transformations based on requirements to parse, transform, and load data into the Enterprise Data Lake, Delta Lake, and Enterprise DWH (Synapse Analytics)
* Perform unit testing and coordinate integration testing and UAT
* Create HLD/DD/runbooks for the data pipelines
* Configure compute and DQ rules; perform maintenance and performance tuning/optimization

Qualifications

Basic Qualifications:
* Minimum of 3 years of work experience with one or more of the following: Databricks Data Engineering, DLT, Azure Data Factory, SQL, PySpark, Synapse Dedicated SQL Pool, Azure DevOps, Python

Preferred Qualifications:
* Azure Function Apps
* Azure Logic Apps
* Precisely & COSMOS DB
* Advanced proficiency in PySpark
* Advanced proficiency in Microsoft Azure Databricks, Azure DevOps, Databricks Delta Live Tables, and Azure Data Factory
* Bachelor's or Associate's degree

Compensation at Accenture varies depending on a wide array of factors, which may include but are not limited to the specific office location, role, skill set, and level of experience. As required by local law, Accenture provides a reasonable range of compensation for roles that may be hired as set forth below. We accept applications on an ongoing basis and there is no fixed deadline to apply. Information on benefits is here.

Role Location:
* California: $47.69 - $57.69
* Cleveland: $47.69 - $57.69
* Colorado: $47.69 - $57.69
* District of Columbia: $47.69 - $57.69
* Illinois: $47.69 - $57.69
* Minnesota: $47.69 - $57.69
* Maryland: $47.69 - $57.69
* Massachusetts: $47.69 - $57.69
* New York/New Jersey: $47.69 - $57.69
* Washington: $47.69 - $57.69
    $75k-103k yearly est. 3d ago
  • AI & GenAI Data Scientist-Senior Associate

    PwC 4.8company rating

    Senior data scientist job in Portland, OR

    **Specialty/Competency:** Data, Analytics & AI
**Industry/Sector:** Not Applicable
**Time Type:** Full time
**Travel Requirements:** Up to 80%

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:

+ Respond effectively to the diverse perspectives, needs, and feelings of others.
+ Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
+ Use critical thinking to break down complex concepts.
+ Understand the broader objectives of your project or role and how your work fits into the overall strategy.
+ Develop a deeper understanding of the business context and how it is changing.
+ Use reflection to develop self-awareness, enhance strengths and address development areas.
+ Interpret data to inform insights and recommendations.
+ Uphold and reinforce professional and technical standards (e.g.
refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

**Minimum Degree Required:** Bachelor's Degree
**Minimum Year(s) of Experience:** 4 year(s)

Demonstrates thorough-level abilities and/or a proven record of success managing the identification and addressing of client needs:

+ Building GenAI and AI solutions, including but not limited to analytical model development and implementation, prompt engineering, general all-purpose programming (e.g., Python), testing, communication of results, front-end and back-end integration, and iterative development with clients
+ Documenting and analyzing business processes for AI and Generative AI opportunities, including gathering of requirements, creation of initial hypotheses, and development of the GenAI and AI solution approach
+ Collaborating with the client team to understand their business problem and select the appropriate analytical models and approaches for AI and GenAI use cases
+ Designing and solutioning AI/GenAI architectures for clients, specifically for plugin-based solutions (i.e., ChatClient application with plugins) and custom AI/GenAI application builds
+ Processing unstructured and structured data to be consumed as context for LLMs, including but not limited to embedding of large text corpora, generative development of SQL queries, and building connectors to structured databases
+ Supporting management of daily operations of a global data and analytics team on client engagements, reviewing developed models, providing feedback, and assisting in analysis
+ Directing data engineers and other data scientists to deliver efficient solutions to meet client requirements
+ Leading and contributing to development of proof of concepts, pilots, and production use cases for clients while working in cross-functional teams
+ Structuring, writing, communicating, and facilitating client presentations
+ Directing associates through coaching, providing feedback, and guiding work performance.
Demonstrates thorough abilities and/or a proven record of success learning and performing in functional and technical capacities, including the following areas:

+ Managing AI/GenAI application development teams, including back-end and front-end integrations
+ Using Python (e.g., Pandas, NLTK, Scikit-learn, Keras, etc.), common LLM development frameworks (e.g., LangChain, Semantic Kernel), relational storage (SQL), and non-relational storage (NoSQL)
+ Experience in analytical techniques such as Machine Learning, Deep Learning, and Optimization
+ Vectorization and embedding, prompt engineering, and RAG (retrieval-augmented generation) workflow development
+ Understanding of or hands-on experience with Azure, AWS, and/or Google Cloud platforms
+ Experience with Git version control; unit, integration, and end-to-end testing; CI/CD; release management; etc.

Learn more about how we work: **************************

PwC does not intend to hire experienced or entry level job seekers who will need, now or in the future, PwC sponsorship through the H-1B lottery, except as set forth within the following policy: ***********************************

As PwC is an equal opportunity employer, all qualified applicants will receive consideration for employment at PwC without regard to race; color; religion; national origin; sex (including pregnancy, sexual orientation, and gender identity); age; disability; genetic information (including family medical history); veteran, marital, or citizenship status; or any other status protected by law. For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, the San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for employment in accordance with these laws.
At PwC, we recognize that conviction records may have a direct, adverse, and negative relationship to responsibilities such as accessing sensitive company or customer information, handling proprietary assets, or collaborating closely with team members. We evaluate these factors thoughtfully to establish a secure and trusted workplace for all. Applications will be accepted until the position is filled or the posting is removed, unless otherwise set forth on the following webpage. Please visit this link for information about anticipated application deadlines: *************************************** The salary range for this position is: $77,000 - $202,000. Actual compensation within the range will be dependent upon the individual's skills, experience, qualifications and location, and applicable employment laws. All hired individuals are eligible for an annual discretionary bonus. PwC offers a wide range of benefits, including medical, dental, vision, 401k, holiday pay, vacation, personal and family sick leave, and more. To view our benefits at a glance, please visit the following link: ***********************************
    $77k-202k yearly 60d+ ago
  • BigData Engineer / Architect

    Nitor Infotech

    Senior data scientist job in Portland, OR

    The hunt is for a strong Big Data professional: a team player with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.

Role: Big Data Engineer
Location: Portland, OR
Duration: Full Time

Skill Matrix:
MapReduce - Required
Apache Spark - Required
Informatica PowerCenter - Required
Hive - Required
Apache Hadoop - Required
Core Java / Python - Highly Desired
Healthcare Domain Experience - Highly Desired

Job Description

Responsibilities and Duties: Participate in technical planning and requirements gathering phases, including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications. Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and for troubleshooting any existing issues. Implementation, troubleshooting, and optimization of distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elasticsearch, Storm, Kafka, etc., in both on-premises and cloud deployment models to solve large-scale processing problems. Design, enhance, and implement an ETL/data ingestion platform on the cloud. Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for an enterprise-level data warehouse. Capable of investigating, familiarizing with, and mastering new data sets quickly. Strong troubleshooting and problem-solving skills in large data environments. Experience with building data platforms on the cloud (AWS or Azure). Experience in using Python, Java, or any other language to solve data problems. Experience in implementing SDLC best practices and Agile methods.
Qualifications

Required Skills: Data architecture / Big Data / ETL environment. Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi, or equivalent. Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight, Data Lake Design). Building and managing hosted big data architecture, with toolkit familiarity in Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, and NiFi. Foundational data management concepts - RDM and MDM. Experience working with JIRA, Git, Bitbucket, JUnit, and other code management toolsets. Strong hands-on knowledge of solutioning languages like Java, Scala, or Python - any one is fine. Healthcare domain knowledge.

Required Experience, Skills and Qualifications: Bachelor's Degree with a minimum of 6 to 9+ years' relevant experience, or equivalent. Extensive experience in a data architecture / Big Data / ETL environment.

Additional Information: All your information will be kept confidential according to EEO guidelines.
    $84k-118k yearly est. 21h ago

Learn more about senior data scientist jobs

How much does a senior data scientist earn in Vancouver, WA?

The average senior data scientist in Vancouver, WA earns between $100,000 and $195,000 annually. This compares to the national average senior data scientist range of $90,000 to $170,000.

Average senior data scientist salary in Vancouver, WA

$139,000

What are the biggest employers of Senior Data Scientists in Vancouver, WA?

The biggest employers of Senior Data Scientists in Vancouver, WA are:
  1. PwC
  2. Mercury