Develop tools, metric measurement, and assessment methods for performance management and predictive modeling. Develop dashboards for product management and executives to drive faster, better decision making. Create accountability models for DPG-wide quality, install and warranty (I&W), inventory, and product management KPIs and business operations.
Consistently improve DPG-wide quality, I&W, and inventory performance, from awareness and prioritization through to action, enabled by the availability of common data.
Collaborate with quality, install and warranty, and inventory program managers to analyze trends and patterns in data that drive required improvement in key performance indicators (KPIs). Foster the growth and utility of Cost of Quality within the company through correlation of I&W data and ECOGS, identification of causal relationships for quality events, and discovery of hidden costs throughout the network; a minimal sketch of this correlation work follows.
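As a concrete illustration of the correlation work described above, here is a minimal pandas sketch; the column names and figures are hypothetical stand-ins for I&W, ECOGS, and quality-event data, not Lam's actual schema.

```python
import pandas as pd

# Hypothetical monthly I&W / cost-of-quality extract; all names and values are illustrative.
df = pd.DataFrame({
    "quality_events": [12, 8, 15, 6, 20, 9],
    "warranty_cost_usd": [40_000, 21_000, 52_000, 18_000, 70_000, 25_000],
    "ecogs_usd": [310_000, 295_000, 330_000, 280_000, 355_000, 300_000],
})

# Pairwise Pearson correlations surface candidate causal relationships worth deeper study.
print(df.corr(numeric_only=True).round(2))
```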
Improve data utilization via AI and automation, enabling real-time resolution and faster systemic action.
Lead and/or advise on multiple projects simultaneously and demonstrate organizational, prioritization, and time management proficiencies.
Bachelor's degree with 8+ years of experience; or master's degree with 5+ years' experience; or equivalent experience.
Basic understanding of AI and machine learning, and the ability to work with data scientists to apply AI to complex, challenging problems, leading to efficiency and effectiveness improvements.
Ability to define problem statements and objectives, develop an analysis approach, and execute the analysis.
Basic knowledge of Lean Six Sigma processes, statistics, or quality systems.
Ability to work on multiple problems simultaneously.
Ability to present conclusions and recommendations to executive audiences.
Ownership mindset to drive solutions and positive outcomes.
Excellent communication and presentation skills with the ability to present to audiences at multiple levels in the Company.
Willingness to adapt best practices via benchmarking.
Experience in Semiconductor fabrication, Semiconductor Equipment Operations, or related industries is a plus.
Demonstrated ability to change processes and methodologies for capturing and interpreting data.
Demonstrated success in using structured problem-solving methodologies and quality tools to solve complex problems.
Knowledge of programming environments such as Python, R, MATLAB, SQL, or equivalent.
Experience in structured problem-solving methodologies such as PDCA, DMAIC, 8D and quality tools.
Our commitment We believe it is important for every person to feel valued, included, and empowered to achieve their full potential.
By bringing unique individuals and viewpoints together, we achieve extraordinary results.
Lam is committed to and reaffirms support of equal opportunity in employment and non-discrimination in employment policies, practices and procedures on the basis of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, age, sexual orientation, or military and veteran status or any other category protected by applicable federal, state, or local laws.
It is the Company's intention to comply with all applicable laws and regulations.
Company policy prohibits unlawful discrimination against applicants or employees.
Lam offers a variety of work location models based on the needs of each role.
Our hybrid roles combine the benefits of on-site collaboration with colleagues and the flexibility to work remotely and fall into two categories - On-site Flex and Virtual Flex.
In 'On-site Flex' roles, you'll work 3+ days per week on-site at a Lam or customer/supplier location, with the opportunity to work remotely for the balance of the week.
In 'Virtual Flex' roles, you'll work 1-2 days per week on-site at a Lam or customer/supplier location and remotely the rest of the time.
$71k-91k yearly est. 10d ago
Lead Data Scientist GenAI, Strategic Analytics - Data Science
Deloitte 4.7
Data scientist job in Portland, OR
Deloitte is at the leading edge of GenAI innovation, transforming Strategic Analytics and shaping the future of Finance. We invite applications from highly skilled and experienced Lead Data Scientists ready to drive the development of our next-generation GenAI solutions.
The Team
Strategic Analytics is a dynamic part of our Finance FP&A organization, dedicated to empowering executive leaders across the firm, as well as our partners in financial and operational functions. Our team harnesses the power of cloud computing, data science, AI, and strategic expertise, combined with deep institutional knowledge, to deliver insights that inform our most critical business decisions and fuel the firm's ongoing growth.
GenAI is at the forefront of our innovation agenda and a key strategic priority for our future. We are rapidly developing groundbreaking products and solutions poised to transform both our organization and our clients. As part of our team, the selected candidate will play a pivotal role in driving the success of these high-impact initiatives.
Recruiting for this role ends on December 14, 2025
Work You'll Do
Client Engagement & Solution Scoping
+ Partner with stakeholders to analyze business requirements, pain points, and objectives relevant to GenAI use cases.
+ Facilitate workshops to identify, prioritize, and scope impactful GenAI applications (e.g., text generation, code synthesis, conversational agents).
+ Clearly articulate GenAI's value proposition, including efficiency gains, risk mitigation, and innovation.
Solution Architecture & Design
+ Architect holistic GenAI solutions, selecting and customizing appropriate models (GPT, Llama, Claude, Zora AI, etc.).
+ Design scalable integration strategies for embedding GenAI into existing client systems (ERP, CRM, KM platforms).
+ Define and govern reliable, ethical, and compliant data sourcing and management.
Development & Customization
+ Lead model fine-tuning, prompt engineering, and customization for client-specific needs (a minimal sketch follows this list).
+ Oversee the development of GenAI-powered applications and user-friendly interfaces, ensuring robustness and exceptional user experience.
+ Drive thorough validation, testing, and iteration to ensure quality and accuracy.
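To make the prompt-engineering work above concrete, here is a minimal, hedged sketch using the Hugging Face transformers pipeline; the model choice and the ticket text are illustrative only, and a real engagement would substitute the client's approved LLM and prompt templates.

```python
from transformers import pipeline

# "gpt2" is a small stand-in model; an engagement would swap in its approved LLM.
generator = pipeline("text-generation", model="gpt2")

# A simple prompt template of the kind iterated on during prompt engineering.
prompt_template = "Summarize the following finance ticket in one sentence:\n{ticket}\nSummary:"
ticket = "Q3 accruals were posted twice in the EMEA ledger; reversal pending approval."

result = generator(prompt_template.format(ticket=ticket), max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```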
Implementation, Deployment & Change Management
+ Manage solution rollout, including cloud setup, configuration, and production deployment.
+ Guide clients through adoption: deliver training, create documentation, and provide enablement resources for users.
Risk, Ethics & Compliance
+ Lead efforts in responsible AI, ensuring safeguards against bias, privacy breaches, and unethical outcomes.
+ Monitor performance, implement KPIs, and manage model retraining and auditing processes.
Stakeholder Communication
+ Prepare executive-level reports, dashboards, and demos to summarize progress and impact.
+ Coordinate across internal teams, tech partners, and clients for effective project delivery.
Continuous Improvement & Thought Leadership
+ Stay current on GenAI trends, best practices, and emerging technologies; share insights across teams.
+ Mentor junior colleagues, promote knowledge transfer, and contribute to reusable methodologies.
Qualifications
Required:
+ Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, or related field.
+ 5+ years of hands-on experience delivering machine learning or AI solutions, preferably including generative AI.
+ Independent thinker who can create the vision and execute on transforming data into high-end client products.
+ Demonstrated accomplishments in the following areas:
+ Deep understanding of GenAI models and approaches (LLMs, transformers, prompt engineering).
+ Proficiency in Python (PyTorch, TensorFlow, HuggingFace), Databricks, ML pipelines, and cloud-based deployment (Azure, AWS, GCP).
+ Experience integrating AI into enterprise applications, building APIs, and designing scalable workflows.
+ Knowledge of solution architecture, risk assessment, and mapping technology to business goals.
+ Familiarity with agile methodologies and iterative delivery.
+ Commitment to responsible AI, including data ethics, privacy, and regulatory compliance.
+ Ability to travel 0-10%, on average, based on the work you do and the clients and industries/sectors you serve
+ Limited immigration sponsorship may be available.
Preferred:
+ Relevant Certifications: May include Google Cloud Professional ML Engineer, Microsoft Azure AI Engineer, AWS Certified Machine Learning, or specialized GenAI/LLM credentials.
+ Experience with data visualization tools such as Tableau
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Deloitte, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $102,500 - $188,900.
You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Information for applicants with a need for accommodation
************************************************************************************************************
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
$102.5k-188.9k yearly 60d+ ago
Data Scientist
Eyecarecenterofsalem
Data scientist job in Portland, OR
Job Description
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products that extract valuable business insights. In this role, you should be highly analytical, with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Responsibilities
Identify valuable data sources and automate collection processes
Preprocess structured and unstructured data
Analyze large amounts of information to discover trends and patterns
Build predictive models and machine-learning algorithms
Combine models through ensemble modeling (see the sketch after this list)
Present information using data visualization techniques
Propose solutions and strategies to business challenges
Collaborate with engineering and product development teams
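For the ensemble-modeling responsibility above, here is a minimal scikit-learn sketch; synthetic data stands in for company data, and the model choices are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for company data; real features would come from the warehouse.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft voting averages predicted probabilities across the base models.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1_000)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"holdout accuracy: {ensemble.score(X_test, y_test):.3f}")
```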
Requirements and skills
Proven experience as a Data Scientist or Data Analyst
Experience in data mining
Understanding of machine learning and operations research
Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset
Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
Analytical mind and business acumen
Strong math skills (e.g. statistics, algebra)
Problem-solving aptitude
Excellent communication and presentation skills
BSc/BA in Computer Science, Engineering, or relevant field; a graduate degree in Data Science or other quantitative field is preferred
$73k-104k yearly est. 16d ago
Human Performance Data Scientist II
General Dynamics Information Technology 4.7
Data scientist job in Lewisville, WA
**Req ID:** RQ210960
**Type of Requisition:** Regular
**Clearance Level Must Be Able to Obtain:** Top Secret/SCI
**Public Trust/Other Required:** None
**Job Family:** Data Science and Data Engineering
**Skills:** Business, Data Analysis, Science, Statistical Analysis, Statistics
**Experience:**
3 + years of related experience
**US Citizenship Required:**
Yes
**Job Description:**
Seize your opportunity to make a personal impact as a Data Scientist II supporting mission-critical work on an exciting program. GDIT is your place to make meaningful contributions to challenging projects, build your skills, and grow a rewarding career.
At GDIT, people are our differentiator. As a Human Performance Data Scientist II supporting our customer, you will help ensure today is safe and tomorrow is smarter. Our work depends on a Data Scientist II joining our team.
The Human Performance (HP) Data Scientist II supports the program by applying advanced analytical skills to optimize the readiness, resiliency, and performance of Special Operations Forces (SOF) personnel. The position focuses on human performance insights rather than general data science functions. The role works directly with SOF HP staff, with priority on SOF Operators and Direct Combat Support personnel, to evaluate physical, psychological, cognitive, and social performance indicators.
**HOW A DATA SCIENTIST II WILL MAKE AN IMPACT:**
+ The HP Data Scientist II is responsible for entering, cleaning, and analyzing HP data collected through program initiatives.
+ In collaboration with performance teams and the Government biostatistician, the HP Data Scientist II provides subject matter expertise in program evaluation, research methodologies, and applied performance analytics.
+ Partners with strength coaches, dietitians, athletic trainers, physical therapists, psychologists, and cognitive specialists to identify data collection opportunities that support operational readiness and force preservation.
+ Supports development of performance dashboards, readiness trends, return-to-duty outcomes, and longitudinal monitoring products that directly inform commanders and HP leaders (a minimal sketch follows this list).
+ Prepares reports and presentations that communicate performance trends, risk indicators, and program impact in clear and actionable formats for leadership.
+ Receives access to Government systems for the purpose of HP data entry, management, and analysis.
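A minimal pandas sketch of the longitudinal-monitoring idea above; the scores, dates, and smoothing window are hypothetical, not program data.

```python
import pandas as pd

# Hypothetical weekly readiness scores for one operator; names and values are illustrative.
scores = pd.Series(
    [78, 81, 80, 76, 74, 79, 83, 85],
    index=pd.date_range("2025-01-06", periods=8, freq="W"),
    name="readiness_score",
)

# A 4-week rolling mean smooths noise so trend shifts stand out for HP leaders.
trend = scores.rolling(window=4, min_periods=1).mean()
print(pd.DataFrame({"score": scores, "4wk_trend": trend.round(1)}))
```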
**WHAT YOU'LL NEED TO SUCCEED:**
**EDUCATION:** Master's or Doctoral degree in quantitative science, social science, or a related discipline.
+ Must have at least 3 years of research experience in academic, social services, government, healthcare or laboratory settings
+ Advanced proficiency in statistical software such as SPSS, SAS, or R, with emphasis on applied research and performance analytics.
+ Demonstrate advanced proficiency through prior work within performance research, sport science, military HP programs, or similar environments where continuous data collection and evaluation occur.
+ Proficiency is also demonstrated through a record of scientific publications or applied HP research.
+ Possesses excellent communication skills, strong organizational abilities, and at least three years of experience working in HP, government, healthcare, or research settings.
+ Proficient with the suite of Microsoft Office programs, including Word, Excel and Access.
**LOCATION:** Various CONUS SITES
**CLEARANCE:** Ability to obtain and maintain Secret or Top-Secret Clearance.
**This is a contingent posting, expected to start in 2026.**
**GDIT IS YOUR PLACE:**
+ 401K with company match
+ Comprehensive health and wellness packages
+ Internal mobility team dedicated to helping you own your career
+ Professional growth opportunities including paid education and certifications
+ Cutting-edge technology you can learn from
+ Rest and recharge with paid vacation and holidays
#MilitaryHealthGDITJobs
The likely salary range for this position is $83,927 - $113,549. This is not, however, a guarantee of compensation or salary. Rather, salary will be set based on experience, geographic location and possibly contractual requirements and could fall outside of this range.
Our benefits package for all US-based employees includes a variety of medical plan options, some with Health Savings Accounts, dental plan options, a vision plan, and a 401(k) plan offering the ability to contribute both pre and post-tax dollars up to the IRS annual limits and receive a company match. To encourage work/life balance, GDIT offers employees full flex work weeks where possible and a variety of paid time off plans, including vacation, sick and personal time, holidays, paid parental, military, bereavement and jury duty leave. GDIT typically provides new employees with 15 days of paid leave per calendar year to be used for vacations, personal business, and illness and an additional 10 paid holidays per year. Paid leave and paid holidays are prorated based on the employee's date of hire. The GDIT Paid Family Leave program provides a total of up to 160 hours of paid leave in a rolling 12 month period for eligible employees. To ensure our employees are able to protect their income, other offerings such as short and long-term disability benefits, life, accidental death and dismemberment, personal accident, critical illness and business travel and accident insurance are provided or available. We regularly review our Total Rewards package to ensure our offerings are competitive and reflect what our employees have told us they value most.
We are GDIT. A global technology and professional services company that delivers consulting, technology and mission services to every major agency across the U.S. government, defense and intelligence community. Our 30,000 experts extract the power of technology to create immediate value and deliver solutions at the edge of innovation. We operate across 50 countries worldwide, offering leading capabilities in digital modernization, AI/ML, Cloud, Cyber and application development. Together with our clients, we strive to create a safer, smarter world by harnessing the power of deep expertise and advanced technology.
Join our Talent Community to stay up to date on our career opportunities and events at ********************
Equal Opportunity Employer / Individuals with Disabilities / Protected Veterans
$83.9k-113.5k yearly 28d ago
Senior Data Scientist - Accounting
Mercury 3.5
Data scientist job in Portland, OR
In the 1840s, Charles Babbage and Ada Lovelace worked on an early version of the computer known as the “Analytical Engine.” In the words of computing historian Doron Swade,
“What Lovelace saw…was that numbers could represent entities other than quantity”
and together they laid the foundation for general-purpose computing.
While much has changed since then, the importance of numbers in building great technology remains. We're looking for a Data Scientist who can help us build our analytics engine by making meaning from data and identifying opportunities for improvement.
As a Data Scientist at Mercury focused on Financial & Accounting Workflows, you will partner with product and engineering teams building tools that help founders, bookkeepers, and finance teams manage their businesses. Your work will span core accounting needs (Books, Accounting Integrations) and broader financial workflows (spend management, invoicing, payroll, etc.). You'll develop deep expertise in how these workflows can be automated, streamlined, and made more insightful through data. Your impact will shape how customers adopt, engage with, and find value in Mercury's financial tools.
Here are some things you'll do on the job:
Partner with Financial & Accounting Workflow stakeholders (Books, Accounting Integrations, Spend Management, Payroll) to define success metrics for new product initiatives and ensure teams have the right measurements in place.
Collaborate with product and engineering to design and analyze experiments that guide product decisions and validate impact (see the sketch after this list).
Contribute to launches of new products like Books by helping scope success criteria, tracking adoption, and analyzing performance post-launch.
Work closely with product teams to surface insights that inform iteration and help identify opportunities for deeper engagement or new product lines.
Partner with engineers to prototype and apply ML/AI solutions (such as autocategorization, reconciliation, or intelligent recommendations) that streamline financial workflows.
Build dashboards, data pipelines, and reporting that enable self-serve analytics for product and business partners.
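As a sketch of the experiment-analysis work above, here is a two-proportion z-test with statsmodels; the conversion counts and the Books-adoption framing are hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B test: did a new onboarding flow lift Books adoption?
conversions = [342, 398]   # variant A, variant B
exposures = [5_000, 5_040]

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real lift, not noise
```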
You should:
Have 3-5 years of experience in product analytics or data science, ideally with exposure to financial workflows, accounting, or B2B SaaS products.
Be fluent in SQL and proficient in a statistical programming language (e.g., Python).
Have experience designing and analyzing A/B tests, and applying causal inference methods.
Be comfortable building data pipelines, dashboards, and reports using modern data stack tools such as Snowflake, dbt, and Omni or Metabase.
Have experience with ERP/accounting systems (QuickBooks, Xero, NetSuite) or accounting workflows.
Have experience applying ML or GenAI to real-world product problems.
Be organized and communicative, able to manage multiple priorities and collaborate with stakeholders at different levels of technical fluency.
The total rewards package at Mercury includes base salary, equity (stock options), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate's experience, expertise, geographic location, and internal pay equity relative to peers.
Our target new hire base salary ranges for this role are the following:
US employees (any location): $166,600 - $208,300
Canadian employees (any location): CAD 157,400 - 196,800
*Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC.
Mercury values diversity & belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic. We are committed to providing reasonable accommodations throughout the recruitment process for applicants with disabilities or special needs. If you need assistance, or an accommodation, please let your recruiter know once you are contacted about a role.
We use Covey as part of our hiring and / or promotional process for jobs in NYC and certain features may qualify it as an AEDT. As part of the evaluation process we provide Covey with job requirements and candidate submitted applications. We began using Covey Scout for Inbound on January 22, 2024. Please see the independent bias audit report covering our use of Covey here.
#LI-AC1
$166.6k-208.3k yearly Auto-Apply 12d ago
Transportation Data Scientist
HDR, Inc. 4.7
Data scientist job in Portland, OR
At HDR, our employee-owners are fully engaged in creating a welcoming environment where each of us is valued and respected, a place where everyone is empowered to bring their authentic selves and novel ideas to work every day. As we foster a culture of inclusion throughout our company and within our communities, we constantly ask ourselves: What is our impact on the world?
Watch Our Story: *********************************
Each and every role throughout our organization makes a difference in our ability to change the world for the better. Read further to learn how you could help make great things possible not only in your community, but around the world.
In the role of a Transportation DataScientist, we'll count on you to:
* Assist on traffic engineering projects and exercise sound engineering judgment
* Perform data analysis tasks, including writing SQL queries for data aggregation and visualization (see the sketch after this list)
* Develop dashboards summarizing a variety of data
* Develop report documents detailing analysis methodology and outcomes
* Participate in work sessions in conjunction with other staff
* Coordinate workload through entire project development, and ensure completion of tasks on schedule and within budget
* Perform other duties as needed
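A minimal sketch of the SQL-aggregation work above, using an in-memory SQLite table as a stand-in for a project database; the corridor names and volumes are hypothetical.

```python
import sqlite3
import pandas as pd

# In-memory stand-in for a traffic-counts table; a real source would be a project database.
conn = sqlite3.connect(":memory:")
pd.DataFrame({
    "corridor": ["US-26", "US-26", "I-205", "I-205"],
    "hour": [7, 8, 7, 8],
    "volume": [4100, 5200, 3800, 4600],
}).to_sql("traffic_counts", conn, index=False)

# Aggregation query of the kind used to feed dashboards and reports.
query = """
SELECT corridor, SUM(volume) AS total_volume, AVG(volume) AS avg_hourly_volume
FROM traffic_counts
GROUP BY corridor
ORDER BY total_volume DESC
"""
print(pd.read_sql_query(query, conn))
```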
Preferred Qualifications
* EI preferred
* Experience with cloud-based data analytics platforms, such as Google BigQuery, Amazon Web Services, or Microsoft Azure/Databricks
* Experience with summarizing findings from analytical processes
* Python, Pandas scripting
* Tableau/Power BI
* SQL Query writing/Database development
* Process-oriented mindset
* Proficiency with Microsoft Office
* Strong oral and written communication skills, presentation skills and ability to work in a team environment
#LI-JM8
Required Qualifications
* Bachelor's degree in Computer Science or Management Information Systems
* A minimum of 5 years of systems analysis, applications development and support experience with business applications
* An attitude and commitment to being an active participant of our employee-owned culture is a must
What We Believe
HDR is our company. Together, we build on each other's life experiences and perspectives to make great things possible every day. This shapes our collaborative culture, encourages organizational trust and connects us closer to the clients and communities we serve.
Our Commitment
As employee owners, we all have a role in creating an inclusive environment where each of us is welcomed, valued, respected and empowered to bring our authentic selves to work every day.
Our eight Employee Network Groups (Asian Pacific, Black, Hispanic/Latino(a), LGBTQ+, People with Disabilities, Veterans, Women, Young Professionals) help create a sense of belonging and foster a supportive environment where everyone is empowered to engage and contribute. Each group has an executive sponsor and is open to all employees.
$70k-97k yearly est. 9d ago
Data Scientist, Senior
Chopine Analytic Solutions
Data scientist job in Lewisville, WA
Job Name: Data Scientist
Level: Senior
Remote Work: No
Required Clearance: TS/SCI
Immediate opening
RESPONSIBILITIES:
Work with large structured/unstructured data in a modeling and analytical environment to define and create streamlined processes for evaluating unique datasets and solving challenging intelligence issues
Lead and participate in the design of solutions and refinement of pre-existing processes
Work with Customer Stakeholders, Program Managers, and Product Owners to translate road map features into components/tasks, estimate timelines, identify resources, suggest solutions, and recognize possible risks
Use exploratory data analysis techniques to identify meaningful relationships, patterns, or trends from complex data (see the sketch after this list)
Combine applied mathematics, programming skills, analytical techniques, and data to provide impactful insights for decision makers
Research and implement optimization models, strategies, and methods to inform data management activities and analysis
Apply big data analytic tools to large, diverse sets of data to deliver impactful insights and assessments
Conduct peer reviews to improve quality of workflows, procedures, and methodologies
Help build high-performing teams; mentor team members providing development opportunities to increase their technical skills and knowledge
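A minimal exploratory-data-analysis sketch in pandas of the kind described above; the dataset and field names are illustrative only.

```python
import pandas as pd

# Hypothetical event extract; in practice this would be a large structured dataset.
events = pd.DataFrame({
    "source": ["alpha", "alpha", "bravo", "bravo", "bravo", "charlie"],
    "latency_hours": [2.0, 3.5, 11.0, 9.5, 12.5, 4.0],
})

# Quick profiling: a distribution summary, then a group comparison to surface patterns.
print(events["latency_hours"].describe())
print(events.groupby("source")["latency_hours"].agg(["mean", "count"]))
```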
REQUIRED QUALIFICATIONS:
Requires TS/SCI Clearance with the ability to obtain a CI/Poly.
10+ years of relevant experience. (A combination of years of experience & professional certifications/trainings can be used in lieu of years of experience)
Experience supporting IC operations
Possess expert-level knowledge to manipulate and analyze structured/unstructured data
Demonstrated experience in data mining and developing/maintaining/manipulating databases
Demonstrated experience in identifying potential systems enhancements, new capabilities, concept demonstrators, and capability business cases
Demonstrated experience using GOTS data processing and analytics capabilities to modernize analytic methodologies
Demonstrated experience using COTS statistical software (MapLarge, Tableau, MATLAB) for advanced statistical analysis of operational tools and for data visualization that enables large datasets to be interrogated, surfacing patterns, relationships, and anticipatory behavioral likelihoods that may not be apparent using traditional single-discipline means
Knowledge of advanced analytic methodologies, and experience in implementing and executing those methodologies to enable customer satisfaction
Demonstrated experience in directing activities of highly skilled technical and analytical teams responsible for developing solutions to highly complex analytical/intelligence problems
Experienced in conducting multi-INT and technology-specific research to support mission operations
Possess effective communication skills; capable of providing highly detailed information in an easy-to-understand format
DESIRED QUALIFICATIONS:
Possess Master's degree in Data Science or related technical field
Experience developing and working with Artificial Intelligence and Machine Learning (AI/ML)
Demonstrated experience with advanced programming techniques, using one or more of the following: HTML5/JavaScript, ArcObjects, Python, Model Builder, Oracle, SQL, GIScience, Geospatial Analysis, Statistics, ArcGIS Desktop, ArcGIS Server, ArcSDE, ArcIMS.
Experience using .NET, Python, C++, and/or Java programming for web interface development and geodatabase development.
Experience building and maintaining databases of GEOINT, SIGINT, or OSINT data related to the area of interest needs.
Data visualization experience, which may include matrix analytics, network analytics, and graphing data, to assist the analytical workforce in generating common operational pictures that depict fused intelligence and information supporting informal assessments and finished products
Chopine Analytic Solutions is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy and gender identity), national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, any other non-merit factor, or any other characteristic protected by law.
$117k-166k yearly est. 49d ago
Semantic Data Engineer
Cambia Health Solutions 3.9
Data scientist job in Portland, OR
SEMANTIC DATA ENGINEER (HEALTHCARE)
Hybrid - within Oregon, Washington, Idaho or Utah
Build a career with purpose. Join our Cause to create a person-focused and economically sustainable health care system.
Who We Are Looking For:
Every day, Cambia's Data & Analytics Engineering Team is living our mission to make health care easier and lives better. We're seeking a skilled Data and Analytics Engineer with significant experience engineering semantic layers to design, implement, expand, and enhance our existing semantic layer within our Snowflake data platform to support AI-driven semantic intelligence and BI for our health insurance payer organization. The role will focus on creating a robust, scalable semantic framework that enhances data discoverability, interoperability, and usability for AI and BI tools, enabling advanced analytics, predictive modeling, and actionable insights, and on implementing and optimizing semantic data models, ensuring seamless integration with AI workflows, and supporting advanced analytics initiatives - all in service of making our members' health journeys easier.
If you're a motivated and experienced Semantic Engineer looking to make a difference in the healthcare industry, apply for this exciting opportunity today!
What You Bring to Cambia:
Qualifications and Certifications:
Bachelor's degree in computer science, Mathematics, Business Administration, Engineering, or a related field
3+ years relevant experience in a multi-platform environment, including, but not limited to application development or database development
At least 1 year working with Snowflake or similar cloud data platforms
Or an equivalent combination of education and experience
What You Will Do at Cambia (Not limited to):
Implement Enterprise Semantic Models: Build and maintain semantic data models on Snowflake based on specifications from the Semantic Data Architect and Data Product Owner, ensuring alignment with business, analysis, and AI requirements (a minimal sketch follows this list).
Data Pipeline Development: When necessary, develop and optimize ETL/ELT pipelines to populate the semantic layer, integrating data from diverse sources (e.g., claims, member data, third-party feeds) using Snowflake's capabilities.
Analytics and AI Integration: Enable analytics and AI workflows by preparing and transforming data in the semantic layer for use in predictive models, natural language processing, and other analytics and AI applications.
Performance Tuning: Optimize Snowflake queries and data structures (e.g., tables, views, materialized views) to ensure high performance for semantic data access.
Data Quality and Validation: Implement data quality checks and validation processes to ensure the accuracy and reliability of the semantic layer.
Collaboration: Work with data product owners, business analysts, the semantic data architect, data modelers, and data engineers to create and refine data models and troubleshoot issues in production environments.
Automation and Monitoring: Automate semantic layer maintenance tasks and set up monitoring to ensure system reliability and performance.
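A hedged sketch of the semantic-model work above, assuming the snowflake-connector-python package and credentials supplied via environment variables; every table, column, and view name here is hypothetical, not Cambia's actual schema.

```python
import os
import snowflake.connector  # assumes the snowflake-connector-python package is installed

# Hypothetical semantic-layer view: stable business names over raw claims tables.
ddl = """
CREATE OR REPLACE VIEW analytics.semantic.member_claims AS
SELECT c.claim_id,
       m.member_id,
       c.service_date,
       c.icd10_code          AS diagnosis_code,
       c.allowed_amount_usd  AS allowed_amount
FROM raw.claims c
JOIN raw.members m ON m.member_sk = c.member_sk
"""

# Placeholder credentials; a real deployment would use a secrets manager and key-pair auth.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
)
conn.cursor().execute(ddl)
```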
Skills and Attributes (Not limited to):
Proficiency in SQL, Python, or other scripting languages for data processing and pipeline development.
Experience using code repositories such as GitLab or GitHub and CI/CD-based deployment.
Experience with semantic technologies, including Snowflake semantic views, MicroStrategy, AtScale, or Business Objects universes, plus familiarity with healthcare data standards (e.g., FHIR, HL7, ICD-10) and healthcare ontologies (e.g., SNOMED, LOINC).
Strong understanding of analytics workflows and their data requirements.
Experience with data governance, metadata management, and compliance in healthcare.
Strong problem-solving skills and experience with data pipeline tools (e.g., dbt, Snowflake's OpenFlow, Airflow).
Knowledge of healthcare regulations (e.g., HIPAA) and data security best practices.
Preferred: Experience with Snowflake features like Streams, Tasks, or data sharing; familiarity with cloud platforms (AWS, Azure, or GCP).
Experience in dimensional data modeling.
Excellent communication skills to bridge technical and business teams.
The expected hiring range for The Semantic Data Engineer is $115k-$135k depending on skills, experience, education, and training; relevant licensure / certifications; performance history; and work location. The bonus target for Semantic Data Engineer is 15%. The current full salary range for the Architect II position is $104k Low/ $130k MRP / $169k High.
About Cambia
Working at Cambia means being part of a purpose-driven, award-winning culture built on trust and innovation anchored in our 100+ year history. Our caring and supportive colleagues are some of the best and brightest in the industry, innovating together toward sustainable, person-focused health care. Whether we're helping members, lending a hand to a colleague or volunteering in our communities, our compassion, empathy and team spirit always shine through.
Why Join the Cambia Team?
At Cambia, you can:
Work alongside diverse teams building cutting-edge solutions to transform health care.
Earn a competitive salary and enjoy generous benefits while doing work that changes lives.
Grow your career with a company committed to helping you succeed.
Give back to your community by participating in Cambia-supported outreach programs.
Connect with colleagues who share similar interests and backgrounds through our employee resource groups.
We believe a career at Cambia is more than just a paycheck - and your compensation should be too. Our compensation package includes competitive base pay as well as a market-leading 401(k) with a significant company match, bonus opportunities and more.
In exchange for helping members live healthy lives, we offer benefits that empower you to do the same. Just a few highlights include:
Medical, dental and vision coverage for employees and their eligible family members, including mental health benefits.
Annual employer contribution to a health savings account.
Generous paid time off varying by role and tenure in addition to 10 company-paid holidays.
Market-leading retirement plan including a company match on employee 401(k) contributions, with a potential discretionary contribution based on company performance (no vesting period).
Up to 12 weeks of paid parental time off (eligibility requires 12 months of continuous service with Cambia immediately preceding leave).
Award-winning wellness programs that reward you for participation.
Employee Assistance Fund for those in need.
Commute and parking benefits.
Learn more about our benefits.
We are happy to offer work from home options for most of our roles. To take advantage of this flexible option, we require employees to have a wired internet connection that is not satellite or cellular, and internet service with a minimum upload speed of 5 Mbps and a minimum download speed of 10 Mbps.
We are an Equal Opportunity employer dedicated to a drug and tobacco-free workplace. All qualified applicants will receive consideration for employment without regard to race, color, national origin, religion, age, sex, sexual orientation, gender identity, disability, protected veteran status or any other status protected by law. A background check is required.
If you need accommodation for any part of the application process because of a medical condition or disability, please email ******************************. Information about how Cambia Health Solutions collects, uses, and discloses information is available in our Privacy Policy.
$115k-135k yearly Auto-Apply 15d ago
Principal Data Engineer
Autodesk 4.5
Data scientist job in Portland, OR
**Job Requisition ID #** 25WD90545
We are seeking a Principal Data Engineer to provide technical leadership in designing, building, and scaling data infrastructure that powers machine learning, personalization, and search experiences. You will architect scalable data pipelines in production, drive technical strategy, and mentor engineering teams while partnering with Machine Learning Engineering, Platform Engineering, and Data Science.
Your work will be critical to strategic initiatives including optimization of digital conversion metrics, development of Autodesk Assistant (an LLM-driven chatbot), RAG (Retrieval-Augmented Generation) systems, eCommerce personalization engines, and intelligent search capabilities. As a Principal Engineer, you will set technical direction, establish best practices, and drive innovation across the data engineering organization.
Our team culture is built on collaboration, mutual support, and continuous learning. We emphasize an agile, hands-on, and technical approach at all levels of the team.
**Key Responsibilities**
+ Define and drive technical architecture for data platforms supporting ML and personalization at scale.
+ Design, build, and maintain highly scalable, low-latency data pipelines supporting real-time ML inference and personalization.
+ Build data pipelines for RAG systems, including vector embeddings and semantic search (see the sketch after this list).
+ Design real-time feature engineering and event processing systems for eCommerce personalization and recommendation engines.
+ Develop sophisticated data models optimized for ML training, real-time inference, and analytics workloads.
+ Implement complex stream processing architectures using technologies like Kafka and Flink.
+ Oversee and optimize database systems (SQL, NoSQL, vector databases) ensuring high performance and scalability.
+ Establish data engineering standards, patterns, and best practices across the organization.
+ Provide technical leadership, guidance, and mentorship to senior and mid-level data engineers.
+ Lead cross-functional teams to deliver complex data engineering solutions in production.
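A minimal sketch of the embedding-and-retrieval step behind RAG and semantic search, assuming the sentence-transformers package; the model name and tiny corpus are illustrative stand-ins.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumes sentence-transformers is installed

# Embed a tiny corpus and rank it against a query by cosine similarity,
# the core retrieval step behind a RAG pipeline.
model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "How to apply a fillet in Fusion",
    "Exporting AutoCAD drawings to PDF",
    "Resetting your Autodesk account password",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(["I forgot my login credentials"], normalize_embeddings=True)

scores = doc_vecs @ query_vec.T  # cosine similarity, since vectors are unit-normalized
print(docs[int(np.argmax(scores))])
```

In production, the corpus embeddings would live in a vector database (Pinecone, Weaviate, Milvus) rather than in memory, but the similarity ranking is the same.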
**Minimum Qualifications**
+ **8+ years** of data engineering experience with at least **3+ years** in a senior or lead capacity.
+ Expert-level proficiency in Python, Java, or Scala
+ Deep knowledge of SQL and extensive experience with relational and NoSQL databases
+ Strong expertise in big data technologies (Kafka, Flink, Spark, Parquet, Iceberg, Delta Lake)
+ Advanced experience with ETL orchestration tools like Apache Airflow
+ Extensive experience with cloud platforms (AWS, Azure, or GCP) and their data services
+ Deep understanding of data warehousing solutions like Snowflake, Redshift, or BigQuery
+ Proven expertise in data modeling, data architecture design, and ETL/ELT processes at scale
+ Demonstrated ability to lead technical initiatives and mentor engineering talent
+ Strong communication skills with ability to explain complex technical concepts to diverse audiences.
+ Bachelor's degree in computer science, or related field (Master's/PhD strongly preferred)
**Preferred Skills & Experience**
+ Experience building data pipelines for Retrieval-Augmented Generation, vector databases (Pinecone, Weaviate, Milvus), and semantic search
+ Experience with real-time personalization engines, recommendation systems, feature stores, and A/B testing infrastructure
+ Experience building ML data pipelines, feature engineering systems, and model serving infrastructure
+ Knowledge of search technologies (Elasticsearch, Solr, OpenSearch), relevance tuning, and search analytics
+ Deep experience with event streaming, behavioral analytics, and customer data platforms
+ Experience with A/B testing infrastructure and metrics computation at scale
+ Knowledge of MLOps, model monitoring, and ML pipeline orchestration
+ Knowledge of eCommerce metrics, conversion optimization, and digital analytics
**Learn More**
**About Autodesk**
Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made.
We take great pride in our culture here at Autodesk - it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world.
When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!
**Benefits**
From health and financial benefits to time away and everyday wellness, we give Autodeskers the best, so they can do their best work. Learn more about our benefits in the U.S. by visiting ******************************
**Salary transparency**
Salary is one part of Autodesk's competitive compensation package. For U.S.-based roles, we expect a starting base salary between $130,600 and $211,200. Offers are based on the candidate's experience and geographic location, and may exceed this range. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.
**Equal Employment Opportunity**
At Autodesk, we're building a diverse workplace and an inclusive culture to give more people the chance to imagine, design, and make a better world. Autodesk is proud to be an equal opportunity employer and considers all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender, gender identity, national origin, disability, veteran status or any other legally protected characteristic. We also consider for employment all qualified applicants regardless of criminal histories, consistent with applicable law.
**Diversity & Belonging**
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: ********************************************************
**Are you an existing contractor or consultant with Autodesk?**
Please search for open jobs and apply internally (not on this external site).
$130.6k-211.2k yearly 60d+ ago
Databricks Data Engineer - Senior - Consulting - Location Open
EY 4.7
Data scientist job in Portland, OR
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
**Technology - Data and Decision Science - Data Engineering - Senior**
We are seeking a highly skilled Senior Consultant Data Engineer with expertise in cloud data engineering, specifically Databricks. The ideal candidate will have strong client management and communication skills, along with a proven track record of successful end-to-end implementations in data engineering projects.
**The opportunity**
In this role, you will design and build analytics solutions that deliver significant business value. You will collaborate with other data and analytics professionals, management, and stakeholders to ensure that technical requirements align with business needs. Your responsibilities will include creating scalable data architecture and modeling solutions that support the entire data asset lifecycle.
**Your key responsibilities**
As a Senior Data Engineer, you will play a pivotal role in transforming data into actionable insights. Your time will be spent on various responsibilities, including:
+ Designing, building, and operating scalable on-premises or cloud data architecture.
+ Analyzing business requirements and translating them into technical specifications.
+ Optimizing data flows for target data platform designs.
+ Design, develop, and implement data engineering solutions using Databricks on cloud platforms (e.g., AWS, Azure, GCP).
+ Collaborate with clients to understand their data needs and provide tailored solutions that meet their business objectives.
+ Lead end-to-end data pipeline development, including data ingestion, transformation, and storage (a minimal sketch follows this list).
+ Ensure data quality, integrity, and security throughout the data lifecycle.
+ Provide technical guidance and mentorship to junior data engineers and team members.
+ Communicate effectively with stakeholders, including technical and non-technical audiences, to convey complex data concepts.
+ Manage client relationships and expectations, ensuring high levels of satisfaction and engagement.
+ Stay updated with the latest trends and technologies in data engineering and cloud computing.
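A minimal PySpark sketch of the ingestion-transformation-storage flow above; the paths and column names are hypothetical, and the Delta write assumes a Databricks-style runtime where Delta Lake is available.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal raw-to-curated hop of the kind described above; all names are illustrative.
spark = SparkSession.builder.appName("orders_curation").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")
curated = (
    raw.filter(F.col("status") == "COMPLETE")
       .withColumn("order_date", F.to_date("order_ts"))
       .select("order_id", "customer_id", "order_date", "amount")
)
curated.write.format("delta").mode("overwrite").save("/mnt/curated/orders/")
```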
This role offers the opportunity to work with cutting-edge technologies and stay ahead of industry trends, ensuring you gain a competitive advantage in the market. The position may require regular travel to meet with external clients.
**Skills and attributes for success**
To thrive in this role, you will need a blend of technical and interpersonal skills. Your ability to communicate effectively and build relationships will be crucial. Here are some key attributes we look for:
+ Strong analytical and decision-making skills.
+ Proficiency in cloud computing and data architecture design.
+ Experience in data integration and security.
+ Ability to manage complex problem-solving scenarios.
**To qualify for the role, you must have**
+ A Bachelor's degree in Computer Science, Engineering, or a related field required (4-year degree). Master's degree preferred
+ Typically, no less than 2 - 4 years relevant experience in data engineering, with a focus on cloud data solutions.
+ 5+ years of experience in data engineering, with a focus on cloud data solutions.
+ Expertise in Databricks and experience with Spark for big data processing.
+ Proven experience in at least two end-to-end data engineering implementations, including:
+ Implementation of a data lake solution using Databricks, integrating various data sources, and enabling analytics for business intelligence.
+ Development of a real-time data processing pipeline using Databricks and cloud services, delivering insights for operational decision-making.
+ Strong programming skills in languages such as Python, Scala, or SQL.
+ Experience with data modeling, ETL processes, and data warehousing concepts.
+ Excellent problem-solving skills and the ability to work independently and as part of a team.
+ Strong communication and interpersonal skills, with a focus on client management.
**Required Expertise for Senior Consulting Projects:**
+ **Strategic Thinking:** Ability to align data engineering solutions with business strategies and objectives.
+ **Project Management:** Experience in managing multiple projects simultaneously, ensuring timely delivery and adherence to project scope.
+ **Stakeholder Engagement:** Proficiency in engaging with various stakeholders, including executives, to understand their needs and present solutions effectively.
+ **Change Management:** Skills in guiding clients through change processes related to data transformation and technology adoption.
+ **Risk Management:** Ability to identify potential risks in data projects and develop mitigation strategies.
+ **Technical Leadership:** Experience in leading technical discussions and making architectural decisions that impact project outcomes.
+ **Documentation and Reporting:** Proficiency in creating comprehensive documentation and reports to communicate project progress and outcomes to clients.
**Ideally, you'll also have**
+ Experience with data quality management.
+ Familiarity with semantic layers in data architecture.
+ Familiarity with cloud platforms (AWS, Azure, GCP) and their data services.
+ Knowledge of data governance and compliance standards.
+ Experience with machine learning frameworks and tools.
**What we look for**
We seek individuals who are not only technically proficient but also possess the qualities of top performers. You should be adaptable, collaborative, and driven by a desire to achieve excellence in every project you undertake.
FY26NATAID
**What we offer you**
At EY, we'll develop you with future-focused skills and equip you with world-class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more .
+ We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $106,900 to $176,500. The base salary range for New York City Metro Area, Washington State and California (excluding Sacramento) is $128,400 to $200,600. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
+ Join us in our team-led and leader-enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40-60% of the time over the course of an engagement, project or year.
+ Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.
**Are you ready to shape your future with confidence? Apply today.**
EY accepts applications for this position on an on-going basis.
For those living in California, please click here for additional information.
EY focuses on high-ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
**EY | Building a better working world**
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1-800-EY-HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at ************************** .
$128.4k-200.6k yearly 60d+ ago
Data Engineer
Nike 4.7
Data scientist job in Beaverton, OR
Data Engineer - NIKE USA Inc. - Beaverton, OR.
Build and deliver scalable data and analytics solutions focused on Consumer and Marketplace data products; design, implement, and integrate new technologies and evolve data and analytics products; contribute to all aspects of data engineering, from ingestion and transformation to consumption, in addition to designing and building test-driven development, reusable frameworks, automated workflows, and libraries at scale to support analytics products; participate in architecture and design discussions to process and store high-volume data sets; drive the delivery of scalable data and analytics solutions, implement and integrate new technologies, and manage and evolve the data lake; collaborate with other engineers, analysts, and business partners.
Analyze information to determine, recommend, and plan installation of a new system or modification of an existing system; analyze user needs and software requirements to determine feasibility of design within time and cost constraints; confer with data processing or project managers to obtain information on limitations or capabilities for data processing projects; confer with systems analysts, engineers, programmers and others to design systems and to obtain information on project limitations and capabilities, performance requirements and interfaces; consult with customers or other departments on project status, proposals, or technical issues, such as software system design or maintenance; coordinate installation of software systems; design, develop and modify software systems, using scientific analysis and mathematical models to predict and measure outcomes and consequences of design; develop or direct software system testing or validation procedures, programming, or documentation; modify existing software to correct errors, adapt it to new hardware, or upgrade interfaces and improve performance; monitor functioning of equipment to ensure the system operates in conformance with specifications; obtain and evaluate information on factors such as reporting formats required, costs, or security needs to determine hardware configuration; prepare reports or correspondence concerning project specifications, activities, or status; recommend purchase of equipment to control dust, temperature, or humidity in the area of system installation; specify power supply requirements and configuration; store, retrieve, and manipulate data for analysis of system capabilities and requirements; and supervise and assign work to programmers, designers, technologists, technicians, or other engineering or scientific personnel.
Telecommuting is available from anywhere in the U.S., except from SD, VT, and WV.
Must have a Bachelor's degree in Engineering or Information Technology and 5 years of progressive, post-baccalaureate experience in the job offered or in an engineering-related occupation.
Experience must include:
• Python
• Airflow
• SQL
• Spark
• Cloud - AWS/Azure
• Databricks
• Snowflake
• Data Pipeline Design
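For a concrete sense of how the Airflow and Spark pieces of the stack above typically fit together, here is a minimal, illustrative sketch; the DAG name, schedule, and script path are hypothetical and not taken from the posting:

```python
# Minimal Airflow DAG sketch: schedule a daily PySpark batch job.
# dag_id, schedule, and the spark-submit script path are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="consumer_data_daily",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit a Spark job; assumes spark-submit is on the worker's PATH.
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit /opt/jobs/transform_events.py",
    )
```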
Apply at ******************** (Job # R-74933 )
#LI-DNI
We offer a number of accommodations to complete our interview process including screen readers, sign language interpreters, accessible and single location for in-person interviews, closed captioning, and other reasonable modifications as needed. If you discover, as you navigate our application process, that you need assistance or an accommodation due to a disability, please complete the Candidate Accommodation Request Form.
$120k-148k yearly est. Auto-Apply 31d ago
Sr. Data Engineer
It Vision Group
Data scientist job in Portland, OR
Job Description
Title : Sr. Data Engineer
Duration: 12 Months+
Roles & Responsibilities
Perform data analysis according to business needs
Translate functional business requirements into high-level and low-level technical designs
Design and implement distributed data processing pipelines using Apache Spark, Apache Hive, Python, and other tools and languages prevalent in a modern analytics platform
Create and schedule workflows using Apache Airflow or similar job orchestration tooling
Build utilities, functions, and frameworks to better enable high-volume data processing
Define and build data acquisitions and consumption strategies
Build and incorporate automated unit tests, participate in integration testing efforts
Work with teams to resolve operational & performance issues
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and followed.
Tech Stack
Apache Spark
Apache Spark Streaming using Apache Kafka
Apache Hive
Apache Airflow
Python
AWS EMR and S3
Snowflake
SQL
Other Tools & Technologies: PyCharm, Jenkins, GitHub.
Apache Nifi (Optional)
Scala (Optional)
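As a rough illustration of the Spark Structured Streaming + Kafka portion of this stack, the sketch below reads a Kafka topic and lands micro-batches on S3; broker, topic, and bucket names are placeholders, and the job assumes the spark-sql-kafka connector package is available:

```python
# Sketch: Spark Structured Streaming job reading from Kafka and landing
# micro-batches on S3. Broker, topic, and bucket are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/orders/")         # placeholder
    .option("checkpointLocation", "s3a://example-bucket/chk/")
    .start()
)
query.awaitTermination()
```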
$84k-118k yearly est. 12d ago
BigData Engineer / Architect
Nitor Infotech
Data scientist job in Portland, OR
The hunt is for a strong Big Data professional: a team player with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within internal teams and with partner organizations and suppliers.
Role:
Big Data Engineer
Location:
Portland OR.
Duration:
Full Time
Skill Matrix:
Map Reduce - Required
Apache Spark - Required
Informatica PowerCenter - Required
Hive - Required
Apache Hadoop - Required
Core Java / Python - Highly Desired
Healthcare Domain Experience - Highly Desired
Job Description
Responsibilities and Duties
Participate in technical planning & requirements gathering phases including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications.
Be responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and troubleshoot any existing issues.
Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elasticsearch, Storm, Kafka, etc., in both on-premises and cloud deployment models, to solve large-scale processing problems
Design, enhance and implement ETL/data ingestion platform on the cloud.
Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for an enterprise-level data warehouse
Capable of quickly investigating, becoming familiar with, and mastering new data sets
Strong troubleshooting and problem-solving skills in large data environments
Experience building data platforms on the cloud (AWS or Azure)
Experience using Python, Java, or another language to solve data problems
Experience in implementing SDLC best practices and Agile methods.
Qualifications
Required Skills:
Data architecture/ Big Data/ ETL environment
Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi, or equivalent
Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight, Data Lake Design)
Building and managing hosted big data architecture; toolkit familiarity in Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi
Foundational data management concepts - RDM and MDM
Experience working with JIRA/Git/Bitbucket/JUnit and other code management toolsets
Strong hands-on knowledge of languages like Java, Scala, or Python - any one is fine
Healthcare Domain knowledge
Required Experience, Skills and Qualifications
Qualifications:
Bachelor's Degree with a minimum of 6 to 9+ years' relevant experience, or equivalent.
Extensive experience in data architecture/Big Data/ ETL environment.
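For illustration only, here is a minimal PySpark batch job in the spirit of the Hive/Spark ETL work described above; the input path, column names, and Hive table name are hypothetical:

```python
# Sketch: batch ETL writing a cleaned dataset to a partitioned Hive table.
# Source path, columns, and table name are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("claims-etl")
    .enableHiveSupport()   # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

raw = spark.read.parquet("/data/raw/claims/")    # placeholder input
clean = raw.dropDuplicates(["claim_id"]).na.drop(subset=["member_id"])

(
    clean.write.mode("overwrite")
    .partitionBy("claim_date")
    .saveAsTable("analytics.claims_clean")       # hypothetical Hive table
)
```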
Additional Information
All your information will be kept confidential according to EEO guidelines.
$84k-118k yearly est. 17h ago
Need Sr Big Data Engineer at Beaverton, OR Only W2
USM 4.2
Data scientist job in Beaverton, OR
Hi,
We have an immediate opportunity with our direct client; please send your resume as soon as possible if you are interested. Thank you.
Sr Big Data Engineer
Duration: Long Term
Skills
Typical Office: This is a typical office job, with no special physical requirements or unusual work environment.
Core responsibilities: Expert data engineers will work with product teams across the client to help automate and integrate a variety of data domains with a wide range of data profiles (differing scale, cadence, and volatility) into the client's next-gen data and analytics platform. This is an opportunity to work across multiple subject areas and source platforms to ingest, organize, and prepare data through cloud-native processes.
Required skills/experience:
- 5+ years of professional development experience in either Python (preferred) or Scala/Java; familiarity with both is ideal
- 5+ years of data-centric development with a focus on efficient data access and manipulation at multiple scales
- 3+ years of experience with the HDFS ecosystem of tools (any distro, Spark experience prioritized)
- 3+ years of significant experience developing within the broader AWS ecosystem of platforms and services
- 3+ years of experience optimizing data access and analysis in non-HDFS data platforms (traditional RDBMSs, NoSQL / KV stores, etc.)
- Direct task development and/or configuration experience with a remote workflow orchestration tool - Airflow (preferred), Amazon Data Pipeline, Luigi, Oozie, etc.
- Intelligence, strong problem-solving ability, and the ability to effectively communicate to partners with a broad spectrum of experiential backgrounds
Several of the following skills are also desired:
- A demonstrably strong understanding of security and credential management between application / platform components
- A demonstrably strong understanding of core considerations when working with data at scale, in both file-based and database contexts, including SQL optimization
- Direct experience with Netflix Genie is another huge plus
- Prior experience with the operational backbone of a CI/CD environment (pipeline orchestration + configuration management) is useful
- Clean coding practices, passion for development, and being a generally good team player (plus experience with GitHub) are always nice
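As a small, hedged example of the AWS-side data access this role calls for, the sketch below sizes up pending raw files in S3 with boto3's paginator; the bucket and prefix are placeholders:

```python
# Sketch: enumerating raw files in S3 before ingestion, using boto3's
# list_objects_v2 paginator so listings beyond 1,000 keys are handled.
# Bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_bytes = 0
for page in paginator.paginate(Bucket="example-raw", Prefix="events/2024/"):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]

print(f"pending ingest: {total_bytes / 1e9:.2f} GB")
```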
Keys to Success:
- Deliver exceptional customer service
- Demonstrate accountability and integrity
- Be willing to learn and adapt every day
- Embrace change
Regards
Nithya
Additional Information
All your information will be kept confidential according to EEO guidelines. please send the profiles to ************************* and contact No# ************.
$93k-132k yearly est. Easy Apply 60d+ ago
Data Engineer
Webmd 4.7
Data scientist job in Portland, OR
at WebMD
WebMD is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, ancestry, color, religion, sex, gender, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.
The ideal candidate has experience and passion for data engineering. They are an expert in their field - whether that be the front end (data visualization, DAX, Power BI, SQL) or the back end (SSIS, Azure Data Factory, Synapse, data modeling, SQL) of the data engineering world. Exhibits expert knowledge of architecture and system design principles. This person has a consistent record of very strong ownership for their area, and is considered the guru in their technical space. This person is able to solve complex cross-functional challenges and perform complex analysis of business needs and provide innovative solutions. Works across the organization to foster a culture of architecture that allows for iterative, autonomous development and future scaling. Guides teams in the organization in anticipation of future use cases and helps them make design decisions that minimize the cost of future changes.
DUTIES & RESPONSIBILITIES
Builds software to extract, transform, and load data - SSIS, Azure Data Factory, Synapse
Models data for efficient consumption by reporting and analytics tools - Azure Analysis Services, Power BI, SQL Databases
Maintains previously deployed software and reports - Power BI, SSIS, AAS, SQL Server
Designs dashboards and reports to meet business needs
Presents technical problems with solutions in mind, in a constructive and understandable fashion
Demonstrates proficiency by sharing learnings with the team and in technical showcases
Independently discovers solutions and collaborates on insights and best practices
Takes ownership of their work deliverables
Actively seeks out opportunities to help improve the team's practices and processes to achieve fast flow
Works with other developers to facilitate knowledge transfer and conduct code reviews
Ensures that credit is shared and given when due
Works to build and improve strong relationships among team members
REQUIREMENTS
3+ years of Data Engineering experience
Bachelor's Degree in Information Systems, Computer Science, Business Operations, or equivalent work experience
Advanced experience with Structured Query Language (SQL) and data modeling/architecture
Advanced experience with Data Analysis Expressions (DAX)
Advanced experience with and understanding of data integration engines such as SQL Server Integration Services (SSIS), Azure Data Factory, or Synapse
All offers are contingent upon the successful completion of a background check
PREFERRED SKILLS AND KNOWLEDGE
Proficiency with data engineering in a cloud development environment - Microsoft Azure is preferred
Proficiency with self-service query tools and dashboards - preferably Power BI
Proficiency with star schema design and managing large data volumes
Familiarity with data science and machine learning capabilities
Ability to interact with people, inside and outside the team, in order to see a project to completion
Experience protecting individual privacy, such as required by HIPAA
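To make the star-schema requirement concrete, here is a minimal sketch of the kind of fact/dimension query such a warehouse serves, run from Python via pyodbc; the connection string and table names are hypothetical:

```python
# Sketch: querying a star schema (fact + dimension) on SQL Server.
# Connection string and table names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=ReportingDW;Trusted_Connection=yes;"
)

sql = """
SELECT d.CalendarMonth, SUM(f.VisitCount) AS Visits
FROM   fact.PageVisits AS f                 -- fact table (grain: day/page)
JOIN   dim.[Date]      AS d ON d.DateKey = f.DateKey
GROUP BY d.CalendarMonth
ORDER BY d.CalendarMonth;
"""

# pyodbc rows unpack like tuples, so we can iterate directly.
for month, visits in conn.cursor().execute(sql):
    print(month, visits)
```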
$100k-129k yearly est. Auto-Apply 60d+ ago
Senior Data Engineer
Advance Local 3.6
Data scientist job in Portland, OR
**Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position combines deep technical expertise in data engineering with team leadership responsibilities, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with the data product team and business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into the central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations.
The base salary range is $120,000 - $140,000 per year.
**What you'll be doing:**
+ Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake.
+ Partner with platform owners across business units to establish and maintain data integrations from third party systems into the central data platform.
+ Architect and maintain data infrastructure using IaC, ensuring reproducibility, version control, and disaster recovery capabilities.
+ Design and implement API integrations and event-driven data flows to support real time and batch data requirements.
+ Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities.
+ Partner with the Data Architect and the data product team to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs.
+ Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
+ Support rapid prototyping of new data products in collaboration with the data product team by building flexible, reusable data infrastructure components.
+ Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability.
+ Collaborate with the data product team, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization.
+ Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources.
+ Develop and maintain comprehensive documentation for data engineering processes and systems, architecture, integration patterns, and runbooks.
+ Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact.
+ Stay current with emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.
**Our ideal candidate will have the following:**
+ Bachelor's degree in computer science, engineering, or a related field
+ Minimum of seven years of experience in data engineering with at least two years in a lead or senior technical role
+ Expert proficiency in Snowflake data engineering patterns
+ Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform
+ Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs)
+ Proven ability to work with third party APIs, webhooks, and data exports
+ Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure
+ Proven ability to design and implement API integrations and event-driven architecture
+ Experience with data modeling, data warehousing, and ETL processes at scale
+ Advanced proficiency in Python and SQL for data pipeline development
+ Experience with data orchestration tools (Airflow, dbt, Snowflake tasks)
+ Strong understanding of data security, access controls, and compliance requirements
+ Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms
+ Excellent problem-solving skills and attention to detail
+ Strong communication and collaboration skills
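For flavor, here is a minimal sketch of the Snowflake ingestion pattern this role centers on: issuing a COPY INTO from an external stage via the Snowflake Python connector. The account, stage, and table identifiers are placeholders:

```python
# Sketch: loading staged S3 data into Snowflake with COPY INTO, via the
# Snowflake Python connector. Account, stage, and table are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="ETL_USER",
    password="...",            # in practice, pull from a secrets manager
    warehouse="INGEST_WH",
    database="RAW",
    schema="EVENTS",
)

with conn.cursor() as cur:
    cur.execute("""
        COPY INTO raw.events.page_views
        FROM @raw.events.s3_stage/page_views/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
conn.close()
```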
**Additional Information**
Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.
Advance Local Media is one of the largest media groups in the United States, which operates the leading news and information companies in more than 20 cities, reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ******************** .
Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.
_Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._
_If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._
Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
$120k-140k yearly 46d ago
Google Cloud Data & AI Engineer
Slalom 4.6
Data scientist job in Portland, OR
Who You'll Work With
As a modern technology company, our Slalom technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant, or Principal at Slalom, you will be part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
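As a small illustration of the BigQuery work described above, the sketch below runs a parameterized aggregation with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical:

```python
# Sketch: running a parameterized BigQuery aggregation from Python.
# Project, dataset, and table are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT channel, COUNT(*) AS sessions
    FROM `example-project.analytics.events`
    WHERE event_date = @day
    GROUP BY channel
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("day", "DATE", "2024-06-01")
        ]
    ),
)
for row in job.result():
    print(row.channel, row.sessions)
```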
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
#LI-FB1
$145k-217.5k yearly 10d ago
Sr. Data Engineer
Concora Credit
Data scientist job in Beaverton, OR
As a Sr. Data Engineer, you'll help drive Concora Credit's Mission to enable customers to Do More with Credit - every single day.
The impact you'll have at Concora Credit:
We are seeking a Sr. Data Engineer with deep expertise in Azure and Databricks to lead the design, development, and optimization of scalable data pipelines and platforms. You'll be responsible for building robust data solutions that power analytics, reporting, and machine learning across the organization using Azure cloud services and Databricks.
We hire people, not positions. That's because, at Concora Credit, we put people first, including our customers, partners, and Team Members. Concora Credit is guided by a single purpose: to help non-prime customers do more with credit. Today, we have helped millions of customers access credit. Our industry leadership, resilience, and willingness to adapt ensure we can help our partners responsibly say yes to millions more. As a company grounded in entrepreneurship, we're looking to expand our team and are looking for people who foster innovation, strive to make an impact, and want to Do More! We're an established company with over 20 years of experience, but now we're taking things to the next level. We're seeking someone who wants to impact the business and play a pivotal role in leading the charge for change.
Responsibilities
As our Sr. Data Engineer, you will:
Design and develop scalable, efficient data pipelines using Azure Databricks
Build and manage data ingestion, transformation, and storage solutions leveraging Azure Data Factory, Azure Data Lake, and Delta Lake
Implement CI/CD for data workflows using tools like Azure DevOps, Git, and Terraform
Optimize performance and cost efficiency across large-scale distributed data systems
Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver reliable, reusable datasets
Provide guidance and mentor junior engineers and actively contribute to data platform best practices
Monitor, troubleshoot, and optimize existing pipelines and infrastructure to ensure reliability and scalability
These duties must be performed with or without reasonable accommodation.
We know experience comes in many forms and that many skills are transferable. If your experience is close to what we're looking for, consider applying. Diversity has made us the entrepreneurial and innovative company that we are today.
Qualifications
Requirements:
5+ years of experience in data engineering, with a strong focus on Azure cloud technologies
Experience with Azure Databricks, Azure Data Lake, and Data Factory, including PySpark, SQL, Python, and Delta Lake
Strong proficiency in Databricks and Apache Spark
Solid understanding of data warehousing, ETL/ELT, and data modeling best practices
Experience with version control, CI/CD pipelines, and infrastructure as code
Knowledge of Spark performance tuning, partitioning, and job orchestration
Excellent problem-solving skills and attention to detail
Strong communication and collaboration abilities across technical and non-technical teams
Ability to work independently and lead in a fast-paced, agile environment
Passion for delivering clean, high-quality, and maintainable code
Preferred Qualifications:
Experience with Unity Catalog, Databricks Workflows, and Delta Live Tables
Familiarity with DevOps practices or Terraform for Azure resource provisioning
Understanding of data security, RBAC, and compliance in cloud environments
Experience integrating Databricks with Power BI or other analytics platforms
Exposure to real-time data processing using Kafka, Event Hubs, or Structured Streaming
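A minimal sketch of the core Databricks pattern behind these responsibilities: a PySpark job deduplicating raw data and writing a partitioned Delta table. Paths and table names are illustrative only; on Databricks a SparkSession already exists as `spark`, and Delta support is built into the runtime:

```python
# Sketch: PySpark job writing a partitioned Delta table.
# Paths and table names are illustrative; assumes a Delta-enabled runtime.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tx-delta").getOrCreate()

tx = spark.read.json("/mnt/raw/transactions/")           # placeholder path
dedup = tx.dropDuplicates(["transaction_id"])

(
    dedup.write.format("delta")
    .mode("append")
    .partitionBy("posting_date")
    .saveAsTable("silver.transactions")                  # hypothetical table
)
```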
What's In It For You:
Medical, Dental and Vision insurance for you and your family
Relax and recharge with Paid Time Off (PTO)
6 company-observed paid holidays, plus 3 paid floating holidays
401k (after 90 days) plus employer match up to 4%
Pet Insurance for your furry family members
Wellness perks including onsite fitness equipment at both locations, EAP, and access to the Headspace App
We invest in your future through Tuition Reimbursement
Save on taxes with Flexible Spending Accounts
Peace of mind with Life and AD&D Insurance
Protect yourself with company-paid Long-Term Disability and voluntary Short-Term Disability
Concora Credit provides equal employment opportunities to all Team Members and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Employment-based visa sponsorship is not available for this role.
Concora Credit is an equal opportunity employer (EEO).
Please see the Concora Credit Privacy Policy for more information on how Concora Credit processes your personal information during the recruitment process and, if applicable, based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact caprivacynotice@concoracredit.com.
$84k-118k yearly est. Auto-Apply 60d+ ago
Human Performance Data Scientist I
General Dynamics Information Technology 4.7
Data scientist job in Lewisville, WA
**Req ID:** RQ210954 **Type of Requisition:** Regular **Clearance Level Must Be Able to Obtain:** Top Secret/SCI **Public Trust/Other Required:** None **Job Family:** Data Science and Data Engineering **Skills:** Data Analysis, Data Analytics, Data Science, Data Visualization, Statistics
**Experience:**
0+ years of related experience
**US Citizenship Required:**
Yes
**Job Description:**
Seize your opportunity to make a personal impact as a Data Scientist I supporting mission-critical work on an exciting program. GDIT is your place to make meaningful contributions to challenging projects, build your skills, and grow a rewarding career.
At GDIT, people are our differentiator. As a Human Performance Data Scientist I supporting our customer, you will help ensure today is safe and tomorrow is smarter. Our work depends on a Human Performance Data Scientist I joining our team.
The Human Performance (HP) Data Scientist I supports the program by conducting basic data entry, data cleaning, and performance analysis that directly contributes to the readiness and resilience of Special Operations Forces (SOF) personnel. The position is designed to maximize the accuracy, integrity, and applied use of HP data across domains, with priority on SOF Operators and Direct Combat Support personnel.
**HOW A HUMAN PERFORMANCE DATA SCIENTIST I WILL MAKE AN IMPACT:**
+ The Human Performance Data Scientist I is responsible for entering, cleaning, and preparing HP data collected through program initiatives.
+ Working with program staff and the Government biostatistician, assists in building and maintaining databases and spreadsheets that capture performance metrics, wellness indicators, and program participation information.
+ Collaborates with strength coaches, dietitians, athletic trainers, physical therapists, psychologists, and cognitive enhancement specialists to identify opportunities for meaningful human performance data collection.
+ Supports preparation of basic reports and presentations that communicate trends, participation metrics, and readiness insights for program leadership.
+ The contractor receives access to Government systems and uses these systems to manage and analyze human performance data.
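To ground the data entry and cleaning duties above, here is a small, hypothetical pandas example of the kind of QC pass such a role might run; the file name, columns, and plausibility range are invented for illustration:

```python
# Sketch: basic cleaning/entry-QC pass over a performance spreadsheet.
# File, column names, and the plausibility range are hypothetical.
import pandas as pd

df = pd.read_excel("hp_assessments.xlsx")                # placeholder file

# Standardize IDs, drop exact duplicate rows, flag out-of-range values.
df["participant_id"] = df["participant_id"].str.strip().str.upper()
df = df.drop_duplicates()
df["hrv_flag"] = ~df["rmssd_ms"].between(5, 250)         # plausibility check

print(df["hrv_flag"].sum(), "records flagged for review")
```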
**WHAT YOU'LL NEED TO SUCCEED:**
**EDUCATION:** Bachelor's degree in quantitative science, social science or related discipline.
+ Proficient with the suite of Microsoft Office programs, including Word, Excel and Access.
+ Basic proficiency with commonly used statistical software applications such as SPSS, SAS, or R.
+ Proficiency may be demonstrated through prior work history in sport science, military human performance, healthcare, research, or through a history of publications.
+ Possess excellent communication skills and be highly detail-oriented and organized.
**LOCATION:** Various CONUS SITES
**CLEARANCE:** Ability to obtain and maintain Secret or Top-Secret Clearance.
**This is a contingent posting, expected to start in 2026.**
**GDIT IS YOUR PLACE:**
+ 401K with company match
+ Comprehensive health and wellness packages
+ Internal mobility team dedicated to helping you own your career
+ Professional growth opportunities including paid education and certifications
+ Cutting-edge technology you can learn from
+ Rest and recharge with paid vacation and holidays
The likely salary range for this position is $72,877 - $98,599. This is not, however, a guarantee of compensation or salary. Rather, salary will be set based on experience, geographic location and possibly contractual requirements and could fall outside of this range.
Our benefits package for all US-based employees includes a variety of medical plan options, some with Health Savings Accounts, dental plan options, a vision plan, and a 401(k) plan offering the ability to contribute both pre and post-tax dollars up to the IRS annual limits and receive a company match. To encourage work/life balance, GDIT offers employees full flex work weeks where possible and a variety of paid time off plans, including vacation, sick and personal time, holidays, paid parental, military, bereavement and jury duty leave. GDIT typically provides new employees with 15 days of paid leave per calendar year to be used for vacations, personal business, and illness and an additional 10 paid holidays per year. Paid leave and paid holidays are prorated based on the employee's date of hire. The GDIT Paid Family Leave program provides a total of up to 160 hours of paid leave in a rolling 12 month period for eligible employees. To ensure our employees are able to protect their income, other offerings such as short and long-term disability benefits, life, accidental death and dismemberment, personal accident, critical illness and business travel and accident insurance are provided or available. We regularly review our Total Rewards package to ensure our offerings are competitive and reflect what our employees have told us they value most.
We are GDIT. A global technology and professional services company that delivers consulting, technology and mission services to every major agency across the U.S. government, defense and intelligence community. Our 30,000 experts extract the power of technology to create immediate value and deliver solutions at the edge of innovation. We operate across 50 countries worldwide, offering leading capabilities in digital modernization, AI/ML, Cloud, Cyber and application development. Together with our clients, we strive to create a safer, smarter world by harnessing the power of deep expertise and advanced technology.
Join our Talent Community to stay up to date on our career opportunities and events at ********************
Equal Opportunity Employer / Individuals with Disabilities / Protected Veterans
$72.9k-98.6k yearly 29d ago
How much does a data scientist earn in Vancouver, WA?
The average data scientist in Vancouver, WA earns between $79,000 and $154,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.
Average data scientist salary in Vancouver, WA
$110,000
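As a back-of-envelope check (and not necessarily how the figure was derived), the quoted average sits near the geometric mean of the range endpoints rather than their simple midpoint:

geometric mean: sqrt(79,000 × 154,000) ≈ 110,300
midpoint: (79,000 + 154,000) / 2 = 116,500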
What are the biggest employers of Data Scientists in Vancouver, WA?
The biggest employers of Data Scientists in Vancouver, WA are: