Lead Data Scientist GenAI, Strategic Analytics - Data Science
Data scientist job in Portland, OR
Deloitte is at the leading edge of GenAI innovation, transforming Strategic Analytics and shaping the future of Finance. We invite applications from highly skilled and experienced Lead Data Scientists ready to drive the development of our next-generation GenAI solutions.
The Team
Strategic Analytics is a dynamic part of our Finance FP&A organization, dedicated to empowering executive leaders across the firm, as well as our partners in financial and operational functions. Our team harnesses the power of cloud computing, data science, AI, and strategic expertise, combined with deep institutional knowledge, to deliver insights that inform our most critical business decisions and fuel the firm's ongoing growth.
GenAI is at the forefront of our innovation agenda and a key strategic priority for our future. We are rapidly developing groundbreaking products and solutions poised to transform both our organization and our clients. As part of our team, the selected candidate will play a pivotal role in driving the success of these high-impact initiatives.
Recruiting for this role ends on December 14, 2025
Work You'll Do
Client Engagement & Solution Scoping
+ Partner with stakeholders to analyze business requirements, pain points, and objectives relevant to GenAI use cases.
+ Facilitate workshops to identify, prioritize, and scope impactful GenAI applications (e.g., text generation, code synthesis, conversational agents).
+ Clearly articulate GenAI's value proposition, including efficiency gains, risk mitigation, and innovation.
Solution Architecture & Design
+ Architect holistic GenAI solutions, selecting and customizing appropriate models (GPT, Llama, Claude, Zora AI, etc.).
+ Design scalable integration strategies for embedding GenAI into existing client systems (ERP, CRM, KM platforms).
+ Define and govern reliable, ethical, and compliant data sourcing and management.
Development & Customization
+ Lead model fine-tuning, prompt engineering, and customization for client-specific needs.
+ Oversee the development of GenAI-powered applications and user-friendly interfaces, ensuring robustness and exceptional user experience.
+ Drive thorough validation, testing, and iteration to ensure quality and accuracy.
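In practice, the prompt engineering and validation work described above often begins with a reusable prompt template plus cheap post-hoc output checks. A minimal, provider-agnostic sketch follows; the template wording, function names, and validation rule are illustrative assumptions, not Deloitte's actual method:

```python
import string

# Hypothetical reusable prompt template for an FP&A summarization use case.
FINANCE_SUMMARY_TEMPLATE = string.Template(
    "You are a financial analyst. Summarize the following FP&A commentary "
    "in at most $max_words words, preserving all figures exactly.\n\n"
    "Commentary:\n$commentary"
)

def build_prompt(commentary: str, max_words: int = 50) -> str:
    """Render the template with client-specific inputs."""
    return FINANCE_SUMMARY_TEMPLATE.substitute(
        commentary=commentary.strip(), max_words=max_words
    )

def validate_response(response: str, max_words: int = 50) -> bool:
    """Cheap post-hoc check before surfacing model output to users."""
    return 0 < len(response.split()) <= max_words

prompt = build_prompt("Q3 revenue grew 12% against a plan of 9%.")
print("12%" in prompt)  # True: the figure survives templating
```

Real pipelines add richer checks (figure round-tripping, toxicity and PII filters) on top of this pattern.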
Implementation, Deployment & Change Management
+ Manage solution rollout, including cloud setup, configuration, and production deployment.
+ Guide clients through adoption: deliver training, create documentation, and provide enablement resources for users.
Risk, Ethics & Compliance
+ Lead efforts in responsible AI, ensuring safeguards against bias, privacy breaches, and unethical outcomes.
+ Monitor performance, implement KPIs, and manage model retraining and auditing processes.
Stakeholder Communication
+ Prepare executive-level reports, dashboards, and demos to summarize progress and impact.
+ Coordinate across internal teams, tech partners, and clients for effective project delivery.
Continuous Improvement & Thought Leadership
+ Stay current on GenAI trends, best practices, and emerging technologies; share insights across teams.
+ Mentor junior colleagues, promote knowledge transfer, and contribute to reusable methodologies.
Qualifications
Required:
+ Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, or related field.
+ 5+ years of hands-on experience delivering machine learning or AI solutions, preferably including generative AI.
+ An independent thinker who can set the vision and execute on transforming data into high-end client products.
+ Demonstrated accomplishments in the following areas:
+ Deep understanding of GenAI models and approaches (LLMs, transformers, prompt engineering).
+ Proficiency in Python (PyTorch, TensorFlow, HuggingFace), Databricks, ML pipelines, and cloud-based deployment (Azure, AWS, GCP).
+ Experience integrating AI into enterprise applications, building APIs, and designing scalable workflows.
+ Knowledge of solution architecture, risk assessment, and mapping technology to business goals.
+ Familiarity with agile methodologies and iterative delivery.
+ Commitment to responsible AI, including data ethics, privacy, and regulatory compliance.
+ Ability to travel 0-10%, on average, based on the work you do and the clients and industries/sectors you serve.
+ Limited immigration sponsorship may be available.
Preferred:
+ Relevant Certifications: May include Google Cloud Professional ML Engineer, Microsoft Azure AI Engineer, AWS Certified Machine Learning, or specialized GenAI/LLM credentials.
+ Experience with data visualization tools such as Tableau
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Deloitte, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $102,500 - $188,900.
You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Information for applicants with a need for accommodation
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
Data Scientist/Architect
Data scientist job in Beaverton, OR
Designs, develops, and programs methods, processes, and systems to consolidate and analyze structured/unstructured, diverse "big data" sources to generate actionable insights and solutions for client services and product enhancement. Builds data products for analysis. Interacts with product and service teams to identify questions and issues for data analysis and experiments. Develops and codes software programs, algorithms, and automated processes to cleanse, integrate, and evaluate large datasets from multiple disparate sources. Identifies meaningful insights from large data and metadata sources; interprets and communicates insights and findings from analysis and experiments to product, service, and business managers.
Lead the accomplishment of key goals across consumer and commercial analytics functions. Work with key stakeholders to understand requirements, develop sustainable data solutions, and provide insights and recommendations. Document and communicate systems and analytics changes to the business, translating complex functionality into business-relevant language. Validate key performance indicators and build queries to quantitatively measure business performance. Communicate with cross-functional teams to understand the business cause of data anomalies and outliers. Develop data governance standards spanning data ingestion through product dictionaries and documentation. Develop SQL queries and data visualizations to fulfill ad-hoc analysis requests and ongoing reporting needs using standard query syntax. Organize and transform information into comprehensible structures. Use data to predict trends and perform statistical analysis. Use data mining to extract information from data sets and identify correlations and patterns. Monitor data quality and remove corrupt data. Evaluate and adopt new technologies, tools, and frameworks centered on high-volume data processing. Improve existing processes through automation and efficient workflows. Build and deliver scalable data and analytics solutions. Work independently and take initiative to identify, explore, and solve problems. Design and build innovative data and analytics solutions to support key decisions. Support standard methodologies in reporting and analysis, such as data integrity, unit testing, data quality control, system integration testing, modeling, validation, and documentation. Independently support end-to-end analysis to inform product strategy, data architecture, and reporting decisions.
Data Scientist
Data scientist job in Portland, OR
Job Description
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products that extract valuable business insights. In this role, you should be highly analytical, with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Responsibilities
Identify valuable data sources and automate collection processes
Preprocess structured and unstructured data
Analyze large amounts of information to discover trends and patterns
Build predictive models and machine-learning algorithms
Combine models through ensemble modeling
Present information using data visualization techniques
Propose solutions and strategies to business challenges
Collaborate with engineering and product development teams
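The "combine models through ensemble modeling" responsibility above can be as simple as majority voting across independent classifiers. A minimal stdlib-only sketch follows; the toy "models" are illustrative stand-ins for real trained estimators:

```python
from collections import Counter

# Three hypothetical classifiers, each mapping a feature vector to a label.
def model_a(x): return "churn" if x[0] > 0.5 else "ok"
def model_b(x): return "churn" if x[1] > 0.7 else "ok"
def model_c(x): return "churn" if sum(x) > 1.2 else "ok"

def ensemble_predict(x, models=(model_a, model_b, model_c)):
    """Hard-voting ensemble: the majority label across models wins."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

print(ensemble_predict((0.9, 0.8)))  # "churn": all three models agree
print(ensemble_predict((0.1, 0.2)))  # "ok": all three models agree
```

Production ensembles typically use soft voting over predicted probabilities, or stacking with a learned meta-model, but the aggregation idea is the same.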
Requirements and skills
Proven experience as a Data Scientist or Data Analyst
Experience in data mining
Understanding of machine learning and operations research
Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset
Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
Analytical mind and business acumen
Strong math skills (e.g. statistics, algebra)
Problem-solving aptitude
Excellent communication and presentation skills
BSc/BA in Computer Science, Engineering, or relevant field; a graduate degree in Data Science or other quantitative field is preferred
Human Performance Data Scientist II
Data scientist job in Lewisville, WA
**Req ID:** RQ210960
**Type of Requisition:** Regular
**Clearance Level Must Be Able to Obtain:** Top Secret/SCI
**Public Trust/Other Required:** None
**Job Family:** Data Science and Data Engineering
**Skills:** Business, Data Analysis, Science, Statistical Analysis, Statistics
**Experience:**
3+ years of related experience
**US Citizenship Required:**
Yes
**Job Description:**
Seize your opportunity to make a personal impact as Data Scientist II supporting mission critical work on an exciting program. GDIT is your place to make meaningful contributions to challenging projects, build your skills, and grow a rewarding career.
At GDIT, people are our differentiator. As a Human Performance Data Scientist II supporting our customer, you will help ensure today is safe and tomorrow is smarter. Our work depends on a Data Scientist II joining our team.
The Human Performance (HP) Data Scientist II supports the program by applying advanced analytical skills to optimize the readiness, resiliency, and performance of Special Operations Forces (SOF) personnel. The position focuses on human performance insights rather than general data science functions. The role works directly with SOF HP staff, with priority on SOF Operators and Direct Combat Support personnel, to evaluate physical, psychological, cognitive, and social performance indicators.
**HOW A DATA SCIENTIST II WILL MAKE AN IMPACT:**
+ The HP Data Scientist II is responsible for entering, cleaning, and analyzing HP data collected through program initiatives.
+ Working in collaboration with performance teams and the Government biostatistician, the HP Data Scientist II provides subject matter expertise in program evaluation, research methodologies, and applied performance analytics.
+ Partners with strength coaches, dietitians, athletic trainers, physical therapists, psychologists, and cognitive specialists to identify data collection opportunities that support operational readiness and force preservation.
+ Supports development of performance dashboards, readiness trends, return to duty outcomes, and longitudinal monitoring products that directly inform commanders and HP leaders.
+ Prepares reports and presentations that communicate performance trends, risk indicators, and program impact in clear and actionable formats for leadership.
+ Receives access to Government systems for the purpose of HP data entry, management, and analysis.
**WHAT YOU'LL NEED TO SUCCEED:**
**EDUCATION:** Master's or Doctoral degree in quantitative science, social science or related discipline.
+ Must have at least 3 years of research experience in academic, social services, government, healthcare or laboratory settings
+ Advanced proficiency in statistical software such as SPSS, SAS, or R, with emphasis on applied research and performance analytics.
+ Demonstrated advanced proficiency through prior work in performance research, sport science, military HP programs, or similar environments where continuous data collection and evaluation occur.
+ Proficiency is also demonstrated through a record of scientific publications or applied HP research.
+ Possesses excellent communication skills, strong organizational abilities, and at least three years of experience working in HP, government, healthcare, or research settings.
+ Proficient with the suite of Microsoft Office programs, including Word, Excel and Access.
**LOCATION:** Various CONUS SITES
**CLEARANCE:** Ability to obtain and maintain Secret or Top-Secret Clearance.
**This is a contingent posting, expected to start in 2026.**
**GDIT IS YOUR PLACE:**
+ 401K with company match
+ Comprehensive health and wellness packages
+ Internal mobility team dedicated to helping you own your career
+ Professional growth opportunities including paid education and certifications
+ Cutting-edge technology you can learn from
+ Rest and recharge with paid vacation and holidays
The likely salary range for this position is $83,927 - $113,549. This is not, however, a guarantee of compensation or salary. Rather, salary will be set based on experience, geographic location and possibly contractual requirements and could fall outside of this range.
Our benefits package for all US-based employees includes a variety of medical plan options, some with Health Savings Accounts, dental plan options, a vision plan, and a 401(k) plan offering the ability to contribute both pre and post-tax dollars up to the IRS annual limits and receive a company match. To encourage work/life balance, GDIT offers employees full flex work weeks where possible and a variety of paid time off plans, including vacation, sick and personal time, holidays, paid parental, military, bereavement and jury duty leave. GDIT typically provides new employees with 15 days of paid leave per calendar year to be used for vacations, personal business, and illness and an additional 10 paid holidays per year. Paid leave and paid holidays are prorated based on the employee's date of hire. The GDIT Paid Family Leave program provides a total of up to 160 hours of paid leave in a rolling 12 month period for eligible employees. To ensure our employees are able to protect their income, other offerings such as short and long-term disability benefits, life, accidental death and dismemberment, personal accident, critical illness and business travel and accident insurance are provided or available. We regularly review our Total Rewards package to ensure our offerings are competitive and reflect what our employees have told us they value most.
We are GDIT. A global technology and professional services company that delivers consulting, technology and mission services to every major agency across the U.S. government, defense and intelligence community. Our 30,000 experts extract the power of technology to create immediate value and deliver solutions at the edge of innovation. We operate across 50 countries worldwide, offering leading capabilities in digital modernization, AI/ML, Cloud, Cyber and application development. Together with our clients, we strive to create a safer, smarter world by harnessing the power of deep expertise and advanced technology.
Join our Talent Community to stay up to date on our career opportunities and events at ********************
Equal Opportunity Employer / Individuals with Disabilities / Protected Veterans
Senior Data Scientist - Fraud
Data scientist job in Portland, OR
Mercury is building the financial stack for startups. We're here to make banking* intuitive, powerful, and safe for entrepreneurs and businesses of all sizes. We started by imagining what the best banking platform for startups would look like, and several years later we have hundreds of thousands of customers using Mercury's products. As we continue to scale, protecting our customers and Mercury from fraud - while still ensuring the seamless customer experience we have come to be known for - is critical.
We are hiring a Data Scientist to join our Fraud and Limits team. This team is responsible for detecting, monitoring, and mitigating both first-party fraud (fraudulent applicants) and account takeover (ATO). You'll play a key role in strengthening our fraud defenses while ensuring that Mercury continues to deliver a smooth and trustworthy banking experience.
This is an opportunity to join Mercury at a pivotal moment in our growth. You'll be working on some of the most critical challenges facing the business and collaborating across product, engineering, and risk to protect our customers and the financial system at large.
Here are some things you'll do on the job:
Develop dashboards and monitoring systems to track key fraud and account health metrics.
Conduct deep-dive analyses to understand fraud trends and behaviors, and translate findings into actionable recommendations.
Build, validate, and deploy machine learning models to identify and prevent fraud in real time.
Support ad hoc investigations and ensure data quality and reliability across pipelines and tools.
Collaborate with Risk Strategy and Engineering to optimize rules, scoring systems, and other fraud defenses.
Partner with cross-functional teams (Engineering, Product, Design, Operations) to embed data insights into decision-making.
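The fraud-analysis work listed above often starts from a simple statistical baseline before any learned model is involved, for example flagging transaction amounts far outside an account's history. A stdlib-only toy sketch follows; the threshold and volume-only feature are illustrative assumptions, not Mercury's production scoring:

```python
import statistics

def flag_anomalous_amount(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount is far outside the account's history.

    A toy z-score check: real fraud systems combine many behavioral
    features, rules, and learned models, and calibrate thresholds to
    balance fraud losses against customer friction.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean
    z = (new_amount - mean) / stdev
    return abs(z) > z_threshold

history = [120.0, 95.0, 130.0, 110.0, 105.0]
print(flag_anomalous_amount(history, 115.0))   # False: typical amount
print(flag_anomalous_amount(history, 5000.0))  # True: extreme outlier
```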
You should have:
5+ years of experience working with and analyzing large datasets to solve problems and drive impact, with 1+ years of relevant domain experience.
Proficiency in SQL and experience using it to understand and manage imperfect data.
Proficiency in Python and experience with statistical modeling and machine learning.
Experience deploying and monitoring machine learning models in production.
The ability to balance high-leverage projects with foundational work such as reporting, dashboarding, and exploratory analyses.
Comfort working in a fast-paced environment with evolving priorities.
Ideally you also have:
Experience building zero-to-one solutions in ambiguous or greenfield problem spaces.
Familiarity with LLMs or other GenAI and how they can be applied to risk or fraud detection.
Experience with modern data tools for pipelines and ETL (e.g., dbt).
The total rewards package at Mercury includes base salary, equity (stock options), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate's experience, expertise, geographic location, and internal pay equity relative to peers.
Our target new hire base salary ranges for this role are the following:
US employees (any location): $166,600 - 208,300 USD
Canadian employees (any location): CAD 157,400 - 196,800
*Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC.
Mercury values diversity & belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic. We are committed to providing reasonable accommodations throughout the recruitment process for applicants with disabilities or special needs. If you need assistance, or an accommodation, please let your recruiter know once you are contacted about a role.
We use Covey as part of our hiring and / or promotional process for jobs in NYC and certain features may qualify it as an AEDT. As part of the evaluation process we provide Covey with job requirements and candidate submitted applications. We began using Covey Scout for Inbound on January 22, 2024. Please see the independent bias audit report covering our use of Covey here.
Transportation Data Scientist
Data scientist job in Portland, OR
At HDR, our employee-owners are fully engaged in creating a welcoming environment where each of us is valued and respected, a place where everyone is empowered to bring their authentic selves and novel ideas to work every day. As we foster a culture of inclusion throughout our company and within our communities, we constantly ask ourselves: What is our impact on the world?
Watch Our Story: *********************************
Each and every role throughout our organization makes a difference in our ability to change the world for the better. Read further to learn how you could help make great things possible not only in your community, but around the world.
In the role of a Transportation Data Scientist, we'll count on you to:
* Assist on traffic engineering projects and exercise sound engineering judgment
* Perform data analysis tasks, including writing SQL queries for data aggregation and visualization
* Develop dashboards summarizing a variety of data
* Develop report documents detailing analysis methodology and outcomes
* Participate in work sessions in conjunction with other staff
* Coordinate workload through entire project development, and ensure completion of tasks on schedule and within budget
* Perform other duties as needed
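The SQL aggregation and dashboarding duties above boil down to queries like "peak volume per site." A minimal sketch using Python's built-in sqlite3 follows; the table schema and traffic counts are invented for illustration, not HDR project data:

```python
import sqlite3

# In-memory database standing in for a traffic-counts table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE counts (intersection TEXT, hour INTEGER, volume INTEGER)"
)
conn.executemany(
    "INSERT INTO counts VALUES (?, ?, ?)",
    [("Main & 1st", 8, 410), ("Main & 1st", 17, 520),
     ("Oak & 2nd", 8, 150), ("Oak & 2nd", 17, 210)],
)

# A typical aggregation behind a dashboard: peak-hour volume per site.
rows = conn.execute(
    "SELECT intersection, MAX(volume) AS peak FROM counts "
    "GROUP BY intersection ORDER BY peak DESC"
).fetchall()
print(rows)  # [('Main & 1st', 520), ('Oak & 2nd', 210)]
```

The same GROUP BY pattern carries over directly to BigQuery or Databricks SQL at larger scale.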
Preferred Qualifications
* EI (Engineer Intern) certification preferred
* Experience with cloud-based data analytics platforms, such as Google BigQuery, Amazon Web Services, or Microsoft Azure/Databricks
* Experience with summarizing findings from analytical processes
* Python, Pandas scripting
* Tableau/Power BI
* SQL Query writing/Database development
* Process-oriented mindset
* Proficiency with Microsoft Office
* Strong oral and written communication skills, presentation skills and ability to work in a team environment
Required Qualifications
* Bachelor's degree in Computer Science or Management Information Systems
* A minimum of 5 years of systems analysis, applications development and support experience with business applications
* An attitude and commitment to being an active participant of our employee-owned culture is a must
What We Believe
HDR is our company. Together, we build on each other's life experiences and perspectives to make great things possible every day. This shapes our collaborative culture, encourages organizational trust and connects us closer to the clients and communities we serve.
Our Commitment
As employee owners, we all have a role in creating an inclusive environment where each of us is welcomed, valued, respected and empowered to bring our authentic selves to work every day.
Our eight Employee Network Groups (Asian Pacific, Black, Hispanic/Latino(a), LGBTQ+, People with Disabilities, Veterans, Women, Young Professionals) help create a sense of belonging and foster a supportive environment where everyone is empowered to engage and contribute. Each group has an executive sponsor and is open to all employees.
Data Scientist, Senior
Data scientist job in Lewisville, WA
Job Name: Data Scientist
Level: Senior
Remote Work: No
Required Clearance: TS/SCI
Immediate opening
RESPONSIBILITIES:
Work with large structured/unstructured data in a modeling and analytical environment to define and create streamlined processes for evaluating unique datasets and solving challenging intelligence issues
Lead and participate in the design of solutions and refinement of pre-existing processes
Work with Customer Stakeholders, Program Managers, and Product Owners to translate road map features into components/tasks, estimate timelines, identify resources, suggest solutions, and recognize possible risks
Use exploratory data analysis techniques to identify meaningful relationships, patterns, or trends from complex data
Combine applied mathematics, programming skills, analytical techniques, and data to provide impactful insights for decision makers
Research and implement optimization models, strategies, and methods to inform data management activities and analysis
Apply big data analytic tools to large, diverse sets of data to deliver impactful insights and assessments
Conduct peer reviews to improve quality of workflows, procedures, and methodologies
Help build high-performing teams; mentor team members providing development opportunities to increase their technical skills and knowledge
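The exploratory data analysis responsibilities above routinely start with simple relationship checks such as Pearson correlation between candidate variables. A self-contained sketch follows; the toy series are illustrative, not mission data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation: one of the simplest EDA checks for a
    linear relationship between two variables (range -1 to 1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy series standing in for two fields of a larger dataset.
a = [1, 2, 3, 4, 5]
b = [2, 4, 6, 8, 10]
print(round(pearson(a, b), 3))  # 1.0: perfectly linear relationship
```

Correlation only surfaces linear pairwise structure; the multi-INT analysis described above layers graph, geospatial, and temporal methods on top of checks like this.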
REQUIRED QUALIFICATIONS:
Requires TS/SCI Clearance with the ability to obtain a CI/Poly.
10+ years of relevant experience. (A combination of years of experience & professional certifications/trainings can be used in lieu of years of experience)
Experience supporting IC operations
Possess expert-level knowledge to manipulate and analyze structured/unstructured data
Demonstrated experience in data mining and developing/maintaining/manipulating databases
Demonstrated experience in identifying potential systems enhancements, new capabilities, concept demonstrators, and capability business cases
Demonstrated experience using GOTS data processing and analytics capabilities to modernize analytic methodologies
Demonstrated experience using COTS statistical software (MapLarge, Tableau, MATLAB) for advanced statistical analysis of operational tools and for data visualization that enables large datasets to be interrogated, revealing patterns, relationships, and anticipatory behavioral likelihoods that may not be apparent using traditional single-discipline means
Knowledge of advanced analytic methodologies, and experience in implementing and executing those methodologies to enable customer satisfaction
Demonstrated experience in directing activities of highly skilled technical and analytical teams responsible for developing solutions to highly complex analytical/intelligence problems
Experienced in conducting multi-INT and technology specific research to support mission operations
Possess effective communications skills; capable of providing highly detailed information in an easy-to-understand format
DESIRED QUALIFICATIONS:
Possess Master's degree in Data Science or related technical field
Experience developing and working with Artificial Intelligence and Machine Learning (AI/ML)
Demonstrated experience with advanced programming techniques, using one or more of the following: HTML5/JavaScript, ArcObjects, Python, Model Builder, Oracle, SQL, GIScience, Geospatial Analysis, Statistics, ArcGIS Desktop, ArcGIS Server, ArcSDE, ArcIMS.
Experience using .NET, Python, C++, and/or Java programming for web interface development and geodatabase development.
Experience building and maintaining databases of GEOINT, SIGINT, or OSINT data related to the area of interest needs.
Data Visualization Experience which may include Matrix Analytics, Network Analytics, Graphing Data that assist the analytical workforce in generating common operational pictures depicting fused intelligence and information to support informal assessments and finished products
Chopine Analytic Solutions is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy and gender identity), national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, any other non-merit factor, or any other characteristic protected by law.
Semantic Data Engineer
Data scientist job in Portland, OR
SEMANTIC DATA ENGINEER (HEALTHCARE) Hybrid - within Oregon, Washington, Idaho or Utah
Build a career with purpose. Join our Cause to create a person-focused and economically sustainable health care system.
Who We Are Looking For:
Every day, Cambia's Data & Analytics Engineering Team is living our mission to make health care easier and lives better. We're seeking a skilled Data and Analytics Engineer with significant experience engineering semantic layers to design, implement, expand, and enhance our existing semantic layer within our Snowflake data platform, supporting AI-driven semantic intelligence and BI for our health insurance payer organization. The role centers on creating a robust, scalable semantic framework that enhances data discoverability, interoperability, and usability for AI and BI tools, enabling advanced analytics, predictive modeling, and actionable insights. It focuses on implementing and optimizing semantic data models, ensuring seamless integration with AI workflows, and supporting advanced analytics initiatives, all in service of making our members' health journeys easier.
If you're a motivated and experienced Semantic Engineer looking to make a difference in the healthcare industry, apply for this exciting opportunity today!
What You Bring to Cambia:
Qualifications and Certifications:
Bachelor's degree in computer science, Mathematics, Business Administration, Engineering, or a related field
3+ years of relevant experience in a multi-platform environment, including but not limited to application development or database development
At least 1 year working with Snowflake or similar cloud data platforms
Or an equivalent combination of education and experience
What You Will Do at Cambia (Not limited to):
Implement Enterprise Semantic Models: Build and maintain semantic data models on Snowflake based on specifications from the Semantic Data Architect and Data Product Owner, ensuring alignment with business, analysis, and AI requirements.
Data Pipeline Development: When necessary, develop and optimize ETL/ELT pipelines to populate the semantic layer, integrating data from diverse sources (e.g., claims, member data, third-party feeds) using Snowflake's capabilities.
Analytics and AI Integration: Enable analytics and AI workflows by preparing and transforming data in the semantic layer for use in predictive models, natural language processing, and other analytics and AI applications.
Performance Tuning: Optimize Snowflake queries and data structures (e.g., tables, views, materialized views) to ensure high performance for semantic data access.
Data Quality and Validation: Implement data quality checks and validation processes to ensure the accuracy and reliability of the semantic layer.
Collaboration: Work with data product owners, business analysts, the semantic data architect, data modelers, and data engineers to create and refine data models and troubleshoot issues in production environments.
Automation and Monitoring: Automate semantic layer maintenance tasks and set up monitoring to ensure system reliability and performance.
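As an illustration of the data quality and validation responsibility above, here is a minimal Python sketch of rule-based row validation; the field names (member_id, claim_amount, service_date) and rules are hypothetical, not Cambia's actual checks:

```python
from datetime import date

# Hypothetical validation rules for rows landing in a semantic-layer claims view.
# The fields and thresholds are illustrative only.
RULES = {
    "member_id": lambda v: isinstance(v, str) and len(v) > 0,
    "claim_amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "service_date": lambda v: isinstance(v, date) and v <= date.today(),
}

def validate_rows(rows):
    """Split rows into valid rows and (row, failed_fields) rejections."""
    valid, rejected = [], []
    for row in rows:
        failures = [f for f, check in RULES.items() if not check(row.get(f))]
        if failures:
            rejected.append((row, failures))
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"member_id": "M001", "claim_amount": 125.50, "service_date": date(2024, 3, 1)},
    {"member_id": "", "claim_amount": -10, "service_date": date(2024, 3, 2)},
]
good, bad = validate_rows(rows)
print(len(good), len(bad))  # 1 valid row, 1 rejected (empty member_id, negative amount)
```

In a production semantic layer these checks would typically run as SQL assertions or dbt tests against Snowflake rather than in application code; the pattern of separating valid rows from quarantined rows with recorded failure reasons is the same.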
Skills and Attributes (Not limited to):
Proficiency in SQL, Python, or other scripting languages for data processing and pipeline development.
Experience using code repositories such as GitLab or GitHub and CI/CD-based deployment.
Experience with semantic technologies, including Snowflake semantic views, MicroStrategy, AtScale, or Business Objects universes, and familiarity with healthcare data standards and ontologies (e.g., FHIR, HL7, ICD-10, SNOMED, LOINC).
Strong understanding of analytics workflows and their data requirements.
Experience with data governance, metadata management, and compliance in healthcare.
Strong problem-solving skills and experience with data pipeline tools (e.g., dbt, Snowflake's OpenFlow, Airflow).
Knowledge of healthcare regulations (e.g., HIPAA) and data security best practices.
Preferred: Experience with Snowflake features like Streams, Tasks, or data sharing; familiarity with cloud platforms (AWS, Azure, or GCP).
Experience in dimensional data modeling.
Excellent communication skills to bridge technical and business teams.
The expected hiring range for the Semantic Data Engineer is $115k-$135k, depending on skills, experience, education, and training; relevant licensure/certifications; performance history; and work location. The bonus target for the Semantic Data Engineer is 15%. The current full salary range for the position is $104k (low) / $130k (MRP) / $169k (high).
About Cambia
Working at Cambia means being part of a purpose-driven, award-winning culture built on trust and innovation anchored in our 100+ year history. Our caring and supportive colleagues are some of the best and brightest in the industry, innovating together toward sustainable, person-focused health care. Whether we're helping members, lending a hand to a colleague or volunteering in our communities, our compassion, empathy and team spirit always shine through.
Why Join the Cambia Team?
At Cambia, you can:
Work alongside diverse teams building cutting-edge solutions to transform health care.
Earn a competitive salary and enjoy generous benefits while doing work that changes lives.
Grow your career with a company committed to helping you succeed.
Give back to your community by participating in Cambia-supported outreach programs.
Connect with colleagues who share similar interests and backgrounds through our employee resource groups.
We believe a career at Cambia is more than just a paycheck - and your compensation should be too. Our compensation package includes competitive base pay as well as a market-leading 401(k) with a significant company match, bonus opportunities and more.
In exchange for helping members live healthy lives, we offer benefits that empower you to do the same. Just a few highlights include:
Medical, dental and vision coverage for employees and their eligible family members, including mental health benefits.
Annual employer contribution to a health savings account.
Generous paid time off varying by role and tenure in addition to 10 company-paid holidays.
Market-leading retirement plan including a company match on employee 401(k) contributions, with a potential discretionary contribution based on company performance (no vesting period).
Up to 12 weeks of paid parental time off (eligibility requires 12 months of continuous service with Cambia immediately preceding leave).
Award-winning wellness programs that reward you for participation.
Employee Assistance Fund for those in need.
Commute and parking benefits.
Learn more about our benefits.
We are happy to offer work from home options for most of our roles. To take advantage of this flexible option, we require employees to have a wired internet connection that is not satellite or cellular, with a minimum upload speed of 5 Mbps and a minimum download speed of 10 Mbps.
We are an Equal Opportunity employer dedicated to a drug and tobacco-free workplace. All qualified applicants will receive consideration for employment without regard to race, color, national origin, religion, age, sex, sexual orientation, gender identity, disability, protected veteran status or any other status protected by law. A background check is required.
If you need accommodation for any part of the application process because of a medical condition or disability, please email ******************************. Information about how Cambia Health Solutions collects, uses, and discloses information is available in our Privacy Policy.
Principal Data Engineer
Data scientist job in Portland, OR
**Job Requisition ID #** 25WD90545
We are seeking a Principal Data Engineer to provide technical leadership in designing, building, and scaling data infrastructure that powers machine learning, personalization, and search experiences. You will architect scalable production data pipelines, drive technical strategy, and mentor engineering teams while partnering with Machine Learning Engineering, Platform Engineering, and Data Science.
Your work will be critical to strategic initiatives including optimization of digital conversion metrics, development of Autodesk Assistant (an LLM-driven chatbot), RAG (Retrieval-Augmented Generation) systems, eCommerce personalization engines, and intelligent search capabilities. As a Principal Engineer, you will set technical direction, establish best practices, and drive innovation across the data engineering organization.
Our team culture is built on collaboration, mutual support, and continuous learning. We emphasize an agile, hands-on, and technical approach at all levels of the team.
**Key Responsibilities**
+ Define and drive technical architecture for data platforms supporting ML and personalization at scale.
+ Design, build, and maintain highly scalable, low-latency data pipelines supporting real-time ML inference and personalization.
+ Build data pipelines for RAG systems, including vector embeddings and semantic search.
+ Design real-time feature engineering and event processing systems for eCommerce personalization and recommendation engines.
+ Develop sophisticated data models optimized for ML training, real-time inference, and analytics workloads.
+ Implement complex stream processing architectures using technologies like Kafka and Flink.
+ Oversee and optimize database systems (SQL, NoSQL, vector databases) ensuring high performance and scalability.
+ Establish data engineering standards, patterns, and best practices across the organization.
+ Provide technical leadership, guidance, and mentorship to senior and mid-level data engineers.
+ Lead cross-functional teams to deliver complex data engineering solutions in production.
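To make the RAG responsibility above concrete, here is a minimal retrieval sketch: rank documents by cosine similarity between a query vector and stored embeddings. In a real pipeline the vectors come from an embedding model and live in a vector database (e.g., Pinecone or Weaviate); the documents and three-dimensional vectors below are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector index": document -> embedding. Real embeddings have hundreds
# of dimensions and are produced by a model, not written by hand.
index = {
    "How to reset a license": [0.9, 0.1, 0.0],
    "Exporting to DWG format": [0.1, 0.8, 0.2],
    "Keyboard shortcuts list": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the top-k documents by cosine similarity to the query vector."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve([0.85, 0.15, 0.05]))  # ['How to reset a license']
```

The retrieved documents would then be injected into the LLM prompt as grounding context; the data-engineering work in this role is keeping that index fresh, deduplicated, and low-latency at scale.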
**Minimum Qualifications**
+ **8+ years** of data engineering experience, with at least **3 years** in a senior or lead capacity.
+ Expert-level proficiency in Python, Java, or Scala
+ Deep knowledge of SQL and extensive experience with relational and NoSQL databases
+ Strong expertise in big data technologies (Kafka, Flink, Spark, Parquet, Iceberg, Delta Lake)
+ Advanced experience with ETL orchestration tools like Apache Airflow
+ Extensive experience with cloud platforms (AWS, Azure, or GCP) and their data services
+ Deep understanding of data warehousing solutions like Snowflake, Redshift, or BigQuery
+ Proven expertise in data modeling, data architecture design, and ETL/ELT processes at scale
+ Demonstrated ability to lead technical initiatives and mentor engineering talent
+ Strong communication skills with ability to explain complex technical concepts to diverse audiences.
+ Bachelor's degree in Computer Science or a related field (Master's/PhD strongly preferred)
**Preferred Skills & Experience**
+ Experience building data pipelines for Retrieval-Augmented Generation, vector databases (Pinecone, Weaviate, Milvus), and semantic search
+ Experience with real-time personalization engines, recommendation systems, feature stores, and A/B testing infrastructure
+ Experience building ML data pipelines, feature engineering systems, and model serving infrastructure
+ Knowledge of search technologies (Elasticsearch, Solr, OpenSearch), relevance tuning, and search analytics
+ Deep experience with event streaming, behavioral analytics, and customer data platforms
+ Experience with A/B testing infrastructure and metrics computation at scale
+ Knowledge of MLOps, model monitoring, and ML pipeline orchestration
+ Knowledge of eCommerce metrics, conversion optimization, and digital analytics
**Learn More**
**About Autodesk**
Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made.
We take great pride in our culture here at Autodesk - it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world.
When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!
**Benefits**
From health and financial benefits to time away and everyday wellness, we give Autodeskers the best, so they can do their best work. Learn more about our benefits in the U.S. by visiting ******************************
**Salary transparency**
Salary is one part of Autodesk's competitive compensation package. For U.S.-based roles, we expect a starting base salary between $130,600 and $211,200. Offers are based on the candidate's experience and geographic location, and may exceed this range. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.
**Equal Employment Opportunity**
At Autodesk, we're building a diverse workplace and an inclusive culture to give more people the chance to imagine, design, and make a better world. Autodesk is proud to be an equal opportunity employer and considers all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender, gender identity, national origin, disability, veteran status or any other legally protected characteristic. We also consider for employment all qualified applicants regardless of criminal histories, consistent with applicable law.
**Diversity & Belonging**
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: ********************************************************
**Are you an existing contractor or consultant with Autodesk?**
Please search for open jobs and apply internally (not on this external site).
Senior Data Science & Engineer
Data scientist job in Portland, OR
Overview
Microsoft Cloud Hardware Infrastructure Engineering (CHIE) is the team behind Microsoft's expanding cloud infrastructure and is responsible for powering Microsoft's "Intelligent Cloud" mission. CHIE delivers the core infrastructure and foundational technologies for Microsoft's more than 200 online businesses (including Bing, MSN, Office 365, Xbox Live, Skype, OneDrive, and the Microsoft Azure platform) globally, with our server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. Our focus is on smart growth, high efficiency, and delivering a trusted experience to customers and partners worldwide, and we are looking for passionate, high-energy engineers to help achieve that mission.
The Cloud Hardware Analytics & Tools (CHAT) Team within SCHIE develops advanced analytical and tooling solutions to support and improve the quality of Microsoft Azure. We collect, manage, and analyze data across the full Azure stack: HW and SW components, silicon development processes, and much more. CHAT Data Analytics & Engineering, a branch of CHAT, provides data support, machine learning and artificial intelligence solutions, and advanced visualizations to cloud engineering and management partners.
We are looking for a Data Engineer and Data Science expert to support our custom data analytics and visualizations platform, as well as to build out and enable AI solutions and analytics across our division. This high-impact role combines data engineering with data science and will contribute directly to Azure-level projects, including working with highly visible platforms through data-driven approaches. The role collaborates closely with other Data Engineers, Software Engineers, Data Scientists, Program Managers, and HW Engineering teams. We further use that data to power AI solutions across the SCHIE org designed to improve efficiency, quality, and innovation.
If you're an experienced developer and share our passion, please consider applying. #azurehwjobs #SCHIE
Responsibilities
* Identify opportunities for custom and existing AI solutions that will provide business impact across our division.
* Design and develop data solutions to enable complex AI and analytical solutions.
* Understand and stay connected to industry AI solutions.
* Work in a collaborative environment.
Qualifications
Required/Minimum Qualifications
* Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or a related field AND 1+ year(s) of data-science experience (e.g., managing structured and unstructured data, applying statistical techniques, and reporting results); OR
* Master's Degree in one of the fields above AND 3+ years of data-science experience; OR
* Bachelor's Degree in one of the fields above AND 5+ years of data-science experience; OR
* Equivalent experience.
* 2+ years of customer-facing, project-delivery, professional services, and/or consulting experience.
Other Requirements:
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
* Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications
* Doctorate in one of the fields above AND 3+ years of data-science experience; OR
* Master's Degree in one of the fields above AND 5+ years of data-science experience; OR
* Bachelor's Degree in one of the fields above AND 7+ years of data-science experience; OR
* Equivalent experience.
Data Science IC4 - The typical base pay range for this role across the U.S. is USD $119,800 - $234,700 per year. A different range applies to work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $158,400 - $258,000 per year. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: **************************************************** This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances.
If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Databricks Data Engineer - Manager - Consulting - Location Open
Data scientist job in Portland, OR
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
**Technology - Data and Decision Science - Data Engineering - Manager**
We are looking for a dynamic and experienced Manager of Data Engineering to lead our team in designing and implementing complex cloud analytics solutions with a strong focus on Databricks. The ideal candidate will possess deep technical expertise in data architecture, cloud technologies, and analytics, along with exceptional leadership and client management skills.
**The opportunity:**
In this role, you will design and build analytics solutions that deliver significant business value. You will collaborate with other data and analytics professionals, management, and stakeholders to ensure that business requirements are translated into effective technical solutions. Key responsibilities include:
+ Understanding and analyzing business requirements to translate them into technical requirements.
+ Designing, building, and operating scalable data architecture and modeling solutions.
+ Staying up to date with the latest trends and emerging technologies to maintain a competitive edge.
**Key Responsibilities:**
As a Data Engineering Manager, you will play a crucial role in managing and delivering complex technical initiatives. Your time will be spent across various responsibilities, including:
+ Leading workstream delivery and ensuring quality in all processes.
+ Engaging with clients on a daily basis, actively participating in working sessions, and identifying opportunities for additional services.
+ Implementing resource plans and budgets while managing engagement economics.
This role offers the opportunity to work in a dynamic environment where you will face challenges that require innovative solutions. You will learn and grow as you guide others and interpret internal and external issues to recommend quality solutions. Travel may be required regularly based on client needs.
**Skills and attributes for success:**
To thrive in this role, you should possess a blend of technical and interpersonal skills. The following attributes will make a significant impact:
+ Lead the design and development of scalable data engineering solutions using Databricks on cloud platforms (e.g., AWS, Azure, GCP).
+ Oversee the architecture of complex cloud analytics solutions, ensuring alignment with business objectives and best practices.
+ Manage and mentor a team of data engineers, fostering a culture of innovation, collaboration, and continuous improvement.
+ Collaborate with clients to understand their analytics needs and deliver tailored solutions that drive business value.
+ Ensure the quality, integrity, and security of data throughout the data lifecycle, implementing best practices in data governance.
+ Drive end-to-end data pipeline development, including data ingestion, transformation, and storage, leveraging Databricks and other cloud services.
+ Communicate effectively with stakeholders, including technical and non-technical audiences, to convey complex data concepts and project progress.
+ Manage client relationships and expectations, ensuring high levels of satisfaction and engagement.
+ Stay abreast of the latest trends and technologies in data engineering, cloud computing, and analytics.
+ Strong analytical and problem-solving abilities.
+ Excellent communication skills, with the ability to convey complex information clearly.
+ Proven experience in managing and delivering projects effectively.
+ Ability to build and manage relationships with clients and stakeholders.
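The end-to-end pipeline responsibility above (ingestion, transformation, storage) can be sketched in miniature. In practice this would be PySpark DataFrames on Databricks; the records, field names, and cleansing rule below are invented for illustration:

```python
# Toy end-to-end pipeline: ingest -> transform (with quarantine) -> load.
raw = [
    {"order_id": 1, "region": "west", "amount": "120.00"},
    {"order_id": 2, "region": "east", "amount": "80.50"},
    {"order_id": 3, "region": "west", "amount": "bad"},  # malformed, should be quarantined
]

def transform(records):
    """Cast amounts to float; route unparseable records to a quarantine list."""
    clean, quarantine = [], []
    for r in records:
        try:
            clean.append({**r, "amount": float(r["amount"])})
        except ValueError:
            quarantine.append(r)
    return clean, quarantine

def load(clean):
    """Aggregate revenue by region (stand-in for writing a curated table)."""
    totals = {}
    for r in clean:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

clean, quarantine = transform(raw)
print(load(clean), len(quarantine))  # {'west': 120.0, 'east': 80.5} 1
```

The quarantine path matters for the data-governance responsibilities listed above: bad records are retained and reportable rather than silently dropped.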
**To qualify for the role, you must have:**
+ Bachelor's degree in Computer Science, Engineering, or a related field required; Master's degree preferred.
+ Typically no less than 4-6 years of relevant experience in data engineering, with a focus on cloud data solutions and analytics.
+ Proven expertise in Databricks and experience with Spark for big data processing.
+ Strong background in data architecture and design, with experience in building complex cloud analytics solutions.
+ Experience in leading and managing teams, with a focus on mentoring and developing talent.
+ Strong programming skills in languages such as Python, Scala, or SQL.
+ Excellent problem-solving skills and the ability to work independently and as part of a team.
+ Strong communication and interpersonal skills, with a focus on client management.
**Required Expertise for Managerial Role:**
+ **Strategic Leadership:** Ability to align data engineering initiatives with organizational goals and drive strategic vision.
+ **Project Management:** Experience in managing multiple projects and teams, ensuring timely delivery and adherence to project scope.
+ **Stakeholder Engagement:** Proficiency in engaging with various stakeholders, including executives, to understand their needs and present solutions effectively.
+ **Change Management:** Skills in guiding clients through change processes related to data transformation and technology adoption.
+ **Risk Management:** Ability to identify potential risks in data projects and develop mitigation strategies.
+ **Technical Leadership:** Experience in leading technical discussions and making architectural decisions that impact project outcomes.
+ **Documentation and Reporting:** Proficiency in creating comprehensive documentation and reports to communicate project progress and outcomes to clients.
**Large-Scale Implementation Programs:**
1. **Enterprise Data Lake Implementation:** Led the design and deployment of a cloud-based data lake solution for a Fortune 500 retail client, integrating data from multiple sources (e.g., ERPs, POS systems, e-commerce platforms) to enable advanced analytics and reporting capabilities.
2. **Real-Time Analytics Platform:** Managed the development of a real-time analytics platform using Databricks for a financial services organization, enabling real-time fraud detection and risk assessment through streaming data ingestion and processing.
3. **Data Warehouse Modernization:** Oversaw the modernization of a legacy data warehouse to a cloud-native architecture for a healthcare provider, implementing ETL processes with Databricks and improving data accessibility for analytics and reporting.
**Ideally, you'll also have:**
+ Experience with advanced data analytics tools and techniques.
+ Familiarity with machine learning concepts and applications.
+ Knowledge of industry trends and best practices in data engineering.
+ Familiarity with cloud platforms (AWS, Azure, GCP) and their data services.
+ Knowledge of data governance and compliance standards.
+ Experience with machine learning frameworks and tools.
**What we look for:**
We seek individuals who are not only technically proficient but also possess the qualities of top performers, including a strong sense of collaboration, adaptability, and a passion for continuous learning. If you are driven by results and have a desire to make a meaningful impact, we want to hear from you.
FY26NATAID
**What we offer you**
At EY, we'll develop you with future-focused skills and equip you with world-class experiences. We'll empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams. Learn more.
+ We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. The base salary range for this job in all geographic locations in the US is $125,500 to $230,200. The base salary range for New York City Metro Area, Washington State and California (excluding Sacramento) is $150,700 to $261,600. Individual salaries within those ranges are determined through a wide variety of factors including but not limited to education, experience, knowledge, skills and geography. In addition, our Total Rewards package includes medical and dental coverage, pension and 401(k) plans, and a wide range of paid time off options.
+ Join us in our team-led and leader-enabled hybrid model. Our expectation is for most people in external, client serving roles to work together in person 40-60% of the time over the course of an engagement, project or year.
+ Under our flexible vacation policy, you'll decide how much vacation time you need based on your own personal circumstances. You'll also be granted time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.
**Are you ready to shape your future with confidence? Apply today.**
EY accepts applications for this position on an on-going basis.
For those living in California, please click here for additional information.
EY focuses on high-ethical standards and integrity among its employees and expects all candidates to demonstrate these qualities.
**EY | Building a better working world**
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, genetic information, national origin, protected veteran status, disability status, or any other legally protected basis, including arrest and conviction records, in accordance with applicable law.
EY is committed to providing reasonable accommodation to qualified individuals with disabilities including veterans with disabilities. If you have a disability and either need assistance applying online or need to request an accommodation during any part of the application process, please call 1-800-EY-HELP3, select Option 2 for candidate related inquiries, then select Option 1 for candidate queries and finally select Option 2 for candidates with an inquiry which will route you to EY's Talent Shared Services Team (TSS) or email the TSS at ************************** .
Data Engineer
Data scientist job in Beaverton, OR
Data Engineer - Nike Inc. - Beaverton, OR.
+ Design and implement features in collaboration with product owners, data analysts, and business partners using Agile/Scrum methodology;
+ Contribute to overall architecture, frameworks, and patterns for processing and storing large data volumes;
+ Design and implement distributed data processing pipelines using tools and languages prevalent in the Hadoop or Cloud ecosystems;
+ Build utilities, user-defined functions, and frameworks to better enable data flow patterns;
+ Build and develop job orchestration and scheduling using Airflow;
+ Research, evaluate, and utilize new technologies/tools/frameworks centered around high-volume data processing;
+ Define and apply appropriate data acquisition and consumption strategies for given technical scenarios;
+ Build and incorporate automated unit tests and participate in integration testing efforts;
+ Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to; and
+ Work across teams to resolve operational and performance issues.
Telecommuting is available from anywhere in the U.S., except from AK, AL, AR, DE, HI, IA, ID, IN, KS, KY, LA, MT, ND, NE, NH, NM, NV, OH, OK, RI, SD, VT, WV, and WY.
Must have a Master's Degree in Computer Science, Engineering, Computer Information Systems, Electronics and Communications, or Technology and 2 years of experience in the job offered or a data engineering related occupation.
Experience must include:
+ Programming languages such as Python, Java, and Scala;
+ Big Data Frameworks such as Hadoop, Hive, Spark, and Databricks;
+ ETL Tools such as Informatica and PLSQL;
+ Scripting such as Unix, and PowerShell;
+ Databases such as Oracle, MySQL, SQL Server, Teradata, and Snowflake;
+ Cloud Technologies such as AWS, Azure Cloud, EC2, S3, Azure Blob, API Gateway, Aurora, RDS, ElastiCache, and Spark Streaming;
+ Analytics Tools, such as Tableau and Azure Analysis Services;
+ Agile Teams;
+ Source Control tools such as GitHub and related dev processes; and
+ Airflow
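Airflow, listed above, expresses a job as a DAG of dependent tasks and runs them in dependency order. This pure-Python sketch shows the underlying idea with the stdlib's `graphlib` rather than Airflow's own API; the task names are hypothetical:

```python
from graphlib import TopologicalSorter

# Map each task to the set of tasks it depends on, the same shape an
# Airflow DAG encodes with operators and >> chaining.
deps = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields tasks so every task runs after its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'validate', 'transform', 'load', 'report']
```

A real Airflow DAG adds scheduling, retries, and backfills on top of this ordering; the dependency-resolution core is the same topological sort.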
**Apply at** ********************** (Job# R-76207)
\#LI-DNI
We offer a number of accommodations to complete our interview process including screen readers, sign language interpreters, accessible and single location for in-person interviews, closed captioning, and other reasonable modifications as needed. If you discover, as you navigate our application process, that you need assistance or an accommodation due to a disability, please complete the Candidate Accommodation Request Form (******************************************************************* .
NIKE, Inc. is committed to employing a diverse workforce. Qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, gender expression, protected veteran status, or disability. NIKE is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the employment process, please call *************** and let us know the nature of your request, your location and your contact information.
Data Engineer Visualization and Solutions
Data scientist job in Portland, OR
Inside the Role
The TT/S Process, Methods, Tools, and Operations team is seeking a specialist with hands-on experience in software development pipeline metrics, complex ETL workflows, and cross-functional data interpretation. This role requires someone who has previously worked with TT/S's internal data systems, understands existing metric definitions, and can independently maintain and enhance our current visualization ecosystem.
This group is part of the Process, Methods, Tools, and Operations (PMTO) department within TT/S. Established in 2021, TT/S is a global software and electronics group that is responsible for all global SW & EE development to deliver world-class software and features. TT/S is a global organization with over 1,400 people across the US, Germany, and India. This position is located in Portland and reports to the local PMTO manager.
Posting Information
We provide a scheduled posting end date to assist our candidates with their application planning. While this date reflects our latest plans, it is subject to change, and postings may be extended or removed earlier than expected.
We Take Care of Our Team
Position offers a starting salary range of $71,000 - $91,000 USD
Pay offered dependent on knowledge, skills, and experience.
Benefits include annual bonus program; 401k company contribution with company match up to 6% as well as non-elective company contribution of 3 - 7% depending on age; starting at 4 weeks paid vacation; 13+ calendar holidays; 8 weeks paid parental leave; employee assistance program; comprehensive healthcare plans and wellness programs; onsite fitness (at some locations); tuition assistance and volunteer paid time off; short-term and long-term disability plans.
What You Drive At DTNA
1. Software Development Metrics Expertise
· Build and maintain data pipelines specifically for software development lifecycle (SDLC) metrics, including backlog flow, cycle time, throughput, defect metrics, and team performance indicators.
· Apply knowledge of DTNA's current SDLC tooling, workflows, and metadata structures to ensure metric accuracy and consistency.
2. Advanced Visualization & Reporting
· Design dashboard suites used by TT/S leadership, ensuring continuity with existing visual standards, business rules, and naming conventions.
· Maintain and extend existing Tableau/Power BI dashboards that are already deployed to internal teams.
3. Tools & Technologies
· Utilize:
o SAP HANA + custom SQL tuned for large-scale metric calculations
o Alteryx for pipeline automation
o AQT for troubleshooting and validating existing data models
o Python scripting for metric calculation, anomaly detection, and reproducibility
· Maintain and optimize existing ETL workflows built for current DTNA pipeline metrics.
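As a minimal, self-contained illustration of the Python metric-calculation and anomaly-detection work described above (the work items, field names, and z-score cutoff are hypothetical, not DTNA's actual schema or method):

```python
from datetime import date
from statistics import mean, stdev

# Hypothetical completed work items; field names are illustrative only.
items = [
    {"id": "T-1", "started": date(2024, 1, 2), "done": date(2024, 1, 5)},
    {"id": "T-2", "started": date(2024, 1, 3), "done": date(2024, 1, 6)},
    {"id": "T-3", "started": date(2024, 1, 4), "done": date(2024, 1, 8)},
    {"id": "T-4", "started": date(2024, 1, 5), "done": date(2024, 1, 25)},
]

def cycle_times(items):
    """Cycle time in days for each completed work item."""
    return {it["id"]: (it["done"] - it["started"]).days for it in items}

def anomalies(times, z_cutoff=1.4):
    """Flag items whose cycle time is far from the mean (simple z-score)."""
    values = list(times.values())
    mu, sigma = mean(values), stdev(values)
    return [k for k, v in times.items() if abs(v - mu) / sigma > z_cutoff]

ct = cycle_times(items)
flagged = anomalies(ct)
```

A production pipeline would pull these items from the SDLC tooling rather than a literal list, but the metric and the check have the same shape.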
4. Data Integration & ETL Ownership
· Independently manage ETL processes already in production, ensuring stability and accuracy.
· Integrate with established APIs, internal databases, and SDLC tools (such as Jira or internal equivalents).
5. User Experience & Adoption
· Work directly with business owners who rely on the current dashboards and pipelines, incorporating feedback that maintains continuity in design and workflow.
· Ensure dashboards remain intuitive for existing stakeholders and align with their current methods of data consumption.
6. Collaboration & Stakeholder Alignment
· Work closely with DTNA's global data owners and engineering leadership, leveraging established relationships and existing knowledge of team structures.
· Translate metric definitions and data issues into actionable solutions with minimal onboarding.
7. Predictive Analytics & Trend Modeling
· Apply domain knowledge of DTNA's historical metric patterns to create forecasting or anomaly-detection models.
· Use prior experience with DTNA data to identify realistic trends and noise.
Knowledge You Should Bring
Bachelor's or Master's in Data Science, Computer Science, Information Systems, or a related field.
0-2 years of related experience
Demonstrated prior experience working with DTNA data environments, SDLC metrics, or equivalent enterprise-scale engineering metrics programs.
Proven record of building software development metric dashboards using Tableau or Power BI.
Proficiency with SAP HANA SQL, Alteryx, and Python for ETL and metric calculations.
Strong communication skills for interacting with engineering leadership and cross-functional teams.
Experience maintaining existing dashboards, ETL flows, and metric definitions in a production environment.
Exceptional Candidates Might Have
Direct prior experience supporting TT/S or similar internal groups.
Familiarity with established DTNA naming conventions, metric definitions, and internal data sources.
Ability to work autonomously with minimal onboarding due to prior exposure to DTNA/DT/TTS systems.
Experience collaborating with global data owners across DT
Network within TT/S, locally and/or globally.
Exposure to senior software developers and Agile coaching methods
Where We Work
This position is open to applicants who can work in (or relocate to) the following location(s): Portland, OR, US. Relocation assistance is not available for this position.
Schedule Type:
Hybrid (4 days per week in-office / 1 day remote). This schedule builds our #OneTeamBestTeam culture, provides an unparalleled customer experience, and creates innovative solutions through in-person collaboration.
At Daimler Truck North America, we recognize our world is changing faster than ever before. By listening to the needs of today, we're building to solve with cutting-edge solutions in sustainability and future driving technology across electric, hydrogen and autonomous. These solutions, backed by years of innovative success and achievement, continue DTNA's legacy as the undisputed industry leader. Our evolving brand portfolio is second to none, including Freightliner Trucks, Western Star, Demand Detroit, Thomas Built Buses, Freightliner Custom Chassis, and Financial Services. Together, we work as one team towards our envisioned future - building a cleaner, safer and more efficient tomorrow for all.
That is what we are working toward - for all who keep the world moving.
Additional Information
This position is not open for Visa sponsorship or to existing Visa holders
Applicants must be legally authorized to work permanently in the country the position is located in at the time of application
Final candidate must successfully complete a criminal background check
Final candidate may be required to successfully complete a pre-employment drug screen
Contractors, professional services, or other contingent workers should confirm with their local agency if they are eligible to apply for FTE positions
EEO - Disabled/Veterans
Daimler Truck North America is committed to workforce inclusion and providing an environment where equal employment opportunities are available to all applicants and employees without regard to race, color, sex (including pregnancy), religion, national origin, age, marital status, family relationship, disability, sexual orientation, gender identity and expression (including transgender and transitioning status), genetic information, or veteran status.
For an accommodation or special assistance with applying for a posted position, please contact our Human Resources department at ************ or toll free ************. For TTY/TDD enabled call ************ or toll free ************.
Sr. Data Engineer
Data scientist job in Portland, OR
Job Description
Title : Sr. Data Engineer
Duration: 12 Months+
Roles & Responsibilities
Perform data analysis according to business needs
Translate functional business requirements into high-level and low-level technical designs
Design and implement distributed data processing pipelines using Apache Spark, Apache Hive, Python, and other tools and languages prevalent in a modern analytics platform
Create and schedule workflows using Apache Airflow or similar job orchestration tooling
Build utilities, functions, and frameworks to better enable high-volume data processing
Define and build data acquisitions and consumption strategies
Build and incorporate automated unit tests, participate in integration testing efforts
Work with teams to resolve operational & performance issues
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and followed.
Tech Stack
Apache Spark
Apache Spark Streaming using Apache Kafka
Apache Hive
Apache Airflow
Python
AWS EMR and S3
Snowflake
SQL
Other Tools & Technologies: PyCharm, Jenkins, GitHub.
Apache Nifi (Optional)
Scala (Optional)
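As a toy illustration of the extract/transform/load shape this role's pipeline work takes (plain Python standing in for a Spark job; the CSV payload and field names are invented):

```python
import csv
import io

# Toy stand-in for a distributed pipeline: the same extract/transform/load
# shape applies whether the engine is plain Python or Apache Spark.
RAW = """order_id,amount,region
1001,250.00,west
1002,,east
1003,99.50,west
"""

def extract(text):
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing amounts, cast to float, total per region."""
    totals = {}
    for row in rows:
        if not row["amount"]:
            continue
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals

def load(totals, sink):
    """Write the aggregates into the target store (here, a plain dict)."""
    sink.update(totals)
    return sink

warehouse = load(transform(extract(RAW)), {})
```

In the actual stack, extract would read from S3, transform would be a Spark job, and load would land in Snowflake; the decomposition is the point.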
BigData Engineer / Architect
Data scientist job in Portland, OR
The hunt is for a strong Big Data professional: a team player able to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within internal teams and with partner organizations and suppliers.
Role: Big Data Engineer
Location: Portland, OR
Duration: Full Time
Skill Matrix:
MapReduce - Required
Apache Spark - Required
Informatica PowerCenter - Required
Hive - Required
Apache Hadoop - Required
Core Java / Python - Highly Desired
Healthcare Domain Experience - Highly Desired
Job Description
Responsibilities and Duties
Participate in technical planning & requirements gathering phases including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications.
Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and troubleshoots any existent issues.
Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models, to solve large-scale processing problems
Design, enhance and implement ETL/data ingestion platform on the cloud.
Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for enterprise-level data warehouses
Capable of investigating, becoming familiar with, and mastering new data sets quickly
Strong troubleshooting and problem-solving skills in large data environments
Experience building data platforms on the cloud (AWS or Azure)
Experience using Python, Java, or other languages to solve data problems
Experience implementing SDLC best practices and Agile methods.
Qualifications
Required Skills:
Data architecture/ Big Data/ ETL environment
Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi, or equivalent
Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight, Data Lake design)
Building and managing hosted big data architecture; toolkit familiarity with Hadoop plus Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, and NiFi
Foundational data management concepts: reference data management (RDM) and master data management (MDM)
Experience working with JIRA/Git/Bitbucket/JUnit and other code management toolsets
Strong hands-on knowledge of solutioning languages such as Java, Scala, or Python (any one is fine)
Healthcare Domain knowledge
Required Experience, Skills and Qualifications
Qualifications:
Bachelor's degree with a minimum of 6 to 9+ years of relevant experience, or equivalent.
Extensive experience in data architecture/Big Data/ ETL environment.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Need Sr Big Data Engineer at Beaverton, OR (W2 only)
Data scientist job in Beaverton, OR
Hi,
We have an immediate opportunity with our direct client. Please send your resume as soon as possible if you are interested. Thank you.
Sr Big Data Engineer
Duration: Long Term
Skills
Typical Office: This is a typical office job, with no special physical requirements or unusual work environment.
Core responsibilities: Expert data engineers will work with product teams across the client to help automate and integrate a variety of data domains with a wide range of data profiles (different scale, cadence, and volatility) into the client's next-gen data and analytics platform. This is an opportunity to work across multiple subject areas and source platforms to ingest, organize, and prepare data through cloud-native processes.
Required skills/experience:
- 5+ years of professional development experience between either Python (preferred) or Scala/Java; familiarity with both is ideal
- 5+ years of data-centric development with a focus on efficient data access and manipulation at multiple scales
- 3+ years of experience with the HDFS ecosystem of tools (any distro, Spark experience prioritized)
- 3+ years of significant experience developing within the broader AWS ecosystem of platforms and services
- 3+ years of experience optimizing data access and analysis in non-HDFS data platforms (traditional RDBMSs, NoSQL / KV stores, etc.)
- Direct task development and/or configuration experience with a remote workflow orchestration tool - Airflow (preferred), Amazon Data Pipeline, Luigi, Oozie, etc.
- Intelligence, strong problem-solving ability, and the ability to effectively communicate to partners with a broad spectrum of experiential backgrounds
Several of the following skills are also desired:
- A demonstrably strong understanding of security and credential management between application / platform components
- A demonstrably strong understanding of core considerations when working with data at scale, in both file-based and database contexts, including SQL optimization
- Direct experience with Netflix Genie is another huge plus
- Prior experience with the operational backbone of a CI/CD environment (pipeline orchestration + configuration management) is useful
- Clean coding practices, a passion for development, and being a good team player (plus experience with GitHub) are always nice
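The workflow-orchestration requirement above can be sketched in miniature: a task DAG executed in dependency order, which is the core idea behind Airflow, Luigi, and Oozie, minus scheduling, retries, and distribution. Task names here are made up for illustration.

```python
from graphlib import TopologicalSorter

# Task dependencies expressed as a DAG: node -> set of predecessor tasks.
dag = {
    "extract_orders": set(),
    "extract_users": set(),
    "join": {"extract_orders", "extract_users"},
    "publish": {"join"},
}

def run(dag, tasks):
    """Execute every task in a dependency-respecting order."""
    log = []
    for name in TopologicalSorter(dag).static_order():
        tasks[name]()  # a real orchestrator adds retries, alerts, SLAs here
        log.append(name)
    return log

order = run(dag, {name: (lambda: None) for name in dag})
```

Tools like Airflow let you declare exactly this structure and then handle the operational backbone (scheduling, backfills, failure handling) for you.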
Keys to Success:
- Deliver exceptional customer service
- Demonstrate accountability and integrity
- Be willing to learn and adapt every day
- Embrace change
Regards
Nithya
Additional Information
All your information will be kept confidential according to EEO guidelines. Please send profiles to ************************* and contact No. ************.
Data Engineer
Data scientist job in Tualatin, OR
Precinmac owns a family of precision machining companies in the US and Canada. This role's home location is the Shields MFG facility in Tualatin, Oregon, an industry leader and value-add, climate-controlled production facility specializing in CNC machining and complex mechanical/optical/laser assembly, including clean-room environments. The Data Engineer will play a critical role in supporting all areas of the company by enabling reliable, scalable, and secure information systems. Our businesses deliver specialized manufacturing expertise for OEMs with low-volume/high-mix needs, while also driving higher-volume opportunities through our expanding cell system capabilities. As an IT-driven organization, we rely on robust data and management information systems to ensure efficiency, transparency, and informed decision-making across the enterprise.
We offer:
A Highly competitive total compensation package
Medical (3 medical plans to choose from)
Dental
Vision
Life (Company-paid, and options for additional supplemental)
Disability Insurance (company paid short-term and long-term disability)
401(k) with company match
A generous paid time off schedule
Discretionary quarterly bonus program.
Data Engineer
We are looking for a highly motivated Data Engineer to join our growing Data Governance & Analytics team. In this role, you will work closely with senior engineers, architects, and business stakeholders to design and deliver scalable data solutions that power critical business insights and innovation. If you are passionate about building robust data pipelines, ensuring data quality, and leveraging cutting-edge cloud technologies, this is the role for you.
Key Responsibilities:
Partner with the Senior Data Engineer to design, build, and maintain scalable ETL pipelines and dataflows that adhere to enterprise governance and quality standards.
Implement data modeling, normalization, and metadata management practices to ensure consistency and usability across data platforms.
Leverage Azure Data Factory (ADF), Databricks, and Apache Spark to process and transform large volumes of structured and unstructured data.
Integrate data from diverse sources using RESTful APIs and other ingestion methods.
Apply advanced SQL expertise for querying, performance tuning, and ensuring data integrity (both T-SQL and PL/SQL).
Collaborate with business teams and data governance groups to enforce data quality, lineage, and compliance standards.
Contribute to the Agile development lifecycle, participating in sprint planning, stand-ups, and retrospectives.
Partner with data architects, analysts, and business leaders to design and deliver solutions aligned with organizational goals.
Provide technical expertise in Python and other scripting languages to automate data workflows.
Promote best practices in data governance, security, and stewardship across the enterprise.
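A minimal sketch of the API-ingestion-plus-normalization work the responsibilities above describe: flattening a nested API payload into row-per-reading records before loading. The payload and field names are illustrative, not an actual Precinmac schema.

```python
# Hypothetical nested payload, as an API for shop-floor telemetry might
# return it; one parent record with a list of child readings.
payload = {
    "machine": "CNC-7",
    "readings": [
        {"ts": "2024-05-01T08:00", "temp_c": 21.4},
        {"ts": "2024-05-01T09:00", "temp_c": 22.1},
    ],
}

def normalize(payload):
    """Flatten one parent record with a child list into row-per-reading."""
    return [
        {"machine": payload["machine"], **reading}
        for reading in payload["readings"]
    ]

rows = normalize(payload)
```

In an ADF/Databricks pipeline this shaping step would run inside a dataflow or Spark transform, but the normalization logic is the same.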
Required Skills & Experience:
Proven experience in data engineering with exposure to data governance frameworks (preferably GCCHI).
Strong proficiency with Azure Data Factory, Azure Databricks, Apache Spark, and Python.
Solid expertise in SQL (query optimization, performance tuning, complex joins, stored procedures) across T-SQL and PL/SQL.
Hands-on experience with ETL pipelines, dataflows, normalization, and data modeling.
Familiarity with RESTful API integration for data ingestion.
Experience contributing to Agile teams and sprint-based deliverables.
Strong understanding of data structures, metadata management, and governance best practices.
Practical experience automating workflows with Python scripting.
Preferred Skills:
Experience with data cataloging, data lineage, and master data management (MDM) tools.
Knowledge of Azure Synapse Analytics, Power BI, or other BI/visualization platforms.
Familiarity with CI/CD practices for data pipelines.
Exposure to data privacy regulations (CMMC, NIST 800).
Why Join Us?
Work on impactful projects that enable smarter business decisions.
Gain hands-on experience with advanced Azure technologies and modern data tools.
Be part of a collaborative, agile team where innovation and continuous improvement are valued.
Grow your career in a forward-looking data-driven organization.
Work Setting:
General office setting with typical moderate noise levels in a temperature controlled environment. Operates office equipment (computer, fax, copier, phone) as required to perform essential job functions.
Precinmac is an equal opportunity, affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Data Engineer
Data scientist job in Portland, OR
at WebMD
WebMD is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, ancestry, color, religion, sex, gender, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.
The ideal candidate has experience and passion for data engineering. They are an expert in their field, whether that be the front end (data visualization, DAX, Power BI, SQL) or the back end (SSIS, Azure Data Factory, Synapse, data modeling, SQL) of the data engineering world. They exhibit expert knowledge of architecture and system design principles, have a consistent record of very strong ownership of their area, and are considered the guru in their technical space. This person is able to solve complex cross-functional challenges, perform complex analysis of business needs, and provide innovative solutions. They work across the organization to foster a culture of architecture that allows for iterative, autonomous development and future scaling, and they guide teams in anticipation of future use cases, helping them make design decisions that minimize the cost of future changes.
DUTIES & RESPONSIBILITIES
Builds software to extract, transform, and load data - SSIS, Azure Data Factory, Synapse
Models data for efficient consumption by reporting and analytics tools - Azure Analysis Services, Power BI, SQL Databases
Maintains previously deployed software and reports - Power BI, SSIS, AAS, SQL Server
Designs dashboards and reports to meet business needs
Presents technical problems with solutions in mind, in a constructive and understandable fashion
Demonstrates proficiency by sharing learnings with the team and in technical showcases
Independently discovers solutions and collaborates on insights and best practices
Takes ownership of their work deliverables
Actively seeks out opportunities to help improve the team's practices and processes to achieve fast flow
Works with other developers to facilitate knowledge transfer and conduct code reviews
Ensures that credit is shared and given when due
Works to build and improve strong relationships among team members
REQUIREMENTS
3+ years of Data Engineering experience
Bachelor's Degree in Information Systems, Computer Science, Business Operations, or equivalent work experience
Advanced experience with Structured Query Language (SQL) and data modeling/architecture
Advanced experience with Data Analysis Expressions (DAX)
Advanced experience and understanding of data integration engines such as SQL Server Integration Services (SSIS) or Azure Data Factory, or Synapse
All offers are contingent upon the successful completion of a background check
PREFERRED SKILLS AND KNOWLEDGE
Proficiency with data engineering in a cloud development environment - Microsoft Azure is preferred
Proficiency with self-service query tools and dashboards - preferably Power BI
Proficiency with star schema design and managing large data volumes
Familiarity with data science and machine learning capabilities
Ability to interact with people, inside and outside the team, in order to see a project to completion
Experience protecting individual privacy, such as required by HIPAA
Senior Data Engineer
Data scientist job in Portland, OR
**Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position combines deep technical expertise in data engineering with team leadership responsibilities, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with data product teams and business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into the central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations.
The base salary range is $120,000 - $140,000 per year.
**What you'll be doing:**
+ Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake.
+ Partner with platform owners across business units to establish and maintain data integrations from third party systems into the central data platform.
+ Architect and maintain data infrastructure using infrastructure as code (IaC), ensuring reproducibility, version control, and disaster recovery capabilities.
+ Design and implement API integrations and event-driven data flows to support real time and batch data requirements.
+ Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities.
+ Partner with the Data Architect and data product teams to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs.
+ Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
+ Support rapid prototyping of new data products in collaboration with data product teams by building flexible, reusable data infrastructure components.
+ Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability.
+ Collaborate with data product teams, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization.
+ Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources.
+ Develop and maintain comprehensive documentation for data engineering processes and systems, architecture, integration patterns, and runbooks.
+ Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact.
+ Stay current with the emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.
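The data-quality validation bullet above might look like this in miniature: declarative per-column checks that split incoming records into accepted and rejected sets before they enter the pipeline. The rules and field names are hypothetical.

```python
# Minimal data-quality gate: each column maps to a predicate that must hold
# for the record to be accepted. Rules and field names are invented.
RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def validate(records, rules=RULES):
    """Partition records into accepted rows and (record, failures) pairs."""
    good, bad = [], []
    for rec in records:
        failures = [col for col, check in rules.items() if not check(rec.get(col))]
        if failures:
            bad.append((rec, failures))
        else:
            good.append(rec)
    return good, bad

good, bad = validate([
    {"user_id": 7, "email": "a@example.com"},
    {"user_id": -1, "email": "nope"},
])
```

A production version would feed the rejected set into monitoring and alerting rather than a list, but the accept/reject partition is the core of the check.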
**Our ideal candidate will have the following:**
+ Bachelor's degree in computer science, engineering, or a related field
+ Minimum of seven years of experience in data engineering with at least two years in a lead or senior technical role
+ Expert proficiency in Snowflake data engineering patterns
+ Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform
+ Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs)
+ Proven ability to work with third party APIs, webhooks, and data exports
+ Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure
+ Proven ability to design and implement API integrations and event-driven architecture
+ Experience with data modeling, data warehousing, and ETL processes at scale
+ Advanced proficiency in Python and SQL for data pipeline development
+ Experience with data orchestration tools (Airflow, dbt, Snowflake tasks)
+ Strong understanding of data security, access controls, and compliance requirements
+ Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms
+ Excellent problem-solving skills and attention to detail
+ Strong communication and collaboration skills
**Additional Information**
Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.
Advance Local Media is one of the largest media groups in the United States, which operates the leading news and information companies in more than 20 cities, reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ******************** .
Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.
_Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._
_If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._
Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
Google Cloud Data & AI Engineer
Data scientist job in Portland, OR
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products that help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applicants with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
#LI-FB1