
Data engineer jobs in Vancouver, WA

- 405 jobs
  • Software Engineer - Qualtrics

    Mainz Brady Group

    Data engineer job in Beaverton, OR

    HYBRID ONSITE IN BEAVERTON, OR! MUST HAVE QUALTRICS EXPERIENCE. We're seeking a skilled and experienced Software Engineer who specializes in Qualtrics. This role will be part of a high-visibility, high-impact initiative to optimize and expand our Qualtrics environment. You'll play a key role in designing, developing, and maintaining scalable solutions that enhance user experience, streamline data collection, and improve reporting accuracy. The ideal candidate has a strong background in Qualtrics architecture, API integrations, and automation, plus a passion for creating efficient, user-friendly tools that empower teams to make data-driven decisions.

    What we're looking for:
    • 3+ years of hands-on Qualtrics engineering or development experience
    • Strong understanding of survey logic, workflows, APIs, and automation
    • Experience with data visualization and analytics tools (Tableau, Power BI, etc.)
    • Background in software engineering (JavaScript, Python, or similar)
    • Ability to partner cross-functionally with researchers, analysts, and product teams
    $77k-108k yearly est. 4d ago
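For illustration, here is a minimal, hedged sketch of the kind of Qualtrics API integration this posting emphasizes: listing surveys through the Qualtrics v3 REST API. The data-center subdomain and the QUALTRICS_TOKEN environment variable are assumptions for the example, not details from the listing.

```python
# Hypothetical sketch: enumerate surveys via the Qualtrics v3 REST API.
# The data-center subdomain and token env var are placeholders.
import os

import requests

DATA_CENTER = os.environ.get("QUALTRICS_DC", "yul1")  # e.g. "yul1", "ca1"
BASE_URL = f"https://{DATA_CENTER}.qualtrics.com/API/v3"

def list_surveys(api_token: str) -> list[dict]:
    """Return the surveys visible to this API token."""
    resp = requests.get(
        f"{BASE_URL}/surveys",
        headers={"X-API-TOKEN": api_token},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["elements"]

if __name__ == "__main__":
    for survey in list_surveys(os.environ["QUALTRICS_TOKEN"]):
        print(survey["id"], survey["name"])
```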
  • Application Support Engineer

    Cvent (4.3 company rating)

    Data engineer job in Portland, OR

    Pacific Time zone working hours (9am - 6pm PT)

    Our Culture and Impact: Cvent is a leading meetings, events, and hospitality technology provider with more than 5,000 employees and 24,000+ customers worldwide, including 60% of the Fortune 500. Founded in 1999, Cvent delivers a comprehensive event marketing and management platform for marketers and event professionals and offers software solutions to hotels, special event venues and destinations to help them grow their group/MICE and corporate travel business. Our technology brings millions of people together at events around the world. In short, we're transforming the meetings and events industry through innovative technology that powers the human connection. Cvent's strength lies in its people, fostering a culture where everyone is encouraged to think like entrepreneurs, taking risks and making decisions confidently. We value diverse perspectives and celebrate differences, working together with colleagues and clients to build strong connections.

    AI at Cvent: Leading the Future. Are you ready to shape the future of work at the intersection of human expertise and AI innovation? At Cvent, we're committed to continuous learning and adaptation; AI isn't just a tool for us, it's part of our DNA. We're looking for candidates who are eager to evolve alongside technology. If you love to experiment boldly, share your discoveries, and help define best practices for AI-augmented work, you'll thrive here. Our team values professionals who thoughtfully integrate AI into their daily work, delivering exceptional results while relying on the human judgment and creativity that drive real innovation. Throughout our interview process, you'll have the chance to demonstrate how you use AI to learn, iterate, and amplify your impact. If you're excited to be part of a team that's leading the way in AI-powered collaboration, we'd love to meet you.

    Do you have a passion for technology? Do you enjoy solving real-world problems? Do you have a helpful and inquisitive mindset? If you answered yes to all three questions, then keep reading! This entry-level Application Support Engineer opportunity is a jack of all technology trades. This role is technical in nature and provides exposure to all major aspects of cloud-based software (debugging/coding, test, networking, database/infrastructure). Our end goal is to ensure our customers have the best possible experience with our products from a technical perspective. This entails doing everything we can to make sure that our products are bug-free and have top-notch performance. As far as important candidate qualities go, strong communication and the ability to work with multiple teams are a must. We care more about your attitude and aptitude than the tools and technologies you have used in the past.

    In This Role, You Will:
    • Provide top-tier software support for Cvent's product offerings to our customer service team and clients (this role is not customer facing)
    • Assist operations and development teams with debugging software issues
    • Query databases to generate and analyze data for reporting and troubleshooting purposes
    • Work with our sales engineering team to ensure the successful operation of our partner and client integrations
    • Work with multiple teams to find, analyze, and resolve client issues
    • Troubleshoot and maintain frontend and backend systems
    • Monitor, document, and report system and performance issues
    • Facilitate communication between technology teams and other departments on issue status and resolution
    • Supply in-depth technical and business product knowledge to multiple teams

    Weekend on-call support on a rotational basis is required for this position.

    Here's What You Need: Do not worry if you do not know all the specific technologies listed below! Our training program lasts 3-4 weeks and will help bring you up to speed on both our products and the technologies they use.
    • BS in Computer Science, Information Systems, or equivalent major with strong academic performance
    • Excellent problem-solving and analytical skills
    • Understanding of relational databases and how to query data using SQL
    • Working knowledge of HTML/CSS
    • Understanding of the Software Development Life Cycle
    • Solid knowledge of at least one object-oriented programming language
    • Outstanding oral and written communication skills
    • Ability to convey technical information to a general audience
    • Aptitude for learning new technologies
    • Zealous attention to detail

    Wondering what other technologies and tools we use? See below. Any experience with these is a plus!
    • Monitoring tools: New Relic, Splunk, Datadog
    • Hosting: Amazon Web Services
    • Programming: Java, C#, .NET, Node.js
    • Open source and NoSQL database technologies: Couchbase, Elasticsearch, RabbitMQ
    • APIs: SOAP or REST based
    • Build & deploy technologies: Docker and Jenkins
    • Version control: Git

    The estimated base salary range for new hires into this role is $85k-$120k+ annually plus bonus, depending on factors such as job-related knowledge, relevant experience, and location. We also offer a competitive benefits package, details of which can be found here. We are not able to offer sponsorship for this position.
    $85k-120k yearly 3d ago
  • Data Scientist

    Eye Care Center of Salem

    Data engineer job in Portland, OR

    Job Description: We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math, and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research. Your goal will be to help our company analyze trends to make better decisions.

    Responsibilities:
    • Identify valuable data sources and automate collection processes
    • Preprocess structured and unstructured data
    • Analyze large amounts of information to discover trends and patterns
    • Build predictive models and machine-learning algorithms
    • Combine models through ensemble modeling
    • Present information using data visualization techniques
    • Propose solutions and strategies to business challenges
    • Collaborate with engineering and product development teams

    Requirements and skills:
    • Proven experience as a Data Scientist or Data Analyst
    • Experience in data mining
    • Understanding of machine learning and operations research
    • Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset
    • Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
    • Analytical mind and business acumen
    • Strong math skills (e.g. statistics, algebra)
    • Problem-solving aptitude
    • Excellent communication and presentation skills
    • BSc/BA in Computer Science, Engineering, or relevant field; a graduate degree in Data Science or other quantitative field is preferred
    $73k-104k yearly est. 19d ago
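The "combine models through ensemble modeling" responsibility above is a standard pattern. As a hedged illustration (not from the posting), here is a scikit-learn soft-voting ensemble trained on synthetic data:

```python
# Illustrative only: a soft-voting ensemble over two base models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Soft voting averages the predicted class probabilities of the base models.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=42)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"held-out accuracy: {ensemble.score(X_test, y_test):.3f}")
```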
  • Sr. Data Engineer

    Genoa Employment Solutions (4.8 company rating)

    Data engineer job in Beaverton, OR

    A FlexIT client has an immediate need for a Sr. Data Engineer on a 12-month remote contract with a client based in Beaverton, Oregon.
    $97k-139k yearly est. 60d+ ago
  • Data Engineer

    Panthalassa

    Data engineer job in Portland, OR

    About the Company: We are a renewable energy and ocean technology company committed to rapidly developing and deploying technologies that will ensure a sustainable future for Earth by unlocking the vast energy potential of its oceans. Our focus is on capturing civilizational levels of ultra-low-cost renewable energy for applications including computing and affordable renewable fuels delivered to shore. The company is a public benefit corporation headquartered in Portland, Oregon, and backed by leading venture capitalists, philanthropic investors, university endowments, and private investment offices. We operate as an idea meritocracy in which the best ideas change the company's direction on a regular basis.

    About the Job: We are developing a core technology that will operate in the most extreme marine environments for years at a time without human maintenance or intervention. We are seeking a Data Engineer with strong software development skills to join our team developing next-generation ocean energy systems. You will work at the intersection of data analysis, simulation, and engineering, supporting the development of clean energy technologies designed to operate in some of the world's most challenging marine environments. In this role, you'll help build and maintain data analysis and simulation pipelines, support R&D with tools to process and interpret engineering datasets, and contribute to internal software used by cross-functional teams. You'll work closely with senior engineers and simulation experts, gaining exposure to real-world physics problems, computational tools, and large-scale scientific workflows. This is an opportunity for an early-career engineer or developer who's excited to contribute to a high-impact mission, write clean and maintainable code, and grow alongside experienced technical mentors.

    Our staff have worked at organizations such as SpaceX, Blue Origin, Boeing, Tesla, Apple, Virgin Orbit, Google, Amazon, Microsoft, New Relic, Bridgewater, Raytheon, Disney Imagineering, and the US Army and Air Force, as well as research universities, startups, and small companies across a range of industries. We are organized as a public benefit corporation and are backed by leading venture capital firms, private investors, philanthropic investors, and endowments. We strive to be the best engineering team on the planet and we compensate our team members accordingly.
    Responsibilities:
    • Develop and maintain data analysis tools to support engineering design, simulation, and testing workflows
    • Clean, process, and analyze large datasets from CFD simulations, experiments, and field deployments
    • Collaborate with senior engineers to extract insights from simulation results and translate them into actionable design feedback
    • Write modular, well-documented code in Python to automate repetitive or computational workflows
    • Assist in the development of internal software used for simulation pipeline orchestration and post-processing
    • Support post-processing of CFD results using tools such as OpenFOAM, Star-CCM+, and ParaView
    • Work with HPC or cloud compute environments to run and manage large simulation or data processing jobs
    • Contribute to the development of internal documentation, best practices, and reusable analysis scripts
    • Participate in code reviews, collaborative debugging sessions, and weekly team check-ins to share findings and progress
    • Continuously learn new tools, frameworks, and domain-specific knowledge to grow within a fast-paced R&D team

    Required Qualifications:
    • Legal authorization to work in the United States
    • Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Physics, Applied Mathematics, or a related field
    • Proficiency in Python and familiarity with key data analysis libraries (e.g., NumPy, pandas, matplotlib)
    • Experience writing clean, well-structured code for scientific or engineering problems
    • Familiarity with software development best practices including version control (e.g., Git) and modular code design
    • Ability to interpret and work with structured datasets from simulations or experiments
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent collaboration and communication skills within technical teams
    • Self-motivated with a desire to learn and take ownership of tasks in a fast-paced environment

    Preferred Qualifications:
    • Experience with CFD tools such as OpenFOAM, Star-CCM+, or ParaView for post-processing
    • Exposure to scientific computing workflows including simulation automation or batch processing
    • Familiarity with HPC environments, Linux-based systems, and bash or Python scripting for automation
    • Understanding of fluid dynamics, ocean engineering, or physics-based modeling
    • Experience building or contributing to internal software tools for data analysis, simulation, or visualization
    • Academic or internship experience involving simulation data pipelines or engineering R&D projects

    The above qualifications are desired, not required. We encourage you to apply if you are a strong candidate with only some of the desired skills and experience listed.

    Additional Requirements: Occasional extended hours or weekend work to support key milestones. Strong preference for candidates based in Portland, OR. Exceptional remote candidates will be considered.

    Compensation and Benefits: If hired for this full-time role, you will receive:
    • Cash compensation of $110,000-$175,000
    • Equity in the company. We're all owners, and if we're successful, this equity should be far and away the most valuable component of your compensation.
    • A benefits package that helps you take care of yourself and your family, including:
      - Flexible paid time off
      - Health insurance (the company pays 100% of a gold-level PPO plan for full-time employees, their partners, and dependents)
      - Dental insurance (the company pays 33% for full-time employees and 100% for their partners and dependents)
      - Vision insurance (the company pays 100% for full-time employees, their partners, and dependents)
      - Disability insurance (the company pays 100% for a policy to provide long-term financial support if you become disabled)
      - Ability to contribute to tax-advantaged accounts, including 401(k), health FSA, and dependent care FSA
    • Relocation assistance to facilitate your move to Portland (if needed)

    Location: We have a strong preference for candidates based in Portland, OR, as this is an in-office role. Our offices, lab, and shop are located in Portland, Oregon.
    $110k-175k yearly Auto-Apply 60d+ ago
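The posting's "clean, process, and analyze large datasets" work maps naturally onto the NumPy/pandas stack it names. A minimal sketch, assuming a hypothetical time-series schema ("time_s", "power_kw") that the listing does not actually specify:

```python
# Hedged example: summarize a simulated power time-series with pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Stand-in for a real export, e.g. pd.read_csv("run_001.csv").
df = pd.DataFrame({
    "time_s": np.arange(0, 600, 0.5),                # 10 minutes at 2 Hz
    "power_kw": 40 + 5 * rng.standard_normal(1200),  # noisy signal
})

# Resample to 10-second means, the kind of repetitive post-processing step
# this role would automate across many simulation runs.
df["time_s"] = pd.to_timedelta(df["time_s"], unit="s")
summary = df.set_index("time_s")["power_kw"].resample("10s").mean()
print(summary.describe())
```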
  • Sr. Data Engineer

    Concora Credit

    Data engineer job in Beaverton, OR

    As a Sr. Data Engineer, you'll help drive Concora Credit's Mission to enable customers to Do More with Credit - every single day.

    The impact you'll have at Concora Credit: We are seeking a Sr. Data Engineer with deep expertise in Azure and Databricks to lead the design, development, and optimization of scalable data pipelines and platforms. You'll be responsible for building robust data solutions that power analytics, reporting, and machine learning across the organization using Azure cloud services and Databricks. This position is located at our Beaverton, OR office and has a hybrid schedule; we're onsite Monday through Wednesday.

    We hire people, not positions. That's because, at Concora Credit, we put people first, including our customers, partners, and Team Members. Concora Credit is guided by a single purpose: to help non-prime customers do more with credit. Today, we have helped millions of customers access credit. Our industry leadership, resilience, and willingness to adapt ensure we can help our partners responsibly say yes to millions more. As a company grounded in entrepreneurship, we're looking to expand our team and are looking for people who foster innovation, strive to make an impact, and want to Do More! We're an established company with over 20 years of experience, but now we're taking things to the next level. We're seeking someone who wants to impact the business and play a pivotal role in leading the charge for change.

    Responsibilities: As our Sr. Data Engineer, you will:
    • Design and develop scalable, efficient data pipelines using Azure Databricks
    • Build and manage data ingestion, transformation, and storage solutions leveraging Azure Data Factory, Azure Data Lake, and Delta Lake
    • Implement CI/CD for data workflows using tools like Azure DevOps, Git, and Terraform
    • Optimize performance and cost efficiency across large-scale distributed data systems
    • Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver reliable, reusable datasets
    • Provide guidance and mentorship to junior engineers and actively contribute to data platform best practices
    • Monitor, troubleshoot, and optimize existing pipelines and infrastructure to ensure reliability and scalability

    These duties must be performed with or without reasonable accommodation. We know experience comes in many forms and that many skills are transferable. If your experience is close to what we're looking for, consider applying. Diversity has made us the entrepreneurial and innovative company that we are today.
    Qualifications:
    • 5+ years of experience in data engineering, with a strong focus on Azure cloud technologies
    • Experience with Azure Databricks, Azure Data Lake, and Data Factory, including PySpark, SQL, Python, and Delta Lake
    • Strong proficiency in Databricks and Apache Spark
    • Solid understanding of data warehousing, ETL/ELT, and data modeling best practices
    • Experience with version control, CI/CD pipelines, and infrastructure as code
    • Knowledge of Spark performance tuning, partitioning, and job orchestration
    • Excellent problem-solving skills and attention to detail
    • Strong communication and collaboration abilities across technical and non-technical teams
    • Ability to work independently and lead in a fast-paced, agile environment
    • Passion for delivering clean, high-quality, and maintainable code

    Preferred Qualifications:
    • Experience with Unity Catalog, Databricks Workflows, and Delta Live Tables
    • Familiarity with DevOps practices or Terraform for Azure resource provisioning
    • Understanding of data security, RBAC, and compliance in cloud environments
    • Experience integrating Databricks with Power BI or other analytics platforms
    • Exposure to real-time data processing using Kafka, Event Hubs, or Structured Streaming

    What's In It For You:
    • Medical, Dental and Vision insurance for you and your family
    • Relax and recharge with Paid Time Off (PTO)
    • 6 company-observed paid holidays, plus 3 paid floating holidays
    • 401k (after 90 days) plus employer match up to 4%
    • Pet Insurance for your furry family members
    • Wellness perks including onsite fitness equipment at both locations, EAP, and access to the Headspace App
    • We invest in your future through Tuition Reimbursement
    • Save on taxes with Flexible Spending Accounts
    • Peace of mind with Life and AD&D Insurance
    • Protect yourself with company-paid Long-Term Disability and voluntary Short-Term Disability

    Concora Credit provides equal employment opportunities to all Team Members and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Concora Credit is an equal opportunity employer (EEO). Please see the Concora Credit Privacy Policy for more information on how Concora Credit processes your personal information during the recruitment process and, if applicable, based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact caprivacynotice@concoracredit.com.
    $84k-118k yearly est. Auto-Apply 35d ago
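Since this posting centers on Azure Databricks pipelines with Delta Lake, a minimal hedged sketch of one ingestion step follows. The paths and the transaction_id/amount columns are invented, and a Spark runtime with the delta-spark package is assumed.

```python
# Hypothetical bronze-layer ingestion: CSV landing zone -> Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

raw = spark.read.option("header", "true").csv("/mnt/landing/transactions/")

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))  # assumed column
       .withColumn("ingested_at", F.current_timestamp())
       .dropDuplicates(["transaction_id"])                    # assumed key
)

# Append to a Delta Lake table so downstream jobs get ACID guarantees.
cleaned.write.format("delta").mode("append").save("/mnt/bronze/transactions")
```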
  • Sr. Hadoop Developer

    Bridge Tech (4.2 company rating)

    Data engineer job in Beaverton, OR

    Job Description: Typically requires a Bachelor's degree and a minimum of 5 years of directly relevant work experience. The client is embarking on a big data platform in Consumer Digital using a Hadoop Distributed File System cluster. As a Sr. Hadoop developer you will work with a variety of talented client teammates and be a driving force for building solutions. You will be working on development projects related to commerce and web analytics.

    Responsibilities:
    • Design and implement MapReduce jobs to support distributed processing using Java, Cascading, Python, Hive, and Pig; ability to design and implement end-to-end solutions
    • Build libraries, user-defined functions, and frameworks around Hadoop
    • Research, evaluate, and utilize new technologies/tools/frameworks in the Hadoop ecosystem
    • Develop user-defined functions to provide custom Hive and Pig capabilities
    • Define and build data acquisition and consumption strategies
    • Define and develop best practices
    • Work with support teams in resolving operational and performance issues
    • Work with architecture/engineering leads and other teams on capacity planning

    Qualifications:
    • MS/BS degree in a computer science field or related discipline
    • 6+ years' experience in large-scale software development
    • 1+ year experience in Hadoop
    • Strong Java programming, shell scripting, Python, and SQL
    • Strong development skills around Hadoop, MapReduce, Hive, Pig, Impala
    • Strong understanding of Hadoop internals
    • Good understanding of Avro, JSON, and other compression formats
    • Experience with build tools such as Maven
    • Experience with databases like Oracle
    • Experience with performance/scalability tuning, algorithms, and computational complexity
    • Experience (at least familiarity) with data warehousing, dimensional modeling, and ETL development
    • Ability to understand ERDs and relational database schemas
    • Proven ability to work with cross-functional teams to deliver appropriate resolutions

    Nice to have:
    • Experience with open source NoSQL technologies such as HBase and Cassandra
    • Experience with messaging and complex event processing systems such as Kafka and Storm
    • Machine learning frameworks
    • Statistical analysis with Python, R, or similar

    Additional Information: All your information will be kept confidential according to EEO guidelines.
    $90k-118k yearly est. 60d+ ago
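The MapReduce-with-Python responsibility above is commonly met with Hadoop Streaming, where the mapper and reducer are plain scripts reading stdin. A classic word-count sketch (illustrative, not the client's code):

```python
#!/usr/bin/env python3
# Word count for Hadoop Streaming; choose the phase with an argument.
# Test locally: cat input.txt | python wc.py map | sort | python wc.py reduce
import sys
from itertools import groupby

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")            # emit key<TAB>count pairs

def reducer():
    # Hadoop sorts mapper output by key before the reduce phase runs,
    # so consecutive lines with the same key can be grouped directly.
    pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(n) for _, n in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```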
  • Sr Data Engineer

    Moda Health (4.5 company rating)

    Data engineer job in Portland, OR

    Let's do great things, together!

    About Moda: Founded in Oregon in 1955, Moda is proud to be a company of real people committed to quality. Today, like then, we're focused on building a better future for healthcare. That starts by offering outstanding coverage to our members, compassionate support to our community, and comprehensive benefits to our employees. It keeps going by connecting with neighbors to create healthy spaces and places, together. Moda values diversity and inclusion in our workplace. We aim to demonstrate our commitment to diversity through all our business practices and invite applications from candidates that share our commitment to this diversity. Our diverse experiences and perspectives help us become a stronger organization. Let's be better together.

    Position Summary: The Senior Data Engineer on the Data Science team builds and automates data pipelines, implements quality controls, and contributes to maintaining existing processes associated with the current Data Warehouse solution. This is a full-time work-from-home position.

    Pay Range: $92,940.40 - $140,000.00 annually (depending on experience). *This role may be classified as hourly (non-exempt) depending on the applicant's location. Actual pay is based on qualifications. Applicants who do not exceed the minimum qualifications will only be eligible for the low end of the pay range. Please fill out an application on our company page, linked below, to be considered for this position. ************************** GK=27763372&refresh=true

    Benefits: Medical, Dental, Vision, Pharmacy, Life, & Disability; 401K matching; FSA; Employee Assistance Program; PTO and company-paid holidays.

    Required Skills, Experience & Education:
    • 5-7 years of experience working with large data sets using relational databases and/or data analysis tools (SQL Server, Oracle, Snowflake, MS Fabric, SAS Enterprise Guide), with experience in tools like Power BI or other Business Intelligence development a plus
    • 5-7 years of regular SQL use, with advanced knowledge of query optimization and performance management across on-prem and cloud solutions (SQL Server, Snowflake, MS Fabric) preferred
    • 5-7 years of experience with data transformation tools such as SQL stored procedures, dbt, Coalesce, or similar
    • Experience with both on-prem SQL servers and cloud data warehouses (SQL Server, Oracle, Snowflake, MS Fabric, BigQuery); experience with on-prem database server to cloud data warehouse migrations a plus
    • Experience with data pipeline orchestration tools such as SSIS, SQL Agent, Tidal, Airflow, and dbt
    • Experience working with healthcare data strongly preferred
    • Advanced knowledge of data management, relational data structures, and data modeling
    • Demonstrated ability to identify opportunities, lead development, and implement data-driven solutions that solve real business problems
    • Working knowledge of Agile methodology
    • Experience with programming languages such as Python, R, or Java is a plus
    • 3+ years of experience working with DevOps tools such as Azure DevOps is nice to have

    Primary Functions:
    • Design, develop, and test ETL code to ingest data from various sources into the centralized data warehouse
    • Automate, maintain, and scale ETL processes using a diverse set of tools, with flexibility for future enhancements
    • Support data platform modernization and execute migrations to new cloud-based infrastructure
    • Develop processes and systems to ensure data quality and accuracy
    • Build enterprise-grade modern data pipelines to encourage organizational adoption of next-generation data architecture strategies
    • Work closely with other teams to understand their requirements and support their data infrastructure needs
    • Support the Data Science team in implementing data solutions for complex machine learning problems
    • Participate collaboratively in Enterprise Data Warehousing, Business Intelligence, and other Data Management projects
    • Perform other duties as assigned

    Working Conditions & Contact with Others: Office environment with extensive close PC and keyboard use, constant sitting, and frequent phone communication. Must be able to navigate multiple computer screens. A reliable, high-speed, hard-wired internet connection is required to support remote or hybrid work. Must be comfortable being on camera for virtual training and meetings. Work in excess of the standard workweek, including evenings and occasional weekends, to meet business needs. Internally, works with Data Science, IT, and various data consumers and stakeholders. Externally, works with third-party providers of data and business solutions, data consumers, and various stakeholders.

    Together, we can be more. We can be better. Moda Health seeks to allow equal employment opportunities for all qualified persons without regard to race, religion, color, age, sex, sexual orientation, national origin, marital status, disability, veteran status or any other status protected by law. This is applicable to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absences, compensation, and training. For more information regarding accommodations, please direct your questions to Kristy Nehler & Danielle Baker via our ***************************** email.
    $92.9k-140k yearly Easy Apply 60d+ ago
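The "data quality and accuracy" function above often reduces to simple, automatable checks. A hedged pandas sketch with an invented claims schema (Moda's actual tables are not public):

```python
# Illustrative data-quality gate: fail the load if basic checks trip.
import pandas as pd

claims = pd.DataFrame({                      # stand-in for a staged extract
    "claim_id": [1, 2, 2, 4],
    "member_id": [10, 11, 11, None],
    "paid_amount": [125.0, 80.0, 80.0, 42.5],
})

issues = {
    "duplicate_claim_ids": int(claims["claim_id"].duplicated().sum()),
    "null_member_ids": int(claims["member_id"].isna().sum()),
    "negative_paid_amounts": int((claims["paid_amount"] < 0).sum()),
}
failed = {name: count for name, count in issues.items() if count > 0}
if failed:
    raise ValueError(f"data quality checks failed: {failed}")
```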
  • Data Engineer

    Insight Global

    Data engineer job in Vancouver, WA

    A client in the Vancouver, WA area is looking for a Data Engineer to join their team. This position will be full-time/direct hire and will be 5 days on-site, therefore we are looking for someone who enjoys being onsite in a collaborative environment. As a Data Engineer, you will be working with the Data Team and be responsible for Data Engineering as well as Data Operations and maintaining what has been engineered. In this position the tools and technologies you will be working with include, but are not limited to: Python, SQL, ETL processes, APIs (not building but calling APIs), SQL development, working with integrations of 3rd-party systems, and assisting on a project to build out a data lake to organize their data. In this role the team is doing data migrations from different systems and 3rd-party integrations, and building a data lake, as data is currently coming from different sources and they want to streamline the process. If this sounds like a role of interest, please apply today! Thank you.

    We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************

    Skills and Requirements:
    • 2-3 years of professional experience with SQL Server development; 30% of your time will be spent in SQL, which includes writing stored procedures, CTEs, building indexes, running queries, and reading queries
    • 3+ years of experience working with ETL; building tables
    • 3+ years of experience with Python, writing scripts
    • 3-4+ years of experience with Excel and Power BI, supporting the infrastructure for the data structure - not building reports, but supporting data that is performant and correct
    • Experience with APIs
    • Experience with AI tools: Copilot, Claude Code, etc.
    • Azure experience
    • Familiarity with star schema or snowflake schema
    • NetSuite, HubSpot, Pento, Salesforce, and other 3rd-party integrations
    $91k-128k yearly est. 14d ago
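The core loop this posting describes (call an API, land the data in SQL Server) can be sketched briefly; the endpoint, payload shape, table, and connection string below are all assumptions for illustration, not details from the client.

```python
# Hypothetical: pull JSON from a third-party API and stage it in SQL Server.
import pandas as pd
import requests
from sqlalchemy import create_engine

resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
orders = pd.json_normalize(resp.json()["orders"])    # assumed payload shape

engine = create_engine(
    "mssql+pyodbc://user:pass@server/warehouse"
    "?driver=ODBC+Driver+17+for+SQL+Server"          # placeholder DSN
)
# Stage into a landing table; stored procedures can merge it downstream.
orders.to_sql("stg_orders", engine, if_exists="replace", index=False)
```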
  • Big Data Engineer / Architect

    Nitor Infotech

    Data engineer job in Portland, OR

    The hunt is for a strong Big Data professional, a team player with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.

    Role: Big Data Engineer. Location: Portland, OR. Duration: Full Time.

    Skill Matrix:
    • MapReduce - Required
    • Apache Spark - Required
    • Informatica PowerCenter - Required
    • Hive - Required
    • Apache Hadoop - Required
    • Core Java / Python - Highly Desired
    • Healthcare Domain Experience - Highly Desired

    Responsibilities and Duties:
    • Participate in technical planning and requirements-gathering phases including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications
    • Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, troubleshooting any existing issues
    • Implementation, troubleshooting, and optimization of distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elasticsearch, Storm, Kafka, etc., in both on-premises and cloud deployment models, to solve large-scale processing problems
    • Design, enhance, and implement an ETL/data ingestion platform on the cloud
    • Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for an enterprise-level data warehouse
    • Capable of investigating, familiarizing with, and mastering new data sets quickly
    • Strong troubleshooting and problem-solving skills in large data environments
    • Experience with building data platforms on the cloud (AWS or Azure)
    • Experience using Python, Java, or any other language to solve data problems
    • Experience implementing SDLC best practices and Agile methods

    Required Skills:
    • Data architecture / Big Data / ETL environment
    • Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi, or equivalent
    • Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight, Data Lake Design)
    • Building and managing hosted big data architecture; toolkit familiarity in Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi
    • Foundational data management concepts - RDM and MDM
    • Experience working with JIRA/Git/Bitbucket/JUnit and other code management toolsets
    • Strong hands-on knowledge of solutioning languages like Java, Scala, or Python - any one is fine
    • Healthcare domain knowledge

    Qualifications: Bachelor's degree with a minimum of 6 to 9+ years of relevant experience or equivalent. Extensive experience in a data architecture / Big Data / ETL environment.

    Additional Information: All your information will be kept confidential according to EEO guidelines.
    $84k-118k yearly est. 22h ago
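As a hedged illustration of the Hive-plus-Spark stack in this posting, here is a minimal PySpark session querying a Hive table; the warehouse.claims table and its columns are invented, and a Hive-configured Spark installation is assumed.

```python
# Illustrative: Spark SQL against a Hive metastore table.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("claims-rollup")
    .enableHiveSupport()           # lets Spark read Hive metastore tables
    .getOrCreate()
)

monthly = spark.sql("""
    SELECT member_id,
           date_format(service_date, 'yyyy-MM') AS month,
           SUM(paid_amount) AS total_paid
    FROM warehouse.claims
    GROUP BY member_id, date_format(service_date, 'yyyy-MM')
""")
monthly.show(10)
```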
  • Sr. Data Engineer

    IT Vision Group

    Data engineer job in Portland, OR

    Job Description
    Title: Sr. Data Engineer
    Duration: 12 months+

    Roles & Responsibilities:
    • Perform data analysis according to business needs
    • Translate functional business requirements into high-level and low-level technical designs
    • Design and implement distributed data processing pipelines using Apache Spark, Apache Hive, Python, and other tools and languages prevalent in a modern analytics platform
    • Create and schedule workflows using Apache Airflow or similar job orchestration tooling
    • Build utilities, functions, and frameworks to better enable high-volume data processing
    • Define and build data acquisition and consumption strategies
    • Build and incorporate automated unit tests; participate in integration testing efforts
    • Work with teams to resolve operational and performance issues
    • Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and followed

    Tech Stack:
    • Apache Spark
    • Apache Spark Streaming using Apache Kafka
    • Apache Hive
    • Apache Airflow
    • Python
    • AWS EMR and S3
    • Snowflake
    • SQL
    • Other tools & technologies: PyCharm, Jenkins, GitHub
    • Apache NiFi (optional)
    • Scala (optional)
    $84k-118k yearly est. 15d ago
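The "create and schedule workflows using Apache Airflow" line lends itself to a short sketch: a minimal two-task DAG with placeholder task bodies rather than the client's real workflow (Airflow 2.x API assumed).

```python
# Hypothetical daily DAG: extract then transform, wired in dependency order.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data, e.g. from S3")        # placeholder work

def transform():
    print("run the Spark job or local transform")  # placeholder work

with DAG(
    dag_id="daily_ingest_demo",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task   # extract must finish before transform
```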
  • Senior Data Engineer

    Advance Local (3.6 company rating)

    Data engineer job in Portland, OR

    **Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position combines deep technical expertise in data engineering with team leadership responsibilities, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with data product teams across business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into a central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations. The base salary range is $120,000 - $140,000 per year.

    **What you'll be doing:**

    + Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake.
    + Partner with platform owners across business units to establish and maintain data integrations from third-party systems into the central data platform.
    + Architect and maintain data infrastructure using IaC, ensuring reproducibility, version control, and disaster recovery capabilities.
    + Design and implement API integrations and event-driven data flows to support real-time and batch data requirements.
    + Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities.
    + Partner with the Data Architect and data product teams to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs.
    + Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
    + Support rapid prototyping of new data products in collaboration with data product teams by building flexible, reusable data infrastructure components.
    + Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability.
    + Collaborate with data product teams, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization.
    + Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources.
    + Develop and maintain comprehensive documentation for data engineering processes, systems, architecture, integration patterns, and runbooks.
    + Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact.
    + Stay current with emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.
    **Our ideal candidate will have the following:**

    + Bachelor's degree in computer science, engineering, or a related field
    + Minimum of seven years of experience in data engineering, with at least two years in a lead or senior technical role
    + Expert proficiency in Snowflake data engineering patterns
    + Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform
    + Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs)
    + Proven ability to work with third-party APIs, webhooks, and data exports
    + Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure
    + Proven ability to design and implement API integrations and event-driven architecture
    + Experience with data modeling, data warehousing, and ETL processes at scale
    + Advanced proficiency in Python and SQL for data pipeline development
    + Experience with data orchestration tools (Airflow, dbt, Snowflake tasks)
    + Strong understanding of data security, access controls, and compliance requirements
    + Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms
    + Excellent problem-solving skills and attention to detail
    + Strong communication and collaboration skills

    **Additional Information**

    Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.

    Advance Local Media is one of the largest media groups in the United States, operating the leading news and information companies in more than 20 cities and reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ********************.

    Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.

    _Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds.
All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._ _If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._ Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
    $120k-140k yearly 20d ago
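One concrete shape for the "event-driven data flows" this posting describes is a small AWS Lambda handler that lands webhook payloads in S3 for later loading into Snowflake (for example via Snowpipe). The bucket name and key layout are assumptions for illustration:

```python
# Hypothetical event-driven hop: webhook event -> date-partitioned S3 object.
import json
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-raw-events"    # placeholder bucket name

def handler(event, context):
    """Write the incoming event to a raw zone, partitioned by date."""
    now = datetime.now(timezone.utc)
    key = f"webhooks/dt={now:%Y-%m-%d}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(event).encode())
    return {"statusCode": 200, "body": json.dumps({"stored_as": key})}
```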
  • Data Engineer

    Certified Languages International LLC (4.3 company rating)

    Data engineer job in Portland, OR

    Department: Software Development & Data Engineering

    Certified Languages International (CLI) is modernizing its interpreter services platform and scaling its cloud-based systems to support thousands of interpreters and customers worldwide. We are seeking a Data Engineer to build and optimize data pipelines, design scalable data architectures, and integrate complex on-premises and cloud data sources. This role is engineering-heavy: you will work directly with DBAs, backend developers, and cloud engineers to create robust data ingestion, transformation, and warehousing solutions that power mission-critical analytics, reporting, and machine learning. Our environment spans SQL Server, Azure, and Snowflake, along with integrations into CRM, telephony, and accounting systems.

    Key Responsibilities:
    • Pipeline Engineering: Design, implement, and maintain large-scale, production-grade ETL/ELT pipelines using tools such as Azure Data Factory, Databricks, and SSIS.
    • Data Architecture: Develop and manage data lakes, warehouses, and marts (Azure SQL, Snowflake) using modern patterns like medallion architecture, star schemas, and dimensional modeling.
    • Streaming & Real-Time Data: Build and optimize event-driven and streaming pipelines (Kafka, Event Hubs, Structured Streaming) to capture interpreter session data, call metrics, and workflow events.
    • Integration: Connect diverse systems - on-prem DBs, QuickBooks, Genesys/NICE CXone, MERFi, Salesforce - into unified cloud pipelines with automated validation.
    • Scalability & Performance: Tune SQL queries, ETL/ELT jobs, and orchestration workflows to handle high-volume, low-latency data at scale.
    • Automation & CI/CD: Implement automated workflows for data delivery, testing, and deployment using Azure Git/DevOps pipelines, Airflow, or equivalent orchestration tools.
    • Monitoring & Reliability: Build observability into data pipelines with logging, alerting, and error-handling frameworks to guarantee high availability.
    • Collaboration: Partner with DBAs on database refactoring and optimization, backend engineers on service-level integrations, and data analysts/scientists on clean and performant data delivery.
    • Security & Compliance: Engineer pipelines to comply with HIPAA, GDPR, and internal audit requirements, ensuring encryption, access control, and data lineage tracking.

    Required Qualifications:
    • Bachelor's degree in Computer Science, Data Engineering, or equivalent professional experience
    • 5+ years of data engineering experience building pipelines and data platforms
    • Deep expertise in SQL Server, T-SQL, and query optimization
    • Hands-on experience with Python and PySpark for large-scale data processing
    • Strong background in Azure Data Factory, Databricks, or equivalent cloud ETL frameworks
    • Experience with Snowflake or Azure SQL Data Warehouse
    • Familiarity with CI/CD practices for data pipelines (GitHub, Azure DevOps, or GitLab)
    • Strong understanding of data modeling, warehousing, and orchestration

    Preferred Qualifications:
    • Certification in Azure Data Engineering (DP-203) or Snowflake SnowPro
    • Experience with Kafka/Event Hubs for real-time ingestion and streaming pipelines
    • Background with containerized/cloud-native data services (Docker, Azure Container Apps)
    • Familiarity with NoSQL, APIs, and semi-structured data (JSON, XML)
    • Experience with contact center or healthcare data ecosystems

    Skills & Attributes:
    • Engineering-first mindset with a focus on scalability, maintainability, and performance
    • Strong debugging, problem-solving, and system design skills
    • Able to work cross-functionally with DBAs, cloud engineers, and developers on complex systems
    • Excited by data architecture modernization and building resilient pipelines at scale
    • Comfortable balancing legacy integration challenges with modern cloud-first solutions
    $81k-112k yearly est. Auto-Apply 60d+ ago
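The medallion architecture named above moves data through bronze (raw) and silver (cleaned) layers. A hedged PySpark sketch with invented paths and an invented interpreter-session schema, assuming a Delta-enabled Spark runtime:

```python
# Illustrative bronze -> silver hop in a medallion-style lakehouse.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/interpreter_sessions")

silver = (
    bronze.filter(F.col("session_id").isNotNull())          # drop bad rows
          .withColumn("duration_min", F.col("duration_sec") / 60.0)
          .dropDuplicates(["session_id"])                   # assumed key
)

silver.write.format("delta").mode("overwrite").save(
    "/lake/silver/interpreter_sessions"
)
```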
  • Data Engineer

    Precinmac (3.6 company rating)

    Data engineer job in Tualatin, OR

    Precinmac owns a family of precision machining companies in the US and Canada. This role's home location is the Shields MFG facility in Tualatin, Oregon, an industry leader and value-add, climate-controlled production facility specializing in CNC machining and complex mechanical/optical/laser assembly, including clean-room environments. The Data Engineer will play a critical role in supporting all areas of the company by enabling reliable, scalable, and secure information systems. Our businesses deliver specialized manufacturing expertise for OEMs with low-volume/high-mix needs, while also driving higher-volume opportunities through our expanding cell system capabilities. As an IT-driven organization, we rely on robust data and management information systems to ensure efficiency, transparency, and informed decision-making across the enterprise.

    We offer:
    • A highly competitive total compensation package
    • Medical (3 medical plans to choose from)
    • Dental
    • Vision
    • Life (company-paid, with options for additional supplemental coverage)
    • Disability insurance (company-paid short-term and long-term disability)
    • 401(k) with company match
    • A generous paid time off schedule
    • Discretionary quarterly bonus program

    We are looking for a highly motivated Data Engineer to join our growing Data Governance & Analytics team. In this role, you will work closely with senior engineers, architects, and business stakeholders to design and deliver scalable data solutions that power critical business insights and innovation. If you are passionate about building robust data pipelines, ensuring data quality, and leveraging cutting-edge cloud technologies, this is the role for you.

    Key Responsibilities:
    • Partner with the Senior Data Engineer to design, build, and maintain scalable ETL pipelines and dataflows that adhere to enterprise governance and quality standards.
    • Implement data modeling, normalization, and metadata management practices to ensure consistency and usability across data platforms.
    • Leverage Azure Data Factory (ADF), Databricks, and Apache Spark to process and transform large volumes of structured and unstructured data.
    • Integrate data from diverse sources using RESTful APIs and other ingestion methods.
    • Apply advanced SQL expertise for querying, performance tuning, and ensuring data integrity (both T-SQL and PL/SQL).
    • Collaborate with business teams and data governance groups to enforce data quality, lineage, and compliance standards.
    • Contribute to the Agile development lifecycle, participating in sprint planning, stand-ups, and retrospectives.
    • Partner with data architects, analysts, and business leaders to design and deliver solutions aligned with organizational goals.
    • Provide technical expertise in Python and other scripting languages to automate data workflows.
    • Promote best practices in data governance, security, and stewardship across the enterprise.

    Required Skills & Experience:
    • Proven experience in data engineering with exposure to data governance frameworks (preferably GCCHI)
    • Strong proficiency with Azure Data Factory, Azure Databricks, Apache Spark, and Python
    • Solid expertise in SQL (query optimization, performance tuning, complex joins, stored procedures) across T-SQL and PL/SQL
    • Hands-on experience with ETL pipelines, dataflows, normalization, and data modeling
    • Familiarity with RESTful API integration for data ingestion
    • Experience contributing to Agile teams and sprint-based deliverables
    • Strong understanding of data structures, metadata management, and governance best practices
    • Practical experience automating workflows with Python scripting

    Preferred Skills:
    • Experience with data cataloging, data lineage, and master data management (MDM) tools
    • Knowledge of Azure Synapse Analytics, Power BI, or other BI/visualization platforms
    • Familiarity with CI/CD practices for data pipelines
    • Exposure to data privacy regulations (CMMC, NIST 800)

    Why Join Us?
    • Work on impactful projects that enable smarter business decisions.
    • Gain hands-on experience with advanced Azure technologies and modern data tools.
    • Be part of a collaborative, agile team where innovation and continuous improvement are valued.
    • Grow your career in a forward-looking, data-driven organization.

    Work Setting: General office setting with typical moderate noise levels in a temperature-controlled environment. Operates office equipment (computer, fax, copier, phone) as required to perform essential job functions.

    Precinmac is an equal opportunity, affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $100k-140k yearly est. Auto-Apply 60d+ ago
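For the "RESTful APIs and other ingestion methods" responsibility above, a short hedged sketch of paginated ingestion; the endpoint and the "page"/"items"/"next_page" conventions are assumptions for illustration, not a real vendor API.

```python
# Hypothetical paginated REST ingestion loop.
import requests

def fetch_all(base_url: str) -> list[dict]:
    """Walk page-numbered results until the API reports no next page."""
    records, page = [], 1
    while True:
        resp = requests.get(base_url, params={"page": page}, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["items"])          # assumed response key
        if not payload.get("next_page"):          # assumed paging flag
            return records
        page += 1

if __name__ == "__main__":
    rows = fetch_all("https://api.example.com/v1/work-orders")
    print(f"fetched {len(rows)} records")
```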
  • Adobe Real-Time Customer Data Platform (RT-CDP) Architect

    Slalom (4.6 company rating)

    Data engineer job in Portland, OR

    Who You'll Work With: The Adobe team drives strategic direction and solution enablement in support of marketing teams. We accelerate innovation and learning and advance sales and delivery excellence with high-caliber marketing technology solutions, including Adobe technology expertise. Our focus is four go-to-market solution areas: Experience and Content Management, with a focus on Content Supply Chain and Digital Asset Management; Personalized Insights and Engagement, with a focus on Analytics, Customer Data Platforms, and Journey Orchestration; Digital Commerce, with a focus on Experience-Led Commerce and Product Information Management; and Marketing Operations and Workflow, with a focus on resource management, reporting, and approvals of the content and data required to run personalization and campaigns at scale.

    We are seeking a talented Adobe RT-CDP Architect to join our team as a senior consultant or principal. This is a client-facing role that involves close collaboration with both technical and non-technical stakeholders.

    What You'll Do
    * Implement, configure, and enable Adobe Customer Data Platform (CDP)
    * Provide the technical design and data architecture for configuring RT-CDP to meet clients' business goals
    * Understand business problems and capture client requirements by leading effective conversations with business and technical client teams
    * Interpret how to best apply the out-of-the-box product to provide a solution, including finding alternative approaches that best leverage the platform
    * Provide analytics domain expertise, consultation, and troubleshooting
    * Learn new platforms, new capabilities, and new clouds to stay on top of the ever-growing product ecosystem (CJA, AJO, Marketo)

    What You'll Bring
    * Expertise in configuration, implementation, and integration of the RT-CDP product without significant help from others
    * Knowledge of, and experience with, RT-CDP B2C, B2B, and/or B2P
    * Knowledge of how RT-CDP works with other Adobe Experience Platform products
    * Experience implementing and driving success with RT-CDP for enterprise clients in an architecture role
    * Proficiency with manipulating, structuring, and merging data from different data sources, and an understanding of typical data sources within an enterprise environment
    * Knowledge of how graph stitching, profile merge rules, profile collapsing, and householding concepts work in RT-CDP
    * Ability to translate business rules into technical requirements and implement those requirements
    * Proficiency with data transformation, API-based integrations, and JavaScript tagging
    * Experience working with SQL, R, and/or Python preferred
    * Enterprise experience designing multi-solution architecture
    * Strong communication skills and a passion for learning new technologies and platform capabilities
    * Ability to build strong relationships with clients and understand them from a business and strategic perspective
    * Occasional travel as needed by the client

    About Us: Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value.
    At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.

    Compensation and Benefits: Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.

    Slalom is committed to fair and equitable compensation practices. For this role, we are hiring at the following levels and targeted base pay salary ranges: the targeted base salary range for a Senior Consultant for this position is $110,000 to $203,000, and the targeted base salary for a Principal for this position is $122,000 to $225,000. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time. We will accept applicants until December 12th, 2025, or until the position is filled. We are committed to pay transparency and compliance with applicable laws. If you have questions or concerns about the pay range or other compensation information in this posting, please contact us at: ********************.

    EEO and Accommodations: Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team or contact ****************************** if you require accommodations during the interview process.
    $122k-225k yearly Easy Apply 9d ago
  • Data Engineer - Python, Spark

    Genoa Employment Solutions 4.8company rating

    Data engineer job in Beaverton, OR

FlexIT client is looking for an immediate Data Engineer - Python, Spark for a 12-month remote contract. The client is looking for great engineers with talent and persistence who can leverage their existing skills and learn new ones. You should have some of the specific technical skills we're looking for and be expert enough in one or two to help ramp others quickly.

Job Duties: We are building petabyte-class solutions that consume fast-moving streams from eCommerce, retail, and partner channels and power the critical decisions that drive our business. We are building the Cloud Platform for Data and Analytics on AWS that fuels digital transformation. Focus areas include:

- Data Streaming / Enrichment / Business Rules / MDM
- Data Lake / Warehousing
- Data Governance / GDPR / SOX
- Data Strategy / Unified Access / IAM / RBAC

Be a great teammate on an agile/Scrum team that sets and meets aggressive goals. Mentor new and less experienced developers to advance their proficiency. Leverage expert development skills and solid design skills to deliver reliable, scalable, performant solutions with modern tooling, data structures, and algorithms (see the streaming sketch below). Work with Product Owners, Engineering Managers, and Principal Engineers to deliver solutions that enable digital transformation.
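For a sense of the streaming-and-enrichment work described above, here is a minimal PySpark Structured Streaming sketch. The built-in `rate` source stands in for a real eCommerce or retail feed, and the output paths are placeholders, not the client's actual pipeline:

```python
# Minimal Structured Streaming pipeline: consume a stream, enrich it,
# and land it partitioned in a data-lake path. Runs until stopped.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-enrich-demo").getOrCreate()

# "rate" emits (timestamp, value) rows; a real job would read Kafka/Kinesis.
events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

enriched = (
    events
    .withColumn("channel",
                F.when(F.col("value") % 2 == 0, "ecommerce").otherwise("retail"))
    .withColumn("ingest_date", F.to_date("timestamp"))
)

query = (
    enriched.writeStream
    .format("parquet")
    .option("path", "/tmp/lake/events")            # placeholder lake location
    .option("checkpointLocation", "/tmp/lake/_chk")  # required for recovery
    .partitionBy("ingest_date")
    .trigger(processingTime="30 seconds")
    .start()
)
query.awaitTermination()
```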
    $97k-139k yearly est. 60d+ ago
  • Sr Data Engineer

    Moda Health 4.5company rating

    Data engineer job in Portland, OR

Job Description

Let's do great things, together!

About Moda

Founded in Oregon in 1955, Moda is proud to be a company of real people committed to quality. Today, like then, we're focused on building a better future for healthcare. That starts by offering outstanding coverage to our members, compassionate support to our community, and comprehensive benefits to our employees. It keeps going by connecting with neighbors to create healthy spaces and places, together. Moda values diversity and inclusion in our workplace. We aim to demonstrate our commitment to diversity through all our business practices and invite applications from candidates that share our commitment to this diversity. Our diverse experiences and perspectives help us become a stronger organization. Let's be better together.

Position Summary

The Senior Data Engineer on the Data Science team builds and automates data pipelines, implements quality controls, and contributes to maintaining existing processes associated with the current data warehouse solution. This is a full-time, work-from-home position.

Pay Range

$92,940.40 - $140,000.00 annually (depending on experience). *This role may be classified as hourly (non-exempt) depending on the applicant's location. Actual pay is based on qualifications. Applicants who do not exceed the minimum qualifications will only be eligible for the low end of the pay range. Please fill out an application on our company page, linked below, to be considered for this position. ************************** GK=27763372&refresh=true

Benefits:

- Medical, Dental, Vision, Pharmacy, Life, & Disability
- 401K matching
- FSA
- Employee Assistance Program
- PTO and company-paid holidays

Required Skills, Experience & Education:

- 5-7 years of experience working with large data sets using relational databases and/or data analysis tools (SQL Server, Oracle, Snowflake, MS Fabric, SAS Enterprise Guide); experience with tools like Power BI or other business intelligence development a plus.
- 5-7 years of regular SQL use, with advanced knowledge of query optimization and performance management across on-prem and cloud solutions (SQL Server, Snowflake, MS Fabric) preferred.
- 5-7 years of experience with data transformation tools such as SQL stored procedures, dbt, Coalesce, or similar.
- Experience with both on-prem SQL servers and cloud data warehouses (SQL Server, Oracle, Snowflake, MS Fabric, BigQuery); experience with on-prem database server to cloud data warehouse migrations a plus.
- Experience with data pipeline orchestration tools such as SSIS, SQL Agent, Tidal, Airflow, and dbt.
- Experience working with healthcare data strongly preferred.
- Advanced knowledge of data management, relational data structures, and data modeling.
- Demonstrated ability to identify opportunities, lead development, and implement data-driven solutions that solve real business problems.
- Working knowledge of Agile methodology.
- Experience with programming languages such as Python, R, or Java is a plus.
- 3+ years of experience with DevOps tools such as Azure DevOps is nice to have.

Primary Functions:

- Design, develop, and test ETL code to ingest data from various sources into the centralized data warehouse.
- Automate, maintain, and scale ETL processes using a diverse set of tools, with flexibility for future enhancements.
- Support data platform modernization and execute migrations to new cloud-based infrastructure.
- Develop processes and systems to ensure data quality and accuracy (see the sketch below).
- Build enterprise-grade modern data pipelines to encourage organizational adoption of next-generation data architecture strategies.
- Work closely with other teams to understand their requirements and support their data infrastructure needs.
- Support the Data Science team in implementing data solutions for complex machine learning problems.
- Participate collaboratively in enterprise data warehousing, business intelligence, and other data management projects.
- Perform other duties as assigned.

Working Conditions & Contact with Others

Office environment with extensive close PC and keyboard use, constant sitting, and frequent phone communication. Must be able to navigate multiple computer screens. A reliable, high-speed, hard-wired internet connection is required to support remote or hybrid work. Must be comfortable being on camera for virtual training and meetings. May work in excess of the standard workweek, including evenings and occasional weekends, to meet business needs. Works internally with Data Science, IT, and various data consumers and stakeholders, and externally with third-party providers of data and business solutions, data consumers, and various stakeholders.

Together, we can be more. We can be better. Moda Health seeks to allow equal employment opportunities for all qualified persons without regard to race, religion, color, age, sex, sexual orientation, national origin, marital status, disability, veteran status, or any other status protected by law. This is applicable to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training. For more information regarding accommodations, please direct your questions to Kristy Nehler & Danielle Baker via our ***************************** email.
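As one illustration of the data quality function noted above, a pipeline can run a lightweight quality gate over each batch before loading it downstream. A minimal sketch with invented column names, not Moda's actual process:

```python
# Quality gate: inspect an extracted batch and report failed checks
# before it is loaded into the warehouse. Columns are hypothetical.
import pandas as pd

def quality_gate(df: pd.DataFrame) -> list:
    """Return failed-check messages; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["member_id"].isna().any():
        failures.append("null member_id values present")
    if df["member_id"].duplicated().any():
        failures.append("duplicate member_id values present")
    if (df["claim_amount"] < 0).any():
        failures.append("negative claim_amount values present")
    return failures

# Demo batch: contains a duplicate key and a negative amount on purpose.
batch = pd.DataFrame({"member_id": [1, 2, 2],
                      "claim_amount": [120.0, -5.0, 33.5]})

for problem in quality_gate(batch):
    print("FAILED:", problem)  # in production, fail the run and alert
```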
    $92.9k-140k yearly Easy Apply 9d ago
  • Data Architect

    Advance Local 3.6company rating

    Data engineer job in Portland, OR

**Advance Local** is looking for a **Data Architect** to lead the design and implementation of enterprise-level data solutions within our modern cloud data platform. This role combines deep technical expertise in analytics engineering with leadership responsibilities to ensure the delivery of well-documented, tested, and high-quality data assets that enable AI, data products, advanced analytics, and self-service reporting. You'll guide strategic data initiatives, mentor a team of analytics engineers, and collaborate with data engineering and business stakeholders to deliver impactful, scalable solutions. The base salary range is $150,000 - $165,000 per year.

**What you'll be doing:**

+ Architect and oversee scalable data models, pipelines, and frameworks in Snowflake using dbt, ensuring that they meet quality standards for AI agents, advanced analytics, and self-service reporting (see the model-and-test sketch after this posting).
+ Lead the design and governance of analytics-ready data models, ensuring they are well-modeled, performant, and accessible to downstream consumers.
+ Drive rapid prototyping of new data products and features, providing technical direction and hands-on guidance when needed.
+ Establish and enforce data quality, testing, and documentation standards across all data assets, ensuring reliability and trustworthiness.
+ Develop advanced solutions for audience data modeling and identity resolution, supporting personalization and segmentation strategies.
+ Partner with Audience Strategy and Insights teams to translate requirements into technical solutions and automation.
+ Collaborate with the Lead Data Engineer on data integration patterns and ensure seamless handoffs between raw data ingestion and analytics-ready models.
+ Establish data architecture standards and development practices (version control, CI/CD, testing frameworks) that enable team scalability.
+ Enable data accessibility and integration solutions that support both technical and non-technical users across the organization.
+ Provide technical leadership to a Data Manager and their team of analytics engineers, fostering a culture of best practices, code review, and continuous improvement.
**Our ideal candidate will have the following:**

+ Bachelor's or master's degree in computer science, data engineering, information systems, or a related field
+ Minimum ten years' experience in data engineering, analytics engineering, architecture, or related roles, with proven experience leading data teams and managing complex data ecosystems
+ Expert-level proficiency in dbt and Snowflake, with demonstrated ability to build production-grade data models and pipelines
+ Strong knowledge of cloud platforms (AWS, Azure, GCP) and data warehousing best practices
+ Proficiency in big data technologies (Spark, Hadoop) and streaming frameworks
+ Familiarity with data governance, security, and compliance standards
+ Experience with audience segmentation, marketing analytics, or customer data platforms
+ Knowledge of machine learning pipelines, advanced analytics, and AI applications
+ Strategic thinking and the ability to align data initiatives with business objectives
+ Strong communication and stakeholder management skills
+ Proven ability to lead cross-functional teams and drive organizational change
+ Experience building data solutions that support self-service analytics and data demonstrations

**Additional Information**

Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare, including medical, dental, and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave, and an employee assistance program to support your work/life balance, plus optional legal assistance, life insurance options, and flexible holidays to honor cultural diversity.

Advance Local Media is one of the largest media groups in the United States, operating the leading news and information companies in more than 20 cities and reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration, and Forward-looking. For more information about Advance Local, please visit ******************** .

Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.

_Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._

_If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._

Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
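The dbt-on-Snowflake discipline this role centers on pairs each analytics-ready model with schema tests. Here is a minimal sketch of that model-and-test pattern using the standard library's sqlite3 so it runs anywhere; in practice these would be dbt models and tests against Snowflake, and all table and column names are invented:

```python
# Build an analytics-ready model from raw events, then run dbt-style
# tests (unique, not-null) on its primary key before publishing it.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_pageviews (user_id TEXT, url TEXT, ts TEXT);
    INSERT INTO raw_pageviews VALUES
        ('u1', '/news',   '2025-01-01'),
        ('u1', '/sports', '2025-01-02'),
        ('u2', '/news',   '2025-01-01');

    -- analytics-ready model: one row per user for self-service reporting
    CREATE TABLE user_activity AS
    SELECT user_id, COUNT(*) AS pageviews, MAX(ts) AS last_seen
    FROM raw_pageviews GROUP BY user_id;
""")

dupes = con.execute(
    "SELECT user_id FROM user_activity GROUP BY user_id HAVING COUNT(*) > 1"
).fetchall()
nulls = con.execute(
    "SELECT COUNT(*) FROM user_activity WHERE user_id IS NULL"
).fetchone()[0]
assert not dupes and nulls == 0, "model failed its schema tests"

print(con.execute("SELECT * FROM user_activity ORDER BY user_id").fetchall())
```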
    $150k-165k yearly 16d ago
  • Salesforce Data 360 Architect

    Slalom 4.6company rating

    Data engineer job in Portland, OR

Who You'll Work With

In our Salesforce business, we help our clients bring the most impactful customer experiences to life, and we do that in a way that makes our clients the hero of their transformation story. We are passionate about and dedicated to building a diverse and inclusive team, recognizing that diverse team members who are celebrated for bringing their authentic selves to their work build solutions that reach more diverse populations in innovative and impactful ways. Our team is comprised of customer strategy experts, Salesforce-certified experts across all Salesforce capabilities, industry experts, organizational and cultural change consultants, and project delivery leaders. As the third-largest Salesforce partner globally and in North America, we are committed to growing and developing our Salesforce talent, offering continued growth opportunities, and exposing our people to meaningful work that aligns to their personal and professional goals.

We're looking for individuals who have experience implementing Salesforce Data Cloud or similar platforms and are passionate about customer data. The ideal candidate has a desire for continuous professional growth and can deliver complex, end-to-end Data Cloud implementations from strategy and design through data ingestion, segment creation, and activation, all while working alongside both our clients and other delivery disciplines. Our Global Salesforce team is looking to add a passionate Principal or Senior Principal to take on the role of Data Cloud Architect within our Salesforce practice.

What You'll Do:

* Lead business requirements gathering, architecture design, data ingestion and modeling, identity resolution setup, calculated insight configuration, segment creation and activation (illustrated in the sketch below), end-user training, and support procedures
* Lead technical conversations with both business and technical client teams; translate those outcomes into well-architected solutions that best utilize Salesforce Data Cloud and the wider Salesforce ecosystem
* Direct technical teams, both internal and client-side
* Provide subject matter expertise as warranted by customer needs and business demands
* Build lasting relationships with key client stakeholders and sponsors
* Collaborate with digital specialists across disciplines to innovate and build premier solutions
* Participate in compiling industry research, thought leadership, and proposal materials for business development activities
* Scope client work
* Experience with hyperscale data platforms (e.g., Snowflake), robust database modeling, and data governance is a plus

What You'll Bring:

* Participation in at least one Salesforce Data Cloud implementation
* Familiarity with Salesforce's technical architecture: APIs, standard and custom objects, and Apex; proficiency with ANSI SQL and the functions supported in Salesforce Data Cloud
* Strong proficiency in presenting complex business and technical concepts using visualization aids
* Ability to conceptualize and craft sophisticated wireframes, workflows, and diagrams
* Strong understanding of data management concepts, including data quality, data distribution, data modeling, and data governance
* Detailed understanding of the fundamentals of digital marketing and the complementary Salesforce products that organizations may use to run their business, with experience defining strategy, developing requirements, and implementing practical business solutions
* Experience delivering projects using Agile-based methodologies
* Salesforce Data Cloud certification preferred
* Additional Salesforce certifications, like Administrator, are a plus
* Strong interpersonal skills
* Bachelor's degree in a related field preferred, but not required
* Open to travel (up to 50%)

About Us

Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.

Compensation and Benefits

Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.

Slalom is committed to fair and equitable compensation practices. For this position, the base salary range for the Senior Consultant role is $110,000 - $203,000; for Principal, $122,000 - $225,000; and for Senior Principal, $140,000 - $258,000. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.

EEO and Accommodations

Slalom is an equal opportunity employer and is committed to attracting, developing, and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applicants with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team or contact ****************************** if you require accommodations during the interview process. We will accept applications until December 31, 2025. #LI-EC
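Conceptually, segment creation and activation reduce to evaluating a rule predicate over unified profiles and handing the members off to a channel. A toy Python sketch with invented profile fields; this is illustration only, not the Salesforce Data Cloud API:

```python
# Toy segment evaluation: select "high-value recent buyers" from
# unified profiles and emit the member ids for channel activation.
from datetime import date, timedelta

profiles = [
    {"id": "p1", "last_purchase": date(2025, 11, 20), "lifetime_value": 480.0},
    {"id": "p2", "last_purchase": date(2025, 3, 2),   "lifetime_value": 1200.0},
    {"id": "p3", "last_purchase": date(2025, 11, 28), "lifetime_value": 75.0},
]

def in_segment(p, as_of=date(2025, 12, 1)):
    """Purchased within the last 30 days AND lifetime value >= 100."""
    recent = as_of - p["last_purchase"] <= timedelta(days=30)
    return recent and p["lifetime_value"] >= 100

members = [p["id"] for p in profiles if in_segment(p)]
print("activate to channel:", members)  # e.g. hand off to email/ads platform
```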
    $122k-225k yearly 37d ago
  • Data Engineer

    Genoa Employment Solutions 4.8company rating

    Data engineer job in Beaverton, OR

FlexIT client is looking for a Data Engineer for a 12-month contract in Beaverton, Oregon. The client is looking for local candidates to work on site. Top skills: Python, SQL, AWS, Spark.
    $97k-139k yearly est. 60d+ ago

Learn more about data engineer jobs

How much does a data engineer earn in Vancouver, WA?

The average data engineer in Vancouver, WA earns between $79,000 and $149,000 annually. This compares to the national average data engineer salary range of $80,000 to $149,000.

Average data engineer salary in Vancouver, WA

$108,000

What are the biggest employers of Data Engineers in Vancouver, WA?

The biggest employers of Data Engineers in Vancouver, WA are:
  1. WebMD
  2. Moda Health
  3. Autodesk
  4. Advance Local
  5. Certified Languages International
  6. Ernst & Young
  7. Genoa
  8. Anywhere Real Estate
  9. Career-Mover
  10. Insight Global