
Data engineer jobs in Gresham, OR - 653 jobs

  • Data Scientist 4

    Lam Research 4.6 company rating

    Data engineer job in Tualatin, OR

    Develop tools, metric measurement, and assessment methods for performance management and predictive modeling. Develop dashboards for product management and executives to drive faster and better decision making. Create accountability models for DPG-wide quality, install and warranty (I&W), inventory, product management KPIs, and business operations. Improve DPG-wide quality, I&W, and inventory performance consistently, from awareness and prioritization through action, by making common data available. Collaborate with quality, I&W, and inventory program managers to analyze trends and patterns in data that drive required improvement in key performance indicators (KPIs). Foster the growth and utility of Cost of Quality within the company through correlation of I&W data and ECOGS, identification of causal relationships for quality events, and discovery of hidden costs throughout the network. Improve data utilization via AI and automation, leading to real-time resolution and faster systemic action. Lead and/or advise on multiple projects simultaneously, demonstrating organizational, prioritization, and time management proficiency.

    Bachelor's degree with 8+ years of experience; or master's degree with 5+ years of experience; or equivalent experience. Basic understanding of AI and machine learning, and the ability to work with Data Scientists to use AI to solve complex, challenging problems leading to efficiency and effectiveness improvements. Ability to define problem statements and objectives, develop an analysis approach, and execute the analysis. Basic knowledge of Lean Six Sigma processes, statistics, or quality systems. Ability to work on multiple problems simultaneously. Ability to present conclusions and recommendations to executive audiences. Ownership mindset to drive solutions and positive outcomes. Excellent communication and presentation skills, with the ability to present to audiences at multiple levels in the company. Willingness to adapt best practices via benchmarking. Experience in semiconductor fabrication, semiconductor equipment operations, or related industries is a plus. Demonstrated ability to change processes and methodologies for capturing and interpreting data. Demonstrated success in using structured problem-solving methodologies and quality tools to solve complex problems. Knowledge of programming environments such as Python, R, Matlab, SQL, or equivalent. Experience with structured problem-solving methodologies such as PDCA, DMAIC, and 8D, and with quality tools.

    Our commitment: We believe it is important for every person to feel valued, included, and empowered to achieve their full potential. By bringing unique individuals and viewpoints together, we achieve extraordinary results. Lam is committed to and reaffirms support of equal opportunity in employment and non-discrimination in employment policies, practices and procedures on the basis of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, age, sexual orientation, or military and veteran status or any other category protected by applicable federal, state, or local laws. It is the Company's intention to comply with all applicable laws and regulations. Company policy prohibits unlawful discrimination against applicants or employees.
Lam offers a variety of work location models based on the needs of each role. Our hybrid roles combine the benefits of on-site collaboration with colleagues and the flexibility to work remotely, and fall into two categories: On-site Flex and Virtual Flex. In an 'On-site Flex' role, you'll work 3+ days per week on-site at a Lam or customer/supplier location, with the opportunity to work remotely for the balance of the week. In a 'Virtual Flex' role, you'll work 1-2 days per week on-site at a Lam or customer/supplier location and remotely the rest of the time.
    $71k-91k yearly est. 28d ago
  • Data Scientist, Analytics (Technical Leadership)

    Meta 4.8 company rating

    Data engineer job in Salem, OR

    We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.

    **Responsibilities:**
    1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
    2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies
    3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
    4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
    5. Build long-term vision and strategy for programs and products
    6. Collaborate with executives to define and develop data platforms and instrumentation
    7. Effectively communicate insights and recommendations to stakeholders
    8. Define success metrics, forecast changes, and set team goals
    9. Support developing roadmaps and coordinate analytics efforts across teams

    **Minimum Qualifications:**
    10. Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field (e.g. Mathematics, Statistics, Operations Research), or equivalent practical experience
    11. 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
    12. 8+ years of work experience leading analytics work in an IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization and strategy development
    13. Experience working effectively with and leading complex analytics projects across multiple stakeholders and cross-functional teams, including Engineering, PM/TPM, Analytics and Finance
    14. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
    15. Experience communicating complex technical topics in a clear, precise, and actionable manner

    **Preferred Qualifications:**
    16. 10+ years of experience communicating the results of analyses to leadership teams to influence strategy
    17. Masters or Ph.D. degree in a quantitative field
    18. Bachelor's degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
    19. 10+ years of experience doing complex quantitative analysis in product analytics

    **Public Compensation:** $210,000/year to $281,000/year + bonus + equity + benefits

    **Industry:** Internet

    **Equal Opportunity:** Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer.
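    The experimentation and causal-inference bullet is the most concrete technical requirement above. As a rough illustration of that kind of work (not Meta's actual methodology), here is a minimal difference-in-means readout for a two-group experiment; the data and metric are synthetic placeholders:

    ```python
    # Minimal A/B-test readout: difference in means with a Welch t-test.
    # Synthetic data for illustration; not any company's real pipeline.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    control = rng.normal(loc=10.0, scale=3.0, size=5_000)    # e.g. minutes engaged
    treatment = rng.normal(loc=10.3, scale=3.0, size=5_000)  # treated group

    lift = treatment.mean() - control.mean()
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    # Normal-approximation 95% CI for the difference in means.
    se = np.sqrt(treatment.var(ddof=1) / len(treatment)
                 + control.var(ddof=1) / len(control))
    ci = (lift - 1.96 * se, lift + 1.96 * se)

    print(f"lift={lift:.3f}, p={p_value:.4f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
    ```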
We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment. Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
    $210k-281k yearly 60d+ ago
  • Data Scientist, Privacy

    Datavant

    Data engineer job in Salem, OR

    Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care. By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.

    As part of the Privacy Science team within Privacy Hub, you will play a crucial role in ensuring that the privacy of patients is safeguarded in the modern world of data sharing. As well as working on real data, you will be involved in exciting research to keep us industry leaders in this area, and in stimulating discussions on re-identification risk. You will be supported in developing and consolidating data analysis and coding skills to become proficient in the analysis of large health-related datasets.

    **You Will:**
    + Critically analyze large health datasets using standard and bespoke software libraries
    + Discuss your findings and progress with internal and external stakeholders
    + Produce high quality reports which summarise your findings
    + Contribute to research activities as we explore novel and established sources of re-identification risk

    **What You Will Bring to the Table:**
    + Excellent communication skills; meticulous attention to detail in the production of comprehensive, well-presented reports
    + A good understanding of statistical probability distributions, bias, error and power, as well as sampling and resampling methods
    + A drive to understand real-world data in context rather than consider it in abstraction
    + Familiarity or proficiency with programmable data analysis software (R or Python), and the desire to develop expertise in its language
    + Application of scientific methods to practical problems through experimental design, exploratory data analysis and hypothesis testing to reach robust conclusions
    + Strong time management skills and demonstrable experience of prioritising work to meet tight deadlines
    + Initiative and ability to independently explore and research novel topics and concepts as they arise, to expand Privacy Hub's knowledge base
    + An appreciation of the need for effective methods in data privacy and security, and an awareness of the relevant legislation
    + Familiarity with Amazon Web Services cloud-based storage and computing facilities

    **Bonus Points If You Have:**
    + Experience creating documents using LaTeX
    + Detailed knowledge of one or more types of health information, e.g., genomics, disease, health images
    + Experience working with or supporting public sector organizations, such as federal agencies (e.g., CMS, NIH, VA, CDC), state health departments, or public health research partners; familiarity with government data environments, procurement processes, or privacy frameworks in regulated settings is highly valued

    We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive.
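    For a sense of what re-identification-risk analysis can involve: one standard building block is measuring equivalence-class sizes over quasi-identifiers (k-anonymity). The sketch below is illustrative only, with hypothetical column names, and is not Datavant's actual methodology:

    ```python
    # Illustrative k-anonymity check: how many records sit in small
    # equivalence classes over a set of quasi-identifiers?
    # Column names are hypothetical, not a real schema.
    import pandas as pd

    df = pd.DataFrame({
        "zip3":       ["970", "970", "971", "971", "972"],
        "birth_year": [1980, 1980, 1975, 1975, 1990],
        "sex":        ["F", "F", "M", "M", "F"],
    })

    quasi_identifiers = ["zip3", "birth_year", "sex"]

    # Size of each equivalence class (records sharing all quasi-identifiers).
    class_sizes = df.groupby(quasi_identifiers).size()
    k = 5  # a common de-identification threshold

    # Fraction of records whose class has fewer than k members;
    # these are the records most exposed to re-identification.
    at_risk = df.merge(class_sizes.rename("class_size").reset_index(),
                       on=quasi_identifiers)
    risk_fraction = (at_risk["class_size"] < k).mean()
    print(f"{risk_fraction:.1%} of records are in classes smaller than k={k}")
    ```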
We are proud to be an Equal Employment Opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. At Datavant our total rewards strategy powers a high-growth, high-performance health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services. The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on level, responsibilities, skills, and experience for a specific job. The estimated total cash compensation range for this role is $104,000-$130,000 USD. To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion. This job is not eligible for employment sponsorship. Datavant is committed to a work environment free from job discrimination. To learn more about our commitment, please review our EEO Commitment Statement and the EEOC's Know Your Rights resources for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay. At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones; in fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way. Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please submit a request under the 'Interview Accommodation Request' category; you will need your requisition ID when submitting your request. Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy.
    $104k-130k yearly 13d ago
  • Data Scientist

    Eyecare Center of Salem

    Data engineer job in Portland, OR

    Job Description: We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math, and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research. Your goal will be to help our company analyze trends to make better decisions.

    Responsibilities:
    Identify valuable data sources and automate collection processes
    Preprocess structured and unstructured data
    Analyze large amounts of information to discover trends and patterns
    Build predictive models and machine-learning algorithms
    Combine models through ensemble modeling
    Present information using data visualization techniques
    Propose solutions and strategies to business challenges
    Collaborate with engineering and product development teams

    Requirements and skills:
    Proven experience as a Data Scientist or Data Analyst
    Experience in data mining
    Understanding of machine learning and operations research
    Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset
    Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
    Analytical mind and business acumen
    Strong math skills (e.g. statistics, algebra)
    Problem-solving aptitude
    Excellent communication and presentation skills
    BSc/BA in Computer Science, Engineering, or relevant field; a graduate degree in Data Science or another quantitative field is preferred
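    "Combine models through ensemble modeling" is the one concrete modeling technique named in this listing. A minimal sketch of that idea with scikit-learn, on synthetic data (illustrative only, not this employer's stack):

    ```python
    # Soft-voting ensemble: combine three different classifiers so their
    # averaged probabilities smooth out individual models' errors.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1_000)),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("nb", GaussianNB()),
        ],
        voting="soft",  # average predicted probabilities, not hard labels
    )
    ensemble.fit(X_train, y_train)
    print(f"held-out accuracy: {ensemble.score(X_test, y_test):.3f}")
    ```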
    $73k-104k yearly est. 5d ago
  • AWS Data Migration Consultant

    Slalom 4.6 company rating

    Data engineer job in Portland, OR

    Candidates can live within commutable distance to any Slalom office in the US. We have a hybrid and flexible environment.

    Who You'll Work With
    As a modern technology company, we've never met a technical challenge we didn't like. We enable our clients to learn from their data, create incredible digital experiences, and make the most of new technologies. We blend design, engineering, and analytics expertise to build the future. We surround our technologists with interesting challenges, innovative minds, and emerging technologies. We are seeking an experienced Cloud Data Migration Architect with deep expertise in SQL Server, Oracle, DB2, or a combination of these platforms, to lead the design, migration, and optimization of scalable database solutions in the AWS cloud. This role will focus on modernizing on-premises database systems by architecting high-performance, secure, and reliable AWS-hosted solutions. As a key technical leader, you will work closely with data engineers, cloud architects, and business stakeholders to define data strategies, lead complex database migrations, build out ETL pipelines, and optimize performance across legacy and cloud-native environments.

    What You'll Do
    * Design and optimize database solutions on AWS, including Amazon RDS, EC2-hosted instances, and advanced configurations like SQL Server Always On or Oracle RAC (Real Application Clusters).
    * Lead and execute cloud database migrations using AWS Database Migration Service (DMS), Schema Conversion Tool (SCT), and custom automation tools (see the sketch after this listing).
    * Architect high-performance database schemas, indexing strategies, partitioning models, and query optimization techniques.
    * Optimize complex SQL queries, stored procedures, functions, and views to ensure performance and scalability in the cloud.
    * Implement high-availability and disaster recovery (HA/DR) strategies including Always On, failover clusters, log shipping, and replication, tailored to each RDBMS.
    * Ensure security best practices are followed, including IAM-based access control, encryption, and compliance with industry standards.
    * Collaborate with DevOps teams to implement Infrastructure-as-Code (IaC) using tools like Terraform, CloudFormation, or AWS CDK.
    * Monitor performance using tools such as AWS CloudWatch, Performance Insights, Query Store, Dynamic Management Views (DMVs), or Oracle-native tools.
    * Work with software engineers and data teams to integrate cloud databases into enterprise applications and analytics platforms.

    What You'll Bring
    * 5+ years of experience in database architecture, design, and administration with at least one of the following: SQL Server, Oracle, or DB2.
    * Expertise in one or more of the following RDBMS platforms: Microsoft SQL Server, Oracle, DB2.
    * Hands-on experience with AWS database services (RDS, EC2-hosted databases).
    * Strong understanding of HA/DR solutions and cloud database design patterns.
    * Experience with ETL development and data integration, using tools such as SSIS, AWS Glue, or custom solutions.
    * Familiarity with AWS networking components (VPCs, security groups) and hybrid cloud connectivity.
    * Strong troubleshooting and analytical skills to resolve complex database and performance issues.
    * Ability to work independently and lead database modernization initiatives in collaboration with engineering and client stakeholders.

    Nice to Have
    * AWS certifications such as AWS Certified Database - Specialty or AWS Certified Solutions Architect - Professional.
    * Experience with NoSQL databases or hybrid data architectures.
    * Knowledge of analytics and big data tools (e.g., Snowflake, Redshift, Athena, Power BI, Tableau).
    * Familiarity with containerization (Docker, Kubernetes) and serverless technologies (AWS Lambda, Fargate).
    * Experience with DB2 on-premise or cloud-hosted environments.

    About Us
    Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.

    Compensation and Benefits
    Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance. Slalom is committed to fair and equitable compensation practices. For this position, in the following locations: Boston, Houston, Los Angeles, Orange County, Seattle, San Diego, Washington DC, New York, New Jersey, the target base salary pay range is $105,000-$147,000 for Consultant level, $120,000-$169,000 for Senior Consultant level, and $133,000-$187,000 for Principal level. In all other markets, the target base salary pay range is $96,000-$135,000 for Consultant level, $110,000-$155,000 for Senior Consultant level, and $122,000-$172,000 for Principal level. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.

    EEO and Accommodations
    Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process. We will accept applications until 1/31/2026 or until the positions are filled.
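    As a rough illustration of the DMS-driven migration work described above, here is a minimal boto3 sketch that creates and starts a full-load-plus-CDC replication task. All ARNs and the schema name are placeholders, and a real migration would layer SCT conversion, validation, and cutover planning on top:

    ```python
    # Sketch: create and start an AWS DMS replication task with boto3.
    # ARNs and schema names below are placeholders, not real resources.
    import json
    import boto3

    dms = boto3.client("dms", region_name="us-west-2")

    # Replicate every table in one schema; rules use DMS's JSON mapping syntax.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales-schema",
            "object-locator": {"schema-name": "SALES", "table-name": "%"},
            "rule-action": "include",
        }]
    }

    task = dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-rds-sales",
        SourceEndpointArn="arn:aws:dms:us-west-2:123456789012:endpoint:SRC",
        TargetEndpointArn="arn:aws:dms:us-west-2:123456789012:endpoint:TGT",
        ReplicationInstanceArn="arn:aws:dms:us-west-2:123456789012:rep:INST",
        MigrationType="full-load-and-cdc",  # bulk copy, then ongoing change capture
        TableMappings=json.dumps(table_mappings),
    )

    dms.start_replication_task(
        ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
        StartReplicationTaskType="start-replication",
    )
    ```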
    $133k-187k yearly 2d ago
  • Sr. Data Engineer

    Concora Credit

    Data engineer job in Beaverton, OR

    As a Sr. Data Engineer, you'll help drive Concora Credit's Mission to enable customers to Do More with Credit - every single day.

    The impact you'll have at Concora Credit: We are seeking a Sr. Data Engineer with deep expertise in Azure and Databricks to lead the design, development, and optimization of scalable data pipelines and platforms. You'll be responsible for building robust data solutions that power analytics, reporting, and machine learning across the organization using Azure cloud services and Databricks. We hire people, not positions. That's because, at Concora Credit, we put people first, including our customers, partners, and Team Members. Concora Credit is guided by a single purpose: to help non-prime customers do more with credit. Today, we have helped millions of customers access credit. Our industry leadership, resilience, and willingness to adapt ensure we can help our partners responsibly say yes to millions more. As a company grounded in entrepreneurship, we're looking to expand our team and are looking for people who foster innovation, strive to make an impact, and want to Do More! We're an established company with over 20 years of experience, but now we're taking things to the next level. We're seeking someone who wants to impact the business and play a pivotal role in leading the charge for change.

    Responsibilities
    As our Sr. Data Engineer, you will:
    Design and develop scalable, efficient data pipelines using Azure Databricks
    Build and manage data ingestion, transformation, and storage solutions leveraging Azure Data Factory, Azure Data Lake, and Delta Lake
    Implement CI/CD for data workflows using tools like Azure DevOps, Git, and Terraform
    Optimize performance and cost efficiency across large-scale distributed data systems
    Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver reliable, reusable datasets
    Provide guidance to and mentor junior engineers, and actively contribute to data platform best practices
    Monitor, troubleshoot, and optimize existing pipelines and infrastructure to ensure reliability and scalability
    These duties must be performed with or without reasonable accommodation. We know experience comes in many forms and that many skills are transferable. If your experience is close to what we're looking for, consider applying. Diversity has made us the entrepreneurial and innovative company that we are today.

    Qualifications
    Requirements:
    5+ years of experience in data engineering, with a strong focus on Azure cloud technologies
    Experience with Azure Databricks, Azure Data Lake, and Data Factory, including PySpark, SQL, Python, and Delta Lake
    Strong proficiency in Databricks and Apache Spark
    Solid understanding of data warehousing, ETL/ELT, and data modeling best practices
    Experience with version control, CI/CD pipelines, and infrastructure as code
    Knowledge of Spark performance tuning, partitioning, and job orchestration
    Excellent problem-solving skills and attention to detail
    Strong communication and collaboration abilities across technical and non-technical teams
    Ability to work independently and lead in a fast-paced, agile environment
    Passion for delivering clean, high-quality, and maintainable code

    Preferred Qualifications:
    Experience with Unity Catalog, Databricks Workflows, and Delta Live Tables
    Familiarity with DevOps practices or Terraform for Azure resource provisioning
    Understanding of data security, RBAC, and compliance in cloud environments
    Experience integrating Databricks with Power BI or other analytics platforms
    Exposure to real-time data processing using Kafka, Event Hubs, or Structured Streaming

    What's In It For You:
    Medical, Dental and Vision insurance for you and your family
    Relax and recharge with Paid Time Off (PTO)
    6 company-observed paid holidays, plus 3 paid floating holidays
    401k (after 90 days) plus employer match up to 4%
    Pet Insurance for your furry family members
    Wellness perks including onsite fitness equipment at both locations, EAP, and access to the Headspace App
    We invest in your future through Tuition Reimbursement
    Save on taxes with Flexible Spending Accounts
    Peace of mind with Life and AD&D Insurance
    Protect yourself with company-paid Long-Term Disability and voluntary Short-Term Disability

    Concora Credit provides equal employment opportunities to all Team Members and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Employment-based visa sponsorship is not available for this role. Concora Credit is an equal opportunity employer (EEO). Please see the Concora Credit Privacy Policy for more information on how Concora Credit processes your personal information during the recruitment process and, if applicable, based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact caprivacynotice@concoracredit.com.
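    To make the Databricks/Delta Lake pipeline work above concrete, here is a minimal PySpark sketch of an ingest-and-merge step into a Delta table. Paths, table names, and columns are invented for illustration; a production pipeline would add schema enforcement, monitoring, and orchestration:

    ```python
    # Sketch: incremental upsert of raw files into a Delta Lake table.
    # Paths, table names, and columns are hypothetical.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("accounts-ingest").getOrCreate()

    # Read today's raw drop and normalize a couple of fields.
    updates = (
        spark.read.json("abfss://raw@lake.dfs.core.windows.net/accounts/2024-06-01/")
        .withColumn("ingested_at", F.current_timestamp())
        .dropDuplicates(["account_id"])
    )

    target = DeltaTable.forName(spark, "silver.accounts")

    # MERGE keeps the table idempotent if the job is re-run on the same drop.
    (
        target.alias("t")
        .merge(updates.alias("u"), "t.account_id = u.account_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
    ```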
    $84k-118k yearly est. 17d ago
  • Sr Data Engineer (MFT - IBM Sterling)

    The Hertz Corporation 4.3 company rating

    Data engineer job in Salem, OR

    **A Day in the Life:**
    The IBM Sterling Senior MFT Engineer will have expertise in Sterling B2B Integrator, Sterling File Gateway, Sterling Secure Proxy, and Control Center. This position requires expert-level knowledge of these technologies. You'll provide third-level support for core hardware, software, data, and infrastructure components within the B2B integration team. The software focus areas will include software infrastructure components, continuous improvement, and automation related to middleware and enterprise data movement within a mission-critical, multi-site enterprise environment. The ideal candidate will have a passion for technology and possess the ability to create change and be the facilitator for this transformation. They will have experience tailoring software design, developing, and implementing solutions in support of data quality and automation. They will need to understand the requirements of a project, test the performance of the infrastructure, and verify that the requirements have been successfully met. We expect the starting salary to be around $135k but will be commensurate with experience.

    **What You'll Do:**
    TECHNICAL LEADERSHIP
    + Communicate with internal and external business users on Sterling Integrator mappings
    + Make changes to existing partner integrations to meet internal and external requirements
    + Design, develop and implement solutions based on standards and processes that establish consistency across the enterprise data, reduce risks, and promote efficiencies in support of the organizational goals and objectives
    + Diagnose and troubleshoot complex issues, restore services and perform root cause analysis
    + Facilitate the review and vetting of these designs with the architecture governance bodies, as required
    + Be aware of all aspects of security related to the Sterling environment and integrations
    INNOVATIVE INFRASTRUCTURE & PROBLEM SOLVING
    + Strive for engineering excellence by simplifying, optimizing, and automating processes and workflows
    TEAMWORK & COMMUNICATION
    + Superior and demonstrated team building and development skills to harness powerful teams
    + Ability to communicate effectively with different levels of seniority within the organization
    + Provide timely updates so that progress against each individual incident can be tracked as required
    + Write and review high quality technical documentation
    CONTROL & AUDIT
    + Ensure their workstation and all processes and procedures follow organization standards
    CONTINUOUS IMPROVEMENT
    + Encourage and maintain a 'best practice sharing' culture, always striving to find ways to improve service and change mindset

    **What We're Looking For:**
    + Bachelor's degree in Engineering, Statistics, Computer Science or other quantitative fields, required
    + 5+ years of IT experience
    + 4+ years' experience working with Sterling B2B Integrator, Sterling File Gateway, PCM (preferred)
    + 3+ years' experience with scripting to enable automation of standard activities (examples: Ansible, Python, Bash, Java)
    + Strong interpersonal and communication skills with Agile/Scrum experience
    + Strong problem solving and critical thinking skills with a proven record for identifying and diagnosing problems, and solving complex problems with simple, logical solutions, with the ability to develop custom setups
    + Outstanding verbal, written, presentation, facilitation, and interaction skills, including the ability to effectively communicate technical and non-technical issues and concepts to multiple organization levels
    + Travel, transportation, or hospitality experience preferred
    + Experience designing application data models for mobile or web applications preferred
    + Excellent written and verbal communication skills
    + Flexibility in scheduling, which may include nights, weekends, and holidays

    **What You'll Get:**
    + Up to 40% off the base rate of any standard Hertz Rental
    + Paid Time Off
    + Medical, Dental & Vision plan options
    + Retirement programs, including 401(k) employer matching
    + Paid Parental Leave & Adoption Assistance
    + Employee Assistance Program for employees & family
    + Educational Reimbursement & Discounts
    + Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
    + Perks & Discounts - Theme Park Tickets, Gym Discounts & more

    The Hertz Corporation operates the Hertz, Dollar Car Rental, and Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, the Caribbean, Latin America, Africa, the Middle East, Asia, Australia and New Zealand. The Hertz Corporation is one of the largest worldwide airport general-use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.

    **US EEO STATEMENT**
    At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company. Individuals are encouraged to apply for positions because of the characteristics that make them unique. EOE, including disability/veteran
    $135k yearly 60d+ ago
  • Azure Data Engineer - 6013916

    Accenture 4.7 company rating

    Data engineer job in Beaverton, OR

    Accenture Flex offers you the flexibility of local fixed-duration project-based work powered by Accenture, a leading global professional services company. Accenture is consistently recognized on FORTUNE's 100 Best Companies to Work For and Diversity Inc's Top 50 Companies For Diversity lists. As an Accenture Flex employee, you will apply your skills and experience to help drive business transformation for leading organizations and communities. In addition to delivering innovative solutions for Accenture's clients, you will work with a highly skilled, diverse network of people across Accenture businesses who are using the latest emerging technologies to address today's biggest business challenges. You will receive competitive rewards and access to benefits programs and world-class learning resources. Accenture Flex employees work in their local metro area onsite at the project, significantly reducing and/or eliminating the demands of travel.

    Job Description: Join our dynamic team and embark on a journey where you will be empowered to perform independently and become an SME. Active participation and contribution in team discussions will be key as you help provide solutions to work-related problems. Let's work together to achieve greatness!

    Responsibilities:
    * Create new data pipelines leveraging existing data ingestion frameworks and tools
    * Orchestrate data pipelines using the Azure Data Factory service
    * Develop/enhance data transformations based on the requirements to parse, transform and load data into the Enterprise Data Lake, Delta Lake, and Enterprise DWH (Synapse Analytics)
    * Perform unit testing; coordinate integration testing and UAT
    * Create HLD/DD/runbooks for the data pipelines
    * Configure compute and DQ rules; perform maintenance and performance tuning/optimization

    Basic Qualifications:
    * Minimum of 3 years of work experience with one or more of the following: Databricks Data Engineering, DLT, Azure Data Factory, SQL, PySpark, Synapse Dedicated SQL Pool, Azure DevOps, Python

    Preferred Qualifications:
    * Azure Function Apps
    * Azure Logic Apps
    * Precisely & Cosmos DB
    * Advanced proficiency in PySpark
    * Advanced proficiency in Microsoft Azure Databricks, Azure DevOps, Databricks Delta Live Tables and Azure Data Factory
    * Bachelor's or Associate's degree

    Compensation at Accenture varies depending on a wide array of factors, which may include but are not limited to the specific office location, role, skill set, and level of experience. As required by local law, Accenture provides a reasonable range of compensation for roles that may be hired as set forth below. We accept applications on an ongoing basis and there is no fixed deadline to apply. Information on benefits is available here.

    Role Location:
    California - $47.69 - $57.69
    Cleveland - $47.69 - $57.69
    Colorado - $47.69 - $57.69
    District of Columbia - $47.69 - $57.69
    Illinois - $47.69 - $57.69
    Minnesota - $47.69 - $57.69
    Maryland - $47.69 - $57.69
    Massachusetts - $47.69 - $57.69
    New York/New Jersey - $47.69 - $57.69
    Washington - $47.69 - $57.69
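    Since this role pairs pipeline development with unit testing, here is a minimal illustration of testing a PySpark transformation in isolation. The function and column names are hypothetical, not the client's actual codebase:

    ```python
    # Sketch: a small PySpark transformation plus a unit test for it.
    # Function and column names are illustrative only.
    from pyspark.sql import DataFrame, SparkSession, functions as F


    def clean_orders(df: DataFrame) -> DataFrame:
        """Parse raw order rows: type the amount, drop rows missing a key."""
        return (
            df.where(F.col("order_id").isNotNull())
              .withColumn("amount", F.col("amount").cast("double"))
        )


    def test_clean_orders() -> None:
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        raw = spark.createDataFrame(
            [("o1", "19.99"), (None, "5.00"), ("o2", "7.50")],
            ["order_id", "amount"],
        )
        out = clean_orders(raw)
        assert out.count() == 2                        # null key dropped
        assert dict(out.dtypes)["amount"] == "double"  # amount typed


    if __name__ == "__main__":
        test_clean_orders()
        print("ok")
    ```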
    $75k-103k yearly est. 1d ago
  • BigData Engineer / Architect

    Nitor Infotech

    Data engineer job in Portland, OR

    The hunt is for a strong Big Data professional, a team player with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.

    Role: Big Data Engineer
    Location: Portland, OR
    Duration: Full Time

    Skill Matrix:
    MapReduce - Required
    Apache Spark - Required
    Informatica PowerCenter - Required
    Hive - Required
    Apache Hadoop - Required
    Core Java / Python - Highly Desired
    Healthcare Domain Experience - Highly Desired

    Responsibilities and Duties:
    Participate in technical planning and requirements gathering phases, including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications.
    Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and troubleshooting any existing issues.
    Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elasticsearch, Storm, Kafka, etc., in both on-premise and cloud deployment models, to solve large scale processing problems.
    Design, enhance and implement ETL/data ingestion platforms on the cloud.
    Strong data warehousing skills, including data clean-up, ETL, ELT and handling scalability issues for enterprise-level data warehouses.
    Capable of investigating, familiarizing with, and mastering new data sets quickly.
    Strong troubleshooting and problem-solving skills in large data environments.
    Experience building data platforms on the cloud (AWS or Azure).
    Experience using Python, Java or any other language to solve data problems.
    Experience implementing SDLC best practices and Agile methods.

    Qualifications
    Required Skills:
    Data architecture / Big Data / ETL environment
    Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi or equivalent
    Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight, Data Lake Design)
    Building and managing hosted big data architecture; toolkit familiarity with Hadoop plus Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi
    Foundational data management concepts - RDM and MDM
    Experience working with JIRA/Git/Bitbucket/JUnit and other code management toolsets
    Strong hands-on knowledge of solutioning languages like Java, Scala, or Python - any one is fine
    Healthcare domain knowledge

    Required Experience, Skills and Qualifications:
    Bachelor's degree with a minimum of 6 to 9+ years of relevant experience, or equivalent. Extensive experience in a data architecture / Big Data / ETL environment.

    Additional Information
    All your information will be kept confidential according to EEO guidelines.
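    A minimal sketch of the Hive-backed ingestion pattern this listing centers on, assuming a Spark session with Hive support; the database, path, and column names are placeholders:

    ```python
    # Sketch: ingest raw CSV into a partitioned Hive table with PySpark.
    # Database, path, and partition column are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("claims-ingest")
        .enableHiveSupport()   # lets Spark read/write Hive metastore tables
        .getOrCreate()
    )

    raw = (
        spark.read.option("header", True).csv("hdfs:///landing/claims/")
        .withColumn("claim_date", F.to_date("claim_date"))
        .withColumn("load_dt", F.current_date())
    )

    # Validate/enrich, then distribute to a Hive table other apps can query.
    (
        raw.dropna(subset=["claim_id"])
           .write.mode("overwrite")
           .partitionBy("load_dt")
           .saveAsTable("analytics.claims_raw")
    )
    ```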
    $84k-118k yearly est. 1d ago
  • Sr. Data Engineer

    IT Vision Group

    Data engineer job in Portland, OR

    Job Description
    Title: Sr. Data Engineer
    Duration: 12+ Months

    Roles & Responsibilities:
    Perform data analysis according to business needs
    Translate functional business requirements into high-level and low-level technical designs
    Design and implement distributed data processing pipelines using Apache Spark, Apache Hive, Python, and other tools and languages prevalent in a modern analytics platform
    Create and schedule workflows using Apache Airflow or similar job orchestration tooling
    Build utilities, functions, and frameworks to better enable high-volume data processing
    Define and build data acquisition and consumption strategies
    Build and incorporate automated unit tests; participate in integration testing efforts
    Work with teams to resolve operational and performance issues
    Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and followed

    Tech Stack:
    Apache Spark
    Apache Spark Streaming using Apache Kafka
    Apache Hive
    Apache Airflow
    Python
    AWS EMR and S3
    Snowflake
    SQL
    Other tools and technologies: PyCharm, Jenkins, GitHub, Apache NiFi (optional), Scala (optional)
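    For the workflow-scheduling piece, a minimal Apache Airflow DAG along these lines; the DAG id, schedule, and task logic are placeholders, not this client's actual pipeline:

    ```python
    # Sketch: a daily Airflow DAG chaining an extract step into a Spark job.
    # DAG id, schedule, and callables are illustrative placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_to_s3(ds: str, **_) -> None:
        # 'ds' is Airflow's execution date string, e.g. "2024-06-01".
        print(f"extracting source data for {ds} to s3://bucket/raw/{ds}/")


    def run_spark_transform(ds: str, **_) -> None:
        print(f"submitting Spark transform for partition {ds}")


    with DAG(
        dag_id="daily_events_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_to_s3",
                                 python_callable=extract_to_s3)
        transform = PythonOperator(task_id="run_spark_transform",
                                   python_callable=run_spark_transform)

        extract >> transform  # run the transform only after extraction succeeds
    ```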
    $84k-118k yearly est. 31d ago
  • Senior Data Engineer

    Advance Local 3.6 company rating

    Data engineer job in Portland, OR

    **Advance Local** is looking for a **Senior Data Engineer** to design, build, and maintain the enterprise data infrastructure that powers our cloud data platform. This position combines deep technical expertise in data engineering with team leadership responsibilities, overseeing the ingestion, integration, and reliability of data systems across Snowflake, AWS, Google Cloud, and legacy platforms. You'll partner with data product teams across business units to translate requirements into technical solutions, integrate data from numerous third-party platforms (CDPs, DMPs, analytics platforms, marketing tech) into the central data platform, collaborate closely with the Data Architect on platform strategy, and ensure scalable, well-engineered solutions for modern data infrastructure using infrastructure as code and API-driven integrations. The base salary range is $120,000 - $140,000 per year.

    **What you'll be doing:**
    + Lead the design and implementation of scalable data ingestion pipelines from diverse sources into Snowflake.
    + Partner with platform owners across business units to establish and maintain data integrations from third-party systems into the central data platform.
    + Architect and maintain data infrastructure using IaC, ensuring reproducibility, version control and disaster recovery capabilities.
    + Design and implement API integrations and event-driven data flows to support real-time and batch data requirements.
    + Evaluate technical capabilities and integration patterns of existing and potential third-party platforms, advising on platform consolidation and optimization opportunities.
    + Partner with the Data Architect and data product teams to define the overall data platform strategy, ensuring alignment between raw data ingestion and analytics-ready data products that serve business unit needs.
    + Develop and enforce data engineering best practices including testing frameworks, deployment automation, and observability.
    + Support rapid prototyping of new data products in collaboration with data product teams by building flexible, reusable data infrastructure components.
    + Design, develop, and maintain scalable data pipelines and ETL processes; optimize and improve existing data systems for performance, cost efficiency, and scalability.
    + Collaborate with data product teams, third-party platform owners, Data Architects, Analytics Engineers, Data Scientists, and business stakeholders to understand data requirements and deliver technical solutions that enable business outcomes across the organization.
    + Implement data quality validation, monitoring, and alerting systems to ensure reliability of data pipelines from all sources.
    + Develop and maintain comprehensive documentation for data engineering processes, systems, architecture, integration patterns, and runbooks.
    + Lead incident response and troubleshooting efforts for data pipeline issues, ensuring minimal business impact.
    + Stay current with emerging data engineering technologies, cloud services, SaaS platform capabilities, and industry best practices.

    **Our ideal candidate will have the following:**
    + Bachelor's degree in computer science, engineering, or a related field
    + Minimum of seven years of experience in data engineering, with at least two years in a lead or senior technical role
    + Expert proficiency in Snowflake data engineering patterns
    + Strong experience with AWS services (S3, Lambda, Glue, Step Functions) and Google Cloud Platform
    + Experience integrating data from SaaS platforms and marketing technology stacks (CDPs, DMPs, analytics platforms, CRMs)
    + Proven ability to work with third-party APIs, webhooks, and data exports
    + Experience with infrastructure as code (Terraform, CloudFormation) and CI/CD pipelines for data infrastructure
    + Proven ability to design and implement API integrations and event-driven architecture
    + Experience with data modeling, data warehousing, and ETL processes at scale
    + Advanced proficiency in Python and SQL for data pipeline development
    + Experience with data orchestration tools (Airflow, dbt, Snowflake Tasks)
    + Strong understanding of data security, access controls, and compliance requirements
    + Ability to navigate vendor relationships and evaluate technical capabilities of third-party platforms
    + Excellent problem-solving skills and attention to detail
    + Strong communication and collaboration skills

    **Additional Information**
    Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare, including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave, an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity. Advance Local Media is one of the largest media groups in the United States, operating the leading news and information companies in more than 20 cities and reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.

    _Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._ _If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._ Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
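    A rough sketch of the Snowflake ingestion step at the heart of this role, using the snowflake-connector-python package; the account, stage, and table names are placeholders:

    ```python
    # Sketch: load staged files into a Snowflake table and check load status.
    # Connection parameters, stage, and table names are hypothetical.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="myorg-myaccount",
        user="ETL_SVC",
        password="***",          # prefer key-pair auth or a secrets manager
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )

    try:
        cur = conn.cursor()
        # COPY INTO is idempotent per file: already-loaded files are skipped.
        cur.execute("""
            COPY INTO RAW.WEB_EVENTS
            FROM @RAW.EVENTS_STAGE/2024-06-01/
            FILE_FORMAT = (TYPE = 'JSON')
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
        for row in cur.fetchall():
            print(row)  # one status row per file: loaded, errors seen, etc.
    finally:
        conn.close()
    ```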
    $120k-140k yearly 60d+ ago
  • Supply Chain Data Scientist

    HP Inc. 4.9 company rating

    Data engineer job in Vancouver, WA

    HP is the world's leading personal systems and printing company, dedicated to creating technology that enriches lives everywhere. Our innovation thrives on the passion, talent, and curiosity of our people, each bringing unique perspectives, expertise, and ideas to shape the future of how the world works and lives.

    **Join HP's Strategic Planning and Modeling (SPaM) Data Science Team - Where Vision Meets Impact**
    Are you a visionary thinker with a passion for learning and innovation? Join our **Strategic Planning and Modeling (SPaM) Data Science team**, where we tackle complex supply chain challenges and drive strategic decision-making through advanced analytics. As a member of SPaM, you will:
    ✅ **Evaluate** supply chain investment and improvement opportunities.
    ✅ **Develop** cutting-edge predictive, prescriptive, and generative AI models to optimize business outcomes.
    ✅ **Deliver** strategic recommendations that shape HP's global supply chain.
    ✅ **Collaborate** with senior executives to set direction and drive innovation.
    ✅ **Lead** with technical and analytical expertise to translate theory into impactful business solutions through strong execution and cross-functional orchestration.
    At SPaM, we bring over **35 years of supply chain innovation**. We value flexibility, invest in your growth through **formal training and mentorship**, and foster a culture where creativity and data-driven insights power the future. **If pioneering data-based innovations to transform HP's supply chain excites you, we want to hear from you!**

    **Responsibilities**
    + **Collaborate & Strategize:** Work closely with SPaM team members and cross-functional partners to define business challenges, develop analytical frameworks, gather key data, evaluate solutions, and deliver well-supported recommendations to leadership.
    + **Drive Supply Chain Innovation:** Develop and implement cutting-edge solutions in inventory management, working capital optimization, network design, forecasting, complexity reduction, product planning, manufacturing, and distribution to enhance efficiency and decision-making.
    + **Transform Data into Impact:** Leverage data visualization techniques to craft compelling narratives that communicate insights, drive action, and demonstrate business value.
    + **Build & Scale Advanced Analytics Solutions:** Design, deploy, and support predictive, prescriptive, and generative analytics models that drive measurable impact at scale.
    + **Enable Change & Adoption:** Apply proven change management strategies to facilitate smooth adoption of new analytics-driven solutions across the organization.
    + **Stay Ahead of the Curve:** Continuously enhance expertise in data science and supply chain management by staying informed on the latest advancements and industry trends.

    **Education and Experience Required**
    + MS/PhD in a business analytics field (e.g. Industrial Engineering, Management Science, Operations Research, Statistics, Data Science) or MBA with Supply Chain or Operations concentration
    + Operational experience in one of the following: inventory optimization, demand forecasting, network design, planning, forecasting, procurement, logistics, warranty, product quality, product variety management, or a similar supply chain domain area
    + 10+ years of experience

    **Knowledge and Skills**
    + Expertise with inventory optimization math and deployment of the mathematical formulas into a business process
    + Applied experience with machine learning and optimization on large datasets
    + Proficiency with Python, SQL, GitHub Copilot, Power BI, Excel
    + Experience with GenAI
    + Passion for data science and delivering data-driven outcomes
    + Demonstrated innovation across the supply chain domains (Plan, Source, Make, Deliver, Return)
    + Demonstrated practical experience in an operational environment
    + Experience implementing and handing off operational analytical solutions to business users
    + Demonstrated ability to deal well with ambiguity and work collaboratively in a fast-paced environment
    + Demonstrated ability to communicate concisely with diplomacy, credibility, and confidence
    + Experience with data visualization and data presentation in a clear and effective manner
    + Experience leading, collaborating with, and delivering results in cross-functional teams
    + Familiarity with statistics, forecasting, simulation, and spreadsheet modelling
    + A desire to be a part of the HP team, with a commitment to build upon our brand that is synonymous with innovation, trust, reliability, and sustainability

    The base pay range for this role is **$147,050 to $232,850** annually, with additional opportunities for pay in the form of bonus and/or equity (applies to US candidates only). Pay varies by work location, job-related knowledge, skills, and experience.

    **Benefits**
    HP offers a comprehensive benefits package for this position, including:
    + Health insurance
    + Dental insurance
    + Vision insurance
    + Long term/short term disability insurance
    + Employee assistance program
    + Flexible spending account
    + Life insurance
    + Generous time off policies, including:
    + 4-12 weeks fully paid parental leave based on tenure
    + 11 paid holidays
    + Additional flexible paid vacation and sick leave
    The compensation and benefits information are accurate as of the date of this posting. The Company reserves the right to modify this information at any time, with or without notice, subject to applicable law.

    All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
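    The "inventory optimization math" requirement has a classic concrete instance: safety stock sizing under demand and lead-time uncertainty. A minimal sketch using the textbook formula, with illustrative numbers (not HP's internal models):

    ```python
    # Sketch: classic safety-stock formula under demand and lead-time
    # uncertainty. Numbers are illustrative, not real data.
    import math
    from statistics import NormalDist

    # Hypothetical weekly demand and supplier lead-time statistics.
    mean_demand, sd_demand = 500.0, 120.0   # units/week
    mean_lead, sd_lead = 4.0, 1.0           # weeks

    service_level = 0.98
    z = NormalDist().inv_cdf(service_level)  # one-sided service-level z-score

    # Variance of demand over a random lead time combines both sources:
    # sigma^2 = L * sigma_d^2 + d^2 * sigma_L^2
    sigma = math.sqrt(mean_lead * sd_demand**2 + mean_demand**2 * sd_lead**2)

    safety_stock = z * sigma
    reorder_point = mean_demand * mean_lead + safety_stock
    print(f"safety stock ~ {safety_stock:,.0f} units, "
          f"reorder point ~ {reorder_point:,.0f} units")
    ```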
    $147.1k-232.9k yearly 9d ago
  • Azure Data Architect - C60720, Beaverton, OR

    CapB Infotek

    Data engineer job in Beaverton, OR

    For one of our long-term, multiyear projects, we are looking for an Azure Data Architect out of Beaverton, OR.

    Skills:
    At least 10+ years of experience in designing, architecting and implementing large scale data processing/data storage/data distribution systems, particularly in Azure
    Most recent 2+ years of experience in delivering large scale Azure projects
    At least 5+ years of real-time and streaming experience in Azure-based data solutions
    At least 3+ years in presales and demos using Azure data services, including ADW and ADLS
    At least 5+ years of demonstrated experience, including in the most recent 2+ years, designing and delivering solutions using the Cortana Intelligence suite of analytics services in Microsoft Azure, including Azure Data Lake Analytics, Azure Data Warehouse, Streaming Analytics, and Data Catalog
    Experience designing service management, orchestration, monitoring and management requirements of a cloud platform
    At least 3+ years of experience migrating large volumes of data from on-premise and cloud infrastructure to Azure using standard Azure automation tools
    Ability to produce high quality work products under pressure and within deadlines, with specific references
    Very strong communication, solution, and client-facing skills, especially with non-technical business users
    At least 5+ years of working in large multi-vendor environments with multiple teams and people as a part of the project
    At least 2+ years of working with Power BI
    At least 5+ years of working with a complex Big Data environment using Microsoft tools
    5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets

    Preferred Skills and Education:
    Master's degree in Computer Science or related field
    Certification in Azure platform
    $92k-128k yearly est. 60d+ ago
  • 208406 / Datawarehouse BI Consultant

    Procom Services

    Data engineer job in Hillsboro, OR

Procom is a leading provider of professional IT services and staffing to businesses and governments in Canada. With revenues over $500 million, the Branham Group has recognized Procom as the 3rd largest professional services firm in Canada and the largest "Canadian-Owned" IT staffing/consulting company.

Procom's areas of staffing expertise include:
• Application Development
• Project Management
• Quality Assurance
• Business/Systems Analysis
• Datawarehouse & Business Intelligence
• Infrastructure & Network Services
• Risk Management & Compliance
• Business Continuity & Disaster Recovery
• Security & Privacy

Specialties:
• Contract Staffing (Staff Augmentation)
• Permanent Placement (Staff Augmentation)
• ICAP (Contractor Payroll)
• Flextrack (Vendor Management System)

Job Description
Consolidating data from multiple online transactional systems, scheduling tools, and defect management tools; gathering and landing that information in a data warehouse; and then building an online data tool on top of it. We need a data warehouse BI person to architect the data warehouse environment and build and extract loads. The group needs small applications built to gather data, and the person in this role will grow the value of this work: identifying what the group wants and growing the deliverable's potential as a model for other groups in the future.

Job Duties: The candidate will possess SharePoint, .NET, Java, and MS SQL skills and will apply those skills to create/extend ETL, SQL, and application code for SharePoint Business Intelligence and web applications. The candidate will also troubleshoot, debug, identify, and correct problems related to the import and presentation of ETL data.

Qualifications
Strong development background in SharePoint BI or another Business Intelligence application. Experienced in developing stored procedures and SSIS packages, with advanced data development skills. Solid software development skills including Java, JavaScript, HTML, T-SQL, CSS, XML, and ASP.NET. 3-5 years with the recent required skills and 7-10 years overall experience with all tools. Degree Type: BS in a relevant field (CS, Engineering, etc.).
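As a rough illustration of the SQL Server-side ETL flow this role describes (staging loads plus stored procedures), here is a hedged Python/pyodbc sketch; the server, table, and procedure names are hypothetical:

```python
import pyodbc

# Hypothetical connection string and object names, shown only to illustrate
# the staging-then-merge pattern common in SQL Server data warehouses.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dw-host;"
    "DATABASE=ReportingDW;Trusted_Connection=yes;"
)
cur = conn.cursor()

rows = [("defect-101", "open", "2024-05-01"), ("defect-102", "closed", "2024-05-02")]
cur.executemany(
    "INSERT INTO stg.Defects (defect_id, status, reported_on) VALUES (?, ?, ?)", rows
)
cur.execute("EXEC dbo.usp_LoadDefectFacts")  # hypothetical proc: merges staging into facts
conn.commit()
conn.close()
```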
    $86k-117k yearly est. 60d+ ago
  • Big Data Architect

    Testingxperts 4.0 company rating

    Data engineer job in Hillsboro, OR

Greetings for the day! My name is Suneetha from Testing Xperts. We are a global staffing, consulting, and technology solutions company, offering industry-specific solutions to our Fortune 500 clients and worldwide corporations.

Role: Big Data Architect
Location: Hillsboro, OR
Duration: 6+ Months

Job Description:
· Skills: Python, Spark, MapReduce, Hive, Pig, HBase, Sqoop; hands-on experience in Python and Spark.
· At least 2 years of hands-on experience in the Big Data ecosystem: HDFS, MapReduce, Hive, HBase, etc.
· Experience working on large data sets and with large-scale Hadoop environments.
· Build and support designs for ingesting large data sets into Hadoop ecosystems, validating and enriching the data, and distributing significant amounts of that data to SQL and to other applications.

Qualifications: Graduate

Additional Information
All your information will be kept confidential according to EEO guidelines.
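To make the ingest-validate-enrich loop concrete, here is a minimal PySpark sketch under assumed paths and schemas; the `orders` fields and Hive table names are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder.appName("ingest-validate")
         .enableHiveSupport().getOrCreate())

raw = spark.read.json("hdfs:///data/raw/orders/")   # hypothetical landing path

valid = (raw
         .filter(F.col("order_id").isNotNull())      # basic validation rule
         .withColumn("order_total",
                     F.col("quantity") * F.col("unit_price")))  # enrichment step

valid.write.mode("append").saveAsTable("curated.orders")  # downstream Hive table
```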
    $97k-136k yearly est. 1d ago
  • Data Architect Consultant

    Intermountain Health 3.9 company rating

    Data engineer job in Salem, OR

We're looking for a technical, highly collaborative Data Architect - Consultant who can bridge strategy, engineering, and delivery. This role is responsible for defining the problem to solve, shaping solution approaches, and leading projects end-to-end with strong facilitation and execution skills. You'll play a key role in maturing our code review process, establishing data architecture best practices, and elevating the quality and consistency of our delivery. The ideal candidate combines hands-on technical fluency with exceptional communication, enabling them to partner closely with engineers, guide architectural decisions, and drive continuous improvement across teams.

**Essential Functions**

Lead the design, implementation, and management of complex data infrastructures across multiple clinical and enterprise projects. Apply deep expertise in relational databases, cloud technologies, and big data tools to deliver scalable, efficient, and secure data solutions. Manage multiple complex projects simultaneously, ensuring alignment with clinical program objectives and organizational priorities. Provide leadership and direction on enterprise-level data integration strategies. Mentor junior and senior team members, fostering technical growth through structured guidance and code reviews. Collaborate with cross-functional teams and stakeholders to ensure high-quality data architectures and pipelines across cloud and on-premise systems.

**Skills**

+ Leadership & Project Management - Ability to lead teams and manage multiple complex initiatives concurrently.
+ Mentorship & Code Review - Skilled in guiding junior team members and enforcing best practices through structured reviews.
+ Collaboration & Stakeholder Management - Strong interpersonal skills to work effectively with clinical program leaders and technical teams.
+ Data Architecture & Design - Expertise in designing scalable and secure data solutions.
+ Cloud Infrastructure & Data Solutions - Proficiency in AWS, Azure, or similar platforms.
+ ETL/ELT Development & Data Integration - Building and optimizing data pipelines.
+ Database & Performance Optimization - Ensuring high availability and efficiency.
+ Data Modeling & Documentation - Creating clear, maintainable models and technical documentation.
+ Data Governance & Security - Implementing compliance and security best practices.
+ Coding/Programming - Strong programming skills for data engineering and architecture.

Minimum Qualifications:

+ Expert proficiency with SQL and extensive experience with traditional RDBMS (e.g., Oracle, SQL Server, PostgreSQL).
+ Extensive experience with cloud platforms such as AWS, Azure, or Google Cloud Platform for data architecture and storage solutions.
+ Mastery of programming languages such as Python and PySpark for data engineering tasks.
+ In-depth knowledge of ETL/ELT processes and tools, including both traditional (e.g., SSIS, Informatica) and cloud-native solutions (e.g., Azure Data Factory, Databricks).
+ Outstanding communication skills for collaborating with stakeholders and teams.
+ Expert understanding of Product Management, Project Management, or Program Management philosophies and methodologies, and capable of applying them to data architecture projects to ensure alignment with business goals and efficient execution.
+ Demonstrated ability to stay updated on industry trends and advancements.
+ Proven experience in providing mentorship and guidance to junior and senior architects.
Preferred Qualifications:

+ A Master's degree in an analytics-related field such as information systems, data science/analytics, statistics, computer science, or mathematics, and 4 years of experience; or
+ 8 years of professional experience in an analytics role in an analytics-related field such as statistics, mathematics, information systems, computer science, or data science/analytics; or
+ A Bachelor's degree in an analytics-related field such as information systems, data science/analytics, statistics, computer science, or mathematics, and 6 years of experience
+ Experience with Databricks, Apache Spark, and Delta Lake for real-time and batch data processing.
+ Proficiency in data streaming technologies such as Kafka, AWS Kinesis, or Azure Event Hubs.
+ Experience working with APIs to retrieve and integrate data from external systems.
+ Experience developing APIs to provide data as a product.
+ Familiarity with CI/CD pipelines for data engineering workflows.
+ Knowledge of data governance frameworks and compliance standards (e.g., GDPR, HIPAA).
+ Experience in a healthcare environment.
+ Familiarity with business intelligence tools such as Tableau, Power BI, or Looker for delivering insights from data architectures.

Remain sitting or standing for long periods of time to perform work on a computer, telephone, or other equipment.

**Location:** Lake Park Building
**Work City:** West Valley City
**Work State:** Utah
**Scheduled Weekly Hours:** 40

The hourly range for this position is listed below. Actual hourly rate dependent upon experience.

$60.06 - $94.57

We care about your well-being - mind, body, and spirit - which is why we provide our caregivers a generous benefits package that covers a wide range of programs to foster a sustainable culture of wellness that encompasses living healthy, happy, secure, connected, and engaged. Learn more about our comprehensive benefits package here (***************************************************** ).

Intermountain Health is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.

At Intermountain Health, we use the artificial intelligence ("AI") platform HiredScore to improve your job application experience. HiredScore helps match your skills and experiences to the best jobs for you. While HiredScore assists in reviewing applications, all final decisions are made by Intermountain personnel to ensure fairness. We protect your privacy and follow strict data protection rules. Your information is safe and used only for recruitment. Thank you for considering a career with us and experiencing our AI-enhanced recruitment process.

All positions subject to close without notice.
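As a concrete example of the preferred streaming stack (Kafka into Delta Lake), here is a minimal Structured Streaming sketch; the broker, topic, and paths are hypothetical, and a Delta-enabled cluster (e.g., Databricks) is assumed:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Hypothetical broker, topic, and storage paths for illustration only
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "adt-events")
          .load()
          .select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp")))

# Checkpointing gives the stream exactly-once restart semantics
(stream.writeStream.format("delta")
 .option("checkpointLocation", "/chk/adt-events")
 .start("/delta/adt_events"))
```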
    $80k-107k yearly est. 3d ago
  • Data Scientist, Product Analytics

    Meta 4.8 company rating

    Data engineer job in Salem, OR

As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance, and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.

Product leadership: You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.

Analytics: You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.

Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.

**Required Skills:** Data Scientist, Product Analytics Responsibilities:

1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions

**Minimum Qualifications:**

6. Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience
7. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent
8. 4+ years of work experience in analytics, data querying languages such as SQL, scripting languages such as Python, and/or statistical/mathematical software such as R (minimum of 2 years with a Ph.D.)
9. 4+ years of experience solving analytical problems using quantitative approaches, understanding ecosystems, user behaviors & long-term product trends, and leading data-driven projects from definition to execution (including defining metrics, experiment design, and communicating actionable insights)

**Preferred Qualifications:**

10. Master's or Ph.D. degree in a quantitative field

**Public Compensation:** $147,000/year to $208,000/year + bonus + equity + benefits

**Industry:** Internet

**Equal Opportunity:** Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment. Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
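For a flavor of the experimentation work in responsibility 2 above, here is a minimal two-proportion z-test in Python; the conversion counts and exposure numbers are invented for illustration:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical experiment readout: conversions and exposures per variant
conversions = [4_310, 4_595]   # [control, treatment]
exposures = [98_000, 99_500]

z, p = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]
print(f"absolute lift: {lift:.4%}, z = {z:.2f}, p = {p:.4f}")
```

In practice, a product analytics readout pairs a test like this with pre-registered metrics and a minimum detectable effect chosen before launch.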
    $147k-208k yearly 60d+ ago
  • Sr. Data Engineer

    Concora Credit

    Data engineer job in Beaverton, OR

As a Sr. Data Engineer, you'll help drive Concora Credit's Mission to enable customers to Do More with Credit - every single day.

The impact you'll have at Concora Credit: We are seeking a Sr. Data Engineer with deep expertise in Azure and Databricks to lead the design, development, and optimization of scalable data pipelines and platforms. You'll be responsible for building robust data solutions that power analytics, reporting, and machine learning across the organization using Azure cloud services and Databricks.

We hire people, not positions. That's because, at Concora Credit, we put people first, including our customers, partners, and Team Members. Concora Credit is guided by a single purpose: to help non-prime customers do more with credit. Today, we have helped millions of customers access credit. Our industry leadership, resilience, and willingness to adapt ensure we can help our partners responsibly say yes to millions more. As a company grounded in entrepreneurship, we're looking to expand our team and are looking for people who foster innovation, strive to make an impact, and want to Do More! We're an established company with over 20 years of experience, but now we're taking things to the next level. We're seeking someone who wants to impact the business and play a pivotal role in leading the charge for change.

Responsibilities

As our Sr. Data Engineer, you will:
Design and develop scalable, efficient data pipelines using Azure Databricks
Build and manage data ingestion, transformation, and storage solutions leveraging Azure Data Factory, Azure Data Lake, and Delta Lake
Implement CI/CD for data workflows using tools like Azure DevOps, Git, and Terraform
Optimize performance and cost efficiency across large-scale distributed data systems
Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver reliable, reusable datasets
Provide guidance and mentorship to junior engineers and actively contribute to data platform best practices
Monitor, troubleshoot, and optimize existing pipelines and infrastructure to ensure reliability and scalability

These duties must be performed with or without reasonable accommodation.

We know experience comes in many forms and that many skills are transferable. If your experience is close to what we're looking for, consider applying. Diversity has made us the entrepreneurial and innovative company that we are today.
Qualifications

Requirements:
5+ years of experience in data engineering, with a strong focus on Azure cloud technologies
Experience with Azure Databricks, Azure Data Lake, and Data Factory, including PySpark, SQL, Python, and Delta Lake
Strong proficiency in Databricks and Apache Spark
Solid understanding of data warehousing, ETL/ELT, and data modeling best practices
Experience with version control, CI/CD pipelines, and infrastructure as code
Knowledge of Spark performance tuning, partitioning, and job orchestration
Excellent problem-solving skills and attention to detail
Strong communication and collaboration abilities across technical and non-technical teams
Ability to work independently and lead in a fast-paced, agile environment
Passion for delivering clean, high-quality, and maintainable code

Preferred Qualifications:
Experience with Unity Catalog, Databricks Workflows, and Delta Live Tables
Familiarity with DevOps practices or Terraform for Azure resource provisioning
Understanding of data security, RBAC, and compliance in cloud environments
Experience integrating Databricks with Power BI or other analytics platforms
Exposure to real-time data processing using Kafka, Event Hubs, or Structured Streaming

What's In It For You:
Medical, Dental and Vision insurance for you and your family
Relax and recharge with Paid Time Off (PTO)
6 company-observed paid holidays, plus 3 paid floating holidays
401k (after 90 days) plus employer match up to 4%
Pet Insurance for your furry family members
Wellness perks including onsite fitness equipment at both locations, EAP, and access to the Headspace App
We invest in your future through Tuition Reimbursement
Save on taxes with Flexible Spending Accounts
Peace of mind with Life and AD&D Insurance
Protect yourself with company-paid Long-Term Disability and voluntary Short-Term Disability

Concora Credit provides equal employment opportunities to all Team Members and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Employment-based visa sponsorship is not available for this role. Concora Credit is an equal opportunity employer (EEO). Please see the Concora Credit Privacy Policy for more information on how Concora Credit processes your personal information during the recruitment process and, if applicable, based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact caprivacynotice@concoracredit.com.
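One common pattern behind scalable Databricks/Delta Lake pipelines like those described above is the incremental upsert (MERGE). A minimal sketch, assuming a Delta-enabled cluster and the delta-spark package; the table and path names are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes Delta Lake is configured

# Hypothetical incremental batch landed by an upstream ingestion job
updates = spark.read.parquet("/landing/accounts/2024-06-01/")

# Upsert: update existing accounts, insert new ones, in a single atomic commit
(DeltaTable.forName(spark, "silver.accounts").alias("t")
 .merge(updates.alias("s"), "t.account_id = s.account_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```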
    $84k-118k yearly est. 60d+ ago
  • BigData Engineer / Architect

    Nitor Infotech

    Data engineer job in Portland, OR

The hunt is for a strong Big Data professional, a team player with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.

Role: Big Data Engineer
Location: Portland, OR
Duration: Full Time

Skill Matrix:
MapReduce - Required
Apache Spark - Required
Informatica PowerCenter - Required
Hive - Required
Apache Hadoop - Required
Core Java / Python - Highly Desired
Healthcare Domain Experience - Highly Desired

Job Description

Responsibilities and Duties:
Participate in technical planning and requirements-gathering phases, including architectural design, coding, testing, troubleshooting, and documenting big-data-oriented software applications.
Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and troubleshooting any existing issues.
Implementation, troubleshooting, and optimization of distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elasticsearch, Storm, Kafka, etc., in both on-premise and cloud deployment models, to solve large-scale processing problems.
Design, enhance, and implement ETL/data ingestion platforms on the cloud.
Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for an enterprise-level data warehouse.
Capable of investigating, familiarizing with, and mastering new data sets quickly.
Strong troubleshooting and problem-solving skills in large data environments.
Experience building data platforms on the cloud (AWS or Azure).
Experience using Python, Java, or any other language to solve data problems.
Experience implementing SDLC best practices and Agile methods.

Qualifications

Required Skills:
Data architecture / Big Data / ETL environment
Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi, or equivalent
Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight, Data Lake Design)
Building and managing hosted big data architectures; toolkit familiarity with Hadoop plus Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi
Foundational data management concepts - RDM and MDM
Experience working with JIRA/Git/Bitbucket/JUnit and other code management toolsets
Strong hands-on knowledge of solutioning languages like Java, Scala, or Python - any one is fine
Healthcare domain knowledge

Required Experience, Skills and Qualifications:
Bachelor's degree with a minimum of 6 to 9+ years of relevant experience or equivalent. Extensive experience in a data architecture / Big Data / ETL environment.

Additional Information
All your information will be kept confidential according to EEO guidelines.
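To illustrate the "cleaning and manipulation of data" duty above, here is a minimal PySpark deduplication step over hypothetical Hive tables; the healthcare-flavored table and column names are invented:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = (SparkSession.builder.appName("claims-dedup")
         .enableHiveSupport().getOrCreate())

# Keep only the latest record per claim, a typical cleanup step before
# publishing a curated table for analytics.
claims = spark.table("raw.claims")
w = Window.partitionBy("claim_id").orderBy(F.col("updated_at").desc())

latest = (claims.withColumn("rn", F.row_number().over(w))
          .filter("rn = 1").drop("rn"))
latest.write.mode("overwrite").saveAsTable("curated.claims_latest")
```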
    $84k-118k yearly est. 60d+ ago
  • Google Cloud Data & AI Engineer

    Slalom 4.6 company rating

    Data engineer job in Portland, OR

Who You'll Work With

As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies. You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant, or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.

What You'll Do

* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage, and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.

What You'll Bring

* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.

About Us

Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.

Compensation and Benefits

Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.

Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.

East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500

San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500

All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000

EEO and Accommodations

Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.

We are accepting applications until 12/31. #LI-FB1
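For a concrete taste of the BigQuery work referenced throughout this posting, here is a minimal Python sketch using the official google-cloud-bigquery client; the project and dataset names are hypothetical, and application-default credentials are assumed:

```python
from google.cloud import bigquery

# Hypothetical project; assumes `gcloud auth application-default login` or
# equivalent credentials are already configured.
client = bigquery.Client(project="example-project")

sql = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""
for row in client.query(sql).result():  # blocks until the query finishes
    print(row.day, row.events)
```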
    $145k-217.5k yearly 28d ago

Learn more about data engineer jobs

How much does a data engineer earn in Gresham, OR?

The average data engineer in Gresham, OR earns between $72,000 and $137,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
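For a rough sense of how those ranges compare, here is the midpoint arithmetic (note the site's quoted $99,000 average below is its own estimate, not a simple midpoint):

```python
local_lo, local_hi = 72_000, 137_000        # Gresham, OR range quoted above
national_lo, national_hi = 80_000, 149_000  # national range quoted above

local_mid = (local_lo + local_hi) / 2           # 104,500
national_mid = (national_lo + national_hi) / 2  # 114,500
print(f"local midpoint is {local_mid / national_mid:.0%} of the national midpoint")
```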

Average data engineer salary in Gresham, OR

$99,000

What are the biggest employers of Data Engineers in Gresham, OR?

The biggest employers of Data Engineers in Gresham, OR are:
  1. Fisher Investments