Develop tools, metrics, and assessment methods for performance management and predictive modeling. Develop dashboards for product management and executives to drive faster, better decision making. Create accountability models for DPG-wide quality, I&W, inventory, product management KPIs, and business operations.
Drive consistent DPG-wide improvement in quality, install and warranty (I&W), and inventory performance, from awareness through prioritization and action, by making common data available.
Collaborate with quality, install and warranty, and inventory program managers to analyze trends and patterns in data that drive required improvement in key performance indicators (KPIs). Foster the growth and utility of Cost of Quality within the company by correlating I&W data with ECOGS (see the sketch below), identifying causal relationships for quality events, and uncovering hidden costs throughout the network.
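To make the Cost of Quality correlation work concrete, here is a minimal Python sketch; the file names, column names, and join keys (iw_events.csv, ecogs.csv, product, month) are hypothetical, and a correlation screen like this is only a starting point for the causal analysis the role describes.

```python
# Hypothetical correlation screen: I&W event volume vs. excess COGS (ECOGS).
import pandas as pd

iw = pd.read_csv("iw_events.csv", parse_dates=["month"])   # product, month, iw_event_count
ecogs = pd.read_csv("ecogs.csv", parse_dates=["month"])    # product, month, ecogs_usd

merged = iw.merge(ecogs, on=["product", "month"], how="inner")

# Pearson correlation per product: a coarse first screen for which product
# lines deserve causal investigation, not proof of causation.
by_product = (
    merged.groupby("product")[["iw_event_count", "ecogs_usd"]]
    .corr()
    .unstack()
    .loc[:, ("iw_event_count", "ecogs_usd")]
    .sort_values(ascending=False)
)
print(by_product.head(10))
```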
Improve data utilization via AI and automation, enabling real-time resolution and faster systemic action.
Lead and/or advise on multiple projects simultaneously and demonstrate organizational, prioritization, and time management proficiencies.
Bachelor's degree with 8+ years of experience; or master's degree with 5+ years' experience; or equivalent experience.
Basic understanding of AI and machine learning, and the ability to work with Data Scientists to apply AI to complex, challenging problems, leading to efficiency and effectiveness improvements.
Ability to define problem statements and objectives, develop an analysis approach, and execute the analysis.
Basic knowledge of Lean Six Sigma processes, statistics, or quality systems experience.
Ability to work on multiple problems simultaneously.
Ability to present conclusions and recommendations to executive audiences.
Ownership mindset to drive solutions and positive outcomes.
Excellent communication and presentation skills with the ability to present to audiences at multiple levels in the Company.
Willingness to adapt best practices via benchmarking.
Experience in Semiconductor fabrication, Semiconductor Equipment Operations, or related industries is a plus.
Demonstrated ability to change process and methodologies for capturing and interpreting data.
Demonstrated success in using structured problem-solving methodologies and quality tools to solve complex problems.
Knowledge of programming environments such as Python, R, Matlab, SQL or equivalent.
Experience in structured problem-solving methodologies such as PDCA, DMAIC, 8D and quality tools.
Our commitment
We believe it is important for every person to feel valued, included, and empowered to achieve their full potential.
By bringing unique individuals and viewpoints together, we achieve extraordinary results.
Lam is committed to and reaffirms support of equal opportunity in employment and non-discrimination in employment policies, practices and procedures on the basis of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, age, sexual orientation, or military and veteran status or any other category protected by applicable federal, state, or local laws.
It is the Company's intention to comply with all applicable laws and regulations.
Company policy prohibits unlawful discrimination against applicants or employees.
Lam offers a variety of work location models based on the needs of each role.
Our hybrid roles combine the benefits of on-site collaboration with colleagues and the flexibility to work remotely, and fall into two categories: On-site Flex and Virtual Flex.
In 'On-site Flex' roles, you'll work 3+ days per week on-site at a Lam or customer/supplier location, with the opportunity to work remotely for the balance of the week.
In 'Virtual Flex' roles, you'll work 1-2 days per week on-site at a Lam or customer/supplier location, and remotely the rest of the time.
Data Scientist, Analytics (Technical Leadership)
Meta
Data engineer job in Salem, OR
We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
**Data Scientist, Analytics (Technical Leadership) Responsibilities:**
1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies (see the sketch after this list)
3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
5. Build long-term vision and strategy for programs and products
6. Collaborate with executives to define and develop data platforms and instrumentation
7. Effectively communicate insights and recommendations to stakeholders
8. Define success metrics, forecast changes, and set team goals
9. Support developing roadmaps and coordinate analytics efforts across teams
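As one hedged illustration of the experimentation work in item 2, the sketch below runs a two-proportion z-test on A/B conversion counts; the numbers and the helper function are invented for the example and are not Meta's internal tooling.

```python
# Two-sided two-proportion z-test for an A/B experiment (illustrative counts).
from math import sqrt
from scipy.stats import norm

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return absolute lift of B over A and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, 2 * norm.sf(abs(z))

lift, p = ab_test(conv_a=1210, n_a=24000, conv_b=1325, n_b=24100)
print(f"lift = {lift:.4%}, p = {p:.4f}")
```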
**Minimum Qualifications:**
10. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
11. 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
12. 8+ years of work experience leading analytics work in IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization and strategy development
13. Experience working effectively with and leading complex analytics projects across multiple stakeholders and cross-functional teams, including Engineering, PM/TPM, Analytics and Finance
14. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
15. Experience communicating complex technical topics in a clear, precise, and actionable manner
**Preferred Qualifications:**
16. 10+ years of experience communicating the results of analyses to leadership teams to influence the strategy
17. Master's or Ph.D. degree in a quantitative field
18. Bachelor's Degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
19. 10+ years of experience doing complex quantitative analysis in product analytics
**Public Compensation:**
$210,000/year to $281,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@meta.com.
Data Scientist, Privacy
Datavant
Data engineer job in Salem, OR
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
As part of the Privacy Science team within Privacy Hub you will play a crucial role in ensuring that privacy of patients is safeguarded in the modern world of data sharing. As well as working on real data, you will be involved in exciting research to keep us as industry leaders in this area, and stimulating discussions on re-identification risk. You will be supported in developing/consolidating data analysis and coding skills to become proficient in the analysis of large health-related datasets.
**You Will:**
+ Critically analyze large health datasets using standard and bespoke software libraries
+ Discuss your findings and progress with internal and external stakeholders
+ Produce high-quality reports that summarise your findings
+ Contribute to research activities as we explore novel and established sources of re-identification risk (see the sketch after this list)
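One standard lens on re-identification risk is k-anonymity: records whose combination of quasi-identifiers is shared by fewer than k people are the most exposed. The sketch below is a generic illustration with hypothetical file and column names, not Privacy Hub's bespoke libraries or methodology.

```python
# Flag records that sit in small equivalence classes over quasi-identifiers.
import pandas as pd

df = pd.read_csv("claims_extract.csv")              # hypothetical extract
quasi_identifiers = ["zip3", "birth_year", "sex"]   # hypothetical QI set

class_sizes = df.groupby(quasi_identifiers).size()
k = 5
risky = class_sizes[class_sizes < k]

print(f"{risky.sum()} of {len(df)} records fall in equivalence classes "
      f"smaller than k={k} ({risky.sum() / len(df):.2%})")
```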
**What You Will Bring to the Table:**
+ Excellent communication skills. Meticulous attention to detail in the production of comprehensive, well-presented reports
+ A good understanding of statistical probability distributions, bias, error and power as well as sampling and resampling methods
+ A drive to understand real-world data in context rather than considering it in the abstract
+ Familiarity or proficiency with programmable data analysis software such as R or Python, and the desire to develop deeper expertise in it
+ Application of scientific methods to practical problems through experimental design, exploratory data analysis and hypothesis testing to reach robust conclusions
+ Strong time management skills and demonstrable experience of prioritising work to meet tight deadlines
+ Initiative and ability to independently explore and research novel topics and concepts as they arise, to expand Privacy Hub's knowledge base
+ An appreciation of the need for effective methods in data privacy and security, and an awareness of the relevant legislation
+ Familiarity with Amazon Web Services cloud-based storage and computing facilities
**Bonus Points If You Have:**
+ Experience creating documents using LaTeX
+ Detailed knowledge of one or more types of health information, e.g., genomics, disease, health images
+ Experience working with or supporting public sector organizations, such as federal agencies (e.g., CMS, NIH, VA, CDC), state health departments, or public health research partners. Familiarity with government data environments, procurement processes, or privacy frameworks in regulated settings is highly valued.
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$104,000-$130,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement and Know Your Rights, and explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please submit a request under the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request. Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy.
Principal Data Scientist
Maximus
Data engineer job in Portland, OR
Description & Requirements
Maximus has an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished, hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will focus on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., supporting existing or future projects and providing solutioning to capture new work).
This is a remote position.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders (see the sketch after this list).
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
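As a deliberately small illustration of the "deep dive" duties above, the sketch below trends a weekly operational KPI and flags anomalous weeks with a z-score; the file and column names are hypothetical.

```python
# Weekly KPI trend plus a simple z-score anomaly flag (illustrative data).
import pandas as pd

ops = pd.read_csv("operations.csv", parse_dates=["date"])  # date, cases_closed

weekly = ops.set_index("date")["cases_closed"].resample("W").sum()
z = (weekly - weekly.mean()) / weekly.std()
anomalies = weekly[z.abs() > 2]     # > 2 standard deviations from the mean

print(weekly.tail(8))
print("Weeks warranting a deep dive:")
print(anomalies)
```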
Job-Specific Essential Duties and Responsibilities:
- Develop, collaborate, and advance the applied and responsible use of AI, ML, simulation, and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments, and their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements (required skills that align with contract LCAT, verifiable, and measurable):
- 10+ years of relevant Software Development + AI / ML / DS experience
- Professional Programming experience (e.g. Python, R, etc.)
- Experience with AI / Machine Learning
- Experience working as a contributor on a team
- Experience leading AI/DS/or Analytics teams
- Experience mentoring Junior Staff
- Experience with Modeling and Simulation
- Experience with program management
Preferred Skills and Qualifications:
- Master's degree in a quantitative discipline (Math, Operations Research, Computer Science, etc.)
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors
- Ability to leverage statistics to identify true signals from noise or clutter
- Experience working as an individual contributor in AI or modeling and simulation
- Use of state-of-the-art technology to solve operational problems in AI, Machine Learning, or Modeling and Simulation spheres
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions
- Use and development of program automation, CI/CD, DevSecOps, and Agile
- Experience managing technical teams delivering technical solutions for clients.
- Experience working with optimization problems like scheduling
- Experience with Data Analytics and Visualizations
- Cloud certifications (AWS, Azure, or GCP)
- 10+ yrs of related experience in AI, advanced analytics, computer science, or software development
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process, including accessing job postings, completing assessments, or participating in interviews, please contact People Operations at **************************.
Minimum Salary: $156,640.00
Maximum Salary: $234,960.00
Data Scientist
Eyecarecenterofsalem
Data engineer job in Portland, OR
Job Description
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products that extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math, and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Responsibilities
Identify valuable data sources and automate collection processes
Undertake the preprocessing of structured and unstructured data
Analyze large amounts of information to discover trends and patterns
Build predictive models and machine-learning algorithms
Combine models through ensemble modeling (see the sketch after this list)
Present information using data visualization techniques
Propose solutions and strategies to business challenges
Collaborate with engineering and product development teams
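For the ensemble-modeling responsibility above, here is a minimal scikit-learn sketch, on synthetic data, of combining two base models with soft voting; it is illustrative only.

```python
# Soft-voting ensemble of two classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across models
)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```

Soft voting averages predicted probabilities, which usually beats hard voting when the base models are reasonably calibrated.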
Requirements and skills
Proven experience as a Data Scientist or Data Analyst
Experience in data mining
Understanding of machine learning and operations research
Knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset
Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
Analytical mind and business acumen
Strong math skills (e.g. statistics, algebra)
Problem-solving aptitude
Excellent communication and presentation skills
BSc/BA in Computer Science, Engineering, or relevant field; a graduate degree in Data Science or other quantitative field is preferred
Data Engineer
Cerium Networks
Data engineer job in Beaverton, OR
About Us
Cerium Networks is a leading technology solutions integrator and Managed Services Provider. We connect businesses to their potential. We specialize in advanced cybersecurity, robust communications and networking, and unified communications and collaboration tools. Our services also include contact center management, scalable data platforms for analytics and AI, and 24/7 IT support. With a consultative approach and deep technical expertise, we deliver and support advanced technology solutions to drive innovation and excellence in IT services. Join us to be part of a team committed to excellence.
We are seeking a skilled Data Engineer to join our dynamic Data & AI consulting team to help architect and build out modern data estates for our clients. Primary responsibilities focus on post-sales delivery and strategy of data solutions for our clients.
What You'll Do
You will be responsible for designing, implementing, and optimizing data solutions for our clients.
You will deploy data platforms in a medallion architecture for Cerium clients (see the sketch after this list).
You will leverage advanced data architectures to modernize our clients' data estates to better integrate with Generative AI platforms.
You will create data pipelines, develop data models, and provide strategic insights through data analytics and visualization tools.
You will assist in the adoption of data technologies and provide training and documentation.
You will help define the service offerings that Cerium designs, proposes, and sells to clients, including the products and services to deliver.
You will support the presales and sales teams in designing and proposing solutions as needed. This includes providing technical consulting services to clients by presenting solutions and performing administrative handoffs to ensure smooth transitions post-implementation.
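To ground the medallion-architecture item above, here is a hedged PySpark sketch of bronze/silver/gold layers backed by Delta tables; the paths and columns are placeholders, and the delta-spark package is assumed to be available on the cluster.

```python
# Medallion layout: raw (bronze) -> cleaned (silver) -> aggregated (gold).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw data as-is, schema-on-read.
bronze = spark.read.json("/landing/orders/")
bronze.write.format("delta").mode("append").save("/lake/bronze/orders")

# Silver: validated and de-duplicated records.
silver = (
    spark.read.format("delta").load("/lake/bronze/orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("order_total") >= 0)
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")

# Gold: business-level aggregates, ready for BI or Generative AI grounding.
gold = silver.groupBy("customer_id").agg(
    F.sum("order_total").alias("lifetime_value")
)
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_value")
```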
Requirements
What We're Looking For
Required
Bachelor's degree in computer science, data science, Information Technology, software engineering, or a related field or equivalent experience.
7+ years of experience in data engineering and solution design.
Must have experience with Microsoft data solutions, Fabric preferred.
Experience consulting with clients and translating complex technical concepts into simple, understandable terms for non-technical audiences.
Ability to manage multiple tasks, projects, and clients simultaneously.
Experience and interest in mentoring junior data professionals.
Customer service skills and experience.
Strong organization and documentation abilities.
Willing to travel 10-20% of the time depending on client needs.
Preferred
Certifications across data solutions preferred but not required: Microsoft Certified Fabric Analytics Engineer Associate, Microsoft Certified Fabric Data Engineer Associate, Microsoft Certified Azure Data Engineer Associate, Microsoft Certified Azure Data Science Associate
Experience with Microsoft Fabric.
Location: Spokane, WA; Boise, ID; Seattle, WA; Beaverton, OR
Schedule: Monday-Friday, Hybrid (2-days onsite, 3-days remote)
Compensation
Pay Range: $110,000 to $150,000
Actual compensation will vary and may be above or below the range based on various factors including but not limited to location, experience, and performance.
Benefits
· Medical, dental and vision Insurance
· Basic term life and AD&D insurance (fully paid for by the company)
· Paid time off annually: 15 days of PTO and 9 paid holidays
· 401(k) Plan and Employer Matching Contribution of 4%
· Wellness program that includes up to $500 in cash awards
· Employee Assistance Program
Data Engineer
Verato
Data engineer job in Portland, OR
This position is based in Mérida, Yucatán, México. If you do not live in/around Mérida and are still interested in this position, relocation will be required within 60 days of accepting the position. Verato will provide a relocation bonus of $42,500 MXN to help with your move.
ABOUT VERATO
As digital transformation and AI progress at lightning speed, organizations find themselves data-rich and insights-poor. Digital transformation's promise to drive better experiences and business performance is falling short. Data is often trapped in silos across disconnected systems of record, such as ERPs and EHRs, systems of engagement, such as CRMs, and systems of insight, such as cloud data platforms. These systems cannot integrate seamlessly without a single source of truth for identity, making it impossible to share and consume complete and trusted 360-degree views of people, organizations, and networks.
Verato, the identity intelligence experts, powers exceptional experiences everywhere by solving the problem that drives everything else - knowing who is who. The Verato MDM Cloud™, the next generation of MDM, delivers unprecedented identity intelligence by uniquely combining extraordinary identity resolution and enrichment with identity verification, AI-powered data governance, and advanced insights. Verato re-imagines MDM to be purpose-built and nimble to drive a complete and trusted 360-degree view of people, organizations, and networks across complex ecosystems with unmatched speed to value, enterprise-grade performance, and customer success. More than 75% of the US population flows through Verato, powering a single source of truth for identity across the critical industries of healthcare, life sciences, financial services, public sector, and beyond. For more information, visit verato.com.
Core to Verato's strategy for sustained growth is our commitment to building a strong, people-first culture that attracts, develops, and retains top talent worldwide. Verato operates on the simple principle that a company must prioritize its employees first and foremost. In return, these employees will take care of the company's customers, and in turn, those customers will support the company's shareholders. Verato believes in empowering teams with the best tools and development opportunities available. Staff are given chances to expand their knowledge in areas like technology (e.g., big data, distributed/cloud computing, complex algorithms), healthcare, and organizational development. As Verato continues a path of high growth and significant impact, every team member gains an influential front-row seat as we execute our business strategy. Together, we can bring about a profound and positive transformation in healthcare as we know it today.
VERATO VALUES
We are committed to continually raising the standard of excellence throughout the organization, from marketing to engineering to customer service. Our guiding principles are to Make a Difference, to be Trustworthy, and to be Customer Obsessed.
Verato employees have a precise focus on proactively protecting the privacy and security of all systems while always ensuring they are following documented policies and procedures.
About the Position
We are seeking a Data Engineer who is interested in joining a highly dynamic and creative development team. This is a leadership role within the Technology Department. Verato's SaaS software offering is a Master Data Management (Verato MDM Cloud) platform that provides our customers with a complete and trusted 360-degree view of their patients, consumers, and providers. This technical software development position is focused on enhancing an established product based on client needs. This position reports to the Director of Data Platform within the Technology Department.
Essential Functions and Responsibilities
Design data pipelines for API, streaming, and batch processing to facilitate data loads into the Snowflake data warehouse (see the sketch after this list).
Collaborate with other engineering and DevOps team members to implement, test, deploy, and operate data pipelines and ETL solutions.
Develop scripts to extract, load, and transform data, plus other utility functions
Optimize data pipelines, ETL processes, and data integrations for large-scale data analytics use cases
Build necessary components to ensure data quality, monitoring, alerting, integrity, and governance standards are maintained in data processing workflows
Navigate ambiguity and thrive in a fast-paced environment; take initiative and consistently deliver results with minimal supervision.
Perform data profiling and analysis required for development work, and troubleshoot/assist in the resolution of data issues.
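As a minimal sketch of the batch-load path into Snowflake described in the first responsibility, assuming data already staged in S3 behind a hypothetical external stage named @raw_stage and placeholder credentials:

```python
# Bulk-load staged Parquet files into a Snowflake table via COPY INTO.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",  # placeholders
    warehouse="LOAD_WH", database="RAW", schema="PUBLIC",
)
cur = conn.cursor()
try:
    cur.execute("""
        COPY INTO raw.public.events
        FROM @raw_stage/events/
        FILE_FORMAT = (TYPE = PARQUET)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    cur.close()
    conn.close()
```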
Requirements
Required Skills:
Bachelor's or master's degree in Computer Science, Information Systems, or related field
3+ years of experience in building and maintaining data pipelines and ETL/ELT processes in data-centric organizations
Strong coding skills using Python. Familiar with Python libraries related to data engineering and cloud services, including pandas, boto, etc.
At least 1-2 years of experience with AWS S3, SQS, Kinesis, Lambda, AWS DMS, Glue/EMR, AWS Batch or similar services
Hands-on experience building streaming and batch big data pipelines
Must have knowledge of building infrastructure in the AWS cloud using CloudFormation or Terraform
1+ years of working experience with the Snowflake cloud data warehouse, including Snowflake data shares, Snowpipe, SnowSQL, Tasks, etc.
Must have working knowledge of various databases, SQL and NoSQL
Must have working knowledge of various file formats like CSV, JSON, Avro, and Parquet.
Hands-on experience with cloud platforms such as AWS and Google Cloud
Strong experience with Apache Spark, especially using PySpark for large-scale data processing
Experience working with agile development methodology
Experienced in CI/CD and release processes, proficient in Git or other source control management systems, to streamline development and deployment workflows
Other Desired Skills:
Minimum 2 years of designing and implementing operational, production-grade, large-scale data pipelines, ETL/ELT, and data integration solutions.
Exposure to multi-tenant/multi-customer environments is a big plus.
Hands on experience with productionized data ingestion and processing pipelines
Strong understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies.
Experience working with structured, semi-structured, and unstructured data
Familiarity with MongoDB or similar NoSQL database systems
Some familiarity with Apache Airflow and building/maintaining DAGs
AWS Data Migration Consultant
Slalom
Data engineer job in Portland, OR
Candidates can live within commutable distance to any Slalom office in the US. We have a hybrid and flexible environment.
Who You'll Work With
As a modern technology company, we've never met a technical challenge we didn't like. We enable our clients to learn from their data, create incredible digital experiences, and make the most of new technologies. We blend design, engineering, and analytics expertise to build the future. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
We are seeking an experienced Cloud Data Migration Architect with deep expertise in SQL Server, Oracle, DB2, or a combination of these platforms, to lead the design, migration, and optimization of scalable database solutions in the AWS cloud. This role will focus on modernizing on-premises database systems by architecting high-performance, secure, and reliable AWS-hosted solutions.
As a key technical leader, you will work closely with data engineers, cloud architects, and business stakeholders to define data strategies, lead complex database migrations, build out ETL pipelines, and optimize performance across legacy and cloud-native environments.
What You'll Do
* Design and optimize database solutions on AWS, including Amazon RDS, EC2-hosted instances, and advanced configurations like SQL Server Always On or Oracle RAC (Real Application Clusters).
* Lead and execute cloud database migrations using AWS Database Migration Service (DMS), Schema Conversion Tool (SCT), and custom automation tools (see the sketch after this list).
* Architect high-performance database schemas, indexing strategies, partitioning models, and query optimization techniques.
* Optimize complex SQL queries, stored procedures, functions, and views to ensure performance and scalability in the cloud.
* Implement high-availability and disaster recovery (HA/DR) strategies including Always-On, Failover Clusters, Log Shipping, and Replication, tailored to each RDBMS.
* Ensure security best practices are followed including IAM-based access control, encryption, and compliance with industry standards.
* Collaborate with DevOps teams to implement Infrastructure-as-Code (IaC) using tools like Terraform, CloudFormation, or AWS CDK.
* Monitor performance using tools such as AWS CloudWatch, Performance Insights, Query Store, Dynamic Management Views (DMVs), or Oracle-native tools.
* Work with software engineers and data teams to integrate cloud databases into enterprise applications and analytics platforms.
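As a hedged sketch of the DMS-driven migration work listed above, the snippet below creates and starts a replication task with boto3; every ARN is a placeholder, and the source/target endpoints and replication instance are assumed to already exist.

```python
# Create and start an AWS DMS replication task (full load + ongoing CDC).
import boto3

dms = boto3.client("dms", region_name="us-west-2")

task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-rds-migration",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=(
        '{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
        '"object-locator":{"schema-name":"SALES","table-name":"%"},'
        '"rule-action":"include"}]}'
    ),
)
dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```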
What You'll Bring
* 5+ years of experience in database architecture, design, and administration with at least one of the following: SQL Server, Oracle, or DB2.
* Expertise in one or more of the following RDBMS platforms: Microsoft SQL Server, Oracle, DB2.
* Hands-on experience with AWS database services (RDS, EC2-hosted databases).
* Strong understanding of HA/DR solutions and cloud database design patterns.
* Experience with ETL development and data integration, using tools such as SSIS, AWS Glue, or custom solutions.
* Familiarity with AWS networking components (VPCs, security groups) and hybrid cloud connectivity.
* Strong troubleshooting and analytical skills to resolve complex database and performance issues.
* Ability to work independently and lead database modernization initiatives in collaboration with engineering and client stakeholders.
Nice to Have
* AWS certifications such as AWS Certified Database - Specialty or AWS Certified Solutions Architect - Professional.
* Experience with NoSQL databases or hybrid data architectures.
* Knowledge of analytics and big data tools (e.g., Snowflake, Redshift, Athena, Power BI, Tableau).
* Familiarity with containerization (Docker, Kubernetes) and serverless technologies (AWS Lambda, Fargate).
* Experience with DB2 on-premise or cloud-hosted environments.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position, the target base salary pay range in the following locations:
Boston, Houston, Los Angeles, Orange County, Seattle, San Diego, Washington DC, New York, New Jersey: Consultant $105,000-$147,000; Senior Consultant $120,000-$169,000; Principal $133,000-$187,000.
In all other markets: Consultant $96,000-$135,000; Senior Consultant $110,000-$155,000; Principal $122,000-$172,000.
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We will accept applications until 1/31/2026 or until the positions are filled.
Sr Data Engineer (MFT - IBM Sterling)
The Hertz Corporation
Data engineer job in Salem, OR
**A Day in the Life:** The IBM Sterling Senior MFT Engineer will have expertise in Sterling B2B Integrator, Sterling File Gateway, Sterling Secure Proxy, and Control Center. This position requires expert-level knowledge of these technologies. You'll provide third-level support for core hardware, software, data, and infrastructure components within the B2B integration team. The software focus areas will include software infrastructure components, continuous improvement, and automation related to middleware and enterprise data movement within a mission-critical, multi-site enterprise environment.
The ideal candidate will have a passion for technology and the ability to create change and facilitate this transformation. They will have experience tailoring software design and developing and implementing solutions in support of data quality and automation. They will need to understand the requirements of a project, test the performance of the infrastructure, and verify that the requirements have been successfully met.
We expect the starting salary to be around $135k but will be commensurate with experience.
**What You'll Do:**
TECHNICAL LEADERSHIP
+ Communication with internal and external business users on Sterling Integrator mappings
+ Making changes to existing partner integrations to meet internal and external requirements
+ Design, develop and implement solutions based on standards and processes that establish consistency across the enterprise data, reduce risks, and promote efficiencies in support of the organizational goals and objectives.
+ Diagnose and troubleshoot complex issues, restore services and perform root cause analysis.
+ Facilitate the review, vetting of these designs with the architecture governance bodies, as required.
+ Be aware of all aspects of security related to the Sterling environment and integrations
INNOVATIVE INFRASTRUCTURE & PROBLEM SOLVING
+ Strive for engineering excellence by simplifying, optimizing, and automating processes and workflows.
TEAMWORK & COMMUNICATION
+ Superior and demonstrated team-building and development skills to harness powerful teams
+ Ability to communicate effectively with different levels of seniority within the organization
+ Provide timely updates so that progress against each individual incident can be tracked as required
+ Write and review high quality technical documentation
CONTROL & AUDIT
+ Ensure their workstation and all processes and procedures follow organization standards
CONTINUOUS IMPROVEMENT
+ Encourages and maintains a 'best practice sharing' culture, always striving to find ways to improve service and change mindsets.
**What We're Looking For:**
+ Bachelor's degree in Engineering, Statistics, Computer Science or other quantitative fields, required
+ 5+ years of IT experience
+ 4+ years' experience working with Sterling B2B Integrator, Sterling File Gateway, PCM (preferred)
+ 3+ years' experience with scripting to enable automation of standard activities (example: Ansible, Python, Bash, Java)
+ Strong interpersonal and communication skills with Agile/Scrum experience.
+ Strong problem solving and critical thinking skills with a proven record for identifying and diagnosing problems, and solving complex problems with simple, logical solutions with the ability to develop custom setups.
+ Outstanding verbal, written, presentation, facilitation, and interaction skills, including ability to effectively communicate technical and non-technical issues and concepts to multiple organization levels.
+ Travel, transportation, or hospitality experience preferred
+ Experience designing application data models for mobile or web applications preferred
+ Excellent written and verbal communication skills.
+ Flexibility in scheduling which may include nights, weekends, and holidays
**What You'll Get:**
+ Up to 40% off the base rate of any standard Hertz Rental
+ Paid Time Off
+ Medical, Dental & Vision plan options
+ Retirement programs, including 401(k) employer matching
+ Paid Parental Leave & Adoption Assistance
+ Employee Assistance Program for employees & family
+ Educational Reimbursement & Discounts
+ Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
+ Perks & Discounts -Theme Park Tickets, Gym Discounts & more
The Hertz Corporation operates the Hertz, Dollar Car Rental, Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, The Caribbean, Latin America, Africa, the Middle East, Asia, Australia and New Zealand. The Hertz Corporation is one of the largest worldwide airport general use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.
**US EEO STATEMENT**
At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company.
Individuals are encouraged to apply for positions because of the characteristics that make them unique.
EOE, including disability/veteran
BigData Engineer / Architect
Nitor Infotech
Data engineer job in Portland, OR
The hunt is for a strong Big Data professional, a team player with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.
Role: Big Data Engineer
Location: Portland, OR
Duration: Full Time
Skill Matrix:
MapReduce - Required
Apache Spark - Required
Informatica PowerCenter - Required
Hive - Required
Apache Hadoop - Required
Core Java / Python - Highly Desired
Healthcare Domain Experience - Highly Desired
Job Description
Responsibilities and Duties
Participate in technical planning & requirements gathering phases including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications.
Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and troubleshooting any existing issues.
Implementation, troubleshooting, and optimization of distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elasticsearch, Storm, Kafka, etc., in both on-premises and cloud deployment models to solve large-scale processing problems
Design, enhance, and implement ETL/data ingestion platforms on the cloud (see the sketch after this list).
Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for an enterprise-level data warehouse
Capable of investigating, familiarizing with, and mastering new data sets quickly
Strong troubleshooting and problem-solving skills in large data environment
Experience with building data platform on cloud (AWS or Azure)
Experience in using Python, Java, or any other language to solve data problems
Experience in implementing SDLC best practices and Agile methods.
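An illustrative PySpark-on-Hive ETL in the spirit of the duties above; the database, table, and column names are invented for the example.

```python
# Read a Hive source table, clean it, and write a partitioned curated table.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("claims-etl")
    .enableHiveSupport()     # lets Spark see Hive-managed tables
    .getOrCreate()
)

claims = spark.table("raw_db.claims")

cleaned = (
    claims.filter(F.col("claim_amount").isNotNull())
    .withColumn("claim_month", F.date_format("service_date", "yyyy-MM"))
)

(cleaned.write.mode("overwrite")
    .partitionBy("claim_month")
    .saveAsTable("curated_db.claims_clean"))
```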
Qualifications
Required Skills:
Data architecture/ Big Data/ ETL environment
Experience with ETL design using tools Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi or equivalent
Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.)/Azure (HDInsight, Data Lake Design)
Building and managing hosted big data architecture, toolkit familiarity in: Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi
Foundational data management concepts - RDM and MDM
Experience in working with JIRA/Git/Bitbucket/JUNIT and other code management toolsets
Strong hands-on knowledge of/using solutioning languages like: Java, Scala, Python - any one is fine
Healthcare Domain knowledge
Required Experience, Skills and Qualifications
Qualifications:
Bachelor's degree with a minimum of 6 to 9+ years of relevant experience, or equivalent.
Extensive experience in data architecture/Big Data/ ETL environment.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Sr. Data Engineer
It Vision Group
Data engineer job in Portland, OR
Job Description
Title: Sr. Data Engineer
Duration: 12 Months+
Roles & Responsibilities
Perform data analysis according to business needs
Translate functional business requirements into high-level and low-level technical designs
Design and implement distributed data processing pipelines using Apache Spark, Apache Hive, Python, and other tools and languages prevalent in a modern analytics platform
Create and schedule workflows using Apache Airflow or similar job orchestration tooling (see the sketch after this list)
Build utilities, functions, and frameworks to better enable high-volume data processing
Define and build data acquisitions and consumption strategies
Build and incorporate automated unit tests, participate in integration testing efforts
Work with teams to resolve operational & performance issues
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and followed.
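A minimal Airflow sketch of the scheduling responsibility above: a daily DAG that submits a Spark job and then runs a data-quality check. The paths, script names, and the check itself are placeholders, written against the Airflow 2.x API.

```python
# Daily DAG: spark-submit transform, then a row-count quality gate.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = BashOperator(
        task_id="spark_transform",
        bash_command=(
            "spark-submit --master yarn "
            "/opt/jobs/transform_events.py --ds {{ ds }}"  # hypothetical job
        ),
    )
    quality_check = BashOperator(
        task_id="row_count_check",
        bash_command="python /opt/jobs/check_counts.py --ds {{ ds }}",
    )
    transform >> quality_check  # run the check only after the transform
```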
Tech Stack
Apache Spark
Apache Spark Streaming using Apache Kafka
Apache Hive
Apache Airflow
Python
AWS EMR and S3
Snowflake
SQL
Other Tools & Technologies: PyCharm, Jenkins, GitHub.
Apache NiFi (Optional)
Scala (Optional)
Jr. Data Engineer
Insight Global
Data engineer job in Portland, OR
An insurance company is looking for a Jr. Data Engineer to join the Modeling Optimization team within the Actuarial Transformation team. This person needs to be eager to learn and grow within this role. This position requires investigating, researching, and reporting findings to the lead analyst on the team. There will be hands-on development in Python and interfacing with technical and business team members. This position will provide opportunities to explore different technologies and learn different parts of the business; asking questions and leaning into the mentor's knowledge is encouraged!
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************
Skills and Requirements
1+ years of experience as a Data Engineer
Experience with Python
Experience analyzing systems, data, and processes for improvements
Ability to gather requirements, investigate, and research
Dataiku experience
Databricks
Sr. Data Engineer
Concora Credit
Data engineer job in Beaverton, OR
As a Sr. Data Engineer, you'll help drive Concora Credit's Mission to enable customers to Do More with Credit - every single day.
The impact you'll have at Concora Credit:
We are seeking a Sr. Data Engineer with deep expertise in Azure and Databricks to lead the design, development, and optimization of scalable data pipelines and platforms. You'll be responsible for building robust data solutions that power analytics, reporting, and machine learning across the organization using Azure cloud services and Databricks.
We hire people, not positions. That's because, at Concora Credit, we put people first, including our customers, partners, and Team Members. Concora Credit is guided by a single purpose: to help non-prime customers do more with credit. Today, we have helped millions of customers access credit. Our industry leadership, resilience, and willingness to adapt ensure we can help our partners responsibly say yes to millions more. As a company grounded in entrepreneurship, we're looking to expand our team and are looking for people who foster innovation, strive to make an impact, and want to Do More! We're an established company with over 20 years of experience, but now we're taking things to the next level. We're seeking someone who wants to impact the business and play a pivotal role in leading the charge for change.
Responsibilities
As our Sr. Data Engineer, you will:
Design and develop scalable, efficient data pipelines using Azure Databricks (see the sketch after this list)
Build and manage data ingestion, transformation, and storage solutions leveraging Azure Data Factory, Azure Data Lake, and Delta Lake
Implement CI/CD for data workflows using tools like Azure DevOps, Git, and Terraform
Optimize performance and cost efficiency across large-scale distributed data systems
Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver reliable, reusable datasets
Provide guidance and mentor junior engineers and actively contribute to data platform best practices
Monitor, troubleshoot, and optimize existing pipelines and infrastructure to ensure reliability and scalability
These duties must be performed with or without reasonable accommodation.
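As a hedged sketch of the first responsibility, here is an incremental upsert into a Delta table using the delta-spark API available on Databricks; the table and column names are placeholders.

```python
# MERGE changed records into a Delta table (incremental upsert).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("parquet").load("/mnt/raw/accounts_changed/")

target = DeltaTable.forName(spark, "silver.accounts")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.account_id = s.account_id")
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert brand-new rows
    .execute()
)
```

MERGE keeps the pipeline idempotent: re-running the same batch converges to the same table state instead of duplicating rows.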
We know experience comes in many forms and that many skills are transferable. If your experience is close to what we're looking for, consider applying. Diversity has made us the entrepreneurial and innovative company that we are today.
Qualifications
Requirements:
5+ years of experience in data engineering, with a strong focus on Azure cloud technologies
Experience with Azure Databricks, Azure Data Lake, and Data Factory, including PySpark, SQL, Python, and Delta Lake
Strong proficiency in Databricks and Apache Spark
Solid understanding of data warehousing, ETL/ELT, and data modeling best practices
Experience with version control, CI/CD pipelines, and infrastructure as code
Knowledge of Spark performance tuning, partitioning, and job orchestration
Excellent problem-solving skills and attention to detail
Strong communication and collaboration abilities across technical and non-technical teams
Ability to work independently and lead in a fast-paced, agile environment
Passion for delivering clean, high-quality, and maintainable code
Preferred Qualifications:
Experience with Unity Catalog, Databricks Workflows, and Delta Live Tables
Familiarity with DevOps practices or Terraform for Azure resource provisioning
Understanding of data security, RBAC, and compliance in cloud environments
Experience integrating Databricks with Power BI or other analytics platforms
Exposure to real-time data processing using Kafka, Event Hubs, or Structured Streaming
What's In It For You:
Medical, Dental and Vision insurance for you and your family
Relax and recharge with Paid Time Off (PTO)
6 company-observed paid holidays, plus 3 paid floating holidays
401k (after 90 days) plus employer match up to 4%
Pet Insurance for your furry family members
Wellness perks including onsite fitness equipment at both locations, EAP, and access to the Headspace App
We invest in your future through Tuition Reimbursement
Save on taxes with Flexible Spending Accounts
Peace of mind with Life and AD&D Insurance
Protect yourself with company-paid Long-Term Disability and voluntary Short-Term Disability
Concora Credit provides equal employment opportunities to all Team Members and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Employment-based visa sponsorship is not available for this role.
Concora Credit is an equal opportunity employer (EEO).
Please see the Concora Credit Privacy Policy for more information on how Concora Credit processes your personal information during the recruitment process and, if applicable, based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact caprivacynotice@concoracredit.com.
$84k-118k yearly est. 28d ago
Sr. Hadoop Developer
Bridge Tech 4.2
Data engineer job in Beaverton, OR
Job Description
Typically requires a Bachelor's degree and a minimum of 5 years of directly relevant work experience. Client is embarking on a big data platform in Consumer Digital using a Hadoop Distributed File System cluster. As a Sr. Hadoop Developer you will work with a variety of talented client teammates and be a driving force for building solutions. You will be working on development projects related to commerce and web analytics.
Responsibilities:
•Design and implement MapReduce jobs to support distributed processing using Java, Cascading, Python, Hive, and Pig; ability to design and implement end-to-end solutions (see the Hadoop Streaming sketch after this list)
•Build libraries, user defined functions, and frameworks around Hadoop
•Research, evaluate and utilize new technologies/tools/frameworks around Hadoop eco system
•Develop user-defined functions to provide custom Hive and Pig capabilities
•Define and build data acquisitions and consumption strategies
•Define & develop best practices
•Work with support teams in resolving operational & performance issues
•Work with architecture/engineering leads and other teams on capacity planning
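To ground the Python side of the MapReduce work above, here is a classic Hadoop Streaming mapper/reducer pair that counts page hits. The input layout (tab-separated lines with the URL in the second column) and the script name are assumptions for illustration.

```python
#!/usr/bin/env python
# hitcount.py -- hypothetical Hadoop Streaming job; invoked as:
#   -mapper "hitcount.py map" -reducer "hitcount.py reduce"
import sys

def mapper():
    # Emit (url, 1) per record; assumes tab-separated input, url in column 2.
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > 1:
            print(f"{fields[1]}\t1")

def reducer():
    # Streaming sorts by key, so equal keys arrive contiguously.
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

The same script is passed as both -mapper and -reducer to the hadoop-streaming jar, with the argument selecting the phase.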
Qualifications:
•MS/BS degree in a computer science field or related discipline
•6+ years' experience in large-scale software development
•1+ year experience in Hadoop
•Strong Java programming, shell scripting, Python, and SQL
•Strong development skills around Hadoop, MapReduce, Hive, Pig, Impala
•Strong understanding of Hadoop internals
•Good understanding of Avro, JSON, and other serialization and compression formats
•Experience with build tools such as Maven
•Experience with databases like Oracle
•Experience with performance/scalability tuning, algorithms and computational complexity
•Experience (at least familiarity) with data warehousing, dimensional modeling and ETL development
•Ability to understand ERDs and relational database schemas
•Proven ability to work with cross-functional teams to deliver appropriate resolutions
Nice to have:
•Experience with open source NoSQL technologies such as HBase and Cassandra
•Experience with messaging & complex event processing systems such as Kafka and Storm
•Experience with machine learning frameworks
•Statistical analysis with Python, R or similar
Additional Information
All your information will be kept confidential according to EEO guidelines.
$90k-118k yearly est. 60d+ ago
208406 / Data Warehouse BI Consultant
Procom Services
Data engineer job in Hillsboro, OR
Procom is a leading provider of professional IT services and staffing to businesses and governments in Canada. With revenues over $500 million, Procom has been recognized by the Branham Group as the 3rd largest professional services firm in Canada and is now the largest “Canadian-Owned” IT staffing/consulting company.
Procom's areas of staffing expertise include:
• Application Development
• Project Management
• Quality Assurance
• Business/Systems Analysis
• Data Warehouse & Business Intelligence
• Infrastructure & Network Services
• Risk Management & Compliance
• Business Continuity & Disaster Recovery
• Security & Privacy
Specialties:
• Contract Staffing (Staff Augmentation)
• Permanent Placement (Staff Augmentation)
• ICAP (Contractor Payroll)
• Flextrack (Vendor Management System)
Job Description
Consolidating data from multiple online transactional systems, scheduling tools, and defect management tools; gathering and landing the information in a data warehouse; and then building an online data tool on top of it. We need a Data Warehouse BI person to architect the data warehouse environment and build and maintain extract loads. The group needs small applications built to gather data, and the person needs to grow the value of this work: identifying what the group wants and growing the entire deliverable as a model for other groups in the future.
Job Duties: The candidate will possess SharePoint, .NET, Java, and MS SQL skills and will apply those skills to create/extend ETL, SQL, and application code for SharePoint Business Intelligence and web applications. The candidate will also troubleshoot, debug, identify, and correct problems related to the import and presentation of ETL data (a small verification sketch follows).
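As one small illustration of the troubleshooting half of this role, the sketch below pulls per-batch row counts from the warehouse with Python and pyodbc to verify that an ETL import landed. The server, database, and table names are placeholders, not the client's real objects.

```python
# Hypothetical verification query; connection details and names are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-sql;DATABASE=ReportingDW;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("""
    SELECT load_batch_id, COUNT(*) AS rows_loaded
    FROM dbo.DefectFacts
    GROUP BY load_batch_id
    ORDER BY load_batch_id DESC
""")
for batch_id, rows_loaded in cur.fetchmany(5):   # five most recent batches
    print(batch_id, rows_loaded)
conn.close()
```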
Qualifications
Strong development background in SharePoint BI or other Business Intelligence applications.
Experienced in developing stored procedures and SSIS packages, with advanced data development skills.
Solid software development skills, including Java, JavaScript, HTML, T-SQL, CSS, XML, and ASP.NET
3-5 years of recent experience with the required skills and 7-10 years of overall experience with all tools.
Degree Type: BS in a relevant field (CS, Engineering, etc.)
Additional Information
$86k-117k yearly est. 60d+ ago
Data Architect
Advance Local 3.6
Data engineer job in Portland, OR
**Advance Local** is looking for a **Data Architect** to lead the design and implementation of enterprise-level data solutions within our modern cloud data platform. This role combines deep technical expertise in analytics engineering with leadership responsibilities to ensure the delivery of well-documented, tested, and high-quality data assets that enable AI, data products, advanced analytics, and self-service reporting. You'll guide strategic data initiatives, mentor a team of analytics engineers, and collaborate with data engineering and business stakeholders to deliver impactful, scalable solutions.
The base salary range is $150,000 - $165,000 per year.
**What you'll be doing:**
+ Architect and oversee scalable data models, pipelines, and frameworks in Snowflake using dbt, ensuring that they meet quality standards for AI agents, advanced analytics, and self-service reporting (a minimal dbt model sketch follows this list).
+ Lead the design and governance of analytics-ready data models, ensuring they are well-modeled, performant, and accessible to downstream consumers.
+ Drive rapid prototyping of new data products and features, providing technical direction and hands-on guidance when needed.
+ Establish and enforce data quality, testing, and documentation standards across all data assets, ensuring reliability and trustworthiness.
+ Develop advanced solutions for audience data modeling and identity resolution, supporting personalization and segmentation strategies.
+ Partner with Audience Strategy and Insights teams to translate requirements into technical solutions and automation.
+ Collaborate with the Lead Data Engineer on data integration patterns and ensure seamless handoffs between raw data ingestion and analytics-ready models.
+ Establish data architecture standards and development practices (version control, CI/CD, testing frameworks) that enable team scalability.
+ Enable data accessibility and integration solutions that support both technical and non-technical users across the organization.
+ Provide technical leadership to a Data Manager and their team of analytics engineers, fostering a culture of best practices, code review, and continuous improvement.
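To make the dbt-on-Snowflake stack concrete, here is a minimal dbt Python model of the kind this role would govern (dbt-snowflake runs Python models on Snowpark). The model, table, and column names are invented for illustration.

```python
# models/marts/fct_daily_orders.py -- illustrative names throughout.
import snowflake.snowpark.functions as F

def model(dbt, session):
    # Materialize as a table; dbt handles the CREATE TABLE AS plumbing.
    dbt.config(materialized="table")

    # ref() resolves the upstream staging model to a Snowpark DataFrame.
    orders = dbt.ref("stg_orders")

    daily = orders.with_column("order_date", F.to_date(F.col("ordered_at")))
    return (
        daily.group_by("order_date")
             .agg(F.count(F.col("order_id")).alias("order_count"),
                  F.sum(F.col("amount")).alias("revenue"))
    )
```

The testing and documentation standards named in the list above would live alongside a model like this in its schema.yml.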
**Our ideal candidate will have the following:**
+ Bachelor's or master's degree in computer science, data engineering, information systems, or related field
+ Minimum ten years' experience in data engineering, data analytics engineering, architecture, or related roles, with proven experience leading data teams and managing complex data ecosystems
+ Expert level proficiency in dbt and Snowflake with demonstrated ability to build production-grade data models and pipelines
+ Strong knowledge of cloud platforms (AWS, Azure, GCP) and data warehousing best practices
+ Proficiency in big data technologies (Spark, Hadoop) and streaming frameworks
+ Familiarity with data governance, security, and compliance standards
+ Experience with audience segmentation, marketing analytics, or customer data platforms
+ Knowledge of machine learning pipelines, advanced analytics and AI applications
+ Strategic thinking and ability to align data initiatives with business objectives
+ Strong communication and stakeholder management skills
+ Proven ability to lead cross-functional teams and drive organizational change
+ Experience building data solutions that support self-service analytics and data democratization
**Additional Information**
Advance Local Media offers competitive pay and a comprehensive benefits package with affordable options for your healthcare including medical, dental and vision plans, mental health support options, flexible spending accounts, fertility assistance, a competitive 401(k) plan to help plan for your future, generous paid time off, paid parental and caregiver leave and an employee assistance program to support your work/life balance, optional legal assistance, life insurance options, as well as flexible holidays to honor cultural diversity.
Advance Local Media is one of the largest media groups in the United States, which operates the leading news and information companies in more than 20 cities, reaching 52+ million people monthly with our quality, real-time journalism and community engagement. Our company is built upon the values of Integrity, Customer-first, Inclusiveness, Collaboration and Forward-looking. For more information about Advance Local, please visit ******************** .
Advance Local Media includes MLive Media Group, Advance Ohio, Alabama Media Group, NJ Advance Media, Advance Media NY, MassLive Media, Oregonian Media Group, Staten Island Media Group, PA Media Group, ZeroSum, Headline Group, Adpearance, Advance Aviation, Advance Healthcare, Advance Education, Advance National Solutions, Advance Originals, Advance Recruitment, Advance Travel & Tourism, BookingsCloud, Cloud Theory, Fox Dealer, Hoot Interactive, Search Optics, Subtext.
_Advance Local Media is proud to be an equal opportunity employer, encouraging applications from people of all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, genetic information, national origin, age, disability, sexual orientation, marital status, veteran status, or any other category protected under federal, state or local law._
_If you need a reasonable accommodation because of a disability for any part of the employment process, please contact Human Resources and let us know the nature of your request and your contact information._
Advance Local Media does not provide sponsorship for work visas or employment authorization in the United States. Only candidates who are legally authorized to work in the U.S. will be considered for this position.
$150k-165k yearly 60d+ ago
Consultant, Data Engineer
IBM 4.7
Data engineer job in Portland, OR
**Introduction** At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
**Your role and responsibilities**
We are in search of a skilled Consultant Data Engineer to join our expanding team of experts. This role will be pivotal in the design and development of Snowflake Data Cloud solutions, encompassing responsibilities such as constructing data ingestion pipelines (a minimal sketch follows the requirements list below), establishing sound data architecture, and implementing stringent data governance and security protocols.
The ideal candidate brings experience as a proficient data pipeline builder and adept data wrangler, deriving satisfaction from optimizing data systems from their foundational stages. Collaborating closely with database architects, data analysts, and data scientists, the Data Engineer will play a crucial role in ensuring a consistent and optimal data delivery architecture across ongoing customer projects.
This position demands a self-directed individual comfortable navigating the diverse data needs of multiple teams, systems, and products. If you are enthusiastic about the prospect of contributing to a startup environment and supporting our customers in their next generation of data initiatives, we invite you to explore this opportunity.
As of April 2025, Hakkoda has been acquired by IBM and will be integrated into the IBM organization. Your recruitment process will be managed by IBM. IBM will be the hiring entity.
This role can be performed from anywhere in the US.
**Required technical and professional expertise**
* Bachelor's degree in engineering, computer science or equivalent area
* 3+ years in related technical roles, with experience in data management, database development, ETL, and/or data prep domains.
* Experience developing data warehouses.
* Experience building ETL / ELT ingestion pipelines.
* Proficiency in using cloud platform services for data engineering tasks, including managed database services (Snowflake and its pros and cons vs. Redshift, BigQuery, etc.) and data processing services (AWS Glue, Azure Data Factory, Google Dataflow).
* Skills in designing and implementing scalable and cost-effective solutions using cloud services, with an understanding of best practices for security and compliance.
* Knowledge of how to manipulate, process and extract value from large disconnected datasets.
* SQL and Python scripting experience required; Scala and JavaScript are a plus.
* Cloud experience (AWS, Azure or GCP) is a plus.
* Knowledge of any of the following tools is also a plus: Snowflake, Matillion, Fivetran, or dbt.
* Strong interpersonal skills including assertiveness and ability to build strong client relationships.
* Strong project management and organizational skills.
* Ability to support and work with cross-functional and agile teams in a dynamic environment.
* Advanced English required.
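As a rough sketch of the ingestion-pipeline work this role describes, one common pattern is driving a Snowflake bulk load from Python with the official connector. The warehouse, database, stage, and table names below are assumptions.

```python
# Hypothetical bulk-load step; account, names, and stage are assumptions.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="RAW",
    schema="SALES",
)
try:
    cur = conn.cursor()
    # COPY INTO tracks load history per file, so re-runs skip loaded files.
    cur.execute("""
        COPY INTO raw.sales.orders
        FROM @raw.sales.orders_stage
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```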
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
$71k-96k yearly est. 21d ago
Sr Data Engineer, Palantir
The Hertz Corporation 4.3
Data engineer job in Salem, OR
**A Day in the Life:** We are seeking a talented **Sr Data Engineer, Palantir (experience required)** to join our Strategic Data & Analytics team working on Hertz's strategic applications and initiatives. This role will work in multi-disciplinary teams rapidly building high-value products that directly impact our financial performance and customer experience. You'll build cloud-native, large-scale, employee-facing software using modern technologies including React, Python, Java, AWS, and Palantir Foundry.
The ideal candidate will have strong development skills across the full stack, a growth mindset, and a passion for building software at a sustainable pace in a highly productive engineering culture. Experience with Palantir Foundry is highly preferred but not required - we're looking for engineers who are eager to learn and committed to engineering excellence.
We expect the starting salary to be around $135k, but it will be commensurate with experience.
**What You'll Do:**
Day-to-Day Responsibilities
+ Work in balanced teams consisting of Product Managers, Product Designers, and engineers
+ Test first - We strive for Test-Driven Development (TDD) for all production code
+ CI (Continuous Integration) everything - Automation is core to our development process
+ Architect user-facing interfaces and design functions that help users visualize and interact with their data
+ Contribute to both frontend and backend codebases to enhance and develop projects
+ Build software at a sustainable pace to ensure longevity, reliability, and higher quality output
Frontend Development
+ Design and develop responsive, intuitive user interfaces using React and modern JavaScript/TypeScript
+ Build reusable component libraries and implement best practices for frontend architecture
+ Generate UX/UI designs (no dedicated UX/UI designers on team) with considerations for usability and efficiency
+ Optimize applications for maximum speed, scalability, and accessibility
+ Develop large-scale, web and mobile software utilizing appropriate technologies for use by our employees
Backend Development
+ Develop and maintain RESTful APIs and backend services using Python or Java
+ Design and implement data models and database schemas
+ Deploy to cloud environments (primarily AWS)
+ Integrate with third-party services and APIs
+ Write clean, maintainable, and well-documented code
Palantir Foundry Development (Highly Preferred)
+ Build custom applications and integrations within the Palantir Foundry platform
+ Develop Ontology-based applications leveraging object types, link types, and actions
+ Create data pipelines and transformations using Python transforms (see the sketch after this list)
+ Implement custom widgets and user experiences using the Foundry SDK
+ Design and build functions that help users visualize and interact with their data
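For readers unfamiliar with Foundry, a Python transform in the transforms-python API looks roughly like the sketch below; the dataset paths and column names are invented, not Hertz's actual pipelines.

```python
# Hypothetical Foundry transform; paths and columns are assumptions.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/Example/analytics/clean_reservations"),  # assumed output dataset
    reservations=Input("/Example/raw/reservations"),  # assumed input dataset
)
def clean_reservations(reservations):
    # Inputs arrive as Spark DataFrames; the return value is written to Output.
    return (
        reservations
        .filter(F.col("status").isNotNull())
        .withColumn("pickup_date", F.to_date(F.col("pickup_ts")))
    )
```

Transforms like this produce the backing datasets that Ontology object types are mapped onto, which connects this work to the Ontology responsibilities above.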
Product Development & Delivery
+ Research problems and break them into deliverable parts
+ Work with a Lean mindset and deliver value quickly
+ Participate in all stages of the product development and deployment lifecycle
+ Conduct code reviews and provide constructive feedback to team members
+ Work with product managers and stakeholders to define requirements and deliverables
+ Contribute to architectural decisions and technical documentation
**What We're Looking For:**
+ Experience with Palantir Foundry platform, required
+ 5+ years in web front-end or mobile development
+ Bachelor's or Master's degree in Computer Science or other related field, preferred
+ Strong proficiency in React, JavaScript/TypeScript, HTML, and CSS for web front-end development
+ Strong knowledge of one or more Object Oriented Programming or Functional Programming languages such as JavaScript, Typescript, Java, Python, or Kotlin
+ Experience with RESTful API design and development
+ Experience deploying to cloud environments (AWS preferred)
+ Understanding of version control systems, particularly GitHub
+ Experience with relational and/or NoSQL databases
+ Familiarity with modern frontend build tools and package managers (e.g., Webpack, npm, yarn)
+ Experience with React, including React Native for mobile app development, preferred
+ Experience in Android or iOS development, preferred
+ Experience with data visualization libraries (e.g., D3.js, Plotly, Chart.js), preferred
+ Familiarity with CI/CD pipelines and DevOps practices, preferred
+ Experience with Spring framework, preferred
+ Working knowledge of Lean, User Centered Design, and Agile methodologies
+ Strong communication skills and ability to collaborate effectively across teams
+ Growth mindset - Aptitude and willingness to learn new technologies
+ Empathy - Kindness and empathy when building software for end users
+ Pride - Takes pride in engineering excellence and quality craftsmanship
+ Customer obsession - Obsessed with the end user experience of products
+ Strong problem-solving skills and attention to detail
+ Ability to work independently and as part of a balanced, multi-disciplinary team
+ Self-motivated with a passion for continuous learning and improvement
**What You'll Get:**
+ Up to 40% off the base rate of any standard Hertz Rental
+ Paid Time Off
+ Medical, Dental & Vision plan options
+ Retirement programs, including 401(k) employer matching
+ Paid Parental Leave & Adoption Assistance
+ Employee Assistance Program for employees & family
+ Educational Reimbursement & Discounts
+ Voluntary Insurance Programs - Pet, Legal/Identity Theft, Critical Illness
+ Perks & Discounts -Theme Park Tickets, Gym Discounts & more
The Hertz Corporation operates the Hertz, Dollar Car Rental, and Thrifty Car Rental brands in approximately 9,700 corporate and franchisee locations throughout North America, Europe, the Caribbean, Latin America, Africa, the Middle East, Asia, Australia, and New Zealand. The Hertz Corporation is one of the largest worldwide airport general-use vehicle rental companies, and the Hertz brand is one of the most recognized in the world.
**US EEO STATEMENT**
At Hertz, we champion and celebrate a culture of diversity and inclusion. We take affirmative steps to promote employment and advancement opportunities. The endless variety of perspectives, experiences, skills and talents that our employees invest in their work every day represent a significant part of our culture - and our success and reputation as a company.
Individuals are encouraged to apply for positions because of the characteristics that make them unique.
EOE, including disability/veteran
$135k yearly 56d ago
Big Data Engineer / Architect
Nitor Infotech
Data engineer job in Portland, OR
The hunt is for a strong Big Data professional, a team player with the ability to manage effective relationships with a wide range of stakeholders (customers & team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.
Role: Big Data Engineer
Location: Portland, OR
Duration: Full Time
Skill Matrix:
MapReduce - Required
Apache Spark - Required
Informatica PowerCenter - Required
Hive - Required
Apache Hadoop - Required
Core Java / Python - Highly Desired
Healthcare Domain Experience - Highly Desired
Job Description
Responsibilities and Duties
Participate in technical planning & requirements gathering phases including architectural design, coding, testing, troubleshooting, and documenting big data-oriented software applications.
Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and for troubleshooting any existing issues.
Implementation, troubleshooting, and optimization of distributed solutions based on modern big data technologies like Hive, Hadoop, Spark, Elasticsearch, Storm, Kafka, etc., in both on-premises and cloud deployment models to solve large-scale processing problems (a minimal Spark sketch follows this list)
Design, enhance, and implement ETL/data ingestion platforms on the cloud.
Strong data warehousing skills, including data clean-up, ETL, ELT, and handling scalability issues for enterprise-level data warehouses
Capable of investigating, familiarizing with, and mastering new data sets quickly
Strong troubleshooting and problem-solving skills in large data environments
Experience with building data platforms on the cloud (AWS or Azure)
Experience in using Python, Java, or any other language to solve data problems
Experience in implementing SDLC best practices and Agile methods.
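As a minimal example of the Hive/Spark work in this list, the sketch below reads a Hive-managed table with PySpark and writes an aggregate back. The database and table names are placeholders chosen to echo the healthcare domain, not a real schema.

```python
# Minimal Spark-with-Hive sketch; table names are assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("claims-rollup")
    .enableHiveSupport()   # lets Spark read/write Hive-managed tables
    .getOrCreate()
)

claims = spark.table("healthcare.claims")           # assumed Hive table
rollup = claims.groupBy("provider_id").count()

rollup.write.mode("overwrite").saveAsTable("analytics.claims_by_provider")
```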
Qualifications
Required Skills:
Data architecture/ Big Data/ ETL environment
Experience with ETL design using tools such as Informatica, Talend, Oracle Data Integrator (ODI), Dell Boomi, or equivalent
Big Data & Analytics solutions: Hadoop, Pig, Hive, Spark, Spark SQL, Storm, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight, Data Lake design)
Building and managing hosted big data architecture, toolkit familiarity in: Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, NiFi
Foundational data management concepts - reference data management (RDM) and master data management (MDM)
Experience in working with Jira/Git/Bitbucket/JUnit and other code management toolsets
Strong hands-on knowledge of solutioning languages like Java, Scala, or Python - any one is fine
Healthcare Domain knowledge
Required Experience, Skills and Qualifications
Qualifications:
Bachelor's degree with a minimum of 6 to 9+ years of relevant experience, or equivalent.
Extensive experience in data architecture/Big Data/ ETL environment.
Additional Information
All your information will be kept confidential according to EEO guidelines.
$84k-118k yearly est. 60d+ ago
Google Cloud Data & AI Engineer
Slalom 4.6
Data engineer job in Portland, OR
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We have a passion for building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, and more (a minimal BigQuery sketch follows this list).
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
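To ground the BigQuery item above, here is a minimal query job using the google-cloud-bigquery client; the project, dataset, and column names are placeholders.

```python
# Hypothetical top-users query; project and table names are assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT user_id, COUNT(*) AS sessions
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY user_id
    ORDER BY sessions DESC
    LIMIT 10
"""
for row in client.query(query).result():  # blocks until the job finishes
    print(row.user_id, row.sessions)
```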
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
How much does a data engineer earn in Beaverton, OR?
The average data engineer in Beaverton, OR earns between $72,000 and $137,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Beaverton, OR
$99,000
What are the biggest employers of Data Engineers in Beaverton, OR?
The biggest employers of Data Engineers in Beaverton, OR are: